For those experimenting with multiple BRP app instances per GPU, it might be interesting to know that the "new" BRP4 units are quite a bit less CPU intensive than the formerly distributed BRP3 workunits (the app itself is the same, but the signal data is different).
That means that the GPU load will be higher now, and the savings you get by running several units in parallel will be smaller. It will be interesting to see new runtime measurements.
HBE
I've searched but am still clueless so please excuse the basic question.
I have a GTX560 w/1GB and would like to try 2 GPU apps at once. The system dual boots Win 7 Pro and Ubuntu 11.04.
I do not seem to have an app_info.xml file. Is there a writeup on how to create one from scratch? Are they universal enough that people are just copying the ones posted in this thread (and one other)?
Edit: Would a GT240 w/1GB benefit from 2 simultaneous apps?
App_info.xml files are specific to the operating system and need to be changed whenever a new version of any of the apps listed in the file is released. The project as such does not generally recommend or offer support for writing app_info.xml files, but volunteers have posted files here.
App_info.xml files require that you monitor your system and this forum (for new app releases) so in general I personally would only recommend it to expert users. Or put very bluntly: if you don't know how to write one, please think twice about whether you want to use one ;-)
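For orientation only, here is a hypothetical sketch of the overall shape of such a file for BOINC's anonymous platform mechanism. The app name, version number, and executable file name below are made-up placeholders; they must be replaced with the names of the app files actually present in the project directory, and the layout follows the standard BOINC app_info.xml conventions:

```xml
<app_info>
    <app>
        <name>einsteinbinary_BRP4</name>  <!-- placeholder app name -->
    </app>
    <file_info>
        <name>example_BRP4_cuda_app.exe</name>  <!-- placeholder executable name -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>einsteinbinary_BRP4</app_name>
        <version_num>100</version_num>  <!-- must match the installed app version -->
        <coproc>
            <type>CUDA</type>
            <count>1.0</count>  <!-- lower to 0.5 to run two tasks per GPU -->
        </coproc>
        <file_ref>
            <file_name>example_BRP4_cuda_app.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

The BOINC client reads this file from the project directory at startup, so a client restart is needed after editing it, and, as noted above, it must be updated by hand whenever the project releases a new app version.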
As for the GT240: with the new BRP4 workunits I would not expect a dramatic throughput increase from running two units in parallel. Maybe 10%? It will be interesting to see the actual results.
HBE
And, to be very specific, the GT240 and all earlier-generation NVidia cards lack the context-switching hardware that makes multiple-WU operation worthwhile on 4xx and 5xx series cards.
In short, two apps will run simultaneously: but parallel running is likely to be less efficient than running in series, on older hardware.
I've run 1, 2, 3 & 4 at a time on each one of my dual GTX 580's and I don't see where I'm gaining anything. Running 1 at a time is just as productive as running 2, 3 or 4, as the times just go up to 2, 3 or 4 times what it takes to run 1 at a time ... :/
Here are links to both Linux and Windows app_info.xml files that I made for my systems. I've seen an 8-13% performance increase running in Linux compared to Windows, so Linux is probably the way to go.
The option that controls how many run at once is the coprocessor count: 1.0 for one unit per GPU, 0.50 for two units per GPU, 0.33 for three units per GPU, etc.
The files still have BRP3 GPU in them as I still get a few of those from time to time. You also have the option of removing the CPU-related sections if you plan to run GPU-only, to simplify the configuration file. Make sure all the necessary project files specified in the XML are available before using the file.
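Concretely, the setting in question is the <coproc> block inside each GPU <app_version> section of the file. A sketch of just that fragment, assuming the standard BOINC app_info.xml layout:

```xml
<coproc>
    <type>CUDA</type>
    <count>0.500000</count>  <!-- each task claims half a GPU, so two run in parallel -->
</coproc>
```

Setting <count> to 0.333333 would correspondingly schedule three tasks per GPU, memory permitting.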
I think running 3 (memory permitting) is optimal; anything over that and the run times start increasing per WU ... just an observation from running different amounts on several GTX 580 boxes ...
Wow, you have an impressive array of hosts there! Most of your hosts even come with 2 cards!? But you were not running 2 x 3 WUs in parallel (3 on each GPU), right?
HB
Right, I was only running 1 on each GPU until a few days ago; I couldn't get an app file to work until then. Now I'm just running 2 dual-580 boxes and have settled on 3 per GPU, as by my figures that's optimal on the GTX 580's I have ...
I appreciate the honesty (bluntness); I sometimes miss the subtleties.
I consider myself trainable and since I'm in the explore and learn stage I don't mind getting into a little trouble.
If I understand properly how this works, copying the appropriate file Jeroen linked to into my BOINC directory and changing the coprocessor count to 0.5 should allow 2 CUDA tasks to run.
The problem will be that I have to watch for new versions and modify the file myself, or delete it and go back to one CUDA task at a time. If I miss an update, the result would be no tasks for the new program. Right?
Right now it looks like I am using 492 out of 1024 MB of GPU memory with 60-70% of the processors, resulting in one BRP4 task every 35 minutes or so. Temps are pretty steady at 59C with the fan at 40%. Looks like there is excess capacity.
This is my new home machine so I keep a pretty close eye on it. I may upgrade that 240 and give it to my son.