There's no need to stick with an even number of tasks. I'd try 2-3-4 and report results here :)
To do that: average the run times over ~10 WUs and write them down. Repeat with 2 concurrent tasks, but divide the resulting time by 2. Repeat with 3 concurrent tasks, but divide the resulting time by 3 - and so on. The improvement in effective time per WU will become smaller as you add tasks, and at some point you'll say "well, that's not really any better" and stick with the previous setting.
If the WUs had different sizes and different amounts of credit awarded, the procedure would become a bit more complicated... but for Einstein this is sufficient.
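Spelled out as a quick sketch, if that helps (the runtimes below are just placeholders, not real measurements - plug in your own averages):

# hypothetical averaged runtimes in seconds per WU, measured over ~10 WUs at each setting
avg_runtime = {1: 4200, 2: 7600, 3: 10800, 4: 14100}
for n, seconds in sorted(avg_runtime.items()):
    # effective time per WU when n tasks run concurrently
    print(n, "concurrent:", round(seconds / n), "s per WU effectively")
# stop adding tasks once this number barely improves over the previous setting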
This is entirely based on the experiences of other users; most reported that for a card this powerful on PCIe 3.0, up to 8 GPU tasks still produce a slight gain if the CPU is also powerful enough.
This is only valid IMHO if the system is a purely dedicated cruncher not used for any other task.
If it's a typical system used for other tasks as well, I'd start off with 2 GPU tasks, get a good number of results, and see how it behaves (e.g. GPU temperatures and fan noise) - then step it up to 4 tasks and compare.
Also, I imagine that having 4 or even more tasks loaded on the GPU will increasingly interfere with normal everyday desktop operations (slowdowns, visual artifacts; in particular, HD videos or Flash animations may not run smoothly, etc.). It will definitely increase temperatures, which need to be checked at least once to make sure they're still in a safe region. Again, a good set of fans blowing fresh air onto the entire video card is almost mandatory - not so much for the GPU itself, but for the many capacitors, voltage regulators, etc. on the video card's PCB. Those tend to suffer the most from excessive/prolonged heat, which decreases their lifespan.
Yeah, I'll keep that in mind. This thing logs temperature, fan speed, and power every second, and sometimes I put that info into a chart at the end of the day to see how stressful it got.
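(In case anyone wants to do the same kind of check: something like the snippet below would summarize such a log. The CSV layout and column names are just assumptions, not what this particular tool actually writes.)

import csv
# assumed log format: one row per second with columns time,temp_c,fan_rpm,power_w
temps = []
with open("gpu_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        temps.append(float(row["temp_c"]))
print("samples:", len(temps))
print("average temperature:", round(sum(temps) / len(temps), 1), "C")
print("peak temperature:", max(temps), "C")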
I also run other projects, not all of which have a 'run x OpenCL tasks simultaneously on my GPU' option, so I think I'll keep the 'use 3 out of 4 cores for CPU tasks' option and stick to maybe 2 simultaneous BRP app instances.
Thanks everyone; if there was a +rep system I would've used it liberally in this thread.
Where exactly can I find the "BRP Utilization factor" setting?
Edit: never mind, I found it...