These BRP4 tasks are causing a bit of confusion now.
In the past they were a bit different and ran on both CPUs and GPUs.
A while ago, you switched BRP4 to CPU-only (Arecibo).
Now you have released BRP4 for Intel GPUs (opencl-intel_gpu), which falls under the recent BRP (Arecibo) group (of CPU-only tasks).
And Bernd is now working on BRP4 bundles for GPUs, which will form a new BRP (Arecibo, GPU) group...
Would it be possible to make a clearer distinction/grouping between these sorts of tasks, so one knows what to expect and what to choose from?
RE: These BRP4 tasks cause
Sorry if this is confusing, but from our point of view, we have to size jobs for hosts that differ in computing power by a factor of no less than ca. 1000 (!!) if you compare the (soon to be supported) Raspberry Pi and Android phones/tablets to high-end GPUs. So the days of one-size-fits-all are definitely over.
Also, we have to re-balance the task distribution between GPUs and CPUs from time to time to make sure that a) whoever requests work gets work, b) our server infrastructure can handle the requests, and c) all of the sub-projects / searches make good progress towards our scientific goals.
But yes, maybe we should put a webpage or forum thread in a prominent place that 'explains it all'.
Cheers
HB
Since my little dramas
Since my little dramas yesterday with driver versions and such, I have updated a few other machines that were already attached to Einstein. I have 4 on the job at the moment.
I had to free up a CPU core to get reasonable run times. That seems to be a common issue with running OpenCL apps, regardless of the project.
Bernd commented over at Albert that he has corrected the scheduler issue. It seems to be allocating the correct amount of work now. Thanks Bernd, and thanks for the app; I can finally use the iGPU for something other than driving a screen.
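(In case anyone wants to do the same: besides lowering the "use at most X% of the processors" computing preference, one way to reserve a core for the GPU app is a small app_config.xml in the Einstein@Home project directory. This is only a sketch, and the app name below is a guess; it has to match the <name> entries in your client_state.xml.)

<app_config>
   <app>
      <!-- placeholder name; check client_state.xml for the real one -->
      <name>einsteinbinary_BRP4</name>
      <gpu_versions>
         <!-- one task per GPU -->
         <gpu_usage>1.0</gpu_usage>
         <!-- budget a full CPU core for each GPU task -->
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

After saving it, "Read config files" in the BOINC Manager (or a client restart) should pick it up.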
BOINC blog
RE: RE: These BRP4 tasks
Maybe something like XMIND could help here.
This is a first attempt, to be seen as an example only.
https://dl.dropboxusercontent.com/u/50246791/First%20Attempt.png
XMIND is freeware.
Unfortunately it doesn't work
Unfortunately it doesn't work on my Win 7 machine. But the cause is the Intel driver, which doesn't work on Win 7 x32 in patched mode (when it uses over 4 GB of RAM), though NVidia's does. It takes about 20 minutes to make a dummy plug. Note that some motherboards have no VGA jack, only a DVI jack on which the plug cannot be installed due to the lack of analog pins.
RE: Unfortunately it
This can solve your problems:
http://www.overclock.net/t/384733/the-30-second-dummy-plug
It's also worth mentioning
It's also worth mentioning that you won't need any dummy plugs / funny business if you connect your main display to the Intel GPU. If the other GPU(s) are NVIDIAs, they'll still crunch along just fine under Windows (don't know about other OSes) with recent drivers. Don't know about AMDs; it could also work.
The only downside of this approach is if you want to game on your real GPU: you'd need to use Virtu, which costs some performance.
MrS
Scanning for our furry friends since Jan 2002
Hi, it works with my
Hi,
It works with my HD4000 plus my 610M on my laptop. The downside is that I had to reduce CPU work from four parallel tasks to only one due to too much heat. So now I am running two Perseus tasks, one BRP4 (GPU) task and one CPU task in parallel.
Just one question: I have set the GPU utilization factor to 0.5 in order to run two GPU tasks in parallel. Is it also possible to run parallel GPU tasks on the HD4000?
Is an Intel HD4600 on duty
Is an Intel HD4600 on duty here?
@ MrS:
AMD without a dummy plug: sure, that's stuff from long, long ago! Intel needs to learn that.
Alex
RE: Is an Intel HD4600 on
Yes, see here: https://einsteinathome.org/node/197053&nowrap=true#125643
HB
RE: Just one question: I
In theory: yes. We have so far not enabled the feature that makes this configurable in the web preferences. However, this feature is most useful for relatively powerful GPUs with many hundreds of cores. From what I've read so far, volunteers have reported that (sometimes after reserving a CPU core for the GPU work), the GPU load on the Intel HD GPUs is close to 95% with just one task, so it doesn't seem necessary to run more than one task in parallel to saturate these tiny internal GPUs.
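(If you want to experiment before we expose this in the web preferences, the BOINC client can also be told locally to run two tasks per GPU with an app_config.xml in the project directory. A rough sketch only; the app name is a placeholder and must match what your client_state.xml lists for the Intel GPU version.)

<app_config>
   <app>
      <!-- placeholder name; check client_state.xml for the real one -->
      <name>einsteinbinary_BRP4</name>
      <gpu_versions>
         <!-- 0.5 means each task claims half the GPU, so two run at once -->
         <gpu_usage>0.5</gpu_usage>
         <!-- CPU share budgeted per GPU task -->
         <cpu_usage>0.5</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

Given the ~95% load with a single task, though, I would not expect much of a gain on the HD 4000.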
Cheers
HB