I was surprised that you guys didn't report any benefit from running 2 WUs in parallel. In one of the first posts I reported a substantial improvement for my main machine. Now I've collected some more data on another machine, an i7 3770 @ 4.0 GHz, HT enabled, HD4000 @ 1250 MHz, DDR3-1866 9-10-9, Win7 64-bit:
1 WU: 6954 RAC (averaged over 19 WUs)
2 WU: 8009 RAC (averaged over 31 WUs)
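That works out to roughly a 15% throughput gain from the second parallel WU: (8009 − 6954) / 6954 ≈ 15%, bearing in mind these are averages over fairly small samples.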
It seems it is still over-fetching. When I set the cache to 0.75 of a day it downloaded approximately 300 tasks (on each of 3 different machines). They have been running through for the last 2 days now and the backlog still hasn't cleared.
Anyway, apart from that issue they are going through at 12 to 14 minutes per work unit when I leave a CPU core free.
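For reference, if that cache size is set locally rather than through the web preferences, it corresponds to the work-buffer fields in BOINC's global_prefs_override.xml. A minimal sketch using the 0.75-day figure from the post above (how the buffer is split between the two fields is an assumption here):
[pre]
<global_preferences>
  <!-- keep at least this many days of work queued -->
  <work_buf_min_days>0.75</work_buf_min_days>
  <!-- no additional buffer on top of the minimum -->
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
[/pre]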
You might want to try BOINC 7.2.4 on that host; it seems stable enough, at least better than 7.2.1.
BOINC v7.2.4 overfetches for me, 7.1.18 does not. I had to revert all but one of my 7.2.4 boxes back to 7.1.18 to stop the overfetching (among other problems).
Edit: v7.2.1 was an absolute disaster.
Edit 2: Correction, the last 7.2.4 box started overfetching as well; I had to back that one off to 7.1.18 to fix it. What's also irritating is that every time we go from 7.2.4 to 7.1.18, we have to run the BOINC setup twice: the setup for 7.1.18 and then a repair. Update: 7.2.4 is a disaster too. Sometimes when rebooting with 7.2.4, BOINC fails to detect any GPUs even though Afterburner and Windows see them correctly. This was supposed to be fixed after 7.2.1. It's not.
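For anyone trying to pin down why a client over-fetches, the standard work_fetch_debug log flag in cc_config.xml makes the client log each work-fetch decision to the event log. A minimal sketch, offered only as a debugging aid:
[pre]
<cc_config>
  <log_flags>
    <!-- log every work-fetch decision and the amount of work requested -->
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
[/pre]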
RE: Running 2 WUs on an
Did you also check the runtimes for the other GPU WUs? Did they increase with one WU, and increase more with 2 WUs?
Also I assume you used an adapted app_info.xml from Albert. But not everyone here is able to make the modifications themselves, so it might be helpful to post the app_info here.
Alex
RE: Also I assume you used
No app_info is needed if you have BOINC v7.0.40 or newer; you can use an app_config.xml instead.
For the documentation, check the bottom of this page.
Use something like this:
[pre]
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <gpu_usage>.5</gpu_usage>
      <cpu_usage>.4</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
[/pre]
And then edit the values for gpu_usage and cpu_usage. I'm running with gpu_usage=1 and cpu_usage=1 to run one task at a time and dynamically reserve a core for it. I'm also using Process Lasso to set the core affinity so that the CPU support part runs alone on the reserved core, with its priority raised to above normal.
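For reference, a minimal sketch of that one-task, reserved-core variant, assuming the same standard app_config.xml layout as above:
[pre]
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <!-- run a single task on the GPU at a time -->
      <gpu_usage>1</gpu_usage>
      <!-- budget a full CPU core so BOINC keeps one free for the support part -->
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
[/pre]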
RE: Did you also check the
Do you mean the WUs running on my nVidia?
And, yes, I'm using an app_config rather than an app_info (never going to touch that again if I can help it). I'm using gpu_usage and cpu_usage of 0.5 for both einsteinbinary_BRP4 and einsteinbinary_BRP5.
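Written out in full app_config.xml form (the forum stripped the XML tags from the quoted settings, so the exact layout here is an assumption based on the standard BOINC format):
[pre]
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP5</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
[/pre]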
That seems to work well since the Intel OpenCL app thankfully uses little CPU.
Another issue: I disabled "Binary Radio Pulsar Search (Arecibo)" in my profile and left "Binary Radio Pulsar Search (Arecibo, GPU)" active, but afterwards didn't get any Intel GPU WUs any more. Shouldn't that app / WUs also be in the "with GPU" category?
The reason I tried this was that both apps seem to crunch the same WUs, so I want to avoid running them on the CPU via BRP4X64, as this is less efficient and would rather throw my CPU cores at apps / WUs where there is no GPU version available.
I have deactivated "Run CPU versions of applications for which GPU versions are available" in my profile, but currently if I want to use the iGPU I need to accept the BRP4X64 as well.
MrS
Scanning for our furry friends since Jan 2002
RE: Another issue: I
I have the same issue. I'm manually aborting the BRP4X64 tasks when I see them, hardly the best way to go.
I guess the iGPU app is not included in the BRP4(GPU) category. Maybe that is intended for a "proper" GPU app that never materialized?
Unlike Holmis, who reported
Unlike Holmis, who reported no performance increase on his i7 / HD4000, I see an increase in the range of ~10% when running 2 WUs on my HD2500.
According to this thread http://einsteinathome.org/node/197052 some people reported an increase in runtime for their other GPU WUs, which is why I asked MrS to share his experience with us.
I'm currently testing it on my i3 with 2 AMD GPUs running 3 WUs each.
The posted issue with BRP4 (GPU) is definitely a matter (a mistake?) on the project side. In the tasks tab for a specific computer you will find the Intel GPU tasks together with the non-GPU BRP Arecibo tasks, while the BRP Arecibo (GPU) tab does not show any WUs.
Another thing about tasks per GPU on mixed systems (an iGPU plus external GPU(s)): one may want to run 2 tasks on Intel, 3 tasks on AMD, or 4 tasks on NVIDIA. Is there a way to make this selectable for the specific GPU types, or is there a way with app_config or app_info to set this up?
A working example might help.
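One possible way to do this is an untested sketch only: it assumes a client recent enough to honour <app_version> entries in app_config.xml, and the plan-class names below are placeholders (the real ones are listed on the project's applications page and in each task's details). The idea is to key the GPU usage to the plan class of each app version:
[pre]
<app_config>
  <!-- hypothetical per-plan-class settings; substitute the plan classes your host actually runs -->
  <app_version>
    <app_name>einsteinbinary_BRP4</app_name>
    <plan_class>opencl-intel_gpu</plan_class>
    <!-- 2 tasks per Intel iGPU -->
    <ngpus>0.5</ngpus>
    <avg_ncpus>0.5</avg_ncpus>
  </app_version>
  <app_version>
    <app_name>einsteinbinary_BRP5</app_name>
    <plan_class>opencl-ati</plan_class>
    <!-- 3 tasks per AMD GPU -->
    <ngpus>0.33</ngpus>
    <avg_ncpus>0.5</avg_ncpus>
  </app_version>
</app_config>
[/pre]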
For now the setting of 0.33 in the preferences and an app_config do the job, but when CUDA 5.5 apps become available I plan to replace one AMD card with an NVIDIA, which will be the worst case in terms of setting up for optimum performance.
This is my current i3 3220 system:
https://dl.dropboxusercontent.com/u/50246791/einstein%20mix2.PNG
Well, as the "Intel GPU" is
)
Well, as the "Intel GPU" is only available as part of (certain) CPUs, we preliminarily treated the "Intel GPU" App (version) more or less as a CPU application. However I understand that this is confusing for the general public.
Please bear with us a little longer until we found the time to straighten things up a bit.
BM
6 wu's are not significant
6 WUs are not significant for statistical purposes, but: the longest runtime without Intel GPU WUs running was 24,052 s; now it is 24,814 s, about 3% longer.
Since I'm running 3 WUs at a time the loss is ~300 credits; in the same time the Intel GPU earns ~600 credits. The ratio is much better with an HD4000 / HD4600.
RE: According to this
Thanks for the clarification, I wasn't sure which other WUs you meant. I'll answer in the other thread.
@Bernd: no worries, we're happy you got this far!
MrS
Scanning for our furry friends since Jan 2002
RE: RE: RE: It seems it
7.2.5 is out with more changes around the GPU detection and the CPID issue.
My 7.1.18 clients are consistently getting more work than the 7.2.4 (now 7.2.5) machine, even though they are identical machines. They may have corrected the overfetching issue after all.
BOINC blog
I recently found that the
I recently found that the OpenCL-enabled driver is only for Windows 7 and 8. Is this true?