I have an Intel(R) Celeron(R) CPU J1900 with integrated Intel(R) HD Graphics and an NVIDIA GeForce GT 730 graphics card, running Win8.1 x64.
At SETI@home I run SETI and AstroPulse WUs on both GPUs:
1 WU on the Intel iGPU, and 2 WUs simultaneously on the NV GT730.
How does it work here at Einstein@Home?
Could I use the same settings here?
In the project prefs I can change the 'GPU utilization factor' from the default 1 to 0.5 - so I did.
But then the Intel iGPU would also run 2 WUs simultaneously, wouldn't it?
Could I make an app_config.xml file to run 1 WU on the Intel iGPU and 2 WUs on the NV GT730?
After looking at the apps overview: http://einstein.phys.uwm.edu/apps.php
the following apps should run on my PC:
- - - - - - - - - -
CPU:
Gravitational Wave search S6Bucket Follow-up #2
Windows/x86 1.01 (SSE2)
Gamma-ray pulsar search #4
Windows/x86 1.06 (FGRP4-SSE2)
NV GPU:
Binary Radio Pulsar Search (Parkes PMPS XT)
Windows/x86 1.52 (BRP6-cuda32-nv301)
Binary Radio Pulsar Search (Arecibo, GPU)
Windows/x86 1.39 (BRP4G-cuda32-nv301)
Intel iGPU:
Binary Radio Pulsar Search (Arecibo)
Microsoft Windows running on an AMD x86_64 or Intel EM64T CPU 1.34 (opencl-intel_gpu-new)
- - - - - - - - - -
How should the app_config.xml file look?
Thanks.
NV GT730 (2 WUs) and Intel iGPU (1 WU)?
Maybe it should look this way?
The 1st & 2nd entries are for the NV GT730 and the 3rd for the Intel iGPU.
But I don't know if the app names are correct ...
[pre]
<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <max_concurrent>2</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary</name>
    <max_concurrent>1</max_concurrent>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
[/pre]
No entries for CPU apps?
Thanks.
For Intel iGPUs you could run 2 types of tasks:
* "Binary Radio Pulsar Search (Arecibo)" (BRP4 for short).
* "Binary Radio Pulsar Search (Parkes PMPS XT)" (BRP6 for short, also sent to Nvidia and AMD GPUs).
You might need to opt in to run beta tasks to get BRP6 tasks for the Intel iGPU, I'm not sure about that.
For the full documentation on using app_config.xml see this page.
I use the following for BRP4 to run 1 task with 1 CPU core to support it:
[pre]
<app>
  <name>einsteinbinary_BRP4</name>
  <gpu_versions>
    <gpu_usage>1</gpu_usage>
    <cpu_usage>1</cpu_usage>
  </gpu_versions>
</app>
[/pre]
For BRP6 on the other hand the above method won't work, as the same type of task is sent to both Intel iGPUs and Nvidia/AMD GPUs. To have different settings for these combinations one has to use <app_version> instead of <app>, so the two can be separated via the <plan_class> tag. The following works for Nvidia GPUs: it tells BOINC to run 2 tasks on the Nvidia GPU with 1 CPU core for support, and to run 1 task on the Intel iGPU with 1 CPU core for support:
[pre]
<app_version>
  <app_name>einsteinbinary_BRP6</app_name>
  <plan_class>BRP6-Beta-cuda32-nv301</plan_class>
  <avg_ncpus>0.5</avg_ncpus>
  <ngpus>0.5</ngpus>
</app_version>
<app_version>
  <app_name>einsteinbinary_BRP6</app_name>
  <plan_class>BRP6-Beta-opencl-intel_gpu</plan_class>
  <avg_ncpus>1.0</avg_ncpus>
  <ngpus>1.0</ngpus>
</app_version>
[/pre]
Then there's also "Binary Radio Pulsar Search (Arecibo, GPU)" (BRP4G for short) that's only sent to Nvidia and AMD GPUs. Work isn't always available for this search, although right now there is work available. Use the following in app_config.xml to run 2 tasks on the GPU with one CPU core for support:
[pre]
<app>
  <name>einsteinbinary_BRP4G</name>
  <gpu_versions>
    <gpu_usage>0.5</gpu_usage>
    <cpu_usage>0.5</cpu_usage>
  </gpu_versions>
</app>
[/pre]
No need to try and control CPU apps with app_config.xml as that won't get you any more performance from them.
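That said, if you did want to cap how many CPU tasks run at once (for example to keep the Celeron responsive), a <max_concurrent> entry in an <app> block is the usual way to do it; it only limits the number of tasks, it doesn't make them faster. A minimal sketch, assuming purely for illustration that the FGRP4 app is named hsgamma_FGRP4 - check the apps page or your client's event log for the exact name on your host:
[pre]
<app>
  <!-- app name below is a guess for illustration; replace with the real one -->
  <name>hsgamma_FGRP4</name>
  <max_concurrent>2</max_concurrent>
</app>
[/pre]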
Dirk,
You might also want to look at how long it takes that GPU to crunch that 1 work unit.
Arecibo work units (einsteinbinary_BRP4G) tend to be quicker to crunch than the Parkes ones (BRP6-Beta-cuda32-nv301).
It's not like SETI where an MB task takes 10-20 minutes. We are talking hours with Einstein.
I would start with only 1 work unit at a time on that 730. Looking at Holmis' computer with the 660, it takes him over 3 hours to crunch his Parkes work units (assuming 2 at a time).
His Arecibo takes 73 minutes (again assuming 2 at a time).
I crunch 2 at a time as well, but it takes my Titans 75 minutes to do 2 Parkes and the 980s 2 hours to do 2 Parkes. I haven't tested the Arecibos but I suspect they would also be much quicker.
Once you know how long it takes to do 1 work unit, you can try doing 2 at a time and see if the time is shorter than doing 2 one right after the other; if it's not, then it's counterproductive.
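As a rough illustration with made-up numbers (not measured on a GT 730): if one Parkes task finishes in 2 hours on its own, that's 0.5 tasks per hour; running 2 at a time is only a win if the pair finishes in under 4 hours. For example 3.5 hours for the pair is 0.57 tasks per hour, while 4.5 hours for the pair is 0.44 tasks per hour and you'd be better off running them one after the other.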
Zalster
Just to confirm the assumptions Zalster made: I do run 2 tasks at a time on my 660Ti; it's a factory overclocked model that I run at even higher clocks.