Discussion Thread for the Continuous GW Search known as O2MD1 (now O2MDF - GPUs only)

Stef
Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

Thanks Keith. The VII is "slightly" above my budget, but there seem to be a lot of used RX 580s available. Which Nvidia card would they compare to performance-wise in GW/O2MD?
I went through a lot of workunit runtimes, but I only seem to find data for Nvidia cards.

Richie
Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

Stef wrote:
Thanks Keith. The VII is "slightly" above my budget, but there seem to be a lot of used RX 580s available. Which Nvidia card would they compare to performance-wise in GW/O2MD?

A GTX 1060 comes close; a GTX 1070 is probably somewhat faster. From the older 900 series, a GTX 980 should be on par with the RX 580, with the GTX 970 a little behind.

The RX 570 isn't much slower than the 580 but might be available for a notably lower price.

The RX 470 and 480 should also work with the GW app and would have even lower price tags. Technically the RX 470/480 should be only about 10% slower than the RX 570/580.

Stef
Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

I got myself an RX 580. Unfortunately it only reaches about 25-40% GPU utilization, no matter how many GW workunits I run in parallel (I tried up to 4). Even so, it's about twice as fast as the GTX 1050. It would be good if I could also use the rest of the GPU.
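For reference, running multiple GW tasks per GPU is usually configured with an app_config.xml in the Einstein@Home project directory. A minimal sketch — the app's short name below is an assumption, so check client_state.xml on your own host for the actual name:

```xml
<app_config>
  <app>
    <!-- "einstein_O2MD1" is a guess; verify the short name in client_state.xml -->
    <name>einstein_O2MD1</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage> <!-- 0.5 = two tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage> <!-- reserve a full CPU core per task -->
    </gpu_versions>
  </app>
</app_config>
```

After editing, tell the BOINC client to re-read config files (or restart it) for the change to take effect.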

archae86
archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7230108177
RAC: 1156248

Stef wrote:
It would be good if I could also use the rest of the GPU.

While low utilization while running only one GW task may be inherent, you are probably not getting adequate CPU support to push the GPU as hard as it is capable of working with the extra GW tasks. Some options that might move you in this direction:

1. reducing or eliminating any non-GW work your CPU cores are doing.

2. raising the priority of the support task that runs on the CPU for your GW tasks.

3. getting a CPU with faster cores.

4. getting a CPU with more cores.
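For point 2, the BOINC client can itself raise the priority of GPU apps and their CPU support processes via cc_config.xml. A sketch, assuming a reasonably recent (7.6+) client — verify the option name and value range against your client's documentation:

```xml
<cc_config>
  <options>
    <!-- 0 = lowest ... 4 = highest; applies to GPU apps, which includes
         the CPU support process of the GW GPU tasks -->
    <process_priority_special>3</process_priority_special>
  </options>
</cc_config>
```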

I think if you look around you'll find cases where people with this class of GPU have gotten their utilization up higher than you report.

And, yes, I understand that replacing the CPU may well not be appealing financially, especially as the practical result is far from guaranteed.

Good luck

Stef
Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

Ah OK, if it is CPU bound, then I understand. The computer has a lot of (non-BOINC) jobs to do, which keep all cores under constant load.

Thanks for the information.

Richie
Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

App has matured today. Version is now 2.06 for CPU and GPU.

Stef
Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

2.06 is not for AMD cards, it seems. Is there a changelog?

Richie
Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

Stef wrote:
2.06 is not for AMD cards, it seems.

You're right... I didn't see that initially.

Quote:
Is there a changelog?

Nah, not unless something gets posted in News.

Stef
Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

New versions as of 19 Dec:

2.07 (GW-opencl-nvidia) (linux/windows/macos)

2.07 (GWnew) (linux/windows/macos)


I wish there was a bit more communication from the devs/science staff.


wujj123456
wujj123456
Joined: 16 Sep 08
Posts: 18
Credit: 2006201815
RAC: 2758176

Is there some issue with O2MDFV2_VelaJr1 on Linux? If you look at https://einsteinathome.org/host/12794803/tasks/0/54, all tasks took much longer than on the other machine: https://einsteinathome.org/host/12752100/tasks/0/54?sort=desc&order=Sent

Sure, the 1080 and 1660 Super aren't as good as the 1080 Ti, but they shouldn't be 3-4x slower, even the 1080. I took a look at SM utilization while a task was running, and most of the time it hovered around 10%, which means the GPU cores were hardly doing any work at all. On the 1080 Ti host it's 60%+. I've already used app_config to reserve a whole core for each GPU task to avoid CPU starvation.

Does this workload require full x16 PCIe bandwidth? The other difference is that my 1080 Ti is on PCIe 3.0 x16, while the other two are on x8 and x4 respectively. The efficiency when crunching SETI and Milkyway on those two cards is on par with what I'd expect from the hardware specs, and I believe those projects are known for not relying much on PCIe transfers.

Is anyone else seeing problems with these workunits on Linux? Or with cards in x8/x4 mode?
