At the moment the GTX 275 is running at 28% (according to GPU-Z), and the BOINC manager lists three CPU tasks and one CPU+GPU task, but the run times are the same.
Can anyone suggest what might make the GTX 275 run at 100%?
The host is an X3360 (a 64-bit quad-core), 4 GB RAM, 32-bit XP Home, with the latest drivers from Nvidia.
Robert
Copyright © 2024 Einstein@Home. All rights reserved.
How do I maximise GPU use?
Hi Robert,
The current GPU application is a hybrid that uses both the CPU and the GPU. There is nothing you can do.
Michael
Team Linux Users Everywhere
Exactly. One implication
Exactly.
One implication of this is the following: the same CUDA card will show a higher GPU utilization percentage in a host with a faster CPU than in one with a slower CPU (the GPU spends less time idling while it waits for the CPU to finish its part of the computation). So if you have several CUDA-enabled hosts and want to run several CUDA apps, with ABP2 among them, this is something you might want to consider to make the best use of the hardware.
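To put rough numbers on that (the phase timings below are hypothetical, not measured ABP2 figures): if a hybrid task strictly alternates a CPU-only phase and a GPU phase, the GPU is busy for roughly t_gpu / (t_gpu + t_cpu) of the wall time, so a faster CPU phase directly raises the utilization percentage:

```python
# Illustrative only: hypothetical phase timings, not measured ABP2 numbers.
def gpu_utilization(t_gpu, t_cpu):
    """Fraction of wall time the GPU is busy when the CPU and GPU
    phases of a hybrid task strictly alternate (in seconds)."""
    return t_gpu / (t_gpu + t_cpu)

# Same card, same GPU phase (say 10 s), different CPU speeds:
slow_host = gpu_utilization(10.0, 25.0)   # slow CPU part -> ~29% busy
fast_host = gpu_utilization(10.0, 8.0)    # fast CPU part -> ~56% busy
print(f"slow host: {slow_host:.0%}, fast host: {fast_host:.0%}")
```

The model ignores overlap between CPU and GPU work, but it shows why the same card can report very different utilization in different hosts.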
CU
H-B
Michael and Bikeman Oh.
Michael and Bikeman
Oh. Thanks.
It's up to 54% at the moment - crunching an ABP2, Bikeman!
How can I choose ABP2s?
Rob
In case that was ambiguous
In case that was ambiguous, Bikeman: I meant you were right again.
It's now going through ABP2s in 55 minutes, it appears, so there you go.
Perhaps a little addendum to the first mention of CUDA on the BOINC front pages could note that it MIGHT be much faster with the right CPU and work unit, et cetera.
Rob
RE: Exactly. One
Why not use the GPU as an aid to several processes executing on different CPU cores? If one core is unable to fully load a fast GPU, why not just run two (or four, on quad CPUs) CPU+GPU processes?
Or is there some sort of restriction in CUDA that prevents more than one application from using the GPU?
And if so, are there the same restrictions in OpenCL?
RE: RE: Exactly. One
Nice idea, actually, but for it to work you would need an E@H app that is multi-threaded (BOINC assigns the GPU to one task and cannot, AFAIK, split the GPU among many tasks/results). Even if that were possible, you would not want four E@H tasks competing uncoordinated for the CUDA-usable on-board memory (each task currently requires 450 MB). But with a single, multi-threaded app you could in theory pipeline the CUDA work to the GPU. That would not be trivial to program, though, and I think the better solution is to invest "programmer time" in increasing the share of the computation that is done on the GPU.
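The pipelining idea can be sketched like this (plain Python threads standing in for a real CUDA app; the worker layout, timings, and the squared-sum "kernel" are all made up for illustration): several CPU worker threads prepare chunks independently, but all GPU work funnels through one queue serviced by a single thread, so only one context ever touches the device and its memory.

```python
import threading
import queue

gpu_queue = queue.Queue()      # all GPU work funnels through here
results = []
results_lock = threading.Lock()

def gpu_server():
    """Single thread owning the (pretend) GPU: tasks never compete
    for device memory because only this thread submits work."""
    while True:
        item = gpu_queue.get()
        if item is None:       # sentinel: shut down
            break
        task_id, data = item
        out = sum(x * x for x in data)   # stand-in for a CUDA kernel
        with results_lock:
            results.append((task_id, out))
        gpu_queue.task_done()

def cpu_worker(task_id):
    """Per-task CPU phase; hands the GPU phase off to the server."""
    data = list(range(task_id * 10))     # pretend CPU-side preparation
    gpu_queue.put((task_id, data))

server = threading.Thread(target=gpu_server)
server.start()
workers = [threading.Thread(target=cpu_worker, args=(i,)) for i in range(1, 5)]
for w in workers:
    w.start()
for w in workers:
    w.join()
gpu_queue.put(None)            # stop the GPU thread
server.join()
print(sorted(results))
```

The queue is what makes the device sharing coordinated: the CPU phases of the four tasks overlap freely, while the GPU phases are serialized through one owner, which is the serialization a multi-threaded app would need rather than four independent processes fighting over the card's memory.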
CU
Bikeman