I just returned to E@H after a long idle time.
I added a PC with an i870, which seems to be doing a good job.
I have an ATI Radeon 5870 video card in that machine, but I don't see a significant performance advantage. I don't see any messages saying that my GPU isn't recognized by the BOINC software, so it should be working. The 5870 is the number two card behind ATI's flagship 5970 and is still quite a powerful video card. However, the task times I see are typical for an i870 on its own, so I think my video card is not being used. How can I check whether my GPU is being used by BOINC?
How can I make sure my GPU is used?
Sorry for the repeated question.
The answer is already on this forum.
Unfortunately, Einstein only makes use of CUDA-enabled nVidia GPUs.
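(If you want to confirm what BOINC itself sees: the first messages in the event log after startup - the Messages tab in BOINC Manager, or boinccmd --get_messages from a command line - should list any GPUs it detected, or say that no usable GPUs were found. But even a detected ATI card won't be picked up by the Einstein apps at the moment.)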
Shih-Tzu are clever, cuddly, playful and rule!! Jack Russell are feisty!
Milky Way, Collatz Conjecture and DNETC all have ATI GPU processing; the last two are SP, Milky Way is DP.
SP = single precision and DP = double precision.
SETI@home also uses ATI, in Main and in Beta AstroPulse.
The above posted info is off topic.
;^)
Run MilkyWay@home on your GPU instead.
What's interesting is that Einstein only uses about 10% of my Nvidia GPUs, and it takes a full processor core for each GPU. It looks like the GPUs process the same work as the CPU, just faster thanks to the parallel processing on the GPUs.
Steve
Crunching as member of The GPU Users Group team.
RE: What's interesting, is ...
As I understand it, the E@H application uses the Nvidia GPU only for a part (the Fast Fourier transform?) of WU processing. The CPU still needs to do a lot (the majority?) of the work.
This is supposedly because the E@H calculations can't all be parallelised, which would be needed to run the whole calculation on the GPU.
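Something like this, structurally (not the actual E@H code, just a toy cuFFT sketch with a made-up signal and transform length): only the one FFT call touches the card, everything before and after it runs on the host, which is why one CPU core stays busy while the GPU mostly idles.

    #include <cuda_runtime.h>
    #include <cufft.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 20)   /* hypothetical transform length */

    int main(void)
    {
        /* CPU part 1: prepare the input (stands in for all the pre-processing) */
        cufftComplex *h_data = (cufftComplex *)malloc(sizeof(cufftComplex) * N);
        for (int i = 0; i < N; ++i) {
            h_data[i].x = cosf(0.001f * (float)i);   /* toy signal */
            h_data[i].y = 0.0f;
        }

        /* GPU part: only the FFT is offloaded */
        cufftComplex *d_data;
        cudaMalloc((void **)&d_data, sizeof(cufftComplex) * N);
        cudaMemcpy(d_data, h_data, sizeof(cufftComplex) * N, cudaMemcpyHostToDevice);

        cufftHandle plan;
        cufftPlan1d(&plan, N, CUFFT_C2C, 1);
        cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);

        cudaMemcpy(h_data, d_data, sizeof(cufftComplex) * N, cudaMemcpyDeviceToHost);

        /* CPU part 2: everything downstream of the FFT stays on the host */
        float peak = 0.0f;
        for (int i = 0; i < N; ++i) {
            float p = h_data[i].x * h_data[i].x + h_data[i].y * h_data[i].y;
            if (p > peak) peak = p;
        }
        printf("peak power: %g\n", peak);

        cufftDestroy(plan);
        cudaFree(d_data);
        free(h_data);
        return 0;
    }

(Compiles with nvcc and -lcufft; the point is just the shape: copy up, one library call, copy back, carry on on the CPU.)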
RE: RE: What's interesting, is ...
Correct, ABP2 does FFT calculation on GPU, and everything else on CPU. Is that FFT part "the majority"? It depends on how you see it: In the CPU variant of the app, FFT takes most of the processing time. But because FFT can be handled somewhat faster on the GPU (except for the slowest cards), FFT is no longer the major part of the computation in the CUDA apps for most users, which explains the lower GPU utilization.
I must leave the details for the project staff to expand on, but I think it's fair to say that, luckily, all the major parts of the ABP computation can be parallelized with at least a decent performance gain. It's just not trivial, even more so when you consider that the app has to be written so that CPU and GPU results are reasonably close to each other, so that they are
a) all of scientific value and
b) indeed validate against each other
(all in all you have to consider 6 platforms: OSX, Windows, Linux; each with a CPU and GPU variant).
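To make b) a bit more concrete, here is a minimal sketch of the kind of "reasonably close" comparison involved; the function name, tolerance and rule are made up for illustration, and the real validator is more elaborate.

    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Toy stand-in for a CPU-vs-GPU result comparison. */
    int results_agree(const float *cpu, const float *gpu, size_t n, float rel_tol)
    {
        for (size_t i = 0; i < n; ++i) {
            float ref  = fabsf(cpu[i]);
            float diff = fabsf(cpu[i] - gpu[i]);
            /* absolute floor so near-zero bins don't blow up the relative error */
            if (diff > rel_tol * (ref > 1e-6f ? ref : 1e-6f))
                return 0;   /* too far apart: would not validate */
        }
        return 1;           /* close enough to count as the same science result */
    }

    int main(void)
    {
        float cpu[] = { 1.0000f, 2.0000f, 0.5000f };
        float gpu[] = { 1.0002f, 1.9999f, 0.5001f };   /* pretend GPU output */
        printf("agree: %d\n", results_agree(cpu, gpu, 3, 1e-3f));
        return 0;
    }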
So, as Oliver has already written here, there's an ongoing effort to produce a new version of the ABP CUDA app that will put more load on the GPU and far less on the CPU. Stay tuned. It's ready... when it's ready.
CU
HB
Well, I can say that my GPUs are making a difference. A CPU WU completes in 3:14, and the CPU/GPU WUs complete in 1:44.
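That works out to roughly a 1.9x speedup per task: 3:14 is 194 against 104 for 1:44 (the ratio is the same whether those are h:mm or mm:ss), and 194 / 104 is about 1.87.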
Steve
Crunching as member of The GPU Users Group team.
It's been suggested to me that a note about the low GPU utilisation might be added to the FAQ page - it certainly qualifies!
Though how many SETI refugees (Hi, Steve!) would read that on the way through remains a moot point ;-)
RE: Well I can say that my GPUs ...
That is one fine piece of hardware that you've got there :-). Have you considered running it in hyper-threaded mode, with 12 threads in parallel?
CU
HB