I read that you guys said the newest CUDA 5.5 may solve the bug that has been interfering with your app.
Any news regarding this? I'd love to come back to crunching here. I mean, I can now; it's just that the app is rather inefficient on newer NVIDIA GPUs.
Cheers
Any update on CUDA 5.5?
Yes, but this seems to be related to NVIDIA's drivers; it doesn't get better with CUDA 5.5.
Still under investigation.
BM
Well, it seems the purchase of the GTX 660 will be postponed, and a couple of GTX 560 Tis will take its place instead :D
First, thx for the update. Shame you guys are having driver issues with the app. Other projects are seeing over a 30% increase in performance.
To the other guy: it would appear this project is now becoming much, much better for AMD cards.
I was planning on replacing an OEM ATI card with a GTX 760; maybe I should hold off for a while?
Hi,
Depends on what you mean by "the bug". There was a bug that prevented us from using anything newer than CUDA 3.2. NVIDIA was able to reproduce it but couldn't fix it for quite some time. It now seems to be fixed in CUDA 5.5, and we could start testing on albert as we have built all the necessary binaries. It's just a matter of time.
However, there's also the performance regression we see with the latest drivers (>= 319.xx), but that's unrelated to CUDA 5.5: we received reports that our CUDA 3.2 app is also affected, which clearly points at the driver. That's one of the reasons we haven't yet rolled out the CUDA 5.5 apps: they would require the affected driver versions, whereas the CUDA 3.2 app can still be used with older, unaffected drivers.
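If you want to check what your installed driver actually supports versus what an app was built against, the CUDA runtime can report both. A minimal sketch (my own generic example, not part of our apps; compile with nvcc):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int driverVer = 0, runtimeVer = 0;

    /* Highest CUDA version the installed driver supports */
    cudaDriverGetVersion(&driverVer);
    /* CUDA runtime version this binary was built against */
    cudaRuntimeGetVersion(&runtimeVer);

    /* Versions are encoded as 1000*major + 10*minor, e.g. 5050 = CUDA 5.5 */
    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 100) / 10,
           runtimeVer / 1000, (runtimeVer % 100) / 10);
    return 0;
}

If the driver number comes out lower than the runtime number, the app's CUDA version is too new for that driver, which is exactly the constraint described above.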
Best,
Oliver
Einstein@Home Project
I am crunching on Linux Ubuntu 12.04 with an NVIDIA 650 Ti. While some tools, like "nvidia-smi", are installed along with the driver, they don't let you acquire GPU utilization stats and the like. I can get the GPU temperature, fan speed, and a few other attributes, but nothing to do with GPU performance/utilization.
I wrote to NVIDIA support, who informed me that the data I want is not supplied by their current driver set, and suggested I download and install the CUDA toolkit because it "might" provide the data I seek. Before expending the effort to install this toolkit, I thought I would ask the community about their experience with it.
My question: has anyone running Linux installed the NVIDIA CUDA toolkit, and how much "value added" performance was gained from the installation effort? Can someone with experience enumerate the tools/features of the NVIDIA CUDA toolkit?
TIA
Hi robl,
It seems you're asking two questions here:
1) What nvidia-smi reports doesn't just depend on the driver but also on the device. I'm not sure that fan speeds or utilisation details are reported by consumer cards; it could very well be that only the Tesla (workstation/GPGPU) series supports this (which I know it does).
2) I can't really follow you regarding the "value added" performance gain from installing the "NVIDIA CUDA tool set". The CUDA toolkit is meant to be used for building and running GPGPU applications; you don't need it for Einstein@Home, as we provide all the necessary libraries with our apps. Also, the CUDA development drivers are in principle not optimised or in any way better than the regular drivers. AFAIK they are technically the same, and the dev drivers are just the ones that CUDA releases get developed and tested against. It shouldn't matter if you install any newer regular driver.
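If you do want to query utilisation programmatically, the interface behind nvidia-smi is NVML, and its library (libnvidia-ml) ships with the driver, not with the toolkit. Here's a minimal sketch (my own example, untested on your particular card) that also shows what an unsupported query looks like; link with -lnvidia-ml:

#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    nvmlDevice_t dev;
    rc = nvmlDeviceGetHandleByIndex(0, &dev);   /* first GPU in the system */
    if (rc == NVML_SUCCESS) {
        nvmlUtilization_t util;
        rc = nvmlDeviceGetUtilizationRates(dev, &util);
        if (rc == NVML_SUCCESS)
            printf("GPU util: %u%%, memory util: %u%%\n",
                   util.gpu, util.memory);
        else
            /* Consumer cards may simply not expose this counter */
            printf("Utilisation not available: %s\n", nvmlErrorString(rc));
    }

    nvmlShutdown();
    return 0;
}

If this prints "Not Supported" on your 650 Ti, installing the toolkit won't change that, because the data comes from the driver.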
HTH,
Oliver
Einstein@Home Project
I was interested by this, because I've recently upgraded an old 9800GT host from the 310.70 driver to the 326.14 beta. That should have crossed the 'slowdown' threshold at 319, but I did not see a slowdown with a third-party SETI app I'm testing.
So I ran it past the SETI developer (Jason Gee), and he replied:
So, the more you can do to minimise the data transfers and communications overheads, the better. Strangely enough, that's exactly the same point as was being stressed at the recent 'Preparing for Parallella' event, which I attended with Claggy.
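To make the point concrete, the usual pattern for minimising transfer overheads looks like this (a generic sketch of mine, not Jason's actual code): page-locked host buffers plus asynchronous copies in a stream, so transfers can overlap with kernel execution instead of stalling it.

#include <cuda_runtime.h>

/* Placeholder kernel standing in for the real per-sample work */
__global__ void process(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;
    float *h_buf, *d_buf;
    cudaStream_t stream;

    cudaMallocHost(&h_buf, n * sizeof(float)); /* pinned memory enables async DMA */
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaStreamCreate(&stream);

    /* Upload, compute and download are queued on one stream, so the host
       isn't blocked and copies can overlap work on other streams */
    cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    process<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n);
    cudaMemcpyAsync(h_buf, d_buf, n * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);   /* wait for the whole pipeline */

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}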
Claggy has posted links to videos of the event in the "Interesting Project on Kickstarter" thread, but I'd especially commend the keynote address by Iann Barron (Inmos, Transputer) - http://www.youtube.com/watch?v=8sO-jj9X2xc - even to a general audience.
(truncated nvidia-smi -q output from my card; everything relevant is N/A:)
        SM                  : N/A
        Memory              : N/A
    Applications Clocks
        Graphics            : N/A
        Memory              : N/A
    Max Clocks
        Graphics            : N/A
        SM                  : N/A
        Memory              : N/A
    Compute Processes       : N/A
My reference to "value added" was meant to ask: if I go to the effort of installing the CUDA toolkit, will I have the tools to acquire the GPU utilization percentage, or will this tool set also fail to provide that information? In other words, if it doesn't provide what I want, I don't want to invest the time installing it.
I'm not aware that support for that got dropped in a certain driver version, but it may of course very well be the case (for desktop cards). All our Linux boxes run Tesla-series cards, and those still show these values with driver 319.37 (the Tesla-architecture cards show only fan speed; the Fermi ones also show GPU utilisation). But again, installing the CUDA toolkit shouldn't make any difference.
Sorry,
Oliver
Einstein@Home Project