Just a note to say that the rpmfusion-packaged Nvidia RPMs for Fedora Linux releases have been upgraded to the latest 560.35.03 version (from the previously available 555.58.02). No improvement in run time, but it's nice to be on the latest drivers.
Compared to the Linux driver available directly from Nvidia's web site, rpmfusion takes care of additional installation tasks such as disabling the nouveau driver, and indeed Nvidia themselves note: "many Linux distributions provide their own packages of the NVIDIA Linux Graphics Driver in the distribution's native package management format. This may interact better with the rest of your distribution's framework, and you may want to use this rather than NVIDIA's official package." Install guide here: https://rpmfusion.org/Howto/NVIDIA
On my 2080 Super I can solo crunch 4 O3AS WUs in an hour compared to 3 an hour in Windows, so it's worth having a Linux installation - even a scratch one on a USB stick. Linux also seems to handle running CPU tasks at the same time as GPU tasks better than Windows: with all CPU cores also running CPU tasks, a GPU O3AS WU completed in around half the time for me in Linux compared to Windows.
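For concreteness, the speedup those throughput figures imply can be worked out directly (a trivial sketch using only the numbers quoted above):

```python
# Throughput figures from the post: 4 O3AS WUs/hour in Linux vs 3/hour in Windows
linux_rate = 4.0    # WUs per hour
windows_rate = 3.0  # WUs per hour

speedup = linux_rate / windows_rate - 1.0
print(f"Linux throughput advantage: {speedup:.0%}")  # -> 33%
```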
I've NOT seen the 1.14 (gpu-recalc) application used for any of my WUs - only 1.15 (cuda, cpu re-calc). There is a slight improvement in run time with the 1.15 CUDA application over the 1.07 OpenCL one. To get 1.15 you need to allow beta/test applications in your E@H web account: Preferences -> Project -> Run test applications? YES.
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
I am getting the impression that these really high-performance, AI-oriented GPUs are not quite so hot at the type of crunching we do?
Those really expensive accelerators (H100, A100, Bx00) focus more on low-precision computation, since that is the direction AI and machine learning went/is going. That said, they would still be amazing at FP32 work here, but that is not where they "shine".
I still want one (or eight) though...
Edit: some of them are great at FP64, but none of the work here is FP64 that I am aware of.
There are some FP64 calculations in both the BRP7 and O3AS apps, but it's not a huge percentage of the overall computation, so you don't really see gains scale from better FP64 alone.
You see most gains here from better GPU memory speed/bandwidth and better FP32 performance, and, in the case of the CPU-recalc O3AS apps (1.07/1.08/1.15), also significant gains from faster CPU performance (mostly core clock speed).
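The point that a small FP64 fraction limits what faster FP64 hardware can buy is just Amdahl's law. Here is a minimal sketch - the 5% FP64 fraction and the 10x factor are assumed, illustrative numbers, not measurements of the actual apps:

```python
def overall_speedup(fraction, factor):
    """Amdahl's law: whole-task speedup when `fraction` of the
    runtime is accelerated by `factor`."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# Even 10x faster FP64 barely moves the needle if only ~5% of the
# computation is FP64 (hypothetical fraction):
print(overall_speedup(0.05, 10.0))  # -> ~1.047, i.e. under 5% faster overall
```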
I am very interested in finding strictly 2-slot GPUs with active cooling and power plugs off the back.
I know the M4000 and P4000 have this, and I suspect the P100 does.
Any ideas?
I have heard of adapters to move the connector from the top to the back, but they would need to be very thin.
I am looking at a GPU space with very constrained top clearance.
Thank you.
You would want the GP100 (not the P100, which is passively cooled). You would also need an odd connector; it is not standard. We like the P100 GPUs we have for FP64 work (only); they are not good for FP32 work.
This is why I normally re-paste my GPUs every 2-3 years.
Proud member of the Old Farts Association
Thank you for the info.
Tom M wrote: I am getting the impression that these really high-performance, AI-oriented GPUs are not quite so hot at the type of crunching we do?
"It depends", as always. Sometimes yes, sometimes no. It really depends on which GPU you're referring to.
Tom M wrote: I am very interested in finding strictly 2-slot GPUs with active cooling and power plugs off the back.
He's basically wanting something analogous to the Titan V, but with active cooling and a front-facing power connector.
The GP100 fits the bill, at significantly worse performance.
The GV100 is pretty much the only option that matches all the specs and capabilities, but there's no free lunch and they're all still like $1000+ each.