My experience with NVIDIA on macOS over the past 3 years with various software is:
If you need OpenCL, go for AMD.
Although OpenCL performance on NVIDIA has improved a lot over the years, it's still not there. In the beginning, when the software I use started supporting GPUs, the AMD cards were 10 times faster. I get the impression NVIDIA wants to push CUDA too much and cares about OpenCL too little.
I'm not sure how NVIDIA's OpenCL stacks up against AMD's on Windows and Linux, but the GTX 950 I have in a Windows machine seems to be working well: it cranks out a work unit every 33-35 minutes, while my GTX 960 on macOS takes over an hour. Are both of those times poor compared to their potential (with the Mac being particularly egregious)?
I've seen OpenCL-based LuxMark scores suggesting that OpenCL on NVIDIA on the Mac should be pretty close to its Windows counterpart, so I suspect the fault lies with the code here. I can't fault the developers too much, though, since most Mac users don't have high-end NVIDIA cards unless they are using NVIDIA's web drivers, so they probably prioritize their coding efforts elsewhere. But what I wouldn't give to have my GTX 960 performing the way it did back when this project used CUDA. I don't want to replace it with another AMD card, because I plan to start working with machine learning on it again soon.
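For anyone who wants to check what their card actually reports under each OS before blaming the project code, here is a minimal sketch using the standard OpenCL C API that lists each GPU with its driver and OpenCL versions. The 8-entry caps are arbitrary; on macOS build with -framework OpenCL, elsewhere with -lOpenCL.

/* List each OpenCL GPU with its driver and OpenCL versions. */
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        /* Skip platforms with no GPU devices. */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;
        if (num_devices > 8) num_devices = 8;

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256], driver[256], version[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devices[d], CL_DRIVER_VERSION, sizeof driver, driver, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof version, version, NULL);
            printf("%s | driver %s | %s\n", name, driver, version);
        }
    }
    return 0;
}

Comparing the CL_DRIVER_VERSION and CL_DEVICE_VERSION strings the same card reports on Windows and on macOS is a quick way to see whether the gap is in the driver stack or in the application.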