I have a mix of up to four different Nvidia GPUs running in the same setup and using different slots (by color). Works fine.
Just have to watch out that the driver supports all types.
The BIOS doesn't care, except for the "Above 4G Decoding" setting when using a lot of GPUs.
S-F-V
It turns out one slot is an x16 slot while the other is an x4 slot; I think the x4 slot is causing the problems. I'm going to rethink which PC to do this with, and maybe not do this at this time.
An X4 slot that is working well and fixed at gen3 should be able to run reliably.
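For reference, on Linux you can check what width and generation a link actually trained at from the LnkCap/LnkSta lines of `lspci -vv`. A minimal parsing sketch (the sample text below is fabricated for illustration; `parse_link` is just a name I've picked):

```python
import re

def parse_link(lspci_text):
    """Extract (speed in GT/s, lane width) from LnkCap/LnkSta lines of `lspci -vv` output."""
    pattern = re.compile(r"(LnkCap|LnkSta):.*?Speed (\d+(?:\.\d+)?)GT/s.*?Width x(\d+)")
    return {m.group(1): (float(m.group(2)), int(m.group(3)))
            for m in pattern.finditer(lspci_text)}

# Fabricated sample: a card capable of gen3 (8 GT/s) x16 that trained at x4,
# which is what you would expect to see in a physical x4 slot.
sample = """
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1
LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+
"""
links = parse_link(sample)
if links["LnkSta"][1] < links["LnkCap"][1]:
    print(f"link trained at x{links['LnkSta'][1]} of x{links['LnkCap'][1]} capable")
# prints: link trained at x4 of x16 capable
```

On a real system you would feed it the `lspci -vv` output for the GPU's bus address; LnkSta is the negotiated state, so it tells you whether the slot is really running at the width and generation you expect.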
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Hmmm, that throws a wrench in my thoughts. Okay, back to trying again tonight or tomorrow.
I kind of feel bad for AMD (and Intel) developing these super high-end datacenter GPU products that almost no one will use because of the dominance of CUDA, especially for anything AI/ML. All the hardware and efficiency in the world doesn't matter when the software you need or want to use is based on CUDA (and has been for a long time). Not a lot of places are going to rewrite code from scratch or port code (which arguably might take longer) when they can use any of the off-the-shelf stuff with CUDA with much less latency in getting up and running.
I am hoping all the news about a couple of open-source frameworks for a couple of languages is going to make a difference. Not rapidly, but sufficiently to keep the other GPU manufacturers trying to grab more market.
Right now NVIDIA has a market monopoly. Hopefully NOT a permanent one.
Ian&Steve C. wrote: all the hardware and efficiency in the world doesn't matter when the software you need or want to use is based in CUDA
This goes both ways: if the software you need doesn't support CUDA, NVIDIA cards are not that attractive anymore.
Also consider that some of the architecture these cards provide is designed specifically to meet the requirements of one type of task, and the software is then optimised for that architecture only.
Yes, CUDA and AI are the hype at the moment, but there is an industry besides that, and it depends on those cards.
The other option is OpenCL, which Nvidia still supports.
Or HIP, which Nvidia also supports.
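To illustrate why porting to HIP is often much less work than a from-scratch rewrite: AMD's hipify tools (hipify-perl, hipify-clang) translate most CUDA runtime calls to HIP by what is essentially a mechanical one-to-one rename. A toy sketch of that idea (`toy_hipify` and the mapping subset are mine for illustration; the real tools cover far more of the API):

```python
import re

# A small, illustrative subset of the one-to-one CUDA -> HIP runtime mappings.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(source: str) -> str:
    """Rename CUDA runtime identifiers to their HIP equivalents (toy hipify)."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, CUDA_TO_HIP)) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

cuda_src = "cudaMalloc(&d_x, bytes); cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);"
print(toy_hipify(cuda_src))
# prints: hipMalloc(&d_x, bytes); hipMemcpy(d_x, h_x, bytes, hipMemcpyHostToDevice);
```

The catch, of course, is everything that isn't a simple rename: inline PTX, warp-size assumptions, and CUDA-only libraries, which is where the porting effort actually goes.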
Ian&Steve C. wrote: the other option is OpenCL, which Nvidia still supports.
Well... yes, it supports it, but in my experience when OpenCL is needed AMD is the better choice, at least for the applications I use.
But as said: it highly depends on the computing that has to be done. As I see it, AMD doesn't really try to compete with NVIDIA where it can't, but focuses on all the rest, where in turn NVIDIA can't compete. Of course NVIDIA has the cherry right now and is making lots of money.
And this DNA is seen even in gaming cards: used for computation, with one task a 7900 XTX can suddenly be 2-4 times faster than a 4090, just to be beaten the same way by the 4090 in the next task.
Similar things can be observed with CPUs: if you really want the best for what you do, a lot of research is required. Consumers or small businesses will rarely do that, as they don't have the capacity. But when it comes to enterprise usage, the gains in performance and reduction of running costs are a whole different level, and companies market their products straight at that specific client base.
https://wccftech.com/amd-instinct-mi300a-apu-cdna-3-gpu-zen-4-cpu-unified-memory-up-to-4x-speedup-versus-discrete-gpus/
Now if the price was within reach. :)