Yeah, that's just the next iteration in Nvidia's x4 line. They had the P4, T4, and A2 (basically the T4's successor), and now the L4. They are always low-profile, single-slot cards with a fairly low TDP under 75 W. I don't know where you see that the L4 has an 8-pin power connection. The TPU link you provided says it has a 16-pin, but that's not correct either. The L4 has only a 72 W TDP and draws its power from the slot, with no external power connector.
The host you linked is a cloud rental of some kind.
I also don't remember where I read about that single 8-pin.
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
... usually under 75 W, no pin connectors are needed.
BUT please see
askgeek.io
+
technical.city
---------> 16-pin supplementary connector
Nice Sunday
S-F-V
You have to realize that when some third-party data scraper is wrong, it's wrong.
The L4 does not have any power connector. Just look at pictures of it online; it makes no sense to put a power connector on such a low-power card. It's most likely being confused with the specs of the L40, which does have a 16-pin power connector.
... that's what I wanted to point out.
As I said, under 75 W it's not needed.
Thanks for clarifying.
Is it just my imagination, or are RTX 4090 GPUs popping up like dandelions in the top 50?
I thought they were too expensive? You could usually get two RTX 3080 Tis for the price of one 4090, and get more production.
Tom M
Since one 4090 has a max TDP of 450 watts, it will draw much less power than two 3080 Tis at 350 watts each, so it's cheaper to run long term.
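The long-term cost argument can be put into rough numbers. A minimal sketch, assuming a hypothetical $0.15/kWh electricity price and cards running flat out at their TDPs (real draw varies with load):

```python
# Rough running-cost comparison: one RTX 4090 vs. two RTX 3080 Ti.
# TDPs are from the post above; price per kWh and 24/7 uptime are
# illustrative assumptions, not measured values.
HOURS_PER_MONTH = 24 * 30
PRICE_PER_KWH = 0.15  # assumed USD per kWh

def monthly_cost(tdp_watts, n_cards=1):
    """Electricity cost per month for n_cards each drawing tdp_watts."""
    kwh = tdp_watts * n_cards * HOURS_PER_MONTH / 1000
    return kwh * PRICE_PER_KWH

print(f"one 4090:     ${monthly_cost(450):.2f}/month")     # $48.60
print(f"two 3080 Ti:  ${monthly_cost(350, 2):.2f}/month")  # $75.60
```

At these assumed rates the two-card setup costs roughly $27 more per month, which adds up over a couple of years of 24/7 crunching.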
They're all in the bottom half of the top-50 list, though, easily outperformed by the "weaker" GPUs on Linux. Some of that probably has to do with their being on Windows and the Linux CUDA app being better.
Those with 4090s on Windows hosts might also be using them for gaming. Most of them are single-GPU systems, and some people don't care much about cost; they just want the best single GPU they can get.
https://spectrum.ieee.org/nvidia-blackwell
My understanding is that it won't help us, though.
It will trickle down into the consumer GPUs soon enough with the RTX 50 series. We will probably see them within 12 months?
Rumor has it that they will have faster memory but the same memory bandwidth as the current generation of RTX GPUs. No one knows the CUDA core count yet.
PS: It actually could help us A LOT, if only we had ~$40,000 sitting around. Haha.
I am basing "it won't help us a lot" on its focus on smaller and smaller calculation sizes (down to 4 bits). My understanding is that we are best served by the fastest single-precision floating point we can get. I'm not sure we can use 16-bit or smaller calculations at all.
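The single- versus half-precision concern can be illustrated directly. A minimal sketch using NumPy (an assumption for illustration; the project's actual code isn't shown here) of why 16-bit floats drop small contributions that 32-bit floats keep:

```python
import numpy as np

# float16 carries an 11-bit significand, so at a magnitude of 2048 the
# spacing between representable values is 2.0 -- adding 1.0 rounds away.
a = np.float16(2048.0) + np.float16(1.0)
print(a)  # 2048.0: the +1 contribution is lost in half precision

# float32 (single precision) has a 24-bit significand and keeps the
# sum exactly.
b = np.float32(2048.0) + np.float32(1.0)
print(b)  # 2049.0
```

This is why workloads that accumulate many small terms into large sums generally need at least single precision, whatever the 4-bit tensor throughput numbers say.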