AMD Radeon 6000 series (Big Navi)

Aron
Joined: 29 Sep 06
Posts: 29
Credit: 1154829667
RAC: 0
Topic 224119

Hi guys,

I was thinking about getting a new-generation GPU for gaming/crunching, and with the launch of the Big Navi RX 6800, 6800 XT and (soon) 6900 XT, I was wondering if any users here are crunching work with such a GPU. How does it compare to the Vega GPUs in terms of WUs per day? I tried to find info but couldn't really find anything, so I made a new topic. Feel free to remove it if such a thread already exists.

Thanks!

mikey
Joined: 22 Jan 05
Posts: 12702
Credit: 1839107474
RAC: 3620

Aron wrote:

Hi guys,

I was thinking about getting a new-generation GPU for gaming/crunching, and with the launch of the Big Navi RX 6800, 6800 XT and (soon) 6900 XT, I was wondering if any users here are crunching work with such a GPU. How does it compare to the Vega GPUs in terms of WUs per day? I tried to find info but couldn't really find anything, so I made a new topic. Feel free to remove it if such a thread already exists.

Thanks! 

I found this on Milkyway; it may help you with some info on it.

https://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=4668

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229874859
RAC: 1155477

I'd love to think the 6800 might meet the optimistic promise.  If it did the best one might hope, it could substitute for the 5700, which is already very good at running Einstein gamma-ray pulsar tasks, with a card that runs substantially more tasks per day in a single slot while burning only moderately more power.  That might make it the king of power efficiency for Einstein credit production, a title I think currently belongs to the rather rare, expensive, and somewhat problematic Radeon VII.

Some things to worry about (mostly based on my VII and 5700 experience).

1. Will it generate results that almost always validate on Einstein GRP and GW work?  We should remember that for months the 5700 (and others of that generation) only validated on GRP when the quorum partner was also of that family.  This was fixed in one driver revision, and we never learned what actually changed.

2. Size: will the card fit in the potential user's case?  I liked a particular XFX implementation of the 5700 a lot, but it was very large.  It was too long to fit at all in one of my cases, and too fat to double up gracefully with another of the same model or with many other cards.  I had to select a second card based on thinness, just to get a thermally acceptable solution for two cards in one box.

3. Availability: these things seem unusually hard to find at the moment.

4. Price: AMD seems determined to chase Nvidia into the higher spheres of pricing.

5. Power consumption: the proof of the pudding is in the running.  If you don't run games, you should only care about power consumption while running Einstein.  Power consumption figures from reviews on the web are only a very rough guide.  It is quite possible this one could be a breakthrough; it could also be a dud on Einstein.

6. CPU support on GW: this seems unlikely to be an issue on Einstein GRP, but on GW it may be difficult to find and fund a motherboard/CPU combination that can keep the beast usefully busy.  If it spends too much time sitting around waiting for CPU support, part of the expensive purchase price is wasted.

If I spotted someone selling an XFX implementation that seemed suitable for one of my two boxes that could possibly take it, at a non-ripoff price, I'd order it tomorrow.  I'd figure it might quite possibly be very good, and even if it turned out only pretty good I'd have done a favor for myself and for those who could learn from my reporting.

I hope that someone reading this thread buys one and reports here soon.

My single-card 5700 box that is only slightly throttled and has decent (but not fabulous) CPU support has a current RAC running GRP at 3X right around 900,000.  If the 6800 comes anywhere near the claimed relative performance, then single-card RAC well over 1,000,000 should be easily reached.  GW RAC, of course, would be very much lower than that.
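For what it's worth, that projection is nothing more than the back-of-the-envelope scaling sketched below; the speedup factor is an assumed placeholder drawn from review claims, not anything measured on Einstein.

# Back-of-the-envelope RAC projection, assuming Einstein GRP throughput
# scales with the reviewed relative performance (a big assumption).
rac_5700 = 900_000            # observed RAC of the single-5700 box above
assumed_speedup = 1.3         # hypothetical 6800-vs-5700 factor, not measured
projected_rac_6800 = rac_5700 * assumed_speedup
print(f"Projected single-card 6800 RAC: {projected_rac_6800:,.0f}")  # ~1,170,000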

 

Aron
Joined: 29 Sep 06
Posts: 29
Credit: 1154829667
RAC: 0

archae86 wrote:

I'd love to think the 6800 might meet the optimistic promise.

[...]

 

Thanks for the reply! I'm looking to replace my 3 Radeon VIIs. They are buggy and crash a lot; the drivers are really rubbish. Plus the temps are a wildcard and super inconsistent! I use water cooling, so the physical size of the card is not a big problem. But the VIIs are rare and it's annoying to find water-cooling blocks for them, so I figure it's time to go next gen! I'm eagerly awaiting the 6900 RAC numbers!

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 579193532
RAC: 203811

The memory interface of the new Navi cards can be problematic. Big Navi comes with exactly the same GDDR bandwidth as the 5700, half that of the Radeon VII. That's a killer feature for Einstein, in the sense that it kills the computational advantage the CUs have.

The elephant in the room is obviously the Infinity Cache. If it can be used well, as in games with tiled rendering, it provides 4x the bandwidth and better power efficiency. Then the new cards would probably wipe the floor (again) with the competition. However, for this to work perfectly, the "hot" working set when running a single WU should not exceed 128 MB. Does anyone know how this may work out for Einstein? The raw GPU memory consumption is going to be higher than the hot working set, as a real program never uses all of its variables at the same time.
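To put rough numbers on that argument, here is a toy effective-bandwidth model; the bandwidth figures and hit rates are assumptions for illustration only, not Einstein measurements.

# Toy effective-bandwidth model for a cache in front of GDDR6.
# All figures are assumptions for illustration, not Einstein measurements.
gddr_bw = 512.0               # GB/s, roughly Big Navi's GDDR6 bandwidth (assumed)
cache_bw = 4 * gddr_bw        # GB/s, the "4x" Infinity Cache figure quoted above

def effective_bw(hit_rate):
    # Fraction hit_rate of traffic is served from the cache, the rest from GDDR6.
    return 1.0 / (hit_rate / cache_bw + (1.0 - hit_rate) / gddr_bw)

for h in (0.0, 0.5, 0.9, 1.0):
    print(f"hit rate {h:.0%}: ~{effective_bw(h):.0f} GB/s")
# A hot set that fits entirely in the 128 MB (hit rate near 1.0) sees the full 4x;
# at 50% hits the gain is only ~1.6x, which is why the working-set size matters.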

MrS

Scanning for our furry friends since Jan 2002

mikey
Joined: 22 Jan 05
Posts: 12702
Credit: 1839107474
RAC: 3620

ExtraTerrestrial Apes wrote:

The memory interface of the new Navi cards can be problematic. Big Navi comes with exactly the same GDDR bandwidth as the 5700, half that of the Radeon VII. That's a killer feature for Einstein, in the sense that it kills the computational advantage the CUs have.

The elephant in the room is obviously the Infinity Cache. If it can be used well, as in games with tiled rendering, it provides 4x the bandwidth and better power efficiency. Then the new cards would probably wipe the floor (again) with the competition. However, for this to work perfectly, the "hot" working set when running a single WU should not exceed 128 MB. Does anyone know how this may work out for Einstein? The raw GPU memory consumption is going to be higher than the hot working set, as a real program never uses all of its variables at the same time.

MrS 

They talk about the 'new cache' in this review:

https://www.phoronix.com/scan.php?page=article&item=rx6800-more-performance&num=1

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229874859
RAC: 1155477

ExtraTerrestrial Apes wrote:
However, for this to work perfectly, the "hot" working set when running a single WU should not exceed 128 MB.

Ah, how we have gotten spoiled on cache sizes.

The IBM 360 model 195, a very high end machine for the day, had 32 kilobytes of cache.

The iconic DEC VAX 11/780 had 2 kilobytes of cache.

Caches a whole lot smaller than the working set can still do some good.

Of course you are right--holding the whole working set is better.

Chooka
Joined: 11 Feb 13
Posts: 134
Credit: 3723835759
RAC: 1934904

As Archae86 mentioned, drivers could be an issue. We just won't know.

If I were buying the 5700 when it came out, I'd wait for someone else to be the guinea pig, lol. I'm quite happy not to be the first owner of an expensive, brand-new card.

The ONLY driver I use for my Radeon VIIs is 19.9.2. Anything after that resulted in black screens for me. I've stuck with it since that driver came out and have no intention of changing. Mine are usually undervolted too, which drops the wattage. This is all in the other Navi thread.

Archae86 was extremely helpful with his U/V (undervolting) numbers, and I have no issues at all running my Radeon VIIs.

Regarding the 6800, AMD & NVIDIA are pushing the cards more towards the gaming market than the productivity side. They still perform well though. 

Also...supply. Good luck. I would just wait till about Feb/Mar next year as a minimum.


Aron
Joined: 29 Sep 06
Posts: 29
Credit: 1154829667
RAC: 0

archae86 wrote:

My single-card 5700 box that is only slightly throttled and has decent (but not fabulous) CPU support has a current RAC running GRP at 3X right around 900,000.  If the 6800 comes anywhere near the claimed relative performance, then single-card RAC well over 1,000,000 should be easily reached.  GW RAC, of course, would be very much lower than that.

 

About power efficiency: can I ask how much power one 5700 actually draws? My Radeon VIIs each draw about 220 W for 1.5-1.6M RAC. Hopefully the 6000 series will be able to improve on RAC/watt.

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7229874859
RAC: 1155477

Aron wrote:
can I ask how much power one 5700 actually draws? My Radeon VIIs each draw about 220 W for 1.5-1.6M RAC.

People can mean many things when discussing power consumption.  My personal preference is to take the wall-power watt-meter difference between the system running the Einstein workload and the system idle of Einstein work.  An error in that method is that the "off" number actually includes the idle consumption of the card.  A merit of the method is that it gets close to the real cost of Einstein support, as it includes not only the power burned on the card, but also the extra power consumed by the CPU, memory, and other motherboard components, plus whatever the imperfect efficiency of the PC power supply adds on top.  It is also an actual measurement, while most other figures just accept the "confession" of the card.
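In case a concrete example helps, the bookkeeping is nothing more than the subtraction sketched below; the two readings are made-up placeholders, not measurements from my systems.

# Wall-power delta method: attribute to Einstein the difference between the
# watt-meter reading while crunching and the reading while idle of Einstein work.
# The two readings below are made-up placeholders, not actual measurements.
wall_running_w = 260.0        # watt-meter reading, system crunching Einstein
wall_idle_w = 95.0            # watt-meter reading, Einstein work suspended
einstein_power_w = wall_running_w - wall_idle_w
print(f"Power attributed to Einstein: {einstein_power_w:.0f} W")
# Caveat, as above: the idle reading still contains the card's idle draw,
# so this slightly understates the card's true contribution.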

Most people here seem to mean numbers they get from self-reporting by the card to one or another monitoring application.

Then there is the operating condition of the card: is it throttled down or overclocked, and if so, by how much and by what method?

For recent XFX AMD cards, there is also the matter of a BIOS switch, which has a mode originally supplied for miners that adjusts some parameters to a more power-efficient set for continuous computation.  Cecht publicized the virtues of the mining position of this switch on this forum, and I've used it on several XFX cards ever since.

So, with all those caveats, I'll give a number that is easy for me to get without hooking up the watt meter, and ask you what else might interest you.

My simplest 5700 machine runs an XFX 5700.

The BIOS switch is set to the mining position.
It is slightly throttled by an MSI Afterburner Power Limit setting of -35%.
It runs three Einstein GRP GPU tasks and no other BOINC work.
RAC is pretty stable, quite near 900,000.

HWiNFO reports "GPU ASIC Power" as averaging 108 watts.  I think this is reporting the same parameter that GPU-Z labels as "GPU Chip Power Draw".

While -35% might sound like a lot of power limitation, for this card with the BIOS switch in mining mode it only just barely slows the card and reduces the power.  Going from memory, I think all settings from 0 to -30% gave identical results.

However, with the BIOS switch in the standard (gaming?) position, this card's power consumption would be appreciably higher.

I have a couple of MSI 5700 cards I bought because the XFX model I like would not fit in those boxes.  They lack the BIOS mining-mode switch and have quite different throttling behavior.  They live in more poorly cooled locations, and I throttle both of them heavily.  Both produce quite a lot less than 900,000 RAC.
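For a rough figure of merit, RAC per watt from the numbers quoted in this thread works out as sketched below; keep in mind that Aron's 220 W and my 108 W ASIC figure are not measured the same way, so this is only indicative.

# Rough RAC-per-watt comparison using numbers quoted in this thread.
# The power figures are not measured the same way (220 W per VII vs.
# 108 W "GPU ASIC Power"), so treat the result as indicative only.
cards = {
    "Radeon VII (Aron)": {"rac": 1_550_000, "watts": 220.0},   # midpoint of 1.5-1.6M
    "RX 5700 (archae86)": {"rac": 900_000, "watts": 108.0},    # ASIC power only
}
for name, c in cards.items():
    print(f"{name}: ~{c['rac'] / c['watts']:,.0f} RAC per watt")
# Roughly 7,000 vs 8,300 RAC/W on these inconsistent bases.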

 

 

subsonic
Joined: 8 Apr 17
Posts: 3
Credit: 89275904
RAC: 0

Here are results of a Radeon RX 6800 from Milkyway:
https://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=872217

The internal benchmark says: Estimated AMD GPU GFLOP/s: 545 SP GFLOP/s, 109 DP FLOP/s (Clock frequency: 1815 Mhz)
Runtime is about 74 seconds (it's not clear to me how many tasks run in parallel).
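If that runtime holds, converting it to throughput is just a division; the sketch below assumes tasks run one at a time per slot, which, as noted, is not confirmed.

# Convert the quoted per-task runtime into tasks per day.
# Assumes the 74 s tasks run one at a time per slot, which is not confirmed.
runtime_s = 74.0
tasks_per_day = 24 * 3600 / runtime_s
print(f"~{tasks_per_day:.0f} Milkyway tasks per day per slot")  # ~1168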
