A walk to the AMD side

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7221544931
RAC: 972157

Gary Roberts wrote:
I don't know what the 2nd hand market is like in your neck of the woods, but would it be possible to get enough for the 1060 + 1050 to pay for a RX 570 - particularly if you could get one for $140?

Your comment motivated me to browse around on eBay auctions.  It appears likely that I could get appreciable money for my RTX 2080 and some money for my Founders Edition GTX 1070.  Even the 1050 and 1060 cards seem to have a pretty active auction market, with used cards sold by individuals likely to fetch $50 or a bit more.  If I re-learn how to sell on eBay with the 2080, I should be able to post some of the others with little added effort.

If the Vega VII continues to look good (I'm particularly interested in more user power measurements and fan noise comments), I think my big move may be to push the RTX 2080 and GTX 1070 out of my big box in favor of a Vega VII once the supply channel pushes the available price down near list, then sell the 2080 and try to leverage that sales effort to sell the most attractive of my growing pile of retired cards.

My other two boxes have too little card-length clearance and too little ventilation to be candidates for anything close to a Vega VII upgrade.  I may yet push the 1060 3GB and 1050 out in favor of a 570 soon.  I continue to be very happy with my first RX 570, now that it has responded very nicely to power limitation.

koschi
Joined: 17 Mar 05
Posts: 86
Credit: 1688507555
RAC: 824275

After exclusively using Nvidia (on Linux) since 2008, I was curious about AMD's driver and open source efforts, and I needed to build some knowledge in that area to help a team member. So a week ago I bought a Sapphire RX 580 Nitro+. It also comes with a BIOS switch to choose between gaming and compute modes.

While in gaming mode @ 120W power cap, the card already had twice the output of my GTX 1060 3G (@ 70W PL), consuming 70W more power. The GPU clock was automatically throttled to 1300MHz, the RAM ran at 2000MHz. Average completion time for 2 WUs was 1060 seconds.

Now in compute mode, the default power cap is 122W, but the card consumes only 82W while crunching 2 WUs. The core clock dropped to 1120MHz, but the vRAM increased to 2075MHz. 2 WUs now complete in 1030 seconds.

Quite amazing: double the performance of a GTX 1060 3G for just 12W more power consumed. I paid 220€ because I couldn't wait; otherwise those cards sell for 160€ on eBay. Other RX 580s are even cheaper, but I wanted a quiet one without coil whine...
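
For anyone who wants to redo the arithmetic, here is a rough back-of-the-envelope sketch using just the figures above (card power as reported, not measured at the wall; real numbers will vary per host and task type):

# Rough throughput/efficiency comparison from the figures quoted above
# (card power only, not wall power; just an illustration, not a benchmark).
cards = {
    "RX 580 gaming mode":  {"watts": 120, "sec_per_2wu": 1060},
    "RX 580 compute mode": {"watts": 82,  "sec_per_2wu": 1030},
}

for name, c in cards.items():
    wu_per_hour = 2 * 3600 / c["sec_per_2wu"]         # two WUs finish per run
    wu_per_kwh = wu_per_hour / (c["watts"] / 1000.0)  # work done per unit of energy
    print(f"{name}: {wu_per_hour:.2f} WU/h, {wu_per_kwh:.0f} WU/kWh")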

cecht
Joined: 7 Mar 18
Posts: 1533
Credit: 2902268871
RAC: 2177298

Gary Roberts wrote:
Maybe, with AUTO, the Einstein app would need to offer some sort of signal to trigger a mode switch.  I don't imagine the Einstein app is fancy enough to do that.  Can you set that COMPUTE mode and, if so, does it make any difference?

Short answer: yes and yes, but not in a good way.

Long answer: With both the gaming and the mining BIOS, manually setting the RX 570 performance state to COMPUTE seems to shuttle all computations to the CPU. The GPU goes into a resting state (300 MHz clocks) with zero usage while tasks are running. My four CPU cores, on the other hand, rotated through intense periods of usage. It reminded me of the class of E@H tasks where the CPU takes over the crunching, with pulses of high activity during the last 10% of task completion. In COMPUTE mode, CPU activity was high enough to really slow down system response time; I thought the system had frozen at first, but it was just too busy to respond normally. E@H tasks continued to progress toward completion, but because of the fits and starts of the BOINC window updates I didn't get a sense of whether it was at the normal rate. I assumed it was slower.

When I set the performance state back to AUTO (3D_FULL_SCREEN), the card remained resting until I restarted the host. I've no idea why the card didn't fully reset.

Setting GPU performance modes may only be a Linux thing. Both the amdgpu-utils and rocm-smi utilities allow setting the modes; I'm using amdgpu-utils.  I don't recall AMD's Wattman in Windows allowing changes to the performance mode, at least not with the Adrenalin driver package that I had used.
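
As far as I can tell, both utilities ultimately just drive the amdgpu sysfs files. Here is a rough Python sketch of that mechanism (my assumptions: the card is card0, the amdgpu driver is loaded, and it is run as root; paths can differ between systems and kernel versions). This is only an illustration, not code from either utility:

#!/usr/bin/env python3
# Minimal sketch of performance-mode switching through the amdgpu sysfs
# interface (roughly the mechanism tools like amdgpu-utils / rocm-smi expose).
# Assumptions: card0, amdgpu driver loaded, run as root; paths may vary.

CARD = "/sys/class/drm/card0/device"

def read(path):
    with open(path) as f:
        return f.read()

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

def set_profile(name):
    """Select a power profile (e.g. COMPUTE, 3D_FULL_SCREEN) by name."""
    # Profile selection only sticks while the performance level is 'manual'.
    write(f"{CARD}/power_dpm_force_performance_level", "manual")
    for line in read(f"{CARD}/pp_power_profile_mode").splitlines():
        fields = line.split()
        if len(fields) >= 2 and name in fields[1]:
            write(f"{CARD}/pp_power_profile_mode", fields[0])  # write the row index
            return
    raise ValueError(f"profile {name!r} not found")

def restore_auto():
    """Hand clock and profile management back to the driver (AUTO)."""
    write(f"{CARD}/power_dpm_force_performance_level", "auto")

if __name__ == "__main__":
    set_profile("COMPUTE")
    # ... crunch for a while, then call restore_auto()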

Ideas are not fixed, nor should they be; we live in model-dependent reality.

cecht
Joined: 7 Mar 18
Posts: 1533
Credit: 2902268871
RAC: 2177298

Here are some details on GPU performance modes.

This is a screenshot of the card's performance modes table, from the amdgpu-utils utility:

And this is an edited description of those modes, from the ROC-smi documentation:

SCLK_UP_HYST - Delay before sclk is increased (in milliseconds)
SCLK_DOWN_HYST - Delay before sclk is decreased (in milliseconds)
SCLK_ACTIVE_LEVEL - Workload required before sclk levels change (in %)
MCLK_UP_HYST - Delay before mclk is increased (in milliseconds)
MCLK_DOWN_HYST - Delay before mclk is decreased (in milliseconds)
MCLK_ACTIVE_LEVEL - Workload required before mclk levels change (in %)

Given that the performance mode seems to just adjust delays and thresholds in the response of the shader (core) clock and memory clock, I don't understand the different computing behavior I saw between the COMPUTE and 3D_FULL_SCREEN modes. Does anyone have any ideas?
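
If anyone wants to compare the two modes on their own card, here is a rough monitoring sketch for Linux (my assumptions: card0, the amdgpu driver, and a fairly recent kernel; exact sysfs file names can vary) that logs the active profile, the current clock states, and the GPU load while tasks run:

#!/usr/bin/env python3
# Log the active power profile, current clock levels, and GPU load, to compare
# behaviour between COMPUTE and 3D_FULL_SCREEN while E@H tasks are running.
# Assumptions: card0, amdgpu driver, recent kernel; sysfs names may vary.
import time

CARD = "/sys/class/drm/card0/device"

def read(name):
    with open(f"{CARD}/{name}") as f:
        return f.read().strip()

def active_state(name):
    # pp_dpm_sclk / pp_dpm_mclk mark the currently selected level with a '*'
    for line in read(name).splitlines():
        if line.endswith("*"):
            return line.rstrip(" *")
    return "?"

while True:
    profile = next((l.strip() for l in read("pp_power_profile_mode").splitlines()
                    if "*" in l), "?")
    print(f"profile: {profile} | sclk: {active_state('pp_dpm_sclk')} | "
          f"mclk: {active_state('pp_dpm_mclk')} | busy: {read('gpu_busy_percent')}%")
    time.sleep(5)

Running that in one terminal while switching modes in another should at least show whether COMPUTE really leaves the card parked at 300 MHz with zero load.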

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Sir Antony Magnus
Joined: 15 Mar 05
Posts: 8
Credit: 2577198
RAC: 0

Very interesting commentary. I have been out of the loop for a while, so I don't have much to offer in the way of answers, but I would like to say that AMD/ATI cards are very capable. I was, like you, an exclusive Nvidia user for years; I recently bought an AMD/ATI RX 560 and am very happy with it, both in terms of crunching and of gaming. It handles my kind of lifestyle quite effectively! And it's very quiet, even whilst crunching!

 

Where Nvidia shines most is for the ultra gamers who want the best performance and HIGH FPS, and perhaps the highest crunch output. I wish AMD/ATI would give Nvidia a run for its money the way they do with Intel. Ha

kb9skw
Joined: 25 Feb 05
Posts: 21
Credit: 376405916
RAC: 11192

I don't know how much of a difference it makes, if any, but if you are using Windows 10 there is an option in the Radeon Settings under Gaming -> Global Settings for GPU Workload type. From my understanding, this is basically the successor of the blockchain driver from a while back.

 

Anyone want to play around with mining BIOSes and see if they offer increased E@H performance? I don't know how similar the two workloads are, so I have no clue what the difference would be, but apparently hash rates go up with the custom BIOSes.

 

Bill
Joined: 2 Jun 17
Posts: 38
Credit: 328427526
RAC: 246971

Archae86, you said in your first post that you were completing tasks in about 660 seconds.  It appears more recently that they are being completed in about 1200 seconds.  What changed?

kb9skw
Joined: 25 Feb 05
Posts: 21
Credit: 376405916
RAC: 11192

Bill wrote:
Archae86, you said in your first post that you were completing tasks in about 660 seconds.  It appears more recently that they are being completed in about 1200 seconds.  What changed?

 

He is running two work units at once. The average for one work unit is 660 seconds, so twice that would be 1,320 seconds. In reality, running two work units at once he seems to be at around 1,240 seconds. Anything less than twice the single-task time means you come out ahead on RAC.

 

My two cards get return times of 690 and 720 seconds, and about 1,220 and 1,280 seconds at two work units per GPU.
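
Put another way, running 2x wins whenever the pair finishes in under twice the single-task time. A quick sketch of the arithmetic using the numbers above:

# Throughput comparison for running 1 task vs 2 tasks at a time,
# using the times quoted above (illustration only).
t_1x = 660         # seconds per task when running one at a time
t_2x_pair = 1240   # seconds for two tasks to finish when running two at a time

tasks_per_hour_1x = 3600 / t_1x           # ~5.5 tasks/hour
tasks_per_hour_2x = 2 * 3600 / t_2x_pair  # ~5.8 tasks/hour

print(f"1x: {tasks_per_hour_1x:.2f}/h   2x: {tasks_per_hour_2x:.2f}/h")
# 2x comes out ahead whenever t_2x_pair < 2 * t_1x.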

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7221544931
RAC: 972157

kb9skw wrote:

He is running two work units at once. The average for one work unit is 660 seconds, so twice that would be 1,320 seconds. In reality, running two work units at once he seems to be at around 1,240 seconds. Anything less than twice the single-task time means you come out ahead on RAC.

 

My two cards get return times of 690 and 720 seconds, and about 1,220 and 1,280 seconds at two work units per GPU.

My card is currently running under a directive to limit power to 60% of the maximum allowed (the actual slider reads -40% in MSI Afterburner).  This saves less power than you'd think, but an appreciable amount; it cost me a little performance, and it brought the reported temperature down to about 70C at a fan noise level my wife and I can live with.

If your case is better ventilated, your tolerance for high temperatures and fan noise greater, and your aversion to power consumption less, you may well not only skip power limitation but also have a try at overclocking.  So quite likely a person in that camp can get noticeably more throughput than I am getting.

Personally, I'm pretty happy with the tradeoff at my current operating point.

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

cecht wrote:
I've been running that card with the BIOS switch in the default gaming position (toward the rear of the card), which gives a top core clock of 1286 MHz, a top memory clock of 1750 MHz, and a power limit of 125 W. I switched to the mining BIOS (shut down, *flip*, reboot), and now see a top core clock of 1100 MHz, a top memory clock of 1850 MHz, and a power limit of 120 W.  Without tweaking any amdgpu settings, while running E@H, GPU power hasn't gone above 88 W and temps are 74 C.  And task times? While running at 2X tasks, individual task times are ~582 s with the mining BIOS vs. ~606 s with the gaming BIOS. (For those gaming BIOS times, I had the card power capped at 97 W, or -22%; when paired with an RX 460 in the same host, using the power cap paradoxically gave me faster task times, and cooler temps, than if the 570 were run at full speed.)

That's a great leap with the mining BIOS !! That information prompted me to try whether my card, which has no BIOS switch, would be able to get even near those completion times. To add to the challenge... this is a Windows host, so I had a bad feeling it wouldn't be possible, at least not with any kind of 'eco' settings.

Long story short... yeah, NO, it wasn't possible with 'eco' settings. I tried a few power limiting / memory overclocking settings, but run times (2x) clearly kept going over 20 minutes.

archae86 wrote:
If your case is better ventilated, your tolerance for high temperatures and fan noise greater, and your aversion to power consumption less, you may well not only skip power limitation but also have a try at overclocking.  So quite likely a person in that camp can get noticeably more throughput than I am getting.

Yes, that prompted me to try whether those 'under 1200 s' run times could be achieved by throwing the 'eco' settings out of the window and overclocking the card, with '+' values also on the power limit slider.

My card is an Asus RX 570 Expedition 4GB. Stock clocks are 1256 MHz GPU / 1750 MHz memory. Not the fastest model in this category, for sure.

Those stock clock speeds gave about 660 / 1230 s run times earlier... similar to what others had observed at the time.

I use Sapphire TRIXX for overclocking. I tested a couple of faster memory settings, but the screen blacked out... these settings now seem to be stable:

1280 MHz GPU / 2120 MHz memory / +28 % board power limit ... (62 % fan / 75 C GPU temp)

This card is now running 1046L_180 tasks. Run times seem to be in the low 19 minutes... 114x s. The fastest so far has been 1137 s.

I don't know how many watts the card is using; I don't have a hardware meter. Looking at the TRIXX "Power consumption" reading and comparing it to readings at stock clocks... I don't think the card is now drawing as much as +28 %. I believe it's already self-limiting and that setting, for example, +50 % wouldn't change anything. I believe that to make the card really use that much power would also require positive values for the VDDC offset, which I haven't touched. But I can see the card is drawing somewhat more power than at stock clocks. These settings are far from the best bang for the buck, that's for sure. I can't really recommend them: the small additional productivity from this overclocking is not in line with the disadvantages (additional fan noise, heat, power usage...).

kb9skw wrote:
I don't know how much of a difference it makes, if any, but if you are using Windows 10 there is an option in the Radeon Settings under Gaming -> Global Settings for GPU Workload type. From my understanding, this is basically the successor of the blockchain driver from a while back.

I tried changing that workload type setting to "compute", but didn't notice any difference in crunching speed.
