All things RX 500 series (460/560/570/580)

Tom M
Joined: 2 Feb 06
Posts: 6453
Credit: 9580119839
RAC: 7367255
Topic 222864

I have a pair of RX 570s here.

They are 4 GB video memory cards.

I am running both GW GPU and Gamma-ray Pulsar #1 tasks on these cards.

Currently I am running 1 GW GPU task per card and 2 Gamma-ray tasks per card.

I have been reading other threads on these topics and am now a little confused.

What is the highest number of each type of GPU task I should be running on these cards? I want to maintain reliable crunching while maximizing production.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

mikey
Joined: 22 Jan 05
Posts: 12689
Credit: 1839095161
RAC: 3719

Tom M wrote:

I have a pair of RX 570s here.

They are 4 GB video memory cards.

I am running both GW GPU and Gamma-ray Pulsar #1 tasks on these cards.

Currently I am running 1 GW GPU task per card and 2 Gamma-ray tasks per card.

I have been reading other threads on these topics and am now a little confused.

What is the highest number of each type of GPU task I should be running on these cards? I want to maintain reliable crunching while maximizing production.

Tom M 

2 Gamma-ray Pulsar tasks OR 1 GW task; the GW tasks take up to 4 GB for each work unit, so there is no memory left over for any extra work units.
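
For reference, per-GPU task multiplicity is normally set with an app_config.xml file in the Einstein@Home project directory. A minimal sketch, with the usual app names assumed here; check your client_state.xml or the website's application pages for the exact names on your host:

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>    <!-- Gamma-ray Pulsar GPU app; name assumed, verify locally -->
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>   <!-- each task claims half a GPU, so 2 run per card -->
         <cpu_usage>1.0</cpu_usage>   <!-- reserve a full CPU core per GPU task -->
      </gpu_versions>
   </app>
   <app>
      <name>einstein_O2MD1</name>     <!-- GW GPU app; name assumed, verify locally -->
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>   <!-- one GW task per card, per the memory limit above -->
         <cpu_usage>1.0</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

After editing, use Options -> Read config files in the BOINC Manager so the change takes effect without a restart.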

cecht
Joined: 7 Mar 18
Posts: 1534
Credit: 2907685437
RAC: 2166036

I run my 4 GB RX 570s at 3x for Gamma-ray #1, which gives a few percent improvement in task time over 2x, on my system at any rate.  With nothing else running on the host, that results in a stable RAC of ~1M after about a month.  I don't have a monitor connected to the cards.  They run at 1071 MHz, 906 mV (a p-state mask of 0,6).

Like Mikey said, 1x for the GW tasks gives the best overall result; run at full 1100 MHz (p-state mask 0,7).
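
In case anyone wants to reproduce those p-state masks without extra tools: they map onto the stock amdgpu sysfs interface. A minimal sketch, assuming the card shows up as card0 (the index varies, and the settings revert on reboot or driver reload):

echo manual | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level
echo "0 6" | sudo tee /sys/class/drm/card0/device/pp_dpm_sclk    # allow only sclk states 0 and 6 (the "0,6" mask)
cat /sys/class/drm/card0/device/pp_dpm_sclk                      # verify; the active state is marked with *

Use "0 7" instead for the full-clock GW setting.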

Everyone's system and needs are different, so the best way to tell is to let them run at one setting for 4-6 weeks, then try another setting and repeat, and see what gives the best RAC.

Ideas are not fixed, nor should they be; we live in model-dependent reality.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117657242738
RAC: 35186607

Tom M wrote:
I am running both GW GPU and Gamma-ray Pulsar #1 tasks on these cards.

Plus you are also running the GRP CPU search on your CPU cores.  The host details page says the host was created on June 18 but it has a total credit of ~43M and a RAC of ~1.1M so I guess this machine had a 'former life' with perhaps a quite different setup.

It also says that you currently have ~7K tasks of which ~5K are CPU tasks with 4,747 still 'in progress'.  The oldest ~500 of those are due to expire in less than 2 days.  What do you intend to do with those?  Why aren't you running with a very minimal work cache size until you establish a suitable set of operating conditions?

To have any chance of achieving a workable setup, you *must* use a minimal work cache size (eg 0.1 days max) until you have something close to a stable setup.

You should forget playing with GPU crunching parameters until you have solved the insane CPU task oversupply.  The client will be in panic mode trying to maximise the completion of those, and that will play havoc with the supply of CPU cycles needed to support the GPU tasks, particularly the GW ones.  You will have no chance of making sensible measurements under current conditions.

You are going to have to abort the majority of your CPU tasks.  Better to do that now rather than delay the process and force the server to handle it later.  You know how many cores you have.  You know how long a CPU task takes.  So reduce your cache size to stop any repeat overfetch, calculate the rough number (with the longest available deadline) that you can handle and immediately abort the rest.
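
As a worked example with purely illustrative numbers: a 12-core host whose CPU tasks take ~8 hours each can return 24/8 = 3 tasks per core per day, so a 14-day deadline caps it at roughly 12 x 3 x 14 = 504 tasks in progress; anything beyond that number cannot make its deadline and may as well be aborted now.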

I didn't look at GPU tasks at all.  You should assess whether or not there are too many of those.  If they are progressing OK, continue with the GW GPU tasks set to x1, and the FGRPB1G tasks set to x2 and come back here when you have the situation under some sort of control.

Cheers,
Gary.

Tom M
Joined: 2 Feb 06
Posts: 6453
Credit: 9580119839
RAC: 7367255

Gary Roberts wrote:

Tom M wrote:
I am running both GW GPU and Gamma-ray Pulsar #1 tasks on these cards.

Plus you are also running the GRP CPU search on your CPU cores.  The host details page says the host was created on June 18 but it has a total credit of ~43M and a RAC of ~1.1M so I guess this machine had a 'former life' with perhaps a quite different setup.

It also says that you currently have ~7K tasks of which ~5K are CPU tasks with 4,747 still 'in progress'.  The oldest ~500 of those are due to expire in less than 2 days.  What do you intend to do with those?  Why aren't you running with a very minimal work cache size until you establish a suitable set of operating conditions?

To have any chance of achieving a workable setup, you *must* use a minimal work cache size (eg 0.1 days max) until you have something close to a stable setup.

The work cache has always been 0.1 days and so has the additional work cache (0.1).

I plead guilty to consolidating systems.  And to moving the RX 5700s to a different system.

I have been reluctant to abort the CPU tasks, but I will get right on that.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7224654931
RAC: 1031446

Tom M wrote:
The work cache has always been 0.1 days and so has the additional work cache (0.1).

Then something is fouled up.
One way to get this apparent situation is to set the work cache for a different location (aka venue) than the one actually in use for the host.

You might wish to check the host's current location assignment (generic|home|school|work), displayed in the location scroll box at the bottom of your computer details page, then check your preferences for that location.

Tom M
Joined: 2 Feb 06
Posts: 6453
Credit: 9580119839
RAC: 7367255

The RX 580s with 8 GB of video RAM showed up.  They are installed in this box.

I am running GPU tasks from "all" available apps.

After running 1 per GPU and then 2 (I think), it is now running 3 per GPU and still taking less total time than 3 GW GPU tasks run one at a time.

When I get enough of a baseline on the Gamma-ray app, I will probably kick it up to 3 per GPU too.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7224654931
RAC: 1031446

Tom M wrote:

After running 1 per GPU and then 2 (I think), it is now running 3 per GPU and still taking less total time than 3 GW GPU tasks run one at a time.

You'll find that the memory requirement of GW tasks varies quite substantially.  In particular, I think it likely that you'll find that while tasks with Delta Frequency (DF) below .75 will indeed benefit from 3X running on your 8 GB cards, tasks of DF .75 through .95 will run slower (or just fail).  You may wish to bear in mind the memory requirements vs. DF and multiplicity graph shown in this message in the Navi 10 thread.
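
To put illustrative numbers on that: if a high-DF task needs, say, ~3 GB of video memory, an 8 GB card holds only floor(8 / 3) = 2 of them at once, so a 3X setting will slow down or fail on those tasks even though low-DF tasks fit comfortably at 3X.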

Aside from the memory requirements variation, current GW GPU tasks here vary quite a lot in Elapsed Time.  So accurate efficiency comparisons require either careful matching of tasks or very long time averaging (cecht suggested 4-6 weeks above).

Tom M
Joined: 2 Feb 06
Posts: 6453
Credit: 9580119839
RAC: 7367255

Because another person was having good production with RX 580s (8 GB), I upgraded to those.

I am still running 2 GPU tasks per card.

Because these cards have more video RAM, it looks like I can run maybe as many as 4 GR tasks per card.  So the real question becomes: what is the highest production point?

Since I have a baseline of 2 tasks, I guess I will switch up to 3 and see if it still runs "faster" per task than a linear increase in time.
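
The arithmetic to watch there: throughput is multiplicity divided by elapsed time, so 3X only beats 2X if task times grow by less than a factor of 1.5.  With made-up times: 2 tasks finishing in 40 min is 3.0 tasks/hour; 3 tasks in 55 min is ~3.3 tasks/hour (a win), while 3 tasks in 65 min is ~2.8 tasks/hour (a loss).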

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

mikey
Joined: 22 Jan 05
Posts: 12689
Credit: 1839095161
RAC: 3719

Tom M wrote:

Because another person was having good production with RX 580s (8 GB), I upgraded to those.

I am still running 2 GPU tasks per card.

Because these cards have more video RAM, it looks like I can run maybe as many as 4 GR tasks per card.  So the real question becomes: what is the highest production point?

Since I have a baseline of 2 tasks, I guess I will switch up to 3 and see if it still runs "faster" per task than a linear increase in time.

Tom M 

That is the only way to know what really works for you and your overall machine and crunching habits, i.e. whether you use the machine for other things or it's a BOINC-only machine, and also the mix of CPU tasks versus GPU tasks.  Be sure to give the GPU enough CPU support to give it a good chance of being optimal.

Tom M
Joined: 2 Feb 06
Posts: 6453
Credit: 9580119839
RAC: 7367255

I was dead flat on RAC at just under 1,000,000 on a pair of RX 580s (8 GB) under Linux.

The two RX 580s under Linux on the top-50 list are doing 1,100,000 or so.

I looked at his tasks.  He was/is consistently getting lower CPU times than I am (Intel CPU vs. AMD CPU).

So I am now set at 1 CPU core per GPU task, and 2 GPU tasks per card.

I am now using some AMD GPU tools that another BOINC person created to toggle the GPUs to "Compute" from "3D screen", the apparent default.
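
Those tools most likely drive the amdgpu power-profile knob in sysfs.  A minimal manual sketch, assuming card0; the number listed next to COMPUTE varies by GPU and driver, so the 5 below is only an example index:

cat /sys/class/drm/card0/device/pp_power_profile_mode     # list profiles; note the index shown next to COMPUTE
echo manual | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level
echo 5 | sudo tee /sys/class/drm/card0/device/pp_power_profile_mode    # example index, check the list first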

Is there anything else I can do to make it up to the magic 1,100,000 plateau? (Besides adding another card).

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!
