going from GTX 980 to 1080 does not seem to have any performance benefit

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 12
Topic 217123

Hello All,

I just upgraded from a 980 to a 1080 (reference/Founders Edition). As far as I can tell, I get no performance bump in the rate at which work units get done, and the GPU hovers between 87 and 91 percent utilization. I am wondering why there is no jump.

I did see that there is a post about an issue with Maxwell cards sticking themselves into the P2 state, but from reading through the whole thread, it seems the P2 state doesn't mean much for the Pascal generation. I could be wrong, though, or have read misinformation. My card does appear to be stuck in the P2 state, although the clock rate sits at 1847 MHz under load.
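In case it helps, this is how I checked the P-state and clocks under load: a quick nvidia-smi query (assuming nvidia-smi is on the PATH; the query fields below are standard ones):

    nvidia-smi --query-gpu=pstate,clocks.sm,utilization.gpu --format=csv

That is where the P2 and the 1847 MHz figure above come from.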

Maybe there is something I have missed? Do work units have a variable size based on how powerful the GPU is, so that I AM technically doing more work? My RAC does not seem to have changed, which makes me think I am not doing any more work than before.

Any information would be most helpful. I currently run an FX 8350 (this does not appear to be a bottleneck; no single core is maxed out), the single 1080, and 16 GB of RAM. I only run Einstein@Home on GPUs, as I have a separate project for CPU work.

mikey
Joined: 22 Jan 05
Posts: 12702
Credit: 1839106411
RAC: 3622

Domenic wrote:
... I just upgraded from a 980 to a 1080 (reference/Founders Edition). As far as I can tell, I get no performance bump in the rate at which work units get done ...

The problem is the small workunits we are going through right now; they are so small that your new GPU doesn't have time to stretch its legs and pull away from the 980's times.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 12

Gotcha, that makes sense. So here is a question, then: if I want to run 2 work units at a time, what is the best way to set up BOINC for that? I have read through multiple guides and never seem to understand how to get it working correctly. Maybe I am just reading the wrong guide material.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117770528677
RAC: 34798242

Firstly, the current tasks are quite normal and no different from the majority that have been crunched over the last year or two.  It's not correct to blame perceived poor performance on 'small' tasks.  A GPU doesn't need to 'stretch its legs'.

Occasionally, and for relatively short periods, there will be groups of tasks that crunch even faster than the current 'normal' tasks.  You may see them referred to as 'high pay' tasks when people comment about them.  Conversely, the current type of task has been referred to as 'low pay' since they take longer to crunch.

It's been about a month since there was a batch of high pay tasks, and that batch only lasted about 2 weeks.  There is no guarantee there will be further such batches, but there could be.  I think there have been about 3 (maybe 4) such batches during the whole of 2018; the current type has been much more predominant.  If your 980 happened to be running while a batch of high pay work was in play, I'm not surprised that your current 1080 performance on low pay work might seem a bit underwhelming :-).

Domenic wrote:
... If I want to run 2 work units at a time, what is the best way to set up BOINC for that?

Your computers are hidden, so it's not possible to give advice based on your full hardware details.  However, if you just have 1 GPU installed and your host has 8 cores and enough memory, the easiest thing to do is to go to your project preferences page, change the GPU utilization factor for FGRP apps to 0.5, and make sure to save the change.

The next time your computer asks for work, the new setting will be returned to it and a second GPU task will start crunching.  As a consequence (if you are using all available CPU threads for crunching), a CPU task will stop (and be finished later when another CPU task completes).  This is because each crunching GPU task needs an available CPU thread for support, and this is enforced by BOINC.  Without that support, GPU crunch times will suffer.

Don't expect a big increase in performance; hopefully you will get a modest one.  Also, don't expect that going to 3 or more concurrent tasks will lead to further gains.  Going to 2 will give you most of what can be achieved, and the only way to find out for sure is to do proper experiments.

Unless a batch of high pay work comes along, the current tasks should be fairly uniform in their crunch times, which makes it fairly easy to judge performance gains.  If crunch times become overly variable and slow, your settings are too aggressive for your hardware to cope with.  Try further reducing the number of active CPU threads; there is a preference setting that controls the percentage of cores BOINC is allowed to use.
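If you'd rather set that core limit locally instead of via the website, a sketch of the equivalent entry in global_prefs_override.xml (in the BOINC data directory) is below.  The 75 is just an example value, and the client needs to re-read its local preferences (or be restarted) before the change takes effect.

    <global_preferences>
       <!-- example value: let BOINC use at most 75% of CPU threads -->
       <max_ncpus_pct>75.0</max_ncpus_pct>
    </global_preferences>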

 

Cheers,
Gary.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

If they take longer to run and pay the same, then shouldn't they be called "long" runs instead of "low" pay, as the "pay" isn't any different?

Keith Myers
Joined: 11 Feb 11
Posts: 4968
Credit: 18767623384
RAC: 7101652

+1. I too dislike the reference to "high" or "low" pay tasks.  All tasks produce the same credit.  Just some run faster, or shorter, than others.

 

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

Keith Myers wrote:
Just some run faster, or shorter, than others.

Me too. That's how I would describe the nature of these tasks to any user who knew nothing about the tasks here and was reading about them for the first time. This pay-thing feels quite backwards to me. I think those pay-words too often need further explanation to avoid confusing an outsider.

After running a deep analysis of my conscience, I have personally decided not to use pay-terminology any more, at least until I see a new type of work that introduces a completely different kind of time-Cobblestone-credit fabric than anything I've seen so far.

Domenic
Joined: 22 Sep 15
Posts: 21
Credit: 95582242
RAC: 12

Thank you for your help, everyone! I thought I had to set up app config files and other things to make it split up the GPU work, but I will just try the project settings.
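For anyone who does want the app_config route later, my understanding from the guides is that a minimal sketch looks something like the following, saved as app_config.xml in the Einstein@Home project folder under the BOINC data directory. The app name below is my assumption for the FGRP GPU app, so check the names your own client reports before relying on it:

    <app_config>
       <app>
          <!-- assumed FGRP GPU app name; verify against your client's task list -->
          <name>hsgamma_FGRPB1G</name>
          <gpu_versions>
             <gpu_usage>0.5</gpu_usage>  <!-- two tasks share one GPU -->
             <cpu_usage>1.0</cpu_usage>  <!-- reserve a full CPU thread per task -->
          </gpu_versions>
       </app>
    </app_config>

The 0.5 here matches the GPU utilization factor setting, and the 1.0 matches the CPU thread support Gary described.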
