Interesting observation on Nvidia GPUs.

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1595512309
RAC: 778506
Topic 209908

I have two i5 boxes with a GTX 1060 each. One of them was a dedicated Einstein box; the other splits its time with Seti. The dedicated box always ran the Gamma-ray pulsar tasks in about 30 minutes, two at a time; the other one consistently took over a minute longer. My goal with the second 1060 was to get my RAC here up to 500k, then give Seti whatever resources were left. Once I got there I gave Seti a small resource share from the first box and noticed my Einstein times increased by a bit over a minute. What I figured out was that when either box finishes a Seti GPU task, it starts two Einstein tasks at the same time. That means they end at the same time, so they both do that low-GPU-usage thing for a couple of minutes, whereas if their start times are staggered, when one task finishes the other is still running, the GPU utilization is much higher, and the run time drops. That doesn't sound like a lot, but it works out to over 3 more tasks a day getting done.
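As a rough sanity check of the "over 3 more a day" figure, here is a back-of-the-envelope Python sketch. The 30-minute staggered run time and two-at-a-time setting are the ones quoted above; the 31.2-minute synced time is an assumed value for "a bit over a minute" longer.

MINUTES_PER_DAY = 24 * 60

def tasks_per_day(run_time_min, concurrent=2):
    # Completed tasks per day for one GPU running `concurrent` tasks at a time.
    return concurrent * MINUTES_PER_DAY / run_time_min

staggered = tasks_per_day(30.0)   # starts staggered: ~30 min per task
synced = tasks_per_day(31.2)      # starts together: assumed ~31.2 min per task

print(f"staggered: {staggered:.1f}/day, synced: {synced:.1f}/day, "
      f"gain: {staggered - synced:.1f}/day")
# -> roughly 3-4 extra tasks per day, consistent with the observation.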

I am in the process of clearing Seti off the first box and will run a mixed load on only one of them. That will increase my BOINC throughput a fair amount.

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3436716540
RAC: 4137994

MilkyWay did something

MilkyWay did something similar at the end of a task as well, and many people ran more simultaneous tasks than needed just to keep the GPU busy during the CPU crunch period. A single task pegged my 280X while the GPU was working, but I ran 4x. This is probably the same for AMD cards as well, I would think.
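A toy model of why the over-commit trick works: treat each task as a GPU-heavy phase followed by a CPU-only wrap-up, and compare GPU utilization when tasks start together versus evenly staggered. All numbers below are made-up illustration values, not MilkyWay or Einstein measurements, and the model ignores any slowdown from tasks sharing the GPU.

GPU_PHASE = 28.0   # minutes of GPU-bound work per task (illustrative)
CPU_TAIL = 2.0     # minutes of CPU-only wrap-up per task (illustrative)
TASK_LEN = GPU_PHASE + CPU_TAIL

def util_synced(n):
    # All n tasks start and finish together, so their CPU tails overlap
    # and the GPU idles for the whole tail, no matter how many run.
    return GPU_PHASE / TASK_LEN

def util_staggered(n):
    # n tasks evenly staggered: the GPU only idles if one stagger interval
    # is longer than a single task's GPU phase.
    stagger = TASK_LEN / n
    return min(1.0, GPU_PHASE / stagger)

for n in (1, 2, 4):
    print(n, round(util_synced(n), 3), round(util_staggered(n), 3))
# 1 task: ~0.93 either way; 2 or more staggered tasks keep the GPU near 100%.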

Betreger
Joined: 25 Feb 05
Posts: 992
Credit: 1595512309
RAC: 778506

As I understand it, someone

As I understand it (someone correct me if I'm wrong), AMDs don't have that problem.
