My RTX 3080 can do about 480 Fermi tasks/day, and a 3090 should be able to do around 560. But before I started screwing around with my client files to fake the number of CPUs/GPUs I have (which causes its own set of issues), my main host was capped at 448 tasks/day and running out of work every day.
I looked at the FAQ but it only mentioned CPU tasks.
In other cases, your machine will not be sent more work because it has already been issued with its daily quota of tasks (currently set to 8 tasks per CPU). In most cases for which a machine is running up against daily quota limits, there is a problem with the BOINC installation or Einstein@Home execution on your machine, and it is 'erroring out' the tasks and returning them as unsuccessfully completed. You can see if this is the case by going to 'Your Account' on the Einstein@Home web page and reviewing the results for that machine. The stderr error messages may reveal the problem. If you can't fix it yourself, please post a message on the message boards in the Problems and Bug Reports section.
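To make the quota arithmetic concrete, here is a toy model of a per-device daily allowance. The scaling rule is my assumption, not the project's actual scheduler formula — only the 8-tasks-per-CPU figure comes from the FAQ text above:

```python
# Toy model of a BOINC-style daily task quota. The per-CPU scaling is
# an assumption on my part; only the 8-tasks-per-CPU value is from the
# FAQ excerpt quoted above.

TASKS_PER_CPU = 8  # value quoted in the FAQ

def daily_quota(n_cpus: int, tasks_per_cpu: int = TASKS_PER_CPU) -> int:
    """Upper bound on tasks the scheduler would send in one day."""
    return tasks_per_cpu * n_cpus

# Under this toy model, a host the server believes has 56 logical CPUs
# would top out at 448 tasks/day -- one combination that yields the
# limit mentioned elsewhere in this thread.
print(daily_quota(56))  # 448
```

This is also why spoofing the reported CPU/GPU count raises the quota: the limit scales with how many devices the server thinks the host has.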
I am running my three P102-100s at 0% resource share, which means I get one new task each time a task completes. I average 918 work units in 24 hours and have never run dry. Obviously, if the project goes down for maintenance I will be idle.
What error message, if any, do you get?
[edit] I recently switched to 100% resource share, but when I was running at 0% I always had at least one work unit ready to go.
Before I started spoofing my number of GPUs I was getting this in the event log every day:
> 10/12/2021 10:03:51 PM | Einstein@Home | (reached daily quota of 448 tasks)
That would come with a multi-hour backoff and my GPU switching to a backup project.
Your 3080 should be able to do better than that. My 3080Ti does about 785 tasks per day.
Keith’s 3080 is doing about 680 tasks per day.
Try overclocking the memory. Einstein FGRPB1G tasks are very sensitive to memory latency. Or maybe Windows is a lot slower than Linux.
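For reference, the tasks/day figures in this thread follow directly from per-task run time. A quick conversion, assuming (my assumption) the quoted times are min:sec and the GPU runs one task at a time:

```python
# Back-of-the-envelope conversion between per-task run time and
# tasks/day. Assumes (my assumption) quoted times are min:sec and
# one task runs on the GPU at a time.

def tasks_per_day(runtime: str, concurrent: int = 1) -> int:
    minutes, seconds = map(int, runtime.split(":"))
    per_task = minutes * 60 + seconds
    return 86_400 * concurrent // per_task

print(tasks_per_day("2:50"))  # 508 -- roughly the ~500/day range
print(tasks_per_day("1:50"))  # 785 -- matches the 3080Ti figure above
```

Running two tasks concurrently (`concurrent=2`) can raise throughput even when each individual task takes longer, which is why per-task time alone doesn't tell the whole story.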
_________________________________________________________________________
+500 on the memory overclock doesn't appear to have done anything. It might be a hair faster if I averaged dozens of results, but run times are still in the 2:50-3:00 range.
I suspect it's probably something OS/driver related. I've got a vague recollection from a few generations back that Nvidia did something in the driver to put their cards in a lower power state for compute than for gaming, and that while there was a way to force full clocks, it was complicated, had to be done manually, and wouldn't survive a reboot. At the time I decided it was too much of a hassle to bother with and forgot all the details.
The Nvidia driver compute penalty is much smaller in the latest generations: only a 500 MHz deficit in P2 compared to P0 for the Ampere series, versus as much as 1400 MHz on Pascal.
So the benefit of restoring memory clocks to gaming mode was quite good for that series. I run with a 2000 MHz overclock on my Pascal cards.
I run +1000 on my GDDR6X cards (3080Ti and 3070Ti), an effective +500 overclock from P0 since GDDR6X only gets a 500 MHz compute penalty, running at 19.5 Gbps total.
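The offset arithmetic above can be sketched in one line, assuming (my assumption) the applied offset and the driver's compute-state penalty are expressed on the same MHz scale:

```python
# Sketch of the offset arithmetic described above. Assumes (my
# assumption) the applied overclock offset and the compute-state
# penalty are on the same MHz scale.

def effective_offset(applied_offset_mhz: int, compute_penalty_mhz: int) -> int:
    """Net memory overclock relative to P0 gaming clocks."""
    return applied_offset_mhz - compute_penalty_mhz

# +1000 applied minus the 500 MHz GDDR6X compute penalty leaves a
# +500 effective overclock from P0.
print(effective_offset(1000, 500))  # 500
```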
I would also check that thermals are adequate (both GPU memory and GPU core temps) and that you aren't being heavily throttled.
The older/slower CPU is also likely a factor. My 3080Ti runs on an R9 5950X, so it's really in the best situation, and I do see slight slowdowns on GPUs paired with lesser CPUs: a 2080Ti runs slower on an EPYC 7642 (2.9GHz) than on an EPYC 7402P (3.35GHz), and slower still than when it was on the 5950X (4.4GHz). Also make sure you are not overloading the CPU; you need to keep at LEAST one thread free to handle the GPU tasks, so don't try to run 8 CPU tasks plus GPU tasks on an 8-thread CPU.
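The reserve-a-thread rule above is easy to encode. A small helper (the names and the one-thread-per-GPU-task default are mine, just illustrating the rule of thumb):

```python
# Rule of thumb from the post above: keep at least one CPU thread
# free per running GPU task. Helper names and the default of one
# reserved thread per GPU task are my own illustration.

def max_cpu_tasks(cpu_threads: int, gpu_tasks: int,
                  threads_per_gpu_task: int = 1) -> int:
    """How many CPU tasks to run without starving the GPU feeder threads."""
    free = cpu_threads - gpu_tasks * threads_per_gpu_task
    return max(free, 0)

# 8-thread CPU feeding one GPU task: run at most 7 CPU tasks.
print(max_cpu_tasks(8, 1))  # 7
```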
_________________________________________________________________________
GPU temp is around 75C. I don't see a RAM temperature value in my current version of Afterburner; I'll try upgrading it later.
I'll try pushing the RAM speed more later; I want to go fairly slowly to make sure I've got good baselines at each level to detect potential problems.
Not much I can do about the CPU short term. I'd really like to wait another year before doing the big upgrade, for either second-gen Intel big.LITTLE or second-gen AMD 3D cache, along with DDR5 that's started to shift beyond the early-adopter performance hit.
I'm swinging back towards blaming my older CPU again, not my OS. I ran some GPU tasks with no CPU tasks in the background and got run times of between 2:20 and 2:30, substantially better than the 2:50 to 3:00 I get normally. Fiddling with the total number of CPU tasks running, to see how many I can run without starting to bottleneck my GPUs, has made its way onto my todo list for the next few days.