My queues are full of O2MD1 GPU v1.10 tasks. Times are looking great and CPU usage is light. Running my RX 570s at 3X (three concurrent tasks per GPU), completion times average around 6 to 7 minutes per task, and ~11-13 min per task on my RX 460s, also at 3X. I had been running O2MD1 cpu-only tasks along with FGRBP gpu tasks, which worked fine, but when the O2MD1 gpu v1.10 tasks started showing up, the cpu workload slowed way down, so I aborted the few O2MD1 v1.01 cpu tasks remaining. I'm still receiving the occasional O2AS20-500 v1.09 gpu task, but those seem to run fine alongside the O2MD1 v1.10 tasks on the same GPU.
Ideas are not fixed, nor should they be; we live in model-dependent reality.
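A quick back-of-the-envelope on those numbers, as a rough Python sketch (the per-task averages are the approximate ones quoted above, so treat the results as ballpark figures only):

    # Throughput implied by the reported times, using the approximate
    # averages from the post above (3 concurrent tasks per GPU).
    def tasks_per_hour(concurrency: int, minutes_per_task: float) -> float:
        """Tasks finished per hour at a given concurrency and per-task time."""
        return concurrency * 60.0 / minutes_per_task

    rx570 = tasks_per_hour(3, 6.5)   # ~6-7 min per task at 3X
    rx460 = tasks_per_hour(3, 12.0)  # ~11-13 min per task at 3X
    print(f"RX 570: {rx570:.1f} tasks/h, RX 460: {rx460:.1f} tasks/h")
    # -> RX 570: 27.7 tasks/h, RX 460: 15.0 tasks/h

Running at 3X only pays off if the per-task time stays under three times the 1X time; neither 1X figure is given above, so this only compares the two cards against each other.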
Finally got some O2MD work. Had to check yes on LIBC215, which makes no sense for a Windows system, but, whatever. My work cache is full of O1AS work, so it will be a bit before the box starts crunching and munching the new food. :)
Trying to run down the cache on 2 machines. One is now CPU-only O2MD1 and the other is GPU O2MD1, but they need to clear out the other work units before they start. Maybe by tomorrow. Don't want to abort any work.
I aborted about 120 more tasks (about 370 total, including yesterday) on the overwhelmed host and left only as much as one core will be able to crunch before the deadlines. I don't feel bad about it. Those tasks had been sitting in the queue (for nothing) for only two days. Another host somewhere else will be happy to crunch them, maybe even a host that will accept 'cpu work only'. I'm sure the project staff can easily see whether tasks were manually aborted or whether there was a technical problem while crunching.
I left the Linux hosts to crunch cpu tasks only... but allowed some gpu tasks for the Windows hosts now (+1 cpu task per host... for planetary love). These times, with new app versions coming, are just too thrilling.
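The triage above amounts to a small deadline calculation. A sketch with made-up numbers (the post gives neither runtimes nor deadlines, so every figure below is an assumption):

    # How many queued tasks can one core finish before the deadline?
    # All numbers are illustrative assumptions, not taken from the post.
    from math import floor

    hours_until_deadline = 7 * 24   # assume a one-week deadline
    hours_per_task = 22.0           # assumed CPU runtime per task
    cores_reserved = 1              # the poster kept work for a single core

    keepable = floor(cores_reserved * hours_until_deadline / hours_per_task)
    print(f"Keep at most {keepable} tasks; the rest can't make the deadline.")
    # -> Keep at most 7 tasks; the rest can't make the deadline.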
I see once again the project has overwhelmed my host with work that cannot be finished. My half-day cache setting has netted me 44 days of cpu work and 3.5 days of gpu work. I also see that the O2MD1 tasks have circumvented any request for Gamma Ray work.
Back to aborting tasks.
Keith Myers wrote: I see once again the project has overwhelmed my host with work that cannot be finished.
I have chronicled a similar sad story (on three Ubuntu machines) at WCG after upgrading to BOINC 7.16.3 from 7.14.2. But I have not had the problem here. I don't know why it affects some projects and not others, except maybe that I am on Win7 here. However, I think it corrects itself with enough time and work units. At least that is my current hope.
Keith Myers wrote: ... I also see that the O2MD1 tasks have circumvented any request for Gamma Ray work.
My preferences are set for GPU only, accept beta, and "Yes" for downloading non-preferred tasks, and I've been picking up a few FGRBP1 v1.18 tasks along with GW beta v1.10 and v1.09 tasks. I thought that maybe the GW tasks were in short supply, but I'm still getting them. Ha! While I was typing, my two RX 570s began running all three task flavors simultaneously.
Ideas are not fixed, nor should they be; we live in model-dependent reality.
I thought that setting up a dedicated Einstein host on its own venue would solve the oversupply problem here. If the project would just update their server software so that apps use an APR (average processing rate) instead of the old deprecated task duration correction factor, which is wildly inaccurate, maybe I wouldn't have to constantly abort work.
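For context on the mechanism: the BOINC client multiplies its raw runtime estimate by the host's task duration correction factor (TDCF), and the server fills the cache based on that estimate, so an estimate that is too low overshoots the cache by the same factor. A toy sketch, with numbers chosen purely to illustrate the scale reported above (not measured on any host):

    # Toy model of cache overshoot from a stale duration estimate.
    # All numbers are illustrative, chosen to match the scale reported above.
    cache_days = 0.5                 # "store at least" half a day of work
    estimated_hours_per_task = 0.25  # the client's (wrong) estimate
    actual_hours_per_task = 22.0     # what the new app really takes

    # The client requests enough tasks to fill the cache at the estimated
    # rate (single-core simplification; real work fetch also scales with
    # core count):
    tasks_fetched = cache_days * 24 / estimated_hours_per_task   # 48 tasks

    # At the real runtime, that queue is far more work than half a day:
    real_days_queued = tasks_fetched * actual_hours_per_task / 24
    print(f"{tasks_fetched:.0f} tasks -> {real_days_queued:.0f} days of real work")
    # -> 48 tasks -> 44 days of real work

An APR tracked per app version would converge on the true runtime after a few validated results, which is why switching away from the TDCF should curb the overshoot.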
Strange. Do you have "use LIBC215 apps" enabled in your preferences? Shouldn't need that with a modern distro.
I got just the regular O2AS-500 and O2MD1 apps.
I wonder if it's related to Bernd dropping the pre-LibC 2.15 version entirely.
https://einsteinathome.org/goto/comment/173755