Today around 7:35 CET we generated the last workunits of "O1Spot1Hi", the high-frequency part of the "Continuous Gravitational Wave Galactic Center search" in O1 data. We then "opened" the low-frequency part "O1Spot1Lo" to all CPU models to finish it quickly. As these tasks run much faster on the "fast" hosts that previously got sent "O1Spot1Hi" tasks, we reduced the credit and the flops estimation. Owners of "slow" hosts may therefore see a reduction in credit for the rest of the search and may opt out of it.
BM
Bernd Machenschalk
That just begs the question of what's next? ;)
Can you elaborate on the GFlops vs. credits ratio for the different work units? For example, how close are the nominal GFlops to the actual GFlops? This interests me on several levels, so forgive me for straying off-topic. First, I have noticed that an RX 580 card does around 1100 GFLOP/s for FGRPB1G, assuming the nominal 525,000 GFlops per task is correct. That means the utilization factor is around 18%, given that the theoretical maximum is 6290 GFLOP/s.
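(If it helps, here is the back-of-the-envelope sketch of how I get that 18% figure; the ~480 s per task is an assumed runtime implied by those two numbers, not something I measured.)

task_flops = 525_000e9      # nominal FLOP content of one FGRPB1G task (from the numbers above)
peak_flops = 6_290e9        # theoretical FP32 peak of the RX 580, FLOP/s (from the numbers above)
elapsed_s  = 480            # assumed elapsed time per task, in seconds (not measured)

sustained   = task_flops / elapsed_s    # ~1.09e12 FLOP/s, i.e. roughly 1100 GFLOP/s
utilization = sustained / peak_flops    # ~0.17, i.e. roughly 18%
print(f"sustained: {sustained / 1e9:.0f} GFLOP/s, utilization: {utilization:.0%}")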
Second, my R5 1600 likes the FGRP5 tasks much better than O1spotLo. (It never got any O1spotHi tasks; the CPU was not high-end enough, or there was not enough RAM.) It can do around 1.25 O1spotLo tasks per hour, compared to almost 4 FGRP5 tasks per hour. So the difference was already huge before the lowered credit for O1spotLo, and it is even bigger now that one FGRP5 yields more credit than an O1spotLo. Is the nominal FGRP5 GFlops value much offset from the actual, or is it just that this kind of work suits the CPU better?
Should I continue going for the credit and run FGRP5? I assume yes; it's the only feedback we have, and I guess even more so now that the credit for O1spotLo is reduced.
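(For completeness, here is the credit-per-hour comparison I'm making; the per-task credit values in the sketch are placeholders, only the tasks-per-hour rates are from my host.)

o1spotlo_per_hour = 1.25   # tasks/hour on my R5 1600 (from above)
fgrp5_per_hour    = 4.0    # tasks/hour on my R5 1600 (from above)

credit_per_o1spotlo = 500  # placeholder credit per task; substitute your real values
credit_per_fgrp5    = 600  # placeholder credit per task; substitute your real values

print("O1spotLo:", o1spotlo_per_hour * credit_per_o1spotlo, "credits/hour")
print("FGRP5:   ", fgrp5_per_hour * credit_per_fgrp5, "credits/hour")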
When you say tasks will run much faster on fast computers, can anybody give me an idea of how much of a speed increase there should be? I have a Haswell-E system, and they are taking over 16,000 seconds each, running 16 at a time.
It looks like the low tasks will be exhausted in about 2 more weeks; will there be a fresh batch of GW work immediately available, or will we have another interval with only Fermi tasks available for the CPU?
Speedy wrote: When you say [...]
By design the "Lo" tasks contain half as many "templates" as the "Hi" ones, so they should run half as long. This may not apply exactly to each particular computer out there, though.
BM
DanNeely wrote: It looks like [...]
We are trying to set up another search for Gravitational Waves in time. But as we are currently experiencing technical problems that affect these preparations, and the end of the current run is also likely to fall into the holiday period at the end of the year, there will likely be a couple of days or even a few weeks where we will search for Gamma-Ray pulsars almost exclusively.
BM
Understood, it's not worth screwing up anyone's vacation plans over.
I'm not sure if I am experiencing problems or not, but I have a couple of Lo WUs downloaded now, and they are showing ETAs of 2-3 days. Before, for non-GPU WUs, I have had ETAs somewhere between 12 hours and a full day.
These WUs have just started, so who knows if that time will come down. Just wondering if anyone else is experiencing this, and/or if there is something I should be doing.
The ETA will be tuned by the client once some of them have been completed.
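(In other words: the client keeps a duration correction factor that it adjusts from each completed task. Below is a simplified toy model of that feedback loop, assuming a hypothetical 20 h server-side estimate and ~16 h actual runtimes; it is not BOINC's exact update rule.)

def update_dcf(dcf, estimated_s, actual_s):
    # Toy duration-correction-factor update; not BOINC's exact rule.
    ratio = actual_s / estimated_s
    if ratio > dcf:
        return ratio                      # raise the estimate quickly when tasks run long
    return dcf + 0.1 * (ratio - dcf)      # lower it slowly when tasks finish early

dcf = 3.0                                 # hypothetical inflated starting value
raw_estimate_s = 20 * 3600                # hypothetical server-side per-task estimate (20 h)
for actual_s in [16 * 3600] * 5:          # assume the completed tasks really take ~16 h
    dcf = update_dcf(dcf, raw_estimate_s, actual_s)
    print(f"shown ETA per task: {raw_estimate_s * dcf / 3600:.1f} h")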
Bernd Machenschalk
Well, if that happens I will gladly crunch SETI on my CPUs and increase Einstein's resource share accordingly to maintain a RAC of 500,000, which is my goal with this project.