As far as we can predict, both the ABP and S5GC1 searches will run out of work and data in November, so here is a short update on what we are planning. These plans, however, are not yet finalized in all aspects.
For S5GC this is rather easy: the current analysis ends at 1200 Hz, but data is already available on the servers up to 1500 Hz. So we'll simply set up a new analysis (technically: a new workunit generator) to look into the upper 300 Hz. Although this "run" will have a new "label" and in BOINC terms will be a new "application", we'll use the same code, in particular the same application binaries, just renamed. This run will last 3-4 months. We'll use that time to further develop and decide on options for the run after that.
ABP2 anti-center data will run out in about two weeks. We have some older "center" data that we can and will feed into E@H, but that's all there is from Arecibo. After that we plan to use data from other sources (currently aiming for Parkes), which will require different pre-processing and in particular a new workunit generator; that is what I am currently working on. I hope to have it finished and the data prepared before we run out of Arecibo data, but it may be that E@H will have to run without any radio data for a few days or weeks.
BM
Upcoming searches
And there are still no suitable data for processing from the S6 run?
In total S6 was less
In total, S6 was less sensitive than S5. We have to carefully decide which data to take from S6, and preparing the data for use on the project will take some weeks. We're looking into this for the run after the next one.
BM
RE: In total S6 was less
Why is that? A science choice to get more coverage, problems with the technology, or just bad luck with background noise? (If you'd rather link me to a paper that explains it, that's fine too.)
Short update on what I am
Short update on what I am busy with:
A couple of decisions have been made for the upcoming continuation of S5GC1. It will be named S5GC1HF. S5GC1HF will have a new set of applications, with only very minor changes to the current ones, mainly regarding the printed precision of numbers in the result file. There will be no change to the "science code", but we will use a very recent BOINC version, in particular in the hope of fixing the "signal 11" issue with the current Linux apps (see the thread in "Problems and Bug Reports").
I'm currently working on sorting out a conflict between the autotools installation on our 'Linux compatibility apps build machine' and recent changes to the autoconf macros in lalsuite - very, very technical issues deep in autohell.
Once that is solved, I intend to pick up my work on the new workunit generator for the Radio Pulsar search. When ready, it will enable us to feed data from sources other than Arecibo into the Einstein@Home search.
Of course a couple of issues will show up along the way that I will have to spend some time on, e.g. during tests of S5GC1HF. But the next larger project I intend to work on is basically rewriting the code that reads the SFT data files for the GW search into the application's memory. It was originally written for a somewhat different data format and is not very I/O-efficient with the SFTs we're currently using on Einstein@Home, which is causing more and more trouble.
BM
Hallo Bernd! I find it verry
Hallo Bernd!
I find it very nice to get a look into the near future and the upcoming projects. It gives me the feeling that we are not just dumb volunteers but participants, which I much prefer.
From my fairly careful records of the E@H server status etc. I can predict that S5GC1 will be finished on 22 Nov (+/- 1 day) and ABP2 on 2 Dec (+/- 2 days). Since the crunching speed has been increasing steeply and exponentially for the last 7 months (doubling within 258 days), the actual end dates will tend to come even sooner. If this increase continues, we will cross the 500 TFLOPS mark at the end of January 2011!
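To make the arithmetic behind that extrapolation explicit, here is a minimal sketch in plain Python. Only the 258-day doubling time comes from my fit; the starting throughput is an illustrative placeholder to replace with today's value from the server status page:

import math

# Extrapolate exponential growth of the total crunching speed.
# Only the doubling time is taken from the fit quoted above; the starting
# value is an illustrative placeholder, not an official project number.
current_tflops = 370.0          # plug in today's value from the server status page
doubling_time_days = 258.0      # doubling time from the fit
target_tflops = 500.0

# Exponential growth: P(t) = P0 * 2**(t / T_double)
growth_per_day = math.log(2.0) / doubling_time_days
days_to_target = math.log(target_tflops / current_tflops) / growth_per_day

print("days until %g TFLOPS: %.0f" % (target_tflops, days_to_target))

The result obviously depends on the starting value you plug in and on the doubling time staying constant.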
Kind regards
Martin
RE: If this increase will
I think the large increase from 330 to 372.6 TFLOPS in the last few weeks is due to the shutdown of SETI@home. 500 TFLOPS would be very nice, but probably not realistic, because that would mean an increase of roughly 40% in 3 months.
RE: I can predict, that
In September I'd predicted that ABP would dry up shortly into the New Year, but as you point out, the crunch power has gone up. For ABP the processing rate was then just under 500% of the data-taking rate; now it is over 700%. Which is all to the good ... :-)
@Bernd: the 'HF' in 'S5GC1HF' is 'high frequency', i.e. beyond 1200 Hz going toward 1500 Hz?
@Ver Greeneyes: your best bet is probably the interferometer online logs for Hanford and Livingston. Just follow the read-only access instructions. The short answer is a bit of everything.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: I think the large
As I wrote in my thread, this forecast was derived from a fit over the full last 7 months. The correlation coefficient for this fit was an acceptable R² = 0.960. (Unfortunately I can't show the graph here yet, but I hope to soon.) For the current days the fit function gives an increase of 0.987 TFLOPS/d, whereas during the last 3 days we had an increase of 2.77 TFLOPS/d, so my forecast isn't overly optimistic. The much higher real increase most likely results from the SETI project having been barely working for some weeks.
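For reference, this is roughly how such a fit can be done; a minimal sketch assuming daily (day, TFLOPS) records like the ones I keep from the server status page. The sample numbers below are made up for illustration, not my actual data:

import numpy as np

# Made-up (day, TFLOPS) samples standing in for real server-status records.
days = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 180.0, 210.0])
tflops = np.array([210.0, 228.0, 248.0, 268.0, 291.0, 316.0, 343.0, 372.0])

# Exponential model P(t) = P0 * exp(k*t) becomes a straight line after a log:
# ln(P) = ln(P0) + k*t, so ordinary least squares gives k and ln(P0).
k, ln_p0 = np.polyfit(days, np.log(tflops), 1)

doubling_time = np.log(2.0) / k                  # days per doubling
slope_today = k * np.exp(ln_p0 + k * days[-1])   # dP/dt at the last sample, TFLOPS/day

print("doubling time: %.0f days" % doubling_time)
print("fitted slope today: %.2f TFLOPS/day" % slope_today)

The fitted slope is what I quoted as 0.987 TFLOPS/d; comparing it with the raw increase of the last few days is how the SETI@home effect shows up.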
Kind regards
Martin
RE: @Bernd : the 'HF' in
Yep.
BM
RE: The fitfunction gives
OK, 1 TFLOPS per day doesn't sound impossible. I'm not sure about the real performance of a new Intel Core i7, but it should be at least 50 GFLOPS. So only 20 new PCs are needed for an increase of one TFLOPS, if they run 24/7.
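The same back-of-the-envelope calculation, written out so the assumptions are visible (the 50 GFLOPS per host is only a rough guess, not a measured value):

# Rough estimate of new hosts needed per day to sustain a given growth rate.
# All numbers are guesses for illustration, not measurements.
increase_gflops_per_day = 1000.0   # 1 TFLOPS/day growth in total throughput
gflops_per_host = 50.0             # guessed sustained speed of one new PC
duty_cycle = 1.0                   # 1.0 = crunching 24/7, 0.5 = half the time

hosts_needed = increase_gflops_per_day / (gflops_per_host * duty_cycle)
print("new hosts needed per day: %.0f" % hosts_needed)

With a lower duty cycle or slower machines the number of new hosts needed goes up accordingly.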