Only if those systems were allocated to other projects. Some people concentrate on a single project. For example, my Pentium 4 is attached to 3 projects now, but it is currently only doing Einstein work, as SETI and Cosmology are both set to not get any new work.
RE: Only if those systems
True, but even for single-project hosts there might be some effect from the DCF problem, because some got more work than they could actually handle within the deadline. The projection of run progress and duration on the server status page is based on results created, not on results returned (!), so the initial "over-creation" biases the projection a bit. Currently the WU creation rate is below 0.2% of the total effort per day, i.e. more than 5 days for one percent of progress. I guess this needs some more time to stabilize.
The daily credit output is now around 12 million credits per day, and the total credit for the whole run should be (based on the current credit scheme) around 5.5 billion credits, so at the current pace the run would take about 450 days. That is without Atlas, though, and with the current apps.
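Just to make the arithmetic behind that estimate explicit, here is a tiny sketch in Python; the input numbers are only the rough figures quoted above, not official project data:

    # Rough estimate of the run duration from the figures quoted above.
    total_credits = 5.5e9    # estimated total credit for the whole run
    daily_credits = 12e6     # current daily credit output, ~12 million/day
    print(total_credits / daily_credits)   # ~458 days, i.e. roughly 450

    # Same reasoning for the WU creation rate mentioned above:
    creation_rate = 0.002                  # below 0.2% of the total per day
    print(0.01 / creation_rate)            # ~5 days per one percent of progress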
CU
Bikeman
My two cents to set ATLAS as
My two cents: set ATLAS up as a standalone user.
Moreover, it would be great to reset its statistics (at least to the start of S5R4), just to watch how it climbs through the top-users list from the bottom, and to make comparisons.
About the project completion:
Bikeman, I said somewhere here earlier this summer that we would finish S5R3 before autumn even started, and I won that one :)
Now I predict (provided the run does not have to start over from the beginning because of new optimisations) that the run will finish successfully in 8-9 months.
So, we'll see whose predictions turn out to be accurate enough.
I think the best solution is
I think the best solution is to just transfer ATLAS's credits to my account where I'll hug and pet and squeeze and kiss them daily. Knowing that they have found a good home should make everyone happy.....right?
_________________
*** I used to have a little friend, but now he don't move no more.... Tell me about the credits again, George.
If you like the BOINCstats
If you like the BOINCstats way of showing a user, you might like to check out Atlas at this URL:
BOINCStats page for ATLAS on Einstein
It only updates once a day from the published xml, so there is a time lag, and so far it has only seen one day, but in that one day Atlas has surged ahead of 95% of all Einstein individual users.
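For anyone curious how a stats site digests that daily dump, here is a minimal sketch of reading a BOINC-style user export and ranking one account by total credit. The file name and the XML field names (user, name, total_credit) are assumptions based on the common BOINC stats export format, not necessarily the exact Einstein@Home export:

    # Minimal sketch: rank one account by total credit from a daily XML export.
    # File name and field names are assumed, not the exact E@H export format.
    import gzip
    import xml.etree.ElementTree as ET

    def rank_by_total_credit(path, account_name):
        with gzip.open(path, "rb") as f:
            root = ET.parse(f).getroot()
        users = [(float(u.findtext("total_credit", "0")), u.findtext("name", ""))
                 for u in root.iter("user")]
        users.sort(reverse=True)                 # highest credit first
        for rank, (credit, name) in enumerate(users, start=1):
            if name == account_name:
                return rank, len(users)
        return None, len(users)

    # e.g. rank_by_total_credit("user.xml.gz", "ATLAS")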
I think it is important for
I think it is important for BOINC that big iron like ATLAS runs an @home program. I hope Bruce Allen points this out at the Grenoble meeting; it may teach the people at CERN something.
Tullio
As of BoincSynergy stats it
According to the BoincSynergy stats, it looks like only 1162 processors are working for E@H, and it is currently in 1463rd place by total score and 7th by productivity. Looks promising... But when will all the cores start crunching?
RE: As of BoincSynergy
In comparison to Einstein@Home, Atlas is very general-purpose. It offers high I/O bandwidth, rapid access to more than 1 petabyte of data, fast interprocessor communication, 'reliable' hardware, and other features that E@H lacks. Since there are many other types of gravitational-wave searches besides searches for Continuous Wave sources, our hope is that Atlas is primarily used for those, and that the Atlas cores are occupied running analyses that cannot be done on E@H.
For example, two of the significant activities on Atlas just now are the post-processing of the E@H S5R1 and S5R3 results. In the past week, Holger Pletsch has completed a first pass through the S5R1 results. This work requires a resource like Atlas to carry out.
Cheers,
Bruce Allen
Director, Einstein@Home
RE: RE: As of
Thanks for the explanation, Dr. Allen... Post-processing is definitely a high priority...
Brian
Hi! I wonder: Is maybe
Hi!
I wonder: is preliminary post-processing of the S5R3 data perhaps used to prioritize the work generator for S5R4? We are seeing so many jumps in the frequency of workunits; it's not like S5R3, where we pretty much started with the low frequencies and progressed steadily upwards through the frequency range.
Cheers
Bikeman
RE: I wonder: Is maybe
Random distribution is how it's designed to be. But since S5R2 had already covered the lower frequencies of S5R3, and we used the same data files for both, people's machines already had the lower-frequency data files at the start of S5R3. Together with the "locality scheduling" that tries to minimize additional downloads, this led to the frequency band being eaten up from bottom to top.
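To illustrate what "locality scheduling" means here, a toy sketch: hand a host the workunit for which it already holds the most data files, so extra downloads are minimized. This is only a simplified model of the idea, not the actual BOINC scheduler code, and the file names are made up:

    # Toy model of locality scheduling: prefer work that needs the fewest
    # additional downloads given what the host has already cached.
    def pick_workunit(host_files, pending_workunits):
        # host_files: set of data files cached on the host
        # pending_workunits: list of (workunit_id, set of required data files)
        return min(pending_workunits, key=lambda wu: len(wu[1] - host_files))

    # A host that still holds low-frequency files from S5R2 keeps getting
    # low-frequency work until that part of the band is exhausted.
    host = {"h1_0050.10", "l1_0050.10"}                      # made-up names
    wus = [("wu_low",  {"h1_0050.10", "l1_0050.10"}),
           ("wu_high", {"h1_1230.55", "l1_1230.55"})]
    print(pick_workunit(host, wus)[0])                       # -> wu_low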
BM