I think Einstein is going to step into a big pile of

Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: I personally believe

Message 37942 in response to message 37941

Quote:
I personally believe the developers are doing what is good for the science, and wish them all well. I hope that it will continue to grow and be a very good thing for all our futures.

Thank you. We're certainly *trying* to do what is good for the science. But a lot of effort has gone into ensuring that the average user will continue to receive the same credit/cpu-hour with the new and old applications. We've gone to longer workunits because we don't want our project server to be the bottleneck for adding more users. And the data files are larger because we have more and better data now than we did a year or two ago.

Cheers,
Bruce

Director, Einstein@Home

Beach Bum
Joined: 12 Dec 05
Posts: 68
Credit: 215346
RAC: 0

I have a request

I have a request, Bruce.

Can you take a long look at the size of the units? Maybe scale them back just slightly. Some of the long units are a little out there. I have one machine sitting here with a 33-hour estimated time. It's a P4 that was crunching S4 at 1.5 hours regularly. The same machine has some short units on it with an estimated 2.5 hours. The difference on the large units seems to be much more than what we were told. I know the long units have got to be hitting the dial-up guys really hard.

Personally, I would not be against the short ones maybe being steered towards the dial-up users, and the longer ones towards the broadband guys. That would put long ones on my machines, which I can deal with. But to have a couple of machines fairly close to each other in specs, and have time estimates ranging from 9 hours to 33 hours on long units, seems a bit out of whack to me.

Any thoughts on the differences would be wonderful to hear.

Come Join us at Hawaiian Beach Bums

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 42

RE: It's a P4 that was

Message 37944 in response to message 37943

Quote:
It's a P4 that was crunching S4 at 1.5 hours regularly.


Was this with an Akosf optimized version? If so, you are comparing the wrong applications. You should be comparing the times to the stock S4 applications.

If it was with a stock application, take a look at what Bernd said:

Quote:
To make up for the faster Apps we increased the size of the workunits. The "long" ones will be roughly five times as long as the "long" ones from S4, the "short" ones will be roughly twice as long as their S4 counterparts.

Make sure you are comparing the correct S4 and S5 results. If you only ran the fast S4s on a stock application, and now you run one of the long S5 results while you never had a long S4 before, you cannot compare correctly.
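As a rough illustration of what that scaling means in practice, here is a back-of-the-envelope sketch with made-up baseline runtimes (not project figures):

# Hypothetical stock-app S4 runtimes, in hours -- illustrative numbers only.
s4_short_hours = 3.0
s4_long_hours = 7.0

# Bernd's stated scaling for S5 relative to the S4 counterparts:
s5_short_hours = 2 * s4_short_hours   # short S5 ~ twice a short S4
s5_long_hours = 5 * s4_long_hours     # long S5 ~ five times a long S4

print(f"Expected S5 short: ~{s5_short_hours:.0f} h, S5 long: ~{s5_long_hours:.0f} h")
# -> Expected S5 short: ~6 h, S5 long: ~35 h

On assumptions like these, a 30-plus-hour long S5 result is not out of line with the stated scaling, even though it looks enormous next to an Akos-optimized S4 short.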

Udo
Joined: 19 May 05
Posts: 203
Credit: 8945570
RAC: 0

RE: From what I see, the

Message 37945 in response to message 37934

Quote:

From what I see, the credit claims on S5 work are a fair deal. They grant what the official app computes, plus or minus a few percent.

Because the new official app is 4x more efficient than the S4 official app, expect a drop to 25% of your usual RAC.

If the new official S5 app is 4 times faster, WUs will get only 25% of the CREDITS. But you will process 4 WUs in the same time the S4 app processed 1 WU. So your RAC will stay the same!
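A quick back-of-the-envelope check of that, with made-up numbers (not project figures):

# Illustrative only: say an S4 workunit took 4 hours and claimed 100 credits.
s4_hours, s4_credit = 4.0, 100.0

# With an app that is 4x more efficient, the same work takes a quarter of the
# time and (per the quote above) claims a quarter of the credit.
s5_hours, s5_credit = s4_hours / 4, s4_credit / 4

print(s4_credit / s4_hours)   # 25.0 credits per hour under S4
print(s5_credit / s5_hours)   # 25.0 credits per hour under S5 -- RAC unchanged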

Udo

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 42

RE: If the new official S5

Message 37946 in response to message 37945

Quote:
If the new official S5 app is 4 times faster, WUs will get only 25% of the CREDITS.


The credit calculation has changed.

From Bernd:

"No information from the client, neither runtime nor benchmark nor FLOPs count, is used for granting credit. It's totally based on information on the server side, i.e. on the FLOPs estimate the WU generator writes into the database (based on the number of templates it writes into a Workunit).

The FLOPs estimation is passed to the App, so that with recent enough clients it will also claim the right amount of credit to make this transparent to the users, i.e. that they can see which credit they will be granted when the result is reported.

The latter doesn't work with older or some non-official clients, but that doesn't matter for the granted credit."

So even if you claim less or more than someone else, you will get the credit that is determined by the server and acknowledged by the other host.
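For anyone curious what server-side, FLOPs-based granting looks like, here is a minimal sketch. It assumes the standard BOINC cobblestone scale (200 credits per day of work at 1 GFLOPS); the function and the numbers are illustrative, not Einstein@Home's actual code:

# Standard BOINC cobblestone scale: 200 credits per 86400 s at 1e9 FLOPS,
# i.e. one credit per 4.32e11 floating-point operations.
FLOPS_PER_CREDIT = 86400 * 1e9 / 200

def granted_credit(fpops_estimate):
    # Credit depends only on the server-side FLOPs estimate written by the
    # workunit generator; client runtimes and benchmarks play no part.
    return fpops_estimate / FLOPS_PER_CREDIT

print(granted_credit(2.16e14))   # hypothetical WU of 2.16e14 FLOPs -> 500.0 credits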

Beach Bum
Joined: 12 Dec 05
Posts: 68
Credit: 215346
RAC: 0

RE: Was this with an Akosf

Quote:
Was this with an Akosf optimized version? If so, you are comparing the wrong applications. You should be comparing the times to the stock S4 applications.


Ageless, I am comparing to an Akos S4 build, which is a proper comparison. If you have followed what has gone on with the S5 application, you would know already that a lot of the improvements Akos made for S4 are in S5, so they are a proper comparison. To compare the S4 stock app to S5 would be wrong, as S5 is already optimized, and will be further optimized. I was talking more about the length change we were told to expect for long and short WUs. And from what I am seeing, the long WUs are far longer than what was originally stated. This is compared to the Akos S4.

What I was asking Bruce was to take a look at the long WUs, because they are longer than they had originally said they would be. I have serious time differences between long and short WUs, and serious time differences between long WUs on similar machines. I am talking ranges for long WUs on similar machines of anywhere from 9 hours to as much as 34 hours on the last couple of large units.

Personally I will crunch the 34-hour units, but I am more interested in what effect this is having on the crunchers who are on dial-up, which in the US, if you go by the averages, is close to 70% of the people on the Internet. I fear that with the long WUs being as large as 15 MB or more, many of the dial-up guys are going to be shutting down Einstein as a project and moving on to projects with shorter WUs.

That is why I requested Bruce take a look into the length. I understand they want to use the longer units to give the server breathing room. But with the super-long units, I feel that the project will be losing crunchers, and that many new crunchers are going to steer clear. I know when I first looked at this project, before Akos started the optimizations, the times on the WUs turned me away, and now the times are back to roughly the same. There is a break point that most crunchers want to see on the length of WU crunch time, and for the most part I think that ends at no more than about 5 hours. I believe after 5 hours most crunchers don't feel like the project is doing anything worthwhile, even if in truth they are doing something wonderful.

Come Join us at Hawaiian Beach Bums

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 320176435
RAC: 432619

RE: Ageless I am comparing

Message 37948 in response to message 37947

Quote:

Ageless, I am comparing to an Akos S4 build, which is a proper comparison. If you have followed what has gone on with the S5 application, you would know already that a lot of the improvements Akos made for S4 are in S5, so they are a proper comparison. To compare the S4 stock app to S5 would be wrong, as S5 is already optimized, and will be further optimized. I was talking more about the length change we were told to expect for long and short WUs. And from what I am seeing, the long WUs are far longer than what was originally stated. This is compared to the Akos S4.

What I was asking Bruce was to take a look at the long WUs, because they are longer than they had originally said they would be. I have serious time differences between long and short WUs, and serious time differences between long WUs on similar machines. I am talking ranges for long WUs on similar machines of anywhere from 9 hours to as much as 34 hours on the last couple of large units.

Personally I will crunch the 34-hour units, but I am more interested in what effect this is having on the crunchers who are on dial-up, which in the US, if you go by the averages, is close to 70% of the people on the Internet. I fear that with the long WUs being as large as 15 MB or more, many of the dial-up guys are going to be shutting down Einstein as a project and moving on to projects with shorter WUs.

That is why I requested Bruce take a look into the length. I understand they want to use the longer units to give the server breathing room. But with the super-long units, I feel that the project will be losing crunchers, and that many new crunchers are going to steer clear. I know when I first looked at this project, before Akos started the optimizations, the times on the WUs turned me away, and now the times are back to roughly the same. There is a break point that most crunchers want to see on the length of WU crunch time, and for the most part I think that ends at no more than about 5 hours. I believe after 5 hours most crunchers don't feel like the project is doing anything worthwhile, even if in truth they are doing something wonderful.


Have you included this in your thinking? :

Quote:
There are also two types of data files: short and long. The short data files (l1_XXXX.X) are from the LIGO Livingston Observatory, and are about 4.5MB in size. The long data files (h1_XXXX.X) are from LIGO Hanford and are about 16MB in size. Note: once your computer downloads one of these data files, it should be able to do many workunits for that same file.


( my red highlighting )
Is your concern
- download size? ( clearly a bigger issue for dial-up than broadband )
- calculation time? ( the long 'thumpers' really winding the time out on slower boxes )
- credit determination? ( now unilateral, pre-paid or server-side )
- credit 'worth' changes? ( to align with other projects )
- one or more of the above?

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Beach Bum
Joined: 12 Dec 05
Posts: 68
Credit: 215346
RAC: 0

Mostly download size there

Mostly download size there, Mike.
And for the same reason you noted: the dial-up folks.
I see it coming up in different threads; a lot of dial-up users are getting rather upset.
Personally, I am on a high-speed connection, so the size is not as much of an issue for me.

I am also a bit confused about how they are determining the length of the long units. If the size is supposed to be the same, or very close, then why am I seeing such a serious spread in long WUs between very similar machines? Like I said before, swings from 9 to 34 hours for long WUs. The shorts are swinging from 1 1/4 to 2 1/2 hours.

So, that is the reasoning behind my asking Bruce to look into the length of the WUs:
1. Dial-up transfer problems.
2. Serious swings in length of time.
3. Length of crunch for the long WUs.

Like I said before, I understand the need, with the improvements in the application, to lengthen the WUs. But I am not sure that taking them out this far is as good for the project as one might think. The length may be good for the server load, but I feel it is ultimately going to cost the project many crunchers, due to the dial-up weight of the download and the length of the crunch needed to return the file.

Come Join us at Hawaiian Beach Bums

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 42

You do know that you only

Message 37950 in response to message 37949

You do know that you only download the big initial file? And that this file will be sliced into pieces on your computer, without further downloading needed?

It's not like Seti, for instance, where you download new data for every result. At Einstein you download the H1_XXXX.X_S5R1x_* (15.47MB) file or the L1_XXXX.X_S5R1x_* (4.5MB) file, and it is sliced into little pieces that are crunched.

You do not download a new 4.5 or 15.47MB file after every result upload & report.
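To put rough numbers on that reuse (the results-per-file count below is a made-up illustration, not a project figure):

# One 15.47 MB H1 data file, reused for many results on the same host.
file_mb = 15.47
results_per_file = 20            # illustrative only; actual reuse varies

print(round(file_mb / results_per_file, 2))   # ~0.77 MB of download per result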

Beach Bum
Joined: 12 Dec 05
Posts: 68
Credit: 215346
RAC: 0

RE: You do know that you

Quote:

You do know that you only download the big initial file? And that this file will be sliced into pieces on your computer, without further downloading needed?

It's not like Seti, for instance, where you download new data for every result. At Einstein you download the H1_XXXX.X_S5R1x_* (15.47MB) file or the L1_XXXX.X_S5R1x_* (4.5MB) file, and it is sliced into little pieces that are crunched.

You do not download a new 4.5 or 15.47MB file after every result upload & report.

I understand that, Ageless. It's not me I have the concern for; I am on a high-speed connection. I was just bringing the concern up, as I have seen more posts from dial-up users about the length of the download, which, if I remember right, on a decent connection is around 15 to 20 minutes a megabyte. That would make the large unit a 225 to 300 minute download for a dial-up user, and I would not blame them one bit if they left the project due to the huge time on connection to get work. Figure in if they have maybe 2 or 3 machines trying to get work units: now you have one very serious problem tying up your phone line, if you do not have a second line dedicated.
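Working that arithmetic out explicitly, using the per-megabyte figures quoted above (rough dial-up estimates, not measured rates):

file_mb = 15.0                        # roughly the size of the long H1 data file
for minutes_per_mb in (15, 20):
    print(file_mb * minutes_per_mb)   # 225.0 and 300.0 minutes per download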

Now you have insight into what I was bringing up about size. It's wonderful for the project on server load, but horrible for dial-up users, which, sorry to say, is a large group. Figures in the US put about 70% (give or take) of Internet users still on dial-up.

Come Join us at Hawaiian Beach Bums
