I know that the WUs are now computing another frequency band and therefore the time to finish will change. But as far as I can remember, Bernd wrote something about half the length of an S5R2 WU.
That's why I was wondering whether optimizations are already included in the new app.
Well, the figure is based on S5R3 results only, and many crunchers are still busy with S5R2 work in the pipeline, so this number should come down significantly. A year, perhaps.
CU
H-BE
The new datapacks must really
The new datapacks must really be huge. On my desktop, which I have just reinstalled so it didn't have any datafiles stored from the previous run, I had download times of about a minute or so with 6 MBit downstream on my connection. No problem for me but it might be kinda tough on dial-up users...
RE: I know that the WUs
No optimization in the sense of hand-coded assembly instructions or the like. Shorter WU crunch times are the result of partitioning the work in a different way, in response to popular demand, as many crunchers regarded the S5R2 WUs as being too long. The S5R3 app has a new feature that allows a more fine-tuned partitioning of the work, so this would not have been possible with the S5R2 app (just in case some people wonder why we haven't seen shorter WUs before).
CU
Bikeman
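The partitioning idea can be sketched in a few lines of Python. This is purely illustrative: the function name, band limits, and step widths are made up for the example, not taken from the actual Einstein@Home server code.

```python
# Illustrative sketch (not real Einstein@Home code): splitting a frequency
# search range into work units. A finer step yields more, shorter WUs
# without changing the total amount of work -- the idea behind the
# shorter S5R3 tasks.

def partition(start_hz, end_hz, step_hz):
    """Split [start_hz, end_hz) into sub-bands of width step_hz."""
    bands = []
    f = start_hz
    while f < end_hz:
        bands.append((f, min(f + step_hz, end_hz)))
        f += step_hz
    return bands

# Coarse partition (S5R2-like): few long WUs over a 10 Hz band.
coarse = partition(520.0, 530.0, 2.5)   # 4 work units
# Finer partition (S5R3-like): more, shorter WUs over the same band.
fine = partition(520.0, 530.0, 1.25)    # 8 work units
```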
RE: The new datapacks must
Perhaps you had a slow connection or a slow server, or the connection was otherwise occupied. I have a comparable connection (8 Mbit) and the dat file came in in 2 seconds.
2007-09-22 16:09:36 [Einstein@Home] [file_xfer] Started download of file skygrid_0530Hz_S5R3.dat
2007-09-22 16:09:36 [---] [file_xfer_debug] PERS_FILE_XFER::start_xfer(): URL: http://einstein.aei.mpg.de/download/3f/skygrid_0530Hz_S5R3.dat
2007-09-22 16:09:38 [Einstein@Home] [file_xfer] Finished download of file skygrid_0530Hz_S5R3.dat
2007-09-22 16:09:38 [Einstein@Home] [file_xfer] Throughput 348514 bytes/sec
It's only 2,222KB. :-)
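A quick back-of-the-envelope calculation (illustrative only, ignoring protocol overhead) shows why a file of this size is painless on broadband but would indeed be tough on dial-up:

```python
# Ideal download times for a ~2,222 KB file at different link speeds.
# The sizes and rates are illustrative assumptions, not measured values.

def download_seconds(size_bytes, link_bits_per_sec):
    """Ideal transfer time, ignoring protocol overhead and latency."""
    return size_bytes * 8 / link_bits_per_sec

size = 2222 * 1024          # ~2,222 KB, as in the log excerpt above
print(download_seconds(size, 56_000))      # 56 kbit/s dial-up: ~325 s
print(download_seconds(size, 6_000_000))   # 6 Mbit/s DSL: ~3 s
```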
RE: RE: Only 748 days
Yeah, hopefully, since for the moment it is rising rapidly.
The first new run for me
The first new run for me shows about 18 hrs. in BOINC.
E6600 quad @ 2.5 GHz: 2418 floating point, 5227 integer
E6750 dual @ 3.71 GHz: 3657 floating point, 8105 integer
Ageless: Maybe the connection
Ageless: Maybe the connection was occupied; I wouldn't know, since there are 5 computers in this household... But I also seem to remember something about datafiles coming in "parts", so parts of them could be reused... so maybe you just had to download part of the datafile, whereas I had to get everything new?
RE: Ageless: Maybe the
I think that's about the answer. The datafiles are the same as in S5R2 (and can therefore be re-used), and consist of pairs of files l1_* and h1_*. I think the former contain data from the Livingston observatory and the latter data from Hanford.
Anyway, each individual WU in S5R3 now seems to require more of those files to be present at the same time. I think S5R2 needed about 3 pairs of such files for each WU. I'm not sure about S5R3; you will see this when you inspect your client_state.xml file.
CU
H-B
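One quick way to see which l1_*/h1_* pairs are complete on disk is to glob the project directory. A minimal sketch, assuming the files sit in the project data directory (the path below is an assumption; adjust it to wherever your BOINC installation keeps the Einstein@Home files):

```python
# List the l1_*/h1_* datafile pairs present in the local project directory.
# The directory path is an assumption -- adjust for your own setup.
import glob
import os

def complete_pairs(project_dir):
    """Return the common suffixes of l1_*/h1_* files found in project_dir."""
    l1 = {os.path.basename(p)[3:] for p in glob.glob(os.path.join(project_dir, "l1_*"))}
    h1 = {os.path.basename(p)[3:] for p in glob.glob(os.path.join(project_dir, "h1_*"))}
    return sorted(l1 & h1)  # suffixes present for both detectors

pairs = complete_pairs("/var/lib/boinc/projects/einstein.phys.uwm.edu")  # assumed path
print(len(pairs), "complete l1/h1 pairs")
```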
What exactly do I have to
What exactly do I have to look for?
RE: What exactly do I have
The required datafiles are listed in the command line of the task; you can't miss it in client_state.xml. There are also XML tags that list the required files for each result.
cu
h-b
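Those file references can also be pulled out programmatically. A minimal sketch: the tag names used here (<workunit>, <name>, <file_ref>, <file_name>) follow the usual BOINC client_state.xml layout, but treat them as assumptions and compare against your own file.

```python
# Sketch: map each workunit in client_state.xml to the files it references.
# Tag names are assumptions based on the usual BOINC layout -- verify
# against your own client_state.xml before relying on this.
import xml.etree.ElementTree as ET

def workunit_files(client_state_path):
    """Map each workunit name to the file names it references."""
    root = ET.parse(client_state_path).getroot()
    files = {}
    for wu in root.iter("workunit"):
        name = wu.findtext("name", default="?")
        files[name] = [ref.findtext("file_name") for ref in wu.iter("file_ref")]
    return files
```

If the layout matches, the l1_*/h1_* names this prints should be the same ones that appear in each task's command line.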
Is it normal, that the
Is it normal that the progress bar advances only in steps, and not continuously as in S5R2?