I believe that this is the normal size. Looking at my E@H disk usage, it's at 18 MB or so.
What basically happens is that you get one huge download, and then it is reused for a number of crunching cycles (for lack of a better term).
Are you on DialUp?
Kathryn :o)
Einstein@Home Moderator
To clarify Kathryn's answer:
You download a large chunk of data from the detectors. The workunits are essentially sets of parameters run against this data. I believe the average is about 5 workunits per data file; the highest number I have seen reported is about 14 workunits from one data file. A data file is usually good for about a week, and then you will get a new one.
The project staff have taken steps to reduce the number of data files a given host needs to download. This is one reason you may see the same hosts matched up with yours on several workunits: they all have the same data file. It also sometimes adds lag in granting credit, since the scheduler has to wait for a host that already has the right data file to ask for more work.
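As a rough illustration of that matching step, here is a toy sketch in Python. It is only a guess at the shape of the idea (preferring work that matches a file the host already holds); the file names, function, and data structures are made up and this is not the actual BOINC scheduler code:

```python
# Toy illustration of locality scheduling: prefer to send a host
# workunits that run against a data file it has already downloaded.
# All names here are hypothetical; the real scheduler is far more
# involved.

def pick_workunit(host_files, pending_workunits):
    """host_files: set of data-file names the host already has.
    pending_workunits: list of (workunit_id, data_file) pairs."""
    # First pass: a workunit whose data file the host already holds
    # costs no new large download.
    for wu_id, data_file in pending_workunits:
        if data_file in host_files:
            return wu_id, data_file
    # Otherwise fall back to any pending workunit, which forces the
    # host to fetch a new (large) data file.
    return pending_workunits[0] if pending_workunits else None

host_files = {"l1_0453.5"}  # hypothetical data-file name
pending = [("wu_101", "h1_0621.0"), ("wu_102", "l1_0453.5")]
print(pick_workunit(host_files, pending))  # -> ('wu_102', 'l1_0453.5')
```

This also shows why credit granting can lag: if no host holding the matching file asks for work, the workunit sits in the pending list.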
BOINC WIKI
BOINCing since 2002/12/8
Since I heard about how Einstein distributes workunits, I did a little digging to find out just how many I've been getting out of each data file. So far the most from a single data file is 62, or 70 from another file if you count 16 ghosts. The lowest is one (from the first file I got), and the current file is at 16 and still going. The average so far is 27 per file.
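If anyone else wants to tally their own history, the bookkeeping is trivial; here is a sketch in Python (the counts in the list are placeholders, not my full history, which covers more files than the ones mentioned above):

```python
# Workunits completed per data file. Placeholder counts for
# illustration only; substitute your own tally.
workunits_per_file = [62, 70, 1, 16]

print("max per file:", max(workunits_per_file))
print("min per file:", min(workunits_per_file))
print("average per file: %.1f"
      % (sum(workunits_per_file) / len(workunits_per_file)))
```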
It seems that with a fast machine turning results in ahead of everyone else in the group, and starting at the beginning of the file, you should be able to work through just about all the workunits in the file (85?), and more if it switches configuration files partway through.