I wonder ... isn't BOINC supposed to evaluate the performance levels of different application versions on a particular computer and include that info in scheduler requests?
Generally, BOINC has this functionality. However, it does not work very well, because of the factors I mentioned earlier. It is implemented (more or less) like this: the server periodically sends out different versions and checks how long each computation took. The problem is that this does not take into account what the computer was doing at the time, e.g. whether it was merely playing an mp3 (essentially idle) or rendering a large 3D scene with its memory bandwidth 100% saturated. Because of that, the server can reach the erroneous conclusion that an application which is in fact slower is the better one. A further disadvantage is that the test itself can be very expensive in terms of computing power. If the project sends relatively short tasks (a few hours), then even a fairly large performance difference between versions is not a problem. However, for projects such as CPDN, where a single task takes 600-700 hours, sending an application that is 5-10% slower just to run a test is a huge waste of resources. If you look deeper, the problem is nontrivial.
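Just to make the pitfall concrete, here is a minimal sketch (not actual BOINC server code; all the numbers are hypothetical) of why comparing raw measured runtimes can pick the wrong version when one sample was taken on a loaded host:

```python
# Illustrative sketch: naive version selection by average measured runtime.
# Hypothetical numbers; this is NOT how the BOINC scheduler is implemented,
# only a toy demonstration of the confounding effect described above.

def mean(xs):
    return sum(xs) / len(xs)

# Version A is genuinely faster per task, but measured wall-clock times
# include whatever else the host was doing during the run.
runtimes_a = [10.0, 10.2, 31.0]   # third sample: host was rendering, memory bus saturated
runtimes_b = [12.1, 12.0, 11.9]   # version B happened to be sampled on an idle host

# Naive rule: pick the version with the lower average measured runtime.
picked = "A" if mean(runtimes_a) < mean(runtimes_b) else "B"
print(picked)  # prints "B" - the truly slower version wins the comparison
```

One polluted sample is enough to flip the decision, which is exactly why such a test needs either load-aware measurement or many more samples, and why the extra samples are so costly for long-running tasks.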
My numbers are taken from the E@H database, averaged over >2000 computers and roughly 50000 tasks, and these are consistent with the result of a single previous experiment under tightly controlled conditions.
Not only did I average over the whole database; I also did experiments under tightly controlled conditions (AFAIK tighter than yours, i.e. without BOINC and with the exact same workunit/task). And at least at that time, the results were indeed consistent.
For clarity, I also used the same tasks, except that I ran them through BOINC.
Quote:
A fundamental problem that we can't solve is that neither the average participant nor the average machine does actually exist, therefore what's good for the average participant need not be good for a particular one.
The majority of Linux hosts attached to E@H are still from (LSC) clusters (like Atlas) and almost exclusively based on Intel's Core 2 architecture. This, btw, is the reason I chose such a cluster node for the comparison test I did at that time. So I doubt that the average timing has changed that much since back then.
That explains a lot. And since most of the work is done by these nodes, probably not much has changed.
Quote:
Anyway, I'll put a re-evaluation of that issue on my todo-list; these days I'm too busy with other things.
There is no need to hurry, as the current state will probably persist for a relatively long time.
Guys thank you all for your replies :)