Here's some more info for those discussing credit issues. What follows is Eric K's first post to a topic which is being discussed. I think they're working on making credit equal across all projects. This is important so projects don't feel they are in competition with each other for users based upon credit; instead it should be based upon the projects' merits. Note: this has been posted to seti as well.
tony
[boinc_dev] Need for a cross project credit standard....
Eric J Korpela to BOINC, Jun 29
Given the recent SETI@home credit/optimization flame wars and what is happening with Einstein's recent apps, I think we need to come up with a cross project credit standard. The original idea in BOINC was to give credit for floating point operations, integer operations, disk space used, etc. The primary problem with the method originally used to grant credit was that it was based upon benchmarks that gave performance that was unrealistically high for the real applications. When SETI@home transitioned to granting credit based upon floating point operations, we had to stick in a multiplier of about 3.5X the floating point operation count in order to match the credit given based upon the benchmarks.
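To make the two claiming schemes concrete, here is a minimal sketch. The Cobblestone-style constant, the benchmark figure, and the workunit's FLOP count are all invented for illustration; only the roughly 3.5X gap is taken from the post.

```python
# Sketch of the two credit-claiming schemes described above. The reference
# rate and all workunit numbers are invented, not BOINC's exact values.

SECONDS_PER_DAY = 86400
CREDITS_PER_GFLOPS_DAY = 100  # assumed Cobblestone-style reference rate

def benchmark_credit(cpu_seconds, whetstone_gflops):
    """Old scheme: claim credit from CPU time scaled by a synthetic benchmark."""
    return cpu_seconds / SECONDS_PER_DAY * whetstone_gflops * CREDITS_PER_GFLOPS_DAY

def flop_credit(fpops, multiplier=1.0):
    """New scheme: claim credit from counted floating point operations."""
    fpops_per_credit = 1e9 * SECONDS_PER_DAY / CREDITS_PER_GFLOPS_DAY
    return fpops * multiplier / fpops_per_credit

# Hypothetical workunit: the benchmark overstates the app's real throughput,
# so raw FLOP counting claims far less than the old scheme did.
old_claim = benchmark_credit(cpu_seconds=10 * 3600, whetstone_gflops=2.0)
new_claim = flop_credit(fpops=2.06e13)
print(old_claim / new_claim)  # ~3.5, the gap the multiplier papers over
```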
This caused an uproar for a variety of reasons. The first was that the new version of SETI@home was more highly optimized than the old version, so people running optimized versions couldn't claim 5X the credit of people running unoptimized versions. The second was that "fast machines" saw a decrease in credit granted per hour (primarily because a 3GHz machine doesn't typically have memory that is 50% faster than a 2GHz machine).
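A toy model of why the per-hour drop happens: the synthetic benchmark scales with clock speed, while a partly memory-bound application does not. Every number below is invented; it only illustrates the shape of the argument.

```python
# Toy model: benchmark speed scales with clock, but a partly memory-bound
# application does not, so the faster box earns less credit per hour.

def app_seconds(clock_ghz, mem_gbps, compute_cycles=5e12, bytes_moved=6e12):
    """Runtime = compute time (scales with clock) + memory time (doesn't)."""
    return compute_cycles / (clock_ghz * 1e9) + bytes_moved / (mem_gbps * 1e9)

slow = app_seconds(clock_ghz=2.0, mem_gbps=3.0)  # the 2 GHz machine
fast = app_seconds(clock_ghz=3.0, mem_gbps=3.5)  # 3 GHz, memory only ~17% faster
print(slow / fast)  # ~1.33x, well under the 1.5x a clock-scaled benchmark claims
```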
Einstein@home has also recently started using FLOP counting in its applications. Perhaps in response to the furor on the SETI@home forums, E@H grants significantly higher credit per hour (2X-4X) in its FLOP counting version than in its older versions.
I worry that this lack of a standard is going to result in "credit inflation" where, in response to actions by other projects and due to complaints from a vocal minority of volunteers, projects are forced into granting an ever-increasing number of credits per hour.
I think we need to develop a credit standard in order to prevent this. This credit standard should 1) be measurable on a common machine (possibly on every machine), 2) be publicly available, 3) specify means of comparing applications, and 4) reward optimization.
Lacking other suggestions, I propose to create a "standard" (non-vectorized, Ooura FFT, gcc -O2 compiled) version of SETI@home enhanced with a "standard" workunit as a credit standard. Based upon the run-time and floating point operations of this standard, other projects can calibrate their floating point credit for a (non-vectorized, gcc -O2 compiled) version of their own application.
If anyone has a better idea, or would prefer that there be no standard, speak up.
Eric
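The calibration Eric describes is simple arithmetic. A sketch with made-up numbers for the standard workunit's credit grant and FLOP count:

```python
# Sketch of the proposed calibration, with invented numbers: fix a credit
# rate from the standard app on a standard workunit, then apply that rate
# to another project's own (non-vectorized, gcc -O2) FLOP counts.

REFERENCE_CREDIT = 30.0    # assumed grant for the standard SETI@home workunit
REFERENCE_FPOPS = 2.5e13   # assumed FLOPs counted by the standard build

CREDIT_PER_FPOP = REFERENCE_CREDIT / REFERENCE_FPOPS

def calibrated_credit(project_fpops):
    """Credit for another project's workunit at the common calibrated rate."""
    return project_fpops * CREDIT_PER_FPOP

print(calibrated_credit(4.0e13))  # a bigger workunit earns 48 credits
```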
Thx Tony for the heads-up, that sounds good!
Greetings from Sänger
In my opinion this is a completely useless discussion. It is impossible to bring so many different CPUs under the same hat. Those different CPUs use very different hardware, cache-optimized architectures and other nice gimmicks to do their work.
Then you have to consider all the different BOINC projects. Those also have very different algorithms for their specific work. And they have to write differently optimized clients to let all the different kinds of computers crunch in a reasonable time, so nobody feels left out.
The credit system was originally meant to compare the work done by different users. I don't understand why people whine about not getting enough credits, or not fair enough credits. This credit system is therefore in my opinion useless, because some users can't differentiate between cyber-credits and real money.
I vote for completely disabling the credit system in the BOINC client!
von Halenbach
I pretty much agree with von Halenbach. Different projects do different kinds of processing, so some will outperform others on certain processors. So true cross-project, cross-platform equivalence is unachievable, surely?
I think we will get an attempt at one anyway, in which case each project should calibrate its credits according to the platform which performs best for that project. This gives users an incentive to run the project that their machine is best at, while leaving them free to disregard credits and participate in whatever projects they feel are important.
Maybe users should be able to opt-out of credit collection to avoid being influenced by it. You know, like an alcoholic should never take a drink.
I think most users just want a perception of making a contribution to the progress of a project. Most importantly, they want to see the pile of work go down. They want to see finished work leaving their machine, hence short WUs are more popular. Then they want to see that the pile of done work is getting bigger, and that it includes a pile with their name on it, also getting bigger.
A strength of E@H is that there are distinct piles of work which we can see shrinking as they are processed. This is preferable to the approach of some other open-ended projects, where a trough of work is constantly topped up, so there is no clear perception of progress.
The BOINC community is a large and disparate one. It consists of people, like some who have posted here, interested in the main only in the science done, and, at the other extreme, people interested only in the credits, in order to improve their own or their team's ranking. Between the two lies perhaps the majority. For a project as large as this to work, it must engage all contributors, whatever their motive, in order to get the science done. I therefore urge all contributors to try to see the argument from the other's perspective, so that we don't drive yet more resources from the fold. dAVE.
Quote:
I think we will get an attempt at one anyway, in which case each project should calibrate its credits according to the platform which performs best for that project. This gives users an incentive to run the project that their machine is best at, while leaving them free to disregard credits and participate in whatever projects they feel are important.
Considering that Windows makes up almost 90% of the total boxes in BOINC, even if a Mac or Linux build were to give much higher relative credit, calibrating the standard rate to one of them would be counterproductive. If doing a preferred-platform system, win/x86 has to win by default. While there's not a good way to get processor breakdowns from the stats, SSE1/2 are probably fairly ubiquitous at this point, and if the app is compiled in instruction-set-specific versions, they'd be the best versions to pick.
That said, the way I'd recommend doing cross-platform credit scaling would be to improve the relative accuracy of the benchmark system and tune base credit towards a DCF (duration correction factor) of 1.
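Background for the DCF remark: the client keeps a duration correction factor, a running ratio of actual to estimated runtime, so a DCF near 1 means the estimates (and the benchmarks behind them) are realistic. A minimal sketch, assuming a simple smoothed update rather than the client's exact rule:

```python
# Duration correction factor sketch: nudge the DCF toward the ratio
# observed on each completed result. The update rule is an assumption.

def update_dcf(dcf, estimated_seconds, actual_seconds, weight=0.1):
    """Move the DCF a step toward the latest actual/estimated ratio."""
    observed = actual_seconds / estimated_seconds
    return dcf + weight * (observed - dcf)

dcf = 1.0
for est, act in [(3600, 5400), (3600, 5100), (3600, 5300)]:
    dcf = update_dcf(dcf, est, act)
print(dcf)  # drifts above 1: this host runs slower than its benchmarks suggest
```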
Quote:
Considering that Windows makes up almost 90% of the total boxes in BOINC, even if a Mac or Linux build were to give much higher relative credit, calibrating the standard rate to one of them would be counterproductive. If doing a preferred-platform system, win/x86 has to win by default. While there's not a good way to get processor breakdowns from the stats, SSE1/2 are probably fairly ubiquitous at this point, and if the app is compiled in instruction-set-specific versions, they'd be the best versions to pick.
Good point.
Quote:
That said, the way I'd recommend doing cross-platform credit scaling would be to improve the relative accuracy of the benchmark system and tune base credit towards a DCF of 1.
Can you have a benchmark which is accurate across all platforms for all projects? Maybe there is a case for a custom benchmark for each project, one which is representative of the kind of processing that project does.
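One reading of the per-project benchmark idea: each project ships a small kernel representative of its own inner loop and times it on the host. The kernel and its ops-per-call figure below are invented; this is a sketch of the concept, not anything in BOINC:

```python
import time

def representative_kernel(n=200_000):
    """Stand-in for a project-specific inner loop (~3 flops per iteration)."""
    acc, x = 0.0, 1.0000001
    for _ in range(n):
        acc += x * x
        x = (acc % 1.0) + 1.0
    return acc

def ops_per_second(kernel, ops_per_call, repeats=5):
    """Best-of-N timing of the kernel, reported as operations per second."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        kernel()
        best = min(best, time.perf_counter() - start)
    return ops_per_call / best

# A project would then grant credit at a rate derived from this measurement.
print(ops_per_second(representative_kernel, ops_per_call=3 * 200_000))
```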
Quote:
Maybe there is a case for a custom benchmark for each project, one which is representative of the kind of processing that project does.
When I first read this I thought: problem. If there are different benchmarks for each project, then the cred-heads will just go to the project that gives their machine the best credit/hour rating.
On reflection, this is not a problem; it is a bonus.
If people do that, then they will be donating resources where they are most effective. That means that, across BOINC as a whole, more processing will be done by the same pool of machines than with credits evened out across the board.
~~gravywavy
Forgive me for being so simple, but why not let the results of each machine, the time spent running BOINC, and the average turnaround time for each error-free result be the major factors in determining credit granted?
For instance, a person who has a slower system but runs it 24x7 might be granted more credit than a person who has the latest, greatest system with 2x the performance of the slower one but runs BOINC only 12x7. Keeping in mind incentives for optimization.
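Taken literally, the proposal is easy to write down. The weights and units here are invented; the point is only that uptime and turnaround, not raw speed, drive the grant:

```python
# A literal reading of the uptime/turnaround proposal, with invented weights.

def proposed_credit(hours_per_week, results_per_week, avg_turnaround_hours,
                    uptime_weight=1.0, turnaround_weight=50.0):
    """Credit from uptime plus a bonus for fast, error-free turnaround."""
    uptime_part = uptime_weight * hours_per_week
    turnaround_part = turnaround_weight * results_per_week / avg_turnaround_hours
    return uptime_part + turnaround_part

slow_24x7 = proposed_credit(hours_per_week=168, results_per_week=10,
                            avg_turnaround_hours=16)
fast_12x7 = proposed_credit(hours_per_week=84, results_per_week=10,
                            avg_turnaround_hours=8)
print(slow_24x7, fast_12x7)  # 199.25 vs 146.5: the slower 24x7 host wins
```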
Quote:
Forgive me for being so simple, but why not let the results of each machine, the time spent running BOINC, and the average turnaround time for each error-free result be the major factors in determining credit granted? For instance, a person who has a slower system but runs it 24x7 might be granted more credit than a person who has the latest, greatest system with 2x the performance of the slower one but runs BOINC only 12x7.
The crediting system is simple: credit is determined by how many FLOPs the work takes. Since a given unit sent to two computers is exactly the same, it gets the same credit, because the FLOPs will be the same. I think that is very simple, and everyone gets fair credit.
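This is the counterpoint to the uptime idea: with FLOP counting, the grant attaches to the workunit rather than to the machine. A minimal sketch, reusing the illustrative rate from earlier:

```python
# With FLOP counting, credit attaches to the workunit, not the machine.

FPOPS_PER_CREDIT = 8.64e11  # assumed Cobblestone-style rate, not official

def workunit_credit(counted_fpops):
    """Same workunit, same counted FLOPs, same credit on any host."""
    return counted_fpops / FPOPS_PER_CREDIT

for host, runtime_hours in [("fast box", 3.0), ("slow box", 9.0)]:
    # Runtime differs by 3x; the grant is identical (~23.8 credits).
    print(host, runtime_hours, workunit_credit(2.06e13))
```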