However, there is little point in arguing the case there, since even if the people running the BOINC side of the project agree with you in principle, the ones with the final say aren't listening. Thus the odds are low that anything will be changed.
If nearly everybody detached their hosts in protest of the waste, they would be forced to either:
1) eliminate the waste and woo contributors back, or
2) mothball the collider
They would do neither ... the project LHC@home is nice but not essential to the operation of the LHC.
And so, even if a substantial number of people left the project, to the point where it affected the work throughput, the most likely option would simply be to stop the project entirely.
Which is moot, as even if everyone else left and I were the only one there, I am sure I would not mind having the work... :)
You and Brian have both said that the whole purpose of the IR > MQ policy (initial replication greater than minimum quorum) is to get the work units to quorum faster. You've also said the work being done at LHC@home is non-essential. It makes very little sense to me to waste CPU time getting work units for non-essential work to quorum faster, especially when shortening the deadline and preventing hosts from caching huge numbers of tasks they won't touch for days will get the units to quorum just as fast and with zero waste.
How long does it take to reconfigure the BOINC server to send only 3 tasks instead of 5 for the initial replication? How long does it take to adjust the deadline from 7 days to 3 days? And how many programmers does it require?
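For what it's worth, both settings are ordinary per-workunit parameters in the project's workunit input template. A minimal sketch, assuming a stock BOINC server (the tag names are standard BOINC template tags; the values are the ones proposed above):

    <workunit>
        <!-- results needed for a quorum -->
        <min_quorum>3</min_quorum>
        <!-- initial replication: copies sent out up front -->
        <target_nresults>3</target_nresults>
        <!-- deadline in seconds: 3 days = 259200 -->
        <delay_bound>259200</delay_bound>
    </workunit>

So, at least in principle, it is a template edit rather than a programming job.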
This is not the real problem. The problem is that Big Science projects like LHC and the SETI Institute do not trust the distributed volunteer computing model. Only LIGO does, and we must be grateful to our Dr. Allen for this, and also for mentioning us volunteers in an article recently published in a refereed journal, Physical Review D. I always had the impression that LHC@home is barely tolerated by LHC, and it resides not at CERN but in London. All the rest is a consequence of this.
Tullio
Quote:
The problem is that Big Science projects ... do not trust the distributed volunteer computing model. Only LIGO does, and we must be grateful to our Dr. Allen for this, and also for mentioning us volunteers in an article recently published in a refereed journal, Physical Review D.
This is a point which has started to worry me more and more in recent months, and especially since the scrambled release of CUDA. I voiced my concerns recently at SETI, in response to a one-man bandwagon similar to this thread.
Looking at the Publications by BOINC projects page, which is up to date with the Einstein paper cited by Tullio, I don't feel the serious, peer-reviewed scientific output of BOINC yet matches the potential and claims made for the platform. With the honourable exception of projects like CPDN and Einstein, I fear that, as I expressed it at SETI,
Quote:
That leaves BOINC with exclusive rights to the weird, freaky, and broke end of the market.
I would be fascinated to know what appraisal the serious scientific community makes of BOINC - but I suspect I may not read it on public message boards!
I agree with you, Richard. I had written a paper on BOINC for the professional journal of the Italian electrical engineering association, and the distinguished university professors on its editorial board were rather suspicious of my mentioning SETI. I had to rewrite it to make it more palatable.
Tullio
The results of QMC@home were also published in a refereed journal, the Physical Chemistry Journal. It's also a project in my schedule.
QMC
Quote:
This is not the real problem. The problem is that Big Science projects like LHC and the SETI Institute do not trust the distributed volunteer computing model. Only LIGO does, and we must be grateful to our Dr. Allen for this, and also for mentioning us volunteers in an article recently published in a refereed journal, Physical Review D. I always had the impression that LHC@home is barely tolerated by LHC, and it resides not at CERN but in London. All the rest is a consequence of this.
Tullio
I would argue that this is a little bit of a consequence of the somewhat rowdy nature of the environment in which the work is done. Though in the past a jumbled and messy laboratory was sometimes the nature of the beast, these days the more disciplined the approach, the more acceptable the outcome... a clean lab indicates, if nothing else, a disciplined approach.
I have argued in the past, and will likely argue in the future, that the fact that we don't validate the machines that we use contributes to the impression that this is not a serious endeavor. We rely on simple statistical comparisons, hope, and prayers that unstable and unreliable machines do not contaminate the result pool.
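To make that concrete, here is a minimal sketch (illustrative Python, not BOINC's actual validator code) of the kind of "simple statistical comparison" a redundant-computing validator performs: results agree if they match within a tolerance, and a canonical result needs a quorum of agreeing copies.

    def results_agree(a, b, rel_tol=1e-5):
        # Element-wise fuzzy match; the tolerance absorbs platform float noise.
        return len(a) == len(b) and all(
            abs(x - y) <= rel_tol * max(abs(x), abs(y), 1e-30)
            for x, y in zip(a, b)
        )

    def find_canonical(results, min_quorum=3):
        # Pick the first result that enough of the returned copies agree with.
        for candidate in results:
            if sum(results_agree(candidate, r) for r in results) >= min_quorum:
                return candidate
        return None  # no quorum yet: the server would issue more replicas

Note what such a check cannot do: it says nothing about whether any individual host is healthy, only whether enough of them happened to agree.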
The end results are probably all good, but it is hard to overcome the impression that this is more of a frat party than a serious computing effort ... and people whining about not wanting to validate or test their machines because that would detract from work ... as if you really should be trusting your life to untrained technicians using uncalibrated instruments to make repairs to that reactor three miles from your house ... does nothing to overcome that impression.
And if any of those peers reviewing the papers have tried BOINC, well, in many ways it does not leave one with a good impression ... neither do most of the boards when people try to get help ...
I confess that I am not an expert in scientific computing, but I try to learn whenever it is possible. I have followed most of the talks at the recent BOINC symposium in Grenoble, and I admire the way Dr. David Anderson approaches this difficult subject. Would this paper answer some of the questions you pose? Celebrating diversity in volunteer computing
Tullio
Quote:
I confess that I am not an expert in scientific computing, but I try to learn whenever it is possible. I have followed most of the talks at the recent BOINC symposium in Grenoble, and I admire the way Dr. David Anderson approaches this difficult subject. Would this paper answer some of the questions you pose? Celebrating diversity in volunteer computing
Tullio
It is a good paper ... and does discuss SOME of the issues ... and completely ignores others.
I spent 20 plus years in the world of electronics maintenance and one of the lessons there is that you can only trust systems that are properly maintained and tested. And when testing, you have to use known and calibrated instruments or you are just playing with yourself.
Not to pick on him/her, but there is a participant over in Rosetta who is having trouble running Rosetta models ... one of the first questions asked was whether they were overclocking, and turning off the OC was suggested as one of the first steps ... well, they have tried everything BUT turning off the OC (as of the last time I checked, a couple of hours ago), yet OC is one of the primary causes of a system being unable to run models on one project or another even though it can run other projects successfully.
Full disclosure: I am not a fan of OC, I am a fan of rock solid science and I cannot see how pushing machines to the edge of instability advances high quality science. For a project like SaH it is a lot of "who cares" because they are doing exploration, not science ... here, at Einstein, well, to my mind it is more troublesome because we are trying to do real science here ...
Rock solid science is about careful planning, rigorous implementation, and disciplined effort. Not about whose RAC is highest. I have long advocated that we test and validate the machines that are being used along with redundant computing. Because real science is about the accuracy of the results and the care with which we produce those results. It is not about being the most efficient or the fastest or the highest RAC.
When they put the Hubble up there, they did not start doing science until they had calibrated the instrument and gone through a precise test regime to prove the instrument was working as intended. Granted, they took some killer pictures to sate the public, but those were of little scientific value AT THAT TIME, because the instrument was not calibrated. It would be like testing the boiling and freezing points of substances with your fingers instead of a calibrated thermometer.
More disclosure: I was trained as a technician, then as an engineer, so my approach would not be like that of a scientist. In many ways, watching the scientists in these projects, I would argue that they need a solid dose of real engineering ...
Anyway, the BOINC community in the projects and the developers give short shrift to so many factors that the combination makes this effort seem more like a junior high school science class than a serious scientific endeavor. Which may be why you had to rewrite the paper.
Well, there is a point on which you are wrong. When they sent up the Hubble telescope, they found that the primary mirror had been ground to the wrong shape, and had to send a costly servicing mission to correct this fault. The firm producing the optical system had failed to do a very simple test, pioneered by the Italian astronomer Vasco Ronchi at the Istituto Nazionale di Ottica in Florence. This failure was very embarrassing for NASA and cost a good deal of money. You will have understood that I am an amateur astronomer and follow all NASA missions. Cheers.
Tullio
I am not an expert in statistical analysis and scientific computing either, but the wisdom of years tells me that when honest and sincere questions and comments get nothing but silence or very evasive answers from admins, something is wrong.
In addition to ~25% of crunched LHC@home results being pure waste, Paul D. Buck, in a thread by the same title at Rosetta, suggests that LHC@home isn't even doing work CERN needs for operation of the collider. If that's true, then it's fairly obvious to me that it has been true for over a year. Looking back over the threads and certain remarks by what appear to be privileged insiders, there were some vague hints. It now seems obvious that LHC@home is simply using LHC's good name and reputation, and the fact that the project once did some essential work for CERN/LHC a long time ago, to quietly slide some other non-collider-related work onto the computers of unsuspecting donors who have purposely been kept ignorant. Indeed there have been hints, but there has been no open and frank report or paper on exactly what is going on, who is doing it, and what purpose it all serves.
Quote:
I have long advocated that we test and validate the machines that are being used along with redundant computing. Because real science is about the accuracy of the results and the care with which we produce those results.
Redundant computing seems good enough to me - I'm sure all the projects that get back a particularly interesting result re-run the relevant WUs on their own hardware.
How could you possibly reliably test & validate the BOINC hosts anyway?
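One conceivable answer, sketched below, is a known-answer (calibration) test: keep a library of workunits whose correct outputs were precomputed on trusted hardware, send one to each host periodically, and flag hosts whose answers drift outside tolerance. This is purely hypothetical; stock BOINC has no such mechanism, and the names and values are made up for illustration.

    REFERENCE_OUTPUTS = {
        # wu_name -> output precomputed on a known-good machine (made-up values)
        "calib_wu_001": [3.141592653589793, 2.718281828459045],
    }

    def host_passes_calibration(wu_name, host_output, rel_tol=1e-6):
        # A host passes if its output matches the trusted reference within tolerance.
        reference = REFERENCE_OUTPUTS[wu_name]
        return len(host_output) == len(reference) and all(
            abs(h - r) <= rel_tol * abs(r)
            for h, r in zip(host_output, reference)
        )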