I have no idea who Tetsuji is.
For any project such as this to succeed, all clients *should* be returning exactly the same results.
Open source code is a bad idea, but I agree that certain compiler options can improve calculation performance. This, IMO, should be up to the project managers.
What nonsense.
In the first place, not all science will return exactly the same results from different platforms - it is not always expected nor required - each project can set its own validation limits and permitted deviation. By your logic, you'd better ban all overclocked or home-built systems as well, since they could be unstable (inaccurate).
Second, since when did 'Open Source' become a bad idea? There is only one reason for any code being non-open-source: protection of commercial revenue.
You can't even argue that making Einstein open source would aid cheating (gaining undue credit) as that aspect is controlled by the BOINC framework which is already open source and free for modification by anyone.
There is only one reason anyone would go to the (non-trivial) effort of studying the code and re-compiling; to improve the code and its performance, which can only be a benefit to the science.
Well said. What I see is most user-unfriendly:
Long run time, short deadline.
With optimized clients there would be more results than maybe the servers could handle.
Some people think maybe more science is better for the project.
My fault.
panic mode off...
greetz Mike
The results returned by the optimized clients are the same as the results produced by the official client. The only differences would be due to round-off error, and those errors already cause minor differences on different platforms/OSes.
Besides, if the clients were returning bad results, the users of those clients would not get any credit and the client would not be used anyway.
E@H is by far the most stable and reliable BOINC project. The Admins are to be congratulated on its success. IMHO THEY are in the best position to determine if releasing THEIR code will really benefit E@H. I hardly think they are hoarding it for some nebulous commercial gain, as has been suggested. Can the code be improved? Possibly and probably, but it is their call. I'm sure they have weighed all the pros and cons and have decided that any additional science they may gain from open-sourcing may not be worth the potential problems... and they are in the best position to assess that potential.
If it ain't broke, don't fix it... Anon.
Quote:
The results returned by the optimized clients are the same as the results produced by the official client. The only differences would be due to round-off error, and those errors already cause minor differences on different platforms/OSes.
Besides, if the clients were returning bad results, the users of those clients would not get any credit and the client would not be used anyway.
Well, are they the same or different...?
To answer your question, I am going to quote MikeSW17
Quote:
In the first place, not all science will return exactly the same results from different platforms - it is not always expected nor required - each project can set its own validation limits and permitted deviation.
Are what the same or different?
If you are talking about different binaries returning results that may not match because of optimizations, that is correct. The changes made to effect optimization may cause the different applications to return results sufficiently different that it impacts the science.
As far as making the source code public, there are more reasons than simple commercial pay-off that can cause a project to decide not to release the source. Intellectual property and licensing terms come swiftly to mind.
I hope this answers your question.
Jim
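MikeSW17's point about per-project "validation limits and permitted deviation" can be sketched in a few lines. This is a hypothetical illustration only - the `results_agree` function and the tolerance value are invented here for the sketch, not the actual BOINC validator API:

```python
# Hypothetical sketch of tolerance-based validation; not BOINC's real code.
# Each project picks its own permitted deviation.
TOLERANCE = 1e-6

def results_agree(a, b, tol=TOLERANCE):
    """Two hosts' result vectors 'agree' if every value is within tol."""
    if len(a) != len(b):
        return False
    return all(abs(x - y) <= tol for x, y in zip(a, b))

# An optimized client that differs only by round-off still validates:
print(results_agree([1.0, 2.0], [1.0 + 1e-9, 2.0 - 1e-9]))  # True
# A client whose changes actually alter the science does not:
print(results_agree([1.0, 2.0], [1.0, 2.5]))                # False
```

Under a scheme like this, round-off differences between platforms pass, while genuinely divergent results fail and earn no credit.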
Sort of...you said they were the same, unless they are different.....I was being nit-picky.
I'm having a problem understanding how they want to have same OS and same CPU and then don't care how far off the results are.
It's like they would rather have a sniper that can shoot a tight grouping but always hits the anus, instead of a loose grouping that always gets a head or heart shot.
Quote:
If you are talking about different binaries returning results that may not match because of optimizations, that is correct. The changes made to effect optimization may cause the different applications to return results sufficiently different that it impacts the science.
Where is the "test" for a difference in results? You've got people compiling changes that have to go to a select knowledgeable group to find out how to do it. If "newbies" can make your science, more power...
Quote:
As far as making the source code public, there are more reasons than simple commercial pay-off that can cause a project to decide not to release the source. Intellectual property and licensing terms come swiftly to mind.
I'm for not having "public source code" available for the applications.
Quote:
I'm having a problem understanding how they want to have same OS and same CPU and then don't care how far off the results are.
They're not trying to get the same CPU and OS. The results are spread across everyone because the differences are barely noticeable in the raw results. I don't think it will matter if we get the location of a pulsar a few centimeters off.
There could be problems with the auto-update function due to the different version numbers in the optimized clients, but I don't know much about that.
Quote:
Sort of... you said they were the same, unless they are different... I was being nit-picky.
I'm having a problem understanding how they want to have same OS and same CPU and then don't care how far off the results are.
Depending on the project, you have it exactly right. For SETI@Home, the exact power and frequency values in the returned result are not that critical. The more important aspect is that we have a correct count, which should be pretty much the same for all returns. But if one host says the pulse is 4.3 inches high, another says 4.2, and a third says 4.1 ... well, who cares ...
If I say there are 20 pulses and you say 10, well, we are not in the same ballpark, and even SETI@Home will barf on this ...
In the case of, say, LHC@Home, and I suspect Einstein@Home, the tolerances are much tighter. It may be hard to understand why iterative processes have trouble here, but the reason is fairly simple (and I am surprised you did not take me to task for not having this example in the Wiki ...).
If we have an operation that returns a minor error of, say, 0.0000000000001 ... we can ignore it, right? Maybe, maybe not ...
If we have a loop
[pre]
for i := 1 to 10,000 do {
x := (x * y) + 0.0000000000001
}
[/pre]
No problem ... as yet ... but ...
[pre]
for i := 1 to 1,000,000,000,000 do {
x := (x * y) + 0.0000000000001
}
[/pre]
Wrong...
Worse, if I am doing that and you are doing:
[pre]
for i := 1 to 1,000,000,000,000 do {
x := (x * y) - 0.0000000000001
}
[/pre]
Well, we both may as well have stayed in bed.
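Paul's loop example can be run directly. The sketch below (Python rather than the pseudocode above, with the starting values chosen arbitrarily for illustration) shows how a per-iteration error of 0.0000000000001 is invisible over 10,000 iterations, while extrapolating to a trillion iterations pushes two hosts with opposite-sign round-off well apart:

```python
def run(iterations, eps):
    # x := (x * y) + eps on every pass; y is 1.0 here, so only the error drifts
    x, y = 1.0, 1.0
    for _ in range(iterations):
        x = (x * y) + eps
    return x

host_a = run(10_000, +1e-13)   # my machine rounds up ...
host_b = run(10_000, -1e-13)   # ... yours rounds down
print(abs(host_a - host_b))    # roughly 2e-9 after 10,000 passes: harmless

# Running 1e12 passes directly is impractical, but the drift grows linearly
# in this sketch, so the gap would reach about 2e12 * 1e-13 = 0.2 --
# far outside any tight tolerance, and the two results no longer validate.
```

The point is not the particular numbers but the shape of the failure: an error too small to see in any single iteration becomes decisive once the iteration count is large enough.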