RE: NO! Fast client means
I think we have a failure to communicate here. As a matter of taxonomy, the code which actually computes the science is the science application. The client is sort of the stage manager and umpire, deciding what to do next, and sending back information which determines the credit awarded. There are no "fast clients" (save for the potential of the project dedication on hyperthreaded machines--not yet actually realized). So the influence on science production is all in the social engineering end--to wit, the influence on volunteers deciding where and whether to dedicate their resources.
Not to be sneezed at, that social engineering bit, but different than raw science horsepower.
RE: I think we have a
I think you got that bit right..........
Join the #1 Aussie Alliance on Einstein
RE: ... There are no "fast
I think "all" is a bit of an exaggeration. One of the client's responsibilities is to feed crunchers with an optimal amount of work: enough to keep them busy whenever they have time on a CPU, but not so much that they miss deadlines. This is easy enough to manage on dedicated systems running projects with consistent WU sizes, but becomes something of a balancing act on intermittently available hosts or where completion times vary widely. Accurate and relevant benchmarking is an important factor in the process.
In short, maximizing throughput depends in part on the effectiveness of the client at regulating workloads.
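The balancing act described above can be sketched numerically. This is only an illustration of the reasoning, not actual BOINC client code; all names and numbers here are invented for the example, and the assumption is simply that the client estimates completion time from a benchmark score scaled by CPU availability:

```python
# Simplified sketch of client-side work-fetch reasoning (not real BOINC code).
# All function names and figures below are illustrative assumptions.

def estimated_runtime_s(wu_fpops: float, benchmark_flops: float,
                        cpu_availability: float) -> float:
    """Wall-clock estimate for one workunit, scaled by how often the CPU is free."""
    return wu_fpops / (benchmark_flops * cpu_availability)

def max_queue_size(deadline_s: float, wu_fpops: float,
                   benchmark_flops: float, cpu_availability: float) -> int:
    """Largest number of workunits that can still finish before the deadline."""
    per_wu = estimated_runtime_s(wu_fpops, benchmark_flops, cpu_availability)
    return int(deadline_s // per_wu)

# A host that is only free half the time can safely queue only half as much work:
dedicated = max_queue_size(7 * 86400, 3e13, 1e9, cpu_availability=1.0)
part_time = max_queue_size(7 * 86400, 3e13, 1e9, cpu_availability=0.5)
```

This is why intermittently available hosts are the hard case: the same deadline supports a much smaller queue, and an inaccurate benchmark shifts the estimate in either direction, toward idle CPUs or toward missed deadlines.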
RE: RE: ... There are no
I agree that avoiding idle time, and avoiding CPU time wasted on results which eventually miss deadline, are both goals that serve to increase science output. Thus I must agree that "all" was somewhat of an exaggeration on my part.
The link I posted for Trux's
The link I posted for Trux's calibrating client does not work today--the server does not like the trailing slash I included. This link works:
trux calibrating client
As a bit of an update, the calibration on my four CPUs seems to be nearing the stable point, with fluctuations both up and down and little drift for several days.
At the moment my P4 EE HT is being upclaimed by slightly over a factor of two, along with my slower Pentium III. My faster Pentium III and my Pentium M are being upclaimed by a bit over 1.6.
Most of the machines seem to be approximating fair claim as judged by the quorums they find themselves in.
However, the reference here is the baseline trux CalClient benchmark. I believe it runs the original benchmark code, but with considerably better efficiency, perhaps largely from compiler settings. Most likely the upclaim compared to the distributed client is stabilizing at something more like four--the approximate speedup of S-39L compared to the distributed science application.
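For concreteness, the factors discussed here compose by multiplication. The sketch below is only an illustration of that arithmetic with the approximate figures from this post; the variable names and the exact benchmark speedup are assumptions, not CalClient's actual code or measured values:

```python
# Illustrative composition of the upclaim factors discussed above
# (approximate figures; not trux's actual implementation).

upclaim_vs_calclient_baseline = 2.0  # observed calibration factor on the P4 EE
calclient_benchmark_speedup = 2.0    # assumed efficiency gain of CalClient's
                                     # benchmark build over the stock benchmark

# Relative to the stock distributed client, the two factors multiply:
upclaim_vs_stock_client = (upclaim_vs_calclient_baseline
                           * calclient_benchmark_speedup)
```

That product of roughly four is consistent with the S-39L speedup mentioned above, which is why a calibration factor of "only" two against the CalClient baseline can still correspond to a much larger upclaim against the distributed client.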
On one other point, Trux's rate of release of upgrades has slowed greatly. While project CPU affinity (which would let me restrict SETI to one CPU and thus greatly raise efficiency on my P4 EE) is slated for the next release, it is not out even in beta form yet.
I like to try the BOINC
I like to try the BOINC client from trux.
Unfortunately I don't know where to get the official client, Version 5.3.12.
The current development client is 5.3.29.
As far as I understand, I need to install the official client first, and then copy the Trux files over the official ones.
RE: I like to try the BOINC
The official BOINC versions are here. (Also - you seem to have transposed some numbers in the version designation - the latest official version is 5.2.13.)
BTW: I just switched to the Trux client from an optimized v5.2.13 client a couple of days ago and it works very well - just as described by Archae86 earlier in this thread.
Hi... i observed the
Hi...
I observed the behavior of Truxoft's client as described by Archae86, with one addition:
If you suspend network activity AND you have results ready AND you must stop/start BOINC, perhaps several times, it applies the correction factor to these ready results every time you do so.
This seems to be a bug.
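If the correction factor really is re-applied on every restart, the claim inflates geometrically. The toy model below illustrates the suspected behavior only; it is an assumption about what the client does (including that the factor is multiplicative), not verified against its source:

```python
# Toy model of the suspected bug: re-applying a multiplicative correction
# factor to already-corrected pending results on every client restart.
# Purely illustrative; not taken from the actual client code.

def claim_after_restarts(base_claim: float, factor: float, restarts: int) -> float:
    claim = base_claim
    for _ in range(restarts):
        claim *= factor  # correct behavior would apply the factor only once
    return claim

# With a factor of 1.6, three restarts inflate a 100-credit claim to about 409.6:
inflated = claim_after_restarts(100.0, 1.6, restarts=3)
```

Even a modest factor compounds quickly, which would make pending results claimed across several restarts noticeably over-credited.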
Greetings from Germany
Caesar1
I tried the calibrating
I tried the calibrating client, but it caused invalid results for another project with an unoptimised science app, so I'm back to the standard BOINC client.
RE: I tried the calibrating
Sorry, but I think that's impossible. The client does nothing at all to the results themselves. If you have some invalid results, the problem lies elsewhere.