I hope that this is a good thread to ask this question.
Would it be possible for BoincStudio to have a setting for the number of CPUs to run a project on? For instance, on a dual/dual machine (two CPUs, each with two cores), have a setting that allows only one instance of a project to run at a time, thus splitting the cores between projects? For example:
1 Instance of Leiden
2 Instances of Einstein
1 Instance of QMC
Or perhaps...
1 Instance of Leiden
1 Instance of Einstein
1 Instance of QMC
1 Instance of Rosetta
... and when any project runs out of work, switch that instance to the backup project until more work becomes available?
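To make this concrete, here is a minimal sketch of the allocation rule I have in mind, in Python pseudocode of my own - the project names, the allocate_cores helper, and its parameters are purely illustrative, not anything BoincStudio actually exposes:

# Hypothetical sketch: split the cores between projects using a per-project
# instance cap, and hand any leftover cores to a backup project when a capped
# project has no work.
def allocate_cores(num_cores, instance_caps, has_work, backup_project):
    assignment = []
    for project, cap in instance_caps.items():
        if not has_work.get(project, False):
            continue  # project is dry; its cores fall through to the backup
        free = num_cores - len(assignment)
        assignment.extend([project] * min(cap, free))
    # Whatever is left over goes to the backup project.
    assignment.extend([backup_project] * (num_cores - len(assignment)))
    return assignment

# Example: a dual/dual box (4 cores) with QMC temporarily out of work.
caps = {"Leiden": 1, "Einstein": 2, "QMC": 1}
work = {"Leiden": True, "Einstein": True, "QMC": False}
print(allocate_cores(4, caps, work, backup_project="Rosetta"))
# -> ['Leiden', 'Einstein', 'Einstein', 'Rosetta']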
The benefit is that, with careful selection, one can choose projects that "harmonize" - a combination of projects and instance counts that minimizes conflicts over CPU and memory resources, for instance - so as to maximize production.
Example - two instances of E@H using optimized clients on a dual-CPU system appear to be less efficient, in terms of credit per hour, than one instance of E@H and one instance of Rosetta. I ask because this is a capability I miss from the trux client, where it was possible to do this by manipulating the "Priority Projects" parameter: if the number of "Priority Projects" was the same as the number of available CPUs (real, virtual, or cores), one project ran on each as long as work was available for all of them.
Thanks for considering it - BoincStudio seems like a very nice program, and this type of functionality would only make it better.
I had already made this request to DocMaboul, but without success for the moment. I will forward your request to him.
Thanks for your help!
The method of grouping computers helps a lot.
But it would be desirable to have a way of choosing the necessary number of computers in the list - if, for example, I wish to suspend a project on any 20 of 100 machines, and those 20 computers are divided among different groups that also contain other computers.
Thanks to the author of BoincStudio for such a remarkable program.
For the updated U41.05 application...
10/06/2006 13:07: Unknown Einstein@Home client md5: dcb5d4a6311b6b4b4ad494c661f096c0 for file projects/einstein.phys.uwm.edu/albert_4.37_windows_intelx86.exe
10/06/2006 13:07: Adding dcb5d4a6311b6b4b4ad494c661f096c0 for file projects/einstein.phys.uwm.edu/albert_4.37_windows_intelx86.exe. Please report
me-[at]-rescam.org
13/06/2006 11:27: Adding c50ba626b25b30f46a0917b954cfbd9a for file projects/boinc.bakerlab.org_rosetta/rosetta_5.22_windows_intelx86.exe. Please report it.
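For anyone wanting to report these, a quick way to double-check which hash a local copy of the executable actually has - just a generic Python sketch, nothing specific to BS, with the path taken from the message above:

# Compute the MD5 of a downloaded project executable so it can be compared
# against the hash that BS logs.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(md5_of("projects/boinc.bakerlab.org_rosetta/rosetta_5.22_windows_intelx86.exe"))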
Haven't seen a report on this message:
Account on project 'Einstein@Home' with email hash '6f517c5e6926f8bcd6b6baf4ad5ab940' does not exist.
I had it for convenience, but the authenticator will be missing.
The message is displayed only once, when adding a new project (E@H, LHC@H, CPDN).
BS seems to work without a problem, but I'm curious what the message actually means and whether more crunchers have seen it.
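If I had to guess, the "email hash" is simply an MD5 of the e-mail address used when attaching, so the message would mean the project has no account registered under that address yet - but that is only my guess at the mechanism, not anything documented. A sketch of that interpretation (the address is a placeholder):

# Guess only: treat the "email hash" as md5(lower-cased e-mail address).
import hashlib

email = "someone@example.com"  # placeholder, not a real account
print(hashlib.md5(email.strip().lower().encode()).hexdigest())
# compare the output against the hash quoted in the message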
I am Homer of Borg. Prepare to be ...ooooh donuts!
I see that with every project I've attached to using Boinc Studio.
me-[at]-rescam.org
The S5 WU from Einstein:
16/06/2006 19:26: Adding 471a77198ab4b32f4fe7d73c6693cb74 for file projects/einstein.phys.uwm.edu/einstein_S5R1_4.02_windows_intelx86.exe. Please report it.
I am Homer of Borg. Prepare to be ...ooooh donuts!
I *think* I found the reason for the excessive reads/writes by BOINC.exe.
I can't test BS on a farm, so I was wondering about the behaviour of BOINC with a large number of WUs on a single machine. SZTAKI is a good candidate for such a test, so I fed my dual-core a bunch of them.
When an updated client_state.xml is about to be written, client_state_next.xml is created first. I would call it a "forward backup", in a similar vein to client_state_prev.xml (or you can find an analogy to forecast vs. hindcast on CPDN). Anyway, once client_state.xml reaches 2 MB, every change to the file (e.g. when a WU finishes) gets multiplied by that size, and the totals can easily grow to LARGE numbers. Before it's done with the SZTAKI WUs, it would be no surprise if boinc.exe racks up 400 GB of I/O on this file, plus a considerable amount of CPU time spent handling it.
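A back-of-envelope illustration of that multiplication effect - the numbers below are my own rough guesses, just to show the scale:

# Rough estimate with made-up but plausible numbers: every state update
# rewrites the whole ~2 MB file (plus the _next/_prev copies).
state_size_mb = 2        # observed size of client_state.xml
copies_per_update = 3    # client_state_next.xml, client_state.xml, client_state_prev.xml (assumed)
updates = 70_000         # guessed number of state updates over the SZTAKI batch

total_gb = state_size_mb * copies_per_update * updates / 1024
print(f"~{total_gb:.0f} GB written just for state files")  # ~410 GB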
It is an extreme test which shouldn't be common - not every machine has hundreds of WUs in its cache. But it was helpful in finding why the disk I/O of the BOINC core is so excessive; no wonder I was not seeing this on another dual-core that only does large CPDN WUs.
To get back on topic - it would be interesting to watch how fast BS will adjust the crunch duration of the new S5 WUs...
Damn, too late to edit.
Just found that BURP doesn't obey "write at most every", so fast WUs get their progress written almost constantly and end up fighting with the BOINC core for CPU cycles.
So this is definitely not something to blame BS for...
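For comparison, the throttling that "write at most every" is supposed to provide would look roughly like this - a generic sketch of the behaviour, not BOINC's or BURP's actual code:

# Generic "write at most every N seconds" throttle; this is the behaviour
# BURP appears to be skipping, not its actual implementation.
import time

class ProgressWriter:
    def __init__(self, min_interval_s=60):
        self.min_interval_s = min_interval_s
        self.last_write = 0.0

    def maybe_write(self, fraction_done):
        now = time.monotonic()
        if now - self.last_write >= self.min_interval_s:
            # ... write checkpoint / progress file here ...
            self.last_write = now
            return True
        return False  # too soon since the last write; skip the disk I/O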