Almost 1000 participants in the challenge so far, close to 950 TFLOPS, and our infrastructure is holding up nicely. Sweet!
Go crunchers, go!
Oliver
Einstein@Home Project
That makes me wonder (only half jokingly) whether the BOINC projects collectively should set up an emergency "roadside assistance" fund, and buy an emergency server which could be loaned to any project which finds itself temporarily overwhelmed by the sudden arrival of a horde of challengers or refugees from a project facing a crisis?
I see that the MilkyWay project is being challenged "to crunch as much as possible before ... the world ends on the 21st of December this year." - they could probably use a hand.
Congrats to the project admins; we know we're hitting the servers hard and they are holding up very nicely...
Our small team is having a lot of fun in the challenge, congrats to the organizers too!
Our fleet will remain on target, trying to hold our positions, with all hands firing at will!...
To be honest, not all guns... on most of our battleships we've only been able to deploy about 50% of the real power of their main guns (GTX 590/690s)... it seems they need a special configuration on the E@H client side, but we still don't know how to unleash their full firepower... sniff!
I'm afraid that having a server sitting somewhere, even with a fair internet connection, doesn't help much. The data flow from and to that server has to be managed somehow, and the server needs to be integrated into the project infrastructure. That infrastructure also needs to be flexible enough that one could easily do this; during the past couple of months we migrated the Hannover part of Einstein@Home to something more powerful and flexible (while under full load). In addition, for the BRP search, what looks like just a new download server actually means up to 16 additional ATLAS nodes doing the pre-processing (de-dispersion) for workunit production.
BM
Yes, Bernd, I think all is well here, unless every member adds a $500 video card to each host.
But if you ever need to plant a server somewhere, I have 5 acres on the east side of the Olympic mountains, so it would run cool too.
-Samson-
If you want to run more than one task at a time, go to your preferences and change the GPU utilization factor: for two tasks choose 0.5, for three 0.3. Does this help you?
Greetings, Christoph
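For anyone finding this thread later: on newer BOINC clients (7.x) you can get a similar effect locally, without touching the web preferences, by placing an app_config.xml in the Einstein@Home project directory. This is only a sketch; the application name `einsteinbinary_BRP4` is my assumption and should be checked against the `<app>` entries in your own client_state.xml:

```xml
<!-- app_config.xml in the Einstein@Home project directory.
     gpu_usage 0.5 means two tasks share one GPU, equivalent to a
     GPU utilization factor of 0.5 in the web preferences.
     The app name below is an assumption; verify it in client_state.xml. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, tell the client to re-read its config files from the BOINC Manager, or restart the client, for the change to take effect.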
Christoph,
For my nVidia GeForce cards it runs best at 0.5 running tasks X2
-Samson-
Thanks, I know that setting and already use it, but on the 590/690 (they are really two GPUs on one card) the bottleneck, I believe, is in PCIe 3.0/memory access: the CPU simply can't keep up with the GPUs' hunger, especially on a 2x690 host... even with the CPU totally free of load, GPU usage stays at 50 to 70% tops. For example, on SETI/Collatz we could easily reach 98% GPU usage. But we will keep searching for a way to do that...
Thanks, really appreciate your post.
We try to do our best, but 29 against 1788 is not an easy task...
Still, we will stay in the race until the last second...
I expected that you knew this setting already, but from your post it was not clear whether you were already using it.
Greetings, Christoph