Sounds good. Jedirock, I am
Sounds good. Jedirock, I am indeed using the Nvidia drivers; the vanilla driver just performed too poorly and didn't give me anything in the way of desktop effects and eye candy ;-)
I think I read somewhere that
I think I read somewhere that either BOINC or E@h will automatically check for a GPU and then go to work, meaning that crunchers who join a project will not have to do anything different if one or more of their projects are doing GPU work. Is this true?
RE: I think I read
Not quite yet. There's development work in progress, and the pre-release BOINC clients in testing (v6.4.5 is the current Beta) will do what you say, but it isn't ready for general release yet.
I hesitate to mention that SETI@home have just started a full-scale public Beta of GPU processing, given your views on SETI expressed in the science area recently ( ! ), but you may like to read up on the rather more mature testing at GPUGRID.net.
RE: Not quite yet. There's
Apparently the powers that be decided to release BOINC 6.4.5 to the public today. It's no longer a beta version but the official version now — well, for Windows and Linux anyway.
BOINC blog
NVidia Press Release
RE: NVidia Press
/me gets popcorn to go watch all those people who attach to Seti to crunch... well, what with exactly? ;-)
RE: I hesitate to mention
I do, by the way, give credit to SETI for beginning the DC trend. In fact, I was one of those who joined SETI at the beginning specifically because I saw the potential for DC early on and was proven correct, even though I didn't at all like the project itself. And I am very much aware that there are positive benefits to what SETI is doing, though they are indirect. That said, it is the direct project itself that I disagree with. Perhaps I should not be so condescending toward something that has had many benefits. I do try to be less ethnocentric in my scientific reasoning.
As to another question, how will the new GPU version of E@h work? I.e., if I give X resource share to E@h, will the GPU get that same resource share, or will I have options on what gets crunched on the GPU?
RE: I think I read
If you have the latest Nvidia drivers installed and the correct version of BOINC, it will detect the GPU. I have only run two tasks for GPU Grid, and since I cannot get into the web site (the two e-mails I have sent to the two addresses I could find have so far gone unanswered), I do not know yet whether those two tasks were successful.
The announcement of the GPU change in BOINC says SaH uses the GPU, but as far as I know the SaH app is still in Beta test, so really there is only one project that is capable of using the GPU.
Those who subscribe to the BOINC Dev list can see the questions I have asked about what is really going on with the GPU tasks. Again, since I cannot get into GPU Grid I cannot ask my questions there ... and I am tired of the flak on SaH, so I have not bothered to subject myself to the SaH forums again ...
But the two tasks that ran "successfully" (as far as I know) took about 10 hours on my new i7 (4 cores with HT, giving 8 virtual CPUs) at 0.90 CPU use with one GPU ...
which gave rise to my question: if I added a second GPU card (the motherboard I have can take up to 4 or 5 at various bus speeds), will that make things faster? And will it be 1.8 CPU and 2 GPU, or 0.90 CPU with 2 GPU?
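A toy sketch may make the question concrete. This is not BOINC's actual scheduler code — the numbers are just the per-task reservation reported above (0.90 CPU plus 1 GPU per task), and the greedy picker is invented for illustration — but it shows why the second GPU, not the spare CPU, is the binding constraint:

```python
# Toy model (not BOINC's real scheduler): each task reserves a CPU
# fraction and a whole number of GPUs; the client keeps starting
# tasks as long as both budgets hold out.

def runnable(tasks, ncpus, ngpus):
    """Greedily pick tasks that fit the remaining CPU/GPU budget."""
    cpu_left, gpu_left = float(ncpus), ngpus
    picked = []
    for cpu_frac, gpus in tasks:
        if cpu_frac <= cpu_left and gpus <= gpu_left:
            picked.append((cpu_frac, gpus))
            cpu_left -= cpu_frac
            gpu_left -= gpus
    return picked

# Two GPU tasks, each wanting 0.90 CPU + 1 GPU, on an 8-CPU host:
tasks = [(0.90, 1), (0.90, 1)]
print(runnable(tasks, ncpus=8, ngpus=1))  # one GPU: only one task fits
print(runnable(tasks, ncpus=8, ngpus=2))  # two GPUs: both run, ~1.8 CPU total
```

On this model, adding a second card gives 1.8 CPU and 2 GPU in use, since each GPU task carries its own CPU reservation with it.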
Anyway, it is a new technology and there are more questions than answers yet. I will say that it would be a relatively cheap way to add processing capability to a box, but the projects that can take advantage of the capability are limited, which means you will have to ask whether you really want to do more for those projects or not.
One more note: this is going to "break" the resource-share model that is implicit in BOINC's design, in that the computing capability of the machine is no longer a uniform pool of resources to apply to tasks. My expectation, though, is that the BOINC developers will ignore the implications and do nothing to address the issues this change raises.
RE: [... Various good early
And that is a good part of the 'excitement' and a good opportunity for nVidia to push their sales.
Indeed so, and that is very reasonably to be expected.
The whole development is developer-resource-limited, so effort is directed to the most urgent or most visible issues first. Provided the scheduler works 'well enough', then fine. Provided the credits awarded keep people happy enough, then accuracy and precision are not worth chasing for what should be just a bit of fun.
No doubt there will be some deep thought done for how to redesign the scheduler to more fairly balance disparate resource issues such as RAM and HDD space, network bandwidth, RAM/cache bandwidth, floating point vs integer performance, multiprocessor/coprocessor/GPU advantage, reliability, and whatever else. AND, then how do you apportion scoring/credit for those various features?!
Positive suggestions and example (pseudo) code are welcome.
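In that spirit, here is one hypothetical shape such a scheme could take: fold the disparate resources into a single capability score by normalising each against a reference host. Everything here is invented for illustration — the reference values, the weights, and the linear combination are not anything BOINC defines:

```python
# Hypothetical capability score: rate each resource relative to a
# reference host, then take a weighted sum. Weights and reference
# figures are made up for the sketch.

REFERENCE = {"flops": 2e9, "iops": 4e9, "ram_gb": 2.0, "net_mbps": 10.0}
WEIGHTS   = {"flops": 0.5, "iops": 0.2, "ram_gb": 0.1, "net_mbps": 0.2}

def capability(host):
    """Weighted sum of resource ratios versus the reference host."""
    return sum(WEIGHTS[k] * host[k] / REFERENCE[k] for k in WEIGHTS)

# A host with twice the reference FLOPS and RAM, otherwise average:
host = {"flops": 4e9, "iops": 4e9, "ram_gb": 4.0, "net_mbps": 10.0}
print(round(capability(host), 2))  # 1.6 (reference host scores 1.0)
```

The hard part, of course, is choosing the weights — which is exactly the scoring/credit apportionment question raised above, in miniature.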
Note that some time ago there were some ideas for using a hierarchy of calibrated hosts to improve the accuracy of the credits awarded. A good idea, but too elaborate for anyone to bother putting the code together to do it. The alternative — including absolute ops counts and a bit of server-side code to check against outlandish claims — was implemented quickly and works well enough.
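The server-side check can be caricatured in a few lines. This is only the rough idea as described — compare each claim against the other claims for the same workunit and flag the outlandish ones — with the median and the tolerance factor invented here, not BOINC's actual validator logic:

```python
# Sketch of outlandish-claim filtering: grant the median of the
# claims for a workunit, and flag any claim far above it. The
# tolerance factor is an arbitrary choice for the sketch.
import statistics

def granted_credit(claims, tolerance=2.0):
    """Return (granted, outliers): median claim, plus claims > tolerance x median."""
    med = statistics.median(claims)
    outliers = [c for c in claims if c > tolerance * med]
    return med, outliers

granted, flagged = granted_credit([42.0, 45.0, 44.0, 400.0])
print(granted)  # 44.5 - the median of the four claims
print(flagged)  # [400.0] - the outlandish claim to investigate
```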
Perhaps we have the GPU additions a little sooner precisely because the resource juggling and scoring problems haven't been fixed in advance!
All good fun!
Further help is no doubt welcomed... ;-)
Happy fast crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
RE: The announcement of the
The Seti app became official today.
BOINC blog