Heck, most MBs don't come with three slots for GPU cards ... In theory I could put that many on my latest MB, though it would likely suck the PS out of the wall ...
Also, the case is really too small for that many high-end cards ... I may add a couple of lower-end cards as time wends on ... but for the moment the GTX 280 card is plenty ... :)
Actually, it would likely suck the wall right out of the power supply dude! [cough!] I just installed a 9800 GT today, and they call for a minimum power supply of 400W. I doubt you could run a C2D or quad core with more than one GPU card anyway with that much flame coming out of the socket. [cough!] [cough!] Remember that there are the other users too, with display, router, printer, and DSL modem, perhaps clock, etc., all from the same socket. If you live near a nuclear power station, you might be able to get away with it, but I doubt it. No doubt you would have to get a variance from the Nuclear Regulatory Commission for a power supply like that! Good luck, though. :-)
Well, I have resisted posting anything regarding coprocessing capability here on EAH until now.
Here is my professional evaluation and recommendation.
Go ahead and develop a GPU app if you have the time, but I wouldn't even think of trying to roll it out here on EAH anytime soon while there is anything of a fundamental science or operational nature that needs addressing.
The experience over at SAH has demonstrated that the basic BOINC framework is nowhere near able to deal with the added complexity this introduces, and the historical record indicates that the required fixes aren't going to be here anytime soon.
Since EAH is well known as one of the most stable and well-thought-out projects in all respects, I would think long and hard about damaging that reputation just to appeal to a very small segment of the total host population.
Alinator
Seconded ...
Though I am literally panting at the thought of being able to run a real science project with my GPU resources ...
The issues with BOINC Manager, and Dr. Anderson's reluctance to address them squarely, mean that as more projects come online with GPU capability we will see a repetition of the angst and anger, because BOINC Manager and the system software cannot handle the needs of the participants.
Unless you also have the resources to begin to address these lacks in BOINC ... I know you guys (EaH developers) read the mailing lists, so you know what the issues are ... and how tepid the proposed response is ...
@Gerry
I am pulling about 400W out of the wall for the i7 with a GTX 280, and with the 9800 in the same box it was about 450 or so ... But the production is phenomenal ...
I am running three computers from one room's power and had to run an extension cord to another room for the other two ... I guess I really do need to have an electrician come out and change the 230 V three-phase line I had installed for my old UPS to a 20 or 30 amp dedicated line, now that I am using two smaller 3,000 VA UPSes ... I really need a third one for the two unprotected systems ...
And when I replace those two with another i7 (or dual i7?) later this year it would be nice to have all the systems protected ...
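For what it's worth, here is the sort of back-of-the-envelope sizing I do before trusting a UPS with these boxes. It's only a sketch in Python: the ~450 W for the i7 box is the figure I measured at the wall, but the other per-box wattages and the 0.9 power factor are assumptions, so treat the output as a rough estimate, not a spec.

```python
# Rough UPS sizing sketch.
# 450 W for the i7 + GTX 280 box is measured at the wall (see above);
# the other wattages and the power factor are assumed values.

boxes_watts = {
    "i7 + GTX 280": 450,   # measured
    "second box": 350,     # assumed
    "third box": 350,      # assumed
}

power_factor = 0.9          # assumed typical for PC supplies with active PFC
ups_rating_va = 3000        # one 3,000 VA UPS unit

total_watts = sum(boxes_watts.values())
total_va = total_watts / power_factor

print(f"Total load: {total_watts} W, roughly {total_va:.0f} VA")
print(f"Headroom on a {ups_rating_va} VA unit: about {ups_rating_va - total_va:.0f} VA")
```

With numbers like those, one 3,000 VA unit carries three boxes comfortably, which is why the two unprotected systems bother me ...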
I agree with Alinator. BOINC just isn't ready for CUDA. Even 6.5.0, which gets CUDA behaving a little better, broke a whole bunch of other bits that used to work. Yes, I know it's a "development" version, so that's to be expected. But even the release version (6.4.5) has issues. Then there is the patch to the work-fetch policy from the BOINC developers to fudge things through. Obviously they've thought things through - not.
And let's not mention the stuff happening with the S@H CUDA app, which can't handle VLAR or VHAR work and quite often just hangs. Obviously it's ready for prime-time use, so they released it to the public.
The one little ray of hope in this is that GPUGRID has an app that actually works. Their problem is getting BOINC to fetch work units.
I'm sure Bernd knows all the BOINC dev stuff that's going on and is probably not too keen to finish up and release the CUDA app until the issues with BOINC get sorted out first.
I know I'm not upgrading my BOINC Manager past 6.2.19 until things mature much better on the BOINC side; hopefully in the next version, or even the one after that. :-(
I wouldn't worry about wall power much. A standard US outlet is 15A-120V, that's 1800W of power. Even the biggest PSUs don't draw much more than a thousand.
Except most US homes are wired in circuits, meaning one entire room is (or could be) one circuit, and that circuit could be rated for 30 A total. So if you put a few computers on that one circuit you can easily max out the breaker, even though each plug is rated 15 A. The math is A = W / V, so let's assume 450 W per PC: that's about 3.75 A each, or 15 A if you put four PCs in one room ... and then add in the monitor, printer, lamp, and overhead light. I believe 15 A works out to about 1800 W, so a max of around 3600 W per room on a 30 A circuit?
Please feel free to correct my math, folks. I really stink at numbers, but you see my point.
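If it helps, here's a tiny sketch of that arithmetic in Python. The 450 W per PC and the 120 V mains are the figures from this thread; the breaker sizes are just examples, not a claim about anyone's actual wiring.

```python
# Sanity check of the breaker arithmetic above (A = W / V).

VOLTS = 120          # standard US outlet voltage

def amps(watts, volts=VOLTS):
    """Current drawn by a load in amps."""
    return watts / volts

pcs = 4
watts_per_pc = 450   # figure used in the thread

total_watts = pcs * watts_per_pc
print(f"{pcs} PCs at {watts_per_pc} W each draw {amps(total_watts):.2f} A")  # 15.00 A

for breaker_amps in (15, 20, 30):
    print(f"A {breaker_amps} A circuit at {VOLTS} V can supply about {breaker_amps * VOLTS} W")
```

So four 450 W boxes alone already sit right at a 15 A breaker, before the monitor and the rest of the room.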
seeing without seeing is something the blind learn to do, and seeing beyond vision can be a gift.
Only comment is that in the US, most standard outlet circuits are wired for 15 amps (older homes) or 20 amps (newer homes).
When I bought my latest system (the i7) I also bought one of those plug-in amp/watt meters so I can "size" my systems ... the i7 box has been as high as 450 W so far with CUDA running on two cards ... it is lower now, as I moved one of the cards ...
But this one room has the i7, Q9300, two Dell Xeons, and a Mac Pro (3.2 GHz full tower with you don't want to know how many disks ...), one 30" monitor and a 20" ... and a partridge in a pear tree ...