Since nobody else uses it, all I do when I leave it sitting and running tasks is lay it on a piece of granite tile I put on a table next to me, and at night... well, my high-tech place to put it is on top of the porcelain sink in the bathroom along with the AC box, so that is how I run it as cool as I can here.
Must admit it is my own bathroom, and I keep it cooler than the rest of the house.
You keep your bathroom cooler? I keep my bathroom door closed! Last room in the place I want cold when I sit down to do my business! :D
But yeah, laying it on granite or porcelain would act as a decent heat-sink (heh, heh) :) ...better than some who leave it sitting on blankets or carpet.
That number seems unusual; for the GTS 450 it should be more like 3,000 sec or below for one task at a time, e.g. see this one:
http://einsteinathome.org/task/300846802
Cheers
HB
Hmmm, yeah. You've got me stumped on that one.
I went back through everything trying to find the source for that number and found nothing, for either the GTS 450 or the 6,600 sec figure (in case it was meant for another card).
Ok, well here is the corrected list:
HD 7970 ------> 2x 2,300sec
HD 7950 ------> 2x 3,400, 3x 4,500
GTX 690 (2 GPU)
GTX 590 (2 GPU)
HD 7870
GTX 680 ------> 3x 3,100(Win7)
*GTX 680 -----> 2x 1,945(Linux)
GTX 580 ------> 3x 3,350(Windows)
*GTX 580 -----> 3x 3,050(Linux)
GTX 670 ------> 3x~4,300(Vista)
GTX 570
HD 7850
GTX 670
GTX 690 (1 GPU)
GTX 570
GTX 480 ------> 2x~2,200
GTX 470 ------> 2x~3,000, 3x 3,800
GTX 590 (1 GPU)
GTX 560 [448] -> 1x 1,550, 2x 2,500
GTX 560 Ti ----> 1x~1,900, 2x~3,094, 3x~3,961, 4x~6,000, 5x~6,840, 6x 7,800
*GTX 560 Ti ---> 1x 1,583 (OC'd)
GT 440
GTX 460
HD 7770 ------> 2x~8,500
HD 7750 ------> 2x~11,000
GTX 560 ------> 1x 3,300, 2x 4,800
GTX 465
HD 5850
HD 5870
HD 6970
*HD 6950(1536)-> 2x 6,700
HD 6950
HD 6990 (1 GPU)
HD 6870
GTX 460 SE
HD 5970 (1 GPU)
HD 6850 ------> 1x 3,800
GTX 550 Ti ---> 1x 3,065, 2x 5,600
GT 640 -------> 1x~5,700
GTS 450 ------> 1x~2,850
HD 6790
HD 5770 ------> 1x 7,750+
HD 6770
GF 610M ------> 1x~7,800
GT 430 -------> 2x 9,100
GT 520 -------> 1x~9,600(Linux)
FirePro V4800-> 1x 10,620
HD 5670 ------> 1x 11,100
*HD 5670 -----> 1x 11,480(Win XP32)
HD 5570 ------> 1x~15,000
HD 5450 ------> 1x~36,500!
Older cards (not OpenCL 1.1 capable), but still an interesting comparison:
GTX 295 ------> 1x 2,000(Linux)
8800GT G92 ---> 1x 3,600(Linux)
8800GTS G80 --> 1x 4,020(Linux)
*GT 240 ------> 1x 4,035(OC'd)
GT 240 -------> 1x~4,500
GT 220 -------> 2x 19,400
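A side note on reading the list: the "Nx <seconds>" entries are per-task run times with N tasks running concurrently on one GPU, so total output is best compared as N divided by that time. Below is a minimal Python sketch of that comparison, using the GTX 560 Ti line above as example data; the reading of the entries as concurrent per-task times is my assumption, and the values are the approximate ones quoted in the list.

# Compare throughput (tasks per day) at different concurrency levels, given
# the per-task run time observed when N tasks share one GPU.
# Example data: the GTX 560 Ti entry from the list above (approximate values).
gtx_560_ti = {1: 1900, 2: 3094, 3: 3961, 4: 6000, 5: 6840, 6: 7800}  # N -> sec/task

for n, sec_per_task in sorted(gtx_560_ti.items()):
    tasks_per_day = n * 86400 / sec_per_task
    print(f"{n} concurrent: ~{sec_per_task:>5} s/task -> {tasks_per_day:5.1f} tasks/day")

# On these numbers throughput climbs from ~45 tasks/day (1 at a time) to ~65
# tasks/day at 3 concurrent tasks, and gains very little beyond that.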
Hi all!
Those who have contributed to the benchmark list here and are following the discussion will probably also be interested in evaluating our next evolution step of the BRP4 app, which will first be released on Albert@Home (http://albert.phys.uwm.edu). This new version will have performance improvements, but just how much faster it will run will depend on the platform and individual hardware, so we are looking forward to comparing the new apps' performance to the numbers given here, with your help. What we expect is this:
- CPU-only versions: very moderate speed improvement, if any (we will release these versions later)
- CUDA app: moderate performance improvement, more noticeable for faster cards (we will release these versions later)
- OpenCL: quite substantial improvement, more noticeable for slower and mid-range cards. Some GPUs will probably run the new app at least twice as fast as the old one (these apps are already released on Albert).
In addition to the performance improvements, the new app generation will have a unified OpenCL app again (no more need for dedicated apps for the Radeon HD 69xx), and we will soon be pushing out native 64-bit apps as well.
The new GPU apps should increase GPU utilization. As a consequence, the "sweet spot" for running N GPU jobs in parallel might be different for your GPUs with the new app: e.g. if you currently get optimal performance running 3 units in parallel, the new app *may* (or may not) run best with only 2 units in parallel, or even just 1. This needs to be tested for individual models. We are looking forward to hearing from you, and I suggest keeping the discussion of the new versions' performance in the Albert@Home forum, to have it consistently in one place: http://albert.phys.uwm.edu/forum_thread.php?id=8912
Cheers
Heinz-Bernd
EDIT: URL corrected
That's great to hear!
Going to add Albert@Home to BOINC now, and I hope other people here will join the effort to improve the new generation of apps for E@H. The more diverse the base of tester hardware they get, the better the apps will be.
Just be sure you read the warnings on the main page first! Beta testing new software is not for the weak of heart or those with high blood pressure. :D
I am checking the GeForce GTX 660 Ti
CUDA Cores: 1344
Might have to get that one.......and a power supply.
Did anybody try this card?
Currently I'm using a 560 Ti with a 256-bit memory interface; the GTX 660 Ti has only 192-bit.
I guess this is critical, but by how much?
By how much? Well, a 192-bit interface at a 6008 MHz effective data rate moves roughly 12% more data than a 256-bit interface at a 4008 MHz effective data rate (144.192 GB/s vs. 128.256 GB/s). But that's just the computation on paper... unfortunately I cannot speak from experience. I have a GTX 560 Ti now, but I don't plan on picking up a GTX 660 Ti anytime soon.
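For anyone who wants to run the same check for other cards, here is a minimal sketch of that arithmetic in Python; the (bus width, effective data rate) pairs are just the reference-spec figures quoted above, not measurements.

# Theoretical peak memory bandwidth: (bus width in bits / 8) bytes per transfer
# times the effective data rate in transfers per second.
def bandwidth_gb_s(bus_width_bits, effective_rate_mhz):
    return (bus_width_bits / 8) * effective_rate_mhz * 1e6 / 1e9  # decimal GB/s

gtx_660_ti = bandwidth_gb_s(192, 6008)  # ~144.2 GB/s
gtx_560_ti = bandwidth_gb_s(256, 4008)  # ~128.3 GB/s

print(f"GTX 660 Ti: {gtx_660_ti:.1f} GB/s")
print(f"GTX 560 Ti: {gtx_560_ti:.1f} GB/s")
print(f"Difference: {100 * (gtx_660_ti / gtx_560_ti - 1):.1f}%")  # ~12.4%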
Well, as I mentioned earlier in this thread, I have been running a laptop 24/7 for about 70 days straight with an NVIDIA GeForce 610 and a desktop with the overclocked GeForce GTX 550 Ti (both quads with 8 GB RAM).
But about 24 hours ago I decided to upgrade another quad-core I have with another 8 GB of RAM and a 750-watt PSU to go with the EVGA GTX 660 Ti Superclocked, which should be here by Friday. When I get the package I will do the rebuild, start testing it, and post the info here by next weekend if all goes as planned.
This review piece from AnandTech might help answer your question: http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/16 — they test compute performance on top of the regular gaming tests.
In some situations memory bandwidth and cache have a greater effect than in others; I look forward to Magic's testing to see how this would affect E@H's hardware-straining computations.