http://www.tomshardware.com/reviews/geforce-gtx-660-geforce-gtx-650-benchmark,3297.html
(That is true, Sunny, I run other projects at the same time.)
Yes, I partially agree. However, what is this whole topic about?
Actually, we can compare my host and Magic's with some accuracy, because I run CPU tasks for other projects too. Running 4 WUs simultaneously doesn't gain much over running 2 WUs - I'm running 4 WUs just because I bought a card with 2 GB and hate to think it wasn't a good idea.
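To put numbers on the "4 WUs vs 2 WUs" point, it helps to convert run times into throughput. A minimal sketch - the run times below are hypothetical placeholders, not measurements from my host; plug in your own observed times per WU at each concurrency level:

```python
# Rough throughput estimate for running N WUs concurrently on one GPU.
# The run times used below are hypothetical, purely for illustration.

def tasks_per_hour(concurrency, runtime_seconds):
    """WUs completed per hour when `concurrency` WUs run side by side,
    each finishing in `runtime_seconds`."""
    return concurrency * 3600.0 / runtime_seconds

# Hypothetical example: 2 WUs at a time finish in 3000 s each,
# while 4 WUs at a time stretch to 5600 s each.
print(tasks_per_hour(2, 3000))  # 2.4 WUs/hour
print(tasks_per_hour(4, 5600))  # ~2.57 WUs/hour -> only a modest gain
```

If quadrupling the concurrency nearly quadruples each WU's run time, the extra simultaneous tasks buy almost nothing.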
Hmmm, let me ask a question about how to interpret benchmarks:
Currently my i3 runs with a borrowed GTX 550 Ti (192 shaders @ 900 MHz).
The benchmark you're referring to shows the GTX 650 (currently my favorite for a final solution) as ~20% faster than the GTX 550 Ti, but the new card has twice as many shaders running at a higher speed, and it is a new technology. All in all, the GTX 650 should be nearly twice as fast ...
AND this benchmark shows the HD 7770 with the same performance as the GTX 560. Is the GCN design really as fast as the CUDA design? I can only compare the speed with my older cards (HD 6950 / HD 5850).
Is there an overview available that shows the performance here at Einstein? A search for hd7770 gave no results.
Alexander
Edit: I've found the comparison post.
I think you misunderstood me. I'm not saying you can't compare your 560 Ti run times with someone else's 660 Ti run times at all... I'm saying that you can only do it with so much accuracy, due to all the external factors.
And yes, that is what this topic is about, so of course I expect all sorts of comparisons to crop up. I'm just saying that we should include disclaimers (to the best of our abilities) that point out what might otherwise be essential details in said comparisons. For instance, suppose a participant cannot figure out why their GTX 560 Ti run times are substantially longer than another user's GTX 560 Ti run times, even though most other external factors are equal between them. Then it comes to light that their CPU is an ancient Intel Core 2 Duo, while the person they're comparing to is using an i7 2600K. Just like that, the problem is more or less solved. Not only would it explain the differences in 560 Ti run times, but we would also know that the full potential of a GTX 560 Ti is being severely bottlenecked to begin with by an outdated CPU.
In fact, the above example might just shed some more light on why your GTX 560 Ti appears to outperform MAGIC's GTX 660 Ti - while MAGIC's quad-core AMD CPU probably isn't bottlenecking his 660 Ti, you do have a substantially more powerful CPU than MAGIC does.
*EDIT* - I forgot to mention a comparison of single-precision compute performance between the 560 Ti and the 660 Ti: 1263.4 GFLOPS vs 2460 GFLOPS - almost double the compute performance, and further evidence that any instance in which a 560 Ti appears to crunch work as fast as (or faster than) a 660 Ti is probably a special one.
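Those peak figures follow directly from shader counts and clocks: each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle. A quick sketch that reproduces the quoted numbers, assuming the stock 1645 MHz shader clock for the 560 Ti and the 915 MHz base clock for the 660 Ti:

```python
# Peak single-precision throughput: CUDA cores x clock (GHz) x 2 ops/cycle
# (one fused multiply-add per core per cycle counts as two FLOPs).

def peak_gflops(cuda_cores, clock_ghz):
    return cuda_cores * clock_ghz * 2

print(peak_gflops(384, 1.645))   # GTX 560 Ti, 1645 MHz shader clock -> ~1263.4
print(peak_gflops(1344, 0.915))  # GTX 660 Ti, 915 MHz base clock    -> ~2459.5
```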
Thanks for all this information and the opinions on these new models I asked about.
Just thought I would toss this in before I go to sleep.
New GeForce drivers that I will install later today for the GeForce GTX 550 Ti OC and GeForce GTX 660 Ti SC:
Version 306.23 - WHQL, Release Date: Thu Sep 13, 2012
Thanks for this link, but I tested that driver yesterday on my GTS 250 and it takes 20 minutes longer to compute a single WU. No idea how that works on other cards.
Try it out
Actually, the 550 Ti's shaders are running substantially faster than the 650's. Compute on an nVidia card happens on the part of the chip called a shader; in the 5xx and earlier series of cards, the shaders ran at twice the speed of the rest of the chip. The stock speed for the 550 Ti's shaders is 1800 MHz (vs 900 MHz for the 'core' clock, which everything else on the chip ran at). For the new Kepler architecture (used in some 640 and all 650 or higher cards), nVidia lowered the shader clock to match the core clock but added a lot more shaders. As a result, the performance gap between the two cards is much smaller; just looking at shader clocks and counts you'd expect a 17% speedup, so the 20% you're seeing is about right.
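The back-of-the-envelope math behind that 17% figure, assuming the GTX 650's stock 1058 MHz core clock (on Kepler the shaders run at the core clock):

```python
# Relative raw shader throughput: shader count x shader clock (MHz).
gtx550ti = 192 * 1800   # 192 shaders at the 1800 MHz hot clock
gtx650   = 384 * 1058   # 384 shaders at the 1058 MHz core clock (stock)

speedup = gtx650 / gtx550ti - 1
print(f"{speedup:.1%}")  # ~17.6%
```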
New drivers almost always work worse on older cards... mostly due to optimizations made to use the new technologies, but also due to extra code added to make the drivers safer, less buggy, etc. ... and also because they want to render old cards obsolete and get you buying new ones... ;D
I do not upgrade drivers unless there is a bug affecting me, some app requires it, or it's well known that it will speed up the crunching on my specific cards (which doesn't happen very often)...
Thanks for that info. I tried driver 197.xx - compute error right at startup. Then 267.85 - it works OK, but with a difference of about 40 seconds versus 301.42, so the 301 should be better. I can't say that for sure, though, since I'm running 6 CPU tasks at the same time - but the GTS 250 doesn't really demand much of my CPU, just between 1 and 5 percent if I run it alone :)