ATI or Nvidia, any guesses for GPU?

Christoph
Christoph
Joined: 25 Aug 05
Posts: 41
Credit: 5954206
RAC: 0

RE: i've got 2 9400 gt's

Message 94010 in response to message 94004

Quote:
I've got 2 9400 GTs that will be here to crunch if they are allowed (unlike Milkyway).

I just visited the Milkyway page, because my laptop has an ATI card inside. Since the 26th of August it looks like they now also have a CUDA app.

Christoph

[edit] I've now found the CUDA thread, and now I understand what you mean. [/edit]

Greetings, Christoph

Paul D. Buck
Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

A couple of minor points

A couple of minor points ...

CUDA cards in the 200+ class can do double precision, but they don't have as many units capable of that extended precision, because for GRAPHICS work double precision is mostly an advertising gimmick. As others have noted, most of the time SP is good enough.

That being said, later generations of CUDA cards will likely extend the number of units that can do DP just so they can advertise the fact ... not that most users make use of them at all ... it is just advertising ... with MW being a rare exception.

Last point, I wrote a fairly non-technical note in the Wiki about some of the issues with FP math, and it might prove interesting reading if you want to pursue the questions. I will merely note that in SOME cases DP is no more reliable than SP, because all FP number systems have only approximate representations of most numbers. For example, on an IBM 4381 you cannot represent the value 0.5 because there is no such value in the number system ... as a consequence, making rounding decisions at various locations on the number line gives variable results ... by that I mean calculations whose result should round down may not, because of the oddities of the fractional values at various places on the number line ...
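To make that concrete, here's a small C sketch I knocked together (my own illustration, not something from the Wiki note or any E@H code) showing that neither SP nor DP holds one-tenth exactly, and that the rounding errors accumulate:

[code]
#include <stdio.h>

int main(void)
{
    /* Neither single nor double precision can hold one-tenth exactly,
       because 0.1 has an infinitely recurring expansion in base 2.
       Double is closer, but still only an approximation. */
    float  f = 0.1f;
    double d = 0.1;
    printf("single: %.20f\n", f);
    printf("double: %.20f\n", d);

    /* Accumulated rounding: ten additions of 0.1 do not give exactly 1.0 */
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;
    printf("sum of ten 0.1s == 1.0 ? %s\n", sum == 1.0 ? "yes" : "no");
    return 0;
}
[/code]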

But I digress ... back to yelling about which GPU sucks ... :)

Bill592
Bill592
Joined: 25 Feb 05
Posts: 786
Credit: 70825065
RAC: 0

RE: ..... For example on an

Message 94012 in response to message 94011

Quote:
..... For example on an IBM 4381 you cannot represent the value 0.5 because there is no such value in the number system ... as a consequence .....

( Announcement )

The Nobel prize in mathematics has been awarded to a California professor who has discovered a new number.

The number is ‘Bleen’! Which he claims belongs between Six and Seven.

George Carlin

Mike Hewson
Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 315482506
RAC: 322119

RE: RE: ..... For example

Message 94013 in response to message 94012

Quote:
Quote:
..... For example on an IBM 4381 you cannot represent the value 0.5 because there is no such value in the number system ... as a consequence .....

( Announcement )

The Nobel prize in mathematics has been awarded to a California professor who has discovered a new number.

The number is ‘Bleen’! Which he claims belongs between Six and Seven.


Well Bleen is a base eleven number - one, two, three, four, five, six, bleen, seven, eight, nine, ten. :-)

Jokes aside, the base chosen to represent a number is important. One needs to distinguish between the idea of a particular number, "three-ness" for instance, and the way it is represented and manipulated. Computers as currently predominantly configured work in base 2, whereas we people like base ten. All such systems have 'placeholding' of digits, such that digit values increase/decrease by powers of the base according to position. There's endian-ness too, so that you can choose which way you want placeholder values to vary. An exact, finite & non-recurring representation of a specific number in one system may be inexact, infinite and recurring in another. So 'one-third' in decimal is 0.3333* ( the asterisk indicating infinite recurrence of the 3 ) but is 0.1 in base three.

[ Plus there's another tricky bit. For example 1.0 and 0.9999* are considered to be representations of the same decimal number - because they are as close as you please, and thus must be equal! ]

And there is the idea of 'significance', so that one may throw away leading/trailing zeroes in the representation and retain the number unaltered. The ideas of 'zero' and 'one' have a special place in all systems, with the use of a 'point' to distinguish between places with value on either side of one ( unless you 'float' the point by adding more detail to the representation - the power of the base, or placeholder shift, needed to obtain the real value ).

So ..... if the IBM 4381 cannot represent the decimal number 0.5, then I might deduce it wasn't binary based. In binary that ought to be 0.1, that is 1 x 2^(-1) ....
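To put some digits to that (a quick sketch of my own, done with exact integer arithmetic so we're looking at the numbers themselves rather than any particular machine's FP format):

[code]
#include <stdio.h>

/* Print the first few places of the fraction num/den in a given base,
   using exact integer long division so the digits are those of the
   number itself, not of some floating point approximation of it. */
static void expand(long num, long den, int base, int places)
{
    printf("%ld/%ld in base %d: 0.", num, den, base);
    for (int i = 0; i < places; i++) {
        num *= base;
        printf("%ld", num / den);
        num %= den;
    }
    printf("\n");
}

int main(void)
{
    expand(1, 2, 2, 8);    /* one-half in binary: 0.10000000 (exact)          */
    expand(1, 3, 3, 8);    /* one-third in base three: 0.10000000 (exact)     */
    expand(1, 3, 2, 8);    /* one-third in binary: 0.01010101... (recurs)     */
    expand(1, 10, 2, 12);  /* one-tenth in binary: 0.000110011001... (recurs) */
    return 0;
}
[/code]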

And numbers written in a base one system are really boring. :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ... (Blaise Pascal)

... and my other CPU is a Ryzen 5950X :-)

Ver Greeneyes
Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9562235
RAC: 0

One way of keeping floating

One way of keeping floating point errors from growing into a problem is to use Kahan summation. It's only about twice as slow as doing things normally, and it means you can sum any number of values of greatly varying magnitudes without running into large errors. I believe CUDA and CAL use forms of tree-based and pairwise summation, taking advantage of the parallelism of the GPUs (but you could presumably write your own summation function to get some extra precision).
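For the curious, here's a minimal single-precision sketch of Kahan summation in C (the data values are just made up to show the effect; this isn't taken from any E@H or CUDA code):

[code]
#include <stdio.h>

/* Kahan (compensated) summation: 'c' carries the low-order bits that
   would otherwise be lost when each term is added to the running sum. */
static float kahan_sum(const float *x, int n)
{
    float sum = 0.0f, c = 0.0f;
    for (int i = 0; i < n; i++) {
        float y = x[i] - c;   /* apply the correction from the last step  */
        float t = sum + y;    /* low-order bits of y can be lost here ... */
        c = (t - sum) - y;    /* ... but this recovers what was dropped   */
        sum = t;
    }
    return sum;
}

int main(void)
{
    /* 1.0 followed by a million values of 1e-7: the true sum is 1.1 */
    enum { N = 1000001 };
    static float data[N];
    data[0] = 1.0f;
    for (int i = 1; i < N; i++) data[i] = 1.0e-7f;

    float naive = 0.0f;
    for (int i = 0; i < N; i++) naive += data[i];

    printf("naive sum: %.7f\n", naive);              /* visibly off from 1.1 */
    printf("kahan sum: %.7f\n", kahan_sum(data, N)); /* very close to 1.1    */
    return 0;
}
[/code]

One caveat: a compiler set to relaxed FP optimisation (e.g. -ffast-math) may optimise the compensation away, so it wants to be built with strict IEEE semantics.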

Martin Ryba
Martin Ryba
Joined: 9 Apr 09
Posts: 48
Credit: 160371664
RAC: 35985

RE: Last point, I wrote a

Message 94015 in response to message 94011

Quote:

Last point, I wrote a fairly non-technical note in the Wiki about some of the issues with FP math, and it might prove interesting reading if you want to pursue the questions. I will merely note that in SOME cases DP is no more reliable than SP, because all FP number systems have only approximate representations of most numbers. For example, on an IBM 4381 you cannot represent the value 0.5 because there is no such value in the number system ... as a consequence, making rounding decisions at various locations on the number line gives variable results ... by that I mean calculations whose result should round down may not, because of the oddities of the fractional values at various places on the number line ...

Hey Paul, could you provide the link? Which wiki is it in? I tried searching to no avail. I do a lot of DSP at work, and as a physicist and systems engineer one of my primary concerns is algorithm quality traded against precision. For instance, in 2006 I successfully implemented a high-accuracy time-of-arrival-estimating template-matching algorithm on a Blackfin DSP, which primarily supports 16-bit fixed-point arithmetic (including FFTs). You could get floating point in both single and double precision, but the fast stuff was fixed point. One of my principal beefs with MATLAB is that nearly everything is DP. What a memory pig. I did use their "Fixed Point Toolbox" to validate that algorithm, but that was painfully slow in itself. Now I do a lot of stuff on FPGAs; there you can trim your precision to the bit level.
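For anyone who hasn't met fixed point, here's a tiny Q15 multiply sketch in C of the sort of operation a 16-bit DSP does natively (the Q15 format and the rounding choice here are illustrative assumptions of mine, not the Blackfin's actual intrinsics):

[code]
#include <stdint.h>
#include <stdio.h>

/* Q15: a signed 16-bit integer read as value/32768, covering [-1, +1)
   with a resolution of 2^-15. */
typedef int16_t q15_t;

static q15_t  to_q15(double x)  { return (q15_t)(x * 32768.0); }
static double from_q15(q15_t x) { return (double)x / 32768.0; }

/* Multiply two Q15 values: the 32-bit product is in Q30 format, so add
   half an LSB and shift right by 15 to round back to Q15.
   (Assumes the usual arithmetic right shift for negative values.) */
static q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;      /* exact Q30 intermediate */
    return (q15_t)((p + (1 << 14)) >> 15);    /* round and rescale      */
}

int main(void)
{
    q15_t a = to_q15(0.5), b = to_q15(-0.25);
    printf("0.5 * -0.25 = %f\n", from_q15(q15_mul(a, b)));   /* -0.125 */
    return 0;
}
[/code]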

"Better is the enemy of the good." - Voltaire (should be memorized by every requirements lead)

Simplex0
Simplex0
Joined: 1 Sep 05
Posts: 152
Credit: 964726
RAC: 0

I just speculating here but I

I'm just speculating here, but I could imagine that a double-precision based calculation would translate into a higher resolution of the wavelengths being analyzed, so I wonder what the precision of the GW detector is when it measures the wavelength of the light?

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 722700165
RAC: 1154419

RE: I just speculating here

Message 94017 in response to message 94016

Quote:
I'm just speculating here, but I could imagine that a double-precision based calculation would translate into a higher resolution of the wavelengths being analyzed, so I wonder what the precision of the GW detector is when it measures the wavelength of the light?

It's not really the wavelength that is measured (that's constant for the laser used), it's more like the phase difference. Anyway, the main raw data acquisition channel is the "GW channel", which uses a 16-bit A/D converter:

http://www.ligo.caltech.edu/docs/G/G970064-00.pdf
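For a rough feel of what 16 bits buys you (the 6.02*N + 1.76 dB figure below is the textbook ideal-quantizer result, not a LIGO specification):

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Quantisation levels and ideal signal-to-noise ratio for an
       N-bit A/D converter (ideal quantiser, full-scale sine input). */
    for (int bits = 12; bits <= 24; bits += 4) {
        double levels = pow(2.0, bits);
        double snr_db = 6.02 * bits + 1.76;
        printf("%2d bits: %10.0f levels, ideal SNR about %.0f dB\n",
               bits, levels, snr_db);
    }
    return 0;
}
[/code]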

CU
Bikeman

Simplex0
Simplex0
Joined: 1 Sep 05
Posts: 152
Credit: 964726
RAC: 0

Thank you Bikeman. I have to

Thank you Bikeman.
I have to take a deeper look in how
the detector actually works.

MC707
MC707
Joined: 12 Dec 07
Posts: 4
Credit: 18331
RAC: 0

RE: Einstein @ home will

Message 94019 in response to message 93989

Quote:

Einstein @ home will never get any real work done until they create and release an ATI app.

ATI cards are much more powerful than nvidia in double precision crunching.


So friggin' agreed. I've got two HD4890s... they just kick ass (not to mention the bang for the buck!)
