Yeah, we had a stats page here on the old version of the website, but it's been a while, so I don't remember where that list was, and I think the newer cards wouldn't be on it anyway.
But it was just single-card numbers.
I might keep looking for that thread; I see a couple of other long threads from the past here, so that stats page must still be around.
https://einsteinathome.org/content/nvidia-pascal-and-amd-polaris-starting-gtx-10801070-and-amd-480
https://einsteinathome.org/content/times-elapsed-cpu-brp566-beta-various-cpugpu-combos-discussion-thread
The newest one I have is my GeForce 660Ti SC, so I guess I am not up to date with those newer ones we have here... I even run my old 550Ti and 560Ti when I get in the mood... OC'd of course.
Sorry if I skimmed over some posts and missed some important details, but will WUprop give you the info you're looking for? You can look up computing times as reported by other users, for a particular project and app.
Example Search
I've got a Vega 56 and was running 3 WUs, which was the most efficient at the time. I say "at the time" because I only had a 4-core CPU.
https://einsteinathome.org/goto/comment/162210
I now have a Threadripper 1950X, so I'm keen to retest my Vega 56 in the near future (about 3 weeks' time at least). It will be nice to experiment with sacrificing cores to look for speed increases/decreases and strike a good GPU/CPU combination.
Regarding electricity usage, make no mistake: I have really noticed it on the electricity bills, having owned an R9 280x, an R9 390 and now a Vega. I ditched my thoughts of a Vega 64 when I heard about the TDP. 24/7 crunching really takes a toll. That's one reason I'd rate the GTX 1070 better than my Vega 56: TDP vs credits.
I have my first NV card in the mail as we speak: a cheap (relatively speaking) old 970, primarily due to its TDP. Oh, how I ache for another R9 280x for M@H, but the power usage, plus the fact my old one basically died within 12 months, plus there are 2 for sale on eBay right now with "issues"... I'm not going to go there.
Putting solar on the roof has really helped, but electricity prices in Australia are astronomical at the moment. Almost like the price of GPUs :/
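For anyone wanting to put rough numbers on the 24/7 power cost being described, the arithmetic is just watts × hours × tariff. A minimal sketch; the board-power figures and the $0.30/kWh tariff below are illustrative placeholders, not measured values, so substitute your own card's draw and your local rate:

```python
# Rough 24/7 crunching cost estimate. All numbers here are
# illustrative placeholders, not measured figures.

def daily_cost(watts, price_per_kwh):
    """Cost of running a constant load 24 h/day at a flat tariff."""
    kwh_per_day = watts * 24 / 1000
    return kwh_per_day * price_per_kwh

# Approximate published TDPs and a hypothetical $0.30/kWh tariff:
for card, watts in [("Vega 56", 210), ("GTX 1070", 150), ("GTX 970", 145)]:
    cost = daily_cost(watts, 0.30)
    print(f"{card}: {watts} W -> ${cost:.2f}/day, ${cost * 365:.0f}/year")
```

Even a 50-60 W TDP gap compounds into a noticeable yearly difference at those tariffs, which is the point being made above.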
Bit of an update here. I'm currently running 3 x WUs with my CPU usage set to 93% (16C/32T Threadripper) to reserve 3 cores; at 94% an extra WU starts.
Average WU time is about 950 sec:
950/3 = ~316 sec/WU.
Fluctuating a bit though. Yesterday they were finishing in about 11 - 12mins.
I am running CPU tasks on another project though fwiw.
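The per-WU figure above is just wall-clock time divided by the number of concurrent tasks, and the 93%/94% behaviour follows from BOINC using (approximately) the floor of total threads × percent. A quick sketch of both calculations; the helper names are made up for illustration:

```python
# Effective per-task time when N WUs run concurrently on one GPU,
# plus the thread count left free at a given BOINC "% of CPUs"
# setting. Helper names are invented for this sketch.
import math

def effective_wu_time(elapsed_secs, concurrent):
    """Wall-clock seconds attributable to each of N concurrent tasks."""
    return elapsed_secs / concurrent

def threads_in_use(total_threads, percent):
    """BOINC uses roughly floor(total * percent / 100) logical CPUs."""
    return math.floor(total_threads * percent / 100)

print(effective_wu_time(950, 3))   # ~316.7 s per WU
print(threads_in_use(32, 93))      # 29 in use -> 3 threads reserved
print(threads_in_use(32, 94))      # 30 in use -> only 2 reserved
```

On a 32-thread 1950X, 93% floors to 29 threads (3 reserved) while 94% floors to 30, which is why the extra WU starts at 94%.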
Chooka wrote:
Average WU time is about 950sec
950/3 = 316sec/WU.
If you'd like to know the reason why 220-240 secs has now become 316 secs, you might like to peruse this thread, particularly the recent stuff near the end. In the last month, there have been two significant changes in the nature of the data. The first change, almost a month ago, resulted in a significant reduction in the elapsed time to process a task (crunch time). That has now been reversed, and a bit more for good measure, so it seems :-). The latest tasks are now taking longer than what was the norm before the speedup earlier this month.
The longer crunch time is a good thing. Should take a bit of load off the servers. They mightn't be quite so frantic in distributing tasks and receiving results :-).
Cheers,
Gary.
Thanks Gary!
I thought it might have been task related.
Hey Gary, what are your thoughts on the RX 480 card? Do you run any?
How do they compare with a R9 280x?
I see they have a low DP output but are far more efficient.
I don't have any 480s. I have a number of 560s, 570s and 580s to go with a bunch of 460s I started the Polaris adventure with :-). I was fortunate with the 570s and 580s: I got them at a very good price before the 'mining boom' (aka price gouging) really kicked off :-). I haven't been looking too closely at prices lately, but a quick check now shows best current prices about $AUD150 per unit more than what I paid. The 570s have virtually identical performance to the 580s. I'm very happy with those purchases. A 480 should be pretty similar and worthwhile to run if you can get one at the right price :-).
I've used most of my Polaris purchases to upgrade machines (2009-2010 vintage) that I was planning to retire. The surprising thing was that despite the PCIe slots being version 1.x, the performance is virtually identical to the same GPU in a much more modern motherboard. It's nice that 8+ year old machines can still be useful - just as a platform for a decent GPU.
Chooka wrote:
How do they compare with a R9 280x?

I don't have any 280x (HD 7970 rebadged, if I remember correctly). I have a couple of HD 7950s and, based on what the extra performance of a 7970 would probably amount to, I would prefer to have a 480 for Einstein. The DP performance doesn't matter. The two benefits of the 480 would be lower power usage and higher output. My 570s give about 25% more output than a 7950. I would be very surprised if a 280x was anywhere near that much better than a 7950.
Cheers,
Gary.
Gary Roberts wrote:
The surprising thing was that despite the PCIe slots being version 1.x, the performance is virtually identical to the same GPU in a much more modern motherboard. It's nice that 8+ year old machines can still be useful - just as a platform for a decent GPU.
I suspect that depends very much on the particular application and input data. Happily those of us with a primary interest in a single Einstein project application can optimize to that one--until they change it.
archae86 wrote:
Happily those of us with a primary interest in a single Einstein project application can optimize to that one--until they change it.
Yes, I'm very aware of that. The thing that's been in the back of my mind has been how things would change if and when a GPU app for GW appeared. Bernd mentioned it initially quite a long time ago but there's been nothing recently.
It has suited me to keep the old CPUs running just as a platform for GPUs, believing that when a GW GPU app appears it may really benefit from the latest in CPU architecture at that time. I was hoping it would have been by now but I've got the patience to continue waiting. However, there are signs the old hardware is starting to feel the strain. In the last couple of months I've done several cap replacements on motherboards. So far that has been keeping everything working and now that winter has finally arrived, and the machine room is at 27C rather than 38C, it should be basically OK for the next few months :-).
I was interested to read (via this link that Martin posted about a month ago in the Science forum) about the computational complexity for detecting continuous GW emissions. The fact that they set up a new group (quite a few bodies in the group photo) and that funding was in place for several years, seems to indicate a lot of work to be done and the need for a GW app to speed things up. That gives me hope for something to do for GPUs after we finish with gamma-ray pulsars, whenever that inevitably happens :-).
Cheers,
Gary.