Hi Bill,
Yes, I'm in Panama now due to my job, and I really miss my big battleships with their big guns (690s). LOL
I have some more info about the problem: it seems to be a general problem of the 390/290 series. The same thing is happening at SETI@home. From what I could dig up, it's something related to the GPU processor and the driver.
I hope the AMD team can find the cure very fast.
I was thinking about building a new host here with 4 or more big guns; the performance of the 390 is really amazing when you compare it against similarly priced NVIDIA hardware.
I think Tom* might have figured out where the tasks are failing. I looked at several of your invalids and saw what he noticed: inordinately high values of sumspec pages. If I understand what little I know of what the application is doing, the number of sumspec pages is related to memory allocation for processing the task. I looked at a lot of your wingmates until I found similar hosts using a Hawaii chip and checked their invalids. Without knowing whether those hosts tried running more than one task at a time, I saw high values of sumspec pages on some of those invalids too. I still think setting the memory_debug and coprocessor_debug flags would be enlightening. I haven't followed any of the 290/390X SETI discussions so far; maybe the SETI folks already have a good handle on the issue?
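For reference, those debug flags are enabled in BOINC's cc_config.xml in the data directory; a minimal sketch, assuming a recent BOINC client where the corresponding log-flag names are `mem_usage_debug` and `coproc_debug`:

```xml
<!-- cc_config.xml: place in the BOINC data directory, then restart the
     client or use "Options > Read config files". The flag names here are
     the cc_config.xml equivalents of the "memory_debug" and
     "coprocessor_debug" flags mentioned above. -->
<cc_config>
  <log_flags>
    <mem_usage_debug>1</mem_usage_debug>
    <coproc_debug>1</coproc_debug>
  </log_flags>
</cc_config>
```

The extra messages then show up in the event log / stdoutdae.txt.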
Another thing to try is the beta 1.52 BRP4G application, to see whether there is any difference.
I can confirm that the problem remains with the beta 1.52 BRP4G tasks as well.
Hi,
I'm a little late to the party, as I only recently switched from my GTX 470 to an R9 290.
At first I was using the Catalyst 15.7 driver and ran only a single WU, since two at the same time were failing. The runtime was around 3300 s for the Parkes PMPS XT v1.52 (BRP6-opencl-ati) app (6500 s for two WUs at the same time, but invalid) and 1150 s for the Arecibo GPU v1.52 (BRP4G-Beta-opencl-ati) app (1800 s for two WUs, but invalid).
After reading through many posts, I stumbled upon one on the AnandTech forums:
http://forums.anandtech.com/showthread.php?t=2355452&page=2
So I switched to Catalyst 14.4, and now I'm able to run two WUs at the same time. However, the efficiency is not so good: two WUs now need 7600 s. I haven't crunched any BRP4 units yet, so no data there.
Try Catalyst 13.12. It is just slightly slower than 15.7 but significantly faster than Catalyst 14.xx.
I found the 13.xx driver so slow that it was faster to do just one task at a time with the later driver.
I didn't try 13.12 though; it would be interesting to see how that one does.
I'm running that card in my Linux system now, so no more playing with drivers (I'm just content for the moment that it all works OK).
I have now successfully switched to Catalyst 14.8. Productivity is up: running two Parkes PMPS XT WUs takes ca. 5000 s. No invalids so far, besides one from switching the driver.
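To make the efficiency comparison explicit, here is a quick back-of-the-envelope calculation of the effective time per WU, using only the BRP6 runtimes quoted in this thread (rough, host-specific figures for an R9 290):

```python
# Effective per-task wall-clock time when running N tasks concurrently
# on one GPU. Runtimes are the ones reported in this thread.

def per_task_time(wall_time_s, concurrent_tasks):
    """Wall-clock seconds of GPU time spent per finished task."""
    return wall_time_s / concurrent_tasks

catalyst_15_7_single = per_task_time(3300, 1)  # 1 WU at a time -> 3300 s each
catalyst_14_4_double = per_task_time(7600, 2)  # 2 WUs at a time -> 3800 s each
catalyst_14_8_double = per_task_time(5000, 2)  # 2 WUs at a time -> 2500 s each

print(catalyst_15_7_single, catalyst_14_4_double, catalyst_14_8_double)
```

So running two at once under 14.4 was actually slower per WU than running one at a time under 15.7, while 14.8 with two at once is the clear winner, provided the results validate.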
I've got the same problem on my new R9 Nano: all completed tasks are invalid. Any solution to fix this?
One solution would be to run only one WU at a time on that card.
I know it's a much older driver, but if I had that card I would at least try to see whether I could get Catalyst 14.8 to work on it.
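The standard BOINC way to enforce one task per GPU is an app_config.xml in the Einstein@Home project directory; a sketch, assuming the BRP6 application name is `einsteinbinary_BRP6` (the exact name on your host should be verified in client_state.xml, and the Einstein@Home "GPU utilization factor" project preference is an alternative):

```xml
<!-- app_config.xml: goes in the projects/einstein.phys.uwm.edu/ directory.
     gpu_usage 1.0 means one task per GPU; 0.5 would allow two at once.
     The <name> value is an assumption; check client_state.xml. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving it, reread config files or restart the client for it to take effect.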