Peter, be careful with the memory overclock. It may well be that too high a clock speed does not yet produce errors but reduces throughput, as the error correction (and resending of data) on the memory bus is triggered. It's certainly been observed for the GTX 1080 with its GDDR5X.
MrS
Scanning for our furry friends since Jan 2002
Happily I inched up my memory clock just a very little bit at a time, after taking my accidental first big gulp (on the NVI scale +500). I was raising my request on the NVI scale just 50 at a time, which is just 25 at a time on the GPU-Z scale.
I logged completion time at each step along the way, and it got monotonically shorter all the way to the point where I got an involuntary reboot, which I deemed the "you're finished" signal.
So for my card in my box on BRP6/CUDA55 at my conditions, not a factor.
By the way, since you raised the subject of errors, I'm happy to report that throughout my taking both core clock and memory clock to the brink and beyond, I never once generated a validation error--every WU that was returned validated as good. I did get runtime computation errors from what I believe was too fast a core clock (two instances), and I got an involuntary reboot that I believe was the symptom of too fast a memory clock (two instances).
Overall, I'd score this as about the most docile overclocking exercise in my memory--and also one of the most productive. Of course it helps that the card is not consuming as much power running Einstein BRP6/CUDA55 at a given clock condition as it would on many games.
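The stepping procedure above boils down to a log-and-compare loop: raise the offset, log a task's completion time, and stop when times stop improving. A minimal sketch, where the offsets and completion times are hypothetical stand-ins for the values actually logged:

```python
# Sketch: find the memory-clock offset where an overclock stops paying off.
# The (offset, completion_s) pairs below are hypothetical, standing in for
# the per-step task completion times logged during the climb.
def best_offset(samples):
    """Return the offset with the shortest completion time.

    If error-correction retries kick in, completion time rises again,
    so the best offset sits just before that inflection point.
    """
    return min(samples, key=lambda s: s[1])[0]

samples = [
    (0, 5400),    # stock
    (250, 5310),
    (500, 5230),
    (550, 5215),
    (600, 5208),  # fastest task time
    (650, 5290),  # slower again: likely error-correction overhead, back off
]
print(best_offset(samples))  # 600
```

Note this only catches the GDDR5X throughput-loss failure mode MrS describes; a hard reboot at a given offset still ends the climb the old-fashioned way.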
That's good to know for everyone else overclocking these cards, thanks!
MrS
Scanning for our furry friends since Jan 2002
In the matter of the discussions and accusations regarding overall power consumption of the RX 480 and more particularly assertions that it rather commonly violates all three of:
- its own stated TDP of 150 W
- the standard limitation on the PCIe bus power connection
- the standard limitation on the six-pin power connector
AMD has announced that they will release a new driver within a couple of days which will modify card behavior, including an optional compatibility mode (not turned on by default) which apparently is meant to assure compliance with at least one of these three specs (though the initial release statement does not clarify which, stating merely that, if activated, it reduces power with minimal performance loss).
Here are two articles on the matter, both of which include the exact text of the AMD announcement.
wccftech power article
anandtech power article
As an Einstein user, I'm a bit concerned about the AMD assurance
Quote:
we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage
I rather imagine the continuous 24 x 7 usage I would expect for Einstein work is well outside the expected usage profile. On the other hand, it may be that for the RX 480, as I have seen with some other cards, running the current Einstein applications causes power draw well below that seen in games, in which case a special Einstein concern might not be warranted.
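For reference, a minimal sketch of the compliance arithmetic behind those three limits. The 75 W slot and 75 W six-pin figures are the PCIe specification values; the rail draws in the example are made up for illustration, not measurements of any real card:

```python
# Compliance check against the three limits discussed above.
PCIE_SLOT_LIMIT_W = 75.0   # PCIe spec limit for power drawn via the slot
SIX_PIN_LIMIT_W   = 75.0   # spec limit for a 6-pin auxiliary connector
TDP_W             = 150.0  # AMD's stated board power for the RX 480

def violations(slot_w, six_pin_w):
    """Return which of the three limits a given draw would exceed."""
    out = []
    if slot_w > PCIE_SLOT_LIMIT_W:
        out.append("PCIe slot")
    if six_pin_w > SIX_PIN_LIMIT_W:
        out.append("6-pin connector")
    if slot_w + six_pin_w > TDP_W:
        out.append("TDP")
    return out

# An illustrative draw of 82 W from the slot and 84 W from the connector
# would breach all three limits at once:
print(violations(82.0, 84.0))  # ['PCIe slot', '6-pin connector', 'TDP']
```

The interesting case for crunchers is sustained draw: a game that spikes past a limit briefly is a different stress than a 24 x 7 compute load sitting at it continuously.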
Quote:
Overall, I'd score this as about the most docile overclocking exercise in my memory--and also one of the most productive.
That was imprudent to claim at that moment, as in fact the system had recently had a major event, which I had not yet noticed. I've delayed posting as I don't have a good diagnosis.
The system had run fine with the 1070 at the slightly relaxed overclock imposed after one classic incident, and the 750Ti SC running stock. Four hours after the event I finally noticed that BOINCTasks running on another machine was not showing any changes in work status, and that the power meter showed about 66 W (idle, not the 260-ish for running Einstein at current settings). While the system correctly answered a ping, it was not otherwise responsive, so I eventually hit the reset button.
The temperature trend graph in TThrottle showed a sudden GPU temperature drop, which implies it kept running for at least a while after the primary event. I think both GPUs dropped in temperature at about the same time, which suggests either that the primary problem was not on a GPU, or that if it was, it propagated into the host system in a way that stopped processing on the second GPU.
I reviewed Windows Event Viewer (a first for me on a Windows 10 machine) and could not spot any useful reports--specifically nothing near the bad event timeline.
Interestingly enough, all six in-process GPU tasks at the time of the event resumed successfully. However two of the three in-process CPU tasks failed promptly when BOINC restarted them after my reboot.
So I really don't know whether this has something to do with 1070 overclocking, or with the combination of the two cards running together, or a chance weakness of my two-month-old Windows 10 PC, or even some incompatibility of my system with the new-to-me GRPB1 work, which it has just switched to from a previous diet of GW CPU tasks.
As I consider 1070 overclocking the prime suspect (though the symptom was new to me), I backed off the core clock and memory clock 5 increments each, and have been inching them alternately back up after at least eight hours of run time on a setting. At the moment I have almost four successful hours at the most recent rung on this ladder (+80 core clock, +650 memory clock). I intend to keep inching up until something breaks, though if I get close to previous limits I'll spend longer on each setting.
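As an aside, the failure signature here (ping still answered, work status frozen, wall power at the idle floor) is the kind of thing a small watchdog could flag hours sooner. A hypothetical sketch, with made-up thresholds and readings; the sampling of progress and power is left to whatever tools are on hand (BOINCTasks, a metered outlet):

```python
# Sketch of a stall check for a cruncher that hangs while still answering
# pings. Default wattages are taken from the incident described above;
# the progress readings in the example are invented.
def stalled(progress_samples, power_w, idle_power_w=66, busy_power_w=260):
    """Flag a hang: no task progress AND wall power near the idle floor."""
    no_progress = len(set(progress_samples)) == 1  # identical readings
    near_idle = power_w < (idle_power_w + busy_power_w) / 2
    return no_progress and near_idle

# Three identical progress readings at 66 W looks like this incident:
print(stalled([41.7, 41.7, 41.7], power_w=66))   # True
print(stalled([41.7, 42.1, 42.6], power_w=262))  # False
```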
While there is precious little detail, initial prices have been announced for the forthcoming GTX 1060 cards: a suggested $250 base MSRP, with the "Limited Founders Edition" about $300. Initial shipment release is July 19.
videocardz initial 1060 stuff
gamersnexus 1060 stuff
Happily the announced memory speed is better than some rumors suggested: 8 Gbps GDDR5 on a 192-bit bus, so just 0.75 of the 1070's bandwidth.
I think this might do pretty well here at Einstein, as BRP6 throughput seems not to scale fully with available core count, so performance may not drop as much as the reduction from 1920 cores on the GTX 1070 to 1280 on the GTX 1060 suggests. I'm hoping for three-quarters of the performance at two-thirds the price, and suspect it will more easily stay cool enough in my less capable cases.
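The back-of-envelope behind that hope, using the announced specs; the 1070 price here is an assumed $379 base MSRP, so the price ratio is only illustrative:

```python
# Bandwidth = memory speed (Gbps) * bus width (bits) / 8.
# Core counts and memory specs from the announcements; 1070 price assumed.
gtx1070 = {"cores": 1920, "gbps": 8, "bus_bits": 256, "price_usd": 379}
gtx1060 = {"cores": 1280, "gbps": 8, "bus_bits": 192, "price_usd": 250}

def bandwidth_gbs(card):
    """Peak memory bandwidth in GB/s."""
    return card["gbps"] * card["bus_bits"] / 8

core_ratio = gtx1060["cores"] / gtx1070["cores"]            # ~0.67
bw_ratio = bandwidth_gbs(gtx1060) / bandwidth_gbs(gtx1070)  # 0.75
price_ratio = gtx1060["price_usd"] / gtx1070["price_usd"]   # ~0.66

print(round(core_ratio, 2), bw_ratio, round(price_ratio, 2))
```

If BRP6 throughput tracks memory bandwidth more closely than core count, the 0.75 figure is the one that matters, which is exactly the "three-quarters of the performance for two-thirds the price" hope.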
On the other hand, unless it overclocks much more favorably, it seems likely to underperform the RX 480 on BRP6 by a lot, but consume a lot less power while costing somewhat more.
As I'm something of a total-system-power-consumption reduction hawk, I don't currently plan to explore the RX 480 path, but on current indications will probably purchase a Founders 1060 as soon as they are available. I'm also a fan noise nut, and I'm finding the sonic character of the 1070 Founders fan far more agreeable than my other GPU fans, and I somewhat doubt the early availability of substantially less expensive cards.
I've seen a claim that the review embargo lifts at 9 a.m. July 19. If so, people won't have reviews to go on when making up their minds in advance. Maybe this will take some of the zest out of the first-day frenzy.
Just picked up a 1080. I'm having trouble getting the card to reach full memory clock while crunching BOINC projects. I've used NVI to boost the P2 memory clock, but I'm not seeing any effect from it. Also, the card's GPU clock is not boosting.
I tried going back through this thread but didn't see this issue. Any help would be appreciated. Thanks.
Which driver version have you installed for that 1080? If not already in use, try the 368.69: http://www.guru3d.com/files-details/geforce-368-69-whql-driver-download.html
Latest. 368.39. Using Nvidia Inspector 1.9.7.6.
Uninstall that with DDU. Download http://www.wagnardmobile.com/DDU/download/DDU%20v16.0.0.4.exe , run it, and then run the uninstaller as administrator. Let it restart into Safe Mode with Networking, clean up, and restart again.
Then run 368.69 installer as administrator. Restart.