... Unwanted visitors sticking their noses where they don't belong? ...LOLOL!...(not my cat)
Not your cat. Uh-huh, sure.
I actually did consider our two cats when debating whether to leave the case open, but that host is on top of a tall dresser along with my other host, monitor, surge protector, etc., so I think it (and, of course, the kitties) are safe. I took the open-case concept a step further and turned off the case fans, thereby saving an extra 4 W without affecting core temperatures. That arrangement worked so well that I tried the same for my other host, but that did not work at all; it's a different case with different cards, and they ran a lot hotter. So I restored that host's modesty and let only my new host run in the buff.
Ideas are not fixed, nor should they be; we live in model-dependent reality.
I can't think why the BIOS change wouldn't take hold in Windows, though I've only flipped the switch under Linux.
My original XFX RX 570, for which I've never flipped the mining switch, currently reports:
Bios Information: 113-57085STB1-W90
My second card of the same model, which has performed worse than the first, and for which I flipped the switch just before the last boot, reports:
Bios Information: 113-57085SHB2-W90
These differ, which hints that the card noticed I flipped the switch (as does the extra initial delay on the first boot after the change).
Do you have access to BIOS version information on yours?
Good idea. Both my cards, currently running the mining BIOS, have a VBIOS of 113-57045EHD1-M90, as reported by amdgpu-utils. I'm not sure this is the relevant mining/gaming BIOS ID, though, because the TechPowerUp GPU Database entry for that card, https://www.techpowerup.com/vgabios/196582/196582, reports the VBIOS version as 015.050.002.001.000000, which is the same general format as the BIOS versions reported by GPU-Z on Windows systems. (I don't know of a GPU-Z equivalent for Linux, so I can't pull up that alternate BIOS version number for my two cards.)
I don't know what the different BIOS IDs refer to. I suspect that the one we're seeing is more like a vendor-model ID, in which case it wouldn't change with a flip of the switch. I assume the one used by TechPowerUp is the ID for the gaming BIOS, because the memory timing tables listed under BIOS Internals on that page don't list the 1850 MHz that my 'mining' cards are currently running. Also, my two cards are slightly different, XXX vs. Black Edition, with one having a slightly higher "overclock" maximum speed. I would think such a difference would show up in any BIOS that's setting clocks, voltages, and timings.
Where did you get your BIOS IDs from? If not from GPU-Z, then maybe try that utility to compare its alternate BIOS version IDs with what's listed at TechPowerUp.
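(For anyone else on Linux who wants to check this without amdgpu-utils: here's a minimal Python sketch, assuming your kernel's amdgpu driver exposes a vbios_version attribute in sysfs; the paths and attribute name are my assumption and may differ on your system.)
[code]
#!/usr/bin/env python3
# Minimal sketch: print the VBIOS version string for each AMD GPU,
# assuming the amdgpu driver exposes /sys/class/drm/card*/device/vbios_version.
from pathlib import Path

for vbios in sorted(Path("/sys/class/drm").glob("card*/device/vbios_version")):
    card = vbios.parent.parent.name      # e.g. "card0"
    version = vbios.read_text().strip()  # e.g. "113-57045EHD1-M90"
    print(f"{card}: {version}")
[/code]
If your kernel doesn't expose that attribute, the same string should be whatever amdgpu-utils is already reporting.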
Ideas are not fixed, nor should they be; we live in model-dependent reality.
Where did you get your BIOS IDs from? If not from GPU-Z, then maybe try that utility to compare its alternate BIOS version IDs with what's listed at TechPowerUp.
The strings I posted as "Bios Information" were provided directly by Windows, with that designation.
I tried looking at both my cards with both GPU-Z and HWiNFO, and got these reports:
First card (the better performing one, on which I've never flipped the switch):
Micron memory
Video BIOS Version: 015.050.002.001.000000
ASIC Quality: 73.9%
SMU Firmware Version: 2.23.17
Second card (poorer performing in all conditions, switch currently flipped):
Hynix memory
Video BIOS Version: 015.050.002.001.000000
ASIC Quality: 73.3%
SMU Firmware Version: 2.23.17
I guess sometime soon I'll need to flip the switch back and see if any of these reports change.
But first I should just do a simple full power down and reboot.
Footnote: The site software habit of putting extra linefeeds into my posts drives me nuts.
Try holding down shift while hitting enter for a new line instead of a new paragraph.
It's really worthwhile reading the BBCode Help drop-down menu item right at the bottom of this page. The coverage of things you can do is quite extensive. I found the following little gem quite a while ago, and it helped enormously with a number of frustrations I was having with formatting.
Quote:
When using the Rich-text editor, loaded by default, ENTER will place a new paragraph and SHIFT-ENTER will place a new line. There are some BBCode tags that will not work with ENTER (new paragraph), e.g., the [list] tag. In general, when entering a line break within a BBCode tag, please use SHIFT-ENTER.
I actually did consider our two cats when debating whether to leave the case open, but that host is on top of a tall dresser along with my other host, monitor, surge protector, etc., so I think it (and, of course, the kitties) are safe. I took the open-case concept a step further and turned off the case fans, thereby saving an extra 4 W without affecting core temperatures.
Okay, I'm quoting myself, and for a fairly minor contribution, but this does peripherally concern AMD cards.
I noticed that the dual-fan RX 570 cards blow hot air out in pretty much all directions, but more so out the back end of the cards (toward the front of the case). So I flipped the two case fans, reversing the air flow so that it now runs from back to front of the case. Then I slightly upped the fan speed profiles in the UEFI BIOS, replaced the case side panel, et voilà: decent GPU crunching temperatures. Temps are much better than initially, when the case fans were working against the GPU fans. An added benefit is that there's much less GPU heat rising to the CPU. I'm burning a few more watts running the case fans, but, for now, the sense of achievement seems worth it. Out, waste heat, out!
Ideas are not fixed, nor should they be; we live in model-dependent reality.
Those fans seem counter to normal thinking; I'm glad you figured that out!
A set of observations on CPU affinity restrictions:
I recently raised the multiplicity on my second RX 570 from 2X to 3X and got a nice little productivity improvement, with a small enough power consumption increase to make me decide to keep it. While I was fiddling, I decided to spend a couple of days running with the GPU support tasks restricted (by the CPU affinity controls in Process Lasso) to varying numbers of cores on the 6-core, non-hyperthreaded i5-9400F. I averaged each test condition over several hours. I continue to run the RX 570 at a -20% power limit, imposed by MSI Afterburner, and with an Afterburner fan curve that has it reporting a 62C GPU temperature most of the time. This is an XFX-brand RX 570 with the BIOS switch in the "mining" position.
The results were simple: restricting to a single core does noticeable harm, but the other five options are surprisingly similar, with the (slightly) best result observed with three allowed cores (the same as the number of GPU tasks). My long-standing observation that restricting the GPU support tasks to anything less than all available cores always does harm was not borne out. This does support my long-standing advice that it is better to test than just to invoke "known truths" for these settings.
Cores    Average elapsed time (min:sec)
1 31:14
2 30:18
3 30:12
4 30:14
5 30:16
6 30:17
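(For anyone who wants to reproduce this sort of test without Process Lasso: here's a minimal Python sketch, using psutil, of pinning the GPU support processes to a chosen set of cores. The process-name fragment "hsgamma" is my guess at what Einstein's gamma-ray GPU app is called on your host; check your own task list and adjust.)
[code]
# Minimal sketch: pin all processes whose name contains a given substring
# to a chosen set of CPU cores, roughly what a Process Lasso affinity rule does.
import psutil

ALLOWED_CORES = [0, 1, 2]   # three cores, matching the 3X-multiplicity test
NAME_FRAGMENT = "hsgamma"   # assumption: adjust to your GPU app's process name

for proc in psutil.process_iter(["name"]):
    try:
        if NAME_FRAGMENT.lower() in (proc.info["name"] or "").lower():
            proc.cpu_affinity(ALLOWED_CORES)
            print(f"Pinned PID {proc.pid} ({proc.info['name']}) to cores {ALLOWED_CORES}")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # task finished, or we lack permission
[/code]
Unlike Process Lasso, this only touches tasks that are already running, so you'd need to re-run it (or loop it) as new tasks start.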
How does this relate to the number of CPU threads you can specify in the BOINC app_config? Did you vary that along with the affinity, or did you keep the same ratio while testing?
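(For reference, the app_config.xml settings in question: a minimal sketch of what 3X on the GPU with one CPU thread budgeted per task might look like. The app name hsgamma_FGRPB1G is my assumption for Einstein's gamma-ray GPU search; check client_state.xml for the name your host actually uses, and adjust cpu_usage to taste.)
[code]
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
[/code]
The 0.33 GPU per task is what gives the 3X multiplicity. Note that cpu_usage only tells the BOINC scheduler how much CPU to budget per GPU task; it doesn't pin anything to particular cores, which is why the affinity experiment above is a separate knob.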