All RTX series have Tensor cores. But why are you concerned about those? No Tensor cores are used anywhere in BOINC.
Not concerned about the difference for BOINC GPU crunching.
Given the reports of similar BOINC crunching, I was trying to remember the differences between the 1080 and the 2080…
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
I was trying to remember what differences there were besides the name.
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
Each Nvidia generation involves an architectural change, a process node change, and changes in CUDA core count, amount of board memory, and memory generation.
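As a concrete illustration of those generational jumps, here is a quick sketch comparing the two cards Tom mentioned. The spec figures are reference-card numbers quoted from memory, so treat them as approximate:

```python
# Rough generational comparison: GTX 1080 (Pascal) vs RTX 2080 (Turing).
# All figures are reference-card specs quoted from memory; treat as approximate.
cards = {
    "GTX 1080": dict(arch="Pascal", node="16 nm", cores=2560,
                     mem="8 GB GDDR5X", bus_bits=256, gbps=10),
    "RTX 2080": dict(arch="Turing", node="12 nm", cores=2944,
                     mem="8 GB GDDR6", bus_bits=256, gbps=14),
}
for name, c in cards.items():
    # Memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps)
    bw = c["bus_bits"] / 8 * c["gbps"]
    print(f"{name}: {c['arch']} ({c['node']}), {c['cores']} CUDA cores, "
          f"{c['mem']}, ~{bw:.0f} GB/s")
```

By these numbers the 2080 changed the architecture, the process node, the core count, and the memory generation, while keeping the same 8 GB capacity and 256-bit bus.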
At a similar price point, I'm weighing a 3080 against a 4070 Ti. Very different animals: the 4070 Ti has fewer cores and higher clocks, far more cache, but memory throughput quite choked by a 192-bit bus. The 3080 is more memory-friendly, with more cores and lower clocks, but much less cache.
Would the memory handicap of the 4070 Ti make it perform worse on E@H, specifically Gamma-ray on Linux with Petri's special app, or would the 3080 churn better?
Fred
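For what it's worth, the headline numbers behind that trade-off can be sketched like this. These are reference-card figures for the 3080 10 GB and the 4070 Ti, quoted from memory, so treat this as a back-of-the-envelope sketch rather than a benchmark:

```python
# Back-of-the-envelope comparison: RTX 3080 10 GB vs RTX 4070 Ti.
# Reference-card figures quoted from memory; treat as approximate.
def derived(cores, boost_ghz, bus_bits, gbps):
    fp32_tflops = cores * 2 * boost_ghz / 1000  # 2 FLOPs per core per clock (FMA)
    mem_bw = bus_bits / 8 * gbps                # VRAM bandwidth in GB/s
    return fp32_tflops, mem_bw

for name, spec in {"RTX 3080":    (8704, 1.71, 320, 19),
                   "RTX 4070 Ti": (7680, 2.61, 192, 21)}.items():
    tflops, bw = derived(*spec)
    print(f"{name}: ~{tflops:.1f} TFLOPS FP32, ~{bw:.0f} GB/s VRAM bandwidth")
```

By these numbers the 4070 Ti brings roughly a third more raw FP32 compute but only about two thirds of the 3080's VRAM bandwidth, which is exactly the trade-off being asked about.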
There's not really any way to say for sure without someone testing it. Several variables are changing at the same time.
The 4070 Ti can get away with a smaller memory bus width because Gen 5 has double the PCIe speed of Gen 4.
But unless you are running a Gen 5 platform, there is no memory speed advantage; with both GPUs running at Gen 4, the 3080 has the memory advantage.
Until you get real-world examples of BOINC crunching, these are all theoretical suppositions.
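For reference, PCIe link bandwidth does roughly double each generation. For an x16 slot the usable figures work out approximately as follows (a quick sketch from the published line rates):

```python
# Approximate PCIe x16 bandwidth per direction, by generation.
# Gen 3 runs 8 GT/s with 128b/130b encoding; each later generation doubles it.
gen3_x16 = 8 * 16 * (128 / 130) / 8  # GT/s * lanes * encoding / bits-per-byte
for gen in (3, 4, 5):
    print(f"PCIe Gen {gen} x16: ~{gen3_x16 * 2 ** (gen - 3):.1f} GB/s per direction")
```

Note that these figures describe the CPU-to-GPU link, which is a separate path from the GPU's own VRAM bus; the next reply picks up on that distinction.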
I think he's referring to the GPU memory and processor. The PCIe generation doesn't have anything to do with how fast the GPU core communicates with VRAM, and the 40-series cards are all still PCIe Gen 4 anyway.
The Gamma-ray application does put a heavy load on the GPU memory controller, and more bandwidth may be helpful if you have enough cores to drive it. However, the 4070 Ti has memory-speed and GPU-speed advantages over the 3080 that may make up for the reduced VRAM bandwidth.
Still too many variables moving around to be able to say for sure.
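One way to frame that balance is bytes of VRAM bandwidth available per FLOP, using the same assumed reference specs as the sketch above (purely illustrative, not a benchmark):

```python
# Bytes of VRAM bandwidth available per FP32 FLOP, using the same assumed
# reference specs as the earlier sketch (purely illustrative).
cards = {"RTX 3080": (29.8, 760), "RTX 4070 Ti": (40.1, 504)}  # (TFLOPS, GB/s)
for name, (tflops, bw) in cards.items():
    print(f"{name}: ~{bw / (tflops * 1000):.3f} bytes/FLOP")
```

By that crude measure the 3080 can feed each FLOP about twice as many bytes, while the 4070 Ti's much larger L2 cache claws some of that back; hence the "too many variables" verdict.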
Understood. I didn't know if the app was memory-heavy or more processor-heavy. Thanks for the quick reply.
I have an older system with a MoBo sporting PCIe Gen 3, an Intel 6700K.
My mistake. I thought both the latest Nvidia and AMD cards utilized PCIe Gen 5 to match the latest motherboards.