NPU inclusion?!

KLiK
Joined: 1 Apr 14
Posts: 113
Credit: 530675491
RAC: 1220298
Topic 231362

A few words about the future of tech & NPUs? What do you know? What do you think?

Is NPU support going to be included in E@H? What about some other projects?

Will it be limited to CPU-integrated NPUs (like Intel's or AMD's), or will it include other NPU devices, such as Google's Tensor & Coral TPU/NPU, or Falcon cards?

Let's start the discussion here...

Ian&Steve C.
Joined: 19 Jan 20
Posts: 4158
Credit: 50297708210
RAC: 42319579

An NPU is designed to do

An NPU is designed to do neural network operations, like inferencing. This project does not use neural networks, so an NPU would not be useful here. It's like the Tensor or RT cores on your GPU: they're there, but they won't be used.
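For context, "neural network operations" here means chains of matrix multiplies plus simple nonlinearities. A minimal NumPy sketch of one inference step (illustrative only; the layer sizes and weights are made up, and E@H's actual signal-processing pipeline looks nothing like this):

```python
import numpy as np

# One fully-connected layer during inference: the kind of workload
# an NPU accelerates. All values below are hypothetical.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # hypothetical weight matrix
b = rng.standard_normal(4)        # hypothetical bias vector
x = rng.standard_normal(8)        # hypothetical input vector

y = np.maximum(W @ x + b, 0.0)    # matrix multiply + bias + ReLU

print(y.shape)  # (4,)
```

An NPU is hardware specialized for exactly this multiply-accumulate pattern, which is why it sits idle for workloads built on other math.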

_________________________________________________________________________

Tom M
Joined: 2 Feb 06
Posts: 6894
Credit: 9818712834
RAC: 3703942

If e@h were to start using AI

If e@h were to start using AI inference for data analysis you might see it used here. But it would require significant retooling.

Unless it became "turnkey" it doesn't sound likely.

A Proud member of the O.F.A.  (Old Farts Association).

Filipe
Joined: 10 Mar 05
Posts: 186
Credit: 423905640
RAC: 227090

Tom M wrote:If e@h were to

Tom M wrote:

If e@h were to start using AI inference for data analysis you might see it used here. But it would require significant retooling.

Unless it became "turnkey" it doesn't sound likely.

What about NVIDIA Tensor Cores? Would they be useful at E@H if the software could be rewritten to make use of them?

mikey
Joined: 22 Jan 05
Posts: 12958
Credit: 1884509265
RAC: 19062

Filipe wrote: Tom M

Filipe wrote:

Tom M wrote:

If e@h were to start using AI inference for data analysis you might see it used here. But it would require significant retooling.

Unless it became "turnkey" it doesn't sound likely.

What about the nvidia Tensor Cores? Would they be useful at e@h if the software could be rewritten to make use of them? 

Sounds like a question Petri could help with; he's VERY good at optimizing the crunching software for GPUs.

Ian&Steve C.
Joined: 19 Jan 20
Posts: 4158
Credit: 50297708210
RAC: 42319579

For most projects Tensor

For most projects, Tensor cores won't be used unless support is coded into the app (and it makes sense for the app to use them). Like an NPU, Tensor cores are just another kind of ASIC: they can do one kind of operation, a matrix FMA (fused multiply-add). Most GPU apps won't need this; it's mostly for ML and AI workloads. I think only a couple of the GPUGRID apps would use them. No other projects that I know of.
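For reference, the one operation Tensor cores accelerate is a matrix fused multiply-add, D = A·B + C, on small fixed-size tiles. A NumPy sketch of the math (not the hardware; the 16×16 tile size and the precision mix are assumptions for illustration, and real tile shapes vary by GPU generation):

```python
import numpy as np

# Tensor cores compute a matrix fused multiply-add, D = A @ B + C,
# on small fixed-size tiles. Typical usage: low-precision inputs,
# higher-precision accumulator.
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 16)).astype(np.float16)  # low-precision input tile
B = rng.standard_normal((16, 16)).astype(np.float16)  # low-precision input tile
C = rng.standard_normal((16, 16)).astype(np.float32)  # float32 accumulator tile

# The whole multiply-and-accumulate happens as one fused operation.
D = A.astype(np.float32) @ B.astype(np.float32) + C

print(D.shape)  # (16, 16)
```

An app whose inner loop is not dominated by dense matrix products gets nothing from this unit, which is the point being made above.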
 

Unless the project wants to move to AI-based signal detection, and rewrite all their apps, and change their whole workflow, AND have a working/reliable trained model… I just don't see this being a feasible or realistic option. They have little enough time and resources as it is.

_________________________________________________________________________

petri33
Joined: 4 Mar 20
Posts: 129
Credit: 4491332454
RAC: 5772433

mikey wrote: Filipe

mikey wrote:

Filipe wrote:

Tom M wrote:

If e@h were to start using AI inference for data analysis you might see it used here. But it would require significant retooling.

Unless it became "turnkey" it doesn't sound likely.

What about the nvidia Tensor Cores? Would they be useful at e@h if the software could be rewritten to make use of them? 

Sounds like a question Petri could help with; he's VERY good at optimizing the crunching software for GPUs.

NVIDIA could train a model to recognize a CUDA code pattern, a CPU/GPU I/O pattern, or a certain compiler output that would wake up a special AI phase for compiler-chain-assisted AI optimizations (ccaiop) to automagically produce SASS code that uses Tensor cores.

--

petri33

Boca Raton Community HS
Joined: 4 Nov 15
Posts: 303
Credit: 11575278840
RAC: 14646424

petri33 wrote: mikey

petri33 wrote:

mikey wrote:

Filipe wrote:

Tom M wrote:

If e@h were to start using AI inference for data analysis you might see it used here. But it would require significant retooling.

Unless it became "turnkey" it doesn't sound likely.

What about the nvidia Tensor Cores? Would they be useful at e@h if the software could be rewritten to make use of them? 

Sounds like a question Petri could help with; he's VERY good at optimizing the crunching software for GPUs.

NVIDIA could train a model to recognize a CUDA code pattern, a CPU/GPU I/O pattern, or a certain compiler output that would wake up a special AI phase for compiler-chain-assisted AI optimizations (ccaiop) to automagically produce SASS code that uses Tensor cores.

--

petri33

 

Sounds simple! (I jest)

Ian&Steve C- do some of the current GPUGRID apps actually use tensor cores? 

Also, is there a way to tell if tensor cores are being actively used, other than the word of the app creator?

Ian&Steve C.
Joined: 19 Jan 20
Posts: 4158
Credit: 50297708210
RAC: 42319579

Boca Raton Community HS

Boca Raton Community HS wrote:

Sounds simple! (I jest)

Ian&Steve C- do some of the current GPUGRID apps actually use tensor cores? 

Also, is there a way to tell if tensor cores are being actively used, other than the word of the app creator?



They are supposed to be used, since the underlying software packages are supposed to be able to use them.

The only way to see Tensor core usage would be to use the nvprof profiler in the CUDA toolkit. I haven't gotten around to trying it yet.

_________________________________________________________________________
