NPU inclusion?!

KLiK
Joined: 1 Apr 14
Posts: 70
Credit: 463675646
RAC: 1070553
Topic 231362

A few words about the future of tech & NPUs? What do you know? What do you think?

Is NPU support going to be included in E@h? What about some other projects?

Will it be limited to CPU-integrated NPUs (like Intel's or AMD's), or will it include some other NPU devices, such as Google's Tensor & Coral TPU/NPU or Falcon cards?

Let's start the discussion here...

Ian&Steve C.
Joined: 19 Jan 20
Posts: 4045
Credit: 48060742094
RAC: 34557176

An NPU is designed to do neural network operations, like inferencing. This project does not use neural networks, so an NPU would not be useful here. It's like the Tensor or RT cores on your GPU: they're there, but they won't be used.


Tom M
Joined: 2 Feb 06
Posts: 6577
Credit: 9654131365
RAC: 2853071

If e@h were to start using AI inference for data analysis, you might see it used here. But it would require significant retooling.

Unless it became "turnkey", it doesn't sound likely.

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Filipe
Joined: 10 Mar 05
Posts: 186
Credit: 412587590
RAC: 251827

Tom M wrote:

If e@h were to start using AI inference for data analysis, you might see it used here. But it would require significant retooling.

Unless it became "turnkey", it doesn't sound likely.

What about the Nvidia Tensor cores? Would they be useful at e@h if the software could be rewritten to make use of them?

mikey
Joined: 22 Jan 05
Posts: 12779
Credit: 1865307499
RAC: 1721802

Filipe wrote:

Tom M wrote:

If e@h were to start using AI inference for data analysis, you might see it used here. But it would require significant retooling.

Unless it became "turnkey", it doesn't sound likely.

What about the Nvidia Tensor cores? Would they be useful at e@h if the software could be rewritten to make use of them?

Sounds like an answer Petri could help with; he's VERY good at optimizing the crunching software for GPUs.

Ian&Steve C.
Joined: 19 Jan 20
Posts: 4045
Credit: 48060742094
RAC: 34557176

For most projects, Tensor cores won't be used unless support is coded into the app (and it makes sense for the app to use them). Like an NPU, Tensor cores are just another kind of ASIC: they can do one kind of operation, matrix FMAs. Most GPU apps won't need this; it's more for ML and AI work. I think only a couple of the GPUGRID apps would use this. No other projects that I know of.
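
To make "one kind of operation" concrete, here's a minimal sketch of a single matrix-FMA tile on Tensor cores via CUDA's WMMA API. This is purely illustrative; the kernel name, the 16x16x16 tile shape, and the half/float types are assumptions, not anything E@h actually ships:

    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // One warp computes a 16x16x16 tile of D = A*B + C on Tensor cores
    // (needs sm_70 or newer; launch with a single warp of 32 threads).
    __global__ void wmma_tile(const half *a, const half *b,
                              const float *c, float *d) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::load_matrix_sync(fa, a, 16);                        // A tile
        wmma::load_matrix_sync(fb, b, 16);                        // B tile
        wmma::load_matrix_sync(acc, c, 16, wmma::mem_row_major);  // C accumulator
        wmma::mma_sync(acc, fa, fb, acc);                         // the matrix FMA itself
        wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major); // write D
    }

That really is all they do: load tiles, multiply-accumulate, store. Everything else an app computes still runs on the ordinary CUDA cores, which is why bolting this on rarely helps a non-ML workload.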
 

Unless the project wants to move to AI-based signal detection, and rewrite all their apps, and change their whole workflow, AND have a working/reliable trained model… I just don't see this being a feasible or realistic option. They have little enough time and resources as it is.


petri33
Joined: 4 Mar 20
Posts: 124
Credit: 4172389169
RAC: 4321325

mikey wrote:

Filipe wrote:

Tom M wrote:

If e@h were to start using AI inference for data analysis, you might see it used here. But it would require significant retooling.

Unless it became "turnkey", it doesn't sound likely.

What about the Nvidia Tensor cores? Would they be useful at e@h if the software could be rewritten to make use of them?

Sounds like an answer Petri could help with; he's VERY good at optimizing the crunching software for GPUs.

NVIDIA could train a model to recognize a CUDA code pattern, a CPU/GPU I/O pattern, or a certain compiler output that would wake up a special AI phase for compiler-chain-assisted AI optimizations (ccaiop) to automagically produce SASS code that uses Tensor cores.

--

petri33

Boca Raton Community HS
Joined: 4 Nov 15
Posts: 264
Credit: 10875494223
RAC: 9418387

petri33 wrote:

mikey wrote:

Filipe wrote:

Tom M wrote:

If e@h were to start using AI inference for data analysis, you might see it used here. But it would require significant retooling.

Unless it became "turnkey", it doesn't sound likely.

What about the Nvidia Tensor cores? Would they be useful at e@h if the software could be rewritten to make use of them?

Sounds like an answer Petri could help with; he's VERY good at optimizing the crunching software for GPUs.

NVIDIA could train a model to recognize a CUDA code pattern, a CPU/GPU I/O pattern, or a certain compiler output that would wake up a special AI phase for compiler-chain-assisted AI optimizations (ccaiop) to automagically produce SASS code that uses Tensor cores.

--

petri33

 

Sounds simple! (I jest)

Ian&Steve C., do some of the current GPUGRID apps actually use Tensor cores?

Also, is there a way to tell if Tensor cores are being actively used, other than the word of the app creator?

Ian&Steve C.
Joined: 19 Jan 20
Posts: 4045
Credit: 48060742094
RAC: 34557176

Boca Raton Community HS wrote:

Sounds simple! (I jest)

Ian&Steve C., do some of the current GPUGRID apps actually use Tensor cores?

Also, is there a way to tell if Tensor cores are being actively used, other than the word of the app creator?



They are supposed to be used, as the underlying software packages are supposed to be able to use them.
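
For a sense of what "the underlying packages use them" can look like, here is a minimal sketch of a host app opting into Tensor-core math through cuBLAS (cuBLAS 11+ naming; error handling omitted, and whether Tensor cores actually engage still depends on the GEMM sizes, data types, and GPU architecture):

    #include <cublas_v2.h>

    int main() {
        cublasHandle_t handle;
        cublasCreate(&handle);

        // Allow FP32 GEMMs on this handle to be routed through
        // TF32 Tensor-core paths instead of the regular FP32 units.
        cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

        // ... subsequent cublasSgemm / cublasGemmEx calls may now
        // execute on Tensor cores, with no WMMA code in the app.

        cublasDestroy(handle);
        return 0;
    }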

The only way to see Tensor core usage would be to use the nvprof profiler in the CUDA toolkit. I haven't gotten around to trying it yet.
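
For anyone who wants to try, a sketch of what that check might look like from the command line. Here ./app stands in for whatever binary you're profiling, and the metric names vary by GPU architecture and toolkit version:

    # nvprof (Volta/Turing-era toolkits): Tensor-core pipe utilization, scale 0-10
    nvprof --metrics tensor_precision_fu_utilization ./app

    # Newer toolkits replace nvprof with Nsight Compute:
    ncu --metrics sm__inst_executed_pipe_tensor.sum ./app

A nonzero value on the Tensor metric would mean the kernels really are issuing Tensor-core instructions, independent of the app creator's word.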

