Why are Nvidia's tensor cores so powerful?

Moderators: Site Moderators, FAHC Science Team

GTX56O
Posts: 55
Joined: Tue Mar 13, 2012 11:25 am
Hardware configuration: rx570 itx 4gb no oc

Why are Nvidia's tensor cores so powerful?

Post by GTX56O »

The RTX 4090 has tensor cores like the H100: https://www.techpowerup.com/299092/nvid ... t-a-glance

And this is very useful for chemical sequences, for example on page 70 of the white paper:

https://resources.nvidia.com/en-us-tensor-core

https://en.wikipedia.org/wiki/Smith%E2% ... _algorithm
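As an illustration of the kind of sequence workload the Smith-Waterman link refers to, here is a minimal plain-CPU sketch in Python; the scoring values (match/mismatch/gap) are illustrative choices of mine, not taken from any of the linked pages:

```python
# Minimal Smith-Waterman local alignment (CPU sketch).
# Scoring values are illustrative, not canonical.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # H[i][j] = best local alignment score ending at a[i-1], b[j-1]
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best  # score of the best local alignment

print(smith_waterman("GGTTGACTA", "TGTTACGG"))
```

The inner loop is a dense grid of independent max/add operations, which is why this algorithm maps so well to massively parallel GPU hardware.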

The RTX 4090 is too expensive for my budget, but I am sure they wanted guaranteed sales after the pandemic, because it is a big jump in power and ATI has been pushed aside.

It is curious how price speculation, driven by the low supply and high prices of the pandemic, left a large stock unsold, and the previous generations are now being given away at bargain prices.

I hope that in the future they make two versions of each card, one air-cooled and one liquid-cooled, because air cooling makes the cards disproportionately large.

This could be dangerous: if ATI/AMD does not release anything decent this year that can compete with Nvidia, Nvidia will have a monopoly and will charge whatever it wants for its cards. Neither Nvidia nor the intermediaries who speculate on supply and demand are known for keeping prices low.

AMD is wasting time developing CPUs with more threads, because the GPU already fulfills that function. I think that, since they cannot compete with Nvidia's graphics cards, they are opening up the CPU market instead, which is very ambitious because it means they also have to compete with Intel.
Last edited by GTX56O on Fri Feb 03, 2023 7:36 pm, edited 3 times in total.
Joe_H
Site Admin
Posts: 7854
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Why are Nvidia's tensor cores so powerful?

Post by Joe_H »

The tensor cores may speed up some calculations, but as implemented by Nvidia they are based on 16-bit floating-point math and are not as accurate. F@h uses 32-bit calculations, and for some critical ones it uses 64-bit floating point, so the tensor cores are currently not useful for F@h. The very large number of shader cores in a 4090 can still give impressive speedups if the molecular system being simulated is large enough.
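To make the precision point concrete, here is a small Python sketch. It uses the standard library's `struct` support for IEEE 754 half precision (the `'e'` format code), not actual tensor-core hardware, so it only illustrates the 16-bit rounding behavior, not F@h's real code path:

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE 754 half precision (binary16)
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Accumulate a small increment ten thousand times in each precision.
total16 = 0.0
for _ in range(10000):
    total16 = to_fp16(total16 + to_fp16(0.0001))

total64 = 0.0
for _ in range(10000):
    total64 += 0.0001

print(total16)  # stalls far below the true sum: once the running total
                # grows, half precision can no longer represent the
                # small increment and each addition rounds away to nothing
print(total64)  # close to 1.0 in 64-bit floating point
```

The same effect, accumulated over millions of force calculations per simulation step, is why a folding run in FP16 can drift away from the physically correct trajectory.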

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
GTX56O
Posts: 55
Joined: Tue Mar 13, 2012 11:25 am
Hardware configuration: rx570 itx 4gb no oc

Re: Why are Nvidia's tensor cores so powerful?

Post by GTX56O »

https://www.youtube.com/watch?v=jiMZYJ--cT8
https://developer.nvidia.com/hpc-applic ... erformance

NAMD (Molecular Dynamics)
Designed for high-performance simulation of large molecular systems
Version: GPU, AMD CPU V 3.0a13; Intel CPU V 2.15a AVX512

Nowadays, molecular dynamics simulation applications such as AMBER, GROMACS, NAMD, and LAMMPS:
https://www.azken.com/blog/sistemas-rec ... molecular/
http://www.mdtutorials.com/gmx/index.html

https://www.youtube.com/watch?v=rYZ1p5l ... haelPapili
https://www.youtube.com/watch?v=DH25pKy ... nformatics

Why does the RX 7900 XTX only get 4 million points when the RTX 4090 gets 30 million, even though they cost the same?

Although Nvidia has implemented CUDA FP16 support on consumer gaming cards, it seems they have aimed the Tensor Cores at artificial intelligence in data centers. I don't understand why they have abandoned us like that. What could be done at the software level?

Do home folding applications need 32-bit or 64-bit double precision to be accurate?

Could AlphaFold reduce processor time?

https://www.youtube.com/watch?v=mTjYvIU ... nformatics
https://www.youtube.com/watch?v=lLFEqKl3sm4
https://www.youtube.com/watch?v=Uz7ucmqjZ08
https://www.youtube.com/watch?v=f8FAJXPBdOg