New Tesla V and Tensor Cores

Moderators: slegrand, Site Moderators, PandeGroup

New Tesla V and Tensor Cores

Postby Kuno » Fri Dec 08, 2017 4:54 am

With the new Tesla V being released, and the new Tensor Cores that are on the card, I was wondering whether Folding@home will be able to take full advantage of this card, or would I be "wasting" 3 grand in purchasing it? The card is designed for deep learning, so I am hoping that Folding@home would be able to utilize the card to its fullest potential. It has been thrown around that Stanford wants to go exascale, and, well, this card would allow that to happen.
Kuno
 
Posts: 28
Joined: Sat Sep 23, 2017 4:59 pm

Re: New Tesla V and Tensor Cores

Postby foldy » Fri Dec 08, 2017 9:24 am

https://www.nvidia.com/en-us/titan/titan-v/

It gets about 1,700k PPD with the current smaller work units like project 9415:
viewtopic.php?f=83&t=30278&start=15

I can get the same PPD with 2x GTX 1080 for $1000, so PPD per dollar is bad.

But the Titan V only uses 250 watts while 2x GTX 1080 use 400 watts, so PPD per watt is good.
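
A quick back-of-the-envelope check on that, if anyone wants the numbers side by side (just rough figures; the $3000 / $1000 prices and the wattages are the ones mentioned above, and PPD varies a lot by project):

Code: Select all
titan_v  = {"ppd": 1_700_000, "price": 3000, "watts": 250}
two_1080 = {"ppd": 1_700_000, "price": 1000, "watts": 400}

for name, card in (("Titan V", titan_v), ("2x GTX 1080", two_1080)):
    print(name, round(card["ppd"] / card["price"]), "PPD/$,",
          round(card["ppd"] / card["watts"]), "PPD/W")

# Titan V 567 PPD/$, 6800 PPD/W
# 2x GTX 1080 1700 PPD/$, 4250 PPD/W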
foldy
 
Posts: 1075
Joined: Sat Dec 01, 2012 3:43 pm

Re: New Tesla V and Tensor Cores

Postby Kuno » Fri Dec 08, 2017 4:06 pm

Holy Jebus. So not really worth it, yet. It's great to see that on project 11431 it's doing almost twice the PPD, at 2.6 million, but it does appear that Stanford will need to make much larger work units to harness the power of this next generation of cards before they can actually be used properly. It even feels as though Pascal is severely underused, as my 1080s and 1080 Ti constantly score about the same on the 94xx work units.

Thanks for the information foldy!
Kuno
 
Posts: 28
Joined: Sat Sep 23, 2017 4:59 pm

Re: New Tesla V and Tensor Cores

Postby JimboPalmer » Fri Dec 08, 2017 7:40 pm

Kuno wrote:With the new Tesla V being released, and the new Tensor Cores that are on the card, I was wondering whether Folding@home will be able to take full advantage of this card, or would I be "wasting" 3 grand in purchasing it? The card is designed for deep learning, so I am hoping that Folding@home would be able to utilize the card to its fullest potential. It has been thrown around that Stanford wants to go exascale, and, well, this card would allow that to happen.


My understanding is that the tensor cores are faster, simpler cores; the trade-off between precision and speed will get tricky. If there is a lot of low-precision math needed, they could be programmed for it.

GROMACS is an open-source project; its developers will be the first to decide whether Tensor Cores can help speed up simulations. http://www.gromacs.org/About_Gromacs

The Pande Group hires Cauldron Development to integrate new GROMACS code into Folding@home cores. Once the Pande Group releases a new core, individual researchers will begin using it in all-new projects. Many projects are continuations of existing research and need to use the same software tools, so we have both a4 and a7 cores on the CPU side and Cores 17, 18 and 21 on the GPU side.

Open-source coding is never fast, as it is often a hobby. Pande does not adopt every new GROMACS version, and testing and integration take time. I expect the first Volta-aware core within 4 years.

Core_11 was fairly recently retired; it ran on Nvidia 8xx0, 9xx0 and GTX 2x0 GPUs. Core_15 ran on GTX 2x0 GPUs and its WUs just ran out. Core_17 runs slower on GTX 4x0 cards than Core_15 did, but the GTX 4x0 still runs Core_21. So changes that need new features are slow to roll out and are met with disappointment by owners of previous GPUs.

Unless code is added to match GPU power to molecule size, many older cards will run the code, but never finish in time.

[The above is my understanding, and not an official statement by someone in charge. I am not in any way affiliated with F@H; it is just a hobby for my computers.]
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
JimboPalmer
 
Posts: 611
Joined: Mon Feb 16, 2009 4:12 am
Location: Greenwood MS USA

Re: New Tesla V and Tensor Cores

Postby Nathan_P » Fri Dec 08, 2017 9:25 pm

Volta has been available to purchase for several months. IIRC Nvidia is one of F@H's partners, so it would not surprise me if there was a V100 being tested in the lab already. The developers were working on several updates to cores and clients earlier in the year, and everything has gone quiet since, so I'm sure something is in the works.
Nathan_P
 
Posts: 1397
Joined: Wed Apr 01, 2009 9:22 pm
Location: Jersey, Channel islands

Re: New Tesla V and Tensor Cores

Postby Kuno » Fri Dec 08, 2017 10:40 pm

Well, I know Core_22 was supposed to be released a while ago and it hasn't been released yet. It's just so frustrating seeing my 1080 Ti being so underutilized, and thinking of buying a Volta card and realizing that it too would be even more underutilized is just aggravating. We need better work units that can take advantage of these new, more powerful cards. When my 1080 and 1080 Ti do the same work units at the same TPF, but my 1080 Ti has 15-20% less GPU usage, you know there is a serious lack of optimization. I imagine with Volta it would be even worse.
Kuno
 
Posts: 28
Joined: Sat Sep 23, 2017 4:59 pm

Re: New Tesla V and Tensor Cores

Postby rwh202 » Sat Dec 09, 2017 9:54 am

Nathan_P wrote:Volta has been available to purchase for several months. IIRC Nvidia is one of F@H's partners, so it would not surprise me if there was a V100 being tested in the lab already. The developers were working on several updates to cores and clients earlier in the year, and everything has gone quiet since, so I'm sure something is in the works.


That's optimistic!

When Maxwell was released it took months to even get it to fold, never mind optimise it. Pascal is still under-utilised. Also, I think it's been three and a half years since a client release (besides two flaky betas). If there is software development going on, it's too slow to keep up with progress everywhere else in the chain.
rwh202
 
Posts: 320
Joined: Mon Nov 15, 2010 8:51 pm
Location: South Coast, UK

Re: New Tesla V and Tensor Cores

Postby foldy » Sat Dec 09, 2017 5:33 pm

My guess is that when we see the GTX 2080/1180 in mid-2018, a new FahCore_22 could get released. But optimization may only be for scientific capabilities, not maximum GPU usage.
foldy
 
Posts: 1075
Joined: Sat Dec 01, 2012 3:43 pm

Re: New Tesla V and Tensor Cores

Postby JimboPalmer » Sat Dec 09, 2017 8:38 pm

JimboPalmer wrote:My understanding is that the tensor cores are faster, simpler cores; the trade-off between precision and speed will get tricky. If there is a lot of low-precision math needed, they could be programmed for it.

Given ideal math needs, tensor cores could be 9 times as fast as the current code at floating point.
However, they also have restrictions. A tensor core always does a 4x4 16-bit matrix multiply, adds the result to a 4x4 array of 16-bit or 32-bit values, and produces a 4x4 32-bit floating-point result.

Are there parts of GROMACS where 16-bit accuracy is all F@H needs? I don't know. How often is the data set 4x4? I don't know.

We know that most of F@H is currently 32 bits, and that it occasionally needs 64 bits, but right now 32 bits is the fastest precision. If 16 bits is 4 to 9 times faster, then it makes sense to see how often it could be used. It won't be used without reprogramming GROMACS, and it won't be used in F@H without a newer version of GROMACS.

https://devblogs.nvidia.com/parallelfor ... ores-cuda-
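
Very roughly, in code terms (this is only a numpy emulation of the data types a single tensor core operation works with, not real tensor core code; the real thing is programmed through CUDA per the link above, and the 4x4 sizes are the hardware's, not anything F@H uses today):

Code: Select all
import numpy as np

# One tensor core op computes D = A x B + C: A and B are 4x4 FP16 matrices,
# C and D are 4x4 accumulators held in FP32. CPU emulation for illustration only.
rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)  # float32 (4, 4)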

[Currently all GPU cores use OpenCL, so they run on both AMD and Nvidia. If, to use tensor cores, you have to use CUDA, which is Nvidia-only, the Pande Group will have to support more cores, which slows development.]
JimboPalmer
 
Posts: 611
Joined: Mon Feb 16, 2009 4:12 am
Location: Greenwood MS USA

Re: New Tesla V and Tensor Cores

Postby Nathan_P » Sat Dec 09, 2017 9:22 pm

Core_22 was rumored to have a CUDA variant; maybe this was why.
Nathan_P
 
Posts: 1397
Joined: Wed Apr 01, 2009 9:22 pm
Location: Jersey, Channel islands

Re: New Tesla V and Tensor Cores

Postby toTOW » Sun Dec 10, 2017 11:23 am

A little mistake: GPU cores use OpenMM, not GROMACS.

In my opinion, anything related to AI (tensor cores and the like) won't be usable for sensitive computations like FAH's, because they mostly use what is called half precision (FP16), which is not enough for our computations. FAH uses single precision (FP32) for 90% of its computations, and when that's not enough, some parts of the code can use double precision (FP64); that combination is called mixed precision in OpenMM.
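
As a rough illustration of the precision gap (a generic numpy sketch of naive accumulation, not OpenMM code; the 1e-3 contributions and the count are made-up numbers):

Code: Select all
import numpy as np

# Relative spacing of representable numbers at each precision.
for dt in (np.float16, np.float32, np.float64):
    print(np.dtype(dt).name, "eps =", np.finfo(dt).eps)
# float16 eps ~ 1e-3, float32 eps ~ 1.2e-7, float64 eps ~ 2.2e-16

# Naively accumulate 100,000 small contributions of 1e-3 (exact answer: 100.0).
vals = np.full(100_000, 1e-3)
for dt in (np.float16, np.float32, np.float64):
    total = dt(0)
    for x in vals.astype(dt):
        total = total + x
    print(np.dtype(dt).name, "sum =", float(total))
# float16 stalls far below 100; float32 lands close; float64 is essentially exact.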
Folding@Home beta tester since 2002. Folding Forum moderator since July 2008.

FAH-Addict : latest news, tests and reviews about Folding@Home project.

toTOW
Site Moderator
 
Posts: 8376
Joined: Sun Dec 02, 2007 10:38 am
Location: Bordeaux, France

Re: New Tesla V and Tensor Cores

Postby foldy » Sun Dec 10, 2017 12:06 pm

I guess we don't need the tensor cores, as the pure power of the shaders is amazing. And the Tesla and Titan V are pro cards. The consumer cards, GTX 1180 or 2080, will not have tensor cores, and those are what most home users will buy.
foldy
 
Posts: 1075
Joined: Sat Dec 01, 2012 3:43 pm

Re: New Tesla V and Tensor Cores

Postby toTOW » Sun Dec 10, 2017 12:47 pm

I'm a bit surprised by NV's move on the Titan V. Titan cards have always sat between the high-end gaming cards and the low-end professional ones, but they used to use GPUs that lacked most of the compute features (especially full-speed double precision). So I thought the Titan Volta would feature a GV102 without tensor cores and DP (like any other gaming GPU).

It seems like NV decided that the GV100 was the one that would fit the Titan Volta without modifications. Which leads to another rumour I saw: are we going to see gaming versions of the Volta architecture (GV10x) for a 20xx series, or is Nvidia skipping that step entirely?
toTOW
Site Moderator
 
Posts: 8376
Joined: Sun Dec 02, 2007 10:38 am
Location: Bordeaux, France

Re: New Tesla V and Tensor Cores

Postby rwh202 » Sun Dec 10, 2017 12:52 pm

toTOW wrote:It seems like NV decided that the GV100 was the one that would fit the Titan Volta without modifications. Which leads to another rumour I saw: are we going to see gaming versions of the Volta architecture (GV10x) for a 20xx series, or is Nvidia skipping that step entirely?


Yep, maybe no Volta for GeForce and instead Ampere...
rwh202
 
Posts: 320
Joined: Mon Nov 15, 2010 8:51 pm
Location: South Coast, UK

Re: New Tesla V and Tensor Cores

Postby foldy » Sun Dec 10, 2017 4:27 pm

I have also heard the Ampere name, and my guess is it means no tensor cores and GDDR6 instead of HBM2, maybe late 2018.
foldy
 
Posts: 1075
Joined: Sat Dec 01, 2012 3:43 pm

