Re: Tesla V100-SXM2-16GB

Posted: Fri Oct 27, 2017 4:05 pm
by csvanefalk
foldy wrote:Oh my god it's Nvidia Volta: 5120 shaders, 15 TFlops, ~1700k PPD
That's really impressive. My 1080Ti maxes out at 1.5M ppd.

Re: Tesla V100-SXM2-16GB

Posted: Tue Nov 07, 2017 3:54 am
by Luscious
Thinkmate is selling V100 rackmount systems for purchase right now, including a 4U 2P 10 GPU variant. 17 million PPD out of a single box :eo :eo :eo That's more than what most TEAMS make.

http://www.thinkmate.com/system/gpx-xt24-24s1-10gpu

Re: Tesla V100-SXM2-16GB

Posted: Thu Nov 09, 2017 10:42 pm
by 84036980
FAH should work, but I'm getting errors when I add it to the GPU list file manually.
I just want it to be officially supported ASAP.


FAHBench works. For your reference:

Loading plugins from plugin directory
Number of registered plugins: 3
Deserializing input files: system
Deserializing input files: state
Deserializing input files: integrator
Creating context (may take several minutes)
Checking accuracy against reference code
Creating reference context (may take several minutes)
Comparing forces and energy
Starting Benchmark

Benchmarking finished
Final score: 230.1101
Scaled score: 230.1101 (23558 atoms)
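(Aside: FAHBench's scaled score appears to normalize the raw score by the atom count of the standard 23,558-atom DHFR benchmark system, which would explain why the two numbers match exactly in the output above. A minimal sketch of that assumed scaling, not taken from FAHBench's source:)

```python
# Assumed FAHBench scaling: scaled = raw * (atoms / 23558), where 23558 is
# the atom count of the standard DHFR benchmark system. This formula is an
# assumption inferred from the output above, not confirmed from the source.
REFERENCE_ATOMS = 23558

def scaled_score(raw_score: float, atoms: int) -> float:
    """Scale a raw FAHBench score by system size (assumed formula)."""
    return raw_score * atoms / REFERENCE_ATOMS

# The run above used the 23558-atom system, so raw and scaled match.
print(scaled_score(230.1101, 23558))  # ~230.1101
```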

Re: Tesla V100-SXM2-16GB

Posted: Thu Nov 09, 2017 11:13 pm
by bruce
You cannot add to GPUs.txt manually -- the server's copy must match.

Run fahclient --lspci or obtain the lspci identifiers elsewhere and post them here.

At the present time, FAH uses OpenCL, so you're going to be limited to what can be done with OpenCL. OpenMM is not written to support tensor math, so performance will be reduced to whatever can be done with the CUDA cores. My guess is that FAH won't load up that many CUDA cores simultaneously, either.

What version of CUDA is installed and what version of OpenCL is supported?

Please describe your hardware.

Re: Tesla V100-SXM2-16GB

Posted: Fri Nov 10, 2017 7:47 am
by foldy
@Luscious: Price for the rack with 10 Nvidia Teslas: only $100,000

Re: Tesla V100-SXM2-16GB

Posted: Sun Nov 12, 2017 1:21 pm
by toTOW
I added 0x1db1 / GV100 [Tesla V100 SXM2] and 0x1db4 / GV100 [Tesla V100 PCIe] to the GPU.txt file ... let us know if something goes wrong ...

Re: Tesla V100-SXM2-16GB

Posted: Sun Nov 12, 2017 8:45 pm
by 84036980
it's working now : )

Thank you guys,

Re: Tesla V100-SXM2-16GB

Posted: Mon Nov 13, 2017 11:37 am
by foldy
Can you post some PPD numbers? I guess current work units are too small for the Tesla, so you may not get more than 1M PPD currently.

Re: Tesla V100-SXM2-16GB

Posted: Mon Nov 13, 2017 5:56 pm
by 84036980

Re: Tesla V100-SXM2-16GB

Posted: Thu Jun 28, 2018 9:31 pm
by icemanncsu
Burning off some AWS EC2 credit since it's the end of the month; it would have expired otherwise :) . Right now AWS EC2 spot instances in USE1 are $7.80/hour; on-demand is normally $24.

Forgot to mention this is a single p3.16xlarge instance.

57 CPUs at 2.7 GHz & 8 x Tesla V100 16GB

15.5M PPD

Full size image here -> https://ibb.co/dTWduo
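(Putting the figures in this post together, a quick back-of-the-envelope comparison of points earned per dollar at spot vs. on-demand pricing:)

```python
# Figures from the post above: 15.5M PPD on one p3.16xlarge,
# $7.80/hr spot vs. $24/hr on-demand.
PPD = 15_500_000
SPOT_PER_HOUR = 7.80
ONDEMAND_PER_HOUR = 24.00

def points_per_dollar(ppd: float, hourly_rate: float) -> float:
    """Points earned per dollar spent: daily points / daily instance cost."""
    return ppd / (hourly_rate * 24)

print(round(points_per_dollar(PPD, SPOT_PER_HOUR)))      # spot pricing
print(round(points_per_dollar(PPD, ONDEMAND_PER_HOUR)))  # on-demand pricing
```

Spot pricing works out to roughly three times as many points per dollar as on-demand.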

Re: Tesla V100-SXM2-16GB

Posted: Sat Jun 30, 2018 4:03 pm
by foldy
That's some folding power! Be sure to have enough CPUs left to feed the GPUs.