
Re: Nvidia Titan V

Posted: Sun Jan 07, 2018 1:27 am
by Joe_H
The Nvidia Tesla M2090 is based on a six-year-old Fermi chip, the GF110. The closest comparable consumer-level card would be a GTX 580, but the Tesla runs at a slower clock.

Re: Nvidia Titan V

Posted: Sun Jan 07, 2018 4:04 am
by AndyE
Update:
Used MSI Afterburner to add 100 MHz to the core clock.
Now at 1537 MHz.
Unit 9431 is now at 1165k PPD.

Power usage is still at 40%, GPU load 80%,
temp at 60°C with 45% fan speed.

Re: Nvidia Titan V

Posted: Sun Jan 07, 2018 12:18 pm
by foldy
@Aurum: I guess the Titans are not worth it because of the high price and similar PPD to a GTX 1080 Ti.

Maybe next summer when gtx 1180/2080 are released these are the cards to go for.

Re: Nvidia Titan V

Posted: Sun Jan 07, 2018 6:32 pm
by Aurum
foldy, I came to the same conclusion. But contrary to what economics would predict, prices are rising. Must be the crypto miners.

Re: Nvidia Titan V

Posted: Thu Jan 11, 2018 12:38 pm
by scott@bjorn3d
Guess I am glad I bought a 7900X, so when the consumer Volta GPUs come out I will not get dinged. Currently running two 1080 Tis on Windows 10 and getting right at 990,000 PPD per card on these stupid little work units. On bigger work units these cards rock at 1.3M PPD.

Re: Nvidia Titan V

Posted: Thu Jan 11, 2018 3:01 pm
by foldy
And still we have the report of a Tesla V100 getting 1700k PPD on Linux with 94xx work units. How is that possible when a Titan V gets 1200k PPD on Linux?

Re: Nvidia Titan V

Posted: Thu Jan 11, 2018 10:35 pm
by toTOW
Different CPUs to feed the GPU? Different GPU clocks? More optimized drivers for the Tesla (or options not yet activated in the drivers for the newest Titan V)?

Re: Nvidia Titan V

Posted: Fri Jan 12, 2018 9:52 am
by foldy
Maybe it would be possible to install a modded Nvidia Tesla driver on the Titan and get better PPD? (This may not be allowed in computing centers.)

Re: Nvidia Titan V

Posted: Fri Jan 12, 2018 6:15 pm
by bruce
I think there's a better chance of getting OpenMM re-optimized to work with the existing Titan driver -- but we're just guessing here. Neither you nor I know why the performance of the Titan isn't better.

(It also could be that the design of the Titan favours improvements in video performance and power savings over improvements in the processing speed/ability of the shaders. Game frame rate depends on many factors, some of which do not help FAH.)

Re: Nvidia Titan V

Posted: Tue Jan 16, 2018 8:43 am
by AndyE
I don't have a Tesla V100 card, so I can't look more deeply into the Titan V's lower performance versus the Tesla V100.
Versus Pascal-based cards, the Volta microarchitecture is sufficiently different that some of the performance differences (or lack thereof) of the Titan V vs. the 1080 Ti could come from a potential need to recompile the binaries (not rewrite Core21, just recompile) - in case FAH is leveraging CUDA's fatbinary capability. I can't verify the Tesla V100 numbers side by side with a Titan V in one system; I only know for sure that, with the current settings, PPD isn't really different from a 1080 Ti.
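
To illustrate what I mean by the fatbinary capability (a toy example - the kernel and nvcc flags below are made up by me, I have no insight into how the FAH cores are actually built):

// toy_kernel.cu - trivial kernel, only to show the fatbinary mechanism
#include <cstdio>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;            // same source, different native code per GPU generation
}

int main() {
    const int n = 1024;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    cudaDeviceSynchronize();
    printf("x[0] = %f\n", x[0]);     // expect 2.0
    cudaFree(x);
    return 0;
}

// Build one binary carrying native code (SASS) for Maxwell, Pascal and Volta,
// plus PTX so future GPUs can be handled by the driver's JIT compiler:
//   nvcc toy_kernel.cu -o toy \
//     -gencode arch=compute_52,code=sm_52 \
//     -gencode arch=compute_61,code=sm_61 \
//     -gencode arch=compute_70,code=sm_70 \
//     -gencode arch=compute_70,code=compute_70
// A binary built before sm_70 existed would have to fall back to JIT-compiling
// its embedded PTX on a Volta card - that's the "just recompile" point above.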

I've used the latest CUDA documentation on NVidia's website to compare the five microarchitectures, or rather the native instruction sets of Fermi, Kepler, Maxwell, Pascal and Volta, in a single-page document (download the PDF). Interestingly, NVidia's documentation indicates that Maxwell and Pascal have identical native instruction sets. Volta discontinues quite a few instructions from previous generations. They are potentially replaced by other, more flexible or effective instructions, but compiled code might depend on these differences. For instance, if the driver has to emulate the discontinued instructions on the CPU side, then the speed of the host CPU becomes critical for that part of the code. BTW, I used a relatively slow i5-3450 host CPU. Just an idea, no evidence (yet).

Snippet of the page:
[image: native instruction set comparison table]

rgds,
Andy

Re: Nvidia Titan V

Posted: Tue Jan 16, 2018 12:24 pm
by foldy
FP16 is half precision and is not used by FAH; FAH uses mostly FP32 single precision and some FP64 double precision.
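
As a toy illustration of that mix (not FAH's code, just the usual GPU pattern of doing the bulk of the math in FP32 and the sensitive accumulation in FP64):

// toy example: per-element math in FP32, the sum accumulated in FP64
// compile with: nvcc -arch=sm_60 toy_sum.cu   (FP64 atomicAdd needs Pascal or newer)
#include <cstdio>

__global__ void sum_squares(const float *f, double *energy, int n) {
    double local = 0.0;                      // FP64 accumulator
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        local += (double)(f[i] * f[i]);      // FP32 multiply, promoted for the sum
    atomicAdd(energy, local);                // FP64 atomic add into the total
}

int main() {
    const int n = 4096;
    float *f;
    double *e;
    cudaMallocManaged(&f, n * sizeof(float));
    cudaMallocManaged(&e, sizeof(double));
    for (int i = 0; i < n; ++i) f[i] = 0.5f;
    *e = 0.0;
    sum_squares<<<1, 256>>>(f, e, n);
    cudaDeviceSynchronize();
    printf("energy = %f\n", *e);             // 4096 * 0.25 = 1024.0
    cudaFree(f);
    cudaFree(e);
    return 0;
}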

There are some 11xxx work units around; maybe the Titan V can hit a new top PPD on those?

Re: Nvidia Titan V

Posted: Tue Jan 16, 2018 2:06 pm
by Joe_H
Core_21 does not use CUDA; like the two GPU folding cores before it (17 & 18), it uses OpenCL. So recompiling the core is not likely to help.
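
A rough sketch of why (not FAH code, just plain OpenCL host boilerplate): the kernel source goes to the driver as a string and gets compiled for whatever GPU is present at run time, so rebuilding the core itself changes nothing on the device side.

// minimal OpenCL host sketch - the driver compiles the kernel at run time
// build (assuming an OpenCL SDK is installed): g++ demo.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

static const char *src =
    "__kernel void scale(__global float *x, float a) {"
    "    int i = get_global_id(0);"
    "    x[i] *= a;"
    "}";

int main() {
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    // the driver compiles the kernel for *this* GPU right here
    err = clBuildProgram(prog, 1, &dev, "", NULL, NULL);
    printf("clBuildProgram returned %d\n", err);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}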

Re: Nvidia Titan V

Posted: Tue Jan 16, 2018 9:26 pm
by bruce
FAHCore_22 has been under development for some time now (and there's no predicted beta test date or release date). Development plans to release two versions: OpenCL for AMD/ATI and CUDA for NV. I have no doubt that CUDA will improve the performance of NV's GPUs, but I'm not aware of any testing that has been done to determine whether it will improve Pascal and the Titan V equally or unequally. (The same goes for Fermi, Kepler, and Maxwell.) You'll just have to wait and see.

Right now, the objective is to get FAHCore_22 to the point that it can be exposed for beta testing (as well as maximizing overall throughput in any way that is known until then).
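
Purely as an illustration (a toy sketch, nothing like the actual client or core code): the GPU vendor can be queried at run time and used to decide which of the two cores gets the work.

// toy sketch: query GPU platform vendors and pick a back-end accordingly
// build: g++ pick_backend.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>
#include <cstring>

int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, NULL, &nplat);
    if (nplat == 0) { printf("no OpenCL platforms found\n"); return 1; }
    cl_platform_id plats[8];
    if (nplat > 8) nplat = 8;
    clGetPlatformIDs(nplat, plats, NULL);
    for (cl_uint i = 0; i < nplat; ++i) {
        char vendor[256] = {0};
        clGetPlatformInfo(plats[i], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
        if (strstr(vendor, "NVIDIA"))
            printf("%s -> would get the CUDA core\n", vendor);
        else
            printf("%s -> would get the OpenCL core\n", vendor);
    }
    return 0;
}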

Re: Nvidia Titan V

Posted: Wed Jan 17, 2018 1:37 am
by AndyE
Thanks for the dev update Bruce.
Joe_H wrote:Core_21 does not use CUDA; like the two GPU folding cores before it (17 & 18), it uses OpenCL. So recompiling the core is not likely to help.
The native instruction set is independent of and "beneath" the CUDA or OpenCL layer - it is what either framework is ultimately compiled to. The table aggregates the GPUs' native instruction sets based on NVidia's documentation. In the case of CUDA, there is another virtual layer in between (the virtual instruction set PTX). I don't know if OpenCL has a similar architecture.
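
To make the PTX layer concrete (file and kernel names are made up for the example): you can compile a kernel to PTX, the virtual ISA, and let the driver JIT it into the native instruction set of whatever GPU is installed.

// scale.cu is assumed to contain:  extern "C" __global__ void scale(float *x, float a);
// compile it to the virtual ISA:   nvcc --ptx scale.cu -o scale.ptx
// the host program below loads that PTX via the CUDA driver API; the driver
// JIT-compiles it to the native instruction set of the GPU that is present
// build: nvcc load_ptx.cpp -lcuda
#include <cuda.h>
#include <cstdio>

int main() {
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    CUresult r = cuModuleLoad(&mod, "scale.ptx");   // PTX -> native code happens here
    if (r == CUDA_SUCCESS) {
        cuModuleGetFunction(&fn, mod, "scale");
        printf("PTX was JIT-compiled for this GPU\n");
        cuModuleUnload(mod);
    } else {
        printf("cuModuleLoad failed: %d\n", (int)r);
    }
    cuCtxDestroy(ctx);
    return 0;
}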

Re: Nvidia Titan V

Posted: Wed Jan 17, 2018 3:34 am
by bruce
True, but from Joe_H's perspective, each recent FAHCore interfaces with either the OpenCL API or the CUDA API.

ATI's OpenCL doesn't pass through a CUDA level. The fact that NVidia's implementation of OpenCL sits on top of CUDA is NV's design decision, and writing Core_22 to interface directly with CUDA should improve efficiency. It's more costly for FAH, though, because they still have to compile a FAHCore that talks to OpenCL so that ATI's GPUs will still be supported -- unless someday ATI decides to pay the license fees for CUDA -- which is never going to happen, IMHO.