
Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 01, 2020 8:53 pm
by Demmers
Just wanted to see if anyone else is aware of, or has seen, today's launch of the new NVIDIA 3000 series GPUs (https://www.youtube.com/watch?v=E98hC9e__Xs). Only asking because the PPD/WU count is about to get a whole lot more interesting!
For a bit of perspective, the current Titan RTX, priced at ~$2500, has a CUDA core count of 4608. The new 3070 has 5888, at a price of $499. The new Titan replacement (3090) has... wait for it... 10496 CUDA cores, priced at $1499!
I've been folding since November last year on just my Ryzen 3400G CPU. I haven't bought a GPU for this new-build PC, partly for budget reasons and partly because I knew these cards were coming out; now I can afford one. I'm sure the lower-spec/cheaper GPUs will be announced later in the year, but damn, I am just salivating at the prospect of folding with one of these!
I can't wait to participate in the COVID Moonshot project if it is still ongoing by the time these are out on the market!

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 01, 2020 9:44 pm
by Neil-B
I think the 3090 core count is a bit of an illusion - a number of threads discussing it suggest it is actually half that, but double-counted due to the extra FP32 units - but I am in no way an expert.

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 01, 2020 10:38 pm
by Bastiaan_NL
It's interesting for sure, and I can't wait to see what the 3090 will do with folding at home.
But until we see it in action it's only speculation.
Let's hope we won't need it for the covid cause :)

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 01, 2020 10:53 pm
by Neil-B
For some workloads the count actually being half may be an issue/disappointment - but for FaH it might be that the extra FP32 units make a real difference - time will tell.

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Sun Sep 06, 2020 11:22 pm
by Breach
Seems that the 3080 would be about 40% faster than the 2080 Ti in compute:
https://videocardz.com/newz/nvidia-gefo ... benchmarks

Let's see how that translates for FAH.

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Mon Sep 07, 2020 3:01 pm
by ipkh
It's really a question of scaling.
If everything were to scale 100%, these new cards are terrific at FP32. The core counts are correct, and they have doubled up the FP32 performance per CUDA core: each core can do 2 FP32 and 1 INT32 per clock. But splitting the workload and effectively using all those cores remains to be seen.
We might eventually need to run multiple WU per GPU to use them effectively.
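To put rough numbers on the FP32 side of that, here is a quick back-of-the-envelope estimate in Python (hedged: the boost clocks are approximate launch figures rather than anything from this thread, and the 2080 Ti/3080 core counts are the published ones, not quoted above):
[code]
# Rough theoretical peak FP32: cores * 2 FLOPs/clock (one FMA) * boost clock.
# Boost clocks (GHz) are approximate published launch figures and may differ
# from shipping cards; this ignores memory bandwidth, INT32 work, etc.
cards = {
    "RTX 2080 Ti": (4352, 1.545),
    "Titan RTX":   (4608, 1.770),
    "RTX 3070":    (5888, 1.730),
    "RTX 3080":    (8704, 1.710),
    "RTX 3090":    (10496, 1.700),
}

for name, (cores, boost_ghz) in cards.items():
    # cores * 2 FLOPs * GHz gives GFLOPS; divide by 1000 for TFLOPS
    tflops = cores * 2 * boost_ghz / 1000.0
    print(f"{name:12s} ~{tflops:5.1f} TFLOPS FP32 (theoretical)")
[/code]
Real folding kernels mix FP32 with INT32 and memory traffic, so actual PPD will not simply track these peaks - which is exactly the scaling question.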

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 08, 2020 9:53 am
by PantherX
ipkh wrote:...We might eventually need to run multiple WU per GPU to use them effectively.
That is under discussion by the development team and there are a few ideas floating around but nothing solid so far. Also, there's no ETA but I do know that it is something that will likely be addressed sooner rather than later :)

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 08, 2020 4:11 pm
by cine.chris
ipkh wrote:It's really a question of scaling.
But splitting the workload and effectively using all those cores remains to be seen.
We might eventually need to run multiple WU per GPU to use them effectively.
Watching 2060s choke on low-atom-count WUs has always bothered me.
Once I realized there was a corresponding drop in power used, it bothered me less, but I'd still prefer to see my folders used more effectively.
Having the option to enable multiple WU processing could be a good solution.
GPU architecture also affects this... my 2060 KO runs great (nominal) on low-atom-count WUs, but the 2060 Supers turn into GTX 1660s, with plots aligning in htm.
The mentioned change from OpenCL to CUDA will likely affect many of the patterns seen in the past.
So, I'm left wondering how long it will be until we see CUDA and truly efficient utilization of even our current gear & imagining the potential of a CUDA + 3080 architecture.
The 'driving a Ferrari in downtown traffic' analogy could easily apply to a Moonshot-type atom count, as is.
Of course, it's much better than waiting for a slot to load, right?
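To put a rough number on that "Ferrari in downtown traffic" feeling, here is a toy estimate (purely illustrative: the atom counts and the one-thread-per-atom assumption are my own, not actual project figures):
[code]
# Toy under-utilisation estimate: assume roughly one GPU thread per atom and
# that a card wants several threads in flight per CUDA core to stay busy.
# Atom counts below are illustrative guesses, not real project sizes.
def threads_per_core(atoms, cuda_cores):
    return atoms / cuda_cores

for atoms in (20_000, 65_000, 250_000):
    for name, cores in (("RTX 2060", 1920), ("RTX 3090", 10496)):
        ratio = threads_per_core(atoms, cores)
        print(f"{atoms:>7} atoms on {name}: ~{ratio:.1f} threads per core")
[/code]
By that crude measure a small WU barely gives a 3090 two threads per core, so the bigger the card, the worse the downtown-traffic effect.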

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Tue Sep 08, 2020 5:15 pm
by cine.chris
Neil-B wrote:I think the 3090 core count is a bit of an illusion - a number of threads discussing it suggest it is actually half that, but double-counted due to the extra FP32 units - but I am in no way an expert.
Tom's Hdwr wrote (just published 9/8): It has shown a generational performance increase of roughly 2X when comparing RTX 3080 to RTX 2080, but if you look just at TFLOPS, the RTX 3080 is nearly triple the theoretical performance. But the reality is the RTX 2080 could do FP32 + INT at around 10 tera-OPS each, whereas the RTX 3080 has nearly 30 tera-OPS of FP32 available and only 15 tera-OPS of INT available.
I heard mention that Nvidia had bumped the CUDA executions per clock cycle in the 30X0 too; still looking for more on that. True?
All good, but putting it to work efficiently will likely take some time.
With Nvidia being an active supporter, and given their talent pool, a solution likely already exists in someone's brain.
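Those tera-OPS figures can be turned into a crude effective-throughput model (my own simplification of the Turing/Ampere datapaths, not an official calculation): Turing runs FP32 and INT32 on separate pipes, while on Ampere one of the two pipes does FP32 or INT32, so integer work displaces FP32 on that pipe.
[code]
# Toy model of peak FP32 throughput versus how busy the shared pipe is with INT32.
# Turing "2080": ~10 TFLOPS FP32 pipe plus a separate ~10 T-op/s INT32 pipe.
# Ampere "3080": two ~15 T-unit pipes; one FP32-only, the other runs FP32 OR INT32.
# int_share = fraction of the flexible pipe's cycles spent on INT32 work.
def effective_fp32_tflops(arch, int_share):
    if arch == "2080":
        return 10.0                          # FP32 pipe unaffected by INT32 work
    if arch == "3080":
        return 15.0 + 15.0 * (1.0 - int_share)
    raise ValueError(arch)

for share in (0.0, 0.25, 0.5):
    print(f"INT32 share {share:.0%}: "
          f"2080 ~{effective_fp32_tflops('2080', share):.1f} TFLOPS, "
          f"3080 ~{effective_fp32_tflops('3080', share):.1f} TFLOPS")
[/code]
That is roughly why the quoted review sees nearer 2X than 3X in practice.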

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Wed Sep 09, 2020 8:39 am
by PantherX
cine.chris wrote:...The mentioned change from OpenCL to CUDA will likely affect many of the patterns seen in the past.
So, I'm left wondering how long it will be until we see CUDA and truly efficient utilization of even our current gear & imagining the potential of a CUDA + 3080 architecture...
I would say that would fall in the "very soon" category (see my signature for reference).

The announcement would happen here on the Forum and potentially on the Blog/Twitter too. Let's just wait and see what happens later this month :)

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Sat Sep 12, 2020 4:14 pm
by toTOW
There's already a topic about the new GPUs: viewtopic.php?f=38&t=36010

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Sat Sep 12, 2020 6:13 pm
by JohnChodera
> Having the option to enable multiple WU processing could be a good solution.

We're working on benchmarking/evaluating this approach right now! We'll find a way to take maximal advantage of these big GPUs, but it may not be ready right away.
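For anyone wondering what "multiple WUs per GPU" can look like mechanically, the simplest pattern is several independent worker processes pinned to the same device so the driver interleaves their kernels. This is only an illustrative Python sketch under that assumption - it is not how the FAH client does or will implement it, and ./fake_worker is a placeholder command:
[code]
# Illustrative only: launch two worker processes that both target GPU 0 and let
# the driver interleave their kernels. NOT the FAH client's mechanism;
# "./fake_worker" is a placeholder for whatever does the actual GPU work.
import os
import subprocess

GPU_INDEX = "0"
N_WORKERS = 2

procs = []
for i in range(N_WORKERS):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=GPU_INDEX)  # pin both to one GPU
    procs.append(subprocess.Popen(["./fake_worker", f"--job={i}"], env=env))

for p in procs:
    p.wait()
[/code]
Whether that actually raises throughput depends on whether a single WU already saturates the card - presumably exactly what the benchmarking above is measuring.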

> So, I'm left wondering how long it will be until we see CUDA and truly efficient utilization of even our current gear & imagining the potential of a CUDA + 3080 architecture...

You may not have to wait long... ;)

~ John Chodera // MSKCC

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Sat Sep 19, 2020 2:09 am
by empleat
Don't you think it is staggering that FAH has around 2 exaflops while it still has relatively few users? :D They could have so many more! Imagine if there were a GPU usage limit! I tried a workaround: since one CPU core is still needed to feed the GPU its work, I ran a stress test, assigned it to that core, and tried different amounts of stress - around 2-8% of CPU usage - so FAH gets the rest. It worked! GPU usage dropped by a lot, and I could use my PC while folding! The problem was that the WU always crashed; once I almost finished a project, but every time I tried it eventually crashed at some point... I don't know whether that was because of instability or too many accumulated errors, and whether the data would even be useful if the WU finished. That was only on a 780. I have a 2070 Super now - unfortunately - but I haven't tested it yet. On a 3000 series card like a 3080/90, one WU could take 1-2 hours or less; on the 780 one WU took 6-8 hours, so I think it could finish. Currently no new projects are available, so I'll test it later...

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Sat Sep 19, 2020 7:59 am
by bruce
empleat wrote:...The problem was that the WU always crashed; once I almost finished a project, but every time I tried it eventually crashed at some point... I don't know whether that was because of instability or too many accumulated errors, and whether the data would even be useful if the WU finished. That was only on a 780. I have a 2070 Super now - unfortunately - but I haven't tested it yet. On a 3000 series card like a 3080/90, one WU could take 1-2 hours or less; on the 780 one WU took 6-8 hours, so I think it could finish. Currently no new projects are available, so I'll test it later...
Ordinarily, it's fine to learn by testing, but it's not OK to intentionally crash perfectly good production WUs. Every WU is unique and is assigned to ONE person. If it crashes, it has to be assigned to someone else, and if it fails too many times, it is withdrawn from distribution. Such failures are costly to scientific research. WUs are not "free".

Re: Launch of new NVIDIA 3000 series GPUs

Posted: Sat Sep 19, 2020 8:25 am
by Neil-B
Basically, if one refuses to accept how GPUs manage load and tries to force limitations on them, getting lots of failures is not a great surprise ... it is a shame that experimentation like this damages the science, when one can, with a bit of knowledge, run copies of a completed WU for testing and experimentation without impacting the science :( ... but if one wants to make a point (and push it in many places in the forums) by failing to do something others have advised is not currently possible in a manageable way, whilst damaging the science, then I guess that is one's right :( ... for myself, I accept there are limitations due to various sub-optimal technical issues and focus on supporting FaH within the bounds it is currently able to operate :)

There are approaches to managing the heat/power loading on GPUs using various tweaking packages, and for many this helps address their concerns/wishes ... but until OS and GPU vendors change the way GPU work prioritisation and workload management are handled, I am sorry, but much of what I believe you wish to do is simply not doable in a manageable manner ... Yes, Nvidia are starting to address this with some of their recent developments, and I have no doubt that the FaH devs will be watching this closely, but there is a huge inertia in the OS/GPU vendor space to overcome before allocation of jobs to percentages of GPU capacity becomes the norm ... when it does, folders will be able to "dial in" whatever level of GPU donation they wish - but until then, working within the bounds of current technological implementations helps the science best.
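As one concrete example of that tweaking route, the stock nvidia-smi tool can cap a card's power draw, which reins in heat and power without touching how work is scheduled. A minimal Python sketch, assuming an NVIDIA driver with nvidia-smi on the path and administrator rights; the 180 W cap is an arbitrary example value:
[code]
# Cap GPU 0's power limit via nvidia-smi (needs administrator/root rights).
# 180 W is only an example; valid ranges depend on the specific card.
import subprocess

subprocess.run(["nvidia-smi", "-i", "0", "-pl", "180"], check=True)  # -pl = power limit in watts
[/code]
Note that this limits how hard the card runs, not what fraction of it is allocated to folding - which is exactly the distinction being made above.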