GeForce RTX 3080 and 3090 support enabled !

Post requests to add new GPUs to the official whitelist here.


Re: GeForce RTX 3080 and 3090 support enabled !

Postby ir_cow » Sun Sep 20, 2020 11:26 pm

MeeLee wrote:@flarbear: I tend to agree with you.
The question I would ask here is whether the PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350W, the extra 3.5W may be worth it, but it may not be if the power draw is more like 10W higher...
Performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4 speeds, also need to be tested.


I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. Also, the RTX 3080 doesn't even use 8x PCIe 3.0 for folding. It doesn't use the full 16x in games either. That 0.5% uplift is how the bits are encoded, which lowers the overhead. Funny enough, you "gain" 0.5% with PCIe, but being on AMD at lower resolutions you lose 15-30% FPS depending on the game. It is only when you reach 4K that the CPU doesn't matter much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.
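For reference, here is the standard line-encoding arithmetic behind that remark (a sketch using the published PCIe encodings; note that Gen 3 and Gen 4 share the same 128b/130b scheme, so the encoding gain over the old 8b/10b arrived with Gen 3, not Gen 4):

Code: Select all
# Back-of-envelope sketch of PCIe line-encoding overhead.
gens = {
    "PCIe 2.0 (8b/10b)":    (5.0,  8 / 10),     # GT/s per lane, encoding efficiency
    "PCIe 3.0 (128b/130b)": (8.0,  128 / 130),
    "PCIe 4.0 (128b/130b)": (16.0, 128 / 130),
}

for name, (gtps, eff) in gens.items():
    # Effective data rate per lane in GB/s: GT/s * efficiency / 8 bits per byte
    gbps = gtps * eff / 8
    print(f"{name}: {eff:.2%} efficient, ~{gbps:.2f} GB/s per lane")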

Now what needs to happen is FAHCores being written for Tensor cores. If the RTX 2080 Ti gets ~3.5 million PPD, an extra ~4000 CUDA cores plus higher clock speed and memory frequency should make the 3080 at least 50% faster. But at 4.5 million tops, it is only 28% faster. This tells me things need to be optimized better for CUDA first. Then add Tensor WUs.
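A quick sanity check of that scaling argument, using the rough PPD figures quoted above and the published shader-core counts (4352 FP32 units on the 2080 Ti vs 8704 on the 3080):

Code: Select all
# Sanity check of the scaling claim (PPD values are the rough ones quoted above).
ppd_2080ti = 3.5e6   # ~3.5 M PPD for the RTX 2080 Ti
ppd_3080   = 4.5e6   # ~4.5 M PPD quoted as the RTX 3080's best so far

print(f"Observed uplift: {ppd_3080 / ppd_2080ti - 1:.0%}")   # ~29%

# The raw FP32-unit ratio (8704 vs 4352) would suggest up to 2x
# before clocks, memory, and the shared INT32/FP32 pipe are considered.
print(f"Raw FP32-unit ratio: {8704 / 4352:.1f}x")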

Re: GeForce RTX 3080 and 3090 support enabled !

Postby ipkh » Mon Sep 21, 2020 12:33 am

The Nvidia driver interprets the OpenCL and CUDA (Core 22 version 13) instructions, so it is up to Nvidia's optimizations to make the dual FP32 work. For games, the basic rule was that about 30% of the commands were INT32, so expect some reduction to the doubling of performance.
FAH has a difficult time here, as it has to split the work over many more cores (effective FP32 cores), and smaller WUs will be very inefficient on large GPUs. We already see this disparity in the gaps between the 900 series, 10 series, and 20 series. But I have no doubt that they are working on it. I'm sure Nvidia has a vested interest in helping as well.
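To make the "some reduction to the doubling" point concrete, here is a toy throughput model; it assumes one dedicated FP32 datapath plus one shared FP32/INT32 datapath (Ampere's published layout), perfect scheduling, and no other bottlenecks, so it is an illustration rather than a predictor:

Code: Select all
# Toy model: dedicated FP32 pipe + shared FP32/INT32 pipe (Ampere-style),
# compared against a Turing-style split of one FP32 pipe + one INT32 pipe.
def ampere_speedup(int_fraction: float) -> float:
    fp, it = 1.0 - int_fraction, int_fraction
    # Ampere: INT ops must use the shared pipe; FP fills the remaining
    # slots on both pipes, so cycles = max(int work, half the total work).
    cycles_ampere = max(it, (fp + it) / 2)
    # Turing: FP and INT pipes run concurrently; the longer one dominates.
    cycles_turing = max(fp, it)
    return cycles_turing / cycles_ampere

for mix in (0.0, 0.3, 0.5):
    print(f"{mix:.0%} INT32 -> ~{ampere_speedup(mix):.2f}x FP32 throughput")
# 0% INT gives the ideal 2x; the 30% rule of thumb lands around 1.4x.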

Re: GeForce RTX 3080 and 3090 support enabled !

Postby kiore » Mon Sep 21, 2020 1:00 am

What is being seen with F@H not seeming to make the most of new hardware has happened before with new generations. It can come down to a number of factors, such as project cores not aligned to new standards, drivers not yet fully utilizing capacities, or a combination of these. However, the project seems to be ahead of the curve this time, with new core versions coming online, new benchmarking, and new ways to use the new generations of hardware (like running multiple work units on a single GPU) under development. I am optimistic that the work underway will see significant optimization improvements not too far into the future.
i7 7800x GTX 1080ti GTX1660ti, OS= win10. AMD 3700x RTX 2080ti OS= win10

Team page: http://www.rationalskepticism.org/gener ... -t616.html

Re: GeForce RTX 3080 and 3090 support enabled !

Postby PantherX » Mon Sep 21, 2020 9:34 am

F@H can't use all the new GPU features since it doesn't render anything. Instead, it will use all the features that help it in protein simulation. There are some really cool ideas floating around, and some are easier to implement than others. However, time will tell what happens next, but it is definitely a good thing for F@H since new and exciting times lie ahead for GPU folding :)
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues

Re: GeForce RTX 3080 and 3090 support enabled !

Postby HaloJones » Mon Sep 21, 2020 11:10 am

Will be very interested to see what 0.0.13 can do with a 3080.
1x Titan X, 5x 1070, 1x 970, 1 x Ryzen 3600


Re: GeForce RTX 3080 and 3090 support enabled !

Postby PantherX » Tue Sep 22, 2020 4:59 am

HaloJones wrote:Will be very interested to see what 0.0.13 can do with a 3080.

Some quick numbers from Project 11765 in Linux:

TPF 73s - GTX 1080 Ti running OpenCL / 1.554 M PPD
TPF 57s - GTX 1080 Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080 Ti running OpenCL / 2.826 M PPD
TPF 39s - RTX 2080 Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD

I expect the numbers might get better once the drivers have matured a bit, generally in about six months. By that time, we might have a new version of FahCore_22 that can unlock more performance too!
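For quick comparison, the uplifts implied by those numbers (computed straight from the PPD values above):

Code: Select all
# PPD figures quoted above for Project 11765 (Linux).
results = {
    ("GTX 1080 Ti", "OpenCL"): 1.554e6, ("GTX 1080 Ti", "CUDA"): 2.253e6,
    ("RTX 2080 Ti", "OpenCL"): 2.826e6, ("RTX 2080 Ti", "CUDA"): 3.981e6,
    ("RTX 3080",    "OpenCL"): 4.489e6, ("RTX 3080",    "CUDA"): 5.618e6,
}

for gpu in ("GTX 1080 Ti", "RTX 2080 Ti", "RTX 3080"):
    gain = results[(gpu, "CUDA")] / results[(gpu, "OpenCL")] - 1
    print(f"{gpu}: CUDA is {gain:.0%} faster than OpenCL")   # 45%, 41%, 25%

gen = results[("RTX 3080", "CUDA")] / results[("RTX 2080 Ti", "CUDA")] - 1
print(f"RTX 3080 vs RTX 2080 Ti (both CUDA): +{gen:.0%}")    # ~41%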

Re: GeForce RTX 3080 and 3090 support enabled !

Postby MeeLee » Tue Sep 22, 2020 5:28 pm

ir_cow wrote:
MeeLee wrote:@flarbear: I tend to agree with you.
The question I would ask here is whether the PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350W, the extra 3.5W may be worth it, but it may not be if the power draw is more like 10W higher...
Performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4 speeds, also need to be tested.


I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. Also, the RTX 3080 doesn't even use 8x PCIe 3.0 for folding. It doesn't use the full 16x in games either. That 0.5% uplift is how the bits are encoded, which lowers the overhead. Funny enough, you "gain" 0.5% with PCIe, but being on AMD at lower resolutions you lose 15-30% FPS depending on the game. It is only when you reach 4K that the CPU doesn't matter much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.


11th gen Intel CPUs support PCIe Gen 4.
While the primary PCIe x16 slot is generally laned directly to the CPU and should have very little wattage overhead, other slots (especially x4 slots or M.2 slots) can go via a PCIe bridge chip, consuming extra power.
Those actually use a controller that requires active cooling (a tiny 40mm fan in most cases), so I'd estimate ~15-20W max.
You make a point about AMD CPUs being slower than Intel CPUs in PCIe data transfer.
Even though a 2080 Ti doesn't need more than a PCIe 3.0 x8 slot, connecting it to an x16 slot gives a marginal performance improvement (<10%, usually 1-5%).
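For scale, here is the effective bandwidth of the slot configurations being compared in this thread (assuming the published 128b/130b encoding for both Gen 3 and Gen 4):

Code: Select all
# Effective link bandwidth for the slot configs discussed in this thread.
# Gen 3 runs at 8 GT/s per lane, Gen 4 at 16 GT/s, both 128b/130b encoded.
PER_LANE_GBPS = {"3.0": 8.0 * 128 / 130 / 8, "4.0": 16.0 * 128 / 130 / 8}

for gen, lanes in [("3.0", 8), ("3.0", 16), ("4.0", 4), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{PER_LANE_GBPS[gen] * lanes:.1f} GB/s")
# Note that PCIe 4.0 x4 (~7.9 GB/s) exactly matches PCIe 3.0 x8.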

Re: GeForce RTX 3080 and 3090 support enabled !

Postby bruce » Thu Sep 24, 2020 2:33 am

ipkh wrote:The Nvidia driver interprets the OpenCL and CUDA (Core 22 version 13) instructions, so it is up to Nvidia's optimizations to make the dual FP32 work. For games, the basic rule was that about 30% of the commands were INT32, so expect some reduction to the doubling of performance.


It's impossible to write code without integers, but I'd expect the ratio of INT to FP32 in a game to be inferior to FAH's ... though the benchmarking results will be examined carefully, and then the drivers will be improved, making those results obsolete. 8-)

Re: GeForce RTX 3080 and 3090 support enabled !

Postby MeeLee » Thu Sep 24, 2020 11:37 pm

I don't think there'll be a lot of people running the 3090.
Its theoretical performance is at most 20-25% higher than the 3080's, at twice the price.
I think the 3080 will be the best GPU for most people looking for a new high-performance GPU.

Re: GeForce RTX 3080 and 3090 support enabled !

Postby road-runner » Fri Sep 25, 2020 2:52 am

Yeah, for the price of those I can buy a lot of electricity for the 1080 Ti.

Re: GeForce RTX 3080 and 3090 support enabled !

Postby gunnarre » Fri Sep 25, 2020 9:27 am

MeeLee wrote:Other slots (especially x4 slots or M.2 slots) can go via a PCIe bridge chip, consuming extra power.
Those actually use a controller that requires active cooling (a tiny 40mm fan in most cases), so I'd estimate ~15-20W max.

This is not a feature inherent to the PCIe Gen 4 standard, right? It has more to do with having to use a less power-efficient chip for the X570 chipset, which made an active chipset cooling fan necessary. In future chipsets from ASMedia, Intel, or AMD, we might see PCIe 4 support with lower power dissipation.

Re: GeForce RTX 3080 and 3090 support enabled !

Postby MeeLee » Fri Sep 25, 2020 5:34 pm

gunnarre wrote:This is not a feature inherent to the PCIe Gen 4 standard, right? It has more to do with having to use a less power-efficient chip for the X570 chipset, which made an active chipset cooling fan necessary. In future chipsets from ASMedia, Intel, or AMD, we might see PCIe 4 support with lower power dissipation.

I'm not sure; I think it'll be like the USB 3.0 protocol.
It does use more power than USB 2.0, but then data also moves at a higher rate.
However, the question would be: if you stick a USB 3.0 stick running at USB 2.0 speeds into a USB 3.0 port, will it draw more or less power than in a USB 2.0 port?
My estimation is that a PCIe 4.0 x4 port uses nearly the same power as a PCIe 3.0 x8 port.
It saves a bit of power with fewer lanes, but spends more to feed the GPU at a faster data rate.
It saves power again because faster transactions mean quicker idling of the PCIe interface,
but it uses both more idle power and more power under load.

If the load isn't 100% but a constant 25%, PCIe 4.0 should have slightly higher power consumption than a modern 3.0.

I think ultimately power consumption will depend on the CPU, so it'll depend on the process node the CPU is made on.
As with many CPUs, a "10nm" CPU doesn't mean the entire CPU is made on a 10nm process die; sometimes parts are still 14nm, or even 28nm.

So I think a new PCIe 4.0 port will consume less power than an old 3.0 port.
Things will get more interesting when comparing 4.0 to 3.0 on CPUs of the same node.

In the grand scheme of things, answers to these questions will more than likely be moot, as we're going to PCIe 4.0 regardless; and PCIe 5.0 and 6.0 are on the table already.
Both 5.0 and 6.0 might create problems with finding good risers that can support those speeds.

Re: GeForce RTX 3080 and 3090 support enabled !

Postby Lockheed_Tvr » Sat Sep 26, 2020 2:25 am

PantherX wrote:
HaloJones wrote:Will be very interested to see what 0.0.13 can do with a 3080.

Some quick numbers from Project 11765 in Linux:

TPF 73s - GTX 1080 Ti running OpenCL / 1.554 M PPD
TPF 57s - GTX 1080 Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080 Ti running OpenCL / 2.826 M PPD
TPF 39s - RTX 2080 Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD

I expect the numbers might get better once the drivers have matured a bit, generally in about six months. By that time, we might have a new version of FahCore_22 that can unlock more performance too!


Is there any way to force it to use CUDA, or is that just for the new beta core that recently came out?

Re: GeForce RTX 3080 and 3090 support enabled !

Postby kiore » Sat Sep 26, 2020 2:52 am

Only with the new core. The new core is under beta-level testing; it seems there are still a few bugs, as some WUs 'escaped' to general users and some issues were found. Serious progress for optimization though. Let us see; I am optimistic.

Re: GeForce RTX 3080 and 3090 support enabled !

Postby PantherX » Sat Sep 26, 2020 5:09 am

Lockheed_Tvr wrote:...Is there any way to force it to use CUDA, or is that just for the new beta core that recently came out?

In addition to what kiore mentioned, do note that you can't "force" it to use CUDA... upon initialization, FahCore_22 follows this logic (simplified steps):
1) Let me see how many platforms I have access to
2) Let me try to use CUDA since you're on an Nvidia GPU
3) Okay, I tried to use CUDA and failed, so let me try to use OpenCL
4) Oh no, I can't use any platforms; let me collect all the information in an error report and send it back for debugging

Do note that AMD GPUs would skip step 2 since CUDA isn't present.
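A minimal sketch of that fallback flow (illustrative Python only; the initializers here are hypothetical stand-ins, not FahCore_22's actual internals):

Code: Select all
# Illustrative sketch of the platform-selection steps described above.
# init_cuda()/init_opencl() are hypothetical stand-ins, not real FAH APIs.
def init_cuda():
    raise RuntimeError("CUDA init failed")     # pretend CUDA isn't usable here

def init_opencl():
    return "OpenCL context"                    # pretend OpenCL initializes fine

def select_platform(detected):
    """Step 1: enumerate platforms; steps 2-3: prefer CUDA, fall back to OpenCL."""
    initializers = {"CUDA": init_cuda, "OpenCL": init_opencl}
    for name in ("CUDA", "OpenCL"):            # AMD GPUs never detect CUDA
        if name in detected:
            try:
                return name, initializers[name]()
            except RuntimeError:
                continue                       # init failed; try the next platform
    # Step 4: nothing worked; collect info for an error report.
    raise RuntimeError("No usable platform; filing an error report")

print(select_platform({"CUDA", "OpenCL"}))     # falls back to ('OpenCL', ...)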
