Linux, Nvidia GPU & PCIe 4x?

A forum for discussing FAH-related hardware choices and info on actual products (not speculation).

Frontiers
Posts: 50
Joined: Thu Sep 13, 2012 3:23 pm
Hardware configuration: Ryzen 5 5600x
G.Skill 2x16 GB 3200@3333
GTX 1070
Lancool II Mesh Perf.
Linux Mint 21.3
some drives, some cooler, some peripherals

Re: Linux, Nvidia GPU & PCIe 4x?

Post by Frontiers »

With the latest GPU drivers (Nvidia 451.48 and AMD 20.5.1 Beta HWS) installed on Win10 2004 with WDDM 2.7, after enabling hardware-accelerated GPU scheduling in the graphics settings, PCIe bandwidth utilization with FahCore_22 on my GTX 1070 drops very significantly, by 5-10 times depending on the WU: from 25-40% down to only 2-6% on a PCIe 2.0 x16 slot.
Hardware-accelerated GPU scheduling is supported by the latest drivers on both Nvidia Pascal (10x0) and Turing (20x0) cards and on AMD Navi cards, in both Windows and Linux.
So I wonder if some cards (of course not the fastest Ti's) could now be fed with only one lane without a performance impact in F@H.
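A back-of-the-envelope sketch of that question, using assumed per-lane figures (~0.5 GB/s usable on Gen2 after 8b/10b encoding; rough theoretical numbers, not measurements):

```python
# Back-of-the-envelope check: does the PCIe traffic implied by the
# observed utilization fit in a single lane? Per-lane figures are
# approximate usable bandwidth after encoding overhead
# (8b/10b for Gen2, 128b/130b for Gen3).
GBPS_PER_LANE = {2: 0.5, 3: 0.985}

def implied_traffic_gbps(gen, lanes, utilization):
    """Observed traffic in GB/s on a gen/width link at a given utilization."""
    return GBPS_PER_LANE[gen] * lanes * utilization

# 2-6% of a PCIe 2.0 x16 link, as reported above:
low = implied_traffic_gbps(2, 16, 0.02)
high = implied_traffic_gbps(2, 16, 0.06)
one_gen2_lane = GBPS_PER_LANE[2]

print(f"implied traffic: {low:.2f}-{high:.2f} GB/s; one Gen2 lane: {one_gen2_lane:.2f} GB/s")
```

At the reported 2-6% utilization, the implied traffic (up to ~0.48 GB/s) would just about fit within one Gen2 lane, which is why the one-lane idea seems plausible at least for mid-range cards.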
MeeLee
Posts: 1375
Joined: Tue Feb 19, 2019 10:16 pm

Re: Linux, Nvidia GPU & PCIe 4x?

Post by MeeLee »

I used to fold on an old Xeon processor with 20 threads, which could be had for $65, with motherboards for about $50, and DDR3 super cheap, like $40 for 2x4GB.
It's just not worth it.
20 'fast' cores (at 2.5 GHz) vs 500-5000 slower GPU cores, and the choice is clear.
markdotgooley
Posts: 101
Joined: Tue Apr 21, 2020 11:46 am

Re: Linux, Nvidia GPU & PCIe 4x?

Post by markdotgooley »

MeeLee wrote:I used to fold on an old Xeon processor with 20 threads, which could be had for $65, with motherboards for about $50, and DDR3 super cheap, like $40 for 2x4GB.
It's just not worth it.
20 'fast' cores (at 2.5 GHz) vs 500-5000 slower GPU cores, and the choice is clear.
Yes. Two mid-range GPU cards (RTX 2060) grossly outperform anything else I’ve used. I’m toying with getting an old 2-processor Xeon motherboard with four PCIe 3.0 x16 slots (I could probably put three GPU cards in those), but the two 6-core Xeons are 95W each at 2.0 GHz, so I would probably do no folding on them at all to save on power bills. (If I do buy it, I’ll see how much more power 24-thread CPU folding uses than idling; although that’s not completely accurate, it’ll give me an idea of whether it’s worthwhile to use the Xeons for more than tending the GPUs.)
markdotgooley
Posts: 101
Joined: Tue Apr 21, 2020 11:46 am

Re: Linux, Nvidia GPU & PCIe 4x?

Post by markdotgooley »

markdotgooley wrote:
ajm wrote:Do let us know! :)
If I go through with it, I will. Probably I’ll post blow-by-blow accounts.
I bought one. It finally arrived yesterday.

It's... big. I think I'll just make sure it works, using assorted old parts I have, in a temporary setup on a box in the style of Linus Tech Tips videos. Maybe I'll try running F@H on the CPUs, although that will be poor performance per watt: more of a test than a permanent arrangement. (I'll probably need fans next to the CPUs at least.) In the end, the CPUs may be mostly idle. Checking power consumption when the system is idle is another priority. How much extra will two mostly-idling old Xeons cost per day over a nominally 65W Ryzen 5? Perhaps 2 kWh or less?
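A quick sketch of that arithmetic; the extra idle draw here is a guess for illustration, not a measurement:

```python
# Assumed extra idle draw of two old Xeons plus server-board overhead,
# over a nominally 65W Ryzen 5 system. Pure guesswork for illustration.
extra_idle_watts = 80
hours_per_day = 24

extra_kwh_per_day = extra_idle_watts * hours_per_day / 1000
print(f"~{extra_kwh_per_day:.1f} kWh/day extra at {extra_idle_watts} W")
```

So anywhere under about 83 W of extra idle draw would indeed keep the penalty below 2 kWh/day.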

I'm eyeing one of those aluminum frames for under US$40, designed for mining, that has plenty of room and that I hope will mostly match the board's mounting holes. The maximum setup could have two GPUs on the motherboard and two more on the rails above, connected with riser cables. But I tend to buy things and let them sit a while until I work up the nerve to try them out. I also need to see what new GPUs are coming out and whether I can properly afford the power bills.

I think I'll start a new thread when there's news.
Neil-B
Posts: 2027
Joined: Sun Mar 22, 2020 5:52 pm
Hardware configuration: 1: 2x Xeon E5-2697v3@2.60GHz, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon E3-1505Mv5@2.80GHz, 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: i7-960@3.20GHz, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
Location: UK

Re: Linux, Nvidia GPU & PCIe 4x?

Post by Neil-B »

If it is an Intel server, then research FRUSDR... Server boards can be less intuitive about what is connected/installed; you may find that for the most part you have to specify everything, rather than having it done for you as on a PC/workstation MoBo.
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070

(Green/Bold = Active)
MeeLee
Posts: 1375
Joined: Tue Feb 19, 2019 10:16 pm

Re: Linux, Nvidia GPU & PCIe 4x?

Post by MeeLee »

markdotgooley wrote:
MeeLee wrote:I used to fold on an old Xeon processor with 20 threads, which could be had for $65, with motherboards for about $50, and DDR3 super cheap, like $40 for 2x4GB.
It's just not worth it.
20 'fast' cores (at 2.5 GHz) vs 500-5000 slower GPU cores, and the choice is clear.
Yes. Two mid-range GPU cards (RTX 2060) grossly outperform anything else I’ve used. I’m toying with getting an old 2-processor Xeon motherboard with four PCIe 3.0 x16 slots (I could probably put three GPU cards in those), but the two 6-core Xeons are 95W each at 2.0 GHz, so I would probably do no folding on them at all to save on power bills. (If I do buy it, I’ll see how much more power 24-thread CPU folding uses than idling; although that’s not completely accurate, it’ll give me an idea of whether it’s worthwhile to use the Xeons for more than tending the GPUs.)
I wouldn't go with a Xeon CPU if you're just going for GPU folding.
Some motherboards run 2 to 3 GPUs easily, and if they're tuned 2080 Ti GPUs, that's 600-900W; add the CPU and efficiency losses and you're at 800-1200W.
A single unit like this pretty much takes up an entire wall socket if you don't want to trip the fuse.

But if you're lucky enough to live in Europe, with 240V and 16A breakers, or you have access to dual-phase sockets (220V 50Hz in the USA), Xeons make sense.
That is, if they recognize RTX 2080 Ti GPUs, and if they can fully occupy at least 6 PCIe slots (~1500-2000W).
In that case you'll need at least 6, preferably 8, CPU cores.

But I wouldn't get a 10-core, 20-thread CPU to feed only 1 or 2 GPUs.
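The wall-socket math above can be sketched like this; the 80% continuous-load derating is an assumption (rules differ by country, check your local code), and the rig wattages are the estimates from this thread:

```python
# Can a wall circuit carry a multi-GPU folding rig continuously?
# Assumes an 80% continuous-load derating; check your local electrical code.

def circuit_budget_watts(volts, breaker_amps, derate=0.8):
    """Usable continuous power on a circuit, in watts."""
    return volts * breaker_amps * derate

us_115v_15a = circuit_budget_watts(115, 15)   # typical US socket
eu_230v_16a = circuit_budget_watts(230, 16)   # typical European socket

three_gpu_rig = 1200   # upper end of the 800-1200W estimate above
six_gpu_rig = 1800     # middle of the ~1500-2000W estimate above

print(f"US 15A budget: {us_115v_15a:.0f} W -> 3 GPUs: {us_115v_15a >= three_gpu_rig}, 6 GPUs: {us_115v_15a >= six_gpu_rig}")
print(f"EU 16A budget: {eu_230v_16a:.0f} W -> 3 GPUs: {eu_230v_16a >= three_gpu_rig}, 6 GPUs: {eu_230v_16a >= six_gpu_rig}")
```

A 3-GPU rig at 1200 W uses nearly the whole ~1380 W US budget (the "entire wall socket" above), while the 6-GPU case only fits on a European 230V/16A circuit.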
markdotgooley
Posts: 101
Joined: Tue Apr 21, 2020 11:46 am

Re: Linux, Nvidia GPU & PCIe 4x?

Post by markdotgooley »

This thing has two old 6-core Xeons. I'm thinking of running up to 4 GPU cards and using the Xeons only to tend the GPUs at full PCIe 3.0 x16 bandwidth, with no attempts to fold. I'm hoping that the new GPUs supposedly coming out starting in September will include some RTX 2080 Ti-speed cards that cost less and don't use more power. We'll see.
MeeLee
Posts: 1375
Joined: Tue Feb 19, 2019 10:16 pm

Re: Linux, Nvidia GPU & PCIe 4x?

Post by MeeLee »

markdotgooley wrote:This thing has two old 6-core Xeons. I'm thinking of running up to 4 GPU cards and using the Xeons only to tend the GPUs at full PCIe 3.0 x16 bandwidth, with no attempts to fold. I'm hoping that the new GPUs supposedly coming out starting in September will include some RTX 2080 Ti-speed cards that cost less and don't use more power. We'll see.
I myself am waiting for the RTX 3000 series GPUs.
Supposedly the mid range of the 3000 series should bump up to a 2080 Ti, but at a much lower power envelope.
But since the human malware, we've been experiencing some serious delays.
What should have been introduced in May will probably be introduced in August (if at all), and released by the end of the year.
We're not sure if Nvidia is going to stick to releasing them this year, as there's currently no need for it (they're still the king).
I think instead they'll hold off a little longer, at a time when the economy isn't doing well but stock markets are high (meaning lots of money for R&D).

A 2080 Ti starts getting restricted in Linux on a PCIe 3.0 x4 slot. I believe a 3.0 x8 slot will still be good enough for the upcoming top-end 3000 series GPUs, assuming their mid-range GPUs will be equipped with ~5k CUDA cores and their top end with ~8k cores...
But we can't say that for sure. Nvidia can always reserve the right to keep that 8k-core design for their Quadro RTX line, in which case we, the average consumer, for sure won't be able to afford one!
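For reference, the theoretical link bandwidths behind that guess (assuming ~0.985 GB/s usable per Gen3 lane after 128b/130b encoding; actual FahCore traffic varies with the WU):

```python
# Theoretical usable PCIe 3.0 bandwidth per link width.
GEN3_LANE_GBPS = 0.985  # approx., after 128b/130b encoding overhead

for lanes in (4, 8, 16):
    print(f"PCIe 3.0 x{lanes}: ~{GEN3_LANE_GBPS * lanes:.1f} GB/s")
```

So an x8 slot offers roughly double the ~3.9 GB/s at which a 2080 Ti reportedly starts to throttle, which is the basis for the x8 guess for bigger cards.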

A correction to my previous post: the USA has 60Hz, not 50Hz.