PCI-e bandwidth/capacity limitations

boristsybin
Posts: 50
Joined: Mon Jan 16, 2017 11:40 am
Hardware configuration: 4x1080Ti + 2x1050Ti
Location: Russia, Moscow

Re: PCI-e bandwidth/capacity limitations

Post by boristsybin »

I don't see any PPD increase with PL 220; on both cards, tasks 94xx and 117xx produce the same 1+ MPPD.
Set PL back to 180.

So, I have tested the 1080 Ti at PCIe v2.0 x1 on the Ryzen platform mentioned above.
Task 9415, at 20% progress, shows 856k PPD.
The nvidia-settings console shows 11% PCIe link load.
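For anyone who wants to check what link their card actually negotiated, here is a minimal Python sketch around nvidia-smi. One caveat: the 11% figure above is the PCIe bandwidth utilization from the nvidia-settings GUI, which nvidia-smi does not expose, so this only reports the negotiated link generation and width.

Code: Select all
#!/usr/bin/env python3
"""Report the negotiated PCIe link of each NVIDIA GPU via nvidia-smi.

A sketch: the query fields are standard nvidia-smi fields. GPUs drop
to a slower link when idle, so run this while the card is folding.
"""
import subprocess

def pcie_link_state():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,name,pcie.link.gen.current,"
        "pcie.link.width.current,pcie.link.gen.max,pcie.link.width.max",
        "--format=csv,noheader",
    ], text=True)
    for line in out.strip().splitlines():
        idx, name, gen, width, gen_max, width_max = \
            [f.strip() for f in line.split(",")]
        print(f"GPU {idx} ({name}): gen{gen} x{width} "
              f"(max gen{gen_max} x{width_max})")

if __name__ == "__main__":
    pcie_link_state()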

Well, a 1070 will fold at full power, and maybe even a 1080 will.
But a 1080 Ti will not; it requires at least PCIe v2.0 x4 (maybe x2, if that kind exists).
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: PCI-e bandwidth/capacity limitations

Post by foldy »

That is especially important for those mining mainboards that only have PCIe v3.0 x1 slots. With Linux, using a GTX 1070 would be feasible if the CPU is big enough; a 4-core/8-thread CPU would be good for up to 8 GPUs. So a mining rig with 8x GTX 1070 can be converted to a folding rig just by replacing the often-used dual-core CPU with an Intel i7 and running Linux.
antropofob
Posts: 59
Joined: Mon Aug 22, 2011 8:03 am

Re: PCI-e bandwidth/capacity limitations

Post by antropofob »

Greetings,
According to this user, there is a significant PPD hit when running four 1060s on risers in x1 slots.
viewtopic.php?f=38&t=28847&p=296023&hilit=1060#p296023
Could somebody confirm?
Or should I plug just two of them into the x16 slots?
Motherboard https://www.gigabyte.com/Motherboard/GA ... -rev-2x#ov

OS would be Linux.
Thanks.
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: PCI-e bandwidth/capacity limitations

Post by foldy »

This mainboard has PCIe 2.0 only, which halves the per-lane bandwidth compared to PCIe 3.0.

1 x PCI Express x16 slot, running at pcie 2.0 x16 (PCIEX16)
1 x PCI Express x16 slot, running at pcie 2.0 x4 (PCIEX4)
3 x PCI Express x1 slots, running at pcie 2.0 x1

It doesn't matter that the other user has four GPUs; each GPU's speed stands on its own.
But you need one CPU core per GPU to feed it, and maybe another CPU thread free for the OS.

I would recommend just testing it: put one 1060 in the PCIEX16 slot and another 1060 in a PCIe x1 slot with a riser.
Then see if there is a significant speed difference on similar FAH project work units, and whether it is still worth it.

I guess running in the x16 slots only is better.
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

I have 2 similar rigs where I put a 1060 in each x16 2.0 slot and 2 more up on x1 risers. Lousy for F@H, but OK for Asteroids, Milkyway, or Einstein. Like foldy said, if you don't have 4 threads, that will slow you down some more.

I sure wish F@H was compatible with Time-of-Use scheduling.
In Science We Trust
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: PCI-e bandwidth/capacity limitations

Post by bruce »

Aurum wrote:I sure wish F@H was compatible with Time-of-Use scheduling.
Please explain your statement. Either I don't understand that statement or it's not true.

While there is no built-in feature that pauses/unpauses specific slots, it's not all that difficult to write a script that sends a pause/unpause to a specific slot at specific times of the day. I've even seen some published that can be adapted to your environment.
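For illustration, here is a minimal Python sketch of such a script. It assumes the client's command socket is enabled on the default localhost port 36330 with no password set; the script name, the slot id, and the cron times are placeholders to adapt to your environment. The same mechanism also covers the Finish case: sending "finish" lets the current WU complete without downloading another.

Code: Select all
#!/usr/bin/env python3
"""Pause, unpause, or finish one FAH slot via the v7 command socket.

A sketch, not a polished tool: assumes FAHClient listens on the
default 127.0.0.1:36330 with no password set. Run it from cron, e.g.
    0 13 * * *  fah-tou.py pause     # peak rate begins at 13:00
    0 18 * * *  fah-tou.py unpause   # peak rate ends at 18:00
(fah-tou.py is a hypothetical name for this script.)
"""
import socket
import sys

HOST, PORT = "127.0.0.1", 36330  # FAHClient command interface
SLOT = "01"                      # slot id to control; adapt to yours

def send(command):
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.recv(4096)  # discard the client's greeting banner
        sock.sendall(f"{command} {SLOT}\n".encode())
        sock.sendall(b"quit\n")

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    if action not in ("pause", "unpause", "finish"):
        sys.exit("usage: fah-tou.py pause|unpause|finish")
    send(action)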
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

And every F@H WU that pauses for those 5+ hours loses its Quick Return Bonus and is a waste of time. Given the choice to have a QRB, I think what's needed is a "don't start a WU if you cannot finish it before 13:00" option. It's been suggested here before. My electric rate goes up 7x during peak hours, so I can't run F@H until October.
In Science We Trust
Joe_H
Site Admin
Posts: 7854
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: PCI-e bandwidth/capacity limitations

Post by Joe_H »

A WU that is paused for 5 hours will only lose a relatively small portion of its QRB in most cases, as the deadlines are in days, not hours. That might be a waste of time in your figuring, but it is not the complete loss of QRB you are describing.

The scripts that exist can also be modified to set folding slots to Finish, so a WU started after a given time will not be followed by another download.

As for your suggestion, yes, it has been suggested before, and the reasons it is very unlikely ever to be implemented have been given by a number of people. Making WU assignments meet that kind of criterion would add a level of complexity to the client, the servers, and the record keeping that would require a programming effort that could not be justified.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: PCI-e bandwidth/capacity limitations

Post by bruce »

Aurum wrote:And every F@H WU that pauses for those 5+ hours loses its Quick Return Bonus and is a waste of time.
As Joe_H has said, that's false. Here's an example.

(I didn't recheck these calculations, so if I made a mistake, let me know and I'll correct it.)

I have a WU running on my GTX 960 which was assigned 2018-08-22T20:17:13Z and will reach its timeout 2018-08-29T20:17:13Z. It is now 2018-08-23T00:09:46Z and it's projected to finish in 32 minutes. With a timeout of 7.00 days and a projected completion of 0.38 days, I expect to earn the baseline points of 4879 and a bonus of 16807, for a total of 21686 points.

ASSUMING I had shut down for 5 hours, I would have completed the WU in 0.588 days instead of 0.38 days and would only have received 12564 bonus points, for a total of 17443. Yes, that's significantly less, but hardly zero bonus. Moreover, I might have started it early in the day and completed it without the 5-hour penalty, ESPECIALLY if I use FINISH for any WU that's expected to complete not long after the beginning of your 5-hour peak window.
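As a cross-check, these numbers are consistent with the standard QRB formula, points = base * max(1, sqrt(k * timeout / elapsed)). Here is a minimal Python sketch; note that k is back-calculated from the undelayed case above rather than taken from the project description, so the delayed case only matches to within rounding.

Code: Select all
#!/usr/bin/env python3
"""Reproduce the QRB arithmetic in the example above.

Sketch of the standard FAH bonus formula,
    points = base * max(1, sqrt(k * timeout / elapsed)),
with k back-calculated from the 0.38-day case instead of read from
the project description, so the 0.588-day case only matches the
quoted figures to within rounding.
"""
from math import sqrt

BASE = 4879     # baseline points for the WU
TIMEOUT = 7.00  # days

# Back out k from the undelayed case: 21686 total points at 0.38 days.
k = (21686 / BASE) ** 2 * 0.38 / TIMEOUT

for elapsed in (0.38, 0.588):
    total = round(BASE * max(1.0, sqrt(k * TIMEOUT / elapsed)))
    print(f"elapsed {elapsed:.3f} d: total {total}, bonus {total - BASE}")
# elapsed 0.380 d: total 21686, bonus 16807
# elapsed 0.588 d: total 17433, bonus 12554  (post quotes 12564)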

Yes, if you intentionally delay a WU, it will reduce your points, but with judicious use of the FINISH option you can reduce that effect.
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

Well, I'm trying to learn more about Linux every day, by necessity. However, I doubt I'll ever be proficient enough to write scripts; I doubt I'll live long enough.
I've been using BOINC and I sure like it. I set a schedule for every rig (Win or Linux) on the Options tab. I saw a web site that uses the BOINC wrapper to host smaller projects and wonder if F@H could run under BOINC. http://www.rechenkraft.net/yoyo/
I've also been running GPUGRID under BOINC and it works great. I recall they talk about having rewritten some code, but I can't find the link at the moment.
BOINC also makes it easy to run more than one WU per GPU, even from different projects.
It seems to me that under BOINC the PCIe bus speed is much less important than it is with F@H.
In Science We Trust
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: PCI-e bandwidth/capacity limitations

Post by bruce »

Many years ago, FAH initiated a development effort to see if it could run under BOINC. That project was not completed because FAH and BOINC have entirely different concepts of priority.

* In FAH, rapid completion of WUs is essential to the assignment process, to the scientific value of the work, and consequently to the points awarded. (Unnecessarily running WUs twice is a bad thing.)

* In BOINC, concurrently running multiple WUs (which necessarily delays all of them by dividing the resources) is considered a good thing.

PCIe speed is only a small part of FAH's need for speed.
ProDigit
Posts: 242
Joined: Sun Dec 09, 2018 10:23 pm

Re: PCI-e bandwidth/capacity limitations

Post by ProDigit »

Food for thought.
I still have to read the whole thread.
My motherboard has 1x PCIe x16 slot and 2x PCIe x1 slots.

I will probably put a GTX 1050 in the x16 slot (perhaps in the future this will be upgraded to a 2000-series card).

My interest is in the x1 slots.
With risers, I'm interested in where the threshold is for the GPU:
which GPU processes data faster than a PCIe x1 slot can feed it (causing the GPU to idle below 90-95% load)?

I went with a passive GT 1030 card, hoping it will be a good match for the slower PCIe x1 slot, based on some older posts.
The GTX 960 would use 55% of a PCIe 3.0 x8 slot's bandwidth,
so it should saturate a PCIe x4 slot's bandwidth.

The GT 1030 has, I would estimate, 1/4 the shaders, so probably needs 1/4 the bandwidth of the GTX 960; thus I hope it'll be a good match for the PCIe x1 slot.
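A quick sanity check of that arithmetic, as a sketch: the 55% figure is the estimate quoted above, the factor of 1/4 is the shader-count assumption, and the per-lane rate is nominal PCIe throughput after encoding overhead.

Code: Select all
#!/usr/bin/env python3
"""Back-of-envelope check of the GT 1030 bandwidth estimate above.

Assumptions: ~0.985 GB/s usable per gen3 lane; the GTX 960 uses 55%
of a gen3 x8 slot (the estimate quoted above); the GT 1030 needs
about 1/4 of that traffic.
"""
GBS_PER_GEN3_LANE = 0.985           # approx. usable GB/s per lane

gtx960_need = 0.55 * 8 * GBS_PER_GEN3_LANE   # ~4.33 GB/s
gt1030_need = gtx960_need / 4                # ~1.08 GB/s

print(f"GTX 960 est. need {gtx960_need:.2f} GB/s vs "
      f"gen3 x4 = {4 * GBS_PER_GEN3_LANE:.2f} GB/s")  # saturated
print(f"GT 1030 est. need {gt1030_need:.2f} GB/s vs "
      f"gen3 x1 = {GBS_PER_GEN3_LANE:.2f} GB/s")      # borderline

So on this rough estimate the GT 1030 sits right at the edge of a gen3 x1 link, and a gen2 x1 link (half the rate) would fall short.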
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: PCI-e bandwidth/capacity limitations

Post by foldy »

The PCIe bandwidth limits mostly occur on Windows, where the rule for fast GPUs is not to go below PCIe 3.0 x4. On Linux, PCIe bandwidth doesn't matter much, though at PCIe x1 there is still a 20% performance loss for fast GPUs, which seems OK. For slow GPUs, PCIe speed also doesn't matter much.

So the question is do you use Windows or Linux?
ProDigit
Posts: 242
Joined: Sun Dec 09, 2018 10:23 pm

Re: PCI-e bandwidth/capacity limitations

Post by ProDigit »

It'll be a dual-boot system.
Windows 10 Home 64-bit is for my own projects and games,
and for Linux, I'll still have to decide on a distribution.

I'm currently running Xubuntu on one system, but was looking into Intel's Clear Linux.
Not sure if it'll run well for FAH, but people who do run it say that, for their projects, it is about 20% more efficient than regular Linux distributions.

Most of the time, the system will be folding under Linux.
I'm trying to determine the best way to fill the two PCIe x1 slots.
The system was never purchased for folding, but I might as well optimize it for it.
Money is scarce, so I don't want to spend too much.

Two things I'm still looking at are whether those PCIe risers add significant latency, and whether a native PCIe x1 card would make more sense than purchasing risers and an x8/x16 graphics card.
And if I do use risers, what would be the fastest card I can run off a PCIe x1 slot?

Financially, I can't justify anything above a second 1050.
Either 2x 1050, or a 1050 and 2x 1030 cards (as I have two PCIe x1 slots).


Another thing I am wondering: wouldn't it be better to find a converter from one PCIe x16 slot to four x4 slots, if such a thing exists?
It might make more sense to run 4 GPUs off one PCIe x16 slot than to try to work with the PCIe x1 slots...
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: PCI-e bandwidth/capacity limitations

Post by foldy »

I would go with 2x GTX 1050 only, because each GPU also needs one CPU thread to feed it, and you would still have one PCIe x1 slot free for a future 3rd GPU. I don't think there are any GTX 1050 cards made for a PCIe x1 slot, so you need the x1 riser; I have also never heard of a splitter from x16 to 4x x4. Riser latency doesn't matter: you get full x1 speed, which is fast enough for a GTX 1050 on Linux. Or, if that is an option for you, sell your GTX 1050 and upgrade to a GTX 1060, which gets triple the PPD.