PCI-e bandwidth/capacity limitations

Re: PCI-e bandwidth/capacity limitations

Postby boristsybin » Fri Mar 09, 2018 3:46 pm

I don't see any PPD increase with the power limit (PL) at 220 W; tasks 94xx and 117xx produce the same 1M+ PPD on both cards, so I set the PL back to 180 W.

So, I have tested the 1080 Ti at PCIe v2.0 x1 on the Ryzen platform mentioned above:
task 9415 at 20% progress shows 856k PPD, and nvidia-settings shows 11% PCIe link load.

Well, a 1070 will fold at full power on such a link, and maybe even a 1080 will.
But a 1080 Ti will not; it requires at least PCIe v2.0 x4 (maybe x2, if that kind exists).
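If you want to log that link load from a terminal instead of the nvidia-settings GUI, newer drivers also report PCIe throughput through nvidia-smi dmon. A rough Python sketch (untested here; check nvidia-smi dmon -h for the exact columns on your driver version):

Code: Select all
#!/usr/bin/env python3
# Rough sketch: average the PCIe traffic of one GPU while it folds,
# so different slots/risers can be compared. Assumes `nvidia-smi dmon -s t`
# prints rxpci/txpci columns in MB/s (driver dependent).
import subprocess

# one sample per second, 60 samples, GPU index 0 (-i selects the GPU)
proc = subprocess.Popen(
    ["nvidia-smi", "dmon", "-s", "t", "-i", "0", "-c", "60"],
    stdout=subprocess.PIPE, text=True)

samples = []
for line in proc.stdout:
    if line.startswith("#"):          # skip header/unit lines
        continue
    fields = line.split()             # expected: gpu rxpci txpci
    try:
        samples.append(int(fields[1]) + int(fields[2]))
    except (IndexError, ValueError):  # "-" means counter unsupported
        continue

if samples:
    print("avg PCIe traffic: %.0f MB/s over %d samples"
          % (sum(samples) / len(samples), len(samples)))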
boristsybin
 
Posts: 50
Joined: Mon Jan 16, 2017 11:40 am
Location: Russia, Moscow

Re: PCI-e bandwidth/capacity limitations

Postby foldy » Tue Mar 13, 2018 1:52 pm

That is important, especially for those mining mainboards that have only PCIe v3.0 x1 slots. With Linux, using a GTX 1070 would be feasible if the CPU is big enough; a 4-core/8-thread CPU would be good for up to 8 GPUs. So a mining rig with 8x GTX 1070 can be converted to a folding rig just by replacing the often-used dual-core CPU with an Intel i7 and running Linux.
foldy
 
Posts: 1178
Joined: Sat Dec 01, 2012 3:43 pm

Re: PCI-e bandwidth/capacity limitations

Postby antropofob » Wed Aug 22, 2018 9:48 am

Greetings,
According to this user, there is a significant PPD hit when running four 1060s on risers in x1 slots:
viewtopic.php?f=38&t=28847&p=296023&hilit=1060#p296023
Could somebody confirm? Or should I plug just two of them into the x16 slots?
Motherboard: https://www.gigabyte.com/Motherboard/GA ... -rev-2x#ov

The OS would be Linux.
Thanks.
antropofob
 
Posts: 47
Joined: Mon Aug 22, 2011 8:03 am

Re: PCI-e bandwidth/capacity limitations

Postby foldy » Wed Aug 22, 2018 4:47 pm

This mainboard has PCIe 2.0 only, which halves the per-lane bandwidth compared to PCIe 3.0.

1 x PCI Express x16 slot, running at PCIe 2.0 x16 (PCIEX16)
1 x PCI Express x16 slot, running at PCIe 2.0 x4 (PCIEX4)
3 x PCI Express x1 slots, running at PCIe 2.0 x1

It doesn't matter that the other user has four GPUs; each GPU's speed stands on its own. But you need one CPU core per GPU to feed it, and maybe another free CPU thread for the OS on top of that.

I would recommend just testing it: put one 1060 in the PCIEX16 slot and another 1060 in a PCIe x1 slot with a riser, then see whether there is a significant speed difference on similar FAH project work units, and whether it is still worth it. One way to compare is sketched below.

I would guess that running in the x16 slots only is better.
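If you want numbers rather than a feeling, the v7 client log contains progress lines like "00:05:33:WU01:FS00:0x21:Completed 50000 out of 2500000 steps (2%)". A rough Python sketch (untested; the log path below is the usual Linux default and an assumption) that turns those into a per-slot time-per-percent figure:

Code: Select all
#!/usr/bin/env python3
# Rough sketch: estimate time per 1% of a WU for each folding slot
# from the v7 client log, to compare an x16 slot against an x1 riser.
import re
from datetime import datetime, timedelta

LOG = "/var/lib/fahclient/log.txt"    # assumed default Linux location
pat = re.compile(
    r"^(\d\d:\d\d:\d\d):WU\d+:(FS\d+):0x\w+:"
    r"Completed (\d+) out of (\d+) steps")

last = {}                             # slot -> (timestamp, percent)
for line in open(LOG):
    m = pat.match(line)
    if not m:
        continue
    t = datetime.strptime(m.group(1), "%H:%M:%S")
    slot = m.group(2)
    pct = 100.0 * int(m.group(3)) / int(m.group(4))
    if slot in last:
        t0, p0 = last[slot]
        # handle the clock wrapping past midnight
        dt = t - t0 if t >= t0 else t - t0 + timedelta(days=1)
        if pct > p0:
            print("%s: %s per 1%%" % (slot, dt / (pct - p0)))
    last[slot] = (t, pct)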
foldy
 
Posts: 1178
Joined: Sat Dec 01, 2012 3:43 pm

Re: PCI-e bandwidth/capacity limitations

Postby Aurum » Wed Aug 22, 2018 4:59 pm

I have 2 similar rigs where I put a 1060 in each x16 2.0 slot and 2 more up on x1 risers. Lousy for F@H but OK for Asteroids, Milkyway, or Einstein. Like foldy said, if you don't have 4 threads, that will slow you down some more.

I sure wish F@H was compatible with Time-of-Use scheduling.
Aurum
 
Posts: 298
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Postby bruce » Wed Aug 22, 2018 7:48 pm

Aurum wrote:I sure wish F@H was compatible with Time-of-Use scheduling.

Please explain your statement. Either I don't understand it, or it's not true.

While there is no built-in feature that pauses/unpauses specific slots, it's not all that difficult to write a script that sends a pause or unpause to a specific slot at specific times of day. I've even seen some published that can be adapted to your environment; a minimal sketch follows.
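For example, something like this (a minimal sketch, assuming the client's default command port 36330 on localhost and no password set; fah_slot.py is just an illustrative name) can be driven from cron:

Code: Select all
#!/usr/bin/env python3
# Rough sketch: send pause/unpause/finish for one slot to the v7 client's
# command socket on localhost:36330 (the same interface FAHControl uses).
# Example crontab entries:
#   0 13 * * *  /usr/bin/python3 /path/to/fah_slot.py pause 01
#   0 18 * * *  /usr/bin/python3 /path/to/fah_slot.py unpause 01
import socket
import sys

def send_command(cmd):
    with socket.create_connection(("127.0.0.1", 36330), timeout=10) as s:
        s.recv(4096)                      # swallow the telnet-style banner
        s.sendall((cmd + "\n").encode())
        s.sendall(b"quit\n")

if __name__ == "__main__":
    if len(sys.argv) != 3 or sys.argv[1] not in ("pause", "unpause", "finish"):
        sys.exit("usage: fah_slot.py pause|unpause|finish <slot>")
    send_command("%s %s" % (sys.argv[1], sys.argv[2]))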
bruce
 
Posts: 21696
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: PCI-e bandwidth/capacity limitations

Postby Aurum » Wed Aug 22, 2018 8:28 pm

And every F@H WU that pauses for those 5+ hours loses its Quick Return Bonus and is a waste of time. Given the choice, to keep the QRB I think what's needed is a "don't start a WU if you cannot finish it before 13:00" option. It's been suggested here before. My electric rate goes up 7x during peak hours, so I can't run F@H until October.
Aurum
 
Posts: 298
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Postby Joe_H » Wed Aug 22, 2018 8:56 pm

A WU that is paused for 5 hours will only lose a relatively small portion of its QRB in most cases, as the deadlines are in days, not hours. That might be a waste of time by your figuring, but it is not the complete loss of QRB you are describing.

The scripts that exist can also be modified to set folding slots to Finish, so a WU started after a given time will not be followed by another download (see the example below).

As for your suggestion: yes, it has been suggested before, and the reasons it is very unlikely ever to be implemented have been given by a number of people. Making WU assignments meet that kind of criteria would add a level of complexity to the client, the servers, and the record keeping that could not justify the programming effort.
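For instance, with the wrapper sketched earlier in the thread (the fah_slot.py name is only illustrative), one cron line switches a slot to Finish ahead of the peak window:

Code: Select all
# finish the running WU but download no new work after 12:00
0 12 * * * /usr/bin/python3 /path/to/fah_slot.py finish 01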

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Joe_H
Site Admin
 
Posts: 4215
Joined: Tue Apr 21, 2009 4:41 pm
Location: W. MA

Re: PCI-e bandwidth/capacity limitations

Postby bruce » Thu Aug 23, 2018 12:53 am

Aurum wrote:And every F@H WU that pauses for those 5+ hours loses its Quick Return Bonus and is a waste of time.

As Joe_H has said, that's false. Here's an example.

(I didn't recheck these calculations, so if I made a mistake, let me know and I'll correct it.)

I have a WU running on my GTX 960 which was assigned 2018-08-22T20:17:13Z and which will reach its timeout 2018-08-29T20:17:13Z. It is now 2018-08-23T00:09:46Z and it's projected to finish in 32 minutes. With a timeout of 7.00 days and a projected completion of 0.38 days, I expect to earn the baseline points of 4879 and a bonus of 16807, for a total of 21686 points.

ASSUMING I had shut down for 5 hours, I would have completed the WU in 0.588 days instead of 0.38 days and would have received only 12564 bonus points, for a total of 17443. Yes, that's significantly less, but hardly zero bonus. Moreover, I might have started it early in the day and completed it without the 5-hour penalty, ESPECIALLY if I use FINISH for any WU that's expected to complete not long after the beginning of your 5-hour peak window.

Yes, if you intentionally delay a WU, it will reduce your points, but with judicious use of the FINISH option you can reduce that effect.
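For reference, those numbers follow the published QRB formula, points = base * max(1, sqrt(k * timeout / elapsed)). A quick sketch (k is back-solved from the totals above, so it is illustrative only, not an official per-project constant):

Code: Select all
#!/usr/bin/env python3
# Rough sketch of the QRB arithmetic used in the example above.
from math import sqrt

base = 4879      # baseline points for this WU
timeout = 7.0    # days
k = 1.073        # back-solved from 21686 total points at 0.38 days

def total_points(elapsed_days):
    return base * max(1.0, sqrt(k * timeout / elapsed_days))

print(round(total_points(0.38)))    # about 21690: returned promptly
print(round(total_points(0.588)))   # about 17440: same WU plus a 5-hour pause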
bruce
 
Posts: 21696
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: PCI-e bandwidth/capacity limitations

Postby Aurum » Sun Sep 09, 2018 5:02 pm

Well, I'm trying to learn more about Linux every day, by necessity. However, I doubt I'll ever be proficient enough to write scripts, as I doubt I'll live long enough.
I've been using BOINC and I sure like it. I set a schedule for every rig (Win or Linux) on the Options tab. I saw a web site that uses the BOINC Wrapper to host smaller projects, and I wonder if F@H could run under BOINC: http://www.rechenkraft.net/yoyo/
I've also been running GPUGRID under BOINC and it works great. I recall they talk about having rewritten some code, but I can't find the link to that at the moment.
BOINC also makes it easy to run more than one WU per GPU, even from different projects.
It seems to me that PCIe bus speed matters much less there than it does with F@H.
Aurum
 
Posts: 298
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Postby bruce » Sun Sep 09, 2018 8:27 pm

Many years ago, FAH initiated a development effort to see if it could run under BOINC. That project was not completed because FAH and BOINC have entirely different concepts of priority.

* In FAH, rapid completion of a WU is essential to the assignment process, to the scientific value of the work, and consequently to the points awarded. (Unnecessarily running two WUs at once is a bad thing.)

* In BOINC, concurrently running multiple WUs (which necessarily delays all of them by dividing the resources) is a good thing.

PCIe speed is only a small part of FAH's need for speed.
bruce
 
Posts: 21696
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.
