PCI-e bandwidth/capacity limitations

QuintLeo
Posts: 52
Joined: Sat Jun 17, 2017 5:43 am

Re: PCI-e bandwidth/capacity limitations

Post by QuintLeo »

I notice some significant differences on my 3-card AMD rigs (all slots PCIe 2.0, A10-7860K or A10-7890K quad-core CPUs) between the pair of slots that have 8 PCIe lanes (physically x16, but they have to "share" and drop to x8/x8 with two cards installed) and the slot that has only 4 physical PCIe lanes. Even when the cards are all clocked the same, the x4 slot shows a few percent lower folding PPD when running the same unit as one of the other slots.

My one and only Intel rig (G4600 dual-core CPU) is worse. All slots are PCIe 3.0, but the #2 and #3 slots have 4 physical PCIe lanes while the #1 slot has 16, and the observed PPD difference (from a much smaller set of observations; this machine is less than 2 days old) suggests the gap is close to 15%.

The issue seems to be purely the available bandwidth on the PCIe slots. I've run a single-card rig on a Sempron 3000+ (old Socket 754 single-core) with a PCIe 2.0 x16 slot, and that rig consistently managed somewhat higher PPD than the #1 slots on any of my 3-card rigs (except the Intel one, but I can't compare similar work units there because I moved the card out of that single-card rig to the Intel rig when I built it).
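The slot differences described above can be put in rough numbers. A minimal sketch; the per-lane rates are the usual approximate effective figures (after encoding overhead), not measurements from these posts:

```python
# Approximate usable PCIe throughput per lane, in MB/s, after encoding
# overhead (8b/10b for gen 1-2, 128b/130b for gen 3).
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def slot_bandwidth(gen, lanes):
    """Rough usable bandwidth of a PCIe slot in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

# The AMD rigs above: PCIe 2.0 x8 vs x4 -- the x4 slot has half the bandwidth.
print(slot_bandwidth(2, 8))   # 4000
print(slot_bandwidth(2, 4))   # 2000

# The Intel rig above: PCIe 3.0 x16 vs x4 -- a 4x gap.
print(slot_bandwidth(3, 16))  # 15760
print(slot_bandwidth(3, 4))   # 3940
```

Whether a given bandwidth gap costs PPD depends on the card: a fast GPU finishing many short kernels per second is hit much harder by a slow link than a slow one.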
SteveWillis
Posts: 409
Joined: Fri Apr 15, 2016 12:42 am
Hardware configuration: PC 1:
Linux Mint 17.3
three gtx 1080 GPUs One on a powered header
Motherboard = [MB-AM3-AS-SB-990FXR2] qty 1 Asus Sabertooth 990FX(+59.99)
CPU = [CPU-AM3-FX-8320BR] qty 1 AMD FX 8320 Eight Core 3.5GHz(+41.99)

PC2:
Linux Mint 18
Open air case
Motherboard: ASUS Crosshair V Formula-Z AM3+ AMD 990FX SATA 6Gb/s USB 3.0 ATX AMD
AMD FD6300WMHKBOX FX-6300 6-Core Processor Black Edition with Cooler Master Hyper 212 EVO - CPU Cooler with 120mm PWM Fan
three gtx 1080,
one gtx 1080 TI on a powered header

Re: PCI-e bandwidth/capacity limitations

Post by SteveWillis »

boristsybin wrote:and the price is comparable to EZDYI and TT, and shipping is not free.
But it seems that leheat48 produces those risers and can make them any length, straight or angled, and maybe with a power connector (I've already added a power connector to my riser to reduce the power load on the motherboard).
I can't find anything about "leheat48". Do you have a URL, or is that maybe a typo? Thanks

1080 and 1080TI GPUs on Linux Mint
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

SteveWillis wrote:
boristsybin wrote:and the price is comparable to EZDYI and TT, and shipping is not free.
But it seems that leheat48 produces those risers and can make them any length, straight or angled, and maybe with a power connector (I've already added a power connector to my riser to reduce the power load on the motherboard).
I can't find anything about "leheat48". Do you have a URL, or is that maybe a typo? Thanks
They're listed on Amazon: https://www.amazon.com/gp/product/B01CE ... XZZKQVDB4S
I just ordered these 1x risers hoping they'll fit in the 1x slot tucked under an EVGA card plugged into a 16x slot.
https://www.amazon.com/gp/product/B017Q ... UTF8&psc=1
In Science We Trust
boristsybin
Posts: 50
Joined: Mon Jan 16, 2017 11:40 am
Hardware configuration: 4x1080Ti + 2x1050Ti
Location: Russia, Moscow

Re: PCI-e bandwidth/capacity limitations

Post by boristsybin »

SteveWillis wrote:I can't find anything about "leheat48". Do you have a URL, or is that maybe a typo? Thanks
I mistyped :) liheat48 is correct; it's a seller on eBay: http://www.ebay.com/usr/liheat48
boristsybin
Posts: 50
Joined: Mon Jan 16, 2017 11:40 am
Hardware configuration: 4x1080Ti + 2x1050Ti
Location: Russia, Moscow

Re: PCI-e bandwidth/capacity limitations

Post by boristsybin »

Aurum wrote:I just ordered these 1x risers hoping they'll fit in the 1x slot tucked under an EVGA card plugged into a 16x slot.
https://www.amazon.com/gp/product/B017Q ... UTF8&psc=1
I can definitely recommend this kind of riser for low-speed cards (1050 Ti, RX 460 and lower): https://www.amazon.com/PCLE-VER-006C-PC ... =pci+riser

They are tested and approved by a lot of cryptominers, and by me too. Flexible, with an external power connector to reduce the power load through the motherboard, and easy to change the length by simply swapping the USB 3.0 cable. Must have, strongly recommended, etc., etc... :)
I have a 1050 Ti folding on that kind of riser connected to PCIe x1 2.0, and it produces ~150 kPPD. The same card directly in a PCIe x16 2.0 slot produces the same numbers.
Last edited by boristsybin on Sun Jun 18, 2017 7:47 am, edited 1 time in total.
ComputerGenie
Posts: 236
Joined: Mon Dec 12, 2016 4:06 am

Re: PCI-e bandwidth/capacity limitations

Post by ComputerGenie »

boristsybin wrote:I can definitely recommend to anyone this kind of riser ...

They are tested and approved by a lot of cryptominers, and by me too. Flexible, with an external power connector to reduce the power load through the motherboard, and easy to change the length by simply swapping the USB 3.0 cable. Must have, strongly recommended, etc., etc... :)
I have a 1050 Ti folding on that kind of riser connected to PCIe x1 2.0, and it produces ~150 kPPD. The same card directly in a PCIe x16 2.0 slot produces the same numbers.
They work great for "smaller" cards, but if you get into the 1080 range, then you will lose large amounts of PPD.
boristsybin
Posts: 50
Joined: Mon Jan 16, 2017 11:40 am
Hardware configuration: 4x1080Ti + 2x1050Ti
Location: Russia, Moscow

Re: PCI-e bandwidth/capacity limitations

Post by boristsybin »

ComputerGenie wrote:They work great for "smaller" cards, but if you get into the 1080 range, then you will lose large amounts of PPD.
Oh yes, you're right. I recommend those risers for low-speed cards (under 200 kPPD).
NGBRO
Posts: 12
Joined: Mon Apr 08, 2013 10:49 am

Re: PCI-e bandwidth/capacity limitations

Post by NGBRO »

I recently got a GTX 1060 Mini 3GB to try folding full-time. I installed it in a slapped-together system with a Q9400, and I see that the GPU is running at PCIe 1.1 x16, which seems to be the max supported by the mobo and CPU. Bear in mind the GTX 1060 is a PCIe 3.0 card.

My PPD seems to be hovering around 220K at stock speed (with thermal throttling). That seems pretty low compared to the numbers I've seen around, which are about 270-300+K. Is running at PCIe 1.1 x16 bottlenecking performance?
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

There are some pretty WUs running now; maybe you caught one of those.
x16 1.0 ~ x8 2.0 ~ x4 3.0, so you may be taking a bite out of your 1060's PPD.
I've got a rig with four 1060 6GB cards at x16 2.0, x8 2.0, x8 2.0 and x1 2.0.
Their PPD: 348477, 331888, 266147 and 206888.
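A quick sketch of how those four slots scale, using the PPD figures from this post (WU-to-WU variation likely explains the gap between the two x8 slots):

```python
# PPD by slot width from the four-card 1060 6GB rig above (all PCIe 2.0).
slots = [(16, 348477), (8, 331888), (8, 266147), (1, 206888)]

baseline = slots[0][1]  # the x16 slot
for lanes, ppd in slots:
    print(f"x{lanes} 2.0: {ppd / baseline:.0%} of the x16 slot's PPD")
```

Even the x1 slot still delivers roughly 60% of the x16 slot's PPD on this card, which matches the rule of thumb that narrower links cost a fraction of performance rather than all of it.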
In Science We Trust
NGBRO
Posts: 12
Joined: Mon Apr 08, 2013 10:49 am

Re: PCI-e bandwidth/capacity limitations

Post by NGBRO »

Aurum wrote:There are some pretty WUs running now; maybe you caught one of those.
x16 1.0 ~ x8 2.0 ~ x4 3.0, so you may be taking a bite out of your 1060's PPD.
I've got a rig with four 1060 6GB cards at x16 2.0, x8 2.0, x8 2.0 and x1 2.0.
Their PPD: 348477, 331888, 266147 and 206888.
So 1.1 x16 is about the same as 2.0 x8?
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slots)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: PCI-e bandwidth/capacity limitations

Post by foldy »

Yes. One way to speed it up is to use Linux instead of Windows, as the Nvidia driver there uses less PCIe bandwidth.

Since you wrote about thermal throttling, putting a case fan next to the GPU to help move the heat away may increase the GPU clock.
If the heat problem is solved, you can overclock the GPU up to 2000 MHz. That should push your PPD.
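To confirm what the card is actually doing (negotiated link width, current clocks, and why it is throttling), the standard nvidia-smi queries are enough. A sketch, to be run on the folding rig itself:

```shell
# Current SM clock, temperature, and the PCIe link the card negotiated
nvidia-smi --query-gpu=name,clocks.sm,temperature.gpu,pcie.link.gen.current,pcie.link.width.current --format=csv

# Detailed performance state, including "Clocks Throttle Reasons"
# (thermal slowdown, power cap, etc.)
nvidia-smi -q -d PERFORMANCE
```

If the PERFORMANCE report shows a thermal slowdown reason active at 82 °C, that confirms cooling rather than the PCIe link is what is capping the clock.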
NGBRO
Posts: 12
Joined: Mon Apr 08, 2013 10:49 am

Re: PCI-e bandwidth/capacity limitations

Post by NGBRO »

foldy wrote:Yes. One way to speed it up is to use Linux instead of Windows, as the Nvidia driver there uses less PCIe bandwidth.

Since you wrote about thermal throttling, putting a case fan next to the GPU to help move the heat away may increase the GPU clock.
If the heat problem is solved, you can overclock the GPU up to 2000 MHz. That should push your PPD.
But one thing I noticed is that even when it first starts a task after idling at ~50 °C, the GPU load is only ~85% and ~56% TDP, even though the frequency is up at 1880 MHz, way past the GPU Boost frequency. PPD at the beginning is ~270-275K. Then, when it hits 82 °C, it starts throttling down over time and settles at the base 1506 MHz, with PPD ~215K, load still around 85%, and TDP ~53%.

Is the max load supposed to be just that, or is it caused by a bottleneck?
Last edited by NGBRO on Sun Jun 18, 2017 3:05 pm, edited 1 time in total.
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

NGBRO, What's the model of your MB? Do you have a link to the manual?
Last edited by Aurum on Sun Jun 18, 2017 3:07 pm, edited 2 times in total.
In Science We Trust
NGBRO
Posts: 12
Joined: Mon Apr 08, 2013 10:49 am

Re: PCI-e bandwidth/capacity limitations

Post by NGBRO »

Aurum wrote:NGBRO, What's the model of your MB? Do you have a link to the manual?
I got it second-hand from someone else. It's an HP Napa-GL8E with a C2Q Q9400 in it and 4 GB of DDR2.
Aurum
Posts: 296
Joined: Sat Oct 03, 2015 3:15 pm
Location: The Great Basin

Re: PCI-e bandwidth/capacity limitations

Post by Aurum »

Have you watched it run with Windows Task Manager (Performance tab) to see if the Q9400 is maxing out :?:
In Science We Trust