Shared PCI Slots on Motherboard

A forum for discussing FAH-related hardware choices and info on actual products (not speculation).

SandyG
Posts: 108
Joined: Mon Apr 13, 2020 11:15 pm
Hardware configuration: 2 Shuttle i9's with RTX3060, Old server mobo, Mint Linux with 2 RTX3090's and 2 RTX 4090. Dell 7920 RTX3060, RTX4070


Shared PCI Slots on Motherboard

Post by SandyG »

I've been running a Gigabyte MW51-HP0 with the Intel C422 chipset (https://www.gigabyte.com/Enterprise/Ser ... HP0-rev-1x).

I've been cleaning up GPUs and upgrading. I have 4 GPUs, all in slots that are not sharing the bus: 1 in the slot that comes directly off the CPU (not shared), and 3 in slots that belong to shared pairs but are spread out so no pair has both slots occupied (they are plugged into every other connector per the Gigabyte diagrams).

2 Questions -

1. The PCIe slot coming directly off the CPU is where my HDMI video output is for my Linux install (Mint/Ubuntu). I hit a couple of the same work units, and it looks like the card that is driving the O/S video output is slower on the same project. Not sure if work units within a project can vary. I'll have to keep watching and see if I can catch more matching work units on the cards that are running. I guess the question is: is there a cost to the O/S using a card for video?

2. I tried a couple of times and could not get FAH to see a card in a shared slot. I plugged in a 3060 and then a 3090 and neither showed up in the log.txt as being recognized. Before I start messing with it, does anyone know whether FAH will work on a chipset (C422) that supports shared slots?

Thanks
Sandy
BobWilliams757
Posts: 497
Joined: Fri Apr 03, 2020 2:22 pm
Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X

Re: Shared PCI Slots on Motherboard

Post by BobWilliams757 »

Hmmm...

Looking at the link, the MB seems to have plenty of PCIe lanes available, and with the right slot options it should support your GPUs.

Some questions though. Are you attempting to install 4 GPUs in total, or 3? If 4, you may have to run two of them at x8 rather than x16, otherwise you can run out of PCIe lanes. Since you mention the speed comparison for video output vs. none, I'm assuming you have at least two GPUs working. Which ones are they and what slots are they in? I would assume slot 7 is a no-brainer and would work, as it's a single-slot option at x16. The slot 5 and 6 pair also looks easy enough... BUT it might be worth digging into the manual. It could be that the x8 or x16 selection has an impact on the remaining slots 1-4.

In theory, with slot 7 at x16 and slot 5 at x16, you should still have enough PCIe lanes for the last two cards to go into slots and run at x8. I'm not sure how that PEC chip impacts those slots, if at all, but it might somehow limit how they can be used, as well as impact the other slots.
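
For a rough lane budget (assuming the Xeon W CPUs this board takes expose 48 PCIe 3.0 lanes from the CPU, which I believe they do):

16 (slot 7) + 16 (slot 5 or 6) + 8 + 8 (the last two cards) = 48 lanes, right at the limit.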


As for question 1, I have no direct input since I only have 1 GPU, but I have certainly seen project variations from work unit to work unit. I would think the overhead for the drivers and such would be the same whether a card drives the output or not; however, actually using the video output no doubt slows the GPU down for folding. Not as much as might be expected, and this might vary between GPUs as well. Even light work such as browsing or checking HFM stats will rob a few PPD, and video conferencing for me usually robs about 40-60K PPD during the time I am connected. I would think that if, say, video conferencing takes up 20W worth of effort, it would be the same as reducing my power limit by around 20W... but I'm really not sure, since the buses, CPU, etc. all come into the picture.

As for question 2, I would think the second GPU should work readily in slot 5 or 6. How many are already in and operating when you run into issues?
Fold them if you get them!

Re: Shared PCI Slots on Motherboard

Post by SandyG »

Thanks for info...

I currently have 4 GPUs running, each in a non-shared slot. Slot 7 is by design not shared; that is where I mentioned my video is coming from. I got lucky and caught another pair of work units on the matching cards, one with the video and one without, and really didn't see much difference. So I'll make the assumption that work units within the same project are not always going to make roughly the same points.

For the speed of the slots, I don't think it would be a huge hit; a PCIe x1 slot cost at most 20% vs. PCIe x16, so some loss but not much compared to having another card folding. My guess is that sharing 2 slots makes both slots run at x8, but it's kinda unclear.

On the other comments, I have a total of 4 cards running, 2 RTX 3090's and 2 RTX 4090's, in the following PCIe slots:

Slot 7 (single non-shared slot directly off the CPU, video on this card) - 3090
Slot 6 - 3090
Slot 4 - 4090
Slot 2 - 4090

I was trying to add a spare 3060 I had lying around to one of the slots (PCIe slot 5) that seems to be shared with one of the 3090's. I did not see the card being recognized when looking at the logs. I should probably just try it again and see if it's seen. I poked around the BIOS and saw nothing that led me to believe something needed to be set up, so I will just give it a go again. Otherwise it seems to be working really well after 2 days.

Sandy

Re: Shared PCI Slots on Motherboard

Post by BobWilliams757 »

I should have known with your averages... you're simply trying to install the 5th GPU onto the motherboard! :D

I almost suspect you won't be able to get another one recognized. It appears to me that the slot options are too fast, and since there are no slower options available you run out of PCIe lanes, because all that is left is x16 slots. Unless the manual says you can somehow lower them, I'm not sure there is any other hope. If you could split the remaining lanes, it might be possible to use risers and run both slots of a pair at x8, rather than one at x8 and one at x16.

Even then with the way they are designated and shown with the "switch" in line, I suspect only one of the slots on those pairs can be used at a time.

I really hope I'm wrong because I want you to have your 5 GPU system up and running. :mrgreen:


Is this all in a case, some type of rack setup, or what?

Re: Shared PCI Slots on Motherboard

Post by SandyG »

Yeah, not sure how the switch on the PCIe slots works. I would expect that when the O/S enumerates the slots it has a way to cause the shared switch to toggle to the other slot, but who knows. I might give it one more shot and see if it gets picked up, but no rush on that.

The chassis is a $39 Amazon bitcoin (fill in your coin, I guess) mining chassis. It was very inexpensive, but it supports 2 power supplies and up to 6 cards. I have a second one with a mining motherboard, but the performance cost of PCIe x1 is pretty high, especially with the faster cards. The mining motherboard has 9 PCIe x1 slots, so it is very slow and wasteful of the fast cards. It worked, but it was not a very good use of the hardware. The current rig works well now; I have 2 1600-watt supplies running off 240V and they are not working hard at all. We'll see how the summer goes as it gets hot.

Thanks for the info and thoughts. At some point I have to see how much power the system is pulling; I need to make a simple break-out box for the cord so I can use the clamp probe to read the current. I have a 120V adapter but no 240V one.
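
(Rough math for when I can measure it: watts are volts times amps, so for example a 10 A clamp reading on the 240V feed would be roughly 240 × 10 = 2,400 W, ignoring power factor.)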

Done with upgrades for a while, until I can afford to update the 3090's to 4090's, but that's down the road a bit I'm thinking...

Sandy

Re: Shared PCI Slots on Motherboard

Post by SandyG »

After some reading, it seems that the BIOS is responsible for providing the PCIe slot info to Linux, which in turn should present it to FAH. So it might be something that just needs me to try again. I also found that I can look at what Linux sees as PCIe hardware in

Code:

/sys/bus/pci/devices

If it shows up as a device to Linux (not exactly sure what to look for) then FAH should see it. My guess is that it's not showing up. Something to check when I get a chance. I'm going to let it run for a bit to make sure everything is stable, then on the next power-down cycle plug the card back in and see if it shows.
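
A few quick checks to try when I get there (assuming the lspci tool and the NVIDIA driver utilities are installed):

Code:

# list the NVIDIA devices the kernel has enumerated
lspci | grep -i nvidia

# raw list of PCI addresses Linux has detected
ls /sys/bus/pci/devices/

# GPUs the NVIDIA driver has actually picked up
nvidia-smi -L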

More Linux stuff than I want to know about at this point :D

Re: Shared PCI Slots on Motherboard

Post by BobWilliams757 »

That's why I've just avoided Linux; I know I'll end up down the rabbit hole at some point. :roll: There are quite a few Linux users on the forums, but I doubt many of them have had to deal with the issue you are having. Either way, I hope you get that 5th GPU working in the system.

Re: Shared PCI Slots on Motherboard

Post by SandyG »

Linux has been pretty good except for the lack of FAHControl, due to it needing the old Python 2 that newer distros no longer ship, as I understand it. A new version of FAH will fix that someday. But I think you do get a bit more performance than on Windows. I have had no downtime yet to try re-adding the card and seeing if it gets detected by the O/S. It should, but who knows. All good, the numbers are cranking out of the new setup too, so I like that ;)

Sandy
Joe_H
Site Admin
Posts: 7870
Joined: Tue Apr 21, 2009 4:41 pm
Hardware configuration: Mac Pro 2.8 quad 12 GB smp4
MacBook Pro 2.9 i7 8 GB smp2
Location: W. MA

Re: Shared PCI Slots on Motherboard

Post by Joe_H »

The v8 beta does away with FAHControl; all settings, etc. are done through a web interface. There is a version of FAHControl available that runs with Python 3, for those recent versions of Linux which do not support installing legacy Python 2.7. It was done by someone who took the open-source FAHControl code and updated it for Python 3: https://github.com/cdberkstresser/fah-control.

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
calxalot
Site Moderator
Posts: 889
Joined: Sat Dec 08, 2007 1:33 am
Location: San Francisco, CA

Re: Shared PCI Slots on Motherboard

Post by calxalot »

After installing the dependencies, you build with

python3 setup.py build

FAHControl may think it is version 0.0.0, but that hurts nothing.
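
Roughly, the whole sequence looks like this (the clone URL is the repo Joe_H linked; the prerequisites are listed in its README, and the install step is just the standard setuptools one):

Code:

# grab the Python 3 port of FAHControl
git clone https://github.com/cdberkstresser/fah-control.git
cd fah-control

# build, then install system-wide
python3 setup.py build
sudo python3 setup.py install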

Re: Shared PCI Slots on Motherboard

Post by SandyG »

calxalot wrote: Sat May 06, 2023 4:21 am After installing the dependencies, you build with

python3 setup.py build

FAHControl may think it is version 0.0.0, but that hurts nothing.
When you say "installing the dependencies", what do you mean?

If I do the normal install of FAHClient via the package manager, does that not pull in everything needed, or is it that, because Python 3 is being used, a bunch of other dependencies need to be installed for Py3?

I'm putting together another machine and might give it a try, but I will need a bit more help with the steps if you have them (or can point me to them).

Honestly, I really don't mind working with the XML; it's easy enough and I can just SSH in and make the changes. But it would still be nice to have FAHControl.

Sandy

Re: Shared PCI Slots on Motherboard

Post by SandyG »

Joe_H wrote: Sat May 06, 2023 1:59 am The v8 beta does away with FAHControl; all settings, etc. are done through a web interface. There is a version of FAHControl available that runs with Python 3, for those recent versions of Linux which do not support installing legacy Python 2.7. It was done by someone who took the open-source FAHControl code and updated it for Python 3: https://github.com/cdberkstresser/fah-control.
I came across that on GitHub. I keep waiting for v8 to go to production some day, but I'm not holding my breath. I might give it a go and see how it works. I have another of the old server motherboards inbound, so I might try it on that once I collect the CPU and memory.

Setting up the config.xml is pretty easy; you just need to poke around on the web to find all the options and their formats. It's actually nice and easy to deal with. I think on Windows you have to look into the database, as there is no config.xml (or so I think).
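
For reference, a bare-bones v7 config.xml looks roughly like this (the values are placeholders, and the slot numbering depends on the machine):

Code:

<config>
  <!-- identity -->
  <user value="YourName"/>
  <team value="0"/>
  <passkey value="your-passkey-here"/>

  <!-- one slot entry per GPU; the id is just an index -->
  <slot id="0" type="GPU"/>
  <slot id="1" type="GPU"/>
</config>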

In any case, I might give this a go once the second system is up and running.

Thanks for the help here!!

Sandy

Re: Shared PCI Slots on Motherboard

Post by Joe_H »

v8 uses an existing config.xml file at startup to set appropriate options, but stores its settings in an SQL database file. So it can inherit some settings when installed over a v7 client installation. Not sure when it will go into production. The v8 client could still use documentation on its API so that third-party monitoring apps can be used, and its setup for monitoring and controlling remote clients could use improvement. Otherwise v8 is fairly stable, with just some quirks, and it can have issues getting WUs from a few servers still running an older version of the server software.

As for Windows and the v7 client, it also uses a config.xml file. So if you are not seeing one, there might be a problem with the install.

Re: Shared PCI Slots on Motherboard

Post by calxalot »

SandyG wrote: Mon May 08, 2023 4:53 pm When you say "installing the dependencies", what do you mean?
https://github.com/cdberkstresser/fah-c ... requisites