Multi-GPU quirk

Questions and comments regarding the GPU2 client running on an nVidia GPU that supports CUDA.

Multi-GPU quirk

Post by wickedld9 » Sat Jul 19, 2008 8:12 am

I've not seen this mentioned here, but I didn't search very hard either. This concerns machines with multiple Nvidia GPUs only.

I noticed a few users running multiple GPUs in one machine getting similar ppd across all of the cards in the system, regardless of whether a card sits in a 16x or 4x PCI-E slot. This was perplexing, because I have two machines on which I'd been trying to achieve similar results and couldn't figure out why I wasn't able to. I've finally worked it out: as long as you are using the same type of card throughout the system, output will be nearly equal at a given clock speed. However, if you mix cards, say an 8800 GT with a 9600 GT, or even a 9800 GTX with a 9600 GT, you will see a significant drop in performance from the second and third cards in the system, while the primary card outputs as normal. In the latter setup, the 9800 GTX put out a solid 5300 ppd on its own, while the clocks on my 9600 GT have it putting out around 4100 ppd. Together, the 9800 GTX maintains its 5300 ppd, but the 9600 GT drops to only 3400 ppd.
I confirmed this a few minutes ago by swapping out the 9800 GTX for another 9600 GT, giving me two in the system. Both 9600 GTs are now putting out almost identical ppd, near 4100.

Hope this isn't something that's already been covered; just a little food for thought when you're planning upgrades.

Re: Multi-GPU quirk

Post by PCZ » Sat Jul 19, 2008 11:23 am

wickedld9,

I use multi-GPU setups, both with matching cards and mixed.
What you have observed is the effect of letting the GPU clients share a CPU core.

The faster card gets the lion's share of the processing time, and the lesser card gets starved.
This also depends to some extent on the WUs; some require more CPU time.

Ideally you need to let each GPU client run on its own core.
This is done with the NV_FAH_CPU_AFFINITY environment variable.
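
A minimal sketch of one way to apply this, assuming NV_FAH_CPU_AFFINITY takes a CPU core bitmask and that each client runs from its own folder; the paths, executable name, and mask format here are illustrative assumptions, not confirmed values:

Code: Select all
import os
import subprocess

# Hypothetical per-client folders; adjust to your own install layout.
CLIENTS = [
    (r"C:\FAH\gpu0\Folding@home-Win32-GPU.exe", 0x1),  # client 0 -> CPU core 0
    (r"C:\FAH\gpu1\Folding@home-Win32-GPU.exe", 0x2),  # client 1 -> CPU core 1
]

for exe, mask in CLIENTS:
    env = os.environ.copy()
    # Assumption: the GPU2 core reads NV_FAH_CPU_AFFINITY as a core bitmask.
    env["NV_FAH_CPU_AFFINITY"] = str(mask)
    # Launch each client from its own folder so work files don't collide.
    subprocess.Popen([exe], env=env, cwd=os.path.dirname(exe))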

Re: Multi-GPU quirk

Post by wickedld9 » Sat Jul 19, 2008 4:38 pm

I have to disagree with you, as at first I thought the same thing. The only thing that changed was the second video card in each system; I made no changes to affinity, and both clients ran on the same core.
Assigning each instance its own CPU core did not change the output; in fact, I see more consistent results with all instances sharing a single CPU core.

In my environment, I have seen no evidence of a CPU limitation while folding with Nvidia GPUs. The same cannot be said of folding on ATI hardware. I have run everything from a stock-clocked Athlon 64 3000+ (1.8 GHz) to an E8500 at 4.3 GHz, and output does not change on Nvidia hardware. The 100% reading in XP's Task Manager seems to be false. I have seen nothing to suggest that the client is more CPU-bound on WinXP than on Vista.

edit: One of my final tests was to leave both mixed GPUs in the system but run ONLY the second video card's client, and again its output was the same as when both cards were folding.

Re: Multi-GPU quirk

Post by who » Sat Jul 19, 2008 5:14 pm

Is it possible that you mixed up machine IDs? Several users have reported similar findings, only to have their output double when they fixed it.
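
For anyone who wants to rule this out, here is a minimal sketch that reads the machineid value from each client's config; it assumes the classic client.cfg INI layout with machineid under [settings], and the folder paths are hypothetical:

Code: Select all
import configparser
import glob

# Hypothetical layout: one folder per GPU client, each with its own client.cfg.
ids = {}
for path in glob.glob(r"C:\FAH\gpu*\client.cfg"):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    # Assumption: the classic client stores its per-instance ID as
    # 'machineid' under [settings]; every client on one box needs a
    # unique value or their results collide.
    ids[path] = cfg.get("settings", "machineid", fallback="?")

for path, machine_id in ids.items():
    print(path, "->", machine_id)
if len(set(ids.values())) < len(ids):
    print("Duplicate machine IDs found!")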

Re: Multi-GPU quirk

Post by wickedld9 » Sat Jul 19, 2008 5:29 pm

Nope, the only change made was the swap of the "primary" card to match the secondary. This behaviour was observed on two separate machines and the results were immediate.

Re: Multi-GPU quirk

Post by PCZ » Sat Jul 19, 2008 5:52 pm

The 100% utilisation in XP is not false.
Strangely, it doesn't matter what speed or type of CPU it is, but the client does need exclusive use of that core.
Obviously this is something the team will be trying to fix.

Vista is a different kettle of fish and has much lower CPU utilisation.

Redo your experiment and set the affinity so that each GPU client gets its own core.
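
For anyone redoing the test, one way to pin an already-running client to a core on XP is the standard Win32 affinity call; a minimal sketch via ctypes, where the PIDs are hypothetical (OpenProcess and SetProcessAffinityMask are the documented kernel32 functions):

Code: Select all
import ctypes

PROCESS_SET_INFORMATION = 0x0200  # access right required by SetProcessAffinityMask
kernel32 = ctypes.windll.kernel32

def pin_process(pid, mask):
    # Pin the process `pid` to the CPU cores selected by the bitmask `mask`.
    handle = kernel32.OpenProcess(PROCESS_SET_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError()
    try:
        if not kernel32.SetProcessAffinityMask(handle, mask):
            raise ctypes.WinError()
    finally:
        kernel32.CloseHandle(handle)

# Hypothetical PIDs of the two GPU clients: one to core 0, one to core 1.
pin_process(1234, 0x1)
pin_process(5678, 0x2)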

Re: Multi-GPU quirk

Post by wickedld9 » Sat Jul 19, 2008 8:41 pm

I am not sure why you are so insistent on my giving each GPU client exclusive access to its own CPU core. This is something I have been working on, off and on, for a couple of weeks now.
I am asking you to provide details and proof that your mixed-GPU machines produce the same output as the cards would in separate PCs, and also that there are tangible benefits to giving each GPU in a multi-GPU machine exclusive access to its own CPU core.

It seems like there may be some misinformation circulating in the forum that is being passed on as absolute fact when that is not the case at all. I am trying to make an informative post so that others can learn how to make their hardware run at top efficiency. We are all on the same side here, right?

Re: Multi-GPU quirk

Post by who » Sat Jul 19, 2008 8:57 pm

wickedld9 wrote:Nope, the only change made was the swap of the "primary" card to match the secondary. This behaviour was observed on two separate machines and the results were immediate.


I appreciate you sharing your findings so far.

How "immediate"? Are you using FahMon to monitor, set to the last three frames? I ask because it takes a while for changes to show (ramp-up). Let it run for a while.

Are you on the 1.07 core? I don't think NV_FAH_CPU_AFFINITY works with 1.06.
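
As background on how monitors like FahMon turn frame times into a PPD figure, here is a minimal sketch of the usual arithmetic; the point value and frame times below are made-up numbers, and real monitors average the last few frames, which is why a hardware change takes a while to show up:

Code: Select all
SECONDS_PER_DAY = 86400
FRAMES_PER_WU = 100  # a work unit reports progress in 100 frames

def ppd(points_per_wu, recent_frame_seconds):
    # Estimate points per day from the average of the most recent frame times.
    avg_frame = sum(recent_frame_seconds) / len(recent_frame_seconds)
    wus_per_day = SECONDS_PER_DAY / (avg_frame * FRAMES_PER_WU)
    return points_per_wu * wus_per_day

# Hypothetical 480-point WU at ~78 s/frame -> roughly 5300 ppd.
print(round(ppd(480, [78, 78, 79])))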

Re: Multi-GPU quirk

Post by ^w^ing » Sat Jul 19, 2008 9:05 pm

PCZ wrote:The 100% utilisation in XP is not false.
Strangely, it doesn't matter what speed or type of CPU it is, but the client does need exclusive use of that core.
...


Interesting. My findings were the exact opposite. Hmm.

Re: Multi-GPU quirk

Post by Bernie64 » Sat Jul 19, 2008 10:35 pm

The OS should automatically assign a second thread to a second core rather than making it wait on the primary core; if not, what is the point of a multi-core design?
The cache is often shared, and if it is 100% in use, you will bottleneck.
Using two different cards could, in theory, increase cache use, adding to a cache bottleneck.

Re: Multi-GPU quirk

Post by heimie » Mon Jul 21, 2008 8:18 pm

I got the same results, and your suggestion makes sense. My core usage never goes over 40%, and that's running three clients on the same core. I played with the affinity and nothing changed. I was using two 8800 GT Akimbos and an 8800 GS in the middle (8x) slot of a 680i mobo. The 8800 GS dropped 1500 ppd with all the same settings, no matter which core I assigned to what. Moved the GS back to its own PC and the ppd went right back up. The Akimbos never varied at all.

Maybe one of the Nvidia guys can offer more clarification?

Re: Multi-GPU quirk

Post by aicjofs » Mon Jul 21, 2008 8:37 pm

Good post. leexgx and I mentioned it a couple of weeks ago here:

viewtopic.php?f=43&t=3774&p=37341&hilit=lack+luster#p37341

No one seemed interested back then, but I believe you; it does actually happen. Now there are four people in total seeing it. That wasn't the best thread title for others to find, but hopefully we can draw others into this discussion.

I think this is happening to a lot more people than we know, but people tend to run the more powerful card in slot 1, so they assume the lesser card in slot 2 is just naturally slower and therefore earns less PPD. Hence not many people are talking about it.

Anyway, from my testing it has nothing to do with machine ID, core affinity, cache, etc. What would be useful, if we want to get to the bottom of this, is for people to post some info on their setups: the four people with the problem, and any of those who claim it works perfectly normally.

Mine:

Vista 64
nvidia driver 177.35 or 177.41
8800GTS
8800GT
Intel x38 chipset

The second card's speed is close to what people get with the 175 drivers, for whatever that might be worth. There is also the thought that it has something to do with two nVidia cards on an Intel chipset, and that it's part of the encrypted portion of the driver that breaks SLI on Intel chips... anyway, just some brainstorming.

Re: Multi-GPU quirk

Post by ChasR » Mon Jul 21, 2008 9:33 pm

Wicked,
I'm seeing the same thing with an 8800 GTS (G92) and an 8800 GT: a 50% slowdown on the second card. In XP, it makes ZERO difference whether the two instances use the same core or different cores; the second card is much slower. I'll swap the 8800 GTS for another 8800 GT and report back.

XP SP3
177.35
8800GTS
8800GT
Intel P35 (ASUS P5K Premium)

Re: Multi-GPU quirk

Post by ChasR » Tue Jul 22, 2008 1:10 am

Replacing the 8800 GTS with an 8800 GT, so that I have matching GPUs in each slot, fixed my problem: 5120 ppd from each instance, both running on CPU core 3.

Re: Multi-GPU quirk

Post by chestRcopRpot » Tue Jul 22, 2008 5:25 am

I also have this issue.

With a 9800 GX2 in the first slot, both of its GPUs run at full speed, but both 8800 GTs go slow.

If I put an 8800 GT in the first slot, then the other 8800 GT also runs at full speed, but both GX2 GPUs go slow. :cry:

Is there any workaround for this issue yet?
