Cuda 3.0 beta RUNNING

Moderators: slegrand, Site Moderators, PandeGroup

Re: Cuda 3.0 beta RUNNING

Postby codysluder » Tue Dec 15, 2009 6:20 pm

FahCore_11 looks fine to me. It's running at a higher priority than FahCore_a2, which is what's important. It's using 4% of one core, which is right. If you change it to a real-time priority so that it takes priority over anything else the computer is doing (which in this case adds up to essentially nothing), I don't think you'll see any difference.
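
If you want to double-check the priorities yourself, something like this shows the nice values from a terminal (just a sketch; the grep pattern assumes the usual FahCore process names, which may carry an .exe suffix under WINE):

Code:
# show PID, nice value, kernel priority and command name for the folding cores
ps -eo pid,ni,pri,comm | grep -i fahcore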
codysluder
 
Posts: 2239
Joined: Sun Dec 02, 2007 12:43 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Sat Dec 19, 2009 9:34 pm

codysluder wrote:FahCore_11 looks fine to me. It's running at a higher priority than FahCore_a2, which is what's important. It's using 4% of one core, which is right. If you change it to a real-time priority so that it takes priority over anything else the computer is doing (which in this case adds up to essentially nothing), I don't think you'll see any difference.

The closest thing to real time is a nice value of -20 on Linux; I posted how nice works and what the values mean earlier in this thread. Zero is normal.
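
For reference, nice values on Linux run from -20 (most favoured) to 19 (least favoured), with 0 as the default. Raising a running process's priority needs root, roughly like this (a sketch; the PID is a placeholder):

Code:
# give PID 12345 (a placeholder) the most favourable nice value; requires root
sudo renice -n -20 -p 12345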
Powered by AMD and openSUSE 11.2 ---- The new servers have nothing to do with fixing the stats problem!
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm

Re: Cuda 3.0 beta RUNNING

Postby Mindmatter » Sat Feb 13, 2010 1:46 pm

Viper666 wrote:You just have to create a link named libcudart.so.2 pointing to the new libcudart.so.3.0.8, or rename the link that's already there from libcudart.so.3 to libcudart.so.2.

Either the DLL or the client looks for the older file; I'm not sure which.


I can't seem to get this to work. I have made a link named libcudart.so.2 pointing to libcudart.so.3. When I run

Code:
ldd /home/zerix01/.wine/drive_c/windows/system32/cudart.dll


I get this error:

Code:
        linux-gate.so.1 =>  (0xffffe000)
   libcudart.so.2 => not found
   libwine.so.1 => /usr/local/lib/libwine.so.1 (0xf7e4b000)
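
In case it helps, the link I made was created roughly like this (a sketch; the library directory is an assumption based on a default /usr/local/cuda install):

Code:
# point the old soname at the CUDA 3.0 runtime library
cd /usr/local/cuda/lib
sudo ln -s libcudart.so.3 libcudart.so.2
# refreshing the linker cache afterwards may also be needed
sudo ldconfig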


All of the CUDA 2.x toolkits work fine. Separately, I'm getting horrible GPU performance when running the SMP client alongside two GPU clients. I just upgraded to Kubuntu 9.10 server and I'm trying to work out what is going wrong.

The SMP client is set to idle priority and both GPUs are set to low in the FAH config file. I start the SMP client with "nice -n 19" and use no nice flags for the GPUs. Both KSysGuard and top show the running cores for all clients set as expected: the fahcore_a1/a3 processes are at 19 and the fahcore_11/14 processes are at 0. Without the SMP client running, the GPUs almost double in PPD. I also forced the fahcore_11/14 processes to run at real-time priority with chrt, and performance went down even further. I first set them to FIFO and then to round-robin, thinking maybe they were interfering with each other, but there was no change. The SMP client's performance was only minimally affected by the priority changes.
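
To be concrete, the relevant commands look roughly like this (a sketch; the fah6 binary name and the PID are placeholders for whatever your setup uses):

Code:
# start the SMP client at the lowest (nicest) priority
nice -n 19 ./fah6 -smp &
# what I tried afterwards for a GPU core: real-time FIFO, then round-robin scheduling
sudo chrt -f -p 50 12345
sudo chrt -r -p 50 12345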

I was running the two GPUs and one uniprocessor client and saw no performance issues at all. I think I will try two uniprocessor clients, but I would really like to fix this SMP issue. Why does the SMP client seem to override all the priorities?
Mindmatter
 
Posts: 48
Joined: Tue May 27, 2008 1:53 pm

Re: Cuda 3.0 beta RUNNING

Postby bruce » Tue Feb 16, 2010 5:35 pm

Mindmatter wrote:The SMP client is set to idle priority and both GPUs are set to low in the FAH config file. I start the SMP client with "nice -n 19" and use no nice flags for the GPUs. Both KSysGuard and top show the running cores for all clients set as expected: the fahcore_a1/a3 processes are at 19 and the fahcore_11/14 processes are at 0. Without the SMP client running, the GPUs almost double in PPD. I also forced the fahcore_11/14 processes to run at real-time priority with chrt, and performance went down even further. I first set them to FIFO and then to round-robin, thinking maybe they were interfering with each other, but there was no change. The SMP client's performance was only minimally affected by the priority changes.

I was running the two GPUs and one uniprocessor client and saw no performance issues at all. I think I will try two uniprocessor clients, but I would really like to fix this SMP issue. Why does the SMP client seem to override all the priorities?


I've seen a number of similar reports (statistically speaking, probably mostly from users of native Windows, but I don't remember) where SMP2 (core_a3) and other activities (including GPU clients) don't seem to play well together, whether it's a reduction in GPU points or a reduction in SMP2 points. The developers are aware that this is happening, but I have no information about whether this situation can be improved. Some have found that running -smp (N-1), so as to leave one free CPU to handle the other activities, actually improves total PPD, but this is highly dependent on your specific configuration and the patterns of your various activities. Others have recommended stopping GPU folding because (SMP+GPU) is less productive than SMP only.
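
As a rough illustration on a quad-core box (a sketch; the fah6 binary name is a placeholder, and it assumes the classic v6 client's -smp flag accepts a thread count):

Code:
# run the SMP client on three of the four cores, leaving one free for the GPU clients and other work
nice -n 19 ./fah6 -smp 3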

This was a pretty severe issue with version 2.14 and is much better with 2.15, but I hesitate to call it "cured."

Whatever you decide to do today, it may well not be the best option the next time something else changes. :egeek:
bruce
 
Posts: 21332
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Cuda 3.0 beta RUNNING

Postby Mindmatter » Thu Feb 18, 2010 3:16 pm

It looks like my best solution is to run two GPU clients and one unicore client. With just SMP running on my Athlon X2 I get about 1800 PPD. With two GPU clients and SMP I get 4500 PPD. With two GPUs and one unicore client I get 8500 PPD. I'll settle for that for the moment; at least my CPU isn't completely going to waste and I can keep the science going.

For anyone running a tri-core, quad-core, or better, this quote from another post seems to be a good fix to get SMP back up and running:

My current solution is nowhere near optimal, but I use cores 2 and 3 for the GPUs and run SMP on cores 0 and 1, using "taskset -c 0,1" for SMP, and ditto for each GPU.


Let's say you have a tri-core Phenom. Start the SMP client with taskset on cores 0 and 1, then start the GPU client on core 2. If anyone does this with multiple GPUs, let me know whether the GPUs are picky about having their own core or whether you can stack two or more GPU processes on one core without affecting PPD. If you have a quad, obviously try forcing the SMP client onto three cores.
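
On a tri-core, that works out to something like this (a sketch; the fah6 binary name and the GPU launch script are placeholders for however you normally start those clients):

Code:
# pin the SMP client to cores 0 and 1
taskset -c 0,1 nice -n 19 ./fah6 -smp &
# pin a GPU client to core 2 (replace with your usual GPU client launch command)
taskset -c 2 ./start_gpu0.sh &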
Mindmatter
 
Posts: 48
Joined: Tue May 27, 2008 1:53 pm

Re: Cuda 3.0 beta RUNNING

Postby ragz » Fri Jun 04, 2010 6:14 pm

Thanks, Mindmatter, for posting the 'taskset' tip.

For what it's worth, the issue seems to be related to the newer drivers; 195.17, 195.36.24, and the 256.25 beta are the ones I've tried so far. The GPU2 client suffers the hit in performance when the SMP client is running with CUDA 3.0. Interestingly, the same performance hit is still observed when running CUDA 2.3 with the 256.25 beta drivers. I haven't tried CUDA 2.3 with any of the 195.xx drivers, but I suspect it won't be any different. Has anyone else noticed this as well?
ragz
 
Posts: 2
Joined: Fri Jun 04, 2010 5:54 pm

Re: Cuda 3.0 beta RUNNING

Postby bruce » Fri Jun 04, 2010 6:54 pm

An alternate solution that some people like is to
(A) make sure that the GPU client(s) are configured to run at a higher priority than the CPU/SMP client(s)
and (B) use taskset for the SMP/CPU client(s) but NOT for the GPUs.

That way the GPU FahCores always get priority, no matter which CPU is either idle or needs to be interrupted to perform GPU I/O. The NV FahCores (and ATI FahCores, if you use the optimum environment variable settings) use the processor very briefly and then release it quickly, allowing the CPUs to go back to work on the heavy computing load associated with the CPU/SMP FahCores.
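
As a rough illustration (a sketch; the launch scripts and binary name are placeholders, and the nice values stand in for the priority settings in the client config):

Code:
# (A) GPU clients at a more favourable nice value than the SMP client
nice -n 0 ./start_gpu0.sh &
nice -n 0 ./start_gpu1.sh &
# (B) taskset only on the SMP client; the GPU FahCores can then run on whichever CPU is free
taskset -c 0,1 nice -n 19 ./fah6 -smp &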

This probably needs to be reconfirmed with FahCore_15, but I can't think of any reason why it would change appreciably.
bruce
 
Posts: 21332
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Cuda 3.0 beta RUNNING

Postby ragz » Fri Jun 04, 2010 10:43 pm

Thanks for the tip, Bruce, but for some reason it doesn't work on this particular system unless I use taskset on both the CPU and GPU clients. The performance hit is cut in half if I only use taskset on the CPU client.
ragz
 
Posts: 2
Joined: Fri Jun 04, 2010 5:54 pm
