Cuda 3.0 beta RUNNING

Moderators: slegrand, Site Moderators, PandeGroup

Re: Cuda 3.0 beta RUNNING

Postby Skiesare » Mon Dec 07, 2009 4:14 pm

I have the GPU nice at 15 and the CPU at 19. I don't have a hi-res kernel but I never did before and this only started with the 195.17 drivers and 2.6.31 kernel.

I also note that when both clients are running together the GPU temperature hardly goes up at all.
Skiesare
 
Posts: 17
Joined: Mon Aug 25, 2008 6:46 am

Re: Cuda 3.0 beta RUNNING

Postby shatteredsilicon » Tue Dec 08, 2009 12:22 am

If you don't have high-res timer support in your kernel and you are running a v2 wrapper, your GPU performance will go through the floor anyway. The minimum sleep on a kernel without high-res timer support is 1ms, which is way, way too long for the polling interval to the GPU. I'd be surprised if you are pushing your GPUs past about 25% with a v2 wrapper and non-high-res timers. Of course, adding an SMP client to that mix will hammer the performance further downward. High-res timer support is mandatory for the v2 wrapper. With a v1 wrapper (regardless of the kernel), the CPU time used by the GPU clients will sit at 100%.
1x Q6600 @ 3.2GHz, 4GB DDR3-1333
1x Phenom X4 9950 @ 2.6GHz, 4GB DDR2-1066
3x GeForce 9800GX2
1x GeForce 8800GT
CentOS 5 x86-64, WINE 1.x with CUDA wrappers
shatteredsilicon
 
Posts: 699
Joined: Tue Jul 08, 2008 2:27 pm

Re: Cuda 3.0 beta RUNNING

Postby Skiesare » Tue Dec 08, 2009 5:24 pm

Could I have hi-res timer support without knowing it? The PPD of the GPU client when run on its own seems OK, about 7000; it's a GTX260. I had no problem with 180-series drivers and 2.6.29 kernels, but I never knowingly patched them for hi-res timer support.
Skiesare
 
Posts: 17
Joined: Mon Aug 25, 2008 6:46 am

Re: Cuda 3.0 beta RUNNING

Postby shatteredsilicon » Wed Dec 09, 2009 1:39 am

It's not a patch, it's a feature that needs to be enabled at kernel build time.
Does /proc/timer_list exist? What does it contain?
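A quick way to run that check (my sketch, not from the post): on a kernel built with CONFIG_HIGH_RES_TIMERS, the `.resolution` lines in /proc/timer_list read "1 nsecs"; without it they show one jiffy (e.g. 1000000 or 4000000 nsecs).

```shell
# check_hrtimers: read a timer_list-style dump on stdin and report
# whether the first ".resolution" line shows 1 nsec (hrtimers enabled).
check_hrtimers() {
    awk '/\.resolution/ { print ($2 == 1 ? "high-res timers: yes" : "high-res timers: no"); exit }'
}
# Typical use on a live system:
#   check_hrtimers < /proc/timer_list
```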
shatteredsilicon
 
Posts: 699
Joined: Tue Jul 08, 2008 2:27 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Wed Dec 09, 2009 3:40 am

willmore wrote:It seems to work here for Fedora 12. I've also had to install the older 32 bit toolkit from the howto as this new toolkit doesn't provide '2.0' libraries.

I guess we'll see if it keeps working.

Cheers!

You just have to create a link named libcudart.so.2 pointing to the new libcudart.so.3.0.8, or rename the link that's already there from libcudart.so.3 to libcudart.so.2.

Either the dll or the client looks for the older file name; I'm not sure which.

Either toolkit has both sets of libs, 32-bit and 64-bit (I downloaded both and looked; why they made two different ones I don't know, ask Nvidia):

/usr/local/cuda/lib (32-bit files)
/usr/local/cuda/lib64 (64-bit files)
cuda 3.0 beta

It will also run from a command prompt without X windows, like in the headless thread, but all you have to do is run it. It just works!
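As a shell sketch of the link trick above (the soname and directory are from the post, but check your actual install before running anything):

```shell
# link_cudart: point libcudart.so.2 at the CUDA 3.0 beta runtime so
# binaries built against the 2.x soname keep loading.
link_cudart() {
    # $1 = CUDA lib directory, e.g. /usr/local/cuda/lib64
    ln -sf libcudart.so.3.0.8 "$1/libcudart.so.2"
}
# e.g.  link_cudart /usr/local/cuda/lib64 && ldconfig
```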
Powered by AMD and openSUSE 11.2 ---- The new servers have nothing to do with fixing the stats problem!
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Wed Dec 09, 2009 4:13 am

Skiesare wrote:Could I have hi-res timer support without knowing it? The PPD of the GPU client when run on it's own seems OK, about 7000, it's a GTX260. I had no problem with 180 series drivers and 2.6.29 kernels but never knowingly patched them for hi-res timer support.

I would guess most kernels have it enabled by default now; SUSE did!
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm

Re: Cuda 3.0 beta RUNNING

Postby shatteredsilicon » Wed Dec 09, 2009 5:20 am

RHEL certainly doesn't.
shatteredsilicon
 
Posts: 699
Joined: Tue Jul 08, 2008 2:27 pm

Re: Cuda 3.0 beta RUNNING

Postby Skiesare » Wed Dec 09, 2009 4:59 pm

shatteredsilicon wrote:It's not a patch, it's a feature that needs to be enabled at kernel build time.
Does /proc/timer_list exist? What does it contain?

It does exist and contains quite a lot, starting with:
Timer List Version: v0.4
HRTIMER_MAX_CLOCK_BASES: 2
now at 2796178791930 nsecs

cpu: 0
clock 0:
.base: ffff880028038188
.index: 0
.resolution: 1 nsecs
.get_time: ktime_get_real
.offset: 1260374390427953937 nsecs
active timers:
clock 1:
.base: ffff8800280381c8
.index: 1
.resolution: 1 nsecs
.get_time: ktime_get
.offset: 0 nsecs

I have re-installed CUDA 3 and renamed the file libcudart.so.3 to libcudart.so.2, and although it is very early days, that does seem to have solved the display-freezing problem. But I still have the problem of a massive fall in performance from the GPU client when the CPU client runs at the same time. For instance, I am running a 5763 unit and was getting 1% per 50 seconds, but when I started the CPU client that fell to between 4 and 6 minutes per 1%.

If the GPU2 client continues to run OK on its own, then I'll just have to go without the CPU client. I'm not folding so much at the moment because of the need to save electricity; I mostly only fold while I'm using the PC anyway, so I might not get to complete CPU units before the deadline passes.
Skiesare
 
Posts: 17
Joined: Mon Aug 25, 2008 6:46 am

Re: Cuda 3.0 beta RUNNING

Postby shatteredsilicon » Wed Dec 09, 2009 5:35 pm

OK, so you have HR timers. Do you have CPU usage limiting set in either client? It sounds like you just need to run the CPU client at niceness 19 and the GPU either at standard priority or slightly reduced (niceness 0-5). That should make sure the CPU client never gets priority. What CPU do you have? How many cores? If it is a Core2 quad, you may be seeing a performance degradation because the GPU client is getting bounced between the core pairs (Core2 quad is 2x2 not 1x4). You may want to taskset the GPU client(s) to each bind to a particular core, and leave the CPU client to get bounced around. I found that I got the best performance on my quad with quad GPUs (2x9800GX2) by limiting each GPU client to a core, and leaving the SMP client unbound.
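A sketch of that binding scheme (the client names and paths below are illustrative, not a known-good command line):

```shell
# One GPU client pinned per core at slightly reduced priority:
#   taskset -c 0 nice -n 5 wine Folding@home-Win32-GPU.exe -forcegpu nvidia_g80 &
#   taskset -c 1 nice -n 5 wine Folding@home-Win32-GPU.exe -forcegpu nvidia_g80 &
# SMP client left unbound at the lowest priority:
#   nice -n 19 ./fah6 -smp &
# Sanity check that taskset and nice combine as expected: this prints
# the base niceness plus 5, while the command runs bound to core 0.
taskset -c 0 nice -n 5 sh -c 'nice'
```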
shatteredsilicon
 
Posts: 699
Joined: Tue Jul 08, 2008 2:27 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Thu Dec 10, 2009 8:35 am

shatteredsilicon wrote:RHEL certainly doesn't.

RHEL doesn't normally run one of the latest kernels either. They lag behind a bit!
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Thu Dec 10, 2009 8:39 am

Skiesare wrote:
shatteredsilicon wrote:It's not a patch, it's a feature that needs to be enabled at kernel build time.
Does /proc/timer_list exist? What does it contain?

It does exist and contains quite a lot [...] I still have the problem of a massive fall in performance from the GPU client when the CPU client runs at the same time.


I run mine at nice 0, not 19, or I had issues: nice -n 0 wine Folding@home-Win32-GPU.exe -forcegpu nvidia_g80 ... oops, shatteredsilicon already said that!
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm

Re: Cuda 3.0 beta RUNNING

Postby shatteredsilicon » Thu Dec 10, 2009 11:10 pm

There is no point in adding "nice -n 0", that is the default. :)
shatteredsilicon
 
Posts: 699
Joined: Tue Jul 08, 2008 2:27 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Sun Dec 13, 2009 8:50 pm

shatteredsilicon wrote:There is no point in adding "nice -n 0", that is the default. :)

Actually, that's not correct; the clients all run at nice 19 by default, which is the lowest priority.

Nice 0 is normal priority.

"-20 is the lowest nice level, which gives it the highest priority. 19 is the highest nice level, which gives it the lowest priority. Just think of the nice level as "The ability of the program to play nice with other programs." The higher the nice level, the more the program will get out of the way of other programs. The lower the nice level, the more it will stop other programs from using system resources."


http://www.novell.com/coolsolutions/feature/14878.html
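The scale quoted above is easy to verify from a shell: `nice` with no arguments prints the current niceness, and `nice -n N cmd` adds N to it (the outputs in the comments assume you start from the usual base of 0).

```shell
nice                        # the current niceness, usually 0
nice -n 19 nice             # 19: lowest priority, "plays nicest"
nice -n 10 nice -n 5 nice   # 15: the increments accumulate
```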
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm

Re: Cuda 3.0 beta RUNNING

Postby shatteredsilicon » Sun Dec 13, 2009 11:20 pm

Hmm, if they automatically renice themselves to 19, then starting the master process with "nice -n 0" won't normally prevent that.
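If the cores do renice themselves after starting, they can still be reniced back afterwards. A sketch (FahCore_11 is the GPU core; note that lowering a process's niceness below its current value needs root):

```shell
# Renice every running GPU core back to normal priority.
for pid in $(pgrep FahCore_11); do
    renice -n 0 -p "$pid"
done
```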
shatteredsilicon
 
Posts: 699
Joined: Tue Jul 08, 2008 2:27 pm

Re: Cuda 3.0 beta RUNNING

Postby Viper666 » Mon Dec 14, 2009 3:50 am

[screenshot]

Well, it's doing it here! FahCore_11 is the GPU core. The chrt command will also do it; I've used that before!
Keep in mind this is a Windows app and WINE is in control.
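For reference, chrt works on the scheduling class rather than the nice value (my sketch; `<pid>` is a placeholder):

```shell
# Reading a process's scheduling policy never needs root:
chrt -p $$
# Dropping a process into the batch class (similar effect to a high
# niceness, no root needed) would be:
#   chrt -b -p 0 <pid>
```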
User avatar
Viper666
 
Posts: 229
Joined: Fri Dec 14, 2007 9:57 pm


Return to unOfficial Linux GPU (WINE wrapper) (3rd party support)
