CPU usage with Nvidia vs ATI

CPU usage with Nvidia vs ATI

Postby Walter » Sat Oct 22, 2011 9:31 pm

I have been using an Nvidia GT 220 card in one machine and a GT 240 in another for 2+ years. Neither uses more than 1 or 2% of the CPU when running a GPU client. My GT 240's fan died, so I decided to try a Radeon HD 5670 because it used less power and was a little faster. The Radeon used all the CPU you allowed it in the configuration and seemed to produce slower results (the screen also turned white after a couple of hours, three times, so I am returning the card and going back to Nvidia). It also often caused my World Community Grid client to suspend computation (citing high CPU usage), which the Nvidia cards never did.

Do ATI cards typically use much more of the CPU than Nvidia ones? When I looked at the Radeon card in GPU-Z, I noticed that it doesn't use CUDA. Is the GPU client tuned for CUDA?

Walter Hamilton

For the record, my system is:
Windows 7 Pro, Service Pack 1
Intel Core i7 950 @ 3.07 GHz
9 GB memory
Nvidia GT 220 or GT 240

1 FAH GPU client
1 FAH CPU client
6 World Community Grid clients (this is a quad-core that presents itself as an 8-CPU device)
Walter
 
Posts: 41
Joined: Thu Aug 13, 2009 8:49 pm

Re: CPU usage with Nvidia vs ATI

Postby verlyol » Sat Oct 22, 2011 9:57 pm

Folding@home currently works better with Nvidia GPUs. Improvements have been made since the release of FahCore 16 for ATI/AMD, but the performance is unfortunately still not on the level of Nvidia, and there are also still problems with CPU utilization being too high.


I dedicate my participation to my grandmother, who died in 1992 of Parkinson's disease,
and to my friend Benoit, who died of leukemia on February 18, 2012... he was 40 years old.
verlyol
 
Posts: 98
Joined: Sun Jul 10, 2011 11:54 am
Location: Brabant-Wallon, Belgium

Re: CPU usage with Nvidia vs ATI

Postby Jesse_V » Sat Oct 22, 2011 10:28 pm

Yes, ATI GPUs tend to use more CPU power. I'm not sure exactly why, but from what I've read it seems that most people reserve one CPU core for their GPU folding. Moreover, it appears that Nvidia GPUs are more productive anyway, though every type of client is scientifically valuable.

Now, Folding@home is currently developing new software that unifies, and will soon replace, all of the different types of clients currently offered. It's called the v7 client because it's the seventh generation. In case the terminology is unfamiliar: verlyol was saying that Core 16 should make things better. The core does the actual calculations on the Work Units, and there are different cores for different hardware, operating systems, and configurations. Core 16 is only available for the v7 client.

While the v7 client has a few bugs and features that are currently being worked on, you're welcome to try it out here: http://folding.stanford.edu/English/WinGuide. Other than a few bugs, most of which won't impact scientific production, one important thing about the v7 client is that all GPUs must be whitelisted in order to be used. Perhaps yours is, and if it is, v7 should work for you. Just configure it during the install to use only a GPU, and I think it should turn out fine. If you need any further help, feel free to post back.
Pen tester at Cigital/Synopsys
Jesse_V
 
Posts: 2894
Joined: Mon Jul 18, 2011 4:44 am
Location: USA

Re: CPU usage with Nvidia vs ATI

Postby gwildperson » Sun Oct 23, 2011 1:00 am

The GPUs from ATI do not use CUDA. CUDA is a proprietary language developed by Nvidia for Nvidia. Beginning with FahCore_16, ATI uses OpenCL, which is a non-proprietary language and will probably be supported by Nvidia, too. (That may already be true.)

FAH worked with Brook on ATI before CUDA was developed, and Brook is still supported for the HD 4000 series and below, though support for those GPUs will be ending some time soon, leaving only OpenCL for the HD 5000 series and above. There's probably a similar change in support for older/newer GPUs from Nvidia, but I'm not sure what or when.

The amount of CPU time spent running any FahCore depends directly on how the drivers are written, and the drivers are supported directly by either Nvidia or ATI. I have no idea why Nvidia is able to write more economical drivers than ATI, but it's not something that FAH can do anything about. Talk to ATI if you don't like how their drivers work.
gwildperson
 
Posts: 785
Joined: Tue Dec 04, 2007 8:36 pm

Re: CPU usage with Nvidia vs ATI

Postby verlyol » Sun Oct 23, 2011 7:57 am

That's right! It is a driver problem with ATI/AMD, not a problem with the FahCore or the v7 client.

The problem was already present back when ATI/AMD cards used the Brook language.

We hope that in the future ATI/AMD will develop drivers that save CPU power!
verlyol
 
Posts: 98
Joined: Sun Jul 10, 2011 11:54 am
Location: Brabant-Wallon, Belgium

Re: CPU usage with Nvidia vs ATI

Postby Walter » Sun Oct 23, 2011 12:15 pm

The drivers for my Nvidia GT 220 and GT 240 cards are also a problem. Those dated 2011 dramatically slow down every other program's use of the screen (e.g., scrolling down a page can take several seconds to respond, as can switching from one browser tab to another or from one program to another). The drivers from November 2010 and earlier work without this problem. If someone with an Nvidia card suddenly has response problems, you might recommend reinstalling their old drivers and not updating them automatically via Windows Update.
Walter
 
Posts: 41
Joined: Thu Aug 13, 2009 8:49 pm

Re: CPU usage with Nvidia vs ATI

Postby amdfan404 » Sat Dec 03, 2011 2:53 am

This is not ATI/AMD's fault; it is the PG/GPU2 client's fault. The client doesn't do a good job of finding the best configuration for the hardware it runs on. I had the same problem until it was suggested that I look into the configuration threads. ATI/AMD video cards run just fine, not slower than Nvidia ones, so next time, instead of writing lies about ATI/AMD, check the configuration threads; this is typical of misinformed people. I have a configuration for you to save you the trouble; you can tweak it from here as you see fit. You need to create the following environment variables:

BROOK_YIELD=2
CAL_NO_FLUSH=1
CAL_PRE_FLUSH=1
FLUSH_INTERVAL=168 (this is the one you actually tweak; the others don't need to change, as they're already the best settings)

For your 5670 you'll want FLUSH_INTERVAL=250+
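
If you're not sure how to create environment variables, one way on Windows is from a Command Prompt (a minimal sketch; setx stores a variable for new processes only, so restart the client afterwards so it picks them up):

setx BROOK_YIELD 2
setx CAL_NO_FLUSH 1
setx CAL_PRE_FLUSH 1
setx FLUSH_INTERVAL 168

You can also create them under Control Panel > System > Advanced system settings > Environment Variables.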
amdfan404
 
Posts: 23
Joined: Thu Mar 18, 2010 3:40 am

Re: CPU usage with Nvidia vs ATI

Postby Leonardo » Sat Dec 03, 2011 7:33 am

amdfan404 wrote: this is typical of misinformed people

Pot, meet kettle.

Check the GPU2 and GPU3 Folding production databases at any developed team's site. This is not a childish brand X vs. brand Y fight, but a matter of fact. No set of variables applied to AMD GPUs will bring them up to their competitor's performance with respect to Folding@home. AMD's adoption of OpenCL has greatly improved AMD GPU Folding productivity, but it still has quite a ways to go.

Games are another matter.
Leonardo
 
Posts: 655
Joined: Tue Dec 04, 2007 5:09 am
Location: Eagle River, Alaska

Re: CPU usage with Nvidia vs ATI

Postby Ivoshiee » Sat Dec 03, 2011 6:30 pm

amdfan404 wrote: This is not ATI/AMD's fault; it is the PG/GPU2 client's fault. The client doesn't do a good job of finding the best configuration for the hardware it runs on. I had the same problem until it was suggested that I look into the configuration threads. ATI/AMD video cards run just fine, not slower than Nvidia ones, so next time, instead of writing lies about ATI/AMD, check the configuration threads; this is typical of misinformed people. I have a configuration for you to save you the trouble; you can tweak it from here as you see fit. You need to create the following environment variables:

BROOK_YIELD=2
CAL_NO_FLUSH=1
CAL_PRE_FLUSH=1
FLUSH_INTERVAL=168 (this is the one you actually tweak; the others don't need to change, as they're already the best settings)

For your 5670 you'll want FLUSH_INTERVAL=250+

I haven't checked lately, but do those environment variables work at all with the latest crop of cores and clients?
Ivoshiee
Site Moderator
 
Posts: 1377
Joined: Sun Dec 02, 2007 12:05 am
Location: Estonia

Re: CPU usage with Nvidia vs ATI

Postby ChasR » Sat Dec 03, 2011 7:24 pm

It was my impression that the variables work with Brook rather than OpenMM (Core_16). I have no way to test, though.
ChasR
 
Posts: 759
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: CPU usage with Nvidia vs ATI

Postby bruce » Sat Dec 03, 2011 8:20 pm

Right. I have seen no information about environment variables or other techniques that might allow you to tune OpenCL performance.

Brook was developed at Stanford, so the Pande Group had access to the knowledge and the talent to tweak its internals. OpenCL is developed by the Khronos Group (a non-profit technology consortium), so internal changes to it go through the normal open-software process and are then implemented in the drivers and toolkits supplied by the various vendors of OpenCL hardware.
bruce
 
Posts: 21416
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: CPU usage with Nvidia vs ATI

Postby amdfan404 » Mon Dec 05, 2011 5:41 pm

@Leonardo: not entirely true. Without this set of variables you get the problem mentioned above: F@H pegs CPU cores at 100% while the GPU doesn't run at 100%. With these variables, CPU usage drops to about 1.5% on average, and the GPU will run at 100% if FLUSH_INTERVAL is big enough, but the tweak must be made manually. F@H favors Nvidia GPUs from a design point of view as far as I know; that's why they give more yield. It was designed for them initially and adapted for AMD/ATI GPUs later on.

I was defending what righteously belongs to AMD/ATI: their good performance. The one not well informed here is you. I'm well informed because I gave a solution to the problem and clarified that AMD/ATI wasn't the problem; if anything, that proves I'm well informed about them. And yes, what they were writing was lies, clearly out of ignorance by the looks of their posts, for not bothering to look at the stickies that lead straight to at least two well-written config guides. So don't tell me I'm not well informed, because I have demonstrated I am. And since you're so short-sighted, I'll spell it out for you: AMD/ATI GPUs run fine against Nvidia GPUs released in the same period. As I wrote, Nvidia is favored in GPU2 and might be a little faster, but not by much. Get informed next time.

@All: GPU2 works for every AMD/ATI GPU, but AMD/ATI HD 5xxx and up fully support the latest OpenCL, v1.1 (double precision required), which is faster and more accurate. The GPU3 beta has these features, and you should use it if you have the hardware for it.
amdfan404
 
Posts: 23
Joined: Thu Mar 18, 2010 3:40 am

Re: CPU usage with Nvidia vs ATI

Postby ChasR » Mon Dec 05, 2011 6:20 pm

amdfan404,
No matter how much you wish it were true, ATI FAH performance isn't on par with Nvidia performance. About the only thing true in either of your posts is that you can lower CPU usage on the ATI GPU2 client by using the environment variables you posted. ATI GPU2 performance is about 50% of ATI GPU3 performance, which the OP would be using with a 5670, and the environment variables don't work on that.
ChasR
 
Posts: 759
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: CPU usage with Nvidia vs ATI

Postby Zagen30 » Mon Dec 05, 2011 6:24 pm

amdfan404 wrote: F@H favors Nvidia GPUs from a design point of view as far as I know; that's why they give more yield. It was designed for them initially and adapted for AMD/ATI GPUs later on.


Patently untrue. GPU1 was ATI-only. GPU2 was originally ATI-only, with Nvidia cards getting support a few months after launch.
Zagen30
 
Posts: 1814
Joined: Tue Mar 25, 2008 12:45 am

Re: CPU usage with Nvidia vs ATI

Postby bruce » Mon Dec 05, 2011 8:35 pm

Zagen30 wrote:
amdfan404 wrote: F@H favors Nvidia GPUs from a design point of view as far as I know; that's why they give more yield. It was designed for them initially and adapted for AMD/ATI GPUs later on.


Patently untrue. GPU1 was ATI-only. GPU2 was originally ATI-only, with Nvidia cards getting support a few months after launch.


Stanford does not "prefer" any brand. They apply their development efforts where they believe they'll get the maximum scientific results. The fact that the hardware in two GPUs is designed differently is up to the manufacturer. When Stanford decides to support a specific product, they consider how many donors might contribute to FAH and then work with whatever design limitations are part of that product, as well as both the features and the limitations of whatever API is available.

ATI had CAL/Brook before Nvidia released CUDA. Now OpenCL is available, and that availability is being extended to other companies. I understand that there's an upcoming Intel GPU that MIGHT have robust OpenCL support and MIGHT be popular. If both turn out to be true, it might be a good investment for them to support that product. (I'm not aware of any specific plans :!: so this is pure speculation on my part.) We do know that FahCore_16 was written for OpenCL, and most or all of what has been developed for ATI will PROBABLY be usable for Intel -- but there's a lot I don't know yet about Intel's support of OpenCL.

If all this happens as the previous paragraph suggests it might, either FahCore_16 will work on Intel or a new core with some extra enhancements will be produced. Assume, for the moment, :( that it's the latter. [The :( is because that costs Stanford more to develop.] Does that mean Stanford favors Intel, if the newer core on the new hardware happens to be as scientifically productive as we hope it might be? Obviously not.
bruce
 
Posts: 21416
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.
