GTX 1080 Review

Paragon
Posts: 139
Joined: Fri Oct 21, 2011 3:24 am
Hardware configuration: Rig1 (Dedicated SMP): AMD Phenom II X6 1100T, Gigabyte GA-880GMA-USB3 board, 8 GB Kingston 1333 DDR3 Ram, Seasonic S12 II 380 Watt PSU, Noctua CPU Cooler

Rig2 (Part-Time GPU): Intel Q6600, Gigabyte 965P-S3 Board, EVGA 460 GTX Graphics, 8 GB Kingston 800 DDR2 Ram, Seasonic Gold X-650 PSU, Arctic Cooling Freezer 7 CPU Cooler
Location: United States

GTX 1080 Review

Post by Paragon »

I'm slowly working my way through the Pascal cards so that I can get into the RTX hardware (better late than never, right?). The review is below. Please let me know what you think and if you have any suggestions.

https://greenfoldingathome.com/2019/04/ ... ew-part-1/

Takeaways:
~730K PPD in Windows 10, averaged over 2 weeks of stats (is this low, or is it similar to what you have seen?)
150 W power consumption measured at the card
240 W total system power consumption

Interesting note: based on my testing, the 1080 was slightly less efficient than the 1070 Ti (not by much, though; it could be within the variation of the work units I got). Anyone else see this?
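For reference, here's the efficiency math behind that comparison (a quick Python sketch using the takeaway numbers above):

Code:
    # PPD/Watt from the takeaway figures above (GTX 1080, Windows 10)
    ppd = 730_000        # two-week average points per day
    card_watts = 150     # measured at the card
    system_watts = 240   # measured at the wall

    print(f"Card efficiency:   {ppd / card_watts:,.0f} PPD/W")    # ~4,867 PPD/W
    print(f"System efficiency: {ppd / system_watts:,.0f} PPD/W")  # ~3,042 PPD/W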

In part 2, I'm going to gather stats while folding at different power limits to see the effect on PPD and PPD/Watt (probably running for a week at each setting); a sketch of how I'll drive the sweep is below.
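This is roughly the plan, in case anyone wants to reproduce it (a sketch, assuming nvidia-smi is on the PATH and run with admin rights; the limits shown are examples within the 1080's allowed range, not my final test points):

Code:
    import subprocess

    LIMITS_W = [120, 150, 180]  # example limits; GTX 1080 FE stock TDP is 180 W

    def set_power_limit(watts, gpu=0):
        # Requires admin/root; the value must sit inside the card's min/max limits
        subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", str(watts)], check=True)

    def read_power_draw(gpu=0):
        # Query the current board power draw in watts
        out = subprocess.run(
            ["nvidia-smi", "-i", str(gpu),
             "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True)
        return float(out.stdout.strip())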
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slot)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: GTX 1080 Review

Post by foldy »

One hint: downclock the GPU memory. It doesn't hurt FAH PPD but reduces power usage a little, which can even increase FAH speed if the GPU clock boosts higher.
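On Linux you can apply the memory offset from the command line (a sketch, assuming Coolbits is enabled; the performance-level index [3] is typical for Pascal but varies by card, and on Windows tools like MSI Afterburner do the same job):

Code:
    import subprocess

    # GPUMemoryTransferRateOffset is in effective (double-data-rate) MHz,
    # so -1000 here corresponds to roughly -500 MHz on the memory clock
    subprocess.run(
        ["nvidia-settings", "-a",
         "[gpu:0]/GPUMemoryTransferRateOffset[3]=-1000"],
        check=True)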

You can also use FAHbench to compare GPU performance; it can be loaded with different real work units.
Paragon
Posts: 139
Joined: Fri Oct 21, 2011 3:24 am
Hardware configuration: Rig1 (Dedicated SMP): AMD Phenom II X6 1100T, Gigabyte GA-880GMA-USB3 board, 8 GB Kingston 1333 DDR3 Ram, Seasonic S12 II 380 Watt PSU, Noctua CPU Cooler

Rig2 (Part-Time GPU): Intel Q6600, Gigabyte 965P-S3 Board, EVGA 460 GTX Graphics, 8 GB Kingston 800 DDR2 Ram, Seasonic Gold X-650 PSU, Arctic Cooling Freezer 7 CPU Cooler
Location: United States

Re: GTX 1080 Review

Post by Paragon »

Thanks. Memory clock was downclocked by 500 MHz... I think I will go lower and see if it affects anything. Do you know if FAHbench reports in PPD? I thought it was in ns/day. Still, that is helpful; I can load it with the exact same work unit and compare in a controlled environment.
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: GTX 1080 Review

Post by bruce »

There's no single measurement that will satisfy everyone. When I want a quick-and-dirty measurement, I use advertised GFLOPS, but that's not perfect either.

Science is measured in ns, but total performance also depends on the complexity of the protein being analyzed, so I don't particularly like using ns except to compare other hardware configurations ON THE SAME PROJECT.

PPD is a lot closer to what FAH donors like, but it's also distorted by how the bonus points depend on speed as well as WU complexity.
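For anyone unfamiliar with how the bonus skews PPD, here is a rough sketch of the published quick-return bonus formula (base credit, k, and the timeout are per-project values assigned by the researchers):

Code:
    import math

    def fah_points(base_credit, k, timeout_days, elapsed_days):
        # Quick-return bonus: faster returns earn a sqrt-shaped multiplier
        bonus = max(1.0, math.sqrt(k * timeout_days / elapsed_days))
        return base_credit * bonus

    # Returning the same WU twice as fast raises credit by sqrt(2) ~ 1.41x,
    # which is why PPD mixes raw speed with WU complexity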
MeeLee
Posts: 1375
Joined: Tue Feb 19, 2019 10:16 pm

Re: GTX 1080 Review

Post by MeeLee »

@Foldy:
Memory downclocks on GTX cards affect performance much more than on RTX cards, and they don't affect power levels at all when you're already capping the card's power.
At least not on my cards.
I haven't seen any GPU speed boost from lowering GPU memory speeds while power limiting the graphics card.
In most cases, downclocking the VRAM of GTX cards will result in somewhat lower PPD, but not lower power consumption.
In extreme cases the VRAM can be set to idle, in which case PPD is almost halved on GTX cards.


@Paragon:
The best thing you can do is power cap the card to any allowed value, then mildly overclock both GPU and memory.
This works best on cards that are already running hot at stock speeds.

From there, I'd push the temps down further by manually setting the fan speed: just below the point where the fans get loud, but high enough to keep the GPU at 65°C or lower.
That gets you more headroom for overclocking.

Usually, I power cap the card and let it run through 5+% of a WU, so the stats stabilize, before recording any PPD values.
I repeat the process when given a WU from a different project series; e.g., 14k, 13k, 11k, or 9k WUs all give different PPD values.
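Once the time per frame (TPF) has settled, the PPD estimate itself is simple (a sketch, assuming the usual 100-frame WU and taking credit as the full bonus-included award):

Code:
    def estimate_ppd(credit, tpf_seconds, frames=100):
        # One WU takes tpf_seconds * frames; scale its credit to a full day
        wu_seconds = tpf_seconds * frames
        return credit * 86400 / wu_seconds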

As far as benchmarking with FAHbench, I would just install FAHClient and Control, and run a WU on it.
It's far more accurate.
gordonbb
Posts: 510
Joined: Mon May 21, 2018 4:12 pm
Hardware configuration: Ubuntu 22.04.2 LTS; NVidia 525.60.11; 2 x 4070ti; 4070; 4060ti; 3x 3080; 3070ti; 3070
Location: Great White North

Re: GTX 1080 Review

Post by gordonbb »

MeeLee wrote:...
As far as benchmarking with FAHbench, I would just install FAHClient and Control, and run a WU on it.
It's far more accurate.
I disagree. I have found FAHBench useful for quickly finding a maximum stable overclock, using 10-minute runs to get the GPU heat-soaked. I then usually dial it back one or two "bins", as experience has shown that an overclock stable on one WU will, invariably in time, fail on other work units with different topologies.

I consider using FAHBench to see how far you can push the overclock before it fails much preferable to doing it live on WUs, where failures may impact your "reputation" as far as the servers are concerned.

I usually use a current hard WU to test using FAHBench by copying it from the work folder to the FAHBench folder and configuring a json file as outlined in the FAHBench documentation.

I just finished two months of testing across 5 cards (2x GTX 1060 6GB, GTX 1070 Ti, RTX 2060 & RTX 2070), adjusting power limits. My initial conclusion is that yield (PPD) appears to scale roughly linearly with the power limit (with a slight slope of diminishing returns). In other words, there is no "sweet spot" where PPD peaks due to the Quick Return Bonus. The one exception is at or approaching maximum power, where there is a significant knee: you can push in a lot more power for a minimal increase in yield (see the sketch below).

All this testing has just affirmed that the results from my original 10-minute runs with FAHBench are valid in production.

Note that I did not run any overclock on the cards during testing, and I swept the power limits from the Founders Edition (FE) minimum to the add-in board (AIB) maximum.
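To make the "knee" concrete, this is the kind of marginal-yield tabulation behind that conclusion (the numbers are invented placeholders, not my measured data):

Code:
    # Hypothetical (watts, PPD) pairs from a power-limit sweep
    sweep = [(120, 550_000), (140, 640_000), (160, 700_000), (180, 720_000)]

    # Marginal PPD per extra watt between adjacent limits; the sharp drop
    # at the top end is the "knee" described above
    for (w0, p0), (w1, p1) in zip(sweep, sweep[1:]):
        print(f"{w0}->{w1} W: {(p1 - p0) / (w1 - w0):,.0f} PPD per extra watt")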
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: GTX 1080 Review

Post by bruce »

gordonbb wrote:I usually use a current hard WU to test using FAHBench by copying it from the work folder to the FAHBench folder and configuring a json file as outlined in the FAHBench documentation.
That's not an unreasonable method ... except for your ability to choose which WU adequately represents a "hard WU". If you're only testing a specific type of GPU, you may be able to do it, but a "hard WU" for one type of GPU is not necessarily "hard" for a different class of GPU.
gordonbb
Posts: 510
Joined: Mon May 21, 2018 4:12 pm
Hardware configuration: Ubuntu 22.04.2 LTS; NVidia 525.60.11; 2 x 4070ti; 4070; 4060ti; 3x 3080; 3070ti; 3070
Location: Great White North

Re: GTX 1080 Review

Post by gordonbb »

bruce wrote:
gordonbb wrote:I usually use a current hard WU to test using FAHBench by copying it from the work folder to the FAHBench folder and configuring a json file as outlined in the FAHBench documentation.
That's not an unreasonable method ... except for your ability to choose which WU adequately represents a "hard WU". If you're only testing a specific type of GPU, you may be able to do it, but a "hard WU" for one type of GPU is not necessarily "hard" for a different class of GPU.
Agreed. Some WUs that have a much lower than average PPD on Nvidia Pascal work better on Turing, and some WUs that have a much lower than average PPD on lower-end Pascal hardware work better on GPUs with more shaders.

There are a couple that are consistent "brutes" on my hardware (14124?). I usually look in the reports on HFM.net and pick something with a low standard deviation and a consistent negative variance from the mean across all cards.

In general, though, I usually only "profile" a card when it's new, so running it stock for a few days will generally show which WUs it has lower yields on.
MeeLee
Posts: 1375
Joined: Tue Feb 19, 2019 10:16 pm

Re: GTX 1080 Review

Post by MeeLee »

gordonbb wrote:...I consider using FAHBench to see how far you can push the overclock before it fails much preferable to doing it live on WUs, where failures may impact your "reputation" as far as the servers are concerned...
Yeah, losing points because of a failed WU isn't very nice.
Sometimes I wish FAH could go back in time to where the WU was still OK and redo the part that went bad.
As far as credit goes, I think a certain number of WUs need to go wrong before you'll lose your bonus.
Often, if you overclock and a WU goes bad, you can adjust the overclock, after which it seems to run fine for a few WUs.
Then, out of the blue, one WU goes bad, perhaps because the room temperature went up, or a change in configuration, or whatever.
So you dial it down yet another 5 MHz.
Doing so, you probably lose 3 to 5 WUs, in the hope that the following WUs will pass.
I've done this for the cards I own, and sometimes have to redo it (after a system reinstall, or the change from cold weather to hot weather; my PC is not air-conditioned). On average I reach 96+% completed WUs and 3+% unreturned/bad WUs. Once the card runs stable, it just keeps adding to that ratio, and it would take quite a number of bad WUs to lose the bonus.
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: GTX 1080 Review

Post by bruce »

MeeLee wrote:Yeah, losing points because of a failed WU isn't very nice.
Agreed, but consider it from FAH's perspective. The project lost one donor's resources for the entire time you worked on that WU and zero science was produced.

In the early days of FAH, the FAHCore was unable to detect the failure until after the WU was uploaded, so even more resources were wasted. Now maybe you only waste 50% of the time it would have taken to complete the WU.

To make matters worse, neither FAH nor you actually knows whether it's a case of hardware failure (including the common cases of overclocking/overheating/etc., but also other hardware problems) or a spontaneous failure of the trajectory (aka a "bad WU"). Nobody knows how to predict a bad WU before somebody actually tries to run it, and since a significant percentage of failures can be recovered by retrying, it's worth attempting that ... and successfully completing that significant percentage without sending the WU to someone else and restarting from the beginning.
Paragon
Posts: 139
Joined: Fri Oct 21, 2011 3:24 am
Hardware configuration: Rig1 (Dedicated SMP): AMD Phenom II X6 1100T, Gigabyte GA-880GMA-USB3 board, 8 GB Kingston 1333 DDR3 Ram, Seasonic S12 II 380 Watt PSU, Noctua CPU Cooler

Rig2 (Part-Time GPU): Intel Q6600, Gigabyte 965P-S3 Board, EVGA 460 GTX Graphics, 8 GB Kingston 800 DDR2 Ram, Seasonic Gold X-650 PSU, Arctic Cooling Freezer 7 CPU Cooler
Location: United States

Re: GTX 1080 Review

Post by Paragon »

This is why I tend to find a "stable" point on a graphics card and then back it off by a decent amount. Sure, I could run it at 2050 MHz, but the additional stability margin I get at 1975-2012 MHz makes me more comfortable.
foldy
Posts: 2061
Joined: Sat Dec 01, 2012 3:43 pm
Hardware configuration: Folding@Home Client 7.6.13 (1 GPU slot)
Windows 7 64bit
Intel Core i5 2500k@4Ghz
Nvidia gtx 1080ti driver 441

Re: GTX 1080 Review

Post by foldy »

Overclocking also has a psychological factor: users want to reach, e.g., 2000 MHz to have a round number, when running at 1980 MHz is only 1% slower but stable.