Project 10496 (158,16,66)

Re: Project 10496 (158,16,66)

Postby ComputerGenie » Mon May 15, 2017 10:12 am

bruce wrote:The OpenCL driver for Linux is significantly different from the Windows OpenCL driver. Each GPU uses at most one CPU thread. Many projects do significantly better if you have a faster CPU. The GPU can't get data fast enough to stay busy ...
And how much more does one need to run 2 GPUs (with no CPU folding) than an FX-8320 Eight-Core Processor running at 3500 MHz? :|
ComputerGenie
 
Posts: 242
Joined: Mon Dec 12, 2016 4:06 am

Re: Project 10496 (158,16,66)

Postby bruce » Mon May 15, 2017 4:23 pm

ComputerGenie wrote:And how much more does one need to run 2 GPUs (with no CPU folding) than an FX-8320 Eight-Core Processor running at 3500 MHz? :|

Unfortunately, having 8 cores doesn't matter: the drivers are not multi-threaded, so each GPU uses one CPU core. I'll bet you could create a slot that uses 6 CPUs without changing the GPU performance ... unless you already run some "heavy" applications that continuously use some of your CPU threads.
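For reference, a slot layout along those lines might look like the sketch below in the v7 client's config.xml (the slot ids and the exact cpus value are illustrative; adjust to your own hardware):

```xml
<config>
  <!-- CPU slot pinned to 6 of the FX-8320's 8 threads -->
  <slot id='0' type='SMP'>
    <cpus v='6'/>
  </slot>
  <!-- one slot per GPU; each occupies roughly one CPU thread -->
  <slot id='1' type='GPU'/>
  <slot id='2' type='GPU'/>
</config>
```

With the two GPU slots each tying up roughly one CPU thread, a 6-thread SMP slot is about the largest that avoids starving the GPUs on an 8-thread CPU.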
bruce
 
Posts: 22711
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Project 10496 (158,16,66)

Postby ComputerGenie » Mon May 15, 2017 5:24 pm

bruce wrote:The OpenCL driver for Linux is significantly different from the Windows OpenCL driver. Each GPU uses at most one CPU thread. Many projects do significantly better if you have a faster CPU. The GPU can't get data fast enough to stay busy ...
bruce wrote:
ComputerGenie wrote:And how much more does one need to run 2 GPUs (with no CPU folding) than an FX-8320 Eight-Core Processor running at 3500 MHz? :|

Unfortunately, having 8 cores doesn't matter: the drivers are not multi-threaded, so each GPU uses one CPU core. I'll bet you could create a slot that uses 6 CPUs without changing the GPU performance ... unless you already run some "heavy" applications that continuously use some of your CPU threads.
OK, so to re-ask:
And how much more does one need to run two GTX 1080 GPUs (with no CPU folding) than a CPU running at 3500 MHz?
ComputerGenie
 
Posts: 242
Joined: Mon Dec 12, 2016 4:06 am

Re: Project 10496 (158,16,66)

Postby Joe_H » Mon May 15, 2017 6:58 pm

Ignore the clock speed of the CPU; it is a relatively meaningless measure of performance, only useful for comparing processors from the same family. As a design that dates back nearly five years, the FX-8320 is probably fast enough in most cases, but its ability to transfer data between the CPU and a GPU card in a PCIe slot also depends on the chipset that connects them.
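To put rough numbers on the chipset's side of that: the 990FX provides PCIe 2.0 lanes, and the gap to PCIe 3.0 comes mostly from the line encoding. A quick sketch (theoretical per-lane rates; real-world throughput is lower):

```python
# Theoretical per-lane PCIe throughput after line-encoding overhead.
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 0.5 GB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane

def lane_bandwidth_gbps(gt_per_s, payload_bits, line_bits):
    """Usable GB/s for one lane: raw transfers scaled by encoding, bits -> bytes."""
    return gt_per_s * payload_bits / line_bits / 8

pcie2 = lane_bandwidth_gbps(5, 8, 10)
pcie3 = lane_bandwidth_gbps(8, 128, 130)

for lanes in (8, 16):
    print(f"x{lanes}: PCIe 2.0 = {pcie2 * lanes:.1f} GB/s, "
          f"PCIe 3.0 = {pcie3 * lanes:.1f} GB/s")
```

So a PCIe 2.0 x16 slot tops out around 8 GB/s each way in theory, versus roughly 15.8 GB/s for PCIe 3.0 x16.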

iMac 2.8 i7 12 GB smp8, Mac Pro 2.8 quad 12 GB smp6
MacBook Pro 2.9 i7 8 GB smp3
Joe_H
Site Admin
 
Posts: 4556
Joined: Tue Apr 21, 2009 4:41 pm
Location: W. MA

Re: Project 10496 (158,16,66)

Postby ComputerGenie » Mon May 15, 2017 7:36 pm

Joe_H wrote:Ignore the clock speed of the CPU; it is a relatively meaningless measure of performance, only useful for comparing processors from the same family. As a design that dates back nearly five years, the FX-8320 is probably fast enough in most cases, but its ability to transfer data between the CPU and a GPU card in a PCIe slot also depends on the chipset that connects them.
M5A99FX PRO R2.0
"Chipset - AMD 990FX/SB950
System Bus - Up to 5.2 GT/s HyperTransport™ 3.0 "
:?:
I'm still not sure how that could have a massive effect on one given RCG and not another RCG in the same project.
ComputerGenie
 
Posts: 242
Joined: Mon Dec 12, 2016 4:06 am

Re: Project 10496 (158,16,66)

Postby B.A.T » Mon May 15, 2017 9:08 pm

The only thing I notice with this project and 10494/10492 is that they have very large atom counts (277543 and 189218 atoms).

Maybe the issue is connected to this?
B.A.T
 
Posts: 17
Joined: Sat Oct 08, 2011 8:33 am
Location: Norway

Re: Project 10496 (158,16,66)

Postby bruce » Tue May 16, 2017 1:19 am

It could be dependent on the atom count ... in fact, that's pretty likely. It still depends a lot on how NVidia wrote their OpenCL driver. A certain amount of the OpenCL work is done on the CPU, preparing data to be transferred to the GPU. Add the time for the chipset to transfer the data to the GPU, and the result is less-than-ideal performance. As I've said several times, PG is aware of the problem and is working toward an improvement, but they're still dependent on whatever drivers NVidia distributes.
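As a back-of-envelope illustration of why atom count matters for that transfer (hypothetical numbers: this assumes single-precision positions with three coordinates per atom and an effective 4 GB/s bus; real core traffic includes more state than just positions, so treat it as a lower bound):

```python
def positions_mb(atoms, coords=3, bytes_per=4):
    """Size in MB of one set of single-precision atom positions."""
    return atoms * coords * bytes_per / 1e6

# Atom counts mentioned for projects 10496-family WUs in this thread.
for atoms in (277543, 189218):
    mb = positions_mb(atoms)
    us = mb / 4000 * 1e6  # MB / (MB per second) -> seconds -> microseconds
    print(f"{atoms} atoms: {mb:.2f} MB of positions, "
          f"~{us:.0f} us per transfer at an assumed 4 GB/s")
```

Per transfer that is small, but repeated every few simulation steps across millions of steps, the transfer and CPU-side preparation time adds up to shader idle time.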
bruce
 
Posts: 22711
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Project 10496 (158,16,66)

Postby FldngForGrandparents » Fri Jun 30, 2017 2:56 am

Any update on this? This project seems to have spread out: currently all nine of my GPUs are running it at the same time, from 980s to 1080 Tis, all with the same impact. Ubuntu 14.04/16.04, Nvidia drivers 370.28 to 381.22. So a mix.

Dedicated to my grandparents who have passed away from Alzheimer's

Dedicated folding rig on Linux Mint 19.1:
2 - GTX 980 OC +200
1 - GTX 980 Ti OC +20
4 - GTX 1070 FE OC +200
3 - GTX 1080 OC +140
1 - GTX 1080Ti OC +120
FldngForGrandparents
 
Posts: 67
Joined: Sun Feb 28, 2016 10:06 pm

Re: Project 10496 (158,16,66)

Postby bruce » Fri Jun 30, 2017 11:15 pm

When there are many projects designed to work with your hardware, you'll get a variety of assignments. If some of those projects happen to be off-line, the frequency of assignments from the remaining projects will increase. If only a few projects (or even just one) are available at the time, it's conceivable that all your systems will be running the same project.

Rest assured that every WU is unique. FAH doesn't reassign the same WU multiple times -- except after somebody fails to return results by the deadline.
bruce
 
Posts: 22711
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Project 10496 (158,16,66)

Postby QuintLeo » Tue Jul 04, 2017 10:06 am

Unfortunately no, we have not seen the end of this project.

The GTX 1080 Ti also runs low PPD on this, commonly less than 800k PPD (vs. 1 million-plus for everything else my 1080 Ti cards have seen to date).

I'm starting to wonder if this project has memory-latency issues, perhaps due to its size, given that it also doesn't seem to run well on the GTX 1080.
QuintLeo
 
Posts: 45
Joined: Sat Jun 17, 2017 5:43 am

Re: Project 10496 (158,16,66)

Postby Strife84 » Wed Jul 05, 2017 12:38 pm

QuintLeo wrote:I'm starting to wonder if this project has memory-latency issues, perhaps due to its size, given that it also doesn't seem to run well on the GTX 1080.


It's the same for me: I've only been folding project 10496 on my GTX 1080 Ti for the last two days. Anyway, here's my personal theory: what if users block this project in order to fold more "profitable" WUs? I read something similar a while ago in another thread: people simply blocked incoming work from a particular server that had been offline for several days.
Strife84
 
Posts: 4
Joined: Sat Mar 04, 2017 11:36 am

Re: Project 10496 (158,16,66)

Postby Nathan_P » Wed Jul 05, 2017 3:20 pm

Cherry-picking of WUs is frowned upon by PG; people have had their points zeroed in the past for doing it.
Nathan_P
 
Posts: 1442
Joined: Wed Apr 01, 2009 9:22 pm
Location: Jersey, Channel islands

Re: Project 10496 (158,16,66)

Postby Leonardo » Thu Jul 06, 2017 1:14 am

QuintLeo wrote:The GTX 1080 Ti also runs low PPD on this, commonly less than 800k PPD (vs. 1 million-plus for everything else my 1080 Ti cards have seen to date).


Is there a possibility your GPU is throttling due to heat buildup? I have three 1080 Tis, all of which typically process a 10496 work unit at about 1M PPD. I have overclocked the GPUs, but only with a minimal 100 MHz core boost. They stay relatively cool, typically at about 65 to 70°C core temp.

Keep in mind also that a 1080 Ti working a 10496 WU pumps a lot of data through the PCIe bus. If multiple video cards are folding on the same motherboard, they can very easily saturate the bus.
Leonardo
 
Posts: 597
Joined: Tue Dec 04, 2007 5:09 am
Location: Eagle River, Alaska

Re: Project 10496 (158,16,66)

Postby QuintLeo » Sat Jul 08, 2017 8:22 am

Unfortunately not - and this project is very bad on the 1080 (the same PPD as a 1070, to a hair less) and very bad indeed on the 1080 Ti (only about 10% more PPD than a 1070). 10494 is very similar.

Not nearly as bad on the 1080 Ti as 9415 and 9414, though, where it gets a lot worse PPD than the 1070 does (literally about 60% of it, based on the ones I've accumulated in HFM so far).
The performance is so bad on my 1080 Ti cards that I'm seriously considering blocking the work server on that machine - it's ridiculous to waste top-end folding cards on a work unit that performs so much better on much worse hardware.
QuintLeo
 
Posts: 45
Joined: Sat Jun 17, 2017 5:43 am

Re: Project 10496 (158,16,66)

Postby bruce » Sat Jul 08, 2017 2:59 pm

I think it's fair to assume that the GPU is either waiting for data to process or computing something. In other words, it's either waiting on the PCIe bus or busy in the shaders. If the shaders are 80% busy, that means that 20% of the time the GPU is waiting on the PCIe bus to deliver data to work on -- and the PCIe bus, in turn, is either moving data or waiting on the CPU to prepare that data. (In fact, those numbers can add up to more than 100%, because it's possible to transfer data concurrently with computing.)

Given your ability to measure the %Busy for the shaders, the %Busy for the bus, and the %Busy for the FAHCore, give us your estimate of what's going on and why it's doing whatever it's doing.
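That accounting can be sketched as a toy model (illustrative only; the function and the 80%/15% figures below are assumptions for the sake of the example, not measurements from this thread, and the no-overlap simplification understates what concurrent transfer/compute can hide):

```python
def gpu_time_budget(shader_busy, bus_busy, overlap=0.0):
    """Split an interval of wall time into compute, bus wait, and CPU wait.

    shader_busy / bus_busy: fraction of wall time the shaders / PCIe bus
    are active; overlap is the fraction where both proceed concurrently
    (which is why the raw fractions can sum past 1.0).
    """
    stalled = 1.0 - shader_busy                  # shaders idle, waiting for data
    exposed_bus = bus_busy - overlap             # bus time not hidden behind compute
    cpu_wait = max(0.0, stalled - exposed_bus)   # bus idle too: waiting on CPU prep
    return {"computing": shader_busy,
            "waiting_on_bus": exposed_bus,
            "waiting_on_cpu": cpu_wait}

# e.g. shaders 80% busy, bus active 15% of the time, no overlap:
# the 20% shader stall splits into 15% bus wait and 5% CPU-side prep.
print(gpu_time_budget(0.80, 0.15))
```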
bruce
 
Posts: 22711
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.
