calculation of work unit difficulty

Moderators: Site Moderators, FAHC Science Team

FAMAS
Posts: 89
Joined: Fri Sep 16, 2016 6:30 pm

calculation of work unit difficulty

Post by FAMAS »

Has there been any attempt at reliably calculating the changes in difficulty of the work units over the years? The power level of the network increased from some 15 to 100 between 2012 and the present; how much more difficult have the work units become?
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: calculation of work unit difficulty

Post by bruce »

There's no doubt that WUs are getting more difficult. Proteins are bigger and there's a constant demand for longer trajectories, but I'm not sure anybody has collected that information in numerical terms -- mostly because it's an obscure enough concept that a single number can't easily be treated as a real answer.

One possible way to measure it is by the number of Donors (slots/clients), but that doesn't take into account the increase in the average GFLOPS of individual GPUs or systems.

Another possibility is to look at the increase in total PPD, but that doesn't allow for what I'll call "points inflation," where one unit of work might earn more points today than it did 15 years ago (if this sort of inflation has been happening). (How many weeks of folding on a pre-Pentium CPU would have been equivalent to how many seconds of computing on a GeForce GTX 1080?)
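As a back-of-the-envelope answer to that parenthetical question, here's a quick sketch in Python. The peak-FLOPS figures are rough assumptions (a 486-class CPU at ~1 MFLOPS, a GTX 1080 at ~8.9 TFLOPS peak single precision), not measured FAH throughput:

[code]
# Back-of-the-envelope FLOPS equivalence; both peak figures are
# order-of-magnitude assumptions, not measured FAH throughput.
PRE_PENTIUM_FLOPS = 1e6    # assumed: 486-class CPU, ~1 MFLOPS
GTX_1080_FLOPS = 8.9e12    # assumed: GTX 1080 peak single precision

cpu_seconds_per_gpu_second = GTX_1080_FLOPS / PRE_PENTIUM_FLOPS
cpu_weeks = cpu_seconds_per_gpu_second / (7 * 24 * 3600)

print(f"1 GPU-second ~ {cpu_weeks:.1f} CPU-weeks")   # ~14.7 weeks
[/code]

Taken at face value, one second on the GPU is on the order of fifteen weeks on the old CPU, though the real ratio depends heavily on how efficiently each device is actually used.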

The grand total of all FLOPS is another measurement, but it has been discredited as an accurate measure of FAH's rate of scientific progress.

OK, let's go back to my first statement: a lot more science is being done, even if it's a challenge to put a number on it. Maybe five years ago I saw a chart of the trends in trajectory length over the years; maybe we could ask to have that chart updated. Even so, that wouldn't take into account changes in protein size or analysis quality, both of which also matter.

During the early days of GPU folding, there was an important distinction between analyses involving implicit and explicit solvents. GPUs could only handle implicit solvent problems, and all explicit solvent problems had to be done with CPU-based SSE calculations. FAHCore development eventually provided a path to solving explicit solvent problems on GPUs (which give a more precise analysis but require a lot more atoms -- those of the solvent).
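To get a rough sense of how much the solvent inflates the atom count, here's a small sketch. The protein size and box edge are hypothetical; the water number density (about 0.0334 molecules per cubic angstrom) is a standard bulk figure:

[code]
# How many atoms does explicit solvent add? Hypothetical protein and
# box sizes; only the bulk water density is a standard figure.
PROTEIN_ATOMS = 800       # assumed: a small protein, ~50 residues
BOX_EDGE = 60.0           # assumed: cubic box edge, in angstroms
WATER_DENSITY = 0.0334    # water molecules per cubic angstrom

n_waters = int(WATER_DENSITY * BOX_EDGE**3)   # ignores volume excluded by the protein
solvent_atoms = 3 * n_waters                  # three atoms per H2O

print(f"implicit solvent: {PROTEIN_ATOMS} atoms")
print(f"explicit solvent: {PROTEIN_ATOMS + solvent_atoms} atoms "
      f"({solvent_atoms} from water)")
[/code]

Even for a small protein, the water ends up contributing the overwhelming majority of the atoms, which is why explicit solvent work was so much more expensive.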
ChristianVirtual
Posts: 1596
Joined: Tue May 28, 2013 12:14 pm
Location: Tokyo

Re: calculation of work unit difficulty

Post by ChristianVirtual »

bruce wrote: During the early days of GPU folding, there was an important distinction between analyses involving implicit and explicit solvents. GPUs could only handle implicit solvent problems, and all explicit solvent problems had to be done with CPU-based SSE calculations.
Because of the single precision of GPUs?
Please contribute your logs to http://ppd.fahmm.net
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: calculation of work unit difficulty

Post by bruce »

ChristianVirtual wrote: Because of the single precision of GPUs?
No. The calculations are still done in single precision. There is limited use of double precision, mostly during the "sanity check" that confirms whether or not the calculations up to that point make sense ... aborting any run that's destined to fail.

I think that the number of atoms was limited.

There's no doubt that small proteins can easily be handled with single precision, and at some point FAH may attack problems for which more double precision will be required, but I don't think we're there yet.
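As a toy illustration of the kind of accumulated single-precision error a periodic double-precision check can catch (this is just a minimal sketch, not FAH's actual sanity check):

[code]
# Toy drift demo: summing a small step a million times in float32
# drifts measurably from the float64 reference.
import numpy as np

N, STEP = 1_000_000, 0.1
acc32 = np.float32(0.0)   # "production" single precision
acc64 = np.float64(0.0)   # "sanity check" double precision
for _ in range(N):
    acc32 += np.float32(STEP)
    acc64 += STEP

print(f"float32 total: {float(acc32):.2f}")
print(f"float64 total: {acc64:.2f}")
print(f"drift: {abs(float(acc32) - acc64):.2f}")   # clearly nonzero
[/code]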