Bigadv points change

Moderators: Site Moderators, FAHC Science Team

bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: point system is getting ridiculous...

Post by bruce »

Let's look at this issue of a benchmark machine in reverse.

Suppose I take some 48-core machine and call it the benchmark machine. (Oh, and somebody please donate 10 of them to Stanford so they'll have the hardware necessary whenever they need to test/benchmark/etc.) Run some bigadv project on it and let X be the number of baseline points to be assigned to that project. Now run some classic WU that was benchmarked on the 2.8 GHz P4 and call the points Y. We know that Y is equivalent to 110 PPD after considering the actual run-time. Apply some (yet to be determined) formula that relates X and Y and solve for X.

Has anything changed compared to the present benchmarking methodology that uses the i5?

That doesn't answer any questions with respect to the QRB -- please treat that as a separate topic -- but I think it does answer the question about the benchmark machine. The same methodology was probably applied when they decided to use the i5 as a benchmark machine.

The simplest possible formula relating X to Y is to say X = N*Y, where N is 48, the core count (i.e., running a uniprocessor client WU is worth the same on a P4 as it is on one core of the 48-core monster). Should the formula be more complex -- say, allowing for more RAM, or allowing that 48 cores are worth more working together on a single project than on 48 separate projects, or whatever -- or are those factors part of QRB? {* see note below, too}
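As a minimal sketch, the linear relation looks like this (Python; N = 48 and the 110 PPD standard come from above, and the one-day run time is a hypothetical placeholder):

```python
# Sketch of the simplest linear relation X = N * Y described above.
P4_PPD = 110  # the classic benchmark standard: the 2.8 GHz P4 earns 110 PPD

def classic_baseline_Y(run_time_days: float) -> float:
    """Baseline points Y for a classic WU, from its run time on the P4."""
    return P4_PPD * run_time_days

def bigadv_baseline_X(run_time_days: float, cores: int = 48) -> float:
    """Solve for X under X = N * Y: one core of the 48-core machine is
    treated as worth the same as the P4 for the same wall-clock time."""
    return cores * classic_baseline_Y(run_time_days)

print(bigadv_baseline_X(1.0))  # 5280.0 baseline points for a hypothetical 1-day WU
```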

Maybe when the baseline points for Core_a1, a2 were originally designed (before QRB) those factors were taken into account and now they're taken into account twice ... both in the baseline points and in the bonus calculation. (Note I said "maybe" since I don't know.)

There has been a lot of discussion about adjusting p2684 because it was "under-rated with respect to other projects" like p6901, but that's NOT the way to make a scientific comparison. Everything has to be based on a single standard, which is actually the definition of what a "point" is. By its very nature, a comparison of p6901 and p2684 will contain (shall I call them "pseudo-errors"?). The Donor perception that they're not equal was not based on benchmark information. I think I can confidently say that nobody (except the Pande Group) was actually running both of them on an i5 to make the comparison, so the fact that hardware differences treat different parts of an analysis slightly differently will lead to "pseudo-errors" even if there were other factors behind the variations. Sorting out these "pseudo-errors" from possible actual errors or from possible software changes is an expensive process and AFAIK was never done.

-------------------------

Punchy wrote:I'm curious as to what 12-core machine would take 10.2 or even 17 days to complete a 6903. Perhaps a more careful setting of the deadlines would provide the necessary damping to stop the complaints that started this thread.
* One factor that has to be figured in is the deadline factor. From time to time, folks on the cusp of being able to complete a WU (or not) ask to have deadlines extended, and the answer has always been NO. Some people choose to run 2 uniprocessor WUs rather than -smp 2 on a marginal Dual that may run part-time. Ignoring the issues associated with hardware detection and assigning the "right" projects to the "right" hardware, there should be some difference between the PPD of two Classic WUs with long deadlines and one SMP WU with a much tighter deadline.

If the Pande Group ever decides to accept a request to extend the deadline of an existing project, shouldn't they also REDUCE the points at the same time? I should have some incentive that encourages me to choose more challenging WUs. There's an automatic disincentive that discourages folks with marginal machines from making a risky choice, because anyone who gets a few non-bonus credits will find it worthwhile to downgrade their expectations (from bigadv to smp, or from smp to multi-uniprocessor, which are obviously things a person can choose).

At this point, the bigadv-12 projects are still very new and the Pande Group has stated that there still may be changes made to their points, so if they do change the deadline as you suggest, they can choose to increase the points or choose not to, but I don't think that changes the basic concept.

The fundamental formulas for PPD need to consider actual deadlines, not just the percent of the deadline that you used up processing it, facilitating wise choices between bigadv and smp or between smp and uni.
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: point system is getting ridiculous...

Post by Grandpa_01 »

7im wrote:
MtM wrote:...

What happens if you take bigadv and change it to a whole new client type beyond smp? What if PG then comes and gives us a formula which gives us a tie between science and time, saying it's as close as it gets? Would we still argue -bigadv needs to be benched on the same machines as the other clients run on, even if -bigadv would have its own much higher requirements?

That is the essence of the problem, not the formula used to attribute a value to time needing a change.

Yes, this is the essence of the problem. NO, the formula is ALSO a problem. You have posted nothing to show the formula is accurate for more than 8 cores. You cannot make this claim.

We need to fix both. So answer this question...


On the newest -bigadv-12 work units, a well known individual is netting 850,000 PPD on a 48 core system.

Is 1 computer with 48 cores, that completes only 1 work unit each day, REALLY worth more than 140 (benchmark)* computers, with a total of 560 cores, that complete an average of 70 work units a day? Really?

Then explain it to me! To ALL of us!!!
Our Core i5 benchmark machine gets 6189 PPD
*http://folding.stanford.edu/English/FAQ-PointsNew
I can give you a little better idea of how that compares to other bigadv machinery. I am currently running 4 970 and 980 machines; it takes me 66 hours to complete a 6903 on each machine, so in 2 3/4 days I complete 4 of them, for which I receive around 500,000 PPD, whereas the 48 core system completes 3 of them and receives 850,000 PPD. But it is not up to me to decide what value the quick return receives; Stanford is the only one who knows how much it is worth to them. I also know that the next upgrade I do will most likely be a 48 core or greater system. I can build and operate one for a hell of a lot less than what I have invested in my current folding rigs and the electricity it takes to run them. That is where the curve is out of whack: I am actually encouraged to do less work and get paid more. :mrgreen:
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: point system is getting ridiculous...

Post by mdk777 »

Is 1 computer with 48 cores, that completes only 1 work unit each day REALLY worth more than 140 (benchmark)* computers, with a total of 560 cores,
Depends on the value of speed.

Currently you can buy a 2TB HD for $70.
A consumer ssd runs around $240 for 120 GB.

This works out to a factor of 57X in cost per GB by my calculation.

While not 100X, the prices paid for server grade solutions are often well over 10X these consumer grade solutions.
For example, at $5000 for 256GB, that works out to about $19/GB.

http://www.newegg.com/Product/Product.a ... 6820227580

So we are above 500X the cost of a rather recent high-performance solution (2TB HDs have not been available all that long).

My point is that the market pays 100 x for speed increases on a regular basis.
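For concreteness, the arithmetic behind those ratios (a quick Python check using the prices quoted above):

```python
# Cost per GB for the three storage options quoted above.
hdd_per_gb = 70 / 2000           # 2TB HD at $70           -> $0.035/GB
consumer_ssd_per_gb = 240 / 120  # 120GB SSD at $240       -> $2.00/GB
server_ssd_per_gb = 5000 / 256   # 256GB server SSD, $5000 -> ~$19.53/GB

print(consumer_ssd_per_gb / hdd_per_gb)  # ~57x  (consumer SSD vs. HD)
print(server_ssd_per_gb / hdd_per_gb)    # ~558x (server SSD vs. HD)
```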
Last edited by mdk777 on Tue Jun 21, 2011 11:35 pm, edited 1 time in total.
Transparency and Accountability, the necessary foundation of any great endeavor!
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

mdk777 wrote:
Is 1 computer with 48 cores, that completes only 1 work unit each day REALLY worth more than 140 (benchmark)* computers, with a total of 560 cores,
Depends on the value of speed.

Currently you can buy a 2TB HD for $70.
A consumer ssd runs around $240 for 120 GB.

This works out to a factor of 57X in cost per GB by my calculation.

While not 100 x, the costs paid for server grade solutions are often well more than 10 X these consumer grade solutions.

My point is that the market pays 100 x for speed increases on a regular basis.
Only PG can answer whether the curve is right; I'm still assuming it is. If it is, then diversification of the project based on hardware capabilities is even more sound. Keep the fastest hardware separated from the mediocre, and the mediocre from the really slow. I'm sure there will be WUs enough for all our hardware, and this way the payoff for having faster machines should be bigger than when you mix them in with the others.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

Grandpa_01 wrote:That is where the curve is out of whack I am actually encouraged to do less work and get paid more. :mrgreen:
Professor Pande could really clear this thread up with a short post if this is correct. Is the formula encouraging the wrong contributions? Or is the contribution made with a single 48 core monster really worth more than that of a modest number of GPUs and normal SMP clients?

It would really help streamline this discussion into something more constructive if we didn't have to keep putting the formula to the test. We can't solve the problem, as we don't have enough data to really value time. If time is only used to discourage running multiple clients, then the exponent can be anything (which makes changing the existing square-root formula to a cube-root counterpart a valid option). But if the formula was chosen because time is really as important to them as the formula implies, does that not make us witnesses of a change in focus from bandwidth to latency?

bruce wrote:The fundamental formulas for PPD need to consider actual deadlines, not just the percent of the deadline that you used up processing it, facilitating wise choices between bigadv and smp or between smp and uni.
The percentage used is already derived from the total deadline and the processing time, so I'm not sure what you mean to say. In layman's terms, please.

Also, why is there a need to equate Y to X? To ensure that projects are valued equally across the client types? Why is that needed when you can't run a bigadv work unit on a normal system? Doesn't that imply you don't need to equate those two either?
Last edited by MtM on Tue Jun 21, 2011 7:51 pm, edited 2 times in total.
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: point system is getting ridiculous...

Post by 7im »

mdk777 wrote:
Is 1 computer with 48 cores, that completes only 1 work unit each day REALLY worth more than 140 (benchmark)* computers, with a total of 560 cores,
Depends on the value of speed.

Currently you can buy a 2TB HD for $70.
A consumer ssd runs around $240 for 120 GB.

This works out to a factor of 57X in cost per GB by my calculation.

While not 100 x, the costs paid for server grade solutions are often well more than 10 X these consumer grade solutions.
Example at $5000 for 256GB, this works out to $19 /GB

http://www.newegg.com/Product/Product.a ... 6820227580

So we are above 100X cost of a rather recent high performance solution (2TB HD have not been available all that long)

My point is that the market pays 100 x for speed increases on a regular basis.

I understand the analogy. But market values are not based on scientific values. Dollars do not equate to proteins.

So I have to ask... the market pays for the speed of what, in regard to FAH? 70 WUs in the same time as 1 WU is considered slower?

And if speed were the only concern here, the analogy might have applied better. ;)
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: point system is getting ridiculous...

Post by mdk777 »

I understand the analogy. But market values are not based on scientific values. Dollars do not equate to proteins.

So I have to ask... the market pays for the speed of what, in regard to FAH? 70 WUs in the same time as 1 WU is considered slower?

Yes, I was answering the value of speed question implied by your example.

Your analogy is of course intentionally flawed.

1. If those 140 benchmark computers turn in 70 WUs, they will receive much more than the base 6189 PPD.
2. As you know, those 140 benchmark machines cannot even handle the -bigadv-12 WU due to memory constraints.

You know these facts, but you attempt to make the situation appear much more dire than it really is.

Can 48 cores connected by high speed buses out-perform 560 cores that are not? We all know that they can.

Just looking at cores, we are only talking a little over 10x! Not even anything close to the 500X of my example.

Yes, my example was $, but those $ are a point system for determining the value of a computational increase; the science, or proteins, follows from the computational increase.

And no, the $ increase is not always (seldom if ever) linear with the increase in computation. Isn't that what this thread is about? ...
A desire to see a 1-to-1 relationship between points and science? ... A relationship that has never existed in computer science, in cost markets, or in depreciation markets.

Stop the world, I want to get off. :lol:
Last edited by mdk777 on Tue Jun 21, 2011 11:36 pm, edited 3 times in total.
Transparency and Accountability, the necessary foundation of any great endeavor!
Punchy
Posts: 125
Joined: Fri Feb 19, 2010 1:49 am

Re: point system is getting ridiculous...

Post by Punchy »

Punchy wrote:A quick fix to the bigadv-12 6903 work units would be to drop the final deadline down to a more reasonable time (with a corresponding change to the preferred deadline as well). Since they seem to take roughly 2x the older A5 units, 12 days would be more reasonable than 17. That one change would drop the section of the bonus curve in use down to a much less steep part.

I'm curious as to what 12-core machine would take 10.2 or even 17 days to complete a 6903. Perhaps a more careful setting of the deadlines would provide the necessary damping to stop the complaints that started this thread.
Disregard this - a quick check tells me that changing the final deadline from 17 to 12 days only changes the score by sqrt(12/17) and has no effect on the shape of the curve.
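For anyone who wants the quick check: with the project's k factor held fixed, the deadline only enters the bonus as a constant multiplier (Python sketch; the QRB formula itself is quoted later in the thread):

```python
import math

def bonus(k: float, deadline_days: float, elapsed_days: float) -> float:
    """QRB bonus factor: max(1, sqrt(k * deadline / elapsed))."""
    return max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# Cutting the deadline from 17 to 12 days rescales every point on the
# curve by the same constant, sqrt(12/17) ~= 0.84 -- the shape is unchanged.
for elapsed in (1.0, 2.0, 4.0, 8.0):
    print(elapsed, bonus(26.4, 12, elapsed) / bonus(26.4, 17, elapsed))
```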
Napoleon
Posts: 887
Joined: Wed May 26, 2010 2:31 pm
Hardware configuration: Atom330 (overclocked):
Windows 7 Ultimate 64bit
Intel Atom330 dualcore (4 HyperThreads)
NVidia GT430, core_15 work
2x2GB Kingston KVR1333D3N9K2/4G 1333MHz memory kit
Asus AT3IONT-I Deluxe motherboard
Location: Finland

Re: point system is getting ridiculous...

Post by Napoleon »

OK, the QRB system may need some fixing, but it isn't exactly top priority on my personal wish list: viewtopic.php?f=19&t=18972&start=0 :lol:
Win7 64bit, FAH v7, OC'd
2C/4T Atom330 3x667MHz - GT430 2x832.5MHz - ION iGPU 3x466.7MHz
NaCl - Core_15 - display
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: point system is getting ridiculous...

Post by 7im »

mdk777 wrote:
1. If those 140 benchmark computers turn in 70 WUs, they will receive much more than the base 6189 PPD.
2. As you know, those 140 benchmark machines cannot even handle the -bigadv-12 WU due to memory constraints.

You know these facts, but you attempt to make the situation appear much more dire than it really is.
And you try to make it less dire than it really is.

1. The FAQ I linked stated the i5 PPD was 6189 PPD. 850,000 PPD / 6000 PPD = about 140 systems to make that much PPD. How can the bonus be a lot higher if the FAQ states the PPD IS 6189 PPD?

2. Yes, I know that. Who cares? PPD is the measure of "scientific production." And I've shown the PPD of one 48 core -bigadv system to be equal to 140 SMP systems. That one bigadv work unit turned in after 24 hours is the same points value as 70 SMP work units turned in every 24 hours.

Points being equal... are the scientific values REALLY the same?! 1 wu of any type equals 70 of another? Or, as I have suggested, the points on the -bigadv work units scale upwards too quickly, and that 1 WU is NOT really that valuable. Not worth the same points as 70 SMP work units. Maybe 1 -bigadv WU is really only worth 50 SMP work units. Or only 35?

You tell me. Which is it?
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Grandpa_01
Posts: 1122
Joined: Wed Mar 04, 2009 7:36 am
Hardware configuration: 3 - Supermicro H8QGi-F AMD MC 6174=144 cores 2.5Ghz, 96GB G.Skill DDR3 1333Mhz Ubuntu 10.10
2 - Asus P6X58D-E i7 980X 4.4Ghz 6GB DDR3 2000 A-Data 64GB SSD Ubuntu 10.10
1 - Asus Rampage Gene III i7 970 4.3Ghz DDR3 2000 2-500GB Seagate 7200.11 0-Raid Ubuntu 10.10
1 - Asus G73JH Laptop i7 740QM 1.86Ghz ATI 5870M

Re: point system is getting ridiculous...

Post by Grandpa_01 »

7im wrote:
mdk777 wrote:
1. If those 140 benchmark computers turn in 70 WUs, they will receive much more than the base 6189 PPD.
2. As you know, those 140 benchmark machines cannot even handle the -bigadv-12 WU due to memory constraints.

You know these facts, but you attempt to make the situation appear much more dire than it really is.
And you try to make it less dire than it really is.

1. The FAQ I linked stated the i5 PPD was 6189 PPD. 850,000 PPD / 6000 PPD = about 140 systems to make that much PPD. How can the bonus be a lot higher if the FAQ states the PPD IS 6189 PPD?

2. Yes, I know that. Who cares? PPD is the measure of "scientific production." And I've shown the PPD of one 48 core -bigadv system to be equal to 140 SMP systems. That one bigadv work unit turned in after 24 hours is the same points value as 70 SMP work units turned in every 24 hours.

Points being equal... are the scientific values REALLY the same?! 1 wu of any type equals 70 of another? Or, as I have suggested, the points on the -bigadv work units scale upwards too quickly, and that 1 WU is NOT really that valuable. Not worth the same points as 70 SMP work units. Maybe 1 -bigadv WU is really only worth 50 SMP work units. Or only 35?

You tell me. Which is it?
You know only Stanford can answer that question. And judging by the current QRB it is worth 70, which is the only answer they have given.
2 - SM H8QGi-F AMD 6xxx=112 cores @ 3.2 & 3.9Ghz
5 - SM X9QRI-f+ Intel 4650 = 320 cores @ 3.15Ghz
2 - I7 980X 4.4Ghz 2-GTX680
1 - 2700k 4.4Ghz GTX680
Total = 464 cores folding
Amaruk
Posts: 254
Joined: Fri Jun 20, 2008 3:57 am
Location: Watching from the Woods

Re: point system is getting ridiculous...

Post by Amaruk »

What, exactly, is the relative value of the various CPU clients?

THIS FAQ gives us some indication.

classic PPD = 100
SMP PPD = 1760
bigadv PPD = 2640

This would mean the classic to bigadv ratio is 1:26.4

If we divide the aforementioned 48 core bigadv machine's PPD (850,000) by that ratio we get the equivalent classic PPD value, which is 32,196.97 PPD.

So how many classic folders would it take to get that PPD? I have an AMD @2.8 GHz that gets 575.96 PPD on classic Project 10720. That works out to 56 classic clients.

Does a single bigadv client = 56 classic clients?

Is a folder that turns in one massive WU every 20 hours equal to 56 Classic clients that each turn in one WU every 3.74 days?

In the time those 56 classic clients complete each of their WUs the bigadv machine will finish 4.5 WUs

Does that single bigadv WU = 12.5 classic WUs?
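A quick sketch of that arithmetic (Python; the FAQ ratios, the 850,000 PPD figure, and the 575.96 PPD AMD result are all quoted above):

```python
# Amaruk's comparison, using the FAQ's relative PPD values.
classic_faq, bigadv_faq = 100, 2640
ratio = bigadv_faq / classic_faq           # 26.4:1 classic-to-bigadv

classic_equiv = 850_000 / ratio            # ~32,197 "classic PPD"
print(classic_equiv / 575.96)              # ~56 classic clients (AMD @ 2.8 GHz)

# WU for WU: one bigadv WU every 20 h vs. one classic WU every 3.74 days
bigadv_per_cycle = 3.74 * 24 / 20          # ~4.5 bigadv WUs in 3.74 days
print(56 / bigadv_per_cycle)               # ~12.5 classic WUs per bigadv WU
```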

The right side of the graph approaches infinity too quickly...
I've seen a number of people allude to this, usually while promoting the idea that moving the point of infinity further to the right will 'fix' the PPD problem.

This reasoning is incorrect as the point of infinity does not change. Perhaps a picture would help:

[Graph: PPD (Y) versus hours to complete a 6901 (X)]


X is the time to complete 6901 in hours, Y is corresponding PPD.

This illustrates two things.

PPD is inversely proportional to time.

PPD approaches infinity as time nears zero.



This will be true for any points formula whose payout grows without bound as completion time approaches zero.
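A minimal numeric sketch of that point (Python; the 8,955 base points, k = 26.4, and 6-day deadline for 6901 are quoted later in this thread, and any positive k and deadline behave the same way):

```python
import math

def ppd(base: float, k: float, deadline_days: float, elapsed_days: float) -> float:
    """PPD under the square-root QRB: bonus-adjusted points per elapsed day."""
    points = base * max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return points / elapsed_days

# Halving the deadline rescales the curve, but PPD still diverges as the
# completion time approaches zero -- the "point of infinity" never moves.
for days in (4.0, 2.0, 1.0, 0.5, 0.1, 0.01):
    print(days, round(ppd(8955, 26.4, 6.0, days)), round(ppd(8955, 26.4, 3.0, days)))
```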
Amaruk
Posts: 254
Joined: Fri Jun 20, 2008 3:57 am
Location: Watching from the Woods

Re: point system is getting ridiculous...

Post by Amaruk »

I have been asked to share the following with everyone.


PROPOSED CHANGE TO POINTS SYSTEM


Proposed changes are using cube root in place of square root and increasing benchmark's base points by 28.32% to compensate.

The above change in the benchmark machine's base points was chosen to make an AMD X2 240 @ 2.8 GHz earn similar points under both systems. Under the current system it gets 575.96 PPD for 10720 and 601.74 for 10721. With the proposed changes it would get 569.07 PPD for 10720 and 592.69 for 10721.

Of course, this adjustment to base points can be changed to match the curves of the two formulas at any given point. This also highlights the downside of changing the bonus points curve: it will not affect all clients equally.




Using current formula:

final_points = base_points * max(1,squareroot(k*deadline_length/elapsed_time))

benchmark base PPD = 1,130

k = 30 * Core_i5_time / deadline_length
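For concreteness, a small Python sketch of that formula, checked against one row of the tables below (it assumes elapsed time = 100 frames at the quoted TPF, which is consistent with the PPD figures listed):

```python
import math

def final_points(base: float, k: float, deadline_days: float,
                 elapsed_days: float) -> float:
    """Current QRB: base_points * max(1, sqrt(k * deadline / elapsed))."""
    return base * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# bigadv 6901 on the dual 5620 below: TPF 17:49, 100 frames per WU
elapsed = (17 * 60 + 49) * 100 / 86400   # ~1.237 days
points = final_points(8955, 26.4, 6, elapsed)
print(points / elapsed)                  # ~81,893 PPD, matching the table below
```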


The current stats for the following WUs are:


UNI 10720, A4, 0.75k, 984 points, 16/24 days

UNI 10721, A4, 0.75k, 654 points, 8/16 days

~~~

SMP 6052, A3, 2.1k, 481 points, 3/6 days

SMP 6067, A3, 2.1k, 481 points, 3/6 days

SMP 7136, A3, 3.23k, 585 points, 2.6/4 days

SMP 7137, A3, 3.23k, 585 points, 2.6/4 days

~~~

bigadv 2685, A5, 26.4k, 8,955 points, 4/6 days

bigadv 6900, A5, 26.4k, 8,955 points, 4/6 days

bigadv 6901, A5, 26.4k, 8,955 points, 4/6 days


The following is a breakdown of relative performance of two folders using the current system.


First machine is a 930 @ 3.8 GHz (4C8T):


UNI 10720 - 00:27:34 - 5,147.68 PPD (1,286.92 X4)

UNI 10721 - 00:17:45 - 6,621.76 PPD (1,655.44 X4)

~~~

SMP 6052 - 00:03:10 - 16,556.58 PPD

SMP 6067 - 00:03:17 - 15,682.01 PPD

SMP 7137 - 00:03:17 - 19,313.39 PPD

~~~

bigadv 2685 - 00:35:50 - 28,711.51 PPD

bigadv 6900 - 00:35:55 - 29,611.65 PPD

bigadv 6901 - 00:36:36 - 27,814.12 PPD


Averaging out the PPD by client type:

Uni @5,884.72 PPD vs SMP @17,183.99 PPD = 2.9201:1 ratio.

SMP @17,183.99 PPD vs bigadv @ 28,712.43 PPD = 1.6709:1 ratio.



Second machine is dual 5620 @3.8 GHz (8C16T)


UNI 10720 - 00:27:34 - 10,295.36 PPD (1,286.92 X4)

UNI 10721 - 00:17:45 - 13,243.52 PPD (1,655.44 X4)

~~~

SMP 6052 - 00:01:36 - 46,099.28 PPD

SMP 6067 - 00:01:37 - 45,388.24 PPD

SMP 7136 - 00:01:36 - 56,774.18 PPD

~~~

bigadv 2685 - 00:18:30 - 77,397.93 PPD

bigadv 6900 - 00:17:48 - 82,008.13 PPD

bigadv 6901 - 00:17:49 - 81,893.09 PPD


Averaging out the PPD by client type:

Uni @11,769.44 PPD vs SMP @ 49,420.57 PPD = 4.1991:1

SMP @ 49,420.57PPD vs bigadv @ 80,433.05 PPD = 1.6275:1




The following reflects changes resulting from the proposed formula:

final_points = base_points * max(1,cuberoot(k*deadline_length/elapsed_time))

benchmark's base PPD = 1,450

k = 30 * Core_i5_time / deadline_length
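The same sketch with the cube root and the 28.32% higher base points (8,955 * 1.2832 ≈ 11,491) reproduces the revised table:

```python
def proposed_points(base: float, k: float, deadline_days: float,
                    elapsed_days: float) -> float:
    """Proposed QRB: base_points * max(1, cuberoot(k * deadline / elapsed))."""
    return base * max(1.0, (k * deadline_days / elapsed_days) ** (1 / 3))

# bigadv 6901 again: base raised from 8,955 to 11,491, same elapsed time
elapsed = (17 * 60 + 49) * 100 / 86400   # ~1.237 days
points = proposed_points(11491, 26.4, 6, elapsed)
print(points / elapsed)                  # ~46,808 PPD, matching the revised table
```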


Revised stats for the following WUs (base points increased by 28.32%):


UNI 10720, A4, 0.75k, 1,263 points, 16/24 days

UNI 10721, A4, 0.75k, 839 points, 8/16 days

~~~

SMP 6052, A3, 2.1k, 617 points, 3/6 days

SMP 6067, A3, 2.1k, 617 points, 3/6 days

SMP 7136, A3, 3.23k, 751 points, 2.6/4 days

SMP 7137, A3, 3.23k, 751 points, 2.6/4 days

~~~

bigadv 2685, A5, 26.4k, 11,491 points, 4/6 days

bigadv 6900, A5, 26.4k, 11,491 points, 4/6 days

bigadv 6901, A5, 26.4k, 11,491 points, 4/6 days



This is the breakdown of relative performance of the same two folders using the proposed system.


First machine is a 930 @ 3.8 GHz (4C8T):


UNI 10720 - 00:27:34 - 5,570.04 PPD (1,392.51 X4)

UNI 10721 - 00:17:45 - 5,813.44 PPD (1,453.36 X4)

~~~

SMP 6052 - 00:03:10 - 10,816.55 PPD

SMP 6067 - 00:03:17 - 10,307.15 PPD

SMP 7137 - 00:03:17 - 12,650.98 PPD

~~~

bigadv 2685 - 00:35:50 - 18,437.84 PPD

bigadv 6900 - 00:35:55 - 18,380.82 PPD

bigadv 6901 - 00:36:36 - 17,924.68 PPD


Averaging out the PPD by client type:

Uni @ 5,691.74 PPD vs SMP @ 11,258.27 PPD = 1.9780:1

SMP @ 11,258.27 PPD vs bigadv @ 18,247.78 PPD = 1.6208:1



Second machine is dual 5620 @3.8 GHz (8C16T)

UNI clients same as 930, so 1,422.94 PPD/client X 8 = 11,383.48 PPD

~~~

SMP 6052 A3 00:01:36 - 26,878.11 PPD

SMP 6067 A3 00:01:37 - 26,509.29 PPD

SMP 7136 A3 00:01:36 - 32,990.14 PPD

~~~

bigadv 2685 A5 00:18:30 - 44,517.48 PPD

bigadv 6900 A5 00:17:48 - 46,866.90 PPD

bigadv 6901 A5 00:17:49 - 46,808.45 PPD


Averaging out the PPD by client type:

Uni @11,383.48 PPD vs SMP @ 28,792.51 PPD = 2.5293:1

SMP @ 28,792.51 PPD vs bigadv @ 46,064.28 PPD = 1.5999:1



Reduction in PPD as percentage, i7 @ 3.8 GHz:

UNI 5,884.72 vs 5,691.74 - 96.72% of current.

SMP 17,183.99 vs 11,258.27 - 65.52% of current.

bigadv 28,712.43 vs 18,247.78 - 63.55% of current.



Reduction in PPD as percentage, dual 5620 @ 3.8 GHz:

UNI 11,769.44 vs 11,383.48 - 96.72% of current.

SMP 49,420.57 vs 28,792.51 - 58.26% of current.

bigadv 80,433.05 vs 46,064.28 - 57.27% of current.
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: point system is getting ridiculous...

Post by 7im »

Please summarize why your formula is better.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: point system is getting ridiculous...

Post by MtM »

@Amaruk

Sliding to the right to cover the entire performance scale F@H operates on does not help by itself, and neither does changing the QRB formula to work better within the boundaries of today's performance scale. It does fix the current problem some seem to have with the exponential increase in value at increasingly shorter return times, but only until another project is released which behaves differently on different machines in that performance spectrum.

The old FAQs are of no use anymore; if you want to change something, I believe you need to forget them, as building on them for future considerations does not reflect the change in the performance range. As I said, didn't it start with single cores, and then dual cores came with SMP? A big difference, but does it equate to comparing today's average dual core (I think the Steam survey still lists this as the average CPU type... could be wrong) to the 48+ core monsters? Not on core count alone, but in total capabilities? With the step to multi-socket comes a lot more than twice the cores; there is also higher memory capacity for 64-bit cores to exploit, and a lot more.

As I asked Bruce when he talked about nothing changing if you add a benchmarking machine on which you have to do this:
Apply some (yet to be determined) formula that relates X and Y and solve for X.
Why do this if X and Y aren't related? Separate X and Y; benchmark base points for each based purely on their performance on the reference systems and their scientific worth. The QRB formula can be left alone, as no one has yet come up with a reason why time isn't as valuable as the equation implies. If there is a tendency for people to run hardware far to the right, shifting the average system configuration upwards, then reflect that with the benchmarking machine.

About the donations needed for PG to upgrade their reference machines: could someone working in the PR field explain why Intel and/or AMD wouldn't consider donating a reference machine to the folding cause, and upgrading it when needed, to be good PR compared to the value of the hardware?