Suggested Change to the PPD System

Moderators: Site Moderators, FAHC Science Team

k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Suggested Change to the PPD System

Post by k1wi »

Hey guys,

It's pretty obvious that there is an ongoing issue regarding the PPD bonus system, and while there has been a lot of hypothesising about how points should be awarded, I was wondering whether I might suggest an option that could help alleviate the problems associated with the current PPD bonus system.

As I see it, the main issue with the PPD system is that we are fast approaching the hockey-stick end of the curve, where small improvements in speed result in massive PPD increases. We were, in fact, heading towards that hockey-stick region long before the Bonus Points system came into effect, because computers are increasing in power so rapidly. What the BP system has done is exacerbate, or speed up, the approach to this part of the curve.
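To make the shape of that curve concrete, here is a rough sketch (in Python) of how PPD behaves as completion time shrinks. I'm assuming the commonly published QRB formula, points = base * max(1, sqrt(k * deadline / completion time)), and the base points, k and deadline values are made up purely for illustration:

Code:

import math

def wu_points(base, k, deadline_days, completion_days):
    # Quick-return bonus: credit grows with the square root of how far
    # inside the deadline the WU is returned.
    return base * max(1.0, math.sqrt(k * deadline_days / completion_days))

def ppd(base, k, deadline_days, completion_days):
    # Points per day = credit per WU x WUs completed per day.
    return wu_points(base, k, deadline_days, completion_days) / completion_days

# Hypothetical project: 1000 base points, k = 5, 6-day deadline.
for days in (3.0, 1.5, 0.75, 0.375, 0.1875):
    print(f"{days:6.4f} days/WU -> {ppd(1000, 5, 6, days):10,.0f} PPD")

Every halving of completion time multiplies PPD by roughly 2.83 (2^1.5), which is exactly the hockey stick I'm describing.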

I was looking at the graph, and what I noticed is that the reason we haven't had a problem so far is that we've slowly been working our way along the relatively flat portion of the curve. My thought then became: how do we keep ourselves on the relatively flat portion of the graph and avoid ramping up the curve rapidly? I thought about it economically, and my thinking is as follows:

As I see it, the easiest way to keep 'PPD inflation' from going nuclear is to adjust the benchmark machine to the rate of CPU improvement. At a defined interval (annually is probably too infrequent, but monthly may be too frequent), adjust the benchmark machine's PPD to reflect the improvements in computing power. My thinking behind this is that we already benchmark all WUs relative to a single machine; the only issue is that this benchmark, while being run on an i5, is still based on a Pentium 4. Therefore, I ask: is it too much of a stretch to adjust the benchmark to suit a contemporary computer? Consider it benchmarking to a current-day Pentium 4 equivalent. Another way of thinking about it is ensuring that PPD is constant on the current 'average' computer. I.e. if the Pentium 4 benchmark machine earned 100 PPD in its heyday, then the current-day equivalent, let's say an i5, should currently earn 100 PPD.

Yes, it means that the PPD of a given machine will decrease over time, but it already does that - just relative to other, newer computers. All that is changing is that we are adding this adjustment into the PPD model so that the different strata of current-generation machines will earn roughly the same PPD year on year - 'the top 5%' will earn roughly the same PPD year on year, while 'the bottom 5%' will also earn roughly the same PPD year on year (it will just always be less). I'm sure Stanford has enough information to enable them to adjust the value of a point to accurately reflect the increase in computing power. If need be there could be a discussion prior to each adjustment, or maybe not.
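To show the kind of adjustment I have in mind, here is a toy sketch. The 58.7% annual growth figure is just 'computing power doubles every 18 months' rewritten as a yearly rate; none of these numbers are a proposed rate, they are only an illustration:

Code:

def real_points(nominal_points, annual_growth, years_since_baseline):
    # Divide out the assumed growth in average computing power,
    # exactly like converting nominal dollars into baseline-year dollars.
    deflator = (1.0 + annual_growth) ** years_since_baseline
    return nominal_points / deflator

# Doubling every 18 months is roughly 58.7% growth per year.
growth = 2 ** (12 / 18) - 1
for year in (0, 2, 4, 6):
    print(f"year {year}: 1,000,000 nominal PPD = "
          f"{real_points(1_000_000, growth, year):9,.0f} baseline-year PPD")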

The main downsides I can imagine are that people won't like their PPD production on a particular machine decreasing as time goes by, and that this could make it difficult for people testing new projects to accurately determine whether a new project's PPD is in line with other projects (because you would have to compare them with current PPD values). The first is only a conceptual issue of getting people to understand the abstract shift. The second is something that I think we could live with.

The upside, on the other hand, is that there will never come a point in time where a computer that is 4.5 times faster than a mainstream machine earns 1,000,000 more PPD. I think, in the scheme of things, this is worthwhile.

I have purposely not given any hard figures, because I think it's something that should be 'discussed' first, and I am not in a position to suggest what rate of adjustment is required.
Last edited by k1wi on Fri Mar 16, 2012 10:04 pm, edited 1 time in total.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

As I see it, the main issue with the PPD system is that we are fast approaching the hockey-stick end of the curve, where small improvements in speed result in massive PPD increases. We were, in fact, heading towards that hockey-stick region long before the Bonus Points system came into effect, because computers are increasing in power so rapidly. What the BP system has done is exacerbate, or speed up, the approach to this part of the curve.
This doesn't make sense. If computers were already getting more powerful, then the increases, even if they follow a hockey stick, are grounded solidly in more science being done. That is what point values are based on: scientific contribution.

Let's tackle this first. As I posted in the other thread, I'm against considering points as anything other than being tied to the scientific value of contributions. A 10-year member who folded a single uniprocessor client all those years should not complain that a bigadv folder passes him in points in a very short time. Points are there as an incentive, and have the benefit of the competitive element, but they are not meant to increase participation because of this competitive element; they are meant to show how much a person has contributed to the scientific goals of this project.

So if computer speeds are rising exponentially, points should follow suit. This is by design, not because of a wrong conception of the points system.
The upside, on the other hand, is that there will never come a point in time where a computer that is 4.5 times faster than a mainstream machine earns 1,000,000 more PPD. I think, in the scheme of things, this is worthwhile.
Instead of hard figures, you exaggerated the issue greatly :oops:

I'm fine with discussing anything without hard numbers, as philosophical ideas are the start of getting to those hard numbers, but then also don't try to sway the opinion of people who don't know better with numbers like those, which seem meant to show what is, in your opinion, a big discrepancy between how it 'should work' and how it currently works.
I was looking at the graph, and what I noticed is that the reason we haven't had a problem so far is that we've slowly been working our way along the relatively flat portion of the curve. My thought then became: how do we keep ourselves on the relatively flat portion of the graph and avoid ramping up the curve rapidly?
By changing the k-factor and deadline values of a project you can control where the hockey-stick effect takes place; by taking into account the spread in speed of the machines that will be assigned a certain project, you can predict and therefore adjust this point.
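As a rough illustration of what I mean (assuming the published QRB formula; the k and deadline numbers here are made up), the bonus multiplier a machine sees at a given completion time depends entirely on where k and the deadline put the curve:

Code:

import math

def bonus(k, deadline_days, completion_days):
    # QRB multiplier on the base credit; 1.0 means no bonus.
    return max(1.0, math.sqrt(k * deadline_days / completion_days))

# Two hypothetical (k, deadline) choices for the same spread of machines,
# evaluated at 2.0, 1.0, 0.5 and 0.1 days per WU.
for k, deadline in ((0.75, 3.0), (5.0, 6.0)):
    row = [round(bonus(k, deadline, d), 1) for d in (2.0, 1.0, 0.5, 0.1)]
    print(f"k={k}, deadline={deadline} days -> multipliers {row}")

Pick k and the deadline so the steep part sits beyond the fastest machines you expect to assign, and the hockey stick stays out of reach for that project.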

This is now done through feedback from us; maybe when the clients/servers have more detailed statistics logging, project owners can set this up based on previous projects.
As I see it, the easiest way to keep 'PPD inflation' from going nuclear is to adjust the benchmark machine to the rate of CPU improvement. At a defined interval (annually is probably too infrequent, but monthly may be too frequent), adjust the benchmark machine's PPD to reflect the improvements in computing power. My thinking behind this is that we already benchmark all WUs relative to a single machine; the only issue is that this benchmark, while being run on an i5, is still based on a Pentium 4. Therefore, I ask: is it too much of a stretch to adjust the benchmark to suit a contemporary computer? Consider it benchmarking to a current-day Pentium 4 equivalent. Another way of thinking about it is ensuring that PPD is constant on the current 'average' computer. I.e. if the Pentium 4 benchmark machine earned 100 PPD in its heyday, then the current-day equivalent, let's say an i5, should currently earn 100 PPD.
That would be devaluing the current scientific contributions! See the previous paragraphs as well. Let me add, though, on a really personal note: hell no :!:

Edit:

Let me be more specific ->
I.e. if the Pentium 4 benchmark machine earned 100 PPD in its heyday, then the current-day equivalent, let's say an i5, should currently earn 100 PPD.
If the i5 were just as quick, and would produce the same scientific results in the same period, only then should it earn the same PPD.

If you want to cut the tie between science and points, just be blunt about it and say out loud that you want people to be credited for participation over actual scientific contribution (and I'm not saying that would be bad in all respects, even if I think the drawbacks are bigger than the advantages you're describing). If you do, would you be so kind as to list all the advantages it would have, in your opinion, so it will be easier to understand why you think it's a better option overall?
Last edited by MtM on Fri Mar 16, 2012 1:13 am, edited 1 time in total.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

MtM wrote:
As I see it, the main issue with the PPD system is that we are fast approaching the hockey-stick end of the curve, where small improvements in speed result in massive PPD increases. We were, in fact, heading towards that hockey-stick region long before the Bonus Points system came into effect, because computers are increasing in power so rapidly. What the BP system has done is exacerbate, or speed up, the approach to this part of the curve.
This doesn't make sense. If computers were already getting more powerful, then the increases, even if they follow a hockey stick, are grounded solidly in more science being done. That is what point values are based on: scientific contribution.

Let's tackle this first. As I posted in the other thread, I'm against considering points as anything other than being tied to the scientific value of contributions. A 10-year member who folded a single uniprocessor client all those years should not complain that a bigadv folder passes him in points in a very short time. Points are there as an incentive, and have the benefit of the competitive element, but they are not meant to increase participation because of this competitive element; they are meant to show how much a person has contributed to the scientific goals of this project.

So if computer speeds are rising exponentially, points should follow suit. This is by design, not because of a wrong conception of the points system.
The upside, on the other hand, is that there will never come a point in time where a computer that is 4.5 times faster than a mainstream machine earns 1,000,000 more PPD. I think, in the scheme of things, this is worthwhile.
Instead of hard figures, you exaggerated the issue greatly :oops:

I'm fine with discussing anything without hard numbers, as philosophical ideas are the start of getting to those hard numbers, but then also don't try to sway the opinion of people who don't know better.
I was looking at the graph, and what I noticed is that the reason we haven't had a problem so far is that we've slowly been working our way along the relatively flat portion of the curve. My thought then became: how do we keep ourselves on the relatively flat portion of the graph and avoid ramping up the curve rapidly?
By changing the k-factor and deadline values of a project you can control where the hockey-stick effect takes place; by taking into account the spread in speed of the machines that will be assigned a certain project, you can predict and therefore adjust this point.

This is now done through feedback from us; maybe when the clients/servers have more detailed statistics logging, project owners can set this up based on previous projects.
As I see it, the easiest way to keep 'PPD inflation' from going nuclear is to adjust the benchmark machine to the rate of CPU improvement. At a defined interval (annually is probably too infrequent, but monthly may be too frequent), adjust the benchmark machine's PPD to reflect the improvements in computing power. My thinking behind this is that we already benchmark all WUs relative to a single machine; the only issue is that this benchmark, while being run on an i5, is still based on a Pentium 4. Therefore, I ask: is it too much of a stretch to adjust the benchmark to suit a contemporary computer? Consider it benchmarking to a current-day Pentium 4 equivalent. Another way of thinking about it is ensuring that PPD is constant on the current 'average' computer. I.e. if the Pentium 4 benchmark machine earned 100 PPD in its heyday, then the current-day equivalent, let's say an i5, should currently earn 100 PPD.
That would be devaluing the current scientific contributions! See the previous paragraph. Let me add, though, on a really personal note: hell no :!:
I think you fall under 'does not understand the conceptual issue'. I also take issue with the attitude in which you write your reply.

For one thing, it is simple maths that eventually, perhaps not today, but easily within a couple of years, a computer 4.5 times more powerful than a mainstream computer will earn a million PPD more. It's a simple exponential truth. Already a computer 4.5 times more powerful will earn hundreds of thousands of PPD more; give it a couple of years and 1M PPD will easily be surpassed.
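A quick back-of-the-envelope version of that claim: under the QRB, PPD scales roughly as speed^1.5, because WUs per day scale with speed and the per-WU bonus scales with the square root of speed. The mainstream PPD figures below are invented, but the arithmetic is the point:

Code:

speed_ratio = 4.5
ppd_ratio = speed_ratio ** 1.5   # ~9.55x the PPD of the mainstream machine

for mainstream_ppd in (10_000, 50_000, 120_000):
    gap = mainstream_ppd * (ppd_ratio - 1)
    print(f"mainstream at {mainstream_ppd:>7,} PPD -> "
          f"the 4.5x machine is {gap:>10,.0f} PPD ahead")

Once a mainstream machine earns somewhere around 120,000 PPD, the absolute gap passes the million mark.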

I'm sorry, but adjusting the K factor will not prevent the hockey-stick effect from happening. For that to happen, we'd have to adjust the K factor on ALL of the projects, continuously - which, in effect, is the exact same outcome as I am hypothesising.

You argue that points should always increase exponentially. Why do you believe that? If you believe that, then you believe that one day, in the not too distant future, people with even average computers should earn trillions of points per day, because with points increasing exponentially that is what is going to happen. Do we then at some point drop the last x zeros from everyone's score? In effect that's doing exactly the same thing as I suggest.

To push the economic equivalent: long-term inflation is usually approximately 3% (sure, in the 70s and 80s it was much higher in most western economies). To bastardise Moore's Law, if computers double in power every 18 months, then we've got a rate of inflation of over 50%. Even at 3%, prices increase pretty rapidly over the long term - 1 dollar in 1984 money was worth a lot more than 1 dollar is worth now; imagine what the difference would be if we had inflation of over 50% year on year...
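Putting rough numbers on that comparison (treating 'doubling every 18 months' as the growth rate; purely illustrative):

Code:

# Doubling every 18 months is an annual rate of 2**(12/18) - 1, about 58.7%.
moore_rate = 2 ** (12 / 18) - 1

for rate, label in ((0.03, "3% price inflation"), (moore_rate, "computing power")):
    factor = (1 + rate) ** 10
    print(f"{label:>18}: x{factor:,.1f} over 10 years")

Roughly 1.3x versus roughly 100x over a decade - that is the scale of the 'points inflation' we are dealing with.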

In economic terms, my suggestion is to move towards measuring in real dollars rather than nominal dollars. It does not 'devalue' current production, and it certainly encourages people to continue investing in new hardware. If you spend $1000 on a computer today, why shouldn't it earn the same number of points today as a $1000 computer did in 1996? Under my proposal, if the 1996 computer were still running today, it wouldn't be earning anywhere near the same points as today's $1000 computer (in fact it would largely be in line with its actual relative processing power).

I appreciate your input to the forums, but I am really surprised at your response to this post. It seems rather myopic.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

You asked for hard figures... Here is a graph I prepared earlier. The data below are real-world figures for Project 8004, using the PPD calculator. If computers double in performance every 18 months, then every 18 months we should move half the distance along the X axis toward the Y axis. Because of the exponential nature of the points system, the closer we get to the Y axis the steeper the curve gets, even when the Y axis is on a log scale. As to where we are on that graph - well, my i7 does it in roughly two-thirds of a minute, which is the fourth-highest data point. A computer 4x as powerful represents the highest data point. Already that difference is in the hundreds of thousands of points.

[Image: PPD vs. TPF curve for Project 8004 (log Y axis)]

Because PPD is benchmarked to a single machine, and PPD should be even across all projects, it doesn't matter if new projects get released and K factors are adjusted: as computers get faster and faster, the points difference between computers grows with them. Adjusting the K factor will ONLY have an effect if we adjust the K factor on either all work units, or continuously as new projects are released. In effect, that is the same as my proposal, just under a different guise.

The reason I think my proposal is optimal is that it maintains the difference between different levels of current computing, but keeps the differences in the 10s, 100s, 1,000s and 10,000s, rather than the millions or trillions. The reason this is a good thing is that a difference of millions of PPD looks a lot worse than a difference of hundreds of PPD, even if the relative difference is actually the same.
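To illustrate with invented numbers: take two machines whose speeds stay in a fixed 4.5:1 ratio, and let the mainstream machine's PPD double every 18 months as hardware turns over. The relative difference never changes, but the nominal gap explodes, while a deflated ('real points') gap stays put:

Code:

ratio = 4.5 ** 1.5          # PPD ratio between the two machines under the QRB
slow_ppd = 1_000            # invented starting PPD of the mainstream machine

for months in range(0, 73, 18):
    nominal_gap = slow_ppd * (ratio - 1)
    deflator = 2 ** (months / 18)       # computing-power growth since month 0
    real_gap = nominal_gap / deflator   # gap expressed in month-0 points
    print(f"month {months:2d}: nominal gap {nominal_gap:10,.0f}, "
          f"deflated gap {real_gap:8,.0f}")
    slow_ppd *= 2            # 18 months later, mainstream PPD has doubled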

I happen to think it will encourage people to keep competing... Why? Because under the current system, the best way to compete is to delay and delay and delay - if you buy a computer today and fold, fold, fold, in a year's time those points will be minuscule compared with the points you could be earning if you'd delayed your purchase. Therefore, my proposal is a better system, because under the current system there is a disadvantage to purchasing computers regularly. It is more beneficial to save up your money and delay your purchase indefinitely.
Last edited by k1wi on Fri Mar 16, 2012 1:55 am, edited 1 time in total.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

k1wi wrote:
MtM wrote:
As I see it, the main issue with the PPD system is that we are fast approaching the hockey-stick end of the curve, where small improvements in speed result in massive PPD increases. We were, in fact, heading towards that hockey-stick region long before the Bonus Points system came into effect, because computers are increasing in power so rapidly. What the BP system has done is exacerbate, or speed up, the approach to this part of the curve.
This doesn't make sense. If computers were already getting more powerful, then the increases, even if they follow a hockey stick, are grounded solidly in more science being done. That is what point values are based on: scientific contribution.

Let's tackle this first. As I posted in the other thread, I'm against considering points as anything other than being tied to the scientific value of contributions. A 10-year member who folded a single uniprocessor client all those years should not complain that a bigadv folder passes him in points in a very short time. Points are there as an incentive, and have the benefit of the competitive element, but they are not meant to increase participation because of this competitive element; they are meant to show how much a person has contributed to the scientific goals of this project.

So if computer speeds are rising exponentially, points should follow suit. This is by design, not because of a wrong conception of the points system.
The upside, on the other hand, is that there will never come a point in time where a computer that is 4.5 times faster than a mainstream machine earns 1,000,000 more PPD. I think, in the scheme of things, this is worthwhile.
Instead of hard figures, you exaggerated the issue greatly :oops:

I'm fine with discussing anything without hard numbers, as philosophical ideas are the start of getting to those hard numbers, but then also don't try to sway the opinion of people who don't know better.
I was looking at the graph, and what I noticed is that the reason we haven't had a problem so far is that we've slowly been working our way along the relatively flat portion of the curve. My thought then became: how do we keep ourselves on the relatively flat portion of the graph and avoid ramping up the curve rapidly?
By changing the k-factor and deadline values of a project you can control where the hockey-stick effect takes place; by taking into account the spread in speed of the machines that will be assigned a certain project, you can predict and therefore adjust this point.

This is now done through feedback from us; maybe when the clients/servers have more detailed statistics logging, project owners can set this up based on previous projects.
As I see it, the easiest way to keep 'PPD inflation' from going nuclear is to adjust the benchmark machine to the rate of CPU improvement. At a defined interval (annually is probably too infrequent, but monthly may be too frequent), adjust the benchmark machine's PPD to reflect the improvements in computing power. My thinking behind this is that we already benchmark all WUs relative to a single machine; the only issue is that this benchmark, while being run on an i5, is still based on a Pentium 4. Therefore, I ask: is it too much of a stretch to adjust the benchmark to suit a contemporary computer? Consider it benchmarking to a current-day Pentium 4 equivalent. Another way of thinking about it is ensuring that PPD is constant on the current 'average' computer. I.e. if the Pentium 4 benchmark machine earned 100 PPD in its heyday, then the current-day equivalent, let's say an i5, should currently earn 100 PPD.
That would be devaluing the current scientific contributions! See the previous paragraph. Let me add, though, on a really personal note: hell no :!:
I think you fall under 'does not understand the conceptual issue'. I also take issue with the attitude in which you write your reply.

For one thing, it is simple maths that eventually, perhaps not today, but easily within a couple of years, a computer 4.5 times more powerful than a mainstream computer will earn a million PPD more. It's a simple exponential truth. Already a computer 4.5 times more powerful will earn hundreds of thousands of PPD more; give it a couple of years and 1M PPD will easily be surpassed.
What's the argument against using factors in your favourite PPD listing/statistics site?

I'm sorry, but adjusting the K factor will not prevent the hockey-stick effect from happening. For that to happen, we'd have to adjust the K factor on ALL of the projects, continuously - which, in effect, is the exact same outcome as I am hypothesising.
That comment was directed at this statement ->
The upside, on the other hand, is that there will never come a point in time where a computer that is 4.5 times faster than a mainstream machine earns 1,000,000 more PPD. I think, in the scheme of things, this is worthwhile.
You argue that points should always increase exponentially. Why do you believe that? If you believe that, then you believe that one day, in the not too distant future, people with even average computers should earn trillions of points per day, because with points increasing exponentially that is what is going to happen. Do we then at some point drop the last x zeros from everyone's score? In effect that's doing exactly the same thing as I suggest.
I believe that because it gives a clearer indication of points being tied to the scientific value of contributed computing time. And as I said, yes, that would mean we would have to start using factors in the stats pretty soon, and no, I don't really like that either, but I feel it's fairer than just cutting the tie between scientific worth and actual contributions.
To push the economic equivalent: long-term inflation is usually approximately 3% (sure, in the 70s and 80s it was much higher in most western economies). To bastardise Moore's Law, if computers double in power every 18 months, then we've got a rate of inflation of over 50%. Even at 3%, prices increase pretty rapidly over the long term - 1 dollar in 1984 money was worth a lot more than 1 dollar is worth now; imagine what the difference would be if we had inflation of over 50% year on year...
I agree that the increments in computational power are causing this whole problem, but I don't agree with just cutting the tie between it and the points awarded previously.
In economic terms, my suggestion is to move towards measuring in real dollars rather than nominal dollars. It does not 'devalue' current production, and it certainly encourages people to continue investing in new hardware. If you spend $1000 on a computer today, why shouldn't it earn the same number of points today as a $1000 computer did in 1996? Under my proposal, if the 1996 computer were still running today, it wouldn't be earning anywhere near the same points as today's $1000 computer (in fact it would largely be in line with its actual relative processing power).
Because it's not producing the same amount of science as the computer bought for the same money in 1996 did.
I appreciate your input to the forums, but I am really surprised at your response to this post. It seems rather myopic.
It's not short-sighted at all. As I said, any number can be shown with a factor; it won't be pretty, perhaps, but what you see as an obstacle or a problem that has to be fixed is something I actually value in this project over any other: the clear tie between points and the scientific value attributed to donations.

Sorry if you took offence at the end of my last paragraph; as you might have understood from this post as well, I made partial responses based on a part of your post which made it seem like you're trying to devalue the QRB concept.

Part of your post also seemed to imply you were advocating a change in the points formula because some systems running certain projects are so close to the edge of the curve that completing the project half an hour quicker is capable of boosting the total value by an amount you don't see as valid.

In short: I'd rather be able to tell straight away, based on points, how much of an impact a certain account has had on the scientific progression of this project, than need to look at their yearly points averages and check those against the actual increments in computational power.

This can change if you can provide hard numbers for this claim, which imho will be hard since you don't know what the progression of computational power and the corresponding price development will look like over x years.
in fact it would largely be in line with its actual relative processing power
It would be impossible to change without having the above confirmed, and even if confirmed, it would mean making a new tie between a donor's investment and points, instead of between a donor's scientific contributions and points.

I see the advantage you're looking for: you make it safer for people to invest in hardware, as they know in advance what they will get for it. I support that advantage. I don't like the concept, because I value the existing tie between science and points more than I would dislike having to use factors in statistics in the future - unless you can back up your claim that your idea will still accurately (yes, that's more than 'largely in line'...) reflect the scientific contributions made.

I see problems as well, because we don't have 'a single 1000 dollar computer' but many variants, not only between vendor-specific components but also between choosing a more powerful CPU over a better GPU or vice versa. Those choices can greatly influence the computational output of a system without changing the price.

Edit: I'm reading the post you made just above and will comment on it when done.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

k1wi wrote: You asked for hard figures... Here is a graph I prepared earlier. The data below are real-world figures for Project 8004, using the PPD calculator. If computers double in performance every 18 months, then every 18 months we should move half the distance along the X axis toward the Y axis. Because of the exponential nature of the points system, the closer we get to the Y axis the steeper the curve gets, even when the Y axis is on a log scale. As to where we are on that graph - well, my i7 does it in roughly two-thirds of a minute, which is the fourth-highest data point. A computer 4x as powerful represents the highest data point. Already that difference is in the hundreds of thousands of points.

[Image: PPD vs. TPF curve for Project 8004 (log Y axis)]

Because PPD is benchmarked to a single machine, and PPD should be even across all projects, it doesn't matter if new projects get released and K factors are adjusted: as computers get faster and faster, the points difference between computers grows with them. Adjusting the K factor will ONLY have an effect if we adjust the K factor on either all work units, or continuously as new projects are released. In effect, that is the same as my proposal, just under a different guise.

The reason I think my proposal is optimal is that it maintains the difference between different levels of current computing, but keeps the differences in the 10s, 100s, 1,000s and 10,000s, rather than the millions or trillions. The reason this is a good thing is that a difference of millions of PPD looks a lot worse than a difference of hundreds of PPD, even if the relative difference is actually the same.
What's the average run time for a single project? How many tick-tocks will happen in its lifetime? I think not too many, which removes the need to adjust a single project to changing hardware conditions, as I think the speed increments will be small.

When a project is introduced now, there is often feedback about the point value, and this leads to adjustments. I think the feedback comes because projects are not set up right away with the correct spread in speed for the machines the project will get assigned to, and I think this can be improved on by using settings/results from previous projects with the same assignment settings. The relevant part is that the k-factor and deadline need to be determined correctly at project creation, and this already takes current processing power into account.

I can't see how your proposal fixes an existing problem because I don't see the actual problem.
I happen to think it will encourage people to keep competing... Why? Because under the current system, the best way to compete is to delay and delay and delay - if you buy a computer today and fold, fold, fold, in a year's time those points will be minuscule compared with the points you could be earning if you'd delayed your purchase. Therefore, my proposal is a better system, because under the current system there is a disadvantage to purchasing computers regularly. It is more beneficial to save up your money and delay your purchase indefinitely.
The same thing happens with buying a car: I can buy one now and have a clunker, or save and have a decent one, or save longer and have a fancy one, or save the rest of my life until retirement and buy a classic Ferrari. But people who want a car don't keep saving for the last option, and people who need a car now will still buy the clunker. I chose to buy a used GTX 275 as my latest folding addition; it produces less at times than my Q6600, but it was a 50-euro investment, which made it the most economical option available at the time. I could have kept the 50 euros so I would now have 50 euros more towards getting something Kepler-based when it comes out, and that might have paid off even though GPUs don't have the QRB.

It has always paid off to save before buying new hardware; the QRB is increasing the payoff, as it's intended to do. Again, not a conceptual failure but a design feature, in my opinion.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

It's actually nothing to do with the QRB; as I alluded to in my first post, even without the QRB we would still have the same issue. Perhaps my thread title should avoid the word bonus.

The issue is that PPD in one project should equal PPD in another project. Therefore, if PPDa = PPDb, then the curve for project a, project b and project c should be the same, regardless of the point in time at which they are released/finished. Therefore, the curve for PPDa (the one I give above) should hold true regardless of whether the project is still running or not. In other words, there should only be one points curve (although a second exists for bigadv, with a premium on it). Or do you disagree with that?

If we were to adjust the K factor of a project, then PPDb would not equal PPDa, which would absolutely destroy the argument that 1 point = 1 constant measure of scientific value. Basically, what would happen is that every new project would be less rewarding than the previous project. In other words, that would be exactly the same outcome, but without maintaining the PPDa = PPDb framework.

My argument is that we should measure in real dollars, not in nominal dollars. Just as we have developed methods of measuring real dollars, I see no reason why we cannot develop methods of measuring 'real points'.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

http://en.wikipedia.org/wiki/Wheat_and_ ... rd_problem is basically what folding is up against.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

Since when do projects have to be equal in point value?

Different projects use different simulations; they can't all be valued the same. So no, I can't agree with having one points curve.

And the PPD of a project can and should change if the hardware running it is getting quicker; the whole nature of awarding points is directly tied to the time needed to process the work units, as is evident from the QRB's existence.

Changing the k-factor of a project while it is running would invalidate the tie between the scientific value of contributions made to that project in the past and the same tie in the future, but not against other projects, for the reason above. There is no direct tie between different projects (excluding clear project ranges where projects are in essence duplicates with only small differences), so why should there be a tie between their scientific values?

I'll repeat what I said above: I support your aim of making it safer to purchase hardware, but unless you can show that you can come up with a way to put a purchase-price-based value on a project which a) is consistent across alternate system configurations at the same price, and b) ensures that actual scientific value is still based on computational donations, I don't see it as anything better than the current system. In fact, the current system's only obvious flaw is that we'll have to use factors when viewing PPD/total credit in the future; in your conceptual system I see many things which I would consider flaws.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

k1wi wrote:http://en.wikipedia.org/wiki/Wheat_and_ ... rd_problem is basically what folding is up against.
That's clearly true and not being disputed. I'm just saying that it's not that big of a problem if you're willing to implement a factorized ppd output and credit count.

Edit:

So as not to have only critical remarks about your dollar-value suggestion, I'll make a suggestion of my own.

You can keep the PPD comparable with previous years and still allow people like me to gather enough information to find the actual processing power if you do two things: keep all historical projects forever, and benchmark them again with every change to the benchmark machine, publishing the speedup factor compared with the previous results. Then use the speedup factor in reverse to set the base credit for new projects.

This results in a base credit which will only increase if a project has a higher scientific value (like how some GPU work units are now worth 358 points and some 512 points), and will still allow people to look up the computational power needed to get that PPD.
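A small sketch of how that could work in practice; the project names, frame times and the choice of the median are all my own invented illustration, not how PG actually does its benchmarking:

Code:

# Frame times (minutes) for a few reference projects, re-run on the old and
# the new benchmark machine. All numbers are made up.
old_times = {"projA": 120.0, "projB": 95.0, "projC": 40.0}
new_times = {"projA": 61.0, "projB": 50.0, "projC": 21.0}

speedups = sorted(old_times[p] / new_times[p] for p in old_times)
speedup = speedups[len(speedups) // 2]   # median speedup of the upgrade

def adjusted_base_credit(raw_base_credit):
    # Apply the published speedup factor in reverse, so a typical machine's
    # PPD stays comparable across the benchmark upgrade.
    return raw_base_credit / speedup

print(f"median speedup: {speedup:.2f}")
print(f"1000 raw base points -> {adjusted_base_credit(1000):.0f} adjusted")

Anyone who wants the underlying computational picture can multiply back out using the published speedup factors.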

This is a variation on a benchmark proposal I read from someone else on the forum; it's not something I came up with myself (though I'm not 100% sure I'm reflecting it accurately here, I might have given it a personal twist).

The drawback is that PG would have to be willing to benchmark all previous projects on every upgrade to get the proper speedup factor, and I think that was already said not to be feasible.

I really don't think you can use anything other than a benchmark of some kind. Maybe they could come up with a benchmark, separate from the projects, which would give the speedup factor for common fahcore functions; then there wouldn't be an increased workload on every benchmark machine replacement.
Last edited by MtM on Fri Mar 16, 2012 2:57 am, edited 1 time in total.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

I am trying to think how I can spell this out more clearly.

Computer A should earn the same PPD for project A as it earns for project B. I'm talking PPD not Points Per WU. K factors are adjusted so that this is the case and the PPD of one project is the same as that of another.

Therefore, the SAME points curve should apply to all work units - the curve that I illustrated should apply to ALL work units; the only thing that changes between WUs is the TPF along the x axis. Regardless of the WU, my i7 should stay at its point on the curve, and newer, faster computers should move ever further up the curve.

The issue that FAH faces is that every 18 months we move further along the x axis towards the Y axis; therefore the nominal values all get bigger and bigger, and people see 1,000,000-point differences instead of, say, 100- or 1,000-point differences. That is a HUGE psychological barrier to people joining the project. It is purely a psychological barrier, but one that needs to be addressed.

Your argument seems to rest on "we cannot determine the level of adjustment to apply". Rather than me suggesting specific methods for calculating an accurate adjustment, you could ask: "How do we determine what level of adjustment to apply on a regular basis to avoid the issue we are having?" Unfortunately, you seem to forget that the Pande Group has a huge amount of data at its fingertips - in the same way that the Federal Reserve has a huge amount of data at its fingertips when it sets interest rates.

Therefore, in the interest of moving beyond this futile discussion of real vs. nominal points, I will give a suggestion that I have been mulling over: median or mean PPD. Standardise one month to the next using the median or mean PPD (depending on what is happening in the computer world). Compare month 1 to month 2, and note what increase (if any) there is in PPD. If the increase in a given month is 4%, then adjust the PPD of every subsequent WU to reflect that value. Make it as regular as the Federal Reserve does. Hell, it doesn't have to be limited to mean PPD or median PPD; it could reflect both, or a basket of statistics, all of which are quantifiable and can be pointed to if need be.
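A toy version of that month-to-month standardisation, with invented figures (and using the mean purely for illustration):

Code:

# Observed mean PPD across active donors, month by month (invented numbers).
monthly_mean_ppd = [10_000, 10_400, 10_900, 11_600]

credit_multiplier = 1.0
for prev, cur in zip(monthly_mean_ppd, monthly_mean_ppd[1:]):
    growth = cur / prev              # e.g. 1.04 for a 4% monthly rise
    credit_multiplier /= growth      # deflate the next month's WU credit
    print(f"mean PPD grew {100 * (growth - 1):4.1f}% -> "
          f"WU credit multiplier now {credit_multiplier:.3f}")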
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

I edited the post above yours with a counter proposal which does just what you are asking.
Last edited by MtM on Fri Mar 16, 2012 3:03 am, edited 1 time in total.
Jesse_V
Site Moderator
Posts: 2851
Joined: Mon Jul 18, 2011 4:44 am
Hardware configuration: OS: Windows 10, Kubuntu 19.04
CPU: i7-6700k
GPU: GTX 970, GTX 1080 TI
RAM: 24 GB DDR4
Location: Western Washington

Re: PPD Bonus Scheme

Post by Jesse_V »

If I may interject, I'm not sure such a change to the point system would be implemented. Maybe it would stabilize things in the long run, but wouldn't it be extremely disruptive at the changeover? I see your point. Apparently there are people on this forum who folded back when "1 WU == 1 point". Clearly that's not the case now. While I see how there is point inflation and points are more easily gained, I'm also a fan of tying points to scientific production. IIRC, there are some DC projects out there that give credit depending on how long it took the other computers in the quorum to complete the same work. If I understand correctly, that's basically what you're proposing, except that F@h normally only does work once. That would normalize things, but IMO it would move away from the relationship with scientific value. Computers are gaining speed and efficiency in accordance with Moore's law, and I think it's better that points production rises with them. We're most likely handling far more demanding work than 10 years ago; we should likewise get lots more PPD. I understand your "delay delay delay" point, and theoretically I could see that being a problem, but I don't know if it realistically is. I personally would doubt it.
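(For what it's worth, here is roughly how I understand that quorum-style crediting to work - this is not F@h's scheme, the numbers are invented, and I may be misremembering the details: each replica of a WU claims credit based on its own runtime and benchmark, and the granted credit is a robust combination of the claims, e.g. the median.)

Code:

def granted_credit(claimed_credits):
    # Use the median of the credits claimed by the quorum, so one machine
    # over- or under-claiming doesn't skew the grant.
    claims = sorted(claimed_credits)
    mid = len(claims) // 2
    if len(claims) % 2:
        return claims[mid]
    return (claims[mid - 1] + claims[mid]) / 2

# Three replicas of the same WU claim different credit; the outlier is ignored.
print(granted_credit([295.0, 310.0, 820.0]))   # -> 310.0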

I'm all for PPD being normalized across the board. There are some projects that I hate getting because my PPD drops. If we had to apply your proposal, then I would choose a mean PPD standardization scheme over the median. I think over time the adjustment would be smoother and not so jumpy.

I'm really glad that you have a well-thought-out proposal. IMO, there are too few of them on this forum. However, I think there are some practicality issues with it. Even if you upgraded the benchmark machine every six months or so, there would still be relative differences between the architecture of that machine and other people's hardware. AFAIK, there have only been a few upgrades of the benchmark machine, so those variance issues aren't much of a problem at the moment. I think they would be under your idea.
F@h is now the top computing platform on the planet and nothing unites people like a dedicated fight against a common enemy. This virus affects all of us. Let's end it together.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: PPD Bonus Scheme

Post by k1wi »

Project A needs to give the same PPD as Project B for the very reason that Jesse_V states above.

Points per WU between different Projects can change - I have no issue with that, but PPD between different Projects should not change.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: PPD Bonus Scheme

Post by MtM »

Jesse said this:
I'm all for PPD being normalized across the board. There are some projects that I hate getting because my PPD drops. If we had to apply your proposal, then I would choose a mean PPD standardization scheme over the median. I think over time the adjustment would be smoother and not so jumpy.
This is why I don't agree with that:


This would eliminate the swing in ppd, but I don't think it is fair.

You can't predict which projects you'll be running. There is never an even spread; sometimes you run a lot of project X and sometimes a lot of project Y. If project Y is worth 15k PPD and project X 10k PPD, you can't say 'let's use 12.5k PPD for all projects', since you have no idea which project you'll be running when, and in what quantities.

How would this work with A4 cores? Will you average them for a uniprocessor or an smp:6 (maybe 6 reported cores will be the average then)?

Do you want to throw all projects from a certain fahcore in a big bag, mix them and get a mean/median value, or do you want to include all fahcores which can run on the same hardware and all their projects? Or an average of all projects in total...

The more you say yes to the above questions, the less fair it will be, because of what I said above: you can't predict which projects you will eventually be running. Take into account that not all projects cause the same amount of load, and you're getting into even deeper water.
Points per WU between different Projects can change - I have no issue with that, but PPD between different Projects should not change.
PPW within the same project is based on processing speed and will be higher if you have faster hardware. PPD between different projects can't be normalized, for the reason outlined above. Also, if you only allow PPW differences between different projects, and still want to make each project equal in PPD IN ALL CASES, you're basically saying to drop computational power as a factor again (since hardware might be better suited to one kind of simulation than another, or the software could be more optimized).