Suggested Change to the PPD System

Moderators: Site Moderators, FAHC Science Team

ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

Benchmark the regular SMP WUs on 4 or 8 cores of the new MP benchmark machine and set the value such that, when run on the current i5 benchmark machine, it produces the same PPD as today. The FAQ has a good description of the normalization process. Run that SMP WU on 16 or 24 cores of the new benchmark machine to get baseline production and set the value of BA work to whatever premium PG decides it's worth. The FAQ states 50% more than SMP work. My opinion is that, when combined with the QRB on 4P machines, that is too generous, but it's less than the 150% bonus we see on p6903 and p6904. Benchmark on more cores as the machine becomes dated, and do it all over when it becomes obsolete.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

ChasR:

When you note that people will experience decreasing PPD over time (if they do not add new hardware), you highlight the difficulty with the proposal; but in effect, those who consider themselves the worst off are actually, in the long run, better off.

ALL hardware will earn less PPD over time, but that is no different from now. Currently "old hardware" maintains the same level of PPD over time; it's just that newer machines are earning exponentially more points. Over time, when I compare the PPD my machine gets with other users', other users are on the whole earning ever more points than my machine.

At some point adjustments will need to be made, either to how easy it is to earn new points, or by altering the total accumulated points (or both); otherwise points will keep increasing until they are measured in trillions. Dividing by a fixed factor only solves the problem in the short term, because computational power keeps doubling roughly every 18 months, so the divisor needed to hold the scale steady itself doubles every 18 months and compounds from there.
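
To make that compounding concrete, here is a minimal sketch (illustrative numbers only; the starting PPD and the divisor are made up) of why a one-off division by a fixed factor cannot keep totals bounded:

Code: Select all

# Illustrative only: assume the top machine's PPD doubles every 18 months.
top_ppd = 100_000            # hypothetical top-machine PPD today
for period in range(10):     # ten 18-month doublings = 15 years
    top_ppd *= 2
print(f"Top PPD after 15 years: {top_ppd:,}")   # 102,400,000

# A one-off division by a fixed factor only resets the scale;
# growth resumes from the new base and doubles again 18 months later.
rescaled = top_ppd / 1_000
print(f"Rescaled: {rescaled:,.0f}")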

The longer we leave adjustments, the larger they need to be to keep us at a point that is 'current' with what PG had when they devised/established the process. People seeing their points drop from 1000 to 100 will probably create a huge uproar; people will say "oh, Stanford does not appreciate my effort, I'm shutting off". If it is made a regular, scheduled process in the folding calendar, two things happen: 1. The adjustments are smaller. 2. People know that an adjustment will be made and when it will be made. When we base the adjustment on accounting for computational improvement, a third thing happens: 3. People will look and say "my machine dropped in PPD by x percent, but the reason for this x percent is that this is the rate at which PG have determined computers are improving, so proportionally I'm no worse off." Furthermore, their previous contribution is not deflated relative to current performance purely because computers have gotten faster (which is what happens now).

Regarding how we benchmark between single processors and multiprocessors: my belief is that the issue here stems from the fact that, under the QRB formula, inflation due to technological improvement occurs twice. Once on the base points, that is, PPD without QRB, and then once again on the speed ratio, because computers are on average getting twice as fast every 18 months, and therefore the speed ratio doubles every 18 months. This can be seen in the fact that PPD with QRB grows exponentially faster than PPD without QRB. Once I control for PPDn/Yn, PPD without QRB becomes constant while PPD with QRB remains curved. Somehow we need to make the speed ratio twice as hard to achieve every 18 months in order to keep things constant. I don't see anything fundamentally wrong with the original formula that PG made, but I think that as time has gone on, technological improvement has made it more controversial because the absolutes are increasing. On the one hand, you have Grandpa saying "Yanno, the present system does not reward MP systems handsomely enough" while you are saying "actually, the present system rewards MP too well".
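
A minimal sketch of that double inflation, assuming the commonly cited QRB formula (final points = base x max(1, sqrt(k x deadline / elapsed))) and made-up project constants: halving the elapsed time, as an 18-month hardware doubling would, doubles PPD without the QRB but multiplies PPD with the QRB by about 2.83.

Code: Select all

import math

def qrb_points(base, k, deadline_days, elapsed_days):
    # Commonly cited QRB: bonus factor = sqrt(k * deadline / elapsed), floored at 1.
    factor = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base * factor

base, k, deadline = 1000, 2.0, 6.0    # made-up project constants

for elapsed in (4.0, 2.0, 1.0):       # each step = hardware twice as fast
    ppd_no_qrb = base / elapsed
    ppd_qrb = qrb_points(base, k, deadline, elapsed) / elapsed
    print(f"elapsed={elapsed} days  PPD w/o QRB={ppd_no_qrb:7.0f}  "
          f"PPD with QRB={ppd_qrb:7.0f}")

# PPD without QRB doubles each step; PPD with QRB grows by 2*sqrt(2) ~ 2.83x,
# because the speed-up enters once through base/elapsed and again through
# the sqrt(1/elapsed) inside the bonus factor.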

MtM - I am not ignoring your post; again, I feel what you are asking requires more time to answer than I currently have available.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: Suggested Change to the PPD System

Post by MtM »

bruce wrote:Suppose you do change the benchmark machine to be more representative of BA hardware. Stanford is still going to need to use the top 10% of the hardware to study some small fraction of the initial simulation of whatever projects they'll be working on next year, which cannot be completed within the lifetime of the scientist. As hardware improves, science finds more challenging simulations to study.

That's not solved by adjusting the points but by defining the "top 10%" as a class of problems that changes over time.
I'm not sure I'm following you. Since BA is already the defining component, you don't want, and shouldn't want, to cut it up into smaller parts. If you did cut it up even more, you would need "big advanced" and "big big advanced", where BA has a 16-core limit and BBA a 48-core limit.

The set of problems is already a component of the BA definition, and tied to the hardware requirements needed to 'solve' them (during a short enough time span).

If I misunderstood, please explain what you meant by treating the top 10% as a separate set of problems.
ChasR wrote:Benchmark the regular SMP WUs on 4 or 8 cores of the new MP benchmark machine and set the value such that, when run on the current i5 benchmark machine, it produces the same PPD as today. The FAQ has a good description of the normalization process. Run that SMP WU on 16 or 24 cores of the new benchmark machine to get baseline production and set the value of BA work to whatever premium PG decides it's worth. The FAQ states 50% more than SMP work. My opinion is that, when combined with the QRB on 4P machines, that is too generous, but it's less than the 150% bonus we see on p6903 and p6904. Benchmark on more cores as the machine becomes dated, and do it all over when it becomes obsolete.
I'm going to be blunt here; I don't mean this as an attack, but I want this perfectly clear: how does your opinion validate anything regarding the point value of work units?

You feel PG makes mistakes setting the scientific value because you can't see how time and work-unit size are worth the point premium, while you have no idea of the possible results the BA program makes obtainable.

Where are you getting your facts that BA is not worth the points it is getting, apart from the percentage of machines falling on the hockey-stick end and the one or two systems which submit a work unit only seconds before passing the deadline?

Furthermore, you're only 'fixing' 4P machines which fall on the hockey-stick part; you're not fixing the exponential increase in computational power we're seeing, which is what I asked your opinion about.

Also, it's clear that an MP setup, even when running a single CPU, is never equal to a mainstream machine. Mainstream now, for instance, is an i5, not an i7. MP systems don't use i5s; they use the top-of-the-line desktop variant with additions. Still, I agree it's the most cost-effective way for PG to benchmark both BA and regular SMP.

Edit: k1wi, find me a post where Grandpa is saying he is not getting enough points. As far as I know, he is only saying that people who question the QRB have some kind of crystal ball and are capable of seeing things that we without one cannot: scientific value. He's saying PG is capable of setting that value; even if the current system has flaws which can be fixed, this does not prove the mean/median value of the QRB points is really flawed.

In general, and not directed at you: PG has the power to set the k-factor as low as they want, to lower the importance of time. They have not done so in many cases, so there is no reason to assume it's needed, or that doing so would make points fall in line with science better.
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

@ k1wi & MtM: if one does exponentially more science, one should get exponentially more PPD. If one does the same science, one should get the same amount, not less. The current QRB steepens the curve way too much. That's how I see it.

@ MtM: As for where I get my facts, I read the FAQ. It says that BA work should be worth 50% more than SMP work; IIRC, that was later revised downward. Using the cache- and bandwidth-bound i5 to determine the premium for BA work is where the problem lies. Running on machines that qualify to do the work, which aren't cache and bandwidth bound, the result is way over the original 50%, or the 30% premium it was reduced to; it is closer to 150% (p6903 and p6904). Re-benchmark on a proper MP machine using the original or revised premium and the value of BA work will be much less. The FAQ is one of the principal reasons I keep insisting the BA values are wrong: the unintended consequence of using an ill-suited benchmark machine.

I have a very simple formula to replace the QRB. It increases the value of the WU linearly with a decrease in time out.

bonus value = base value x (1 + (preferred deadline - time out) x 0.1)

I thought of this when the base value of an a2 BA work unit was 24560 points. It may not overcome scaling losses. There is no huge bonus for beating the preferred deadline by one second, and there is a penalty, increasing with the time out, for missing the preferred deadline.
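
A minimal sketch of that linear alternative, reading "time out" as the actual turnaround time and measuring both it and the preferred deadline in days (the deadline value below is made up; 24560 is the a2 BA base value quoted above):

Code: Select all

def linear_bonus(base_value, preferred_deadline_days, turnaround_days):
    # ChasR's alternative: value rises linearly the earlier a WU is returned,
    # and falls linearly (a penalty) past the preferred deadline.
    return base_value * (1 + (preferred_deadline_days - turnaround_days) * 0.1)

base = 24560        # base value of an a2 BA work unit, per the post above
deadline = 6.0      # hypothetical preferred deadline, in days

for turnaround in (1.0, 3.0, 6.0, 8.0):
    print(f"{turnaround} days -> {linear_bonus(base, deadline, turnaround):,.0f}")

# 1 day  -> 36,840 (1.5x base): early return, modest linear reward
# 3 days -> 31,928 (1.3x base)
# 6 days -> 24,560 (exactly base): preferred deadline met exactly
# 8 days -> 19,648 (0.8x base): the penalty grows the later the return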
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

ChasR - do you disagree that the reason people are doing more science over time is that computers are getting faster?

That is, someone gets a new computer. The reason why they earn, in absolute terms, more points is that between the old computer and the new one, Intel or AMD or NVidia improved their technology.

In effect, what that suggests is that points are getting easier to earn simply because of technological improvement. To me, it makes more sense to account for that improvement, so that people only earn more points over time when they are making a larger proportional contribution to the project.
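
A minimal sketch of that accounting, with a made-up 10% per-period improvement rate: a machine whose raw PPD merely tracks the hardware trend earns flat normalised PPD, and only beats the trend by making a larger proportional contribution.

Code: Select all

def normalised_ppd(raw_ppd, improvement_rate, period_n):
    # k1wi's concept in formula form: PPD_n / Y_n, where Y_n compounds the
    # measured rate of computational improvement up to period n.
    y_n = (1 + improvement_rate) ** period_n
    return raw_ppd / y_n

rate = 0.10   # hypothetical 10% hardware improvement per period

for n in range(4):
    raw = 1000 * (1 + rate) ** n     # raw PPD rising with the hardware trend
    print(f"period {n}: raw {raw:6.0f} -> normalised {normalised_ppd(raw, rate, n):6.0f}")

# period 0: raw 1000 -> normalised 1000
# period 1: raw 1100 -> normalised 1000   (all growth came from hardware)
# period 2: raw 1210 -> normalised 1000   ...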
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: Suggested Change to the PPD System

Post by mdk777 »

One other option to consider is a seniority multiplier.
The objective is to keep people folding for the long term, right?

Pensions used to have vesting schedules that encouraged employee loyalty. (outlawed due to personal liberty and the desire to make changing jobs easier rather than harder in a modern society)

Anyway, a seniority multiplier could be a counter-balancing factor to point inflation.

As an individual returned WUs, his specific seniority multiplier would continuously increase by some factor.

Just an idea to make the entire system even more complicated. :lol: :lol: :mrgreen:
Transparency and Accountability, the necessary foundation of any great endeavor!
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

@K1wi, I agree that points are easier to get and that it's due in large part to improvements in hardware. I don't agree with you as to what to do about it. I think each WU has a scientific value and that value should remain constant. It shouldn't be modified by QRBs or "regular normalization". Points are also easier to get because of points inflation, something I'd like to see minimized. mdk's idea may have merit. :D
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: Suggested Change to the PPD System

Post by MtM »

Your first quote is confusing:
if one does exponentially more science, one should get exponentially more PPD. If one does the same science, one should get the same amount, not less. The current QRB steepens the curve way too much. That's how I see it.
1. If one does exponentially more science, one should get exponentially more PPD.
2. If one does the same science, one should get the same amount, not less.
3. The current QRB steepens the curve way too much.

a. If you say 1, why do you propose a linear formula below? Don't you understand that time is additive to scientific value? See below as well.

b. After normalisation, you could still get the actual scientific value. As I said a couple of times already: it's the only reason, and the only way, I would support normalisation.

c. If you benchmark on a BA system, doesn't that mean you can more easily predict the scaling between a BA 16-core machine and a top-of-the-line machine? Doesn't that mean you can more easily set the deadline and k-factor to values where no machine is going to fall on the steepest end of the curve?

As to the formula:

Can you give me an example of the formula in action?

I'll say in advance that I think it's flawed: no linear formula will prevent people from running two instances with a lower core count if the then-current WU spread makes that more efficient in PPD terms.

The exponential increase is there for a reason, and it's not because PG could not come up with an alternative. If you don't make the bonus exponential, it will not push people to stop running multiple-instance setups in favour of a single instance capable of using all the available resources. Exponentiality is a key part, not optional. You can try to argue it's not, but you won't be able to back it up; see the sketch below.
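
A minimal sketch of that incentive, under stated assumptions: the commonly cited QRB formula, ChasR's linear formula from earlier in the thread, made-up base/k/deadline values, and a whole-machine run that suffers some scaling loss (1.2 days instead of a perfect 1.0):

Code: Select all

import math

BASE, K, DEADLINE = 1000, 2.0, 6.0    # made-up project constants

def qrb_ppd(elapsed_days):
    # Commonly cited QRB: points = base * max(1, sqrt(k * deadline / elapsed))
    return BASE * max(1.0, math.sqrt(K * DEADLINE / elapsed_days)) / elapsed_days

def linear_ppd(elapsed_days):
    # ChasR's linear alternative, reading "time out" as turnaround time
    return BASE * (1 + (DEADLINE - elapsed_days) * 0.1) / elapsed_days

t_half  = 2.0    # days per WU when the machine is split into two instances
t_whole = 1.2    # days per WU on the whole machine (imperfect scaling)

print(f"QRB,    two half-machine instances: {2 * qrb_ppd(t_half):7.0f} PPD")
print(f"QRB,    one whole-machine instance: {qrb_ppd(t_whole):7.0f} PPD")
print(f"linear, two half-machine instances: {2 * linear_ppd(t_half):7.0f} PPD")
print(f"linear, one whole-machine instance: {linear_ppd(t_whole):7.0f} PPD")

# With these numbers the QRB still rewards the single big run (~2635 vs ~2449),
# while the linear bonus makes splitting more profitable (~1400 vs ~1233).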

You claim to read the FAQs; you should have known about the importance of time.

@ MDK,

No, please do not break the tie between science and points. Don't make it a {H} based value.

Edit: ChasR

When you accept exponentiality as an integral part, you will probably come to the conclusions I have: you need to control where you appear on the slope. To do this, you need to benchmark different classes and assign specific work units to each class. If you have this, you control the spread on the slope. For a while, at least, until hardware changes and you need to change the classes.
Last edited by MtM on Mon Mar 19, 2012 9:25 pm, edited 2 times in total.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: Suggested Change to the PPD System

Post by MtM »

mdk777 wrote:One other option to consider is a seniority multiplier.
The objective is to keep people folding for the long term, right?

Pensions used to have vesting schedules that encouraged employee loyalty. (outlawed due to personal liberty and the desire to make changing jobs easier rather than harder in a modern society)

Anyway, a seniority multiplier could be a counter-balancing factor to point inflation.

As an individual returned WU, his specific seniority multiplier would continuously increase by some factor.

Just an idea to make the entire system even more complicated. :lol: :lol: :mrgreen:
Nah, it would just mean a bigger difference.

Increasing a multiplier is the same as increasing the total by a certain factor, no :?:
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

MtM wrote:
k1wi wrote:MtM suggested his proposal:
We make BA earn, instead of 1000 points, 100 points (10%), and SMP, instead of 100 points, 10 (10%).
We use 10% as a start as it's easy to use.

The next round, we use Y, which I described should be based on the computational speed increase. This speed increase should be applied to both trajectories again, so let's assume we have a 10% speedup which has caused BA to be earning 1000 points again, and SMP 100 points: we normalize down by 10% for both and publish the 10% number so people can still see how much computational/scientific effort was needed to earn x credit. How does this influence the relative progression of a BA versus a regular SMP instance? The BA instance will still earn the same amount of points more than the SMP instance relative to the total points obtainable.
First of all, I disagree with this proposal for a number of reasons, the first being that it is ambiguous:

1. Why are we differentiating between BA and SMP? It would be simpler to write PPD1 = PPD2/y1, which is exactly the concept I proposed in the original post and exactly the formula I proposed in the post prior to your suggestion.

2. By talking about BA and SMP separately, you are increasing the complexity of the adjustment; it is quite easy for people to read that proposal and conclude that perhaps you are normalising BA to 1000 points and SMP to 100.

3. The 1/10th figure is completely arbitrary and appears simply thrown out there. Amongst other things, it is massive: far too large for a single adjustment. Why choose 10%? Why not make the value of new points half the value of original points? I have had to justify every single element of my posts, including exactly how to calculate the value of technological improvement (and have the examples of how it can be calculated then used as a vector to shoot down my entire proposal), and I think you should have to as well.

4. If what you are attempting to propose has the same effect as my original proposal, why not use the formula that I proposed in the prior post? As per point 1, my formula is much simpler.
Wait, what? I didn't suggest that; I explained MY INITIAL SUGGESTION using that example.

1. We are not; the 10% is the same for SMP and BA, and it is only used to show that the relative numbers are not changing!

2/3/4. Your idea? You came up with 'dollar value'... I came up with using Y as the computational relationship. This is not your idea :!:
In the initial post I said that "As I see it, the easiest way to keep 'PPD inflation' from going nuclear is to adjust the benchmark machine to the rate of CPU improvement. At a defined interval (annually is probably too infrequent, but monthly may be too frequent), adjust the benchmark machine's PPD to reflect the improvements in computing power." In other words, from the first post I said that PPD should be continuously adjusted to reflect increasing computational power, or, in formula form, PPDn / Yn. Do you disagree with this?
MtM wrote:
k1wi wrote:P.S. Has anyone been able to advance my theories around accounting for technological improvement in the speed ratio?
Seriously, stop this :!:

Do you really need me to quote you again on your dollar value and link to the post where I said you should use the computational ratio instead? Admit you had a totally different idea in mind and changed it after I pointed out the obvious flaws and offered this alternative, which has a much higher chance of being successful. Really, it's not done to try and claim something which is so obviously not your own idea.

The PPD / Y is the only thing which is the same, but the implication of using a dollar value (which has ZERO chance of working, ZERO :!:) as opposed to MY IDEA OF USING THE COMPUTATIONAL SPEED INCREASE to calculate Y makes PPD / Y a totally different formula entirely.

This is where you first corrected YOUR IDEA OF A DOLLAR VALUE -> viewtopic.php?p=210444#p210444
This is where I made my first suggestion of using computational capabilities -> viewtopic.php?p=210435#p210435, which I made while admitting the idea might not be totally original, as I remember there being similar discussions in the past where this idea might have been brought up by someone else.

Don't take credit for something which isn't your idea. Unless you can prove you're the one who made a similar post a long time ago, you're definitely not the one who conceived this idea.
Why do you always come back to me putting a dollar value on it? In post 3 of this thread I used an economic example as a way of describing/explaining the issue at hand; hell, I also used the concept of 'average computing power' to explain the conceptual issue. Those are methods of calculating the computational speed increase, of quantifying Y; they are not the fundamental creation of PPDn / Yn. The first time I saw you create a formula (which was 1000/10 and 100/10) was after my post...

In post 10 you made a massive edit (which in my opinion should have been a new post) and stated:
MtM wrote:So as to not only have critical remarks against your dollar-value suggestion, I'll make a suggestion of my own.

You can keep the ppd comparable with previous years and still allow people like me to gather enough information to find the actual processing power if you do two things. Keep all historical projects forever, and benchmark them again with every change to the benchmark machine and publish the speedup factor compared with the previous results. Then use the speedup factor in reverse to set the base credit for new projects.

Results in base credit which will only increase if a project has a higher scientific value (like how some GPU work units are now worth 358 points and some 512 points), and will still allow people to look up the computational power needed to get that PPD."
The only way I can see that differing from my original concept in post 1 is that we keep projects forever. I have never agreed that they should be kept forever because I don't think that is needed for the concept to work.
MtM wrote:To me it seems like you took my suggestion, complicated it by adding n to both sides of the equation (and yeah, that means it's exactly the same) and are doing a whole lot of talking around the issue so as to hide this fact.
In post 2 you said that my whole basis was that I want "people to be credited for participation over actual scientific contribution".

I did add n to both sides of the equation. I did so because it has a fundamental mathematical basis: n represents the time period. I cannot see where you wrote PPD/Y anywhere before I did. I can see you conceptualising it, but that is just the same as what I did in my first post?
k1wi wrote:Furthermore, I've never proposed running the one project across all clients!
Maybe other people are subconsciously reading some (parts) of my suggestions and attributing them to you as well ;) That wouldn't surprise me, as it sounds like something which could happen if you claim other parts of my suggestions are indeed actually your own.


MtM - When I say "my proposal" I mean what I proposed in the original post: that "the easiest way to keep 'PPD inflation' from going nuclear is to adjust the benchmark machine to the rate of CPU improvement. At a defined interval (annually is probably too infrequent, but monthly may be too frequent), adjust the benchmark machine's PPD to reflect the improvements in computing power."

Which, as it has been refined, means PPDn / Yn, where Y is the relative improvement in technology and n is the time period in which we are measuring. "My proposal", as I have used it, makes it clear that the formula does not deal with the issue of how PG determines computational improvement. I discussed how they could determine computational improvement because people like yourself asked me how it could be done, but "my proposal" actually leaves that to PG to determine, because they have the tools to develop the most appropriate methodology.

I put this to the community in its infancy because I did not have a completed proposal. I put it to the community so that the community could improve it faster than I would be able to formulate it on my own, so that by the end of it I can explain better what I am suggesting, based on the back and forth of robust conversation. I know that others have put similar suggestions to the community in the past, but I also know that those suggestions were killed off by people who think the "proposal already falls apart": people who used their conceptual disagreement with it to kill it rather than to improve it. Because of those people, the people suggesting it have gotten to page 5 and said "to hell with it, let bruce close it, nothing will change."

Do you think that if PG adopts this I want it to be known as the k1wi adjustment? No, but that is what you seem to be suggesting that I am doing. Do you want the formula to be "yours"?

Will it be the concept that has been brought up for the nth time and persevered with until some sort of workable concept was made? Yes.

Does what ever final concept include the musings and input of every contributor to the thread? Yes.

Am I spending a lot of time replying to your posts because I think you are bringing up a lot of points of discussion that need to be defended/explained? Yes.

Do I apologise if there are typos, things I could phrase better in my posts, and points that I miss? Yes, but in response to your posts I am writing near-essays of 16 lines per page and attempting to answer numerous specific points.



You stated you do not agree with the proposal. I do respect that despite disagreeing with it you are wanting to find holes in it and as I have made clear to you publicly and privately, I do appreciate the passion in which you approach forum discussions.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: Suggested Change to the PPD System

Post by MtM »

k1wi, you opened with this:
It's pretty obvious that there is an ongoing issue regarding the PPD bonus system etc., and while there has been a lot of hypothesising about how points should be awarded, I was wondering whether I might suggest an option that will help alleviate the problems associated with the current PPD bonus system.
You edited your post 12 hours after making it; it had no reference to computational power in it before. Your edit was made after my post.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

mdk777 wrote:One other option to consider is a seniority multiplier.
The objective is to keep people folding for the long term, right?

Pensions used to have vesting schedules that encouraged employee loyalty. (outlawed due to personal liberty and the desire to make changing jobs easier rather than harder in a modern society)

Anyway, a seniority multiplier could be a counter-balancing factor to point inflation.

As an individual returned WU, his specific seniority multiplier would continuously increase by some factor.

Just an idea to make the entire system even more complicated. :lol: :lol: :mrgreen:
Like MtM, I believe this would amplify the issue: in effect it says points should be easier to earn over time because of technological improvement (because we have not accounted for it) and because someone has folded more. Of course, it then opens the next can of worms of "do we measure seniority by WU count?" If so, people who fold smaller, less difficult WUs will earn a quicker multiplier.

You do raise a question: "The objective is to keep people folding for the long term, right?" I would hypothesise that normalising for computational improvement would keep people folding for the long term, because there would not be a situation where absolute point increases mean that a person's 10-year points total is surpassed in a single day by a current folder!

MtM: The one edit I made, which was 12 hours later, changed the title of the thread. Nothing else. I can tell you my initial post was not one paragraph long, because I put a lot of time and effort into it.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: Suggested Change to the PPD System

Post by k1wi »

MtM wrote:Edit: K1w1, find me a post where Grandpa is saying he is not getting enough points?
http://foldingforum.org/viewtopic.php?p=210650#p210650
There is a fundamental problem with the proposal as I see it, and it actually is not the proposal itself: it is that it does not adjust the MP to where it should be. Right now there is a rather unique situation in the MP arena and quite a few are being built and used. But that is because the last generation of AMD processors is better for folding than the current generation, and as the big super servers are changing out their MC processors for Interlagos there is a surplus of MC processors which can be bought for a reasonable price. But just in the last month a 6174 has gone up 20% a piece on the used market and is still rising. With the current point system a person folding a 4P will make 5X the points folding a 6903 or 6904 over my 970 or 980X folding the same WU, which is not too bad, since you can currently build a 4P rig for reasonable $$$ and they only consume slightly more electricity to operate. But the new bigadv WU only gives you slightly more than 1/2 of the points of the 6903 and 04, and quite frankly I cannot find anybody who is willing to fold them at this time.

Why? Because the economics just are not there. If the reward system does not address this, then you will see fewer and fewer MP rigs being built and used for folding. And for some reason people want to ignore it, but I know people will not buy/build MP rigs for folding if they cost 10X as much to purchase and only earn 4X or 5X as much reward. So how is this issue going to be addressed? As you have it set up, does your proposal deal with the issue? Is the QRB expected to deal with it? Is there a way to figure in cost factors along with scientific need? Simple economics says that if the economics are not there, it is not going to happen.
MtM
Posts: 1579
Joined: Fri Jun 27, 2008 2:20 pm
Hardware configuration: Q6600 - 8gb - p5q deluxe - gtx275 - hd4350 ( not folding ) win7 x64 - smp:4 - gpu slot
E6600 - 4gb - p5wdh deluxe - 9600gt - 9600gso - win7 x64 - smp:2 - 2 gpu slots
E2160 - 2gb - ?? - onboard gpu - win7 x32 - 2 uniprocessor slots
T5450 - 4gb - ?? - 8600M GT 512 ( DDR2 ) - win7 x64 - smp:2 - gpu slot
Location: The Netherlands
Contact:

Re: Suggested Change to the PPD System

Post by MtM »

k1wi: You made posts which were very conceptual but not divided between your attack on the exponential QRB part and the part about dealing with the hockey-stick effect caused by exponential increases in computational power. I can't find any unaltered post where you suggest using computational speed before I mention it.

No, this isn't 'MtM's formula'; if I wanted it to be that, I would ask a certain PG member to post here and try to recall when he and I first discussed this.

It's a logical extension of wanting to keep a direct tie between science and points, nothing more and nothing less.

There are also differences between what you're saying and what I'm thinking, but because of the way you're posting, I'm not sure if that's because you want something else or because you're saying it in a way which makes it seem the same.

For instance, n as a time factor is fine, but if you use PPDn = PPDn / Y it's the same as PPD = PPD / Y, since n is on both sides of the equation and can be removed from both sides without altering it.

Also, you don't want to discuss Y, but I can tell you now that's not going to work. There will be a discussion about it at some point, because a lot of people are, like me, anal: they want to be able to compare results across the board and know how the Y value is obtained at each normalization. You could even expect PG to ask for a clear definition of Y, as they have shown in the past that they like to be as transparent as possible. For this, they would gladly be transparent about their method of calculating the improvement.

If you are referring to me saying your suggestion would fall apart, that's because I was referring to your suggestion of using a dollar value. And yes, you did change more than your title.

If you're referring to previous posts which might have been killed off by similar comments from me, it would be nice to be able to see whether I said the same things there as I'm saying now, which again would make this a lot less 'your proposal'.

Since I also don't want to claim this as my proposal, it would be sufficient to refer to it as the 'the proposal', but not 'your' proposal.

It's just bad form to claim ownership of a concept. If you want, I will call you 'the guy who tried to work this proposal out', and I might call you 'the guy who succeeded in coming up with a proposal I can fully support' if you keep this up. I hope that's ok with you.

As to Grandpa: yeah, bad comment right there. On the other hand, look at it this way:

If there is a scientific bonus to folding on 4P systems, the points system should make it more attractive; if you don't, you won't get people to buy the hardware. So his reasoning is 100% correct. If you can't make it attractive to run systems which are worth more scientifically, you're doing something wrong. If you have a method but can't use it because it would upset the other donors, we need to make it so the other donors don't object.

How about solving that problem as well? That would be an original fix, depending on the outcome.
Last edited by MtM on Mon Mar 19, 2012 10:01 pm, edited 1 time in total.
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: Suggested Change to the PPD System

Post by ChasR »

@ MtM, I'm sorry I confused you.

Simplify my quote to "I think each WU has a scientific value and that value should remain constant."

Exponential bonuses are unsustainable. That's why I came up with an alternative. For the record, I'm opposed to my own alternative. Plug the formula into Excel; it's not that difficult. I did note that "it may not overcome scaling losses", which would mean running two instances of folding may produce more PPD in some circumstances. I should have been clearer.