Open letter to PG [with response]

Moderators: Site Moderators, FAHC Science Team

filu
Posts: 45
Joined: Mon Aug 03, 2009 9:33 am
Location: Krzeszyce, Poland

Open letter to PG [with response]

Post by filu »

donors from Team Poland wrote:Dear friends,

Recently a number of our top folders (Borgis, chillerworks.com, P.Holcman, GandalfG, KeyJey) decided to quit both Team Poland and F@H altogether. They did so in concert with another outstanding donor, Tear of the DarkSwarm team, who not only contributed significant, dedicated crunching power, but whose software skills also helped the wider project in many forum topics.
This triggered a further stormy discussion within our team, which produced several points we would like to present, since they may be significant for the entire folding community.

As the Pande Group itself knows (it is encouraging this trend with the bonus system, after all), the nature of the project is shifting from a large number of weak, single-CPU donors towards a smaller number of strong contributors, who often build dedicated folding hardware. That is understandable, given the emphasis the bonus points system places on the speed and scale of calculations.
However, the growing "quality" of donors must be accompanied by a growing quality of service - the more people know about the project and its systems, the more they expect. They sometimes invest considerable money in hardware and electricity bills, so they would like to see an efficient system with clear rules, not a hodgepodge of incompatible cores, unrefined clients and unreliable servers.

Since a growing number of highly productive yet increasingly disappointed donors are considering quitting FAH and moving to other DC projects, we would like the Pande Group to address the following points:

1. The original idea of the project was to use free CPU cycles. Just a couple of months ago a single CPU like the E5200, running two simple clients, produced 800-900 PPD; now it is down to 300-400. This is discouraging the many donors who are not going to build dedicated machines. While the trend itself (a diminishing number of ever-stronger donors) is understandable, we would like to point out that it can be seen as undermining the basic idea of widely distributed computing. Yes, single processors are getting ever less attractive versus GPUs and PS3s, yet it is the number of committed contributors that drives the popularization (and thus the success) of the entire project. The greater the base of small donors, the more of them can grow into big donors and/or encourage other people to fold. Let's not forget them - any tower will ultimately fall if its base is not wide enough.

2. Stanford's servers (or the California electric grid, for that matter) are notorious for their unreliability. While it can be frustrating to wait for results to be sent, it is outright wrong to credit SMP WUs only after the results have been accepted. After all, a donor must not be punished with the loss of a bonus for communication faults. Our suggestion: if it is not possible to run a really solid server network, then move the bonus calculation to the clients instead of the work servers.
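For clarity, the bonus at stake here is the quick-return bonus (QRB), which multiplies a WU's base credit by how far ahead of the deadline it is returned. A minimal sketch of the published formula follows; the per-project constant k and all the numbers are illustrative only, not taken from any actual project:

    # Minimal sketch of the quick-return bonus (QRB) formula:
    #   final = base * max(1, sqrt(k * deadline_days / elapsed_days))
    # k is a per-project constant; every number below is illustrative.
    import math

    def qrb_points(base_points, k, deadline_days, elapsed_days):
        multiplier = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
        return base_points * multiplier

    # A WU returned in 1 day against a 4-day deadline:
    print(qrb_points(base_points=1920, k=0.75, deadline_days=4, elapsed_days=1))
    # -> ~3325 points (multiplier ~1.73)

Under such a formula the bonus is known the moment the WU finishes; what this point objects to is that the credit is only granted once the server accepts the upload.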

3. The famed and long-awaited integrated client would serve an ever larger base of folders - one client handling both GPU and CPU simultaneously, with easy configuration and monitoring on a single computer, would go a long way towards that.
When can we expect such a client?

4. Folders who built dedicated -bigadv crunching rigs are very disappointed with the reality of bigWUs. Sure, it is the science that counts most, but many people are motivated mainly by the points they earn, which visibly represent their contribution. Really, not that many people would stick with the project if the point system were scrapped one day.
That is why we would like to understand why there are such big disparities in PPD on the same machines.

Some examples (Core i7-920 @ 4.2 GHz):
- project 6701 - 13.7 kPPD,
- project 2684 - 21.3 kPPD,
- project 2685 - 35 kPPD.

Project 2685 yields 255% of project 6701's PPD, and 164% of project 2684's. Such a lottery, with donors praying for specific WUs, is really hard to accept, while clear rules usually work much better at motivating people.
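The ratios quoted above follow directly from the sample figures:

    # PPD ratios from the sample figures above (same i7-920, in kPPD):
    ppd = {"6701": 13.7, "2684": 21.3, "2685": 35.0}
    print(ppd["2685"] / ppd["6701"])   # ~2.55 -> 255%
    print(ppd["2685"] / ppd["2684"])   # ~1.64 -> 164%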
To top it all off, there have recently been problems obtaining a bigWU at all. The Pande Group is supposed to need those, as the high point rewards suggest, so what is the matter?
And what about bigWUs for Linux?

5. Another point is ATI. Many people who do not optimise their gear for FAH's needs have ATI cards. Will there ever be a client that fully utilises these cards? Oddly, a new Fermi core was ready within a couple of weeks, while the potential of ATI cards has sat unused for over two years now.
Maybe it is not in your plans at all, due to the specifics of the ATI architecture. But then, wouldn't it be fair to tune the award system a bit so that the ATI/NV PPD gap closes somewhat?

6. Finally, general information. Anybody doing anything wants to see some results. In the case of FAH they are available as scientific papers based on our calculations; that is good, but by definition specialized papers are hard for the wider public to understand. That is why it would be really desirable to have some kind of popular-science blog where the Pande team explains its current projects, results, expectations, etc.
Another great idea is the short description attached to each WU; alas, there are flaws there too, such as SMP projects left undescribed for almost a year.

Please don't dismiss these points as petty details or malicious grievances. They are legitimate remarks; we are really invested in F@H and would like to see some reasonable answers instead of 7im's typical bullying and ridicule. One can contribute to F@H by discussing its workings too, not just by plain folding, and disparaging new ideas is not the best way to advance the project.
We believe that addressing these points will go a long way towards convincing people that Stanford cares about its folders. There are still a number of folders on the edge of quitting the project - let's not push them away.
Preparing this statement was a bitter cup for us to drink.

Below are links to our discussion.
http://www.forum.zwijaj.pl/viewtopic.php?f=3&t=693
http://www.forum.zwijaj.pl/viewtopic.ph ... 2&start=30
http://www.forum.zwijaj.pl/viewtopic.php?f=2&t=689

Edit: Altered thread title -UF
i7-2600K@4.8 Asus P8P67 EVO 2x2GB GTX480
i7-920@4.0 GA-EX58-UD5 3x2GB 2xGTX560Ti
2x Xeon 5620 6x 2GB
2x Xeon 5645 6x 2GB
PantherX
Site Moderator
Posts: 7020
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud
Contact:

Re: What PG respond to that?

Post by PantherX »

Here is what I think:

1) Personally, on my laptop, I get ~200 PPD and it has remained the same from when I started in April 2009 until now. My laptop did take a break of a few months due to heat issues, but overall the PPD is the same. A possible explanation is that some high-point WUs were more common while the researcher wanted results faster, and now that he has plenty of WU results, he needs to analyze them before any further progress can be made. Occasionally I do get WUs that double the PPD to ~400, but that doesn't happen on a regular basis, so I consider it a "bonus" on top of the 200 PPD that my laptop is capable of. Also, can you provide details of the system? F@h uses free CPU cycles, and if you happen to install new software that uses more CPU cycles, it will obviously have a negative effect on F@h production. It is possible that the user is unaware that the software is using some CPU cycles and thinks that F@h production is dropping.

2) Moving the Bonus Points calculation to the clients could allow the software to be manipulated to give more points (nobody wants that). Also, one has to remember that Gen X+1 cannot be sent out until Gen X has been submitted to the server. As the Bonus Points scheme favors science, you are not "contributing" to science if you have finished a WU but haven't uploaded the result. Nonetheless, server-side issues can be reduced by bringing more SMP servers online, which PG has stated a couple of times; hopefully they will be online by Christmas (maybe sooner). BTW, I have read (from an unreliable source, so I could be wrong) that Stanford has its own generators, so doesn't that take care of the "loss of power" issue?
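A toy model of that serial dependency (the code is purely illustrative, not F@h's actual core):

    # Toy model of why generations serialize: Gen X+1's starting state is
    # Gen X's result, so the server cannot even create Gen X+1 before the
    # Gen X result has been uploaded. simulate() is a stand-in.
    def simulate(state):
        return state + 1   # pretend this advances the trajectory

    state = 0                      # Gen 0 starting structure
    for gen in range(3):
        state = simulate(state)    # a donor folds Gen `gen`...
        # ...and only the uploaded result lets the server build Gen gen+1
    print(state)                   # -> 3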

3) The v7 client is still in internal testing and, as usual, there isn't any ETA. However, I am hopeful that it just might make it to open public beta by Christmas.

4) Not sure why Project 6701 is included, since it isn't a bigadv WU. However, if you are comparing it to bigadv WUs, then yes, there will always be more points for a bigadv WU on the same system, since bigadv does more science than normal WUs. Now, Project 2684 and Project 2685 are two different projects, but when I checked their complete descriptions, I found them to be the same. Judging from that, I would guess that Project 2684 was the first to be released on the Windows platform and that a "tweaked" version of it came online as Project 2685. When they saw how much difference the tweak made, they released Project 2686 and Project 2692, which are based on the tweaked version. I believe the first project, 2684, was to "test the waters", and when they found it worked fine, they made further improvements and were thus able to release three more bigadv projects with similar TPFs and higher PPD. So an educated guess is that they have found the "sweet spot", and further projects of a similar nature may give similar PPD.
Regarding the availability of bigadv WUs: some F@h donors fold bigadv WUs on systems that do not have 8 real cores, so those WUs take longer to fold compared to high-end, multi-socket dedicated bigadv rigs.
bigadv WUs were problematic on Linux and were therefore temporarily stopped. I haven't heard whether any progress has been made on that front.
Now, I believe the plausible ways to deal with the lack of bigadv WUs in the near future are:
A) Have a performance index, so that when bigadv WUs are scarce, only the fastest systems get to munch on them; when they are abundant, "slower" machines may be allowed to nibble on them too. With this method, the fastest machines ensure the maximum throughput of bigadv WUs, while the slower machines are not "guaranteed" a bigadv WU (a rough sketch of this idea follows after this list).
B) Raise the minimum requirement to more than 8 Physical Cores
C) Improve the CPU detection mechanism so that there isn't any "loophole" for ordinary F@h donors to exploit
D) Increase the number of bigadv Projects.
The reason is that Sandy Bridge will bring quad cores with HT, and that will again cause issues just like the current i7-800 and i7-900 CPUs.
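A rough sketch of option A, with every name and threshold invented for illustration:

    # Hypothetical performance-index gate for bigadv assignment (option A).
    # perf_index: the machine's benchmarked speed relative to the fastest
    # known donor rig (0.0..1.0). All names and thresholds are invented.
    def eligible_for_bigadv(perf_index, cores, supply_is_scarce,
                            fast_cutoff=0.9):
        if cores < 8:                         # hard minimum, as today
            return False
        if supply_is_scarce:
            return perf_index >= fast_cutoff  # only the fastest machines
        return True                           # in abundance, slower rigs too

    print(eligible_for_bigadv(0.95, 8, supply_is_scarce=True))   # True
    print(eligible_for_bigadv(0.60, 8, supply_is_scarce=True))   # False
    print(eligible_for_bigadv(0.60, 8, supply_is_scarce=False))  # True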

5) As a matter of fact, the v7 F@h client is needed to run the new ATI OpenMM/OpenCL FahCore_16. It is already in internal testing, but I am not sure how its performance compares to the present FahCore_11. Hopefully, it will be significantly better.

6) That would be nice; however, I am sure you are aware of PG's limited resources. Once v7 is out, there might be some "free time" during which PG members could update the mini descriptions and give us basic science information. I recently came across Prof. Vijay's speech and, for the most part, I kinda understood it; it gave me a fresh perspective on the F@h project.
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
theo343
Posts: 74
Joined: Thu Jul 03, 2008 12:43 pm
Hardware configuration: Home Network:
ADSL 12Mbps - 807 / 14.439
USRobotics 8port 10/100/1000
All computers connected with TP CAT5

HC1:
E8400@4GHz(8*500)
4GB PC2-8000
9800GX2@stock
removed - (8800GTS-512MB@724/1810/972)
Mist 600W rev2
WinXP Pro 32bit SP3
FW180.43
1xGPU2 v6.20 R1 Core 1.18/1.19
1xSMP v6.23 Beta R1

HC2:
AM2+ x4 Phenom 9950@3GHz
2GB Crucial Ballistix PC2-5300
3x8800GS-384MB
Corsair TX 750W
WinXP Pro 32bit SP3
FW178.24
3xGPU2 v6.20 R1 Core 1.18
1xSMP v6.23 Beta R1

HC3: (not folding atm and outdated)
X2 6000+@Stock
HD3850OC-512MB@783MHz
2GB PC6400
PSU 420W
WinXP Pro 32bit SP3
CCC8.6
1*GPU2 v6.12 Beta8 Core 1.04
1*CPU 5.04


Office Network:
SDSL 20Mbps
100/1000
All computers connected with TP CAT5

OC1:
E6750@stock
8800GT-256MB@702/1755/900
4GB PC6400
Tagan 480W
Vista Ultimate 32bit SP1
FW 178.24
1xGPU2 v6.20 R1 Core 1.15
1xSMP v6.23 Beta R1


OC2:
E8400@stock
2x8800GT-512MB@stock
8GB PC3-8500
Corsair HX520
Vista Business 64bit
FW178.24
2xGPU2 v6.20 R1 Core 1.15
1xSMP v6.23 Beta R1
Location: Norway

Re: What PG respond to that?

Post by theo343 »

PantherX wrote:2) Moving the Bonus Points calculation to the clients could allow the software to be manipulated to give more points (nobody wants that). Also, one has to remember that Gen X+1 cannot be sent out until Gen X has been submitted to the server. As the Bonus Points scheme favors science, you are not "contributing" to science if you have finished a WU but haven't uploaded the result. Nonetheless, server-side issues can be reduced by bringing more SMP servers online, which PG has stated a couple of times; hopefully they will be online by Christmas (maybe sooner). BTW, I have read (from an unreliable source, so I could be wrong) that Stanford has its own generators, so doesn't that take care of the "loss of power" issue?
Why not make the CS structure a little more flexible and robust to provide a better service to the users? Distribute some proxy CSes overseas and at other key points worldwide, tasked with collecting finished work and granting the right bonus to the clients. I said a long time ago (when the NVIDIA GPUs started folding) that there should be more tiers of CS than just one.

It seems it's still the same model that has been used for years and years, with no sign of any desire to modernise the infrastructure. I can easily understand people getting fed up with issues like these when no effort has been made towards a more flexible infrastructure.

The "proxy model" would also greatly ease the load of the main CS servers since they mainly would have to talk to a much fewer numbers of computers (cs proxies). This could also be implemented for the AS structure.

The "CS Proxy" servers and the "CS backend" could also serve togheter as a failover grid. if the proxies are washed down then clients get redirected to the backend CS, or just have more Proxies dealing with the load.

The reason I mention "overseas", "key points", etc. is that this is a worldwide task force and the internet infrastructure isn't perfect either, so making things a little better for the clients wouldn't hurt the project.
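A bare-bones sketch of the client side of that failover idea; the hostnames and the upload() helper are made up for illustration:

    # Client-side failover across tiered collection servers (hypothetical).
    PROXIES = ["cs-eu.example.org", "cs-asia.example.org"]  # invented names
    BACKEND = "cs-backend.example.org"

    def upload(host, result):
        # Stand-in for the real upload protocol; here we pretend only
        # the backend is reachable.
        return host == BACKEND

    def send_result(result):
        for host in PROXIES + [BACKEND]:   # nearest proxies first
            try:
                if upload(host, result):
                    return host            # bonus credited at this tier
            except OSError:
                continue                   # proxy swamped or unreachable
        return None                        # all tiers down; retry later

    print(send_result({"project": 2685, "gen": 41}))  # -> the backend host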
Last edited by theo343 on Tue Oct 19, 2010 2:48 pm, edited 2 times in total.
VijayPande
Pande Group Member
Posts: 2058
Joined: Fri Nov 30, 2007 6:25 am
Location: Stanford

Re: What PG respond to that?

Post by VijayPande »

Thanks for your constructive comments. Let me try to address your questions.

1) We have a plan to bring more PPD to classic clients by implementing the QRB for them. This aligns points with the science (which is good for everyone) and gives classic donors more PPD for helping the science further. A "win-win". This will come with a more complete rollout of the v6 WS and is under testing right now.

2) A working CS will help resolve the server issues. This seems to now be working with the v6.0.6 WS, which has been rolled out to a few WSes. We are working to roll this out more broadly. Handling bonus points client-side opens up a major hole for points cheating. While this wouldn't be a problem for the vast majority of donors, the few who did cheat would create an unfair environment for everyone else.

3) The v7 client is going through close Alpha testing. I have kept it there (rather than an open beta) since I want it to be in fairly good shape before the rest of the world sees it. It has gone a long way even in the last few weeks. It's hard to be sure when it's ready, but I expect we'll have something by the end of the year, perhaps a bit sooner. I agree that this client will go a long way to bringing in more donors, as it is intended to make it a lot easier to run FAH.

4) Modern hardware varies greatly from CPU to CPU and board to board, in qualities such as cache, memory speed, efficiency of HT, etc. The degree to which people's hardware resembles our benchmark machine is the degree to which they'll get consistent PPD; very different hardware can easily get very different PPD. We expect this variation to get worse as hardware gets more complex, and we are debating what we should do about it. One idea was to determine PPD purely from a benchmark calculation on the donor's machine, with WU points then determined by that PPD x time to complete the WU. This scheme is also open to cheating, but it would give a much more uniform PPD on a given machine. This and other points schemes have been extensively debated on this web site, and no alternative scheme has garnered broad donor approval.
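A worked example of the benchmark-based scheme described above; all figures are invented:

    # Proposed scheme: credit = PPD from a benchmark run on the donor's own
    # machine x time spent on the WU. All figures here are invented.
    def wu_points(benchmark_ppd, days_on_wu):
        return benchmark_ppd * days_on_wu

    # A rig that benchmarks at 20,000 PPD and spends 0.6 days on a WU:
    print(wu_points(20000, 0.6))   # -> 12000.0 points, whatever the WU

This is why PPD on a given machine would become uniform, and also why a falsified benchmark would inflate every subsequent WU; that is the cheating hole mentioned above.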

5) ATI deprecated their GPU language, Brook, in favor of OpenCL. So we were forced to completely rewrite our GPU core for ATI in OpenCL. This takes time, especially to write highly optimized code. We have been internally testing this and expect to start beta testing it shortly (days to weeks). It will require client changes that are now built into the v7 client.

6) This is a good point and I wish we had more time to post to my blog (http://folding.typepad.com/). Other ways to see what FAH has done are to judge us by our numerous awards (which give you a sense of what our colleagues think of our work -- http://folding.stanford.edu/English/Awards) or by videos that discuss what FAH does (I posted one to my blog recently). In the end, I've put our resources into pushing our scientific results for scientists (hence pushing scientific papers, http://folding.stanford.edu/English/Papers). That is ultimately how FAH will be judged. We've been around for 10 years, and that success is due, in my opinion, to how we've pushed science first. With that said, I completely agree that we should do more to help donors understand what we've been able to do, and I will think about what we can do with the resources we have.

To summarize, thanks for your constructive comments. Many of the issues you raise (especially #1, #2, and #4) are at the forefront of my mind, and we have been working behind the scenes to make significant improvements. Big changes don't come fast, and I wish they would come faster, but we are getting close on many of these fronts. Please note that our previous work to improve other aspects of FAH has been successful, such as our web site, NVIDIA GPU servers, and stats update improvements. As FAH goes along there will always be places to improve, especially as we push the boundaries, and I thank donors for constructive feedback like this to help us make the project better and better.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
VijayPande
Pande Group Member
Posts: 2058
Joined: Fri Nov 30, 2007 6:25 am
Location: Stanford

Re: What PG respond to that?

Post by VijayPande »

PS @ theo343 -- there were some replication issues (which can be very tricky) with the old CS code. I think Joe has fixed those in the new CS. A working CS would go *a long way* toward improving the donor experience. Combined with the v7 client, I hope to solve most of the most annoying current issues w/FAH.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
theo343
Posts: 74
Joined: Thu Jul 03, 2008 12:43 pm
Location: Norway

Re: What PG respond to that?

Post by theo343 »

That sounds very promising. Looking forward to checking it out :)

And I understand it wouldn't be easy to make an x-tier model of the CS system, but not every good thing comes easily. Let's first see how what you mention works out.
Xilikon
Posts: 155
Joined: Sun Dec 02, 2007 1:34 pm

Re: What PG respond to that?

Post by Xilikon »

Thanks for taking the time to reply to the concerns outlined by the OP. This shows everyone that you are working to improve on those issues, and I'm sure it will get better and better with time (you have already been on the right path for the past 2-3 years).
HaloJones
Posts: 920
Joined: Thu Jul 24, 2008 10:16 am

Re: What PG respond to that?

Post by HaloJones »

I understand that the points awarded relate to the science achieved. I also understand that the performance of a donor's client can only be judged against that of the "benchmark" machine.

But do you understand that the donor's measure of the contribution is the points awarded? If my computer is 100% loaded 24 hours a day, shouldn't the points it earns be consistent?

My smp clients have no ability to choose which work units they are given; they do what they're told. But some days they can earn 50% more than other days. The client doesn't change. The machine doesn't change. The power draw doesn't change. There's no background software I don't know about! But the "reward" I see does.

"it's the science not the points" I hear you cry. Well, the only visible reward for doing 6701s is a loss of "earnings". 99% of donors don't understand the science and do it because they're competing in a points race. Maybe the posters on here are all altruistic but go visit most of the individual team pages and it's all about points maximising and daily points contributions and what hardware delivers the most ppd or the best ppw. Show me a post about the great science on the [H] or suchlike.

And the message about the science is crucial. I work in a highly technical field and our greatest challenge is how to get buy-in to the value of what we do. We do that by using non-technical language to explain the value. It takes extra time as the normal deliverables cannot be used but showing technical documents doesn't work; no-one reads them and the message is lost.

I know you're all super-bright and super-busy. But if you don't do more to show progress, you will lose the buy-in and lose the mass market.

I agree with most of the OP and I don't think the answers given so far are good enough.
single 1070

endrik
Posts: 34
Joined: Mon Dec 10, 2007 10:41 pm
Location: Wroclaw, Poland
Contact:

Re: What PG respond to that?

Post by endrik »

VijayPande wrote:Thanks for your constructive comments. Let me try to address your questions.
Thank you even more for the detailed answers. Crunching-wise I am almost a Mr. Nobody and don't know all the tricks, but acting here as the spokesman for our more advanced folders, I know they were mostly satisfied with your statement.
2) A working CS will help resolve the server issues.
Definitely so. Still, the idea of adding some flexibility to the structure - say, one backup CS in Europe and one in Asia - is very promising. Of course it is up to you to judge whether it is technically viable and to check the performance of the new CS code; now that we know what's going on, we will have patience for another couple of months :)
6) This is a good point and I wish we had more time to post to my blog (http://folding.typepad.com/).
We keep our fingers crossed :) Even if some responsibilities can be delegated (see below), there is nothing like contact with the boss himself. Posting your last speech to the blog was brilliant; many people learned a lot from it.
We have no doubts about the scientific output of FAH - we read the published papers, and that is precisely why some people were sorry to find they don't mean much to them :(
Sure, it is not necessarily a scientist's job to be his own PR officer. But even if there is no such position within the faculty or school, and the popularization of science would be too much for the University's spokesperson, I am sure there are a number of journalists capable of providing us with a monthly digest of what's going on, enriched with some enlightenment for the masses ;) Why, even within the existing FAH community there may be some students of molecular biology willing to do just that...

Anyway, thank you again for the answers. Even if some of them were already known (maybe not publicized widely enough?), giving them personally will, as Xilikon wrote, clear up many doubts. After all, a GI is much more willing to go into battle when addressed by the general rather than just his commanding officer :)

Sorry for taking up your time, and let's fold some more.
yours,
endrik

*Bookworms will rule the world
(after we finish the background reading).
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona
Contact:

Re: What PG respond to that?

Post by 7im »

HaloJones wrote:I understand that the points awarded relate to the science achieved. I also understand that the performance of a donor's client can only be judged against that of the "benchmark" machine.

But do you understand that the donor's measure of the contribution is the points awarded? If my computer is 100% loaded 24 hours a day, shouldn't the points it earns be consistent?
Yes, they do understand the measurements. And no, points need not be consistent one day to the next.

Unfortunately, the concepts of baseline (benchmark) points and bonus-level points are not well understood - nor even the fact that different point levels exist (reading the FAQs would be a big help here).

For example, on the CPU client we have normal work units, benchmarked at 110 PPD. We also have bigWUs, which take up more computing resources - more memory, more bandwidth, etc. These are benchmarked at 220 PPD, to reward you for the extra donation of resources.

However, your processor can run at 100% and get 110 PPD for one normal work unit. Or your processor can run at 100% for the exact same amount of time and earn 220 PPD for a bigWU.

That is how PPD can vary from one day to the next while still running at the same speed, even if your hardware was exactly the same as the benchmark computer.
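A quick illustration of that swing, using the benchmark figures above; the daily mixes are invented:

    # How WU mix alone moves daily PPD, using the 110/220 PPD figures above.
    # frac_big: invented fraction of the day spent on bigWUs.
    def daily_ppd(frac_big, normal=110, big=220):
        return frac_big * big + (1 - frac_big) * normal

    print(daily_ppd(0.0))   # all normal WUs -> 110 PPD
    print(daily_ppd(1.0))   # all bigWUs     -> 220 PPD
    print(daily_ppd(0.5))   # a mixed day    -> 165.0 PPD, same hardware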


If you want, I can also discuss in more detail how hardware differences do the same thing. For example, p6701s might perform near the baseline level of PPD while most other projects perform above it, so p6701s aren't a loss so much as all the other projects are a bonus. ;)
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Waldeusz
Posts: 1
Joined: Tue Oct 19, 2010 11:04 pm

Re: What PG respond to that?

Post by Waldeusz »

7im wrote:For example, on the CPU client we have normal work units, benchmarked at 110 PPD. We also have bigWUs, which take up more computing resources - more memory, more bandwidth, etc. These are benchmarked at 220 PPD, to reward you for the extra donation of resources.
That is plain and obvious. The problem is that everyone who has a dedicated 8-core computer would like to crunch bigWUs, yet instead keeps getting a 6701 or some other regular WU.

Oops... this is my first comment on this forum :roll:
7up1n3
Posts: 68
Joined: Sun Dec 02, 2007 2:55 am
Contact:

Re: What PG respond to that?

Post by 7up1n3 »

VijayPande wrote:5) ATI deprecated their GPU language, Brook, in favor of OpenCL. So we were forced to completely rewrite our GPU core for ATI in OpenCL. This takes time, especially to write highly optimized code. We have been internally testing this and expect to start beta testing it shortly (days to weeks). It will require client changes that are now built into the v7 client.
Considering the norm of not issuing time frames, this is very encouraging news for a project that has seen several generations of ATI hardware pass by without an optimized client. Thanks Vijay!
Rage3D Admin ~ The Fighting 300 ~ Team Rage3D Folding
VijayPande
Pande Group Member
Posts: 2058
Joined: Fri Nov 30, 2007 6:25 am
Location: Stanford

Re: Open letter to PG [with response]

Post by VijayPande »

I should emphasize that this will require v7, which is still in alpha, so while this new core could go to outside testing in days to weeks, the release date is tied to v7's release. My hope is that we can do a v7 open beta soon (weeks to months), but that depends on progress in alpha testing. Joe is getting a lot of helpful suggestions, which are making the client better but slowing down the rollout.

If you want a preview of our ATI GPU code, you can see it in the OpenMM release (it's the same code). At least that should help make it clear that this isn't vaporware and that it has come pretty far in the last few weeks.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
Evil Penguin
Posts: 146
Joined: Sun Apr 13, 2008 4:34 am
Location: Texas, United States

Re: Open letter to PG [with response]

Post by Evil Penguin »

VijayPande wrote:I should emphasize that this will require v7, which is still in alpha, so while this new core could go to outside testing in days to weeks, the release date is tied to v7's release. My hope is that we can do a v7 open beta soon (weeks to months), but that depends on progress in alpha testing. Joe is getting a lot of helpful suggestions, which are making the client better but slowing down the rollout.

If you want a preview of our ATI GPU code, you can see it in the OpenMM release (it's the same code). At least that should help make it clear that this isn't vaporware and that it has come pretty far in the last few weeks.
Looking forward to it.
Last edited by Evil Penguin on Sat Nov 13, 2010 11:12 pm, edited 1 time in total.
muziqaz
Posts: 901
Joined: Sun Dec 16, 2007 6:22 pm
Hardware configuration: 7950x3D, 5950x, 5800x3D, 3900x
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP
Location: London
Contact:

Re: Open letter to PG [with response]

Post by muziqaz »

Mr. Pande, I can't understand why you are reluctant to release v7 to beta testers :D I think everyone participating in the beta program will agree that we are quite an understanding bunch in there. We worked with Joe when testing the a4 core (if I remember correctly), and I personally liked his enthusiasm during that period. Unless we were a pain in the arse for him :D
I for one am dying to use my Evergreen GPU, plus its little brother Cayman (6970) is coming home.
So if there are any showstoppers along the way during beta testing, it won't be a problem for us.
And as I understand it, the base code is almost ready; just new features are being added.
Last edited by muziqaz on Wed Oct 27, 2010 5:00 pm, edited 1 time in total.
FAH Beta tester