What do Points Really Mean?

Moderators: Site Moderators, FAHC Science Team

boof
Posts: 2
Joined: Mon May 11, 2020 8:55 pm

What do Points Really Mean?

Post by boof »

Hi, I hope this is the right place for my question. While I know that points are a metric used to compete / compare oneself and teams to others, I am more interested in how much power I am really bringing to the table when using Folding@home. I am a PhD astrophysics student and I run physics simulations on the ComputeCanada supercluster. When my advisor and I apply for time, we must show the performance of our code and give an estimate for how many compute hours we need each term. So, is there some way to convert points to compute hours? I am sure this is both a hardware-dependent and project-dependent question, and I have extremely limited knowledge of the projects I have been running. Are all the projects using the same code base? How do projects vary in terms of hardware utilization? Does anyone know of an average for any of these things?

I assume that points are assigned to projects based on estimated compute hours, but I cannot find any information about it on the Folding@home website, on these forums, or anywhere else on the internet.

Thanks in advance!
Neil-B
Posts: 2027
Joined: Sun Mar 22, 2020 5:52 pm
Hardware configuration: 1: 2x Xeon E5-2697v3@2.60GHz, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon E3-1505Mv5@2.80GHz, 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: i7-960@3.20GHz, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
Location: UK

Re: What do Points Really Mean?

Post by Neil-B »

For a quick 101 just in case you haven't seen/found it ... https://foldingathome.org/support/faq/points/ ... I'll leave complex descriptions to those who know more details.

I also found this https://foldingathome.org/support/faq/flops/ which gives some of the background as to how the FAH project defines some of its compute values ... and also https://stats.foldingathome.org/os which begins to give some headline stats from the compute resource.
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070

(Green/Bold = Active)
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: What do Points Really Mean?

Post by bruce »

The points represent "scientific value", but that's not proportional to compute hours. Promptness is also part of the value: one useful compute hour that produces results today is worth more than one compute hour that will produce results next week.
JimboPalmer
Posts: 2573
Joined: Mon Feb 16, 2009 4:12 am
Location: Greenwood MS USA

Re: What do Points Really Mean?

Post by JimboPalmer »

Just to amplify, the Base Credit is (ideally) linear with FLOPS.

But a Quick Return Bonus is added to the Base Credit, and the bonus is non-linear because it is based on the time needed to return results.

Adding them together gives you the Estimated Credit. Because the actual upload takes more or less time, it is just an estimate until the results are back on the server.

Some of this does not translate well to your mainframe job. It is either done or not done; you do not need to wait for one user in Mississippi to return the last 1% of the results.

And upload and download times are insignificant locally, but a real problem globally.

The science is done on programs called Cores
Core_a7 is the only current Core running on CPUs; it can run on any Pentium 4 or newer, since it uses SSE2. If your CPU supports the AVX instruction set, it uses that instead, which is about twice as fast. The science package is called GROMACS.

Core_21 is an older Core for graphics cards, using an older science package (OpenMM 6.2). It requires OpenCL 1.2 and double-precision floating point math (FP64). The very latest AMD cards (Navi / RDNA) do not work on Core_21. As older research projects complete, Core_21 will be retired.

Core_22 is a newer Core with newer science code (OpenMM 7.4.1) and compatible with the latest AMD cards. It has the same prerequisites as Core_21. I see about a 20% increase in points with Core_22.

https://en.wikipedia.org/wiki/List_of_F ... home_cores
https://en.wikipedia.org/wiki/SSE2
https://en.wikipedia.org/wiki/Advanced_ ... Extensions
https://en.wikipedia.org/wiki/GROMACS
https://simtk.org/projects/openmm/
Last edited by JimboPalmer on Mon May 11, 2020 10:20 pm, edited 3 times in total.
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
Neil-B
Posts: 2027
Joined: Sun Mar 22, 2020 5:52 pm
Hardware configuration: 1: 2x Xeon E5-2697v3@2.60GHz, 512GB DDR4 LRDIMM, SSD Raid, Win10 Ent 20H2, Quadro K420 1GB, FAH 7.6.21
2: Xeon E3-1505Mv5@2.80GHz, 32GB DDR4, NVME, Win10 Pro 20H2, Quadro M1000M 2GB, FAH 7.6.21 (actually have two of these)
3: i7-960@3.20GHz, 12GB DDR3, SSD, Win10 Pro 20H2, GTX 750Ti 2GB, GTX 1080Ti 11GB, FAH 7.6.21
Location: UK

Re: What do Points Really Mean?

Post by Neil-B »

I guess as a ballpark one could take the base points for each project WU completed, which should approximate the compute resource for that WU (as per the points benchmarking), multiply that up by the number of each specific WU over a set time period, and come up with some form of approximation for how much compute resource has been used. If you can get some indication/estimation of the compute resource used by the benchmarking setup to generate the base points, then you may get the type of information I think you are after.

Another way would be to reverse this: take a bit of kit on which you are confident you can measure the compute resource used, to whatever measure you are happy with, and run WUs on it for a (fairly long) while ... take the base points for all those WUs and you can approximate an average compute-to-points/WU conversion factor, as sketched below ... this could then be applied to WU counts to give some sort of figure for compute resource utilised.
2x Xeon E5-2697v3, 512GB DDR4 LRDIMM, SSD Raid, W10-Ent, Quadro K420
Xeon E3-1505Mv5, 32GB DDR4, NVME, W10-Pro, Quadro M1000M
i7-960, 12GB DDR3, SSD, W10-Pro, GTX1080Ti
i9-10850K, 64GB DDR4, NVME, W11-Pro, RTX3070

(Green/Bold = Active)
PantherX
Site Moderator
Posts: 7020
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud
Contact:

Re: What do Points Really Mean?

Post by PantherX »

Welcome to the F@H Forum boof,

Do you know what architecture the ComputeCanada supercluster is using? Is it amd64 or something specific?
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
JimboPalmer
Posts: 2573
Joined: Mon Feb 16, 2009 4:12 am
Location: Greenwood MS USA

Re: What do Points Really Mean?

Post by JimboPalmer »

boof wrote:When my advisor and I apply for time, we must show the performance of our code and give an estimate for how many compute hours we need each term. So, is there some way to convert points to compute hours? I am sure this is both a hardware-dependent and project-dependent question, and I have extremely limited knowledge of the projects I have been running. Are all the projects using the same code base? How do projects vary in terms of hardware utilization? Does anyone know of an average for any of these things?
Compute hours may be easy for your use, as the whole cluster uses the same CPUs or GPUs:
https://www.computecanada.ca/research-p ... resources/

FLOPS is a better measure for F@H, as the researcher has no idea which volunteers will decide to offer resources or where the servers will deploy them.

At the bottom end, I am crunching on a 12-year-old:

Code:

20:43:20:          CPU: Intel(R) Core(TM)2 Duo CPU T6600 @ 2.20GHz
20:43:20:       CPU ID: GenuineIntel Family 6 Model 23 Stepping 10
20:43:20:         CPUs: 2
That produces about 440 Points Per Day (PPD) via SSE2.

An AMD Threadripper 3990X averages 277,856 PPD of AVX_256 power.

And F@H has no idea who will fetch the next Work Unit, although the number of threads (F@H calls them CPUs) is passed in the request. So compute hours are going to vary wildly, while the FLOPS needed to fold the protein are known.

On the GPU side you have a similar issue. (I am going to use current Nvidia cards, but you will see folks trying to make decade-old cards fold.)

The slowest card, the GT 1030, has 384 threads for about 1.1 TFLOPS https://www.techpowerup.com/gpu-specs/g ... 1030.c2954
The fastest, the RTX 2080 Ti, has 4352 threads for about 13.5 TFLOPS https://www.techpowerup.com/gpu-specs/g ... 0-ti.c3305

So that is roughly a 12-to-1 performance differential between technologically very similar cards.
For F@H, FLOPS is a better measurement.

Proteins with more atoms and bonds can use more threads; some users of top-end graphics cards wail if they get small proteins to fold.

CPUs, having fewer resources, get the smaller proteins.
Tsar of all the Rushers
I tried to remain childlike, all I achieved was childish.
A friend to those who want no friends
boof
Posts: 2
Joined: Mon May 11, 2020 8:55 pm

Re: What do Points Really Mean?

Post by boof »

Neil-B wrote:For a quick 101 just in case you haven't seen/found it ... https://foldingathome.org/support/faq/points/ ... I'll leave complex descriptions to those who know more details.
Thanks! This is a helpful resource.
bruce wrote: One useful compute hour that produces results today is worth more than one compute hour that will produce results next week.
It was my understanding that the compute-hours metric is calculated from FLOP/s and (hopefully) memory cycles, but I am not sure about that. I do know that compute time and wall time are vastly different.
JimboPalmer wrote: And upload and download times are insignificant locally, but a real problem globally.
That is true. Upload/download should certainly be factored into the project executions. Maybe one could figure out the amount of computational resources consumed by server transfers and include that in the total amount of resources required for a project. I am more interested in what my computer brings to the table, though. Unfortunately, I do not know the performance of any of the F@h codes, so I do not know how much of my computer F@h is able to use for any given project. That is, simply calculating my CPU's and GPU's FLOP/s and memory cycles will not give me any insights.
Thank you for posting additional information about the different F@h "Cores", as they are called (not CPU/GPU "cores").
Neil-B wrote: Another way would be to reverse this: take a bit of kit on which you are confident you can measure the compute resource used ... take the base points for all those WUs and you can approximate an average compute-to-points/WU conversion factor ... this could then be applied to WU counts to give some sort of figure for compute resource utilised.
You gave me a great idea! I can simply log my computer's resource utilization locally while F@h is running and calculate the performance that way. I don't know why this didn't occur to me until just now, lol.
PantherX wrote: Welcome to the F@H Forum boof,
Do you know what architecture the ComputeCanada supercluster is using? Is it amd64 or something specific?
Thanks! It's good to be here :). I run projects on the Cedar cluster and use the GPU nodes exclusively. So I am accessing either 2 x Intel E5-2650 v4 Broadwell @ 2.2GHz or 2 x Intel Silver 4216 Cascade Lake @ 2.1GHz CPUs, and either 4 x NVIDIA P100 Pascal or 4 x NVIDIA V100 Volta GPUs, per node. I typically use anywhere from 1 to 4 nodes per job; I just let Slurm figure that one out!
PantherX
Site Moderator
Posts: 7020
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud
Contact:

Re: What do Points Really Mean?

Post by PantherX »

boof wrote:...I run projects on the Cedar cluster and I use the GPU nodes exclusively. So I am either accessing 2 x Intel E5-2650 v4 Broadwell @ 2.2GHz or 2 x Intel Silver 4216 Cascade Lake @ 2.1GHz CPUs per node and 4 x NVIDIA P100 Pascal or 4 x NVIDIA V100 Volta GPUs. These are per-node, and I typically use anywhere between 1 to 4 nodes per job. I just let Slurm figure that one out!
In that case, have a look here: https://www.ocf.co.uk/blog/hpc-for-the-greater-good/
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues