
Re: Why are CPU projects worth so few points relative to GPU

Posted: Fri May 08, 2020 10:45 pm
by bruce
Well, I still consider the original question to be why CPU projects are worth so few points relative to GPU. It's really very simple: CPU projects typically have many times fewer atoms than GPU projects, so if you adjust for scientific value, you'll see that the points are inherently fair.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Fri May 08, 2020 10:51 pm
by Joe_H
Or, even if the CPU and GPU projects are working on systems with similar numbers of atoms, the GPU WUs run many times the number of time steps.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat May 09, 2020 3:34 am
by Crunchtimer
Agreed, and I believe the original author of this post can therefore mark it as solved.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 1:01 am
by The_Bad_Penguin
Ok, for a moment, let us forget about points and amount of science done.

Are work units that are assigned to cpus also able to be completed by gpus, or are (some/all) of the work units assigned to cpus only capable of being run/solved on a cpu? Is there something special about these wu's that makes them suitable for only cpus?

If not, and if all wu's are capable of being run on a gpu (i.e., including the ones which are currently being run on cpus), then given the amount of science done by gpus compared to cpus, why is F@H even bothering with cpu wu's? Why isn't F@H strictly a gpu project?

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 1:22 am
by PantherX
Welcome to the F@H Forum, The_Bad_Penguin,
The_Bad_Penguin wrote:...Are work units that are assigned to cpus able to also be completed by gpus, or are (some/all) of the work units assigned to cpus only capable of being run/solved on a cpu?...
Some Projects are CPU only while others are GPU only. Occasionally, there can be Projects that make use of both CPU and GPU, but that's not duplication of work; rather, it's complementary work being done to gain a better understanding of the Project.
The_Bad_Penguin wrote:...Is there something special about these wu's that make them suitable for only cpus?...
CPUs are really good at serial work while GPUs are really good at parallel work. Plus, CPUs can perform a variety of complex mathematical instructions while GPUs can't achieve a similar level of complexity.
The_Bad_Penguin wrote:...Why isn't F@H strictly a gpu project?
Because GPUs can only perform X calculations while CPUs can perform Y calculations. Together, they provide much better datasets and help researchers.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 6:13 am
by Joe_H
To also answer the question a bit differently: the data in a WU for CPU processing is formatted as needed by the GROMACS code used in the CPU core. Similarly, the data in a GPU WU is specific to being read in and processed by the OpenMM code in the GPU core.

In both cases, the data from the raw description of a protein system can often be converted into WUs suitable for either GPU or CPU processing, but the WUs themselves are not interchangeable.

As for why it is not exclusively GPU, another reason is that there are a lot more CPUs whose use can be donated than GPUs.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 11:49 am
by The_Bad_Penguin
Thank you for the responses, this is sort of where I was trying to go.

@PantherX: I agree with the general statement that "CPUs can perform a variety of complex mathematical instructions while GPUs can't achieve a similar level of complexity."

But, I suppose the question remains: is this what is specifically happening with the F@H cpu wu's? Do they actually use "complex mathematical instructions" that "GPUs can't achieve"?

So, let us take a brief journey to hypothetical land, a place where you can purchase an amd 3900x cpu for $400 or an amd 3970x cpu for $1900; and a nvidia 2080ti super-duper-fancy-name-of-the-week gpu for $1600 or a generic run-of-the-mill video card for $100.

Where do you get the most "science done" for the $2000?

Is the same (complex mathematical instructions) science being done with a 3970x and $100 gpu as it is with a 3900x and 2080ti?

Or are the 3970x and $100 gpu doing (very?) different (complex mathematical instructions) than the 3900x and 2080ti, and is it thus "apples and oranges" to try to compare the scientific work done, as the work done between the two choices is vastly different?

Sorry for splitting hairs . . .

@Joe_H: Thanks for mentioning that "there are a lot more CPUs whose use can be donated than GPUs."

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 12:02 pm
by HaloJones
If - and it's a big if with loads of history and weirdness - points = science, then the place to put your $2000 is on the best gpu you can buy for that money.

FAH has never really been about dedicated folding hardware. The premise was, and still is, that you have hardware for whatever reason you have it, and you let it run FAH when you're not using it yourself. So if you need a 3970x for whatever you do on the rig, then let your 3970x do the best science available. If you have a 2080ti for gaming, for example, and you're OK with it running FAH when you're not gaming, then great, thanks very much.

The dedicated hardware really started with bigadv. These were special units only suitable for those with loads of cpu cores and people built very expensive systems with multi-socket mobos and expensive cpus. Today, because of the quick return bonus brought in around that time, a 2080ti left to run 24/7 will produce vastly more points than any cpu system.

If you're determined to build a dedicated rig, my advice would be to buy a good used gpu with good cooling. The rest of the system (other than perhaps the PSU) can be unfit for modern use for anything else.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 12:33 pm
by ajm
If I had to build a folding rig (for 24/7 use) from scratch, I would choose an Intel processor (an old Xeon for example) and one or several water-cooled GPU(s). The latter is all the more important if you have to "live" with the rig, i.e. at home or in your office. High-end hardware, like the 2080 Ti, will produce a lot of heat, and only water cooling can handle it without too much noise and excess ambient heat. I would probably not use the Ti version either, but rather the Super, which is less extreme and more balanced.

Otherwise, the 3970X (Zen 2 in general) would NOT be a good choice, as its main weakness lies precisely where FAH demands the most: the AVX instruction set, a domain where the Xeon and the X299 platform perform best. For folding, it would be very difficult (and expensive) to cool a Zen 2 processor, and a good part of that heat would not really benefit science.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 1:04 pm
by PantherX
The_Bad_Penguin wrote:...is the 3970x and $100 gpu doing (very?) different (complex mathematical instructions) than the 3900x and 2080ti, and thus it is "apples and oranges" to try to compare the scientific work done, as the work done between the two choices is vastly different?...
F@H aims to understand protein folding at an atomic level. To do that, it uses molecular simulations, specifically Markov state models built from them. That is achieved by using:
1) GROMACS
2) OpenMM

GROMACS is mainly used on CPUs while OpenMM is mainly used on GPUs. In F@H's case, FahCore_a4 and FahCore_a7 use GROMACS, while FahCore_21 and FahCore_22 use OpenMM via OpenCL. There are plans to get OpenMM running via CUDA, but the ETA is unknown. Recently, GROMACS has added support for off-loading to the GPU; that means the CPU performs the serial tasks (since it is exceptionally good at them) while the parallel tasks are given to the GPU (since it is exceptionally good at those), so the overall result is that the best parts of the CPU and GPU are used together to speed up GROMACS. Time will tell if F@H chooses to use this; we would simply have to wait and see what happens :)
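
To make "OpenMM via OpenCL" a little more concrete, here is a minimal standalone OpenMM (Python) sketch. It has nothing to do with the actual FahCore internals, and the input file, force field choices, and step count are placeholders; it simply shows how a script explicitly selects the OpenCL platform, which is essentially what "via OpenCL" refers to:

from simtk import openmm, unit
from simtk.openmm import app

# Load a protein structure (placeholder file; assumed to include periodic
# box vectors, since PME needs a periodic system).
pdb = app.PDBFile('protein.pdb')
forcefield = app.ForceField('amber99sbildn.xml', 'tip3p.xml')

# Build the system: PME electrostatics, 1 nm cutoff, constrained H bonds.
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.PME,
                                 nonbondedCutoff=1.0*unit.nanometer,
                                 constraints=app.HBonds)

# Langevin dynamics at 300 K with a 2 fs time step.
integrator = openmm.LangevinIntegrator(300*unit.kelvin,
                                       1.0/unit.picosecond,
                                       0.002*unit.picoseconds)

# Explicitly ask for the OpenCL platform; 'CUDA' or 'CPU' would also work
# if those platforms are available on the machine.
platform = openmm.Platform.getPlatformByName('OpenCL')

simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(5000)  # a short burst of MD steps

The real cores wrap far more machinery around this (checkpointing, WU packaging, result upload), but the platform selection line is the relevant part here.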

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 4:26 pm
by JimF
I build my own systems, and can go either way. Usually, that has meant GPUs for the obvious benefit in output. But a couple of years ago, I noted that the CPU projects (mainly from the Voelz lab at the time) looked very interesting scientifically, not that I am an expert on the subject. But the peptides (smaller than proteins I believe) can be used for a variety of intriguing purposes.

So I first tried out a Ryzen 3700X, and found it was not bad at around 275 k PPD. Then I built a Ryzen 3950X on Ubuntu 18.04.4 early this year, and when the COVID rush came along, I built another. They are getting over 600 k PPD. When you look at the power consumed, they are almost as efficient as the GPUs, so I don't feel any loss there. I am due for a GPU next, but that will be only next year with a 7nm card.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 4:44 pm
by JimboPalmer
[Just chit chat]

When CPUs have 'enough' threads, they get about the same PPD as GPUs. Epycs and Threadrippers with 128 or more threads are very competitive with GPUs. My i3 with 3 threads, not so much. So it is not so much why CPUs are less able to keep up, but why it is so expensive to buy a CPU that can keep up.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sat Jun 13, 2020 5:09 pm
by bruce
There's one more fact about parallel vs. serial computations.
As has already been stated, GPUs work very effectively if the calculations need a high degree of parallelism. CPUs work very effectively with relatively low degrees of parallelism.

Proteins come in all sizes. The degree of parallelism depends directly on the number of atoms. Some projects are constructed with massive numbers of atoms; some projects are constructed from relatively few atoms (but still quite a number). If a scientist is preparing a new project, he can consider that there are lots of CPUs and fewer GPUs, and that the GPUs span a range from a few hundred shaders (cores) to many thousands of them. If the protein is small, it can run efficiently on CPUs with relatively few cores or on GPUs with relatively few shaders. If the protein being analyzed has a very large number of atoms, it will be efficient on a GPU with a large number of shaders. A protein with relatively few atoms can't use the extra shaders on big GPUs very effectively.
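
As a back-of-the-envelope illustration of that last point, here is a toy Python model; every constant in it is invented purely for illustration and is not a real F@H, GROMACS, or OpenMM figure. The idea is that each MD step has a fixed serial cost plus pair-force work that scales with the atom count and can be spread across shaders, so extra shaders only pay off once there are enough atoms to fill them:

def toy_step_time_us(num_atoms, num_shaders,
                     per_pair_cost_us=0.001, overhead_us=20.0,
                     neighbours_per_atom=300):
    """Toy Amdahl-style model of one MD step: a fixed serial overhead plus
    pair-force work divided across shaders. All constants are made up."""
    pair_work = num_atoms * neighbours_per_atom / 2
    return overhead_us + per_pair_cost_us * pair_work / num_shaders

# Compare a small and a large system on a small GPU vs a 2080 Ti-sized one.
for atoms in (20_000, 200_000):
    small_gpu = toy_step_time_us(atoms, 640)
    big_gpu = toy_step_time_us(atoms, 4352)
    print(f"{atoms:>7} atoms: {small_gpu:6.1f} us/step on 640 shaders, "
          f"{big_gpu:6.1f} us/step on 4352 shaders "
          f"({small_gpu / big_gpu:.1f}x from the bigger card)")

With numbers like these, the 20,000-atom system barely benefits from a card with roughly 7x the shaders, while the 200,000-atom system sees a much larger speed-up, which is exactly the sizing consideration described above.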

Re: Why are CPU projects worth so few points relative to GPU

Posted: Sun Jun 14, 2020 11:47 pm
by MeeLee
Projects have base points and a QRB (quick return bonus).
Once you surpass the point where the QRB contributes more than the base points, you'll see the PPD fly and get very near GPU PPD.
And that means a high-GHz CPU and/or a high core count.
As mentioned, top-tier Ryzens (3900X, 3950X) and Threadrippers are currently the only ones coming close to GPUs in terms of performance; but Threadrippers are out of a lot of people's budget when they try building a new system.
That could change in a year, as technology doesn't stand still.
It's just that right now, most CPUs don't have the processing power to surpass that barrier where the QRB shoots through the roof (like it does with GPUs).
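
To make the QRB effect concrete, here is a small Python sketch of the points math as I understand it from the F@H points FAQ (the project numbers below are made up, not from a real project): the bonus multiplier grows with the square root of how much faster than the deadline a WU is returned.

import math

def wu_credit(base_points, k_factor, deadline_days, elapsed_days):
    """Final credit for one WU with the quick return bonus (QRB), as I
    understand the published formula; base_points, k_factor and the
    deadline come from the project, elapsed_days is how long the WU took."""
    bonus = math.sqrt(k_factor * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

def ppd(base_points, k_factor, deadline_days, elapsed_days):
    # Points per day = credit per WU divided by the days spent per WU.
    return wu_credit(base_points, k_factor, deadline_days, elapsed_days) / elapsed_days

# Made-up project numbers: the same WU returned fast vs. slowly.
print(ppd(10_000, 2.0, 5.0, 0.25))  # fast return: QRB multiplies the credit
print(ppd(10_000, 2.0, 5.0, 2.0))   # slow return: far lower PPD

That square root is why PPD "flies" once a machine clears the crossover: halving the return time more than doubles the PPD.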

I'm expecting Ryzen to move to 5nm in a year or two.
And if they do, I hope they'll come with SMT4, as the AM4 socket is limited to 16 CPU cores; and if they keep their CPUs at 16 cores / 32 threads, their upcoming 5nm CPUs would run at 75W.
That's 25 watts they could still assign to either higher boost or more threads.
SMT4 would allow for 16C/64T on a CPU, which is within the limits of Windows 10 Home, which has a max CPU count of 64.
Upcoming Threadrippers at 5nm will more than likely need either Linux or Windows Enterprise to run.
But with high-end next-gen CPUs, we'll definitely see the gap between CPU and GPU on FAH closing.
It wouldn't be outside the realm of possibility to see 1M PPD on next-gen Ryzens, and 2-4M PPD on next-gen Threadrippers.

Re: Why are CPU projects worth so few points relative to GPU

Posted: Tue Jun 16, 2020 3:09 pm
by The_Bad_Penguin
Would love to find an affordable 3990x, but "rumors" are that the next gen Ryzens (4Q2020?) will be ~20% more efficient. And certainly, Ryzen/Threadripper 5nm would be a sight to behold.

Don't know that I would agree that "with high-end next-gen CPUs, we'll definitely see the gap between CPU and GPU on FAH closing," because just as there will be a new generation of high-end CPUs, there will also be a new generation of high-end GPUs.

Starting to see web articles/blogs/videos claiming "NVIDIA GeForce RTX 3090 rumors: up to 60-90% faster than RTX 2080 Ti".

I would assume that a rumored nvidia 3090 ($2000?) [4m ppd?] would cost less than a 3990x ($3000?) [3m ppd?] or a next gen threadripper 64c/128t ($???) [4m+ ppd?].