
Xeon E5-2670 performance?

Posted: Wed Feb 01, 2017 10:37 pm
by CeeVee
I'm thinking about hardware upgrades for F@H and one of the options I'm considering is building around a pair of Xeon E5-2670 processors which are currently available for less than £100 each.
I'm trying to work out what sort of PPD I'd get with this setup performing just CPU folding.
The benchmarks that I've seen have an AMD 8350 as a comparison, which I have in one of my machines.
The AMD 8350 gives me around 20,000 PPD, and the benchmarks have the dual Xeons at around 4 times the performance.
Putting some figures into the online bonus points calculator (assuming that the TPF for the Xeon is proportional to that of the AMD 8350) gives the Xeons a PPD of around 180,000.
Can anybody confirm, or otherwise, that this is a reasonable ball-park figure for this pair of Xeon processors?
I'm considering other options as well, but the PPD figures for those are more readily available; I just want to make sure I'm making a reasonable comparison between different hardware configurations.
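For anyone wanting to sanity-check such estimates, here is a rough sketch of the Quick Return Bonus maths that calculators like the one mentioned above are based on: credit = base_credit * max(1, sqrt(k * deadline / elapsed)). The base credit, k factor and deadline used in the example are placeholders, not values from any real project.

```python
import math

def estimate_ppd(tpf_seconds, base_credit, k_factor, deadline_days, frames=100):
    """Hedged PPD estimate from TPF using the published QRB formula.

    tpf_seconds: time per frame; a WU is assumed to have `frames` frames.
    base_credit, k_factor, deadline_days: per-project values (placeholders here).
    """
    elapsed_days = tpf_seconds * frames / 86400.0
    # Bonus multiplier; never less than 1 (base credit is always awarded).
    bonus = max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    credit_per_wu = base_credit * bonus
    wus_per_day = 1.0 / elapsed_days
    return credit_per_wu * wus_per_day
```

With k set to 0 the bonus collapses to 1, which corresponds to folding without a passkey; with a non-zero k, halving the TPF more than doubles the PPD, which is why extrapolating linearly from benchmark ratios tends to miss.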

Re: Xeon E5-2670 performance?

Posted: Wed Feb 01, 2017 10:50 pm
by Nathan_P
I doubt you will get 180k with a pair of 2670s unless you are really lucky with the WU allocation and constantly get Core_A7-based WUs that will utilise all 32 threads. Even then, my pair of 12c/24t Xeons only gets 220k PPD, and that's with 16 more threads. A more likely figure is 120-130k PPD. The days of good PPD from a pair of CPUs are long gone.

You are much better off getting a 1060 GPU - less power used and more points for around the same money.

Re: Xeon E5-2670 performance?

Posted: Thu Feb 02, 2017 12:56 am
by CeeVee
Thanks for the fast response. My estimates were based on extrapolating benchmark comparisons, and it looks like I've got the factors a bit high.
GPU upgrades are one of the other options I'm costing up.
I've recently put a GTX1060 into one of my machines and I'm averaging somewhere in the region of 230k PPD.
I guess I'll go rework my spreadsheets a bit more before I draw any firm conclusions.

Re: Xeon E5-2670 performance?

Posted: Thu Feb 02, 2017 1:55 am
by ChristianVirtual
But CPU-WU's also need some love ;-)

For that I got a used 1650v2, a single 6c/12t, folding 130k when on A7.

Re: Xeon E5-2670 performance?

Posted: Thu Feb 02, 2017 3:59 am
by CeeVee
Yes I agree.
If I was simply looking to max out my PPD then I'd go for GPU upgrades/additions.
It seems to me that the PPD values are aimed at the people donating computer power as a 'reward' system, but it strikes me that from the project's perspective it's really the WUs completed per day that matter.
I'm unclear from my reading what the balance is between GPU and CPU WUs, and what the future holds for this balance.
As someone who is going to upgrade my folding platforms, it would be nice to know the projected balance between the number of CPU and GPU WUs over the next few months and years.
I haven't found this information anywhere. If it is available, can somebody point me at it please?
I've currently got four machines folding, but only one of them has a 'foldable' GPU. The CPUs folding are all AMD: one 8-core and three 4-core. The least powerful is the 25W AMD 5370 in my HTPC, which runs 24/7 and folds 3 WUs per day for about 4,000 points. My AMD 8350 manages about 11 WUs per day for about 20,000 points, and the other two 4-core machines manage about 4 WUs per day for about 5,000 points.
One of my upgrade plans involves replacing the two 4-core non-HTPC processors with 8-core AMDs.
I'm still debating the decision between the various GTX graphics cards. I have a restricted budget, so it's coming down to a trade-off between how quickly PPD increases versus the eventual PPD, where 'eventual' is a somewhat vague point in the future.
I won't have any budget available until May, so I've got a while to sort out my choices.

Re: Xeon E5-2670 performance?

Posted: Thu Feb 02, 2017 8:21 am
by Nathan_P
ChristianVirtual wrote:But CPU-WU's also need some love ;-)

For that I got a used 1650v2, a single 6c/12t, folding 130k when on A7.
Oh, they need lots of love, and I have plenty of idle hardware waiting to give that love. But when I'm running a GPU box with a 1070 and a 1080 in it, pulling 415W from the wall for 1.5M PPD, and my 2P 24c/48t Xeon boxes are pulling 300W for 220k PPD, where is the incentive?

Once A7 is in general use I may look at it again, but for now six CPUs sit idle.

Re: Xeon E5-2670 performance?

Posted: Thu Feb 02, 2017 2:13 pm
by rwh202
Nathan_P wrote:where is the incentive?
Exactly. There either needs to be a reduction in the huge artificial QRB incentive seen by fast GPUs, or another huge artificial incentive applied to CPUs, while points remain the primary measure of output.
Right now, folding on my CPUs reduces my points since losing a fraction of a second TPF on pairs of 1080s costs more in bonus points than the i7 CPUs can earn.
I have started a CPU client after years of being purely GPU - a 4C/8T Xeon E3-1240 v5 for 30k PPD, or a tenth of what a similar-cost, similar-power GPU would produce.

Re: Xeon E5-2670 performance?

Posted: Fri Feb 03, 2017 12:46 am
by bruce
rwh202 wrote:Right now, folding on my CPUs reduces my points since losing a fraction of a second TPF on pairs of 1080s costs more in bonus points than the i7 CPUs can earn.
How much spare CPU resources are free?

The first rule is to dedicate a minimum of one CPU to support each GPU.

The second rule is that there should also be some additional CPU resources left unused. With either Intel HyperThreading or AMD's equivalent, pairs of threads share some resources. Then, too, even if you never open your browser or TeamViewer or whatever, there's always some background activity that has to steal CPU resources from somewhere. With one pair of GPUs, try reducing your CPU processing from (N-2) to (N-3) and see if your total PPD goes up. After allowing some free CPU resources so that FAHClient_xx never gets interrupted ... or if you manage to set those tasks to a higher priority ... you may still have enough CPU resources to provide a bump in your total PPD.
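Those two rules can be sketched as a little helper for picking the CPU slot's thread count. The spare-thread count is an assumption to experiment with, as the advice above suggests, not a fixed value:

```python
def cpu_slot_threads(total_threads, num_gpus, spare=1):
    """Suggest a 'cpus' value for the CPU folding slot.

    Reserves one thread per GPU (first rule) plus `spare` threads for the
    OS and background activity (second rule). Never returns less than 0.
    """
    usable = total_threads - num_gpus - spare
    return max(0, usable)
```

For example, on an 8-thread CPU driving two GPUs this gives 5 threads with one spare, i.e. the (N-3) suggested above; with spare=0 it gives the (N-2) starting point.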

Re: Xeon E5-2670 performance?

Posted: Fri Feb 03, 2017 4:01 pm
by rwh202
bruce wrote:
rwh202 wrote:Right now, folding on my CPUs reduces my points since losing a fraction of a second TPF on pairs of 1080s costs more in bonus points than the i7 CPUs can earn.
How much spare CPU resources are free?

The first rule is to dedicate a minimum of one CPU to support each GPU.

The second rule is that there should also be some additional CPU resources left unused. With either Intel HyperThreading or AMD's equivalent, pairs of threads share some resources. Then, too, even if you never open your browser or TeamViewer or whatever, there's always some background activity that has to steal CPU resources from somewhere. With one pair of GPUs, try reducing your CPU processing from (N-2) to (N-3) and see if your total PPD goes up. After allowing some free CPU resources so that FAHClient_xx never gets interrupted ... or if you manage to set those tasks to a higher priority ... you may still have enough CPU resources to provide a bump in your total PPD.
OK, thanks.

I was running CPU:6 on a 4C/8T system with a pair of GPUs and no other user applications on Linux when I last experimented. I've just rerun some tests with an i7 2600 and a pair of GTX 980s at CPU:0, CPU:4 and CPU:6.

Going from CPU:0 to CPU:4, averaged over 10 frames, the TPFs for the GPUs didn't increase significantly (0.3 seconds and 0.7 seconds) and the CPU generated 9285 PPD, giving a boost of 5870 PPD (or 0.7%, for a 10% increase in power draw).

Going from CPU:0 to CPU:6, averaged over 10 frames, the TPFs for the GPUs increased by 1.6 and 1.7 seconds and the CPU went up to 13746 PPD, giving a net loss of 282 PPD.

At CPU:6 it would appear that the CPU is being oversubscribed and the GPUs are taking a hit, as you thought. At CPU:4 there is still a slight hit, and the CPU does make net points, but whether it is worth the return on a finite power budget is debatable. The same would be true even if I could get CPU:5 or CPU:6 to run without impacting the GPUs.
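The asymmetry measured above falls straight out of the QRB maths: on a fast GPU WU the bonus multiplier is large, so even a small TPF increase costs a lot of points. A hedged illustration, using placeholder project values rather than numbers from any real WU:

```python
import math

def ppd(tpf_seconds, base_credit=20000, k_factor=2.0, deadline_days=3.0, frames=100):
    """PPD under the QRB formula: credit = base * max(1, sqrt(k * deadline / elapsed)).
    All project parameters here are placeholders for illustration only."""
    elapsed_days = tpf_seconds * frames / 86400.0
    bonus = max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    return base_credit * bonus / elapsed_days

# Cost of a 1.6 s TPF increase on a fast GPU WU; with these placeholder
# numbers the loss works out to tens of thousands of PPD, easily more than
# a midrange CPU slot earns in total.
loss = ppd(90.0) - ppd(91.6)
```

The square-root bonus means points scale faster than linearly with speed, so the faster the slot, the more a fixed TPF penalty costs.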

Re: Xeon E5-2670 performance?

Posted: Fri Feb 03, 2017 5:38 pm
by bruce
That's good information. (and I also like it because it confirms my suspicions.)

The next time you consider a similar test, you might want to also consider the check-pointing process. It's probably a lot of work to isolate these inconsistencies from your primary result... and it's a rather insignificant detail which can easily be ignored.

Ten frames is an arbitrary choice.

In fact, all WUs suspend processing briefly while they collect the data to write a checkpoint ... plus there's some overhead as the data passes through the cache to the disk. With some additional digging, maybe you can measure N frames which at least contain an equal number of checkpoints.

WUs using the CPU write a checkpoint every 15 minutes by default, and you can configure that to any value between 3 and 30 minutes.

The checkpoint frequency for GPU WUs is set by the project owner. It's revealed in the private log found inside the work files for that project, in messages like "Checkpoint write frequency: 200000 (20%)" or "Checkpoint write frequency: 125000 (2.5%)", more or less 80 lines from the top of that log (in Core_21, and probably somewhere in Core_18). Actual timing, of course, starts from the last time the WU was (re-)started and can be observed by looking at the times certain files are written in /work/*. (More details if you want them.)

I'm not suggesting that you SHOULD consider this ... I'm just giving you the information in case you WANT to.
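If you do want to dig those numbers out programmatically, a small parser for log lines of the form quoted above might look like this. The exact wording and log location vary by core, so treat the pattern as an assumption to adjust against your own work files:

```python
import re

def parse_checkpoint_frequency(log_text):
    """Extract (steps, percent) from a core log line such as
    'Checkpoint write frequency: 200000 (20%)'. Returns None if absent.
    The line format is assumed from the examples quoted in this thread."""
    m = re.search(r"Checkpoint write frequency:\s*(\d+)\s*\(([\d.]+)%\)", log_text)
    if not m:
        return None
    return int(m.group(1)), float(m.group(2))
```

Feeding it the whole log text is fine; `re.search` stops at the first match.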

Re: Xeon E5-2670 performance?

Posted: Fri Dec 23, 2022 11:44 am
by Duce H_K_
Excuse me if I haven't found a more appropriate topic for this, but thank you, doctors, for warming up the PPD on old machines. A7 gave me 130,000; now it peaks up to 200k!


Re: Xeon E5-2670 performance?

Posted: Fri Dec 23, 2022 3:12 pm
by Joe_H
Yeah, Core_A8 supports the use of AVX2; as I recall, A7 just supported SSE2 or AVX. So you get a bit of a boost from the ability to more fully utilize your Xeon. A8 also uses a later revision of the GROMACS software package, which has some new optimizations for processing the data.

Core_A9 is also out in beta testing, and some recent projects that use it have been released. So far it has only been released for CPU projects, though A9 can also be used on GPUs; further testing is needed to evaluate how best to use it in that mode before it is publicly released.