How to get smaller WUs?

Posted: Tue Oct 13, 2020 2:16 pm
by MosleyCale
Hello,

I'm folding with quite a small config (3 to 4 CPU cores) and not full time.
I am getting WUs with a long time per frame on my machine (3 hours), so there is a good chance they won't be completed before the deadline.
Is there a way to configure the client (I am on Windows 10) to get smaller WUs?

Thanks

Re: How to get smaller WUs?

Posted: Tue Oct 13, 2020 4:27 pm
by Jonazz
I'm folding with 4 cores and haven't encountered a TPF of more than 6 minutes; that seems like a big difference.

Did you wait until the WU had completed 1-2% of the work before checking the TPF?
What are the specs of your computer?

Sad news

Posted: Tue Oct 13, 2020 5:09 pm
by Foliant
MosleyCale wrote:I'm folding with quite a small config (3 to 4 CPUs) and not full time.
I am getting WU with big time per frame on my machine (3 hours)

I'm also folding with 3 of the 4 cores of a Celeron J1900 - this CPU does not support AVX and is therefore slow.
The Celeron normally finishes shortly before the timeout, but I must admit it's running 24/7 (it's my NAS).
My current TPF for project 16806 (9, 729, 17) is 27 mins 45 secs.

MosleyCale wrote:Re: How to get smaller WUs?

I don't know a way to force small WUs - that question comes up constantly in the forum topics.
For GPUs, I've read they're working on improvements to the assignment logic, but there isn't a release date for it.
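For reference, the v7 client does expose an expert option that limits the *data size* of assigned WUs (though reportedly not their compute time), and the CPU slot's thread count can be set explicitly. A hypothetical sketch of the relevant config.xml fragments, assuming a standard v7 Windows install - option names and values here are from memory, so verify against your own client before relying on them:

```xml
<!-- Sketch of a v7 FAHClient config.xml fragment (option names assumed,
     not an official recipe for shorter-running WUs). -->
<config>
  <!-- Expert option: prefer WUs with smaller data payloads
       (small / normal / big). Limits transfer size, not runtime. -->
  <max-packet-size v='small'/>

  <!-- CPU slot folding on 3 of the machine's cores. -->
  <slot id='0' type='CPU'>
    <cpus v='3'/>
  </slot>
</config>
```

The same options can usually be set from FAHControl (Configure > Slots / Expert) instead of editing the file by hand; stop the client before editing config.xml directly.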


Regards
Patrick

Re: How to get smaller WUs?

Posted: Sat Oct 17, 2020 8:11 am
by bruce
Unlike some folks, I fold with both CPUs and GPUs. On a couple of old machines, I have quad-core CPUs without AVX plus a GPU, which means I have three CPU threads left over. I seem to finish assignments right around the timeout, so sometimes I get bonus points and sometimes I don't. They do run 24x7.

I have a couple of dual-core machines without AVX plus a GPU, which means I have only one thread available. I don't try to fold on those CPUs because I'd just be dumping perfectly good CPU assignments.

When FahCore_a8 is in full production, I may re-evaluate, because it's faster.