Project 18251 very low PPD on RTX 2060s
Moderators: Site Moderators, FAHC Science Team
-
- Posts: 1165
- Joined: Wed Apr 01, 2009 9:22 pm
Hardware configuration: Asus Z8NA D6C, 2x X5670 @ 3.2 GHz, 12GB RAM, GTX 980ti, AX650 PSU, Win 10 (daily use)
Asus Z87 WS, Xeon E3-1230L v3, 8gb ram, KFA GTX 1080, EVGA 750ti , AX760 PSU, Mint 18.2 OS
Not currently folding
Asus Z9PE- D8 WS, 2 E5-2665@2.3 Ghz, 16Gb 1.35v Ram, Ubuntu (Fold only)
Asus Z9PA, 2 Ivy 12 core, 16gb Ram, H folding appliance (fold only) - Location: Jersey, Channel islands
Re: Project 18251 very low PPD on RTX 2060s
It's still happening with this project; my 2060 is currently pulling 620k PPD on it.
I will, however, check the RAM config on this machine and see if it needs 16GB.
-
- Posts: 534
- Joined: Fri Apr 03, 2020 2:22 pm
- Hardware configuration: ASRock X370M PRO4
Ryzen 2400G APU
16 GB DDR4-3200
MSI GTX 1660 Super Gaming X
Re: Project 18251 very low PPD on RTX 2060s
For anyone having this issue......
Use whatever method you can to check your work unit history. If you get project 18252, it is identical to 18251 but runs on Core 26. I was just made aware of this by @muziqaz in the Discord channel. If this fixes the problem some of us are having, I'm sure the researcher would like to know.
I haven't picked up any of the "new" one myself, so no comparison.
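(One quick way to check, if you keep the client logs around, is to grep them for the project numbers. The path below assumes a default v8 client install on Linux; adjust it for your own setup.)
$ grep -rho "Project: 1825[12]" /var/lib/fah-client/ 2>/dev/null | sort | uniq -c   # count of 18251 vs 18252 assignments in the logs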
And Nathan,
Though I did find a memory issue on my machine, fixing it helped nothing. I went from 16GB with some errors on one stick, to 8GB on the good memory stick, to 32GB on new replacement sticks, which checked out 100% on multiple MemTest86 runs. In my case, once I hit a certain date the units ran at about twice the TPF; my earlier runs gave normal performance levels.
Fold them if you get them!
-
- Posts: 1094
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 9950x, 7950x3D, 5950x, 5800x3D
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP - Location: London
- Contact:
Re: Project 18251 very low PPD on RTX 2060s
Hopefully, once core26 is fully fixed, researchers will be able to spread their wings with their projects and we can have a bigger sample size.

FAH Omega tester
Re: Project 18251 very low PPD on RTX 2060s
18251 Arc B570 TPF 4m12s
-
- Posts: 51
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
OK, I switched Z443 to prefer Alzheimer's projects to see if I can catch an 18252. It is trundling along at 2.2 MPPD at present, so we'll see.

BobWilliams757 wrote: ↑Thu Jan 23, 2025 6:33 pm
For anyone having this issue......
Use whatever method you can to check work unit history. If you get project 18252, it is identical to 18251, but with the Core 26. I was just made aware of this from @muziqaz in the Discord channel. IF this fixes the problem some of us are having, I'm sure the researcher would like to know.
I haven't picked up any of the "new" one myself, so no comparison.
Rev 1: LAR reports a 2060 (TU104 version) on 18252 vs the TU104 and TU106 on 18251, as below. Strange.
18252 [GeForce RTX 2060] Nvidia TU104 PPD Avg=3,198,300 Points/WU=561,832 Ave WU Time 4 hrs 13 mins
18251 [GeForce RTX 2060] Nvidia TU104 PPD Avg=2,213,500 Points/WU=894,502 Ave WU Time 10 hrs 42 mins
18251 [GeForce RTX 2060] Nvidia TU106 PPD Avg= 756,841 Points/WU=152,116 Ave WU Time 5 hrs 49 mins
-
- Posts: 51
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
Argh. In accordance with Murphy's Law, of course the next job for the TU106 on Z443 was 18251 and we are getting 0.6 M PPD with TPF 7m 16s and ETA 12 hours away. So that's it for today.
Rev 1, 3 Feb 2025: Waiting for another 18251 to finish a 14-hour run on Z443 at 0.6M PPD. Meanwhile, there is no change in the LAR data on 18252, so presumably it has been pulled and there is no point continuing the test. Z443 is reverting to prefer Cancer so as to avoid 18251s as far as possible.
Last edited by appepi on Mon Feb 03, 2025 2:26 am, edited 1 time in total.
-
- Posts: 1094
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 9950x, 7950x3D, 5950x, 5800x3D
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP - Location: London
- Contact:
Re: Project 18251 very low PPD on RTX 2060s
It is possible that 18252, along with core26, has been moved to Beta, away from full FAH, due to a few issues with the core. I know we made a request to do so, but I am not sure whether it has been done yet.
FAH Omega tester
-
- Posts: 51
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
Out of curiosity I decided to try 18251 with FAH under Ubuntu rather than W10. My Z440s' boot drives are Samsung 970 EVO NVMe drives on a PCIe 3.0 adapter card in their second PCIe 3.0 x16 slot, and from time to time I swap a W10 NVMe with an Ubuntu NVMe. I've been too busy folding lately to do this, so the Ubuntu install was still 20.04 LTS. Step 1 was to upgrade to 24.04 LTS via 22.04 LTS, and while that was happening I did some searches to see how FAH could be controlled by a user whose dim memories of BSD Unix in the 1980s are of only limited use with Ubuntu.
Discovery #1 was https://snapcraft.io/install/folding-at ... e90/ubuntu , which required me to type only one command in the Terminal, to load the whole thing including FahControl and the viewer, namely:
$ sudo snap install folding-at-home-fcole90
I did, and it worked immediately, with FAHControl appearing as an option in the "Education" menu, but ... It recognised the basic CPU and GPU hardware, and FAHControl showed that Donor Z442u had started its career with a 12-hour CPU job on 10 cores (which I will allow to finish, but it will be the only one it ever does). But what about the 2060 GPU? The log said that there were no CUDA or OpenCL devices to fold with, and thus no GPU. "Drivers," I thought, aided by zillions of posts on the topic of Ubuntu and NVIDIA drivers, let alone folding. Sure enough, the default drivers might not do the job, so I started at the top with the NVIDIA proprietary ones and, being an optimist, with the latest (550), which had the magic word "tested" after it. Reboot.
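(For anyone who prefers the command line to the "Additional Drivers" GUI, a typical sequence on Ubuntu looks something like the following; the exact package name depends on which driver branch the repositories offer:)
$ ubuntu-drivers devices              # list the GPU and the recommended driver
$ sudo apt install nvidia-driver-550  # install the proprietary 550 branch
$ sudo reboot
$ nvidia-smi                          # after reboot, confirm the driver and CUDA versions the client will see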
It worked a tiny bit better. FAHControl now recognised that the thing in the slot was a GPU, accepted a job, tried to run it, and failed repeatedly, so I got a "disabled GPU" for my trouble. I deleted the GPU slot and re-created it, and tried again; Murphy's Law 2.0 grabbed an 18251, which would have made for a perfect test except that it failed again and the GPU slot was disabled. OK, I went back a step to the 535 drivers, which, however, repeated the failures seen with the default drivers.
Aiming to upgrade my evidence from "accident" to "coincidence" status, I went back to the 550 proprietary drivers and repeated the above. I wasn't surprised when it failed again. But instead of failing repeatedly and being disabled, it suddenly started running a new job at about 2.5M PPD. Why? Core22, that's why. The other jobs were Core24. Now for the log extracts:
(1) With the 550 drivers, the GPU is recognised:
07:28:32: GPUs: 1
07:28:32: GPU 0: Bus:2 Slot:0 Func:0 NVIDIA:7 TU106 [Geforce RTX 2060]
07:28:32: CUDA Device 0: Platform:0 Device:0 Bus:2 Slot:0 Compute:7.5 Driver:12.4
07:28:32:OpenCL Device 0: Platform:0 Device:0 Bus:2 Slot:0 Compute:3.0 Driver:550.120
(2) But then a Core24 job fails repeatedly without getting off the ground:
07:32:42:WU01:FS01:Started FahCore on PID 3566
07:32:42:WU01:FS01:Core PID:3570
07:32:42:WU01:FS01:FahCore 0x24 started
07:32:42:WARNING:WU01:FS01:FahCore returned: FAILED_2 (1 = 0x1)
07:32:42:WARNING:WU01:FS01:Too many errors, failing
07:32:42:WU01:FS01:Sending unit results: id:01 state:SEND error:FAILED project:18238 run:432 clone:1 gen:62 core:0x24 unit:0x000000010000003e0000473e000001b0
07:32:42:WU01:FS01:Connecting to 158.130.118.23:8080
07:32:43:WU02:FS01:Connecting to assign1.foldingathome.org:80
07:32:44:WU01:FS01:Server responded WORK_ACK (400)
07:32:44:WU01:FS01:Cleaning up
(3) And then a Core22 job runs happily:
07:37:10:WU01:FS01:0x22:********************************************************************************
07:37:10:WU01:FS01:0x22:Project: 19502 (Run 11, Clone 1, Gen 452)
07:37:10:WU01:FS01:0x22:Reading tar file core.xml
07:37:10:WU01:FS01:0x22:Reading tar file integrator.xml
07:37:10:WU01:FS01:0x22:Reading tar file state.xml.bz2
07:37:10:WU01:FS01:0x22:Reading tar file system.xml.bz2
07:37:10:WU01:FS01:0x22:Digital signatures verified
07:37:10:WU01:FS01:0x22:Folding@home GPU Core22 Folding@home Core
07:37:10:WU01:FS01:0x22:Version 0.0.20
So to run the actual test I need to know what needs to be done to make Ubuntu 20.04 LTS and Core24 work together, and then I can test 18251 to see if that works also.
-
- Posts: 1094
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 9950x, 7950x3D, 5950x, 5800x3D
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP - Location: London
- Contact:
Re: Project 18251 very low PPD on RTX 2060s
The Snap version of FAH is broken; please do not use it. Download fahclient from the FAH website.
FAH Omega tester
-
- Posts: 1094
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 9950x, 7950x3D, 5950x, 5800x3D
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP - Location: London
- Contact:
Re: Project 18251 very low PPD on RTX 2060s
Ubuntu 20.04 will not work with core24. This is a GLIBC mismatch issue. 24.04 works fine with core24 and core23, but fails with core22; again, the same issue.
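(A quick way to see which side of the GLIBC mismatch a machine is on is to compare the GLIBC the distribution ships with the newest GLIBC symbol version a core binary asks for. The core path below is only an example; the client unpacks cores under its own data directory.)
$ ldd --version | head -n 1                                                  # GLIBC shipped by the distribution
$ objdump -T ./FahCore_24 | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1   # newest GLIBC version the core requires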
FAH Omega tester
-
- Posts: 51
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
1. I logged back in to report that the above setup was happily working through a Core23 job as below:
09:35:42:WU01:FS01:0x23:There are 4 platforms available.
09:35:42:WU01:FS01:0x23:Platform 0: Reference
09:35:42:WU01:FS01:0x23:Platform 1: CPU
09:35:42:WU01:FS01:0x23:Platform 2: OpenCL
09:35:42:WU01:FS01:0x23: opencl-device 0 specified
09:35:42:WU01:FS01:0x23:Platform 3: CUDA
09:35:42:WU01:FS01:0x23: cuda-device 0 specified
09:35:48:WU01:FS01:0x23:Attempting to create CUDA context:
09:35:48:WU01:FS01:0x23: Configuring platform CUDA
09:35:58:WU01:FS01:0x23: Using CUDA on CUDA Platform and gpu 0
09:35:58:WU01:FS01:0x23: GPU info: Platform: CUDA
09:35:58:WU01:FS01:0x23: GPU info: PlatformIndex: 0
09:35:58:WU01:FS01:0x23: GPU info: Device: NVIDIA GeForce RTX 2060
09:35:58:WU01:FS01:0x23: GPU info: DeviceIndex: 0
09:35:58:WU01:FS01:0x23: GPU info: Vendor: 0x10de
09:35:58:WU01:FS01:0x23: GPU info: PCI: 02:00:00
09:35:58:WU01:FS01:0x23: GPU info: Compute: 7.5
09:35:58:WU01:FS01:0x23: GPU info: Driver: 12.4
09:35:58:WU01:FS01:0x23: GPU info: GPU: true
2. Following the advice to download from FAH, I did (Beta 8.4.9), and when the package installer tries to install it, it reports "Same version is already installed" and offers me the options of "Reinstall package" (which is clear enough) or "Remove package" (which isn't, because it doesn't say which one is to be removed). Of course I don't know whether "same" is based on matching every bit of both packages or just their names, so it would be nice to know if what I got from the snap process is the "same" as what I got from FAH.
This leads to questions such as: what does "broken" mean? Does it mean that the result is bad science in undetectable ways? If so, Canonical should get rid of it entirely and issue a recall, as it were. Or does it mean that it used to be broken and has now perhaps been fixed so as to be the "same version" as I downloaded from FAH, which is what the package installer thinks? Apart from that, given that the servers are dispensing a mixed stream of Core22, Core23 and Core24 jobs, and no version of Ubuntu seems to be OK with all three, the only way I can avoid failures that affect the reputation of a donor will be to head back to Windows, which is OK because I was aiming to do that anyway so as to have better control over temperatures etc.
But I will take the "Reinstall Package" option and see what happens.
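(If the snap and the website build do turn out to be different animals, the tidiest switch is probably to remove the snap first and then install the official package; the .deb filename below is an assumption based on the 8.4.9 beta, so check the actual name on the download page.)
$ sudo snap remove folding-at-home-fcole90
$ sudo apt install ./fah-client_8.4.9_amd64.deb   # installs the package plus any dependencies
$ systemctl status fah-client                     # the v8 client should come up as a systemd service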
-
- Posts: 51
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
Hmmm .. the reinstall seems to have taken zero time, but I don't know if that is good or bad. Fahclient still seems to be running. I will try a restart.
-
- Posts: 1094
- Joined: Sun Dec 16, 2007 6:22 pm
- Hardware configuration: 9950x, 7950x3D, 5950x, 5800x3D
7900xtx, Radeon 7, 5700xt, 6900xt, RX 550 640SP - Location: London
- Contact:
Re: Project 18251 very low PPD on RTX 2060s
Reinstall is the best option. Remove means removing the existing one from the computer and probably installing the new one, which would be the same as reinstall, but in this instance I believe it will just remove the old version and leave it at that.
Broken means it causes various issues with connectivity and general FAH-related initialisation. I requested that Snap remove FAH from their repositories a few versions ago; clearly they did not listen.
But I see from your experience that the 8.4.9 snap edition works better.
Reinstall taking no time at all is a Linux thing, plus the lightweight nature of the client itself. That is a good thing, once you are used to Windows nonsense.
Ubuntu 22.04 is the safe bet right now to work with all the available cores, except maybe core22, but I think it is still OK for that core too. We are trying to push core22 out completely.

FAH Omega tester
-
- Posts: 51
- Joined: Wed Mar 18, 2020 2:55 pm
- Hardware configuration: HP Z600 (5) HP Z800 (3) HP Z440 (3)
ASUS Turbo GTX 1060, 1070, 1080, RTX 2060 (3)
Dell GTX 1080 - Location: Sydney Australia
Re: Project 18251 very low PPD on RTX 2060s
Well, after the reinstall (which had no obvious effect) the jobs continued to completion as they had been doing. Because I created the Ubuntu 24.04 version of Z442 as a new user called Z442u (my usual process), the experimentation and core failures only make Z442u unlikely to receive a bonus any time soon, and the reputation of the W10 version, Z442, remains intact. (I note that when I first started folding in 2020 I didn't really get how the Team/User/Client thing works, so each physical box is a "Donor". Given that I have seen folk complaining that some testing or error generated thousands of dumps that destroyed their reputation forever, my nonstandard structure is useful in the present case. The routinely updated donor stats tell me quickly enough if something is going wrong.)
User     WUs Finished   WUs Expired   % Finished   Bonus Active
Z442     4212           14            99.67%       True
Z442u    5              10            33.33%       False
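(For anyone who wants to watch donor stats like these without visiting the website, the public stats API can be queried directly; the endpoint below is my understanding of the current API, with Z442u as the example user.)
$ curl -s https://api.foldingathome.org/user/Z442u | python3 -m json.tool   # pretty-print the user's WU and points summary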
Re: Project 18251 very low PPD on RTX 2060s
They're out there. The Arc B570 has been picking up a lot of 18251s. I do notice that my 5600G goes to 100% CPU usage for about 30 seconds while it processes each 5% checkpoint, whereas other projects have one-second checkpoints.