
Tensor Processing for FaH?

Posted: Thu Apr 20, 2023 2:19 am
by dapple26
I have been looking around trying to find out whether TPU cards such as the ASUS AI Accelerator PCIe card or the Coral Edge, both of which are supposed to boost the processing power of the system they are plugged into, could be used to improve my performance with Folding@home. Hard information on this is difficult to find right now.

Re: Tensor Processing for FaH?

Posted: Thu Apr 20, 2023 10:27 pm
by Joe_H
I looked up the specs for those TPU cards, and it is unlikely they would be useful for F@h. Calculations on these TPUs are 8-bit, while F@h uses mostly 32-bit single precision (FP32), with some double precision (FP64) calculations where needed to maintain accuracy.
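For anyone curious about how much the precision gap matters, here is a rough NumPy sketch (my own toy example, not anything from F@h or the TPU vendors) that sums a million small force-like contributions in FP64, FP32, and a crude 8-bit quantization:

[code]
import numpy as np

# Toy data: many small contributions, the kind of accumulation an MD step performs.
contributions = np.random.default_rng(0).normal(scale=1e-3, size=1_000_000)

ref = np.sum(contributions.astype(np.float64))    # FP64 reference
fp32 = np.sum(contributions.astype(np.float32))   # FP32, what F@h mostly uses

# Crude 8-bit quantization (256 levels over the value range), roughly the
# resolution an int8 TPU pipeline works with.
scale = np.abs(contributions).max() / 127
int8 = np.round(contributions / scale).astype(np.int8)
deq = np.sum(int8.astype(np.float64)) * scale

print(f"FP64 sum : {ref:.9f}")
print(f"FP32 sum : {fp32:.9f}  (error {abs(fp32 - ref):.2e})")
print(f"int8 sum : {deq:.9f}  (error {abs(deq - ref):.2e})")
[/code]

The int8 error is orders of magnitude larger than the FP32 error, which is the basic reason these accelerators are a poor fit for the simulations F@h runs.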

Re: Tensor Processing for FaH?

Posted: Fri Apr 21, 2023 11:29 am
by muziqaz
It is entirely possible that at some point in the future AI algorithms become stable and reliable enough to be incorporated into OpenMM, so that tensor cores could assist CUDA cores with simulations. However, using tensor cores as a replacement for CUDA cores will never happen, since they use very low precision, as Joe mentioned.
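As a side note, OpenMM already lets you see where precision enters the picture on the CUDA side. This is just my own illustration, and the 'Precision' property name assumes a reasonably recent OpenMM build:

[code]
from openmm import Platform

# OpenMM's CUDA platform exposes a 'Precision' property with values
# 'single', 'mixed', or 'double'; F@h-style work corresponds to FP32
# with FP64 used where accuracy demands it.
platform = Platform.getPlatformByName("CUDA")
platform.setPropertyDefaultValue("Precision", "mixed")
print(platform.getPropertyDefaultValue("Precision"))
[/code]

Even the lowest of those options is FP32, which is far above the 8-bit arithmetic these TPU cards are built around.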

Re: Tensor Processing for FaH?

Posted: Fri Apr 21, 2023 4:07 pm
by dapple26
Joe_H wrote: Thu Apr 20, 2023 10:27 pm I looked up the specs for those TPU cards, and it is unlikely they would be useful for F@h. Calculations on these TPUs are 8-bit, while F@h uses mostly 32-bit single precision (FP32), with some double precision (FP64) calculations where needed to maintain accuracy.
Thanks for making that clear before I spent money trying to make a tensor card run Folding.