Maybe this is the right thread to tell the story of how a German donor fixed the Core_18 AMD performance issue on Windows for Core_21 using FahBench. I downloaded the FahBench 2.0 beta in mid-2015 and ran it on my AMD R9 280, but the score was bad compared to FahBench 1.2, matching the FahCore_18 experience.
I wanted to look into this, but the FahCores are closed source, which means you cannot build them yourself because the source code is not available.
So it is not easy for a software developer who is not part of the PandeGroup to check what is going wrong.
But FahBench is open source on GitHub, so with some technical knowledge you can build it from source and get your own FahBench.exe.
The common base for both the FahCores and FahBench is the OpenMM framework, which is also open source.
So I was able to build FahBench 2.0 beta and OpenMM 6.2 from scratch. You can even run real FAH work units in FahBench.
What a surprise when the performance score of my own FahBench 2.0 beta build on my AMD R9 280 was good again, the same as with FahBench 1.2.
So something had to be different between my build and the released FahBench 2.0 beta DLLs on GitHub.
We found that FahBench 2.0 beta and also the FahCores were built at Stanford on a machine with the Nvidia SDK installed, to support both OpenCL and CUDA.
On my machine, however, there was no Nvidia SDK but the AMD SDK, because I have an AMD R9 280 GPU.
When I built FahBench 2.0 beta/OpenMM 6.2 on my machine using the Nvidia SDK, I got the bad performance again.
So the solution was to build the OpenCL code with the AMD SDK. After the developers at Stanford were convinced of this, a new FahCore_21 beta core was built with the AMD SDK for OpenCL, and the bad AMD performance introduced with FahCore_18 was solved.
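For readers who want to try the same experiment, here is a minimal sketch of what "building the OpenCL code with the AMD SDK" looks like in practice. It assumes OpenMM's CMake-based build of that era and its `OPENCL_INCLUDE_DIR`/`OPENCL_LIBRARY` cache variables; the install paths are illustrative examples for a Windows machine with the AMD APP SDK, not the exact paths used back then.

```shell
# Configure OpenMM so its OpenCL platform links against the AMD APP SDK
# instead of an Nvidia SDK found elsewhere on the machine.
# Paths below are illustrative; adjust to your SDK install location.
cmake ../openmm \
  -DOPENCL_INCLUDE_DIR="C:/Program Files (x86)/AMD APP SDK/2.9/include" \
  -DOPENCL_LIBRARY="C:/Program Files (x86)/AMD APP SDK/2.9/lib/x86_64/OpenCL.lib"

# Build the configured project (Release configuration on Windows).
cmake --build . --config Release
```

The key point is only which OpenCL headers and library the build picks up at configure time; everything else about the build stays the same.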
This shows that donors can also support development if they have the knowledge.
We may have 100,000 donors but only 10 developers at Stanford.
They may be a little dismissive when a stranger tries to mess with the experts, but if you can prove your facts, it will be accepted.
And, as donors are used to, we just give and don't expect a thank you.
