codysluder wrote:FahCore_11 looks fine to me. It's running at a higher priority than FahCore_a2, which is what's important. It's using 4% of one core, which is right. If you change it to a real-time priority so that it takes priority over anything else the computer is doing (which adds up to almost nothing), I don't think you'll see any difference.
Viper666 wrote:You just have to create a link named libcudart.so.2 pointing to the new libcudart.so.3.0.8, or rename the link that's already there from libcudart.so.3 to libcudart.so.2.
Either the DLL or the client looks for the older file name; I'm not sure which.
linux-gate.so.1 => (0xffffe000)
libcudart.so.2 => not found
libwine.so.1 => /usr/local/lib/libwine.so.1 (0xf7e4b000)
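The renaming trick above amounts to a compatibility symlink: give the new CUDA runtime the old name the loader is asking for. A minimal sketch, done in a scratch directory with a stand-in file (on a real system you would run the `ln` line in whatever directory actually holds libcudart, then re-run `ldconfig` as root):

```shell
# Scratch-directory demonstration of the compatibility link
mkdir -p /tmp/cudademo && cd /tmp/cudademo
touch libcudart.so.3.0.8                    # stand-in for the real CUDA runtime
ln -sf libcudart.so.3.0.8 libcudart.so.2    # the older name the client looks for
readlink libcudart.so.2                     # prints: libcudart.so.3.0.8
```

After the real link is in place, re-running `ldd` should show `libcudart.so.2` resolving instead of "not found".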
Mindmatter wrote:The SMP client is set to idle priority, and both GPUs are set to low in the FAH config file. I start the SMP client with "nice -n 19" and no nice flags for the GPUs. Ksysguard and top both show the running cores for all clients set as expected: the fahcore_a1/a3 processes are at 19 and the fahcore_11/14 processes are at 0. Without the SMP client running, the GPUs almost double in PPD. I forced the fahcore_11/14 processes to run at real-time priority with chrt and the performance went down even more! I first set them to FIFO and then to round robin, thinking maybe they were interfering with each other, but there was no change. The SMP client's performance, meanwhile, was only minimally affected by the priority changes.
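For reference, the niceness that top reports can be reproduced with a dummy process; `sleep` stands in for a folding core here, and the `chrt` invocations mentioned above are shown commented out since they need root and change live scheduling:

```shell
# Start a dummy process at idle priority, the way the SMP client is launched
nice -n 19 sleep 30 &
pid=$!
ni=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "niceness: $ni"            # prints: niceness: 19
# The real-time experiment described above would be (root required):
#   chrt -f -p 50 "$pid"        # SCHED_FIFO
#   chrt -r -p 50 "$pid"        # SCHED_RR (round robin)
kill "$pid"
```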
I was running the two GPUs and one uniprocessor client and saw no performance issues at all. I may try two uniprocessor clients, but I would really like to fix this SMP issue. Why does the SMP client seem to override all the priorities?
My current solution is nowhere near optimal, but I use cores 2 and 3 for the GPUs and cores 0 and 1 for SMP, using "taskset -c 0,1" for SMP and ditto for each GPU.
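That pinning scheme can be sketched with `taskset`; the `./fah6` binary name and flags below are assumptions for illustration, and the runnable part just verifies the affinity mask on a dummy process:

```shell
# Intended layout (client binary name and flags assumed, not verified here):
#   taskset -c 0,1 nice -n 19 ./fah6 -smp     # SMP client on cores 0 and 1
#   taskset -c 2 ./fah6 -gpu 0                # first GPU client on core 2
#   taskset -c 3 ./fah6 -gpu 1                # second GPU client on core 3

# Runnable check: pin a dummy process to CPU 0 and read back its mask
taskset -c 0 sleep 30 &
pid=$!
mask=$(taskset -p "$pid" | awk '{print $NF}')
echo "affinity mask: $mask"                   # 1 = CPU 0 only
kill "$pid"
```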