Lucid Virtu and Folding At Home

Moderators: Site Moderators, PandeGroup

Lucid Virtu and Folding At Home

Postby Banner » Tue Aug 30, 2011 8:25 pm


In the below text, iGPU = internal GPU (Sandy Bridge), dGPU = discrete GPU (NVidia GTX 570).


I just installed Folding@Home onto a newly assembled system.

Folding@Home works fine under "standard" conditions.

However, under other conditions, when I select "Display" from the Folding@Home menu, I receive the message "viewer.exe has stopped working".

The "other condition" (under which viewer.exe crashes) is that I am using Lucid Virtu (software that virtualizes GPU resources) in i-mode.

What that means is:

* My BIOS is configured to use the iGPU rather than the dGPU.

* My monitor is connected to the motherboard (i.e., the iGPU).

* When an application in Virtu's application-list is run, Virtu intercepts requests for GPU resources, uses the most suitable GPU depending on the need (e.g., for video
transcoding, the iGPU would be used, for games the dGPU would be used).

* When the dGPU is used, the content of the dGPU's frame buffer is copied to the iGPU's frame buffer. Of course, with Folding@home, there is no frame-buffer content to transfer.

I have added both the folding@home application and the viewer.exe application to Virtu's application list.

My questions are:

1) Has anyone gotten the viewer to work in Virtu i-mode?
2) In i-mode, is folding@home making use of the dGPU (the GTX 570), even though the viewer program crashes?

Regarding question 2:

When I hover the mouse over the Folding@Home icon in the task bar, I receive a message of the form: <number>/50000.

I suspect that 50,000 is the number of "work units" in a downloaded "work package", and that <number> is the number of completed work units.

Based on this, progress is being made when the system is in i-mode (the value of <number> increases).

How could work be performed? The only potential computational resources are:

1) The CPU.
2) The iGPU (on the Sandy Bridge processor).
3) The dGPU (GTX 570).

As for the CPU, Task Manager reports 0% CPU usage by the Folding@Home application. Certainly, the calculations might be performed by a service program to which the Folding@Home app is simply a client. However, my overall system CPU usage is essentially zero.

That seems to leave the GPUs.

It seems to me that in i-mode, <number> increases more slowly than when Virtu is not being used, which is consistent with Folding@Home using the iGPU.

Can Folding@Home utilize the internal Sandy Bridge GPU? I am using the NVidia version of folding@home, which uses CUDA.

Also, the folding@home application's release date precedes, I believe, that of Sandy Bridge.

However, it is conceivable that the software and APIs are layered in such a way that folding@home can utilize (though less efficiently) a non-NVidia GPU developed later than the
release of the folding@home app.

Although the viewer is certainly a nicety, I could live without it.

So, essentially, how can I determine whether the GTX 570 is being used by folding@home in Virtu i-mode?

My current intent is to simply record pairs of (folding@home <number>, system time) in both environments so that I can determine with certainty whether <number> increases
more slowly in i-mode.

Even ignoring Virtu considerations, I presently don't know whether the increase of <number> is linear with time or even with absolute certainty whether <number> has the
meaning I suspect (I can likely determine the latter with a little browsing).
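The comparison I have in mind amounts to computing a step rate from a few hand-recorded readings in each mode. A minimal sketch (the timestamps and counts below are made-up placeholders, not real measurements):

```python
from datetime import datetime

def step_rate(samples):
    """Average steps per hour from (timestamp, step_count) samples.

    `samples` is a list of (ISO-8601 timestamp string, step count) pairs,
    recorded by hand from the tray icon's <number>/50000 readout.
    """
    times = [datetime.fromisoformat(t) for t, _ in samples]
    steps = [count for _, count in samples]
    elapsed_hours = (times[-1] - times[0]).total_seconds() / 3600
    return (steps[-1] - steps[0]) / elapsed_hours

# Hypothetical readings taken an hour apart in each mode:
i_mode = [("2011-08-30T20:00:00", 1000), ("2011-08-30T21:00:00", 1450)]
d_mode = [("2011-08-30T22:00:00", 1450), ("2011-08-30T23:00:00", 2050)]

print(step_rate(i_mode))  # steps/hour with Virtu in i-mode
print(step_rate(d_mode))  # steps/hour without Virtu (or in d-mode)
```

If the i-mode rate is consistently lower, that would support the theory that the dGPU is not being used there.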

Any feedback appreciated.


As you can see below, DxDiag reports the iGPU as the display device.

Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.110622-1506)
Language: English (Regional Setting: English)
BIOS: BIOS Date: 05/18/11 21:36:37 Ver: 04.06.04
Processor: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz (8 CPUs), ~3.4GHz
Memory: 8192MB RAM
Available OS Memory: 7912MB RAM
Page File: 2557MB used, 13265MB available
Windows Dir: C:\Windows
DirectX Version: DirectX 11
DX Setup Parameters: Not found
User DPI Setting: Using System DPI
System DPI Setting: 96 DPI (100 percent)
DWM DPI Scaling: Disabled
DxDiag Version: 6.01.7601.17514 64bit Unicode

Display Devices
Card name: Intel(R) HD Graphics Family
Manufacturer: Intel Corporation
Chip type: Intel(R) HD Graphics Family
DAC type: Internal
Last edited by Banner on Wed Aug 31, 2011 1:00 am, edited 1 time in total.

Re: Lucid Virtu and Folding At Home

Postby Zagen30 » Tue Aug 30, 2011 10:28 pm

Welcome to the forum, Banner.

I'm not sure how many other people use Lucid Virtu, so you may not get many (or any) specific responses.

The SB GPU cannot fold. One goal of the continuing development is an OpenCL core that can run on any OpenCL-capable hardware, but that hasn't happened yet. As of now only Nvidia and AMD GPUs can fold, and they have separate cores (cores are the software that runs the calculations).

Assuming you're running v6 of the client (if you got it from this page, then that's what it is), the viewer in that never worked properly. v7 does have a working viewer if you care enough about that feature. It's still in beta but it's fairly stable. Both clients use the same core, so there's no real performance impetus to upgrade, just cosmetic and possibly ease-of-use considerations.

The xxxxx/50000 does indicate progress, but not in the terms you think. Nothing is really referred to as a Work Packet; the thing you download, crunch, and return is referred to as a Work Unit, and you can only download one at a time (not counting v7, where you can download a second one as the first one nears completion, but still nowhere near 50k at a time). The number does indicate completion, but I forget exactly what units it's in.

Most people use Points Per Day to measure their speed; the basic way to do that is to take the point value for the current project, observe the time it takes to complete each percentage of the WU (you can find it in the log file, called FAHlog.txt), and extrapolate the results to figure out the total points you could expect to earn in a day. Points Per Day are normalized on the benchmark hardware; for example, Nvidia GPU projects all earn the same PPD on the benchmark GTX 460 no matter how long they take. The other numbers are not, so while it could help you determine speed over the course of the same WU, comparison between WUs from different projects doesn't say much unless you're talking about PPD. There are software programs that can do PPD calculations for you; see the Tools thread in the 3rd Party Software sub-forum for a list of them.
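The extrapolation described above is straightforward arithmetic. A sketch, using invented numbers rather than any real project's point value:

```python
def points_per_day(wu_points, seconds_per_percent):
    """Extrapolate Points Per Day from the per-1%-frame time in FAHlog.txt.

    wu_points: point value of the current project.
    seconds_per_percent: observed time to complete 1% of the WU.
    """
    seconds_per_wu = 100 * seconds_per_percent   # a full WU is 100 frames
    wus_per_day = 86400 / seconds_per_wu         # 86400 seconds in a day
    return wu_points * wus_per_day

# Hypothetical example: a 1500-point project completing 1% every 90 seconds.
print(points_per_day(1500, 90))  # -> 14400.0 PPD
```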

If you wanted to determine GPU utilization, there are several tools that could do it. I'm almost certain GPU-Z or MSI Afterburner would indicate utilization; I know EVGA Precision does.

Re: Lucid Virtu and Folding At Home

Postby bruce » Wed Aug 31, 2011 5:09 am

The 50000 is in units that are directly related to units of simulated time. In most cases, you will be seeing "steps per total-steps" reported and in your case, one Work Unit is completed when your hardware has calculated all 50000 steps. At that point, the results will be uploaded, a new WU will be assigned and you will start on the next WU.

There is a wide variation in the characteristics of Projects, but all WUs from a specific Project are pretty much the same. The protein has a specific number of atoms and each WU contains a specific number of steps. A WU from a different project may or may not have a similar number of atoms or a similar number of steps and, in fact, a step may not represent the same amount of simulated time, although they often do.

Progress on a single WU is generally measured in Percent, so you'll probably see reports of 500/50000 followed by 1000/50000, 1500/50000 etc. which is just another way of reporting each 1% of progress toward the completion of the WU.
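In other words, the readout maps to percent complete like this (a trivial sketch, assuming the 50000-step total from the post above):

```python
def percent_complete(steps_done, total_steps=50000):
    """Convert the tray icon's <steps>/<total> readout to percent complete."""
    return 100.0 * steps_done / total_steps

print(percent_complete(500))   # -> 1.0 (the first 1% report)
print(percent_complete(1500))  # -> 3.0
```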

In the early days of GPU folding, Stanford developed a core in an attempt to use DirectX as an API for Folding. I don't know the details, but it didn't work well enough and they developed a core that worked with ATI's CAL by way of some middleware called Brook. That did work successfully. Later, NVidia released an API called CUDA and Stanford developed a core that would work with it. Based on those two developments, FAH runs on sufficiently powerful GPUs from either ATI or NV.

All of the GPU manufacturers have talked about supporting OpenCL but development has proceeded more slowly than anyone would have liked. Stanford has released a core that is folding successfully on ATI's version of OpenCL. They've also worked on a core that runs on NV's version of OpenCL. Hopefully those two paths will merge and they'll only need to support one GPU core -- but I don't know how soon (or even "if") that might happen.

I don't know a lot about the Sandy Bridge GPU or how/when Intel plans to support OpenCL, but that's potentially a future consideration, as Zagen30 has already said.

Re: Lucid Virtu and Folding At Home

Postby Banner » Sat Sep 03, 2011 9:11 pm

Thanks Zagen30 and Bruce for the info. Bruce, I found your info especially interesting.

Incidentally, a glance at the log files shows that my discrete GPU (NVidia GTX 570) is indeed being utilized in Virtu i-mode.

Re: Lucid Virtu and Folding At Home

Postby BertNZ » Sun Mar 18, 2012 3:02 am

Dear Banner,

This post is a little old now, but I figured hey, I'll share my thoughts and experiences with FAH and the LucidLogix Virtu "stuff". I can confirm I have a similar issue with the Viewer program, after enabling the LucidLogix "stuff" on my PC and connecting my monitor to the integrated graphics DVI port.

I have a nearly new i7 2700K clocked to 4.5 GHz and an MSI GTX570 graphics card plugged into an MSI Z68A-GD65 G3 motherboard. It serves as a gaming rig along with a machine at home to check email, dev, etc. I run Windows 7 Ult x64, and use the 6.41 GPU client and 6.34 CPU SMP client.

Why did I try the LucidLogix stuff?

I had been running F@H fine for about three months (both NV GPU and SMP CPU simultaneously :D ) but was having a bit of difficulty in a couple of programs with REALLY bad graphics performance (IE9 was one - though please don't scold me I use Chrome where possible :) ) while the FAH GPU client was running. I decided to give the LucidLogix stuff a go since, well, the CPU has a good enough GPU integrated for running 2D/3D desktop stuff and that might solve my performance woes while running the GPU client.

What did I find/experience?

After properly setting up the Lucid drivers and telling the config control panel app thingy to run the FAH core and VIEWER apps on the Discrete GPU, the FAH client works perfectly fine in both the "i mode" and "d mode" of the Lucid stuff. I'm completing work units just as fast as I was prior, and the FAH client is being properly assigned the discrete GPU. I can continue to run games while the FAH client is "paused" then resume work after finishing the game.

I noticed the same performance problems when running the FAH GPU client while the LucidLogix stuff was in "d mode" - though the viewer worked OK. So I changed to "i mode" and the viewer stopped working, but the IE9 performance issues went away. I'm not sure why, though the frame-buffer copying described above could be the cause.

To anyone considering using the LucidLogix Virtu stuff:

FAH will run fine in either i-mode or d-mode (i-mode = monitor plugged into the integrated port, d-mode = monitor plugged into (one of) the graphics card ports). Just make sure you tell the Lucid Control Panel app to run FAH on the discrete GPU. Otherwise it will quite rightly crash, because the Intel integrated GPU doesn't support CUDA.

Hope this helps someone
