P5-133XL wrote:Client stats seem to indicate that ATI GPUs and NVIDIA GPUs are doing a similar amount of work. Since all WUs are benchmarked against both brands, why is NVIDIA getting much more PPD? The stats seem to contradict the argument that NVIDIA is so much faster than ATI and thus deserves more PPD.
Current client stats as of Nov. 8th, 2008 wrote:
OS Type       Current TFLOPS*   Active CPUs
ATI GPU       475               4316
NVIDIA GPU    1751              15922
From those numbers I get that ATI is producing 0.110 TFLOPS per client and NVIDIA is producing 0.1099 TFLOPS per client. It has been my observation that NVIDIA gets over twice the PPD of an equivalent ATI card. These numbers seem to be contradictory.
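As a quick sanity check on that per-client arithmetic, here is a minimal sketch in Python using only the figures quoted above:

    # Per-client throughput from the Nov. 8th, 2008 client stats quoted above.
    stats = {
        "ATI GPU":    (475, 4316),     # (reported TFLOPS, active clients)
        "NVIDIA GPU": (1751, 15922),
    }
    for name, (tflops, clients) in stats.items():
        print(f"{name}: {tflops / clients:.4f} TFLOPS per client")
    # Output:
    #   ATI GPU: 0.1101 TFLOPS per client
    #   NVIDIA GPU: 0.1100 TFLOPS per client

Either way you round it, the reported throughput per client is essentially identical for the two brands.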
So is it that a disproportionate number of low-end NVIDIA cards is dragging down that brand's average, while a larger proportion of high-end ATI cards props up theirs, so that the average TFLOPS does not correlate with the benchmarking?
Why do you assume TFLOPS is equivalent to scientific value? PPD should represent that, not maximum floating-point operations per second, which is, by the way, only a synthetic number.
Why do you think a card which gets a certain TFLOPS rating from its vendor will reach that potential utilising code bases still in early beta stages?
And most importantly, why are the results skewed? They're not, considering the current benchmark system.
If you ask whether it's a fair system, no, I don't think so. But since it's in place, there are no skewed results. You complain that NVIDIA gets too many points? Want to come to xs and tell the team there that? You'll get lynched, because we just had the 5748 bomb dropped on us. PPD plummeted and people were screaming bloody murder, ready to get the nooses and lynch someone. But still, with the current points system, the PPD is in line with what most people expected. NVIDIA got the better hand with the smaller WUs, but when the atom size increases, NVIDIA will take the bigger hit compared to ATI. The results aren't skewed at all; they simply reflect the benchmarking machine being an ATI card.
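For what it's worth, the PPD side of this is just throughput math. A minimal sketch, assuming the usual scheme where each WU carries a fixed point value set on the benchmark machine (the numbers below are made up for illustration, not real WU values):

    # Hypothetical illustration: PPD = points per WU * WUs completed per day.
    # The point value is fixed per WU by the benchmark machine, so a card that
    # finishes the same WU faster earns proportionally more PPD.
    points_per_wu = 480        # made-up base credit for one WU
    seconds_per_wu = 7200      # made-up wall-clock time on a given card
    ppd = points_per_wu * (86400 / seconds_per_wu)
    print(ppd)                 # 5760.0 points per day

Halve the completion time and the PPD doubles, which is why a points system pinned to one vendor's benchmark card can look generous to whichever architecture happens to chew through the current crop of WUs fastest.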