Project: 2669 (Run 1, Clone 138, Gen 21)

Moderators: Site Moderators, FAHC Science Team

^w^ing
Posts: 136
Joined: Fri Mar 07, 2008 7:29 pm
Hardware configuration: C2D E6400 2.13 GHz @ 3.2 GHz
Asus EN8800GTS 640 (G80) @ 660/792/1700 running the 6.23 w/ core11 v1.19
forceware 260.89
Asus P5N-E SLi
2GB 800MHz DDRII (2xCorsair TwinX 512MB)
WinXP 32 SP3
Location: Prague

Project: 2669 (Run 1, Clone 138, Gen 21)

Post by ^w^ing »

This could be related to the problem with WUs running on only one core; the payload was even smaller:

Code:

[09:57:00] Initial: 0000; - Receiving payload (expected size: 1142751)
But it didn't start the WU, not even with the 2.08 core.
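
A side note on the sizes: the advertised payload (1142751 bytes) is exactly 512 bytes larger than the compressed data the core reports (1142239 bytes); my guess is that the difference is the work unit's signature/header, since the core logs "Digital signature verified" right after decompressing. The stated expansion ratio also checks out. A quick sanity check in plain C, with the numbers copied from the log (the header interpretation is an assumption, not confirmed):

Code:

#include <stdio.h>

int main(void) {
    /* Numbers copied from the core 2.08 log. */
    const double advertised   = 1142751.0;  /* "expected size" of the payload */
    const double compressed   = 1142239.0;  /* compressed_data_size           */
    const double decompressed = 17887233.0; /* data_size after decompression  */

    /* 512-byte difference; assumed (not confirmed) to be the
       work unit's signature/header. */
    printf("header bytes: %.0f\n", advertised - compressed);

    /* The core logs this, truncated, as "decompressed 1565.9 percent". */
    printf("expansion: %.2f percent\n", 100.0 * decompressed / compressed);
    return 0;
}

Anyway, here is the full log of the failed attempt: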

Code:

[09:56:51] - Autosending finished units... [August 27 09:56:51 UTC]
[09:56:51] Trying to send all finished work units
[09:56:51] + No unsent completed units remaining.
[09:56:51] - Autosend completed
[09:56:51] - Preparing to get new work unit...
[09:56:51] + Attempting to get work packet
[09:56:51] - Will indicate memory of 752 MB
[09:56:51] - Connecting to assignment server
[09:56:51] Connecting to http://assign.stanford.edu:8080/
[09:56:54] Posted data.
[09:56:54] Initial: 40AB; - Successful: assigned to (171.64.65.56).
[09:56:54] + News From Folding@Home: Welcome to Folding@Home
[09:56:54] Loaded queue successfully.
[09:56:54] Connecting to http://171.64.65.56:8080/
[09:57:00] Posted data.
[09:57:00] Initial: 0000; - Receiving payload (expected size: 1142751)
[09:57:10] - Downloaded at ~111 kB/s
[09:57:10] - Averaged speed for that direction ~109 kB/s
[09:57:10] + Received work.
[09:57:10] + Closed connections
[09:57:10] 
[09:57:10] + Processing work unit
[09:57:10] At least 4 processors must be requested.
[09:57:10] Core required: FahCore_a2.exe
[09:57:10] Core found.
[09:57:10] Working on queue slot 07 [August 27 09:57:10 UTC]
[09:57:10] + Working ...
[09:57:10] - Calling './mpiexec -np 4 -host 127.0.0.1 ./FahCore_a2.exe -dir work/ -suffix 07 -checkpoint 30 -forceasm -verbose -lifeline 4399 -version 624'

[09:57:10] 
[09:57:10] *------------------------------*
[09:57:10] Folding@Home Gromacs SMP Core
[09:57:10] Version 2.08 (Mon May 18 14:47:42 PDT 2009)
[09:57:10] 
[09:57:10] Preparing to commence simulation
[09:57:10] - Ensuring status. Please wait.
[09:57:19] - Assembly optimizations manually forced on.
[09:57:19] - Not checking prior termination.
[09:57:20] - Expanded 1142239 -> 17887233 (decompressed 1565.9 percent)
[09:57:20] Called DecompressByteArray: compressed_data_size=1142239 data_size=17887233, decompressed_data_size=17887233 diff=0
[09:57:20] - Digital signature verified
[09:57:20] 
[09:57:20] Project: 2669 (Run 1, Clone 138, Gen 21)
[09:57:20] 
[09:57:21] Assembly optimizations on if available.
[09:57:21] Entering M.D.
NNODES=4, MYRANK=0, HOSTNAME=ubuntu
NNODES=4, MYRANK=1, HOSTNAME=ubuntu
NNODES=4, MYRANK=2, HOSTNAME=ubuntu
NODEID=1 argc=22
NODEID=0 argc=22
NNODES=4, MYRANK=3, HOSTNAME=ubuntu
NODEID=2 argc=22
                         :-)  G  R  O  M  A  C  S  (-:

                   NODEID=3 argc=22
Groningen Machine for Chemical Simulation

                 :-)  VERSION 4.0.99_development_20090425  (-:


      Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
       Copyright (c) 1991-2000, University of Groningen, The Netherlands.
             Copyright (c) 2001-2008, The GROMACS development team,
            check out http://www.gromacs.org for more information.


                                :-)  mdrun  (-:

Reading file work/wudata_07.tpr, VERSION 3.3.99_development_20070618 (single precision)

-------------------------------------------------------
Program mdrun, VERSION 4.0.99_development_20090425
Source code file: smalloc.c, line: 147

Fatal error:
Not enough memory. Failed to calloc 2773589135 elements of size 4 for block->index
(called from file tpxio.c, line 1180)
For more information and tips for trouble shooting please check the GROMACS Wiki at
http://wiki.gromacs.org/index.php/Errors
-------------------------------------------------------

Thanx for Using GROMACS - Have a Nice Day
: Cannot allocate memory
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 4

gcq#0: Thanx for Using GROMACS - Have a Nice Day

[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
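
For what it's worth, the fatal calloc is asking for far more memory than could ever be satisfied here: 2773589135 elements of 4 bytes is roughly 10.3 GiB, against the 752 MB the client indicates and a 32-bit address space of at most 4 GiB. A back-of-the-envelope check in plain C, figures taken from the error above:

Code:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Figures copied from the fatal error above. */
    const uint64_t nmemb = 2773589135ULL; /* elements requested for block->index */
    const uint64_t size  = 4ULL;          /* bytes per element                   */
    const uint64_t bytes = nmemb * size;  /* total size of the request           */

    printf("calloc request : %llu bytes (~%.2f GiB)\n",
           (unsigned long long)bytes, bytes / (1024.0 * 1024.0 * 1024.0));
    printf("client reports : 752 MB of usable memory\n");
    printf("32-bit limit   : 4294967295 bytes (~4 GiB)\n");
    return 0;
}

A request that size cannot even be represented in a 32-bit size_t, so calloc has no choice but to return NULL, which smalloc.c then turns into this fatal error. An element count in the billions for block->index looks like a garbage value read out of the tpr, so my guess is a corrupted work unit rather than a genuine memory shortage on this machine.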