BAD_WORK_UNIT, Project 14088

If you're new to FAH and need help getting started or you have very basic questions, start here.

Moderators: Site Moderators, FAHC Science Team

contrib854
Posts: 5
Joined: Mon Dec 11, 2017 10:54 am

BAD_WORK_UNIT, Project 14088

Post by contrib854 »

06:43:06:WU02:FS00:0xa7:Project: 14088 (Run 76, Clone 4, Gen 4)
06:43:06:WU02:FS00:0xa7:Unit: 0x000000070002894b5b105746546b36fa
06:43:06:WU02:FS00:0xa7:Reading tar file core.xml
06:43:06:WU02:FS00:0xa7:Reading tar file frame4.tpr
06:43:06:WU02:FS00:0xa7:Digital signatures verified
06:43:06:WU02:FS00:0xa7:Calling: mdrun -s frame4.tpr -o frame4.trr -cpt 15 -nt 3
06:43:06:WU02:FS00:0xa7:Steps: first=5000000 total=1250000
06:43:06:WU02:FS00:0xa7:ERROR:
06:43:06:WU02:FS00:0xa7:ERROR:-------------------------------------------------------
06:43:06:WU02:FS00:0xa7:ERROR:Program GROMACS, VERSION 5.0.4-20161122-4846b12ba-unknown
06:43:06:WU02:FS00:0xa7:ERROR:Source code file: /host/debian-stable-64bit-core-a7-sse-release/gromacs-core/build/gromacs/src/gromacs/mdlib/domdec.c, line: 6902
06:43:06:WU02:FS00:0xa7:ERROR:
06:43:06:WU02:FS00:0xa7:ERROR:Fatal error:
06:43:06:WU02:FS00:0xa7:ERROR:There is no domain decomposition for 3 ranks that is compatible with the given box and a minimum cell size of 2.16085 nm
06:43:06:WU02:FS00:0xa7:ERROR:Change the number of ranks or mdrun option -rdd or -dds
06:43:06:WU02:FS00:0xa7:ERROR:Look in the log file for details on the domain decomposition
06:43:06:WU02:FS00:0xa7:ERROR:For more information and tips for troubleshooting, please check the GROMACS
06:43:06:WU02:FS00:0xa7:ERROR:website at http://www.gromacs.org/Documentation/Errors
06:43:06:WU02:FS00:0xa7:ERROR:-------------------------------------------------------
06:43:11:WU00:FS00:Upload 21.70%
06:43:11:WU02:FS00:0xa7:WARNING:Unexpected exit() call
06:43:11:WU02:FS00:0xa7:WARNING:Unexpected exit from science code
06:43:11:WU02:FS00:0xa7:Saving result file ../logfile_01.txt
06:43:11:WU02:FS00:0xa7:Saving result file md.log
06:43:11:WU02:FS00:0xa7:Saving result file science.log
06:43:11:WU02:FS00:0xa7:Folding@home Core Shutdown: BAD_WORK_UNIT
06:43:11:WARNING:WU02:FS00:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
06:43:11:WU02:FS00:Sending unit results: id:02 state:SEND error:FAULTY project:14088 run:76 clone:4 gen:4 core:0xa7 unit:0x000000070002894b5b105746546b36fa
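
For context on the fatal error above: GROMACS parallelizes a run by domain decomposition, cutting the simulation box into one spatial cell per rank. Since 3 is prime, the only possible grid for 3 ranks is 3 x 1 x 1, so a single box edge would have to hold three cells of the minimum size reported in the log. As a rough sketch of the constraint (not the exact check GROMACS performs):

   required box edge >= ranks x minimum cell size
                      = 3 x 2.16085 nm
                      = 6.48255 nm

If no edge of this project's box is that long, mdrun cannot start with -nt 3, while 2 or 4 ranks allow grids (2 x 1 x 1, 2 x 2 x 1, ...) with smaller per-edge requirements.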

I guess this is an issue that has to be fixed by the project owner. I merely want to report it here to give the project owner the opportunity to do so.

Greetings
Mrmajik45
Posts: 11
Joined: Tue Aug 21, 2018 11:43 pm

Re: BAD_WORK_UNIT, Project 14088

Post by Mrmajik45 »

Can you cancel that project and do a new one?
ReactOS Donator ~ $5.00 | Linux Mint Donator ~ $1.00 in BTC
toTOW
Site Moderator
Posts: 6309
Joined: Sun Dec 02, 2007 10:38 am
Location: Bordeaux, France
Contact:

Re: BAD_WORK_UNIT, Project 14088

Post by toTOW »

Thanks for the report. I forwarded the information to the project owner.

contrib854> is there a reason for you to run only with 3 threads? You could temporarily fix the issue by running 2 or 4 threads.
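
For anyone who wants to try that: the CPU count of a slot can be changed in FAHControl (Configure > Slots, edit the CPU slot) or directly in the client's config.xml. A minimal sketch of the relevant fragment, assuming a single CPU slot with id 0 (the id and the rest of the file will differ per install):

   <config>
     <slot id="0" type="CPU">
       <!-- 4 (or 2) ranks decompose cleanly where 3 does not -->
       <cpus v="4"/>
     </slot>
   </config>

Restart the client afterwards so the new value takes effect.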

Folding@Home beta tester since 2002. Folding Forum moderator since July 2008.
contrib854
Posts: 5
Joined: Mon Dec 11, 2017 10:54 am

Re: BAD_WORK_UNIT, Project 14088

Post by contrib854 »

Mrmajik45 wrote: Can you cancel that project and do a new one?
FAH did this automatically.
toTOW wrote:
contrib854> is there a reason for you to run only with 3 threads? You could temporarily fix the issue by running 2 or 4 threads.
Three threads is the number FAH proposed: I have a 4-core CPU, one thread is reserved for managing the FAH slots, and that leaves three for the FAH CPU slot. I am reluctant to drop to 2 threads because that would slow down the computation of other projects as well.
toTOW
Site Moderator
Posts: 6309
Joined: Sun Dec 02, 2007 10:38 am
Location: Bordeaux, France
Contact:

Re: BAD_WORK_UNIT, Project 14088

Post by toTOW »

If you have an NVIDIA GPU, it's often better to leave 2 CPU threads free to feed it efficiently.
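
As a sketch of that layout on a 4-core machine (hypothetical slot ids again; the GPU slot's own options are omitted):

   <config>
     <slot id="0" type="CPU">
       <!-- 2 of 4 cores fold on the CPU; the rest feed the GPU and the OS -->
       <cpus v="2"/>
     </slot>
     <slot id="1" type="GPU"/>
   </config>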

Folding@Home beta tester since 2002. Folding Forum moderator since July 2008.