130.237.232.140

Moderators: Site Moderators, FAHC Science Team

Bobby-Uschi
Posts: 70
Joined: Thu Jul 31, 2008 3:26 pm
Hardware configuration: PC1//C2Q-Q9450,GA-X48-DS5-NinjaMini,GTX285,2x160GB Western Sata2,2x1GB Geil800,Tagan 800W;XP Pro SP3-32Bit;
PC2//C2Q-Q2600k.GB-P67UD4-Freezer 7Pro,GTX285Leadtek,260 GB Western Sata2,4x2GB GeilPC3,OCZ600W;Win7-64Bit;Siemens 22"
Location: Deutschland

Re: 130.237.232.140

Post by Bobby-Uschi »

Wed May 5 01:20:10 PDT 2010 130.237.232.140 classic folding-3 kasson full Reject
Regards
PC1//C2Q-Q9450,GA-X48-DS5-,2xGTX285,2x160GB Western Sata2,2x1GB Geil800,Tagan 800W;XP Pro SP3-32Bit
PC2//C2Q-Q2600k.GB-P67UD4-Freezer 7Pro,GTX285Leadtek,260 GB Western Sata2,4x2GB GeilPC3,OCZ600W;Win7-64Bit;Siemens 22"
kg4icg
Posts: 53
Joined: Sat Jun 13, 2009 10:13 pm

Re: 130.237.232.140

Post by kg4icg »

Looks like the changes didn't keep. I have 2 WUs I can't get uploaded, and I can't get a new WU. Oh boy.
Macaholic
Site Moderator
Posts: 811
Joined: Thu Nov 29, 2007 11:57 pm
Location: 1 Infinite Loop

Re: 130.237.232.140

Post by Macaholic »

kg4icg wrote:Looks like the changes didn't keep, Have 2 wu's I can't get uploaded and can't get a new wu, oh boy.
The proper people have been notified. Thank you. :)
Fold! It does a body good!™
kasson
Pande Group Member
Posts: 1459
Joined: Thu Nov 29, 2007 9:37 pm

Re: 130.237.232.140

Post by kasson »

We're working on this server (the changes we did earlier were a band-aid and we're working on the real fix), so you may see it come up and down a bit. If it's up and you can return work but you have errors getting work, please let us know.
artoar_11
Posts: 657
Joined: Sun Nov 22, 2009 8:42 pm
Hardware configuration: AMD R7 3700X @ 4.0 GHz; ASUS ROG STRIX X470-F GAMING; DDR4 2x8GB @ 3.0 GHz; GByte RTX 3060 Ti @ 1890 MHz; Fortron-550W 80+ bronze; Win10 Pro/64
Location: Bulgaria/Team #224497/artoar11_ALL_....

Re: 130.237.232.140

Post by artoar_11 »

Is there a real danger that during the "Chimp Challenge 2010" the servers will be overloaded or WUs will be unavailable? Today I am seeing difficulty with the return and acceptance of WUs, in particular with A3.
I assume the PG is prepared for it.

Thanks
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona

Re: 130.237.232.140

Post by 7im »

Not likely. Read back through the thread. This issue started a few days ago. Chimps only just started today.

And no offense to the chimps, but their contest results in only a fractional increase in demand over the typical daily demand.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
kg4icg
Posts: 53
Joined: Sat Jun 13, 2009 10:13 pm

Re: 130.237.232.140

Post by kg4icg »

The 2 WUs that I had in queue ready to be sent were uploaded yesterday. Now I'm steadily crunching away as normal. Thanks.
extreme_nfsgame
Posts: 2
Joined: Tue May 18, 2010 3:47 pm
Hardware configuration: Q6700 @ 3,6GHz @ Wakü, ASUS Maximus Formula X38, 4GB DDR2-800 RAM, Geforce 9800GT @ 1835MHz Shader @ 1,20V @ Wakü, Be Quiet Straight Power E6 650W
Location: Hanover (Germany)

130.237.232.140

Post by extreme_nfsgame »

My SMP client couldn't send its work units. It says:

Code: Select all

[13:04:12] Completed 36920 out of 500001 steps  (7%)
[13:04:13] + Could not connect to Work Server (results)
[13:04:13]     (130.237.232.140:8080)
[13:04:13] + Retrying using alternative port
[13:04:30] + Could not connect to Work Server (results)
[13:04:30]     (130.237.232.140:80)
[13:04:30] - Error: Could not transmit unit 07 (completed May 19) to work server.


[13:04:30] + Attempting to send results [May 19 13:04:30 UTC]
[13:07:20] - Server does not have record of this unit. Will try again later.
[13:07:20]   Could not transmit unit 07 to Collection server; keeping in queue.
[13:08:19] Completed 40001 out of 500001 steps  (8%)
That's the second work unit today with this failure :(.
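If you want to check from your own end whether the work server is even accepting connections, a quick TCP probe of the two ports from the log is enough. This is just a sketch (host and ports copied from the log above); note that a successful connect doesn't guarantee uploads will work, since some logs in this thread show "Posted data." followed by a rejection:

```python
import socket

def probe(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Work server address and the two ports the client tries, per the log above
    for port in (8080, 80):
        status = "reachable" if probe("130.237.232.140", port) else "unreachable"
        print(f"130.237.232.140:{port} is {status}")
```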
mplee73
Posts: 15
Joined: Wed May 19, 2010 3:11 pm

Re: 130.237.232.140

Post by mplee73 »

It seems that I've got 2 that haven't been able to send to this server.

Code: Select all

[15:10:01] + Attempting to send results [May 19 15:10:01 UTC]
[15:10:01] - Reading file work/wuresults_00.dat from core
[15:10:01]   (Read 20550154 bytes from disk)
[15:10:01] Connecting to http://130.237.165.141:8080/
[15:10:02] - Couldn't send HTTP request to server
[15:10:02]   (Got status 503)
[15:10:02] + Could not connect to Work Server (results)
[15:10:02]     (130.237.165.141:8080)
[15:10:02] + Retrying using alternative port
[15:10:02] Connecting to http://130.237.165.141:80/
[15:10:02] - Couldn't send HTTP request to server
[15:10:02] + Could not connect to Work Server (results)
[15:10:02]     (130.237.165.141:80)
[15:10:02]   Could not transmit unit 00 to Collection server; keeping in queue.
[15:10:02] Project: 6014 (Run 0, Clone 191, Gen 151)

[15:10:02] + Attempting to send results [May 19 15:10:02 UTC]
[15:10:02] - Reading file work/wuresults_00.dat from core
[15:10:02]   (Read 20550154 bytes from disk)
[15:10:02] Connecting to http://130.237.232.140:8080/
[15:10:39] Posted data.
[15:10:39] Initial: 0000; + Could not connect to Work Server (results)
[15:10:39]     (130.237.232.140:8080)
[15:10:39] + Retrying using alternative port
[15:10:39] Connecting to http://130.237.232.140:80/
[15:11:11] Posted data.
[15:11:11] Initial: 0000; + Could not connect to Work Server (results)
[15:11:11]     (130.237.232.140:80)
[15:11:11] - Error: Could not transmit unit 00 (completed May 19) to work server.
[15:11:11] - 8 failed uploads of this unit.

[15:11:11] + Attempting to send results [May 19 15:11:11 UTC]
[15:11:11] - Reading file work/wuresults_00.dat from core
[15:11:11]   (Read 20550154 bytes from disk)
[15:11:11] Connecting to http://130.237.165.141:8080/
[15:11:12] - Couldn't send HTTP request to server
[15:11:12]   (Got status 503)
[15:11:12] + Could not connect to Work Server (results)
[15:11:12]     (130.237.165.141:8080)
[15:11:12] + Retrying using alternative port
[15:11:12] Connecting to http://130.237.165.141:80/
[15:11:12] - Couldn't send HTTP request to server
[15:11:12] + Could not connect to Work Server (results)
[15:11:12]     (130.237.165.141:80)
[15:11:12]   Could not transmit unit 00 to Collection server; keeping in queue.
[15:11:12] + Sent 0 of 2 completed units to the server
[15:11:12] - Failed to send all units to server
[15:11:12] ***** Got a SIGTERM signal (2)
[15:11:12] Killing all core threads
[15:11:12] Could not get process id information.  Please kill core process manually
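For the "Please kill core process manually" message at the end of that log, something like the following can find and terminate leftover core processes. A sketch under stated assumptions: a Unix-like OS with pgrep installed, and "FahCore" actually appearing in the core's command line (adjust the pattern to match your client):

```python
import os
import signal
import subprocess

def find_core_pids(pattern="FahCore"):
    """Return PIDs whose command line matches pattern (Unix-only; uses pgrep -f)."""
    try:
        out = subprocess.run(["pgrep", "-f", pattern],
                             capture_output=True, text=True, check=False)
    except FileNotFoundError:  # pgrep not available on this system
        return []
    return [int(line) for line in out.stdout.split()]

if __name__ == "__main__":
    for pid in find_core_pids():
        print(f"Sending SIGTERM to core process {pid}")
        os.kill(pid, signal.SIGTERM)  # polite shutdown; escalate only if it hangs
```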
ikerekes
Posts: 95
Joined: Thu Nov 13, 2008 4:18 pm
Hardware configuration: q6600 @ 3.3Ghz windows xp-sp3 one SMP2 (2.15 core) + 1 9800GT native GPU2
Athlon x2 6000+ @ 3.0Ghz ubuntu 8.04 smp + asus 9600GSO gpu2 in wine wrapper
5600X2 @ 3.19Ghz ubuntu 8.04 smp + asus 9600GSO gpu2 in wine wrapper
E5200 @ 3.7Ghz ubuntu 8.04 smp2 + asus 9600GT silent gpu2 in wine wrapper
E5200 @ 3.65Ghz ubuntu 8.04 smp2 + asus 9600GSO gpu2 in wine wrapper
E6550 vmware ubuntu 8.4.1
q8400 @ 3.3Ghz windows xp-sp3 one SMP2 (2.15 core) + 1 9800GT native GPU2
Athlon II 620 @ 2.6 Ghz windows xp-sp3 one SMP2 (2.15 core) + 1 9800GT native GPU2
Location: Calgary, Canada

Re: 130.237.232.140

Post by ikerekes »

And here is mine (same problem; it's really going to hurt the bonus):

Code: Select all

[13:54:25] Completed 145000 out of 500000 steps  (29%)
[13:57:04] - Autosending finished units... [May 19 13:57:04 UTC]
[13:57:04] Trying to send all finished work units
[13:57:04] Project: 6012 (Run 2, Clone 75, Gen 88)


[13:57:04] + Attempting to send results [May 19 13:57:04 UTC]
[13:57:04] - Reading file work/wuresults_02.dat from core
[13:57:04]   (Read 20537611 bytes from disk)
[13:57:04] Connecting to http://130.237.232.140:8080/
[13:58:20] Posted data.
[13:58:20] Initial: 0000; + Could not connect to Work Server (results)
[13:58:20]     (130.237.232.140:8080)
[13:58:20] + Retrying using alternative port
[13:58:20] Connecting to http://130.237.232.140:80/
[13:58:58] Posted data.
[13:58:58] Initial: 0000; + Could not connect to Work Server (results)
[13:58:58]     (130.237.232.140:80)
[13:58:58] - Error: Could not transmit unit 02 (completed May 19) to work server.
[13:58:58] - 4 failed uploads of this unit.


[13:58:58] + Attempting to send results [May 19 13:58:58 UTC]
[13:58:58] - Reading file work/wuresults_02.dat from core
[13:58:58]   (Read 20537611 bytes from disk)
[13:58:58] Connecting to http://130.237.165.141:8080/
[13:58:59] - Couldn't send HTTP request to server
[13:58:59] + Could not connect to Work Server (results)
[13:58:59]     (130.237.165.141:8080)
[13:58:59] + Retrying using alternative port
[13:58:59] Connecting to http://130.237.165.141:80/
[13:58:59] - Couldn't send HTTP request to server
[13:58:59]   (Got status 503)
[13:58:59] + Could not connect to Work Server (results)
[13:58:59]     (130.237.165.141:80)
[13:58:59]   Could not transmit unit 02 to Collection server; keeping in queue.
[13:58:59] + Sent 0 of 1 completed units to the server
[13:58:59] - Autosend completed
[14:00:32] Completed 150000 out of 500000 steps  (30%)
[14:06:39] Completed 155000 out of 500000 steps  (31%)
[14:12:46] Completed 160000 out of 500000 steps  (32%)
Last edited by ikerekes on Wed May 19, 2010 4:34 pm, edited 1 time in total.
mplee73
Posts: 15
Joined: Wed May 19, 2010 3:11 pm

Re: 130.237.232.140

Post by mplee73 »

As an update, I just noticed that another machine has a WU that it was unable to send to that server as well.
RAH
Posts: 131
Joined: Sun Dec 02, 2007 6:29 am
Hardware configuration: 1. C2Q 8200@2880 / W7Pro64 / SMP2 / 2 GPU - GTS250/GTS450
2. C2D 6300@3600 / XPsp3 / SMP2 / 1 GPU - GT240
Location: Florida

Re: 130.237.232.140

Post by RAH »

I have almost lost all the bonus for two WUs because of this. But what can PG do? It's not even on campus.
Pretty good: 8.5 hours to do the work, and 23 hours and counting to upload.
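To put numbers on why a slow upload hurts: the quick-return bonus scales base points by sqrt(k × deadline / elapsed), and elapsed runs until the WU is credited, not until folding finishes. All figures below are placeholders for illustration only, not this WU's real base points, k factor, or deadline:

```python
import math

def bonus_points(base, k, deadline_days, elapsed_days):
    """Quick-return bonus: base points scaled by sqrt(k * deadline / elapsed),
    never dropping below the base (multiplier floored at 1.0)."""
    multiplier = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base * multiplier

# Placeholder numbers, purely illustrative:
fast = bonus_points(base=600, k=2, deadline_days=4, elapsed_days=8.5 / 24)
slow = bonus_points(base=600, k=2, deadline_days=4, elapsed_days=(8.5 + 23) / 24)
print(f"credited after  8.5 h: {fast:.0f} points")
print(f"credited after 31.5 h: {slow:.0f} points")
```

The square root softens the decay, but nearly quadrupling the elapsed time still costs roughly half the total credit in this sketch.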
kasson
Pande Group Member
Posts: 1459
Joined: Thu Nov 29, 2007 9:37 pm

Re: 130.237.232.140

Post by kasson »

I think we just fixed the problem--work units are coming in smoothly. Sorry for the inconvenience, and please let us know if you continue to have problems.
Ragnar Dan
Posts: 52
Joined: Fri Dec 07, 2007 3:21 am
Location: U.S. (TechReport.com's Team 2630)

Re: 130.237.232.140

Post by Ragnar Dan »

I appear to have a problem with this server too, but not the same one as the others. I uploaded a WU overnight and appear not to have gotten any credit for it.

Code: Select all

[21:22:18] + Attempting to get work packet
[21:22:18] Passkey found
[21:22:18] - Will indicate memory of 2009 MB
[21:22:18] - Connecting to assignment server
[21:22:18] Connecting to http://assign.stanford.edu:8080/
[21:22:18] Posted data.
[21:22:18] Initial: ED82; - Successful: assigned to (130.237.232.140).
[21:22:18] + News From Folding@Home: Welcome to Folding@Home
[21:22:18] Loaded queue successfully.
[21:22:18] Connecting to http://130.237.232.140:8080/
[21:22:19] Posted data.
[21:22:19] Initial: 0000; - Receiving payload (expected size: 1799243)
[...]
[21:22:28] + Processing work unit
[21:22:28] Core required: FahCore_a3.exe
[21:22:28] Core found.
[21:22:28] Working on queue slot 06 [May 24 21:22:28 UTC]
[21:22:28] + Working ...
[21:22:28] - Calling './FahCore_a3.exe -dir work/ -nice 19 -suffix 06 -np 4 -checkpoint 10 -forceasm -verbose -lifeline 1074 -version 629'
[21:22:28] 
[21:22:28] *------------------------------*
[21:22:28] Folding@Home Gromacs SMP Core
[21:22:28] Version 2.19 (March 6, 2010)
[21:22:28] 
[21:22:28] Preparing to commence simulation
[21:22:28] - Assembly optimizations manually forced on.
[21:22:28] - Not checking prior termination.
[21:22:28] - Expanded 1798731 -> 2396877 (decompressed 133.2 percent)
[21:22:28] Called DecompressByteArray: compressed_data_size=1798731 data_size=2396877, decompressed_data_size=2396877 diff=0
[21:22:28] - Digital signature verified
[21:22:28] 
[21:22:28] Project: 6014 (Run 1, Clone 8, Gen 143)
[21:22:28] 
[21:22:28] Assembly optimizations on if available.
[21:22:28] Entering M.D.
[21:22:34] Completed 0 out of 500000 steps  (0%)
[21:28:50] Completed 5000 out of 500000 steps  (1%)
[21:35:06] Completed 10000 out of 500000 steps  (2%)
[21:41:22] Completed 15000 out of 500000 steps  (3%)
[21:47:38] Completed 20000 out of 500000 steps  (4%)
[21:53:53] Completed 25000 out of 500000 steps  (5%)
[22:00:09] Completed 30000 out of 500000 steps  (6%)
[22:06:25] Completed 35000 out of 500000 steps  (7%)
[22:12:40] Completed 40000 out of 500000 steps  (8%)
[22:18:55] Completed 45000 out of 500000 steps  (9%)
[22:25:10] Completed 50000 out of 500000 steps  (10%)
[...]
[07:41:39] Completed 495000 out of 500000 steps  (99%)
[07:47:54] Completed 500000 out of 500000 steps  (100%)
[07:47:54] DynamicWrapper: Finished Work Unit: sleep=10000
[07:48:04] 
[07:48:04] Finished Work Unit:
[07:48:04] - Reading up to 20457096 from "work/wudata_06.trr": Read 20457096
[07:48:04] trr file hash check passed.
[07:48:04] edr file hash check passed.
[07:48:04] logfile size: 57500
[07:48:04] Leaving Run
[07:48:08] - Writing 20550156 bytes of core data to disk...
[07:48:09]   ... Done.
[07:48:09] - Shutting down core
[07:48:09] 
[07:48:09] Folding@home Core Shutdown: FINISHED_UNIT
[07:48:09] CoreStatus = 64 (100)
[07:48:09] Unit 6 finished with 93 percent of time to deadline remaining.
[07:48:09] Updated performance fraction: 0.927403
[07:48:09] Sending work to server
[07:48:09] Project: 6014 (Run 1, Clone 8, Gen 143)


[07:48:09] + Attempting to send results [May 25 07:48:09 UTC]
[07:48:09] - Reading file work/wuresults_06.dat from core
[07:48:09]   (Read 20550156 bytes from disk)
[07:48:09] Connecting to http://130.237.232.140:8080/
[07:49:10] Posted data.
[07:49:10] Initial: 0000; - Uploaded at ~318 kB/s
[07:49:12] - Averaged speed for that direction ~300 kB/s
[07:49:12] + Results successfully sent
[07:49:12] Thank you for your contribution to Folding@Home.
[07:49:12] + Number of Units Completed: 41

[07:49:12] Trying to send all finished work units
[07:49:12] + No unsent completed units remaining.
[07:49:12] - Preparing to get new work unit...
[07:49:12] Cleaning up work directory
[07:49:12] + Attempting to get work packet
[07:49:12] Passkey found
[07:49:12] - Will indicate memory of 2009 MB
I expected something near 2,600 points for it (including the bonus for a turnaround under 10 hours 27 minutes), but so far I have gotten no points.
mplee73
Posts: 15
Joined: Wed May 19, 2010 3:11 pm

Re: 130.237.232.140

Post by mplee73 »

I'm having issues with this server again....

Code: Select all

[20:49:45] Assembly optimizations on if available.
[20:49:45] Entering M.D.
[20:49:46] - Couldn't send HTTP request to server
[20:49:46] + Could not connect to Work Server (results)
[20:49:46]     (130.237.232.140:8080)
[20:49:46] + Retrying using alternative port
[20:49:46] Connecting to http://130.237.232.140:80/
[20:49:48] - Couldn't send HTTP request to server
[20:49:48] + Could not connect to Work Server (results)
[20:49:48]     (130.237.232.140:80)
[20:49:48] - Error: Could not transmit unit 08 (completed June 6) to work server.
[20:49:48] - 6 failed uploads of this unit.


[20:49:48] + Attempting to send results [June 6 20:49:48 UTC]
[20:49:48] - Reading file work/wuresults_08.dat from core
[20:49:48]   (Read 20537162 bytes from disk)
[20:49:48] Connecting to http://130.237.165.141:8080/
[20:49:50] - Couldn't send HTTP request to server
[20:49:50] + Could not connect to Work Server (results)
[20:49:50]     (130.237.165.141:8080)
[20:49:50] + Retrying using alternative port
[20:49:50] Connecting to http://130.237.165.141:80/
[20:49:51] - Couldn't send HTTP request to server
[20:49:51] + Could not connect to Work Server (results)
[20:49:51]     (130.237.165.141:80)
[20:49:51]   Could not transmit unit 08 to Collection server; keeping in queue.
[20:49:51] + Sent 0 of 1 completed units to the server
[20:49:51] - Autosend completed