I followed the recommendation to reduce the size of my VM (number of CPUs from 4 to 2 and memory from 16 GB to 8 GB). After updating the configuration and restarting the VM, I was no longer able to access it via SSH.
The VM has an external IP.
The troubleshooting diagnostics run with gcloud do not show any error or issue in the logs, and everything looks fine in the firewall configuration.
I tried to create a new VM under my project (the same project as the original VM) and I cannot access it with SSH either. If I create a new project and a new VM instance under that new project, then I can SSH into it. So the problem seems to be related to the project itself.
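For reference, the gcloud checks I ran looked roughly like the following (a sketch; the zone and instance name are placeholders, and exact flags may vary with the gcloud version):

gcloud compute ssh myvm --zone=ZONE --troubleshoot
gcloud compute instances get-serial-port-output myvm --zone=ZONE
gcloud compute firewall-rules list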
I tried to access the VM via the serial port and I am getting these errors:
Mar 8 20:31:11 myvm systemd[1]: Started Google OSConfig Agent.
Mar 8 20:32:11 myvm OSConfigAgent[1173]: 2022-03-08T20:32:11.5643Z OSConfigAgent Critical main.go:100: Error parsing metadata, agent cannot start: network error when requesting metadata, make sure your instance has an active network and can reach the metadata server: Get http://169.254.169.254/computeMetadata/v1/?recursive=true&alt=json&wait_for_change=true&last_etag=0&timeout_sec=60: dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 8 20:32:11 myvm systemd[1]: google-osconfig-agent.service: Main process exited, code=exited, status=1/FAILURE
Mar 8 20:32:11 myvm systemd[1]: google-osconfig-agent.service: Failed with result 'exit-code'.
Mar 8 20:32:12 myvm systemd[1]: google-osconfig-agent.service: Service hold-off time over, scheduling restart.
Mar 8 20:32:12 myvm systemd[1]: google-osconfig-agent.service: Scheduled restart job, restart counter is at 4.
I am stuck and would appreciate your support. Any ideas or suggestions?
I tried to use the AWS SDK for Java sample code S3TransferProgressSample.java to upload large files to Amazon S3 storage (also posted in the AWS docs).
But when I try to upload an 11 GB file, the upload gets stuck at different points with the error message:
Unable to upload file to Amazon S3: Unable to upload part: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated (attached screenshot).
It looks like once an IOException occurs, the SDK is not able to retry the request (see below).
Has anyone encountered this? What is the best practice to resolve it? Any code is appreciated.
INFO: Received successful response: 200, AWS Request ID: 2B66E7669E24DA75
Jan 15, 2011 6:44:46 AM com.amazonaws.http.HttpClient execute
INFO: Sending Request: PUT s3.amazonaws.com /test_file_upload/autogenerated.txt Parameters: (uploadId: m9MqxzD484Ys1nifnX._IzJBGbCFIoT_zBg0xdd6kkZ4TAtmcG0lXQOE.LeiSEuqn6NjcosIQLXJeKzSnKllmw--, partNumber: 1494, )
Jan 15, 2011 6:45:10 AM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
**INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error**
Jan 15, 2011 6:45:10 AM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: Retrying request
Jan 15, 2011 6:45:12 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:12 AM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
**INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error**
Jan 15, 2011 6:45:12 AM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: Retrying request
Jan 15, 2011 6:45:13 AM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
**INFO: I/O exception (java.net.SocketException) caught when processing request: Connection reset by peer: socket write error**
Jan 15, 2011 6:45:13 AM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: Retrying request
Jan 15, 2011 6:45:13 AM com.amazonaws.http.HttpClient execute
**WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.**
Jan 15, 2011 6:45:14 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:14 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:14 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:15 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:16 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:16 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:17 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:19 AM com.amazonaws.http.HttpClient execute
WARNING: Unable to execute HTTP request: Unbuffered entity enclosing request can not be repeated.
Jan 15, 2011 6:45:19 AM com.amazonaws.http.HttpClient execute
....
Jan 15, 2011 6:45:21 AM com.amazonaws.http.HttpClient handleResponse
**INFO: Received successful response: 204, AWS Request ID: E794B8FCA4C3D007**
Jan 15, 2011 6:45:21 AM com.amazonaws.http.HttpClient execute
...
Jan 15, 2011 6:45:19 AM com.amazonaws.http.HttpClient execute
INFO: Sending Request: DELETE s3.amazonaws.com /test_file_upload/autogenerated.txt Parameters:
...
Jan 15, 2011 6:47:01 AM com.amazonaws.http.HttpClient handleErrorResponse
INFO: Received error response: Status Code: 404, AWS Request ID: 0CE25DFE767CC595, AWS Error Code: NoSuchUpload, AWS Error Message: The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
Try using the low-level API.
This will give you far more control when things go wrong, as they are likely to do with an 11 GB file.
Requests to and from S3 do fail from time to time. With the low-level API, you'll be able to retry a single part of the upload if it fails.
Refactoring the example in the Amazon docs a bit:
// Variables assumed from Step 1 of the AWS docs example this refactors:
// s3Client, existingBucketName, keyName, file, contentLength, partSize,
// partETags (a List<PartETag>) and initResponse (from initiateMultipartUpload).

// Step 2: Upload parts.
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
    // Last part can be less than 5 MB. Adjust part size.
    partSize = Math.min(partSize, (contentLength - filePosition));

    // Create request to upload a part.
    UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName(existingBucketName).withKey(keyName)
            .withUploadId(initResponse.getUploadId()).withPartNumber(i)
            .withFileOffset(filePosition)
            .withFile(file)
            .withPartSize(partSize);

    // Repeat the upload of this part until it succeeds.
    boolean anotherPass;
    do {
        anotherPass = false; // assume everything is OK
        try {
            // Upload part and add response to our list.
            partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
        } catch (Exception e) {
            anotherPass = true; // repeat
        }
    } while (anotherPass);

    filePosition += partSize;
}

// Step 3: Complete the multipart upload.
CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
        existingBucketName,
        keyName,
        initResponse.getUploadId(),
        partETags);

s3Client.completeMultipartUpload(compRequest);
Note: I am not a Java developer, so I could have messed things up syntactically, but hopefully this gets you going in the right direction. Also, you'll want to add a 'retry counter' to prevent an endless loop if the upload repeatedly fails.
As a side note, 404 errors can be thrown if you try to do a multipart upload to a key that is already under a multipart upload.
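If you suspect that is happening, one way to check is to list and abort any stale multipart uploads on the bucket. A rough sketch with the same SDK, reusing s3Client and existingBucketName from the snippet above (the request and model classes live in com.amazonaws.services.s3.model):

// List in-progress multipart uploads and abort the stale ones.
MultipartUploadListing uploads =
        s3Client.listMultipartUploads(new ListMultipartUploadsRequest(existingBucketName));
for (MultipartUpload upload : uploads.getMultipartUploads()) {
    System.out.println("Aborting upload " + upload.getUploadId() + " for key " + upload.getKey());
    s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
            existingBucketName, upload.getKey(), upload.getUploadId()));
}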
I think you should try the multipart upload API supported by AWS.
Check this out: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
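For instance, a minimal TransferManager upload looks roughly like this (a sketch; the credentials, bucket, key, and file path are placeholders you would replace with your own). TransferManager automatically splits large files into multipart uploads:

import java.io.File;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class SimpleUpload {
    public static void main(String[] args) throws Exception {
        TransferManager tm = new TransferManager(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")); // placeholders
        try {
            // Large files are uploaded as multipart uploads behind the scenes.
            Upload upload = tm.upload("my-bucket", "my-key", new File("/path/to/bigfile"));
            upload.waitForCompletion(); // blocks until the upload finishes or fails
        } finally {
            tm.shutdownNow(); // release the underlying thread pool
        }
    }
}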
Geoff Appleford's answer works for me.
However, I would add && retryCount < MAX_RETRIES to the while-loop condition and increment retryCount on every exception caught inside the loop, as sketched below.
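A minimal sketch of that change (MAX_RETRIES is a constant you would define yourself, and retryCount should be reset for each part if you keep the surrounding for loop):

int retryCount = 0;
boolean anotherPass;
do {
    anotherPass = false; // assume everything is OK
    try {
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
    } catch (Exception e) {
        anotherPass = true; // repeat this part
        retryCount++;
    }
} while (anotherPass && retryCount < MAX_RETRIES);

if (anotherPass) {
    // Still failing after MAX_RETRIES attempts; give up rather than loop forever.
    throw new RuntimeException("Giving up on part upload after " + MAX_RETRIES + " retries");
}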
Aviad
I wanted to add a comment to Geoff Appleford's answer, but SO wouldn't allow me to. In general, his answer to use the low-level API works fine, but even without the do-while loop, the way the for loop is designed gives built-in retry logic: in his code snippet the file position increases only when a part succeeds, otherwise you would be uploading the same part again.
I am trying to bring Selenium Grid up and running on my local machine, and below is the command that I ran:
java -jar selenium-server-standalone-2.28.0.jar -role webdriver -hub http://<IP>:4444/grid/register -host <IP>
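For context, this command only starts a node and tries to register it with a hub at http://<IP>:4444; the hub itself would normally be started separately with something along these lines (a sketch, since the hub invocation is not shown here):

java -jar selenium-server-standalone-2.28.0.jar -role hub

Running the node command above produces the following output: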
Nov 27, 2017 7:42:06 PM org.openqa.grid.selenium.GridLauncher main
INFO: Launching a selenium grid node
Nov 27, 2017 7:42:07 PM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: I/O exception (java.net.NoRouteToHostException) caught when connecting to the target host: No route to host (Host unreachable)
Nov 27, 2017 7:42:07 PM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: Retrying connect
Nov 27, 2017 7:42:07 PM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: I/O exception (java.net.NoRouteToHostException) caught when connecting to the target host: No route to host (Host unreachable)
Nov 27, 2017 7:42:07 PM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: Retrying connect
Nov 27, 2017 7:42:07 PM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: I/O exception (java.net.NoRouteToHostException) caught when connecting to the target host: No route to host (Host unreachable)
Nov 27, 2017 7:42:07 PM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: Retrying connect
Nov 27, 2017 7:42:07 PM org.openqa.grid.internal.utils.SelfRegisteringRemote startRemoteServer
WARNING: error getting the parameters from the hub. The node may end up with wrong timeouts.No route to host (Host unreachable)
19:42:07.460 INFO - Java: Oracle Corporation 25.131-b11
19:42:07.464 INFO - OS: Linux 4.10.0-38-generic amd64
19:42:07.477 INFO - v2.28.0, with Core v2.28.0. Built from revision 18309
19:42:07.582 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:5555/wd/hub
19:42:07.583 INFO - Version Jetty/5.1.x
19:42:07.584 INFO - Started HttpContext[/selenium-server,/selenium-server]
19:42:07.586 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@1d8d30f7
19:42:07.587 INFO - Started HttpContext[/wd,/wd]
19:42:07.587 INFO - Started HttpContext[/selenium-server/driver,/selenium-
Also, I am facing the same issue when running the above command in my virtual machine.
I have deployed a SOAP service using Tomcat. I pushed a single request to the SOAP service and it worked, but when I submitted a bulk of requests, Tomcat crashed.
I can see the error below in the log files:
SEVERE: Endpoint ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=7106] ignored exception: java.net.SocketException: Too many open files
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
at java.net.ServerSocket.implAccept(ServerSocket.java:450)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
at org.apache.tomcat.util.net.PoolTcpEndpoint.acceptSocket(PoolTcpEndpoint.java:408)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:71)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
at java.lang.Thread.run(Thread.java:595)
WARNING: Reinitializing ServerSocket
Jun 5, 2014 6:43:08 PM org.apache.tomcat.util.net.PoolTcpEndpoint acceptSocket
SEVERE: Endpoint null ignored exception: java.net.SocketException: Too many open files
java.net.SocketException: Too many open files
at java.net.ServerSocket.createImpl(ServerSocket.java:255)
at java.net.ServerSocket.getImpl(ServerSocket.java:205)
at java.net.ServerSocket.bind(ServerSocket.java:319)
at java.net.ServerSocket.<init>(ServerSocket.java:185)
at java.net.ServerSocket.<init>(ServerSocket.java:141)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.createSocket(DefaultServerSocketFactory.java:50)
at org.apache.tomcat.util.net.PoolTcpEndpoint.initEndpoint(PoolTcpEndpoint.java:293)
at org.apache.tomcat.util.net.PoolTcpEndpoint.acceptSocket(PoolTcpEndpoint.java:469)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:71)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
at java.lang.Thread.run(Thread.java:595)
Jun 5, 2014 6:43:08 PM org.apache.tomcat.util.net.PoolTcpEndpoint acceptSocket
WARNING: Restarting endpoint
Jun 5, 2014 6:43:08 PM org.apache.tomcat.util.net.PoolTcpEndpoint acceptSocket
SEVERE: Endpoint null shutdown due to exception: java.net.SocketException: Too many open files
java.net.SocketException: Too many open files
at java.net.ServerSocket.createImpl(ServerSocket.java:255)
at java.net.ServerSocket.getImpl(ServerSocket.java:205)
at java.net.ServerSocket.bind(ServerSocket.java:319)
at java.net.ServerSocket.<init>(ServerSocket.java:185)
at java.net.ServerSocket.<init>(ServerSocket.java:141)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.createSocket(DefaultServerSocketFactory.java:50)
at org.apache.tomcat.util.net.PoolTcpEndpoint.initEndpoint(PoolTcpEndpoint.java:293)
at org.apache.tomcat.util.net.PoolTcpEndpoint.acceptSocket(PoolTcpEndpoint.java:481)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:71)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
at java.lang.Thread.run(Thread.java:595)
Jun 5, 2014 6:43:08 PM org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
SEVERE: Caught exception (java.lang.ThreadDeath) executing org.apache.tomcat.util.net.LeaderFollowerWorkerThread@4d8d50, terminating thread
Can somebody tell me the cause of and solution to this problem?
The CloudBees SDK 1.1 is attempting to connect to localhost:8080 when I run commands.
Any idea what I need to do to fix it?
Example
bees app:info -v
# CloudBees SDK version: 1.1
Enter application ID (ex: account/appname) : account/app
API call: http://localhost:8080/api?timestamp=1344846702&v=1.0&api_key=KEY&action=application.info&app_id=account%2Fapp&format=xml&sig_version=1&sig=SIGN
Aug 13, 2012 6:31:42 PM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: I/O exception (java.net.ConnectException) caught when processing request: Connection refused
Aug 13, 2012 6:31:42 PM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: Retrying request
Aug 13, 2012 6:31:42 PM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: I/O exception (java.net.ConnectException) caught when processing request: Connection refused
Aug 13, 2012 6:31:42 PM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: Retrying request
Aug 13, 2012 6:31:42 PM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: I/O exception (java.net.ConnectException) caught when processing request: Connection refused
Aug 13, 2012 6:31:42 PM org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
INFO: Retrying request
ERROR: Connection refused
Thank you
Jono
Add bees.api.url=https\://api.cloudbees.com/api to ~/.bees/bees.config
After backing up ~/.bees, running bees init, and recreating my bees configuration directory, I noticed the extra entry in bees.config.
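In other words, the relevant part of ~/.bees/bees.config ends up looking like the line below. The file is in Java properties format, which is why the colon after https is escaped with a backslash; any other entries already in the file (such as your API key) stay as they are:

bees.api.url=https\://api.cloudbees.com/api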