Attach stdin of a Docker container via websocket API

I am using the chrome websocket client extension to attach to a running container calling the Docker remote API like this:
ws://localhost:2375/containers/34968f0c952b/attach/ws?stream=1&stdout=1
The container is started locally from my machine executing a jar in the image that waits for user input. Basically I want to supply this input from an input field in the web browser.
Although I am able to attach using the API endpoint, I am encountering a few issues - probably due to my limited understanding of the ws endpoint as well as the poor documentation - that I would like to resolve:
1) When sending data using the chrome websocket client extension, the frame appears to be transmitted over the websocket according to the network inspection tool. However, the process in the container waiting for input only receives the sent data when the websocket connection is closed - all at once. Is this standard behaviour? Intuitively you would expect the input to be forwarded to the process immediately.
2) If I attach to stdin and stdout at the same time, the Docker daemon gets stuck waiting for stdin to attach, with the result that I cannot see any output:
[debug] attach.go:22 attach: stdin: begin
[debug] attach.go:59 attach: stdout: begin
[debug] attach.go:143 attach: waiting for job 1/2
[debug] server.go:2312 Closing buffered stdin pipe
[error] server.go:844 Error attaching websocket: use of closed network connection
I have solved this by opening two separate connections for stdin and stdout (roughly as in the sketch below), which works but is really annoying. Any ideas on this one?
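For illustration, here is a minimal sketch of that two-connection workaround using the JDK's built-in java.net.http.WebSocket client (Java 11+). The container ID and query parameters are the ones from the question; depending on the Docker version the daemon may deliver output as binary rather than text frames, so treat the text-only handling here as an assumption to verify.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class DockerAttach {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "ws://localhost:2375/containers/34968f0c952b/attach/ws";

        // Connection 1: read-only, receives the container's stdout.
        WebSocket stdout = client.newWebSocketBuilder()
                .buildAsync(URI.create(base + "?stream=1&stdout=1"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                        System.out.print(data);   // print whatever the container writes
                        ws.request(1);            // ask for the next frame
                        return null;
                    }
                })
                .join();

        // Connection 2: write-only, feeds the container's stdin.
        WebSocket stdin = client.newWebSocketBuilder()
                .buildAsync(URI.create(base + "?stream=1&stdin=1"), new WebSocket.Listener() {})
                .join();

        // Include the trailing newline so the process sees a complete input line.
        stdin.sendText("hello from the client\n", true).join();

        // Give the stdout connection a moment to deliver the container's reply.
        Thread.sleep(5_000);
    }
}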
Thanks in advance!

Related

Kafka Connect S3 source throws java.io.IOException

Kafka Connect S3 source connector throws the following exception around 20 seconds into reading an S3 bucket:
Caused by: java.io.IOException: Attempted read on closed stream.
at org.apache.http.conn.EofSensorInputStream.isReadAllowed(EofSensorInputStream.java:107)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:133)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
The error is preceded by the following warning:
WARN Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use. (com.amazonaws.services.s3.internal.S3AbortableInputStream:178)
I am running Kafka Connect from this image: confluentinc/cp-kafka-connect-base:6.2.0, using the confluentinc-kafka-connect-s3-source-2.1.1 jar.
My source connector configuration looks like so:
{
  "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
  "tasks.max": "1",
  "s3.region": "eu-central-1",
  "s3.bucket.name": "test-bucket-yordan",
  "topics.dir": "test-bucket/topics",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
  "schema.compatibility": "NONE",
  "confluent.topic.bootstrap.servers": "blockchain-kafka-kafka-0.blockchain-kafka-kafka-headless.default.svc.cluster.local:9092",
  "transforms": "AddPrefix",
  "transforms.AddPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
  "transforms.AddPrefix.regex": ".*",
  "transforms.AddPrefix.replacement": "$0_copy"
}
Any ideas on what might be the issue? Also, I was unable to find the repository of the Kafka Connect S3 source connector - is it open source?
Edit: I don't see the problem if gzip compression on the kafka-connect sink is disabled.
The warning means that close() was called before the file was fully read. S3 was not done sending the data, but the connection was left hanging.
There are 2 options (both sketched below):
Drain the input stream so that no data remains; that way the connection can be reused.
Call s3ObjectInputStream.abort() (note: the connection cannot be reused after an abort, so a new one will have to be created, which has a performance impact). In some cases this makes sense, e.g. when the remaining read would be too slow.
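As a rough illustration, a minimal sketch of both options with the AWS SDK for Java v1 (the bucket name is taken from the connector config above; the object key is a placeholder):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

public class S3StreamCleanup {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // The key below is hypothetical, used only to illustrate the stream handling.
        S3Object object = s3.getObject("test-bucket-yordan", "topics/some-object.json.gz");

        try (S3ObjectInputStream in = object.getObjectContent()) {
            byte[] buf = new byte[8192];

            // Option 1: drain the stream; the underlying HTTP connection can then be reused.
            while (in.read(buf) != -1) {
                // discard the remaining bytes
            }

            // Option 2 (instead of draining): abort the request.
            // The connection cannot be reused afterwards, which costs a new connection later.
            // in.abort();
        }
    }
}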

Tomcat server causing broken pipe for big payloads

I made a simple Spring Boot application that returns a static JSON response for all requests.
When the app gets a request with a large payload (~5 MB JSON, 1 TP), the client receives the following error:
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
I have tried increasing every limit I could - here are my Tomcat settings:
spring.http.multipart.max-file-size=524288000
spring.http.multipart.max-request-size=524288000
spring.http.multipart.enabled=true
server.max-http-post-size=10000000
server.connection-timeout=30000
server.tomcat.max-connections=15000
server.tomcat.max-http-post-size=524288000
server.tomcat.accept-count=10000
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=200
What can I do to make this simple Spring Boot application, with just one controller, handle such payloads successfully?
The Spring Boot application and the client sending the large payload both run on an 8-core machine with 16 GB RAM, so resources shouldn't be a problem.
This happened because the controller was returning a response without consuming the request body.
The server closed the connection as soon as it had sent its response, while the client was still sending the request body; the client therefore saw a broken pipe.
Solution:
1. Read the full request body in your code (see the sketch below)
2. Set Tomcat's maxSwallowSize to a higher value (default: 2 MB):
server.tomcat.max-swallow-size=10MB
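As an illustration, a minimal sketch of a controller that consumes the request body before replying (the endpoint path and response payload are placeholders):

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StaticResponseController {

    // Binding the payload with @RequestBody makes the framework read the whole
    // request body before the response is written, so the connection is not
    // closed while the client is still uploading.
    @PostMapping("/static-response")
    public String handle(@RequestBody String ignoredPayload) {
        return "{\"status\":\"ok\"}";
    }
}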

Tracker GV500 - Device management

I registered my Queclink GV500 with Cumulocity and I'm able to receive some events and measurements.
But when I try to send a command to the GV500 registered in Cumulocity, I always get a FAILED response. For example, I tried to send this command (which is fully supported by the GV500) from the SHELL tab: AT+GTTMA=gv500,+,1,0,0,,,,,,FFFF$
As a result I got:
Failure reason: Command currently not supported
I also tried to get the agent logs by using "Log file request" in the "Log" tab of my device, and as a result I got:
Failure reason: Cannot build command. Search parameters only allow the following characters [a-zA-Z0-9_]
Is this normal?
When I look the general information in "Info" tab I have:
Send connection: online
Push connection: inactive
Is it normal that the Push connection is marked as inactive?
The tracker-agent in its current state does not use a push connection for receiving operations but polls for operations instead. Therefore the push connection is always shown as inactive.
If you receive "Failure reason: Command currently not supported", it is an error from the agent, not the device. The agent does not seem to support shell operations for Queclink.
As for the error on the log file request, it seems that there was an unsupported character in the search parameter. Maybe you can share what you entered for the parameters in the UI.
Thanks for your answer. For the log file request I left the search input field blank. If I enter "gl200", I get the following error: Command currently not supported.
So to summarize, can you confirm that Queclink devices can't be managed from Cumulocity for the moment? Which devices are supported?

Amazon S3 File Read Timeout. Trying to download a file using Java

I am new to Amazon S3. I get the following error when trying to access a file from Amazon S3 using a simple Java method.
2016-08-23 09:46:48 INFO request:450 - Received successful response:200, AWS Request ID: F5EA01DB74D0D0F5
Caught an AmazonClientException, which means the client encountered an
internal error while trying to communicate with S3, such as not being
able to access the network.
Error Message: Unable to store object contents to disk: Read timed out
The exact same lines of code worked yesterday. I was able to download 100% of a 5 GB file in 12 minutes. Today I'm in a better connected environment, but only 2% or 3% of the file is downloaded before the program fails.
Code that I'm using to download:
s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
You need to set the connection timeout and the socket timeout in your client configuration.
Click here for a reference article
Here is an excerpt from the article:
Several HTTP transport options can be configured through the com.amazonaws.ClientConfiguration object. Default values will suffice for the majority of users, but users who want more control can configure:
Socket timeout
Connection timeout
Maximum retry attempts for retry-able errors
Maximum open HTTP connections
Here is an example of how to do it:
Downloading files >3Gb from S3 fails with "SocketTimeoutException: Read timed out"
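Along those lines, a minimal sketch of raising the timeouts through com.amazonaws.ClientConfiguration with the AWS SDK for Java v1 (the timeout and retry values are placeholders to tune for your file sizes):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientWithTimeouts {
    public static void main(String[] args) {
        // Timeouts are in milliseconds; the retry count applies to retry-able errors.
        ClientConfiguration config = new ClientConfiguration()
                .withConnectionTimeout(60_000)   // time allowed to establish the connection
                .withSocketTimeout(120_000)      // time allowed between data packets
                .withMaxErrorRetry(5);

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .build();

        // Then download as before, e.g.
        // s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
    }
}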

How do I stop request.finish() (twisted.web server) from closing the HTTP connection?

I have written a chat server which relays a message line typed in one client's browser to the other client. The problem is that if I use request.finish() the output is shown but the connection is closed; if I don't use request.finish() the browser or program buffers the output and only displays it after 20 or so requests have been sent, although the connection stays open.
I think the best approach is to close the connection after each message and have the client reconnect right after receiving it.