How to upload an image over a WebSocket with Ktor? - kotlin

I want to send an image over a WebSocket.
How can I do that with Ktor?

AFAIK, you cannot do an HTTP multipart upload over WebSockets directly.
There are a couple of approaches you can try.
First, convert the image into a Base64 string or a byte array on the client and send that to the WebSocket server.
One thing to watch out for: if you send a raw byte array, you may need to handle the file headers contained in that byte array yourself.
If you are reading the image from a file, you can get the bytes like this and pass them to the WebSocket:
val bytes = File(path).readBytes()
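For example, with the Ktor client WebSockets plugin the bytes can be sent as a single binary frame. This is only a minimal sketch: the CIO engine, host, port and /upload path are assumptions, and your server needs a matching webSocket route.

import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.plugins.websocket.*
import io.ktor.websocket.*
import java.io.File

suspend fun sendImage(path: String) {
    // Placeholder endpoint; any WebSocket-capable Ktor engine works here.
    val client = HttpClient(CIO) { install(WebSockets) }
    client.webSocket(host = "localhost", port = 8080, path = "/upload") {
        val bytes = File(path).readBytes()
        // Send the raw image bytes as one binary frame.
        send(Frame.Binary(true, bytes))
    }
    client.close()
}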
The downside of this approach is that it may not work as expected for large images, and if your WebSocket broadcasts to multiple listening clients, shipping the file bytes alongside the rest of the payload adds significant load and latency.
The better approach is to upload the image to your server with a regular HTTP multipart upload (without the socket) and send only the resulting image URL through the WebSocket. The URL can then be pushed to the clients listening on that particular socket, and each client loads the image data only when it is actually needed.
If you send a big image as a byte array over the WebSocket, that WebSocket message will be much larger than one that carries just the image URL.
The second method is the recommended approach in most cases, apart from some specific use cases.
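If you go with the second method, the upload itself can be an ordinary Ktor client multipart request, for example (again just a sketch; the URL, form field name, content type and CIO engine are assumptions):

import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.forms.*
import io.ktor.client.statement.*
import io.ktor.http.*
import java.io.File

suspend fun uploadImage(path: String): String {
    val client = HttpClient(CIO)
    val response = client.submitFormWithBinaryData(
        url = "http://localhost:8080/images", // placeholder upload endpoint
        formData = formData {
            append("image", File(path).readBytes(), Headers.build {
                append(HttpHeaders.ContentType, "image/jpeg")
                append(HttpHeaders.ContentDisposition, "filename=\"photo.jpg\"")
            })
        }
    )
    client.close()
    // Assuming the server replies with the stored image URL,
    // which can then be broadcast over the WebSocket.
    return response.bodyAsText()
}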

Related

WebRTC - receiving H264 key frames

I've been playing with WebRTC using libdatachannel, experimenting and learning.
I wrote some code to parse RTP packets into NALUs, and I'm testing by connecting to a "known good" server which sends H264 video.
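Roughly, the classification boils down to reading the NALU type from the RTP payload, per RFC 6184 (this is just a simplified sketch of the idea, not my exact code; payload is assumed to be the RTP payload with the RTP header already stripped):

fun naluType(payload: ByteArray): Int {
    val type = payload[0].toInt() and 0x1F   // low 5 bits of the first payload byte
    return if (type == 28) {
        // FU-A: the original NALU type is in the low 5 bits of the FU header (second byte)
        payload[1].toInt() and 0x1F
    } else {
        // 1 = non-IDR slice, 5 = IDR (key frame), 7/8 = SPS/PPS, 24 = STAP-A
        type
    }
}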
Problem:
I'm only seeing NALUs with type = 1 (fragmented into multiple FU-As) and sometimes type = 24 (which contain embedded SPS and PPS NALUs).
So I don't understand how to decode / render this stream - I would expect the server to send a NALU with a key frame (NALU type 5) automatically to a newly connected client, but it does not.
What am I missing to be able to decode the stream? What should I do to receive a key frame quickly? If my understanding is correct, I need a key frame to start decoding / rendering.
Tried requesting a key frame from code - it does arrive (type 5) but after some delay which is undesirable.
And yet the stream plays perfectly fine with a web browser client (Chrome, JavaScript) and starts up quickly.
Am I maybe overthinking this, and the browser also has a delay but I'm just perceiving it as instant?
In any case, what's the situation with key frames? Is a client supposed to request them (and without that, a server should not be expected to send them)?
If so what's a good interval? One second, two, three?

Logic App HTTP action Bad request due to max buffer size

Real quickly: I am trying to complete an HTTP action in an Azure Logic App that will send out a GET request and return a CSV file as the response body. The issue is that when I run it I get "BadRequest. Http request failed as there is an error: 'Cannot write more bytes to the buffer than the configured maximum buffer size: 104857600.'". I am not sure how to mitigate this buffer limit or whether I can increase it. I could use some help; I really need this CSV file returned so I can get it into blob storage.
Please try the following:
1. In the HTTP action's upper-right corner, choose the ellipsis button (...), and then choose Settings.
2. Under Content Transfer, set Allow chunking to On.
You can refer to Handle large messages with chunking in Azure Logic Apps

Working of S3 file download

To download a file from S3 using the Java SDK, we need to do the following:
Note: multipart download is off.
S3Object s3Object = s3.getObject(getObjectRequest);
S3ObjectInputStream s3ObjectInputStream = s3Object.getObjectContent();
//Write to a file from this stream
When we make a getObject call, the SDK does a GET call on that object.
This call returns just the headers of the response.
When we actually start reading from the s3ObjectInputStream, we get the response body.
But this all is one REST call.
So, I was confused why the call returned only the headers first.
And how did S3 know when to start sending in the response body?
We are making only one call, so how do we notify S3 that we have now started reading from the s3ObjectInputStream?
Where is the actual file stored until we read it from the stream?
S3 starts sending the response body immediately.
You just haven't started reading it from the network.
getObject
Be extremely careful when using this method; the returned Amazon S3 object contains a direct stream of data from the HTTP connection. The underlying HTTP connection cannot be reused until the user finishes reading the data and closes the stream.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#getObject-com.amazonaws.services.s3.model.GetObjectRequest-
A small amount has been buffered, but the object isn't being stored anywhere. The network connection is stalled.
If you were to start a request and wait long enough before reading it, S3 would eventually detect the connection as stalled, give up, and close the connection.
In practice, it's easy to separate the HTTP headers from the body in a stream, because the boundary between them is always exactly \r\n\r\n. This 4-byte sequence is invalid within the headers and mandatory after them, so the SDK simply stops extracting headers at that point in the response from S3, then builds and returns the response object, from which you can read the body off the network stream.
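To make the streaming behaviour concrete, here is a rough Kotlin sketch against the v1 Java SDK (the bucket, key and local file name are placeholders): getObject() returns as soon as the response headers are parsed, and the body bytes only come off the network when copyTo() reads them.

import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.GetObjectRequest
import java.io.File

fun download() {
    val s3 = AmazonS3ClientBuilder.defaultClient()
    // getObject() returns once the response headers have been parsed.
    s3.getObject(GetObjectRequest("my-bucket", "my-key")).use { s3Object ->
        s3Object.objectContent.use { body ->
            File("my-key.csv").outputStream().use { out ->
                // The response body is streamed from the still-open HTTP connection here.
                body.copyTo(out)
            }
        }
    }
}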

what is BlobTransfer Policy in ActiveMQ

In ActiveMQ while using blob messages we use this as broker
String broker1 = "tcp://localhost:7005?jms.blobTransferPolicy.UploadUrl=http://localhost:7005/fileserver/"
Can anybody explain what UploadUrl is and why we need to configure it for blob messages (we don't need to configure it for text messages)? And why doesn't it allow the tcp protocol?
Plain text messages are good and easy to use, but they need to be held in memory at all times. That works well with a few KB of data, or even a few MB. However, very large files, such as initial data loads, large media files or BI data, are not something you want to keep around in memory, yet you may still need to pass the message around, route/filter on message properties, use transactions and so on.
Blob messages are an attempt to solve the need to pass around GBs of data with the semantics of messaging. The trade-off is that you have to provide a streaming-capable server somewhere that both sender and receiver can reach; it can be HTTP, FTP, a local file share, WebDAV or similar, which is why the UploadUrl has to point at such a server rather than at the broker's tcp transport. ActiveMQ comes with an HTTP-based fileserver if you have no other file area around.
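As a rough Kotlin sketch of how this looks with the bundled fileserver (the broker port 61616, fileserver port 8161, queue name and file path are all assumptions for illustration):

import org.apache.activemq.ActiveMQConnectionFactory
import org.apache.activemq.ActiveMQSession
import java.io.File
import javax.jms.Session

fun sendBlob() {
    val factory = ActiveMQConnectionFactory(
        "tcp://localhost:61616?jms.blobTransferPolicy.uploadUrl=http://localhost:8161/fileserver/"
    )
    val connection = factory.createConnection()
    connection.start()
    val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE) as ActiveMQSession
    val producer = session.createProducer(session.createQueue("large.files"))

    // The file body is uploaded to uploadUrl over HTTP; the JMS message only carries a reference to it.
    val blobMessage = session.createBlobMessage(File("/path/to/large-file.bin"))
    producer.send(blobMessage)

    connection.close()
}

On the consumer side the message arrives as a BlobMessage, and the payload is fetched from the fileserver via BlobMessage.getInputStream() only when the consumer actually asks for it.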

Is there a way to add header to apache response on how long it took to retrieve a resource?

Is there a module or a built-in function in Apache which I can use/activate to send information about how long it took to retrieve/process a resource?
For example, when the resource http://dom.net/resource is accessed, the response headers would include the total time spent waiting for the resource to be ready before it was sent back to the client.
Apache doesn't really 'wait' until the resource is ready before sending the response back to you - it streams data back to the client as and when it receives it.
Depending on what you're interested in measuring, you could record the time taken for the client to receive the first byte/last byte back from Apache, or measure the time taken for Apache to receive the first byte from the (remote?) resource. The time taken for Apache to receive the entire response back from the remote resource is not something you can send in the headers, as the headers will have been sent to the client before the remote response is fully received. This information could trivially be written to the Apache logs, however.