WebRTC: When to use "blob" and when "arraybuffer" for dataChannel.binaryType?

To send binary data over a WebRTC RTCDataChannel, binaryType can be set either to "blob" or to "arraybuffer":
dataChannel.binaryType = "blob";
dataChannel.binaryType = "arraybuffer";
I can't figure out in which cases binary data should be sent as a Blob and in which as an ArrayBuffer. Any hint?

The binaryType mostly affects how you receive objects. Whether to send as Blob or ArrayBuffer mostly depends on what you're sending. If you want to send files which you have been reading as blobs (and they're large), then sending them as a blob is the obvious thing to do.
Note that Chrome currently only implements sending ArrayBuffers, star https://bugs.chromium.org/p/webrtc/issues/detail?id=2276 for Blob support.

Related

How can I ensure that ASP.NET Core's IFormFile stream doesn't read more than what's specified in the file's Content-Length?

I have an API endpoint for uploading large files, streaming them directly to the DB. I use ASP.NET Core's IFormFeature to do this, calling IFormFile.OpenReadStream() to get a Stream that I pass to SqlClient for streaming.
I want to enforce a maximum file size to avoid abuse. I know IFormFile has a Length property, but I assume that is based on Content-Length or similar and cannot be trusted (please correct me if I'm wrong, but AFAIK the only way to be 100% sure about the file size is to actually read the data; the client could send an incorrect Content-Length).
I must therefore ensure that when the stream is read, it does not read more than what is specified in IFormFile.Length (ideally it should throw if it encounters additional bytes). I have not found a way to do this. Is this possible, or is there perhaps a better way to ensure the server doesn't read enormous amounts of data from clients sending incorrect Content-Length headers?
(It should go without saying that this must not entail reading the entire file into memory.)
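The framework-agnostic shape of what the question asks for is a counting wrapper around the request stream that throws as soon as it has passed more bytes than the declared length. A rough sketch of that idea, written in Java rather than C# and with a made-up class name, purely to illustrate:
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
// Hypothetical wrapper: counts bytes as they are read and fails once the
// declared length is exceeded, instead of buffering anything in memory.
class LengthLimitedInputStream extends FilterInputStream {
    private final long declaredLength;
    private long consumed;

    LengthLimitedInputStream(InputStream in, long declaredLength) {
        super(in);
        this.declaredLength = declaredLength;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0 && (consumed += n) > declaredLength) {
            throw new IOException("Stream exceeded declared length of " + declaredLength + " bytes");
        }
        return n;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1 && ++consumed > declaredLength) {
            throw new IOException("Stream exceeded declared length of " + declaredLength + " bytes");
        }
        return b;
    }
}
The ASP.NET Core equivalent would wrap the Stream returned by IFormFile.OpenReadStream() in the same way before handing it to SqlClient.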

Implementing basic S3 compatible API with akka-http

I'm trying to implement a file storage service with a basic S3-compatible API using akka-http.
I use the S3 Java SDK to test my service API and ran into a problem with the putObject(...) method: I can't consume the file properly on my akka-http backend. I wrote a simple route for test purposes:
def putFile(bucket: String, file: String) = put {
  extractRequestEntity { ent =>
    val finishedWriting = ent.dataBytes.runWith(FileIO.toPath(new File(s"/tmp/${file}").toPath))
    onComplete(finishedWriting) { ioResult =>
      complete("Finished writing data: " + ioResult)
    }
  }
}
It saves the file, but the file is always corrupted. Looking inside the file, I found lines like this:
"20000;chunk-signature=73c6b865ab5899b5b7596b8c11113a8df439489da42ddb5b8d0c861a0472f8a1".
When I try to PUT a file with any other REST client, it works as expected.
I know S3 uses the "Expect: 100-continue" header and maybe that is what causes the problem.
I really can't figure out how to deal with that. Any help appreciated.
This isn't exactly corrupted. Your service is not accounting for one of the four¹ ways S3 supports uploads to be sent on the wire, using Content-Encoding: aws-chunked and x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD.
It's a non-standards-based mechanism for streaming an object, and includes chunks that look exactly like this:
string(IntHexBase(chunk-size)) + ";chunk-signature=" + signature + \r\n + chunk-data + \r\n
...where IntHexBase() is pseudocode for a function that formats an integer as a hexadecimal number as a string.
This chunk-based algorithm is similar to, but not compatible with, Transfer-Encoding: chunked, because it embeds checksums in the stream.
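To make the framing concrete, here is a rough, illustrative Java sketch that walks such a body and recovers the raw payload. It ignores the chunk signatures entirely (a real implementation must verify them) and buffers everything in memory, so it only shows the structure of the encoding:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

class AwsChunkedDecoder {
    // Splits an aws-chunked body back into raw payload bytes (Java 11+ for readNBytes).
    static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream payload = new ByteArrayOutputStream();
        while (true) {
            String header = readLine(in);                           // e.g. "20000;chunk-signature=73c6b8..."
            int size = Integer.parseInt(header.split(";")[0], 16);  // chunk size is hexadecimal
            if (size == 0) {
                break;                                              // final chunk has size 0
            }
            payload.write(in.readNBytes(size));                     // chunk-data
            readLine(in);                                           // trailing \r\n after chunk-data
        }
        return payload.toByteArray();
    }

    // Reads up to the next \n, dropping the \r.
    private static String readLine(InputStream in) throws IOException {
        ByteArrayOutputStream line = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') {
                line.write(b);
            }
        }
        return new String(line.toByteArray(), StandardCharsets.UTF_8);
    }
}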
Why did they make up a new HTTP transfer encoding? It's potentially useful on the client side because it eliminates the need to either "read your payload twice or buffer [the entire object payload] in memory [concurrently]" -- one or the other of which is otherwise necessary if you are going to calculate the x-amz-content-sha256 hash before the upload begins, as you otherwise must, since it's required for integrity checking.
I am not overly familiar with the internals of the Java SDK, but this type of upload might be triggered by using .withInputStream(), or it might be standard behavior for files too, or for files over a certain size.
Your minimum workaround would be to throw an HTTP error if you see x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD in the request headers since you appear not to have implemented this in your API, but this would most likely only serve to prevent storing objects uploaded by this method. The fact that this isn't already what happens automatically suggests that you haven't implemented x-amz-content-sha256 handling at all, so you are not doing the server-side payload integrity checks that you need to be doing.
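To illustrate that check concretely (a generic Java servlet filter is used here only as an example; the asker's akka-http route would do the equivalent in its routing DSL):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustration only: refuse streaming-signed uploads instead of storing a mangled object.
public class RejectAwsChunkedFilter implements Filter {
    @Override
    public void init(FilterConfig filterConfig) { /* nothing to configure */ }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String sha = ((HttpServletRequest) req).getHeader("x-amz-content-sha256");
        if ("STREAMING-AWS4-HMAC-SHA256-PAYLOAD".equals(sha)) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_NOT_IMPLEMENTED,
                    "aws-chunked uploads are not supported");
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { /* nothing to clean up */ }
}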
For full compatibility, you'll need to implement the algorithm supported by S3 and assumed to be available by the SDKs, unless the SDKs specifically support a mechanism for disabling this algorithm -- which seems unlikely, since it serves a useful purpose, particularly (it appears) for streams whose length is known but that aren't seekable.
¹ one of four -- the other three are a standard PUT, a web-based HTML form POST, and the multipart API that is recommended for large files and mandatory for files larger than 5 GB.

Uploading a file via the JAX-RS REST client interface, with a third-party server

I need to invoke a remote REST interface handler and submit it a file in request body. Please note that I don't control the server. I cannot change the request to be multipart, the client has to work in accordance to external specification.
So far I managed to make it work like this (omitting headers etc. for brevity):
byte[] data = readFileCompletely ();
client.target (url).request ().post (Entity.entity (data, "file/mimetype"));
This works, but will fail with huge files that don't fit into memory. And since I have no restriction on filesize, this is a concern.
Question: is it somehow possible to use streams or something similar to avoid reading the whole file into memory?
If possible, I'd prefer to avoid implementation-specific extensions. If not, a solution that works with RESTEasy (on Wildfly) is also acceptable.
RESTEasy as well as Jersey support InputStream out of the box, so simply use Entity.entity(inputStream, "application/octet-stream"); or whatever Content-Type you want to set.
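For example, a minimal sketch (the URL, file path, and media type are placeholders):
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Response;

public class StreamingUpload {
    public static void main(String[] args) throws Exception {
        Client client = ClientBuilder.newClient();
        // The file is passed as an InputStream entity rather than read into a byte[].
        try (InputStream in = Files.newInputStream(Paths.get("/path/to/big.file"))) {
            Response response = client.target("https://example.com/upload")      // placeholder URL
                    .request()
                    .post(Entity.entity(in, "application/octet-stream"));        // or the required mime type
            System.out.println("Status: " + response.getStatus());
        } finally {
            client.close();
        }
    }
}
Depending on the implementation, you may also have to switch the client to chunked transfer encoding so it does not buffer the entity just to compute a Content-Length.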
You can also go low-level and construct the HTTP request yourself using plain java.net.URLConnection.
I have not tried it myself but there is example code which reads a local file and writes it to the request stream without loading it into a byte array.
Upload files from Java client to a HTTP server
Of course this solution requires more manual coding, but it should work (unless java.net.URLConnection loads the whole file into memory).
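For reference, a minimal sketch of that low-level approach (URL, path, and content type are placeholders); the key call is setFixedLengthStreamingMode, which lets HttpURLConnection send the body as it is written instead of buffering it internally:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class UrlConnectionUpload {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("/path/to/big.file");                               // placeholder path
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/upload").openConnection();           // placeholder URL
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setFixedLengthStreamingMode(Files.size(file));                       // stream, don't buffer
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(file, out);                                                // copies in chunks
        }
        System.out.println("Response code: " + conn.getResponseCode());
    }
}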

How to check if NSData has multimedia content?

I have an NSData object, obtained from a URL request. Now I don't know how to read it.
However, in my application I don't know whether the data contains a video or not, so I would like to know:
How to know if NSData has some video inside it?
How to interpret the data, reading it byte per byte?
I'm not familiar with the particular API you're using so I can't say what the code should be, but any web/HTTP client library should provide you the Content-Type of the data as well as the data itself. Use the Content-Type (and only the Content-Type; doing otherwise can lead to security bugs) to determine how to interpret the content. For example, if the Content-Type (also known as MIME type) starts with video/, then the content is definitely video; the part after the slash will tell you the specific format to interpret it as.
If you intend to play the video that the data may contain, then just do that. Whichever playback API you use should give you an error if the data isn't anything it recognizes.
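The question itself is Objective-C, but the Content-Type check described above is the same idea in any HTTP client; a rough Java illustration:
import java.net.HttpURLConnection;
import java.net.URL;

public class ContentTypeCheck {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/media").openConnection();   // placeholder URL
        // Decide how to treat the response from its Content-Type header,
        // never by sniffing the bytes (which can lead to security bugs).
        String contentType = conn.getContentType();                      // e.g. "video/mp4"
        if (contentType != null && contentType.startsWith("video/")) {
            System.out.println("Video, format: " + contentType.substring("video/".length()));
        } else {
            System.out.println("Not a video: " + contentType);
        }
    }
}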

SBJson Stream Parser

I'm working in Xcode 4.3.2, building an app for iOS 5.
I've decided to use SBJson to parse streams of data from our server. I've verified that I'm receiving a valid JSON response from the server. My question concerns the design behind the classes SBJsonStreamParser and the SBJsonParser.
It appears that in SBJsonParser the method "objectWithData" takes the data received from the JSON response and uses the SBJsonStreamParserAccumulator to append the stream of data into a single JSON document. Once the data stream is gathered into one object, it is then parsed by the "parse" method in SBJsonStreamParser.
I've run into several issues when requesting larger JSON documents. The size of the responses seems reasonable (specifically, a 9.4 KB response). It appears that SBJsonStreamParser breaks when getting a data stream greater than a certain size. The parser succeeds when the response is small (~3 KB) but fails when it is larger (~10 KB).
I used NSLog to verify that in both cases, pulling the small and the large stream, the methods successfully receive the full JSON document - because it looks like [{"id": .... 123}]. I'm convinced that the issue is that the data stream is too long.
I'm wondering if I'm using SBJson incorrectly or is this simply a limitation of the parser? Is there anything that I can configure that allows SBJsonStreamParser to not throw an error for larger (but reasonable) data streams & continue to parse the full response?
Thanks in advance!
Actually you have the workings of objectWithData: backwards. SBJsonStreamParserAccumulator is used to accumulate the parsed output, not the unparsed data stream.