I need to build a recording feature on top of a web conferencing app that makes use of WebRTC. To do this I am using the RecordRTC js library.
The recording is NOT uploaded at the end of the call; instead, for practical reasons, a portion of the stream is uploaded from client to server every 3 seconds. This avoids waiting for one large upload at the end.
Here's the JavaScript:
RTC_recorder = RecordRTC(stream, {
    type: 'video',
    mimeType: 'video/webm;codecs=vp8',
    timeSlice: 3000,
    ondataavailable: function(blob) {
        upload_to_server(blob);
    }
});
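For reference, upload_to_server is just a small helper of mine that POSTs each chunk to the backend; a rough sketch (the /upload-chunk endpoint name is only an example) looks like this:
// Rough sketch of the upload helper: POST each 3-second chunk to the server.
// The /upload-chunk endpoint name is only an example.
function upload_to_server(blob) {
    var formData = new FormData();
    formData.append('chunk', blob, 'chunk.webm');
    fetch('/upload-chunk', {
        method: 'POST',
        body: formData
    }).catch(function(err) {
        console.error('Chunk upload failed', err);
    });
}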
I have been able to save separate blobs on the server:
- blob1.webm (readable video)
- blob2.webm (not readable)
- blob3.webm (not readable)
But unfortunately, I don't understand how to merge the blobs into 1 video (SERVER SIDE), and haven't found any working example in the documentation, nor any clear answer to this question.
Can anyone help?
Thanks.
Concatenating the files without any further modification should result in a valid file.
A simple search revealed this question which was about how concatenating files works in PHP.
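If your server side is Node rather than PHP, a minimal sketch of the same idea (the file names follow the blob1.webm, blob2.webm, ... naming from the question) would be:
// Minimal Node.js sketch: append each uploaded chunk, in upload order, to one file.
const fs = require('fs');

function mergeChunks(chunkPaths, outputPath) {
    fs.writeFileSync(outputPath, Buffer.alloc(0)); // start with an empty output file
    for (const chunkPath of chunkPaths) {
        fs.appendFileSync(outputPath, fs.readFileSync(chunkPath));
    }
}

mergeChunks(['blob1.webm', 'blob2.webm', 'blob3.webm'], 'full-recording.webm');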
Related
Been absolutely banging my head on this one and I would like a sanity check, mainly on whether what I want to do is even possible, as I am severely constrained by React Native, which has pretty dodgy Blob support.
We all know that video encoding is expensive, so instead of forcing the user to encode using ffmpeg I would like to delegate the whole process to the backend. It's all good, except that sometimes you might only want to keep 30s of a video, and it's pointless to upload 3+ minutes of it.
So I had this idea of slicing the blob of the video file:
const startOffset = (startTime * blobSize) / duration;
const endOffset = (endTime * blobSize) / duration;
const slicedBlob = blob.slice(startOffset, endOffset);
// Setting the type as third option is ignored
Something like this; the problem is that the file becomes totally unreadable once it reaches the backend.
React Native cannot handle Blob uploads, so they are converted to base64, which is totally fine for the whole video, but not for the sliced blob.
This happens even if I keep the beginning intact:
const slicedBlob = blob.slice(0, endOffset);
I feel like the reason is that the file becomes an application/octet-stream, which might impact the decoding?
I am at a bit of a loss here, as I cannot tell whether this is a React Native issue with blobs or whether it simply cannot be done.
Thanks for any input.
P.S. I prefer to stick to vanilla Expo without external libraries. I am aware that one exists to handle blobs, but I am not keen on ejecting or relying on external libraries if possible.
You cannot simply cut off chunks of a file and have it readable on the other side. For example, in an MP4 the video resolution is only stored in one place. If those bytes get removed, the decoder has no idea how to decode the video.
Yes, it is possible to repackage the video client side by rewriting the container and dropping full GOPs. But it would be about 1,000 lines of code for you to write and would be limited to certain codecs and containers.
I'm trying to implement a file storage service with a basic S3-compatible API using akka-http.
I use the S3 Java SDK to test my service API and ran into a problem with the putObject(...) method. I can't consume the file properly on my akka-http backend. I wrote a simple route for test purposes:
def putFile(bucket: String, file: String) = put {
  extractRequestEntity { ent =>
    val finishedWriting = ent.dataBytes.runWith(FileIO.toPath(new File(s"/tmp/${file}").toPath))
    onComplete(finishedWriting) { ioResult =>
      complete("Finished writing data: " + ioResult)
    }
  }
}
It saves the file, but the file is always corrupted. Looking inside the file, I found lines like this:
"20000;chunk-signature=73c6b865ab5899b5b7596b8c11113a8df439489da42ddb5b8d0c861a0472f8a1".
When I try to PUT the file with any other REST client, it works as expected.
I know S3 uses the "Expect: 100-continue" header, and maybe that causes problems.
I really can't figure out how to deal with that. Any help appreciated.
This isn't exactly corrupted. Your service is not accounting for one of the four¹ ways S3 supports uploads to be sent on the wire, using Content-Encoding: aws-chunked and x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD.
It's a non-standards-based mechanism for streaming an object, and includes chunks that look exactly like this:
string(IntHexBase(chunk-size)) + ";chunk-signature=" + signature + \r\n + chunk-data + \r\n
...where IntHexBase() is pseudocode for a function that formats an integer as a hexadecimal number as a string.
This chunk-based algorithm is similar to, but not compatible with, Transfer-Encoding: chunked, because it embeds checksums in the stream.
Why did they make up a new HTTP transfer encoding? It's potentially useful on the client side because it eliminates the need to either "read your payload twice or buffer [the entire object payload] in memory [concurrently]" -- one or the other of which is otherwise necessary if you are going to calculate the x-amz-content-sha256 hash before the upload begins, as you otherwise must, since it's required for integrity checking.
I am not overly familiar with the internals of the Java SDK, but this type of upload might be triggered by using .withInputStream(), or it might be standard behavior for files too, or for files over a certain size.
Your minimum workaround would be to throw an HTTP error if you see x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD in the request headers since you appear not to have implemented this in your API, but this would most likely only serve to prevent storing objects uploaded by this method. The fact that this isn't already what happens automatically suggests that you haven't implemented x-amz-content-sha256 handling at all, so you are not doing the server-side payload integrity checks that you need to be doing.
For full compatibility, you'll need to implement the algorithm supported by S3 and assumed to be available by the SDKs, unless the SDKs specifically support a mechanism for disabling this algorithm -- which seems unlikely, since it serves a useful purpose, particularly (it appears) for streams whose length is known but that aren't seekable.
¹ one of four -- the other three are a standard PUT, a web-based html form POST, and the multipart API that is recommended for large files and mandatory for files larger than 5 GB.
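For illustration only, here is a rough Node.js sketch of stripping the aws-chunked framing described above from a fully buffered request body; it is not the asker's akka-http service, and it skips chunk-signature verification entirely:
// Rough sketch (not production code): remove aws-chunked framing from a body
// already held in a Buffer. Each chunk looks like
// "<hex-size>;chunk-signature=<sig>\r\n<data>\r\n"; signature checks are omitted.
function decodeAwsChunked(body) {
    const chunks = [];
    let offset = 0;
    while (offset < body.length) {
        const headerEnd = body.indexOf('\r\n', offset);
        if (headerEnd === -1) break;
        const header = body.slice(offset, headerEnd).toString('ascii');
        const size = parseInt(header.split(';')[0], 16);
        if (!size) break; // a zero-length chunk marks the end of the payload
        const dataStart = headerEnd + 2;
        chunks.push(body.slice(dataStart, dataStart + size));
        offset = dataStart + size + 2; // skip the chunk data and its trailing \r\n
    }
    return Buffer.concat(chunks);
}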
I have an existing FineUploader implementation for small files using the Traditional (upload-to-server) version which is working great. However, I'd like to also allow Direct S3 uploads from a different part of the application which deals with large attachments, without rewriting the existing code for small files.
Is there some way to allow both Direct S3 and Traditional uploads to work alongside each other? This is a single-page application, so I can't just load one or the other fine-uploader versions depending on which page I'm on.
I tried just including both fine-uploader JS files, but it seemed to break my existing code.
Client-side code:
$uploadContainer = this.$('.uploader')
$uploadButton = this.$('.upload-button')

$uploadContainer.fineUploader(
  request:
    endpoint: @uploadUrl
    inputName: @inputName
    params:
      authenticity_token: $('meta[name="csrf-token"]').attr('content')
  button: $uploadButton
).on 'complete', (event, id, fileName, response) =>
  @get('controller').receiveUpload(response)
Good find, @Melinda.
Fine Uploader lives within a custom-named namespace so that it does not conflict with other potential global variables; this is the qq namespace (historically named). What is happening is that each custom build redeclares this namespace, along with all member objects, when you include it in the <script> tags on your page.
I've opened up an issue on our bug tracker that explains the problem in more technical detail, and we're looking to prioritize a fix to the customize page so that in the future no one will have this issue.
I know Amazon S3 added multi-part upload for huge files. That's great. What I also need is similar functionality on the client side for customers who get partway through downloading a gigabyte-plus file and have errors.
I realize browsers have some level of retry and resume built in, but when you're talking about huge files I'd like to be able to pick up where they left off regardless of the type of error.
Any ideas?
Thanks,
Brian
S3 supports the standard HTTP "Range" header if you want to build your own solution.
S3 Getting Objects
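If you do build your own, a minimal Node.js sketch of resuming from however many bytes are already on disk (the URL and local path below are placeholders) might look like this:
// Hypothetical sketch: ask S3 only for the bytes we don't already have,
// then append them to the partially downloaded file. URL and path are placeholders.
const fs = require('fs');
const https = require('https');

function resumeDownload(url, localPath) {
    const existing = fs.existsSync(localPath) ? fs.statSync(localPath).size : 0;
    https.get(url, { headers: { Range: 'bytes=' + existing + '-' } }, (res) => {
        // 206 Partial Content means the Range header was honoured; a plain 200
        // means the server sent the whole object, so start the file over.
        const flags = res.statusCode === 206 ? 'a' : 'w';
        res.pipe(fs.createWriteStream(localPath, { flags: flags }));
    });
}

resumeDownload('https://example-bucket.s3.amazonaws.com/big-file.bin', 'big-file.bin');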
I use aria2c. For private content, you can use "GetPreSignedUrlRequest" to generate temporary private URLs that you can pass to aria2c.
S3 has a feature called byte-range fetches. It's kind of the download complement to multipart upload:
Using the Range HTTP header in a GET Object request, you can fetch a byte-range from an object, transferring only the specified portion. You can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput versus a single whole-object request. Fetching smaller ranges of a large object also allows your application to improve retry times when requests are interrupted. For more information, see Getting Objects.
Typical sizes for byte-range requests are 8 MB or 16 MB. If objects are PUT using a multipart upload, it’s a good practice to GET them in the same part sizes (or at least aligned to part boundaries) for best performance. GET requests can directly address individual parts; for example, GET ?partNumber=N.
Source: https://docs.aws.amazon.com/whitepapers/latest/s3-optimizing-performance-best-practices/use-byte-range-fetches.html
Just updating for the current situation: S3 natively supports multipart GET as well as PUT. https://youtu.be/uXHw0Xae2ww?t=1459.
NOTE: For Ruby users only
Try the aws-sdk gem for Ruby, and download with:
object = Aws::S3::Object.new(...)
object.download_file('path/to/file.rb')
It downloads large files using multipart by default.
Files larger than 5 MB are downloaded using the multipart method.
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#download_file-instance_method
I'm trying to record a webcam's video and audio to an FLV file stored on the user's local hard disk. I have a version of this code working which uses NetConnection and NetStream to stream the video over a network to an FMS (Red5) server, but I'd like to be able to store the video locally for low-bandwidth/flaky-network situations. I'm using Flex 3.2 and AIR 1.5, so I don't believe there should be any sandbox restrictions which prevent this from occurring.
Things I've seen:
FileStream - Allows reading/writing local files, but has no .attachCamera and .attachAudio methods for creating an FLV.
flvrecorder - Produces screen grabs from the webcam and creates its own FLV file. Doesn't support audio. License prohibits commercial use.
SimpleFLVWriter.as - Similar to flvrecorder without the weird license. Doesn't support audio.
This Stack Overflow post - Demonstrates playback of a video from local disk using a NetConnection/NetStream.
Given that I already have a version which uses NetStream to stream to the server, I thought the last was most promising and went ahead and put together this demo application. The code compiles and runs without errors, but I don't have an FLV file on disk when the stop button is clicked.
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            private var _diskStream:NetStream;
            private var _diskConn:NetConnection;
            private var _camera:Camera;
            private var _mic:Microphone;

            public function cmdStart_Click():void {
                _camera = Camera.getCamera();
                _camera.setQuality(144000, 85);
                _camera.setMode(320, 240, 15);
                _camera.setKeyFrameInterval(60);

                _mic = Microphone.getMicrophone();

                videoDisplay.attachCamera(_camera);

                _diskConn = new NetConnection();
                _diskConn.connect(null);

                _diskStream = new NetStream(_diskConn);
                _diskStream.client = this;
                _diskStream.attachCamera(_camera);
                _diskStream.attachAudio(_mic);

                _diskStream.publish("file://c:/test.flv", "record");
            }

            public function cmdStop_Click():void {
                _diskStream.close();
                videoDisplay.close();
            }
        ]]>
    </mx:Script>

    <mx:VideoDisplay x="10" y="10" width="320" height="240" id="videoDisplay" />
    <mx:Button x="10" y="258" label="Start" click="cmdStart_Click()" id="cmdStart"/>
    <mx:Button x="73" y="258" label="Stop" id="cmdStop" click="cmdStop_Click()"/>
</mx:WindowedApplication>
It seems to me that there's either something wrong with the above code which is preventing it from working, or NetStream just can't be abused in this way to record video.
What I'd like to know is: a) What (if anything) is wrong with the code above? b) If NetStream doesn't support recording to disk, are there any other alternatives which capture audio AND video to a file on the user's local hard disk?
Thanks in advance!
It is not possible to stream video directly to the local disk without using some streaming service like Windows Media Encoder, Red5, Adobe's media server, or something else.
I have tried all the samples on the internet with no solution to date.
Look at this link for another possibility:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
My solution was to embed Red5 into AIR.
Sharing my article with you:
http://mydevrecords.blogspot.com/2012/01/local-recording-in-adobe-air-using-red5.html
In general, the solution is to embed the free media server Red5 into AIR as an asset, so the server will be present in the AIR application folder. Then, through NativeProcess, you can run Red5 and have its instance in memory. As a result, you can have local video recording without any network issues.
I am also trying to do the same thing, but I have been told by the developers of avchat.net that it is not possible to do this with AIR at the moment. If you do find out how to do it, I would love to know!
I also found this link; not sure how helpful it is: http://www.zeropointnine.com/blog/webcam-dvr-for-apollo/
Well, I just think that letting it connect to nothing (null) doesn't work. I've already tried connecting to localhost, but that didn't work out either. I don't think this is even possible. Streaming video works only with Flash Media Server and Red5, not locally. Maybe you could install Red5 on your PC?
Sadly, video support in Flash from cameras is very poor. When you stream, it's raw, so the issue is that you have to encode to FLV, and doing that in real time takes a very fast computer. First-gen concepts would write raw bitmaps to a file (or serialize an array), then a second pass would convert the file to an FLV. Basically, you have to poll the camera and save each frame as a bitmap, then stack them in an array. This is very limited and could not do audio. It was also very hard to get above 5-10 fps.
The gent at Zero Point Nine came up with a new version, and you're on the right path. Look at the new FLV recorder. I spent a lot of time working with this but never quite got it to work for my needs (two cameras); I just could not get the FPS I needed. But it might work for you. It was much faster than the original method.
The only other working option I know of is to have Red5 save the video and download it back to the app.