MediaRecorder save last n seconds - html5-video

I'm using MediaRecorder and receive the video in chunks via the dataavailable event. I would like to do two things with these chunks:
Save only the last n seconds of the video. I noticed that simply concatenating all the chunks into a file produces a valid video file. However, as soon as I start to remove chunks from the beginning (say, keeping only the last 10 chunks to save only the last 10 s of the video), the resulting file is no longer valid. I suspect this is due to missing metadata or something similar.
Send these chunks over Socket.IO and play them in another browser using MediaSource. This kind of worked as long as the receiving browser read the chunks from the very beginning. If it starts reading after the server has already begun sending chunks, it receives chunks that don't start from the beginning and fails to display the video.
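For reference, here is a minimal sketch of the chunked-recording setup described above (the webcam stream, the "video/webm" mime type, and the 1-second timeslice are assumptions for the example):

    // Minimal sketch of the setup described above: collect chunks from
    // dataavailable and build Blobs from them. The webcam stream and the
    // "video/webm" mime type are assumptions for this example.
    const chunks: Blob[] = [];

    async function startRecording(): Promise<MediaRecorder> {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
      recorder.ondataavailable = (e: BlobEvent) => {
        if (e.data.size > 0) chunks.push(e.data);
      };
      recorder.start(1000); // ask for a chunk roughly every second
      return recorder;
    }

    // Concatenating *all* chunks yields a playable file...
    function fullVideo(): Blob {
      return new Blob(chunks, { type: "video/webm" });
    }

    // ...but keeping only the last 10 chunks does not, because the first chunk
    // carries the container's initialization metadata that later chunks lack.
    function lastTenSeconds(): Blob {
      return new Blob(chunks.slice(-10), { type: "video/webm" });
    }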
Are there ways to avoid these problems?
Thank you!

Related

Nuxeo Document Upload Chunks Returns Incorrect Status

We have set up Nuxeo in 3-tier cluster mode. While uploading large files, we break the file into smaller chunks and then upload the chunks one by one.
Normally, when we upload the chunks, Nuxeo returns "Resume Incomplete (308)" as the status code for every chunk except the last, and "Created (201)" only for the last chunk, once the upload completes.
However, in one of our environments, we get 201 (Created) as the status code from the Nuxeo server while the chunks are still being uploaded. Can anyone help us identify what we are missing here?
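For context, our upload loop and the status codes we expect look roughly like the sketch below; the endpoint path and the X-Upload-* header names follow Nuxeo's batch upload API as we understand it and should be verified against your Nuxeo version's documentation:

    // Rough illustration of the chunk-by-chunk upload and the status codes we expect.
    // The URL and header names are assumptions based on Nuxeo's batch upload API;
    // please double-check them against your Nuxeo version.
    async function uploadInChunks(baseUrl: string, batchId: string, file: Blob): Promise<void> {
      const chunkSize = 5 * 1024 * 1024; // 5 MB chunks, an arbitrary example size
      const chunkCount = Math.ceil(file.size / chunkSize);
      for (let i = 0; i < chunkCount; i++) {
        const chunk = file.slice(i * chunkSize, (i + 1) * chunkSize);
        const res = await fetch(`${baseUrl}/api/v1/upload/${batchId}/0`, {
          method: "POST",
          headers: {
            "X-Upload-Type": "chunked",
            "X-Upload-Chunk-Index": String(i),
            "X-Upload-Chunk-Count": String(chunkCount),
            "X-File-Name": "large-file.bin", // hypothetical file name
            "X-File-Size": String(file.size),
          },
          body: chunk,
        });
        // Expected: 308 (Resume Incomplete) for every chunk except the last,
        // and 201 (Created) only when the final chunk completes the upload.
        console.log(`chunk ${i + 1}/${chunkCount} -> ${res.status}`);
      }
    }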

apache nifi S3 PutObject stuck

Sorry if this is a dumb question; I'm very new to NiFi.
I have set up a process group to dump SQL query results to CSV and then upload them to S3. It worked fine with small queries, but appears to be stuck with larger files.
The input queue to the PutS3Object processor has a limit of 1 GB, but the file it is trying to put is almost 2 GB. I have set the multipart parameters in the S3 processor to 100 MB, but it is still stuck.
So my theory is that PutS3Object needs a complete file before it starts uploading. Is this correct? Is there no way to get it to upload in a "streaming" manner? Or do I just have to increase the input queue size?
Or am I on the wrong track and there is something else holding this all up?
The screenshot suggests that the large file is in PutS3Object's input queue, and that PutS3Object is actively working on it (judging from the "1" active-thread indicator in the top-right corner of the processor box).
As it turns out, there were no errors, just a delay caused by processing a large file.

Has anybody ever stored an MP3 in a Redis cache?

I'm new to Redis, and I think I have a good use case for it. What I'm trying to do is cache an MP3 file for a short time. These MP3s are >2 MB in size, but I'm also only talking about maybe 5-10 stored at any moment in time. The TTL on them would be fairly short too: minutes, not hours.
(disk persistence isn't an option).
So, what I'm wondering is: do I need to get fancy and Base64-encode the MP3 to store it? Or can I simply set the key's value to a byte array?
This Redis hit will come from a web service, which in turn gets its data from a downstream service with disk hits. So what I'm trying to do is cache the MP3 file for a short time on my middleware, if you will. I won't need to do it for every file, just the ones >2 MB, so I don't have to keep going back to the downstream servers and requesting the file from disk again.
Thanks!
Nick
You can certainly store them, and 2 MB is nothing for Redis to store.
Redis is binary-safe, and you don't need to Base64-encode your data; just store it as a byte array via your favorite client.
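For example, with a Node client (ioredis here, purely as an illustration; the key name and TTL are made up), storing the raw bytes with a short TTL looks like this:

    import Redis from "ioredis";

    const redis = new Redis();

    // Store the raw MP3 bytes under a key with a 5-minute TTL; no Base64 needed,
    // since Redis values are binary-safe.
    async function cacheMp3(key: string, mp3: Buffer): Promise<void> {
      await redis.set(key, mp3, "EX", 300);
    }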
One thing I'd consider doing (it might not be worth it with 2 MB of data, but it would be if I were storing video files, for example) is to store the file as a sequence of chunks, and not load everything at once. If your app won't hold many files in memory at once, and the files are not that big, it might not be worth it. But if you're expecting high concurrency, do consider this, as it will save application memory (not Redis memory).
You can do this in a number of ways:
You can store each block as an element of a sorted set with the sequence number as score, and read them one by one, so you won't have to load everything to memory at once.
You can store the file as one complete string, but read it in chunks with GETRANGE.
e.g.
GETRANGE myfile.mp3 0 99999
GETRANGE myfile.mp3 100000 199999
... # until you read nothing, of course
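And a hedged sketch of the same GETRANGE approach from a Node client (ioredis again; getrangeBuffer is assumed to be its Buffer-returning variant of GETRANGE, and the 100 KB chunk size is arbitrary):

    import Redis from "ioredis";

    const redis = new Redis();

    // Read the cached file back in 100 KB pieces; GETRANGE offsets are inclusive.
    // getrangeBuffer is assumed to be ioredis's Buffer-returning form of GETRANGE.
    async function readInChunks(key: string, chunkSize = 100_000): Promise<Buffer> {
      const pieces: Buffer[] = [];
      for (let offset = 0; ; offset += chunkSize) {
        const piece = await redis.getrangeBuffer(key, offset, offset + chunkSize - 1);
        if (piece.length === 0) break; // nothing left to read
        pieces.push(piece);
      }
      return Buffer.concat(pieces);
    }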

WCF streamed message download time

I have a streamed service. The message returned from the operation has a stream as its only body member, which is a stream to a file in the file system. I wonder if there's a way to record, from the server, how much time it takes the client to consume that file?
One way you can go: return from the server not only the stream, but a data structure that contains the file size as well.
On the client, you can use a timer and compare the bytes already read against the elapsed time and the full file size to estimate progress and the total download time.
See this example: http://www.codeproject.com/Articles/20364/Progress-Indication-while-Uploading-Downloading-Fi
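The bookkeeping itself is language-agnostic; here is a rough sketch (in TypeScript, with the WCF plumbing omitted and all names made up) of deriving progress and an estimated total time from bytes read, elapsed time, and the known file size:

    // Generic client-side bookkeeping: given the total size returned by the server,
    // track bytes read vs. elapsed time to report progress and estimate total time.
    function makeProgressTracker(totalBytes: number) {
      const startedAt = Date.now();
      let bytesRead = 0;
      return {
        update(chunkLength: number): void {
          bytesRead += chunkLength;
          const elapsedSec = (Date.now() - startedAt) / 1000;
          const fraction = bytesRead / totalBytes;
          const estimatedTotalSec = fraction > 0 ? elapsedSec / fraction : Infinity;
          console.log(
            `${(fraction * 100).toFixed(1)}% in ${elapsedSec.toFixed(1)}s, ` +
            `estimated total ${estimatedTotalSec.toFixed(1)}s`
          );
        },
      };
    }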

Does writeToFile:atomically: block asynchronous reading?

A few times while my application is in use, I process some large data in the background. (To have it ready when the user needs it; something like indexing.) When this background process finishes, it needs to save the data to a cache file, but since the data is really large, this takes a few seconds.
But at the same time, the user may open a dialog which displays images and text loaded from disk. If this happens while the background process is saving its data, the user interface has to wait until the saving process is completed. (This is not wanted, since the user then has to wait 3-4 seconds until the images and text are loaded from disk!)
So I am looking for a way to throttle the writing to disk. I thought of splitting the data into chunks and inserting a short delay between saving the individual chunks. During this delay, the user interface would be able to load the needed text and images, so the user would not notice a delay.
At the moment I am using [[array componentsJoinedByString:@"\n"] writeToFile:@"some name.dic" atomically:YES]. This is a very high-level API which doesn't allow any customization. How can I write this large data to one file without saving it all in one shot?
Does writeToFile:atomically: block asynchronous reading?
No. An atomic write is like writing to a temporary file and, once that completes successfully, renaming the temporary file to the destination (replacing the pre-existing file at the destination, if one exists).
You should consider how you can break your data up so it is not so slow. If it's all divided by strings/lines and it takes seconds, an easy approach to dividing the database would be by first character. Of course, a better solution can likely be imagined, based on how you access, search, and update the index/database.
…inserting a short delay between saving the individual chunks. During this delay, the user interface would be able to load the needed text and images, so the user would not notice a delay.
Don't. Just implement the write-to-temporary-then-move/replace of the atomic write yourself (writing to a temporary file while indexing and saving, then moving it into place). Then your app can serialize read and write operations explicitly, for fast, consistent, and correct access to these shared resources.
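As a rough, platform-neutral sketch of that write-to-temporary-then-rename idea (shown here with Node's fs module rather than Cocoa, purely for illustration; the paths and the ".tmp" suffix are placeholders):

    import { promises as fs } from "fs";

    // Write the new index to a temporary file, then move it into place.
    // The rename is cheap and near-instant, so readers are blocked only briefly.
    async function saveAtomically(destinationPath: string, data: Buffer): Promise<void> {
      const tempPath = destinationPath + ".tmp";
      await fs.writeFile(tempPath, data);         // the slow part happens on the temp file
      await fs.rename(tempPath, destinationPath); // replace the old file in one step
    }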
Have a look at the NSFileHandle class.
Using a combination of seekToEndOfFile and writeData:(NSData *)data, you can do the work you wish.
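The same chunk-by-chunk writing idea, sketched here with Node's file handles instead of NSFileHandle (the path, chunk size, and pause length are placeholders):

    import { promises as fs } from "fs";

    // Append the data in chunks, yielding briefly between writes so that
    // other disk reads (e.g. the UI loading images) can interleave.
    async function writeInChunks(path: string, data: Buffer, chunkSize = 1 << 20): Promise<void> {
      const handle = await fs.open(path, "w");
      try {
        for (let offset = 0; offset < data.length; offset += chunkSize) {
          await handle.write(data.subarray(offset, offset + chunkSize));
          await new Promise((resolve) => setTimeout(resolve, 10)); // short pause between chunks
        }
      } finally {
        await handle.close();
      }
    }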