Understanding HLS implementation? - file-upload

Users will upload videos from the browser; these need to be stored on the server and played back later. My first pass through Google suggests I should use HTTP Live Streaming (HLS) here.
But I am not sure how it works internally.
There are three components in the above workflow: the client, the server, and the data store for saving and retrieving videos.
Save flow:
I believe I need to plug in an HLS client for sending the streaming data.
Does the client itself divide the file into chunks while sending, and maintain a chain of these chunks where each chunk points to the next one? I am assuming the server is dumb here and works the same way as regular HTTP upload functionality, with no other intelligence required.
But I am not sure how the HLS server-side component works: will it save the upload as a single file, or is the file split into multiple files which are then saved to disk?
I believe it stores the upload as a single file, like a regular HTTP file upload?
Retrieval part:
In a normal HTTP file download, the client asks for the file and the server sends the response back in chunks, but all of those chunks are sent against the same request.
I believe that HLS, by contrast, is pull-based: the client initiates a separate pull request for each segment of the stream. In each pull the client learns the file name of the next chunk and sends a new request to the server for it. So for the server it is just a series of regular HTTP file download requests, and all the intelligence lies with the client?
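If my mental model is right, playback would just be a series of ordinary HTTP GETs, something like this (paths made up):

GET /videos/123/index.m3u8      -> playlist listing the segment file names
GET /videos/123/segment000.ts   -> first few seconds of media
GET /videos/123/segment001.ts   -> next segment, and so on until the playlist ends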

Save flow: When you upload a video, it must be converted into HLS format. You can use FFmpeg to do that. You'll end up with one or more manifest files plus all the segments of the video.
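For example, the conversion could be shelled out from the upload handler roughly like this; a minimal Ruby sketch where the paths and the 10-second segment length are just illustrative defaults, and the ffmpeg binary is assumed to be on the PATH:

require "fileutils"

# Convert an uploaded video into an HLS playlist plus .ts segments.
def convert_to_hls(input_path, output_dir)
  FileUtils.mkdir_p(output_dir)
  system(
    "ffmpeg", "-i", input_path,
    "-codec", "copy",                 # copy streams when codecs are already HLS-compatible
    "-hls_time", "10",                # target segment duration in seconds
    "-hls_list_size", "0",            # keep every segment in the playlist (VOD style)
    "-hls_segment_filename", File.join(output_dir, "segment%03d.ts"),
    File.join(output_dir, "index.m3u8")
  )
end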
Retrieval part:
The player will read the manifest file to know which segments to request. I've written a post on how HLS playback works with the manifest files: https://api.video/blog/video-trends/what-is-hls-video-streaming-and-how-does-it-work
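For illustration, a minimal VOD playlist produced by the save flow above might look like this (segment names match the FFmpeg sketch):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment000.ts
#EXTINF:10.0,
segment001.ts
#EXT-X-ENDLIST

The player fetches index.m3u8 first, then requests each listed segment with a plain GET, exactly as the question guessed.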

Related

Is there a way to upload large files in chunks/blocks through the Amazon WorkDocs API or its client SDKs?

I have completed the file upload part in WorkDocs via its REST APIs.
But how does WorkDocs handle large file uploads? I saw the InitiateDocumentVersionUpload API, which does not indicate any restriction on file size.
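For context, the flow that worked for me on normal-sized files was roughly the following; a sketch assuming the aws-sdk-workdocs gem, with placeholder IDs and file names:

require "aws-sdk-workdocs"
require "net/http"
require "uri"

client = Aws::WorkDocs::Client.new(region: "us-east-1")

# Step 1: ask WorkDocs for a signed upload URL for a new document version.
resp = client.initiate_document_version_upload(
  parent_folder_id: "FOLDER_ID",        # placeholder
  name: "report.pdf",
  content_type: "application/pdf"
)

# Step 2: PUT the entire file to the signed URL in a single request.
uri = URI(resp.upload_metadata.upload_url)
put = Net::HTTP::Put.new(uri)
resp.upload_metadata.signed_headers.each { |k, v| put[k] = v }
put.body = File.binread("report.pdf")
Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(put) }

# Step 3: mark the version active so it becomes visible.
client.update_document_version(
  document_id: resp.metadata.id,
  version_id: resp.metadata.latest_version_metadata.id,
  version_status: "ACTIVE"
)

Whether step 2 can be split into multiple parts for very large files is exactly what the documentation leaves unstated.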

JMeter File Upload - Raw contents

I have a very peculiar scenario where the raw data of a file has to be sent. I have tried the following options and still wasn't successful.
HTTP Request with form-data disabled, but it still sends the payload as multipart, which the system does not accept.
Sending the file contents in the request body via the following methods. They were successful, but the uploaded file was encoded in some format, so the MD5 hash of the original file and the uploaded file don't match, and the uploaded file differs from the original:
FileToString method
Reading the file using an HTTP Request, capturing the response, and passing it into the body of the file upload request
Using HTTPS Raw Data. Since it's an HTTPS request, this cannot be used.
All possible encoding formats available, but nothing worked, as the application expects the raw data without any encoding.
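For clarity, what the application expects is the file bytes sent verbatim, with no multipart wrapping and no re-encoding; a minimal Ruby sketch of such a request, with a placeholder URL:

require "net/http"
require "uri"

# Hypothetical endpoint; the real URL, method, and headers should come
# from a captured successful request.
uri = URI("https://example.com/upload")

request = Net::HTTP::Post.new(uri)
request["Content-Type"] = "application/octet-stream"
request.body = File.binread("file.bin")   # raw bytes, no encoding applied

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code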
You're supposed to provide an example of a successful request and the file you're uploading; they can be captured using a sniffer tool like Wireshark or Fiddler. Only that way will we be able to come up with the relevant JMeter configuration required to replicate the request.
In the meantime I can only suggest recording the request using JMeter's HTTP(S) Test Script Recorder:
Start JMeter's HTTP(S) Test Script Recorder
Import JMeter's certificate into your browser (or system certificates storage if the upload is being performed by other application), see HTTPS recording and certificates chapter of JMeter's documentation on HTTP(S) Test Script Recorder
Copy the file you're going to upload to the "bin" folder of your JMeter installation (or JMeter's current working directory if you're launching it from a desktop shortcut or similar), see Recording File Uploads with JMeter for more details
Perform the upload in the browser (or other application)
JMeter should intercept the request and generate a proper HTTP Request sampler and HTTP Header Manager

Twilio - Play audio file stored on S3

I'm trying to play an audio file using the <Play> verb, but Twilio is making a POST request to retrieve it instead of a GET, and S3 doesn't accept it.
The file is this one
And here's the request and the response on Twilio's console.
Any ideas on how to make this work? Thanks!
My mistake: the issue was with the conference waitUrl, which I fixed by specifying the waitMethod.
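For anyone hitting the same thing: <Conference> requests its waitUrl with POST by default, so pointing it at a static S3 file needs an explicit waitMethod="GET". A sketch with a placeholder bucket URL and room name:

<Response>
  <Dial>
    <Conference waitUrl="https://my-bucket.s3.amazonaws.com/hold-music.mp3"
                waitMethod="GET">my-room</Conference>
  </Dial>
</Response>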

using content-length when downloading a file using WCF Rest?

We are developing a web application. Inside that application, to download a file, I have created a WCF REST service that downloads files based on this link: Download using WCF Rest. The purpose is to check for user authentication before downloading. I used the streaming concept to download the file. I have now found out a few things.
When the user downloads the file, he is not able to see the file size or the time remaining. I analyzed this and found the reason: the response uses "Transfer-Encoding: chunked" in the header, so the file is downloaded in chunks. One advantage is that memory consumption on the server stays low even when many users are downloading a file. So I thought of adding a "Content-Length" header instead, but found out that you can use only one of the two headers, not both (the two framing styles are sketched after the list below). I then looked at how Hotmail and Gmail download attachments. From my investigation, Hotmail uses the chunked header whereas Gmail uses the Content-Length header. Gmail also checks whether the session is active and then downloads the file accordingly. I want to achieve the following:
a) Like Gmail, I want to check whether the session is active and then download the files accordingly. What would be the method to implement this?
b) When downloading the file, I want to use the Content-Length header instead of the chunked header, while keeping memory consumption low. Can we achieve that in WCF REST? If so, how?
c) Is it possible to add a header in WCF that will display the file size in the browser's Downloads window?
d) When downloading inline images from WCF, I found that the image is not cached on the local machine after loading. I was expecting that once an image is shown in an HTML page, it gets cached automatically, and the next time the user visits the page the image loads from cache instead of from the server. What option can I use to cache inline images? Are there any headers I need to specify when serving an inline image?
e) When I download a zip file via WCF in the iPhone Chrome browser, it doesn't download at all, but the same link works in the Android Chrome browser. What could be the problem? Am I missing a header in WCF?
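For context on (b) and (c), the two framing styles differ only in the response headers; the size and file name below are illustrative:

HTTP/1.1 200 OK
Content-Type: application/zip
Transfer-Encoding: chunked

versus

HTTP/1.1 200 OK
Content-Type: application/zip
Content-Length: 10485760
Content-Disposition: attachment; filename="report.zip"

Only the second form gives the browser a total size to display and to estimate the time remaining from.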
Are there any methods that will achieve the above?
Regards,
Jollyguy

Does Amazon S3 help anything in this case?

I'm thinking about whether to host uploaded media files (video and audio) on S3 instead of locally. I need to check the user's permissions on each download.
So there would be an action like get_file, which first checks the user's permissions, then fetches the file from S3 and sends it on with send_file (or send_data, since the bytes come from S3 rather than a local path).
def get_file
  if current_user.can_download?(params[:file_id])
    upload = Upload.find(params[:file_id])   # model name is illustrative
    # First download the file from S3 via the mounted CarrierWave uploader,
    # then stream it back to the user.
    send_data upload.file.read, filename: upload.file.identifier
  else
    head :forbidden
  end
end
But in this case, the server (unnecessarily) downloads the file first from S3 and then sends it to the user. I thought the use case for S3 was to bypass the Rails/HTTP server stack for reduced load.
Am I thinking this wrong?
PS. I'm using CarrierWave for file uploads. Not sure if that's relevant.
Amazon S3 provides something called RESTful authenticated reads: essentially time-limited URLs to otherwise protected content.
CarrierWave supports this. Simply set the S3 access policy to authenticated read:
config.s3_access_policy = :authenticated_read
and then model.file.url will automatically generate the RESTful URL.
Typically you'd embed the S3 URL in your page, so that the client's browser fetches the file directly from Amazon. Note however that this exposes the raw unprotected URL. You could name the file with a long hash instead of something predictable, so it's at least not guessable -- but once that URL is exposed, it's essentially open to the Internet. So if you absolutely always need access control on the files, then you'll need to proxy it like you're currently doing. In that case, you may decide it's just better to store the file locally.
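For instance, the permission check can stay in the controller while the bytes bypass Rails entirely; model and method names here are illustrative:

def get_file
  upload = Upload.find(params[:file_id])
  if current_user.can_download?(params[:file_id])
    # Redirect the browser straight to the time-limited S3 URL generated
    # by CarrierWave; the file bytes never pass through the Rails process.
    redirect_to upload.file.url
  else
    head :forbidden
  end
end

Note the URL is only valid for the configured timeout, so a user can't durably share it, but anyone who obtains it before expiry can fetch the file.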