Objective-C: Download a Smooth Streaming Video

I am wondering if there is a way, in Objective-C, to download an MP4 video that is set up to stream in a Smooth Streaming format. So far I have tried AFNetworking, NSInputStream, and MPMoviePlayerController to try to access the raw video data, but have come up empty each time.
I would like to then take this video data and save it to disk as an MP4 to be played offline. The URL looks something like this:
http://myurl.com/videoname.ism/manifest(format=m3u8-aapl)

I am going to assume that you are asking about an HTTP Live Streaming video, as indicated by your example URL, rather than a Smooth Streaming video. If this is not the case, please leave a comment and I will edit the answer to cover Smooth Streaming instead.
Structure of an HTTP Live Streaming video
There are multiple versions of HTTP Live Streaming (HLS); the newer ones add proper support for multilanguage audio and captions, which complicates the scenario significantly. I will assume that you are not interested in such features and will focus on the simple case.
HLS has a three-layer structure:
At the root, you have the Master Playlist. This is what the web server provides when you request the video root URL. It contains references to one or more Media Playlists.
A Media Playlist represents the entire video for one particular configuration. For example, if the media is encoded using two quality levels (such as 720p and 1080p), there will be two Media Playlists, one for each. A Media Playlist contains a list of references to the media segments that actually contain the media data.
The media segments are MPEG Transport Streams which contain a piece of the data streams, generally around 10 seconds per file.
When multilanguage features are out of the picture, it is valid to think of an HLS video as multiple separate videos split into 10-second chunks - each containing the same content, but at a different quality level.
Each of the above entities - Master Playlist, Media Playlist, each media segment - is downloaded separately by a player using standard HTTP file download mechanisms.
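To make this concrete, here is a minimal, hypothetical example of the two playlist levels (real playlists carry more attributes, but the shape is the same). The Master Playlist lists the quality levels:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1920x1080
1080p.m3u8
And a Media Playlist such as 720p.m3u8 lists the media segments:
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXT-X-ENDLIST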
Putting the pieces back together
All the information a media player requires is present in the media segments - you can mostly ignore the Master Playlist and Media Playlist as their only purpose is to give you the URLs to the media segments.
Thankfully, the MPEG Transport Stream format is very simple in nature. All you need to do in order to put the media segments back together is to concatenate them together. That's it, really!
Pseudocode
I am going to assume that you are not asking about how to perform HTTP requests using Objective-C, as there are many other answers on Stack Overflow on that topic. Instead, I will focus on the algorithm you need to implement.
First, you simply need to download the Master Playlist.
masterPlaylist = download(rootUrl);
The Master Playlist contains both comment lines and data lines. Each data line is a reference to a Media Playlist. Note that the lowest quality level for HLS will generally only have the audio stream. Let's assume here you care about whatever the first quality level in the file is, for simplicity's sake.
masterPlaylistLines = masterPlaylist.split('\n');
masterPlaylistDataLines = filter(masterPlaylistLines, x => !x.startsWith("#"));
firstMasterPlaylistDataLine = masterPlaylistDataLines[0];
This data line contains the URL of the Media Playlist, usually relative to the Master Playlist's URL. Let's download it. The resolve() step must perform proper relative-URL resolution, not simple string concatenation.
mediaPlaylistUrl = resolve(rootUrl, firstMasterPlaylistDataLine);
mediaPlaylist = download(mediaPlaylistUrl);
The Media Playlist, in turn, is formatted the same but contains references to media segments. Let's download them all and append them together.
mediaPlaylistLines = mediaPlaylist.split('\n');
mediaPlaylistDataLines = filter(mediaPlaylistLines, x => !x.startsWith("#"));
foreach (dataLine : mediaPlaylistDataLines)
{
    // Segment URLs are relative to the Media Playlist's URL, not the root URL,
    // so resolve() is again proper relative-URL resolution, not concatenation.
    mediaSegment = download(resolve(mediaPlaylistUrl, dataLine));
    appendToFile("Output.ts", mediaSegment);
}
The final output will be a single MPEG Transport Stream file, playable on most modern media players. You can use various free tools such as FFmpeg if you wish to convert it to another format.
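For completeness, here is a minimal Objective-C sketch of the same algorithm. It assumes blocking I/O is acceptable (run it on a background queue); error handling, retries, and quality-level selection are omitted, and the URL is the placeholder from the question.
#import <Foundation/Foundation.h>

// Returns the non-comment, non-empty lines of a playlist.
static NSArray<NSString *> *DataLines(NSString *playlist) {
    NSMutableArray<NSString *> *result = [NSMutableArray array];
    for (NSString *line in [playlist componentsSeparatedByString:@"\n"]) {
        NSString *trimmed = [line stringByTrimmingCharactersInSet:
            [NSCharacterSet whitespaceAndNewlineCharacterSet]];
        if (trimmed.length > 0 && ![trimmed hasPrefix:@"#"]) {
            [result addObject:trimmed];
        }
    }
    return result;
}

void DownloadHLSVideo(void) {
    NSURL *rootUrl = [NSURL URLWithString:
        @"http://myurl.com/videoname.ism/manifest(format=m3u8-aapl)"];

    // 1. Download the Master Playlist and take the first quality level.
    NSString *master = [NSString stringWithContentsOfURL:rootUrl
                                                encoding:NSUTF8StringEncoding
                                                   error:NULL];
    NSString *firstVariant = DataLines(master).firstObject;

    // 2. Resolve the Media Playlist URL relative to the root URL -
    //    proper relative resolution, not string concatenation.
    NSURL *mediaPlaylistUrl = [NSURL URLWithString:firstVariant
                                     relativeToURL:rootUrl];
    NSString *mediaPlaylist = [NSString stringWithContentsOfURL:mediaPlaylistUrl
                                                       encoding:NSUTF8StringEncoding
                                                          error:NULL];

    // 3. Download each media segment and append it to the output file.
    NSString *outputPath = @"Output.ts";
    [[NSFileManager defaultManager] createFileAtPath:outputPath
                                            contents:nil
                                          attributes:nil];
    NSFileHandle *output = [NSFileHandle fileHandleForWritingAtPath:outputPath];
    for (NSString *segmentLine in DataLines(mediaPlaylist)) {
        // Segment URLs are relative to the Media Playlist's URL.
        NSURL *segmentUrl = [NSURL URLWithString:segmentLine
                                   relativeToURL:mediaPlaylistUrl];
        NSData *segment = [NSData dataWithContentsOfURL:segmentUrl];
        [output writeData:segment];
    }
    [output closeFile];
}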

Related

Adding Photo to vCard

I'm trying to create a vCard containing the text below:
BEGIN:VCARD
VERSION:3.0
PHOTO;VALUE=uri:https://upload.wikimedia.org/wikipedia/commons/2/25/Intel_logo_%282006-2020%29.jpg
N:Raven;Test;;;
END:VCARD
According to this documentation (see the screenshot of the part I'm talking about), I tried base64 and it works fine (the Contacts app loads the image), but with the URI it does not work (the Contacts app does not load the image).
To avoid making a large file, my goal is to have a URL in my vCard.vcf file, not base64 data; I'm stuck understanding what's wrong with my vCard.
Basically, what I'm trying to make is a vCard containing a photo that is fetched from the given URL and shown in the contacts app of whatever OS the user opens it on (Windows/Android/iOS/macOS).
I'm not using base64 because it makes my vCard file so big.
External URLs are probably blocked by most programs, just as loading external images is blocked. It's a massive privacy concern.
Maybe hosting it on a service like Google Cloud would help, in that you can edit the Content-Type and cache metadata attributes? It's my novice understanding that smartphone OSes are particularly wary of "unknown" file properties - probably for good reason.
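Since you mention that base64 embedding does work, here is a hedged Objective-C sketch of generating such a vCard programmatically, in case file size is the only objection; the image path and name are placeholders. Note that RFC 2426 strictly requires long lines to be folded at 75 octets, though many parsers tolerate unfolded lines:
#import <Foundation/Foundation.h>

// Placeholder path - substitute your own image.
NSData *imageData = [NSData dataWithContentsOfFile:@"/path/to/photo.jpg"];
NSString *base64 = [imageData base64EncodedStringWithOptions:0];
NSString *vcard = [NSString stringWithFormat:
    @"BEGIN:VCARD\r\n"
    @"VERSION:3.0\r\n"
    @"PHOTO;ENCODING=b;TYPE=JPEG:%@\r\n"  // inline base64 instead of a URI
    @"N:Raven;Test;;;\r\n"
    @"END:VCARD\r\n", base64];
[vcard writeToFile:@"vCard.vcf"
        atomically:YES
          encoding:NSUTF8StringEncoding
             error:NULL];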

How can I get IMSC XML from an HLS manifest?

I am reading about IMSC, and the docs say it should be in XML: https://developer.mozilla.org/en-US/docs/Related/IMSC
Meanwhile, I have an RTMP stream with embedded caption data from the HLS manifest. When I look at the fragments, it all looks like binary to me rather than XML. I actually checked all network traffic from the browser and only see the manifest and fragment calls. In sample players online I DO see the captions getting built up and displayed, but I'm not sure how they go from Manifest -> XML.
As far as I can tell devs should be using https://developer.mozilla.org/en-US/docs/Related/IMSC/Using_the_imscJS_polyfill if they want to show live captions.
IMSC is carried inside fragmented MP4 files; the captions are not standalone text files like WebVTT.

Getting the original photo from Picasa Web

I am using the Picasa API (in Python) with a small utility I wrote. When I try to download a high-res photo that I have uploaded, I can't get the original one - the resolution is the same, but the size in bytes is different, by as much as half of the size.
Two months ago, I succeeded in downloading the original file with the additional imgmax=d parameter, which results in the elements referencing the original uploaded photo, including all original Exif data, but now it is impossible.
Is there a way to get the original photo (not just the resolution, but also the size) via the API?
Thank you, Jane.

LiveLeak: API / Video Metadata

The only slightly helpful thing I could find regarding a LiveLeak API was this question on Stack Exchange: CURL: grabbing liveleak video.
If I only have the video URL (e.g. http://www.liveleak.com/view?i=numbered_videoid), is there a way to get the video metadata without pulling the whole page? XML or JSON, ideally?
Thanks!
LiveLeak's 'API' is at best their RSS Feeds.
The best I could do was to download the 'internal' page for a certain video ID by appending &ajax=1 to the URL:
http://mobile.liveleak.com/view?i=100_1338007444&ajax=1
This saved 10.06 KB of bandwidth (about 32%).
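If you only need the page contents from Objective-C (the context of the other questions here), a minimal sketch of fetching that lighter 'ajax' page might look like the following; since no JSON/XML endpoint exists, the metadata still has to be scraped out of the returned HTML, e.g. with NSRegularExpression:
#import <Foundation/Foundation.h>

NSURL *url = [NSURL URLWithString:
    @"http://mobile.liveleak.com/view?i=100_1338007444&ajax=1"];
NSString *page = [NSString stringWithContentsOfURL:url
                                          encoding:NSUTF8StringEncoding
                                             error:NULL];
// Extract the title, description, etc. from the HTML in `page` by hand.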

using appengine blobs for binary data in an obj-c app

I'm writing an Objective-C app and would like to upload a binary file a few megabytes in size to my App Engine server (Python). I'm guessing I need to use the blob entity for this, but am unsure how to go about it. I've been using HTTP requests and responses to send and receive data up to now, but the data has been encoded in strings. Can someone advise how I'd do the same with blobs from an Objective-C app? I see some examples that involve HTTP requests, but they seem geared toward web pages, and I'm not terribly familiar with that. Are there any decent tutorials or walkthroughs?
I'm basically not completely sure, if I'm supposed to encode the data into the HTTP request and send it back through the response, how to get the binary data into the HTTP request from the client, and how to send it back properly from the server when downloading my binary data. I'm thinking the approach has to be totally different from the param1=val&param2=val2 style of encoding values into the request that I'm used to, but I'm uncertain.
Should I be using the Blobstore service for this? One important note: I've heard there is a 1 MB limit on blobs, but I have audio files 2-3 MB in size that I need to store (at the very least 1.8 MB).
I recently had to do something similar, though it was binary data over a socket connection: to the client as XML, to the server as a data stream. I ended up base64-encoding the binary data when sending it back and forth. It's a bit verbose, but especially on the client side it made things easier to deal with - no special characters to worry about in my XML. I then translated it into real binary with NSData. I used this code to do the encoding and decoding; search for "cyrus" to find the snippet I used - there are a few there that would work.
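As an aside, on current systems NSData has base64 support built in, so a third-party snippet is no longer needed; a minimal sketch, with a placeholder file path:
#import <Foundation/Foundation.h>

NSData *binary = [NSData dataWithContentsOfFile:@"/path/to/file.bin"]; // placeholder
NSString *encoded = [binary base64EncodedStringWithOptions:0];
NSData *decoded = [[NSData alloc] initWithBase64EncodedString:encoded
                                                      options:0];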
In your case I would change your HTTP request to POST the data in the request body rather than putting it all in the URL. If you're not sure what the difference is, have a look here.
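A minimal Objective-C sketch of such a POST, assuming a hypothetical /upload handler on your App Engine app; the endpoint and file path are placeholders, and NSURLSession is newer than the APIs of that era, but the idea is the same:
#import <Foundation/Foundation.h>

// Placeholder file and endpoint.
NSData *audioData = [NSData dataWithContentsOfFile:@"/path/to/audio.caf"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"https://yourapp.appspot.com/upload"]];
request.HTTPMethod = @"POST";
[request setValue:@"application/octet-stream" forHTTPHeaderField:@"Content-Type"];
request.HTTPBody = audioData;

NSURLSessionDataTask *task = [[NSURLSession sharedSession]
    dataTaskWithRequest:request
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
          // The server's response can carry the ID of the newly stored blob.
      }];
[task resume];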
I'm not as familiar with Python, but you could try here for help on that end.
Hope that helps.
Edit - it looks like blobs are the way to go. Have a look at this link for the string/blob type as well as this link for more info on working with the blob.
There are three questions in one here:
Should you use a blob for binary data?
How do you post binary data, and use it from App Engine?
How do you retrieve binary data from App Engine?
I can't answer whether you "should" use blobs; only you would know the answer to that, and it greatly depends upon the type of data you are trying to store and how it will be used. Let's take an image as an example (probably the most popular use case for this). You want users to take a photo with their phone, upload it, and then share it with other users. That's a good use of blobs, but as @slycrel suggests, you'll run into limitations on record size. This can be workable; for example, you could use the Python Imaging Library (PIL) to downsize the image.
To post binary data, see this question. It would be best to cache two copies, a thumbnail and a full size. That way the resizing only has to happen once, on upload. If you want to go one better, you can use the new background jobs feature of App Engine to queue up the image processing for later. Either way, you'll want to return the ID of the newly created blob so you can reference it from the device without an additional HTTP request.
To retrieve data, I think the best approach would be to treat the blob as its own resource. Adjust your routes so that any given blob has a unique URL:
http://myweb/images/(thumbnail|fullsize)/<blobid>.(jpg|png|gif)
where <blobid> is dynamic, and jpg, png, or gif can be used to get the particular type of image; thumbnail or fullsize retrieves the smaller or larger version you saved when it was posted.
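On the device side, retrieving such a blob is then an ordinary HTTP download; a minimal Objective-C sketch against a hypothetical URL of that shape:
#import <UIKit/UIKit.h>

NSURL *imageUrl = [NSURL URLWithString:
    @"http://myweb/images/thumbnail/12345.jpg"]; // hypothetical blob URL
NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
UIImage *image = [UIImage imageWithData:imageData];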