How can I get IMSC XML from an HLS manifest? - rtmp

I am reading about IMSC, and the docs say it should be XML: https://developer.mozilla.org/en-US/docs/Related/IMSC
Meanwhile, I have an RTMP stream with embedded caption data, served via an HLS manifest. When I look at the fragments, they all look like binary to me rather than XML. I checked all the network traffic from the browser and only see the manifest and fragment calls. In sample players online I DO see the captions getting built up and displayed, but I'm not sure how they go from manifest to XML.
As far as I can tell, devs should be using https://developer.mozilla.org/en-US/docs/Related/IMSC/Using_the_imscJS_polyfill if they want to show live captions.

IMSC is carried inside fragmented MP4 files; it is not a standalone text file like WebVTT. The XML documents are stored as timed samples in a subtitle track (sample entry type stpp) within the fMP4 segments, which is why the fragments look like binary: the player extracts each sample and hands the XML to a renderer such as imscJS.
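If you want to see the XML yourself, here is a minimal sketch (in Python, purely for illustration) that walks the top-level MP4 boxes of a downloaded fragment and dumps the mdat payloads. It assumes an unencrypted, subtitle-only (stpp) fragment, where each mdat payload is one IMSC/TTML document:

import struct
import sys

def iter_boxes(data):
    # Walk top-level ISO BMFF boxes: 4-byte big-endian size, 4-byte type.
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        header = 8
        if size == 1:
            # A size of 1 means a 64-bit "largesize" follows the type.
            size = struct.unpack(">Q", data[offset + 8:offset + 16])[0]
            header = 16
        elif size == 0:
            # A size of 0 means the box extends to the end of the file.
            size = len(data) - offset
        if size < header:
            break
        yield box_type, data[offset + header:offset + size]
        offset += size

with open(sys.argv[1], "rb") as f:
    fragment = f.read()

for box_type, payload in iter_boxes(fragment):
    if box_type == b"mdat":
        # In a subtitle-only fragment the mdat payload is the TTML XML.
        print(payload.decode("utf-8", errors="replace"))

Running it against one of the binary-looking fragments (e.g. python dump_ttml.py segment.m4s) should print the IMSC document, assuming the fragment really carries an stpp track.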

Related

Adding Photo to vCard

I'm trying to create a vCard containing the text below:
BEGIN:VCARD
VERSION:3.0
PHOTO;VALUE=uri:https://upload.wikimedia.org/wikipedia/commons/2/25/Intel_logo_%282006-2020%29.jpg
N:Raven;Test;;;
END:VCARD
According to this documentation (see the screenshot of the part I'm talking about), I tried base64 and it works fine (the Contacts app loads the image), but with the URI it does not work (the Contacts app does not load the image).
To avoid making a large file, my goal is to have a URL in my vCard.vcf file, not base64 data; I'm stuck understanding what's wrong with my vCard.
Basically, what I'm trying to make is a vCard containing a photo that is fetched from the given URL and shown in the Contacts app of whatever OS the user opens it on (Windows/Android/iOS/macOS). I'm not using base64 because it makes my vCard file too big.
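For reference, the inline form that does load correctly looks like this (standard RFC 2426 syntax; the base64 data here is a truncated placeholder):

BEGIN:VCARD
VERSION:3.0
PHOTO;ENCODING=b;TYPE=JPEG:/9j/4AAQSkZJRgABAQAAAQABAAD...
N:Raven;Test;;;
END:VCARD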
External URLs are probably blocked by most programs, just as loading external images is blocked in most mail clients. It's a massive privacy concern.
Maybe hosting it on a service like Google Cloud would help, in that you can edit the Content-Type and cache metadata attributes? It's my novice understanding that smartphone OSes are particularly wary of "unknown" file properties, probably for good reason.

SONOS Playback error "These Songs are not available for Streaming from APPNAME"

I've been preparing a POC to integrate our music service with Sonos, and I've written a simple service for testing purposes. I've implemented the three methods essential to playing a URL: "getMetadata", "getMediaMetadata", and "getMediaURI".
First I tried media type "track" and returned a hard-coded song URL (an .mp4 file) from the "getMediaURI" method. It worked fine, as expected.
Later, when I tried a 7digital URL, playback fails with "These Songs are not available for Streaming from APPNAME". I've tried changing the MIME type values as well, but nothing seems to work. Type: audio/x-m4a
Note: the same 7digital URL plays fine in a browser.
Am I doing anything wrong here? Am I missing anything? Any help is really appreciated.
Thanks.
Looking at the documentation on Sonos' developer website, it doesn't seem that audio/x-m4a is a supported MIME type. Do you know the audio format of 7digital's track for sure? If it's MP4 or M4A, I would try setting the MIME type to one of these: audio/mp4, audio/aac, application/x-mpegURL, application/vnd.apple.mpegURL, audio/x-mpegurl.
Also make sure that your track's sample rate is supported, as described in the table at the link below.
Link: http://musicpartners.sonos.com/node/464
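For what it's worth, this is roughly where the MIME type lives in a getMediaMetadata response; treat the element names as approximate (based on my reading of the SMAPI docs) and the IDs and titles as made up:

<getMediaMetadataResponse>
  <getMediaMetadataResult>
    <id>track:12345</id>
    <itemType>track</itemType>
    <title>Sample Track</title>
    <mimeType>audio/mp4</mimeType>
    <trackMetadata>
      <artist>Sample Artist</artist>
      <duration>240</duration>
    </trackMetadata>
  </getMediaMetadataResult>
</getMediaMetadataResponse>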

Dynamically Update a Map

I have a bit of a situation. I was assigned a task to create a system that takes a KML file and updates markers dynamically on a map. I'm currently generating the KML from a Wireshark dissection and now need a way to feed that data into a mapping tool. There are a few constraints:
The PC that will be running the system will not have internet access, so I will need to cache the map data.
Each marker might move, so I need to erase the marker's previous location and draw it at its new location. I do have a sequence ID I can identify each marker with, but I don't know how I'll apply the updated location.
It needs to update dynamically: a system sends data, my Wireshark dissector dissects it and exports it to KML, and this KML needs to be reloaded into the map continuously.
The basic idea is like looking at Google Maps and watching your car move as it tracks your GPS location, except I need this tracking to work for many more targets than just one.
I'm sorry that I currently have no foundation on where to start, but that's why I'm asking for your guidance. I've researched ArcGIS, QGIS, Google Earth, and Google Maps, but I haven't found a way to upload data dynamically or refresh the view.
Anything that could help me start finding a solution for this task would be appreciated.
Thank you for your time.
I have experience using Leaflet.js, which lets you use Bing Maps, Google Maps, or the open-source MapQuest tiles to display mobile and car tracking (for GM OnStar). I am also currently writing KML to display flight tracking on Google Earth.
First, I am not sure this part is possible:
you have a machine that does not connect to the internet
you want to use map resources that are on the internet
So I will assume that your machine can access the internet. Then there are many solutions.
You may try the simple tutorial at http://leafletjs.com/
It will give you an idea of how to do it.
Also, search for Google Earth examples (on which I can display a 3D tracking route).
Hope this helps.
Beyond the map itself, see my sample for "Dynamic update data on Google Earth" at the following:
https://sites.google.com/site/canadadennischen888/home/kml/auto-refresh-3d-tracking
(The following is copied from my link, which talks about KML for 3D Google Earth, but I believe you can adapt it to 2D if you have to avoid Google Earth.)
...
How to make dynamic, auto-refreshing 3D tracking:
Prepare a RESTful service that generates a KML file from the DB
(sample as in https://sites.google.com/site/canadadennischen888/home/kml/3d-tracking; a minimal sketch of such a service follows the KMZ sample below)
My other JSP code generates a KMZ file that has a link to my RESTful service; the KMZ file refreshes onInterval (as in the sample at the bottom).
The JSP web page allows the user to download the KMZ file.
When Google Earth opens the KMZ file, it auto-refreshes to get new data from the RESTful service.
On every refresh, the server sends the latest KML data to Google Earth.
KMZ sample:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2"
     xmlns:kml="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
  <NetworkLink>
    <name>Dennis_Chen_Canada#Hotmail.com</name>
    <open>1</open>
    <Link>
      <href>http://localhost:9080/google-earth-project/rest/kml/10001/20002</href>
      <refreshMode>onInterval</refreshMode>
    </Link>
  </NetworkLink>
</kml>
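To give step 1 above more shape, here is a minimal sketch of a RESTful endpoint that rebuilds the KML on every request. It uses Python with Flask and an in-memory dict purely for illustration (the original service was JSP backed by a DB), and the route is simplified to a single track ID:

from flask import Flask, Response

app = Flask(__name__)

# Sequence ID -> latest known position; in the real system this would
# be fed by the DB or the Wireshark dissector output.
positions = {"10001": (-73.99, 40.73), "10002": (-73.98, 40.75)}

@app.route("/rest/kml/<track_id>")
def kml(track_id):
    lon, lat = positions[track_id]
    body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        '<Placemark><name>%s</name>'
        '<Point><coordinates>%f,%f,0</coordinates></Point>'
        '</Placemark></Document></kml>' % (track_id, lon, lat)
    )
    return Response(body, mimetype="application/vnd.google-earth.kml+xml")

Because the NetworkLink re-fetches this URL on each interval, every refresh replaces a marker's old position with its new one, which also covers the marker-update requirement in the question.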

Objective-C Download a Smooth Streaming Video

I am wondering if there is a way, in Objective-C, to download an MP4 video that is set up to stream in the Smooth Streaming format. So far I have tried AFNetworking, NSInputStream, and MPMoviePlayerController to try to access the raw video data, but have come up empty each time.
I would then like to take this video data and save it to disk as an MP4 to be played offline. The URL looks something like this:
http://myurl.com/videoname.ism/manifest(format=m3u8-aapl)
I am going to assume that you are asking about an HTTP Live Streaming video, as indicated by your example URL (format=m3u8-aapl), rather than a Smooth Streaming video. If this is not the case, please leave a comment and I will edit the answer to cover Smooth Streaming.
Structure of an HTTP Live Streaming video
There are multiple versions of HTTP Live Streaming (HLS); the newer ones add proper support for multilanguage audio and captions, which complicates the scenario significantly. I will assume that you are not interested in such features and will focus on the simple case.
HLS has a three-layer structure:
At the root, you have the Master Playlist. This is what the web server provides when you request the video root URL. It contains references to one or more Media Playlists.
A Media Playlist represents the entire video for one particular configuration. For example, if the media is encoded using two quality levels (such as 720p and 1080p), there will be two Media Playlists, one for each. A Media Playlist contains a list of references to the media segments that actually contain the media data.
The media segments are MPEG Transport Streams which contain a piece of the data streams, generally around 10 seconds per file.
When multilanguage features are out of the picture, it is valid to think of an HLS video as multiple separate videos split into 10-second chunks, with every video containing the same content at a different quality level.
Each of the above entities - Master Playlist, Media Playlist, each media segment - is downloaded separately by a player using standard HTTP file download mechanisms.
Putting the pieces back together
All the information a media player requires is present in the media segments - you can mostly ignore the Master Playlist and Media Playlist as their only purpose is to give you the URLs to the media segments.
Thankfully, the MPEG Transport Stream format is very simple in nature. All you need to do in order to put the media segments back together is to concatenate them together. That's it, really!
Pseudocode
I am going to assume that you are not asking about how to perform HTTP requests using Objective-C, as there are many other answers on Stack Overflow on that topic. Instead, I will focus on the algorithm you need to implement.
First, you simply need to download the Master Playlist.
masterPlaylist = download(rootUrl);
The Master Playlist contains both comment lines and data lines. Each data line is a reference to a Media Playlist. Note that the lowest quality level in an HLS master playlist is often an audio-only stream. For simplicity's sake, let's assume you care about whatever the first quality level in the file is.
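For illustration, a Master Playlist might look like this (the paths and bandwidths are made up):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=1280x720
720p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1920x1080
1080p/playlist.m3u8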
masterPlaylistLines = masterPlaylist.split('\n');
masterPlaylistDataLines = filter(masterPlaylistLines, x => !x.startsWith("#"));
firstMasterPlaylistDataLine = masterPlaylistDataLines[0];
This data line contains the relative URL of the Media Playlist. Let's download it. The URL resolution code should be smart and understand how to resolve relative URLs, not simply concatenate strings.
mediaPlaylistUrl = resolveUrl(rootUrl, firstMasterPlaylistDataLine);
mediaPlaylist = download(mediaPlaylistUrl);
The Media Playlist, in turn, is formatted the same way but contains references to media segments. Let's download them all and append them together.
mediaPlaylistLines = mediaPlaylist.split('\n');
mediaPlaylistDataLines = filter(mediaPlaylistLines, x => !x.startsWith("#"));
foreach (dataLine : mediaPlaylistDataLines)
{
// resolveUrl() is assumed to be smart, resolving each segment
// against the Media Playlist's URL, not just concatenating strings.
mediaSegment = download(resolveUrl(mediaPlaylistUrl, dataLine));
appendToFile("Output.ts", mediaSegment);
}
The final output will be a single MPEG Transport Stream file, playable in most modern media players. You can use various free tools, such as FFmpeg, if you wish to convert it to another format.
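If pseudocode is too abstract, here is a minimal runnable sketch of the same algorithm in Python, using only the standard library. It assumes an unencrypted stream (no #EXT-X-KEY lines), and urljoin plays the role of the "smart" URL resolution mentioned above:

import urllib.request
from urllib.parse import urljoin

def download(url):
    # Fetch a URL and return the raw response body.
    with urllib.request.urlopen(url) as response:
        return response.read()

def data_lines(playlist_bytes):
    # Keep only the non-comment, non-empty lines of a playlist.
    lines = playlist_bytes.decode("utf-8").splitlines()
    return [line for line in lines if line and not line.startswith("#")]

root_url = "http://myurl.com/videoname.ism/manifest(format=m3u8-aapl)"

# Master Playlist -> first Media Playlist.
first_variant = data_lines(download(root_url))[0]
media_playlist_url = urljoin(root_url, first_variant)

# Media Playlist -> concatenated media segments.
with open("Output.ts", "wb") as output:
    for segment_line in data_lines(download(media_playlist_url)):
        output.write(download(urljoin(media_playlist_url, segment_line)))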

using appengine blobs for binary data in an obj-c app

I'm writing an Obj-C app and would like to upload a binary file a few MB in size to my App Engine server (Python). I'm guessing I need to use the blob entity for this, but I am unsure how to go about it. I've been using HTTP requests and responses to send and receive data up to now, but that data has been encoded as strings. Can someone advise how I'd do the same with blobs from an Obj-C app? I see some examples that involve HTTP requests, but they seem geared toward web pages, and I'm not terribly familiar with that. Are there any decent tutorials or walkthroughs?
I'm basically not sure, if I'm supposed to encode the data into the HTTP request and get it back through the response, how to get the binary data into the HTTP request string on the client and how to send it back properly from the server when downloading my binary data. I'm thinking the approach may have to be totally different from what I'm used to, encoding values into the request in the param1=val&param2=val2 style, but I'm uncertain.
Should I be using the Blobstore service for this? One important note: I've heard there is a 1 MB limit on blobs, but I have audio files 2-3 MB in size that I need to store (at the very least 1.8 MB).
I recently had to do something similar, though it was binary data over a socket connection: XML to the client, a data stream to the server. I ended up base64-encoding the binary data when sending it back and forth. It's a bit wordy, but especially on the client side it made things easier to deal with, since there were no special characters to worry about in my XML. I then translated it into real binary with NSData. I used this code to do the encoding and decoding; search for "cyrus" to find the snippet I used, though there are a few there that would work.
In your case, I would change your HTTP request to POST the data in the request body rather than putting it all in the URL. If you're not sure what the difference is, have a look here.
I'm not as familiar with Python, but you could try here for help on that end.
Hope that helps.
Edit: it looks like blobs are the way to go. Have a look at this link for the string/blob types, as well as this link for more info on working with blobs.
There are three questions in one here:
Should you use a blob for binary data?
How do you post binary data and use it from App Engine?
How do you retrieve binary data from App Engine?
I can't answer whether you "should" use blobs; only you know the answer to that, and it greatly depends on the type of data you are trying to store and how it will be used. Let's take an image as an example (probably the most popular use case). You want users to take a photo with their phone, upload it, and then share it with other users. That's a good use for blobs, but as #slycrel suggests you'll run into limitations on record size: a datastore entity is capped at 1 MB. This can be workable; for example, you could use the Python Imaging Library (PIL) to downsize the image.
To post binary data, see this question. It would be best to cache two copies, a thumbnail and a full-size version; this way the resizing only has to happen once, on upload. If you want to go one better, you can use the new background jobs feature of App Engine to queue up the image processing for later. Either way, you'll want to return the ID of the newly created blob so you can reference it from the device without an additional HTTP request.
To retrieve data, I think the best approach is to treat the blob as its own resource. Adjust your routes so that any given blob has a unique URL:
http://myweb/images/(thumbnail|fullsize)/<blobid>.(jpg|png|gif)
where <blobid> is dynamic, and jpg, png, or gif selects the particular image type; thumbnail or fullsize selects the smaller or larger version saved at upload time. A rough sketch of the server side follows.
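As a sketch only, assuming the Python runtime with webapp2 and the Blobstore API (the routes, handler names, and the "file" form field are made up for illustration):

from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
import webapp2

class GetUploadUrlHandler(webapp2.RequestHandler):
    def get(self):
        # The device first asks for a one-time upload URL, then POSTs
        # the file to it as multipart/form-data under the field "file".
        self.response.out.write(blobstore.create_upload_url('/upload'))

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # Blobstore has already stored the file by the time this runs;
        # return the blob's key so the device can reference it later.
        upload = self.get_uploads('file')[0]
        self.response.out.write(str(upload.key()))

class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, size, blob_id, ext):
        # Streams the blob back without loading it into memory, so the
        # 1 MB limit on datastore entities and responses is avoided.
        self.send_blob(blobstore.BlobKey(blob_id))

app = webapp2.WSGIApplication([
    ('/get_upload_url', GetUploadUrlHandler),
    ('/upload', UploadHandler),
    (r'/images/(thumbnail|fullsize)/([^/.]+)\.(jpg|png|gif)', ServeHandler),
])

Note that the 1 MB limit the asker heard about applies to datastore entities, not to Blobstore, so 2-3 MB audio files are fine when served this way.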