How to determine video codec - objective-c

I want to develop a small Mac (not iPhone) application for self-educational purposes.
What the application should do: just open a video file and show information about its video codec.
The main problem is that I have never worked with media files and I don't know where to start.
Maybe somebody can recommend some articles or even examples?

I strongly recommend using something like FFmpeg to get codec information.
Simply run the following command through NSTask:
ffmpeg -i video.mpg
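A minimal sketch of launching that from Cocoa (the ffmpeg and video paths below are placeholders to adjust; note that ffmpeg prints the stream/codec summary to stderr, not stdout):

#import <Foundation/Foundation.h>

NSTask *task = [[NSTask alloc] init];
task.launchPath = @"/usr/local/bin/ffmpeg"; // adjust to wherever ffmpeg is installed
task.arguments  = @[@"-i", @"/path/to/video.mpg"];

NSPipe *pipe = [NSPipe pipe];
task.standardError = pipe; // the "Stream #0:0: Video: h264 ..." lines go to stderr

[task launch];
[task waitUntilExit];

NSData *output = [[pipe fileHandleForReading] readDataToEndOfFile];
NSString *info = [[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding];
NSLog(@"%@", info);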
Project page:
http://www.ffmpeg.org/
Extracting this information yourself is a LOT of work.
Every video format stores things differently.
Not to mention error-handling and corrupted files.

The codec data is usually held in container formats. To start, you should pick one container format and parse it; a popular choice would be the MPEG-4 container format. Follow the links from this page to get started:
http://en.wikipedia.org/wiki/MPEG-4_Part_14
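To get a feel for box parsing, here is a minimal Objective-C sketch (the file path is a placeholder) that reads the leading 'ftyp' box of an MP4 file; the actual codec identifiers live deeper in the tree, inside the 'stsd' sample-description box under 'moov':

#import <Foundation/Foundation.h>

// Every MP4 box starts with a 4-byte big-endian size and a 4-byte type code.
// The 'ftyp' box is normally first and names the file's "major brand".
NSData *data = [NSData dataWithContentsOfFile:@"/path/to/video.mp4"];
if (data.length >= 12) {
    uint32_t size = 0;
    [data getBytes:&size range:NSMakeRange(0, 4)];
    size = CFSwapInt32BigToHost(size);

    NSString *type  = [[NSString alloc] initWithData:[data subdataWithRange:NSMakeRange(4, 4)]
                                            encoding:NSASCIIStringEncoding];
    NSString *brand = [[NSString alloc] initWithData:[data subdataWithRange:NSMakeRange(8, 4)]
                                            encoding:NSASCIIStringEncoding];
    NSLog(@"first box: '%@' (%u bytes), major brand: '%@'", type, size, brand);
}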

Related

How can I use Adaptive streaming for VODs in Ant Media Server?

I'm using Ant Media Server for streaming. My use case requires me to record the live streams as VODs so that users can access the content later as well.
Like the live streams, I want to apply adaptive bitrate settings to the VODs as well, so that users can get the resolution suited to their network.
I can't find any built-in solution for this yet. Can you please suggest how I can do this?
I'm using S3 to store the recordings.
Thanks.
Thank you for the question. As far as I understand, the live streams are recorded as VoD files.
I think the most efficient way is to do this through HLS. This way, the VoD files are recorded as HLS and multiple bitrates are available; there is no need to transcode again, and they can be played directly. Let me explain this solution step by step.
1. Set the HLS playlist type to event and settings.deleteHLSFilesOnEnded to false. Edit red5-web.properties for your application and add the following settings:
settings.hlsPlayListType=event
settings.deleteHLSFilesOnEnded=false
2. Restart the server:
sudo service antmedia restart
3. Add adaptive bitrates on the web panel.
4. Start live streaming and let Ant Media Server create the HLS (m3u8 and ts) files for each bitrate.
5. Stop live streaming.
6. You can then watch the stream via the master m3u8 file, which is {STREAM_ID}_adaptive.m3u8. It can even be played directly by the embedded player when it's no longer live.
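For example, assuming the default HTTP port 5080 and an application named LiveApp (both placeholders for your own setup), the playable URL would look something like:

http://SERVER_IP:5080/LiveApp/streams/{STREAM_ID}_adaptive.m3u8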
For more information, take a look at this wiki about HLS Playing
Please let me know if this approach helps you.

AWS Media Convert - Generating sprite for video preview on progress bar hovering

Here is my process:
I upload a video in an S3 bucket
I encode the video with AWS MediaConvert
The videos are generated by AWS MediaConvert
I need "something" to create a sprite like the picture below so I can use it to display the video on hover
If anyone has a solution or just an idea, I'll be very interested.
You could code a solution; it shouldn't be too hard. I'm not sure exactly what your architecture is, but you could bootstrap an EC2 instance with ffmpeg installed and script out your thumbnail reel with something like this (see the sketch below). ImageMagick is another solution; see this post.
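For instance, ffmpeg's tile filter can build such a sprite in a single pass; this is only a sketch, and the input name, frame interval, and tile geometry are assumptions to adapt:

ffmpeg -i input.mp4 -vf "fps=1/10,scale=160:-1,tile=10x10" -frames:v 1 sprite.png

This grabs one frame every 10 seconds, scales each to 160 pixels wide, and tiles up to 100 of them into a single sprite image.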
You could probably even run that solution in Lambda. Here's a generic image processing post using that technology.
In either case, once it's up, you can configure S3 to fire an event when a video is dropped, and use Lambda natively, or something like SQS or SNS, to notify your EC2 instance to extract and store the thumbnail reel alongside the video file in S3, using a solution similar to this blog post.
Again, this assumes a lot about your architecture, but hopefully there are some useful pointers in there. Feel free to leave comments if you have questions. Happy to help.

Download chunks of MP4 file from iOS

I am developing an iOS app that synchronises with GoPro cameras.
One of the features requires downloading MP4s from the GoPro (potentially huge).
I basically have a url like: http://10.9.9.5/whatever/video.mp4.
However, I only need parts of the video, let's say between 1:00 and 1:05.
I am thinking of downloading just parts of the MP4 using the HTTP "Range" header. I believe that this is possible and that I will get a bunch of bytes back.
However, is it a valid file? Will I be able to create an MP4? Do I need the MP4 header with the meta information? Have any of you faced this kind of challenge?
I am using Objective C but I believe that this is a general question.
The MP4 file is a container for video, structured around units called boxes. You'll probably have H.264 video in that MP4 file; knowing that, you'll need to know the structure of the file you are trying to chunk.
Depending on the way it was encoded, the metadata box (the 'moov' box) that allows you to locate the correct part of the file will be either at the beginning or at the end of the file, and you'll have to reconstruct a valid MP4 from the data you get from the original file.
You can see a reference for the file format here: http://xhelmboyx.tripod.com/formats/mp4-layout.txt
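As a minimal sketch of the download side (the byte range is a placeholder; the URL is the one from the question), an HTTP Range request in Objective-C could look like this:

#import <Foundation/Foundation.h>

NSURL *url = [NSURL URLWithString:@"http://10.9.9.5/whatever/video.mp4"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
// Ask the server for a specific byte range instead of the whole file.
[request setValue:@"bytes=0-1048575" forHTTPHeaderField:@"Range"];

[[[NSURLSession sharedSession] dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    // A cooperating server answers 206 Partial Content with just those bytes.
    // They are NOT a playable MP4 on their own: you still need to locate and
    // include the 'moov' metadata as described above.
    NSLog(@"received %lu bytes", (unsigned long)data.length);
}] resume];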

SMIL adaptive streaming in Videojs

What is required to use a SMIL file for adaptive streaming in a video.js player? I have created the SMIL file in my Wowza application and it is creating my 4 separate streams and making them available. However, I cannot get my webpage, which uses video.js, to correctly play the SMIL file. Hints on that coding, or where to find the correct documentation, would be greatly appreciated.
There aren't many implementations of SMIL players. I'm sure I've seen Wowza URLs that suggest it will output the SMIL as other formats, something like whatever.smil/manifest.m3u8. That's HLS, which can be played natively on mobile and Safari, and with videojs-contrib-hls elsewhere.
I know the question is old, but I've been struggling with this recently, so I want to share my experience in case anyone is interested. My scenario is very similar: I want to deliver adaptive bitrate streaming from Wowza to clients using video.js.
There is a master link that explains how to set up and run the Wowza Transcoder for live streaming, and how to set up your adaptive bitrate streams using a SMIL file. Following the video there, you can get a stream that uses ABS, but the SMIL file is attached to the stream name, so it is not a solution if you have streams that come to Wowza from another media-server origin and need to be transcoded before being served to the clients. The article mentions a few key things (like the Stream Name Groups), but somehow things don't seem entirely clear, at least to me. So here is some clarification of what I understood from all the articles I read and what I did to achieve ABS:
You can achieve ABS in Wowza either with SMIL files or with Stream Name Groups (NGRP). An NGRP refers to a block of streams defined in the Transcoder template that can be played back using multi-bitrate streaming (dynamically) (<- this is what I used). SMIL files, on the other hand, are used to create a "static" list of streams for multi-bitrate VOD streaming. If you are using Wowza Origin-Edge Delivery, you'll need the .smil file, because NGRPs do not get forwarded to the edge. (Source for all this information: here.)
In case you need the SMIL file, you probably need to generate a new one for every stream, and probably you want to do that in an automated way, so the best way would be through an HTTP request (the link above explains how to achieve this).
In case you can live with NGRP, things are a bit easier:
You need to enable the Wowza Transcoder (this is pretty easy; the steps are in the video I mention above).
You should create your own Transcoder Template with the different stream presets you want to deliver; as an example, you can check the default ones that are already there. The more presets you add, the more work Wowza will need to do whenever a stream comes in, since it will generate a new stream for every preset you have defined.
Now we generate the NGRPs. In your Transcoder Template, you can create as many NGRPs as you want (to clarify: these are groups of streams that you will be able to set as the source in your clients' video player; each NGRP contains the streams the player can switch between when doing adaptive bitrate streaming). For instance, the default template defines NGRPs such as _all and _mobile.
If you play the ngrp "_mobile" in the client's video player, the ABS algorithm in the player will be able to adapt itself and play either the 240p or the 160p stream based on the client's capabilities.
So imagine you have these two NGRPs. In order to play them in video.js, you will need to set the source to:
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_all/playlist.m3u8
or
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_mobile/playlist.m3u8
... based on how many options you want to give the client player for ABS. (For instance: if your targets are old mobile devices, you probably just want to offer a couple of low-bitrate streams.)
(This applies when you're delivering an HLS stream. For another format the suffix changes; for instance, for a DASH stream you would have "/manifest.mpd" instead of "/playlist.m3u8".)
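A minimal sketch of wiring one of those URLs into video.js (the element id is a placeholder, and this assumes an HLS-capable setup such as videojs-contrib-hls is loaded):

var player = videojs('my-player');
player.src({
  src: 'http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_all/playlist.m3u8',
  type: 'application/x-mpegURL'
});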
That is all. There is also a very helpful link in the video.js documentation explaining how it does the bitrate switching: here.
I hope it helps someone! At least clarifying things! :)

Titanium: Best place for storing secret files

Since I can't play an audio file from a DB blob, I have to write it out as a file before I can play it.
Looking at the documentation, my choices are:
Ti.Filesystem.applicationDataDirectory
Ti.Filesystem.tempDirectory
Ti.Filesystem.externalStorageDirectory
Ti.Filesystem.applicationCacheDirectory
Considering that I want my file to be secret, so no other app can see it, what is my best option?
EDIT: The issue is more with Android; I'm afraid that any file browser will be able to find the file.
I'm assuming you're building for Android, because iOS data is sandboxed and not readily accessible by default (until iOS 8). Rather than hiding the file, just encode it using base64encode(), and then decode it with base64decode() when accessing it.
With that said, I've never had to use it, so I don't have an example, but you can read about it in the API docs.
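For what it's worth, an untested sketch of that idea using Titanium's Ti.Utils helpers (audioBlob and file are placeholders for your own blob and file handle; keep in mind this only obfuscates the data, it doesn't encrypt it):

var encoded = Ti.Utils.base64encode(audioBlob); // Ti.Blob holding the base64 text
file.write(encoded);                            // store the obfuscated copy
// ... later, before playing:
var decoded = Ti.Utils.base64decode(file.read());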
Hope that helps.
You haven't mentioned whether you are building an app for iOS or Android. For example, Ti.Filesystem.externalStorageDirectory is available only on Android (SD card).
Anyway, if you want to save an audio file, you should save it in <Application_Home>/Documents, so you should use Ti.Filesystem.applicationDataDirectory. Don't forget to set the remoteBackup flag; see http://docs.appcelerator.com/titanium/3.0/#!/api/Titanium.Filesystem.File-property-remoteBackup
Also have a look at the iOS Data Storage Guidelines for more details:
https://developer.apple.com/icloud/documentation/data-storage/index.html
If you want to save the audio file only because you need to play it, and you don't need to keep it in the filesystem, then it's better to use Ti.Filesystem.tempDirectory.
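As a rough sketch of the first option, saving into the application data directory (audioBlob and the file name are placeholders for your own data):

var file = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, 'audio.mp3');
file.write(audioBlob);     // persist the DB blob as a playable file
file.remoteBackup = false; // iOS only: keep it out of iCloud backups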