Using FFMPEG in an OS X application - objective-c

So I've been wanting to make a real-time live-streaming application. Essentially, the application would send the microphone feed from an Xcode application to a website where it can be viewed in real time. Is FFMPEG the best solution for this? How would I go about doing this? If that's too broad, then how do I use the FFMPEG framework in an OS X Objective-C application?

To directly address your questions:
(1) Is FFMPEG the best solution for this?
It depends. When setting up a live-streaming environment you will likely come across FFMPEG, VLC and gstreamer, which are the main options for simply streaming video/audio. So yes, FFMPEG can be used as part of the solution. Please look into the following question: DIY Video Streaming Server
(2) How would I go about doing this?
Your requirement is to make a live-streaming application which sends the mic input to the web. This involves the following steps:
(0) Your Xcode application will need to provide a method to start this process. You don't necessarily need to integrate a framework to achieve this.
(1) Streaming / Restreaming
Use FFMPEG or VLC to grab your audio device and stream it locally:
ffmpeg -i audioDevice -acodec libfaac -ar 44100 -ab 48k -f rtp rtp://host:port
(2) Segmenting for HTTP Live Streaming*
Use a segmenter such as: mediastreamsegmenter (Apple), livehttp (VLC) or segment (FFMPEG) to prepare your stream for web delivery:
vlc -vvv -I dummy <SOURCEADDRESS> --sout='#transcode{acodec=libfaac,ab=48}:std{access=livehttp{seglen=10,delsegs=false,numsegs=10,index=/path/to/your/index/prog_index.m3u8,index-url=YourUrl/fileSequence######.ts},mux=ts{use-key-frames},dst=/path/to/your/ts/files/fileSequence######.ts}'
*You could also simply use VLC to grab your audio device with qtsound (see this question) and prepare it for streaming with livehttp.
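Alternatively, FFMPEG's segment muxer can produce the same kind of HLS output. A rough, untested sketch (reading an RTP input may require pointing FFMPEG at an SDP file instead, and option names vary between FFMPEG versions):
ffmpeg -i rtp://host:port -acodec copy -f segment -segment_time 10 -segment_list_type m3u8 -segment_list /path/to/your/index/prog_index.m3u8 -segment_format mpegts /path/to/your/ts/files/fileSequence%05d.ts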
(3) HTML 5 Delivery
Publish your stream with an HTML5 audio element:
<audio>
<source src="YOUR_PATH/playlist.m3u8" />
</audio>
(3) If that's too broad, then how do I use the FFMPEG framework in an OS X objective-c application?
Either use an external wrapper framework to access FFMPEG functionality (and consult that framework's tutorials), or use NSTask to wrap your command-line arguments in Objective-C and simply launch those tasks from your application, as in this question.
Another way would be to use VLCKit, which offers VLC functionality in a framework for Objective-C (VLCKit wiki). However, when tackling streaming challenges I prefer to work with the actual commands rather than pushing another framework layer in between, which may be missing some options.
I hope this points you in the right direction. There are multiple ways to solve this; it's a broad question, hence the broad answer.

Related

OpenTok and File Sharing

I am building a video chat website using OpenTok. I have the video and text chat working (still working on the screen sharing), but I was wondering if anyone could point me in the right direction regarding file sharing.
I would like both parties to be able to send files to each other, but I'm not really sure how to go about it. Would it be possible to use Peer5?
There are several ways to get the peers to send files to each other.
One way is to upload the file to your server or to some cloud storage service, then share the link with the other peers via OpenTok's Signaling API (which is, presumably, an abstraction over WebRTC's DataChannels). This solution is simple, but not peer-to-peer.
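For the browser side of that first option, a minimal sketch using the OpenTok JS SDK is below (written as TypeScript; the API key, session ID, token and upload URL are placeholders, and the 'file-link' signal type name is my own choice, not part of the SDK):
// Assumes the OpenTok client SDK is loaded via its <script> tag.
declare const OT: any;

const session = OT.initSession('API_KEY', 'SESSION_ID');

// Receivers: react to the shared link as soon as it arrives.
session.on('signal:file-link', (event: any) => {
  console.log('Peer shared a file:', event.data);
  window.open(event.data); // or render a download button pointing at event.data
});

session.connect('TOKEN', (err: any) => {
  if (err) { console.error(err); return; }
  // Sender: after uploading the file to your server or cloud storage,
  // broadcast its URL to everyone connected to the session.
  session.signal(
    { type: 'file-link', data: 'https://example.com/uploads/holiday.jpg' },
    (signalErr: any) => { if (signalErr) console.error(signalErr); }
  );
});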
Another solution is to, again, upload the file to a server and share the link to the other peers, but this time have the peers download the file via Peer5's Downloader. The Peer5 Downloader uses a coordination server to figure out which peers are available to help with the download. If no peers are available, the download will fall back to the HTTP server. This of course only makes sense if the file is being shared with several peers at the same time. In 1-to-1 communications it is pointless.
The previous solution is P2P only for the download part; the user still has to upload the file to a server. Another way, which would be P2P all the way, is to cut the file into chunks and send them over the OpenTok Signaling API. It is a complicated process, but there are several tutorials about it. The tutorials use the WebRTC DataChannel, but it is reasonable to assume they could be adapted to the Signaling API (a rough sketch of the chunking idea follows the links):
https://bloggeek.me/send-file-webrtc-data-api/
http://www.html5rocks.com/en/tutorials/webrtc/datachannels/#build-a-file-sharing-application
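Under the assumption that OpenTok's signal payloads are limited to a few kilobytes and that sends may need to be throttled, the chunking idea looks roughly like this (a TypeScript sketch; the 'file-chunk' signal type and chunk size are arbitrary choices of mine, and error handling is omitted):
declare const session: any; // the already-connected OpenTok session from your chat code

const CHUNK_SIZE = 4000; // bytes of raw file data per signal; base64 grows this by ~33%

// Sender: slice the file, base64-encode each slice and signal it to the session.
function sendFile(file: File): void {
  const reader = new FileReader();
  reader.onload = () => {
    const bytes = new Uint8Array(reader.result as ArrayBuffer);
    const total = Math.ceil(bytes.length / CHUNK_SIZE);
    for (let i = 0; i < total; i++) {
      const slice = bytes.subarray(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
      session.signal({
        type: 'file-chunk',
        data: JSON.stringify({
          name: file.name,
          index: i,
          total,
          data: btoa(String.fromCharCode(...Array.from(slice))),
        }),
      });
    }
  };
  reader.readAsArrayBuffer(file);
}

// Receiver: collect the chunks and rebuild the file once they have all arrived.
const incoming: { [name: string]: string[] } = {};
session.on('signal:file-chunk', (event: any) => {
  const { name, index, total, data } = JSON.parse(event.data);
  (incoming[name] = incoming[name] || [])[index] = data;
  if (incoming[name].filter(Boolean).length === total) {
    const binary = incoming[name].map(atob).join('');
    const buffer = Uint8Array.from(binary, (c) => c.charCodeAt(0));
    const url = URL.createObjectURL(new Blob([buffer]));
    console.log('File reassembled:', name, url); // e.g. point an <a download> at url
  }
});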
An interesting open-source file-sharing application using WebRTC is Sharefest, made by the Peer5 team. You can use it for inspiration if you are inclined to build such a system.
As a side note, OpenTok seems to be considering building a starter kit with sample code showing how to integrate OpenTok with Peer5 in a file-sharing application. I do not know how such an implementation would work, but I presume it is some variation of my second suggestion here. It could be good to keep an eye on it.

Creating a WebRTC receiver

I am new to WebRTC and trying to figure out how to create a program, outside a browser, which receives a WebRTC audio stream and outputs it to the speakers.
Are there any WebRTC libraries for Java or C#?
The receiver will be running on a Linux machine.
--
I've been thinking about using getUserMedia() to access the microphone. But then:
In what format will such a stream be transmitted?
Let's say I use WebRTC2SIP and build a Java endpoint using JSIP;
or I just use a socket and send the stream over HTTP.
What audio format will I get on the receiver side? So far I have read that WebRTC compresses the stream somehow.
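For reference, the capture side I have in mind is only a few lines in the browser (a TypeScript sketch); it is everything after this, i.e. getting the audio out to a non-browser receiver, that I am unsure about:
// Ask the browser for microphone access; 'stream' holds the raw captured audio.
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then((stream: MediaStream) => {
    // The format on the wire is not decided here: it is negotiated later,
    // when the stream is attached to an RTCPeerConnection and SDP is
    // exchanged with the remote endpoint.
    const pc = new RTCPeerConnection();
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    // ...createOffer(), exchange SDP/ICE with the receiver...
  })
  .catch((err) => console.error('Microphone access was denied:', err));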
I guess there are two options for you:
Build the whole WebRTC voice engine for Android/iOS or Mac etc., and just use the API provided by the voice engine (VoE).
Build standalone NS/VAD/AECM/AGC modules and use them in your project. For example, to build a standalone NS (noise suppression) module for Android, you would use AudioRecord (Java layer, Android side) to record sound from the mic, run the noise-suppression processing on that data (JNI layer, WebRTC side), and finally play back the processed data using AudioTrack (Java layer, Android side).
EDIT:
For the second option, the format is raw PCM data.
Check out the working Audio demo and code at demo.easyrtc.com
The code is all open source and can be checked out at https://github.com/priologic/easyrtc
You can look for any known issues around easyRTC in our forum at https://groups.google.com/forum/#!forum/easyrtc
Also check out our main site at easyrtc.com

RTSP stream in iOS

I'm new to Objective-C.
How do I read an RTSP stream in an iOS app?
Apparently the live555 and ffmpeg libraries can do it, but I have found nothing simple and functional.
Are there other solutions?
All solutions use ffmpeg, but some are easier than others. Here's a tutorial to get you started:
http://sol3.typepad.com/exotic_particles/
Personally I find live555 offers the most options, but it is also quite hard to get started with.
Most of the difficulty is usually in building the ffmpeg libraries.

Red5 publish issue

I'm publishing a stream to Red5 using the microphone with client-side AS3 code, but the published stream is poor quality. When I do the same thing with FMS, it creates a perfect stream.
I need to understand what the issue is when publishing to Red5.
Read the Red5 documentation for that. And of course there are performance differences between the two servers. However, if you want to improve the quality of the stream, you can use FFMPEG or Xuggler with Red5 to encode the streams.
Because you do not say what your encoder is, it is hard to give a clear answer. If you are using Adobe's FMLE to create the stream that goes to your FMS server, FMLE is the reason you get good video and audio encoding out of the box.
I have never tried to use FMLE with Red5, so I cannot tell you whether it works, but I doubt it works out of the box. It can probably be made to work with a bit of tweaking on both the client and server side.
To use your own encoder, capture the two streams using ffmpeg; a great example of how to do that is on Stack Overflow here.
Once you are capturing, you can use ffmpeg to send the combined audio and video streams to a file, or send them directly to your Red5 server. A simplified version of the ffmpeg command, showing how to map the two streams into a single RTMP output, is shown below:
ffmpeg -i video_stream -i audio_stream -map 0:0 -map 1:0 -f flv rtmp://my.red5.server:1935/live/mystream

Newbie question on Flash video players, products/SDKs, and API

I'm a C programmer and a total newbie to the Flash/video/web world. I don't know where or how to start, so I would greatly appreciate your initial help.
Question
If I need to host Flash videos on my website (instead of embedding YouTube links on my webpages),
AND
If I need to provide a player API like YouTube's that can be used, say, for supporting chromeless player versions customizable via this custom API of mine...
THEN
What do I need to do essentially...?
Write a custom Flash video player?
If yes, how? I mean, using which Adobe products / tools / SDKs / language(s)?
Is there anything free/open source available for doing this? Especially for the Linux platform?
Write a new browser (Firefox) plugin for users visiting my site?
I'm not sure how my custom Flash video player will get to a user visiting my site for the first time.
Any books or resources that cover this problem well?
Does the Flash content need to be hosted on a Windows server only?
Currently lost. Thanks in advance,
/SD
Flash has video playback support built in, so all you need to do is use the Flash authoring environment or Flex to compile a .SWF file that uses the video API, with some buttons to stop and start the stream, control volume, seek, and do anything else you want your player to do.
Many people have already done this for you, in a way you can easily use from simple HTML. See e.g. OSFLV, Flowplayer, JW...
Write a new browser (Firefox) plugin for users visiting my site? Does the Flash content need to be hosted on a Windows server only?
Lord no! Flash video would never have taken off if it had been just another custom-server-plus-custom-plugin piece of unpleasantness. Though special streaming servers are possible, for the most part it's just an FLV file sitting on a web server.
(FLV is the video format supported by the Flash video playing functions. There are many, many tools you can use to convert other formats to it; I use Avidemux.)
If you are planning to use a "Progressive Download" approach, then your FLV files can be hosted on a Windows or a Linux box. Be aware that:
- it is not as efficient as true streaming.
- it cannot be used for live events, only for stored video files.
- it cannot automatically detect the end user's connection speed.
- it is not possible to jump ahead to a part of the video that has not yet downloaded.
- the video file will be saved on the end user's computer.
If you are planning to use a "Streaming" approach, then you can either buy and use Adobe's solution (Flash Media Server, available for both Windows and Linux) or sign up for a hosted solution. On this page you will find providers recommended by Adobe. I have personally been using Influxis's hosting with success for a couple of years already.
You can also write your own streaming server, but that would be a lot of hard work. If you are interested in that, I would recommend you have a look at Red5, which is an open-source Flash server written in Java.