Play encrypted video with AVPlayer - objective-c

I'm implementing an application that contains a video player. For various reasons the video files are encrypted with AES, and they can be big enough that loading one into RAM in a single piece is not an option. I'm looking for some way to play them with AVPlayer.
Tried:
1) Custom NSURLProtocol as suggested here http://aptogo.co.uk/2010/07/protecting-resources/
Didn't work; I suspect that AVPlayer uses its own and mine never gets called.
2) Use AVAsset to chop the video into small chunks and then feed them to AVPlayer - failed because there's no API in AVPlayer for that.
Any workaround would be greatly appreciated :)

You have 2 options:
1) If targeting iOS 7 and newer, check out AVAssetResourceLoaderDelegate. It lets you do what you would with a custom NSURLProtocol, but specifically for AVPlayer.
2) Emulate an HTTP server with support for the Range header and point the AVURLAsset at localhost.
I implemented #2 before and can provide more info if needed.
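For option 1, a minimal sketch of the delegate, assuming iOS 7+ and an MP4 payload; the custom URL scheme and the two decryption helpers are hypothetical and would need to be filled in for your encryption format:

#import <AVFoundation/AVFoundation.h>

@interface EncryptedAssetLoader : NSObject <AVAssetResourceLoaderDelegate>
@end

@implementation EncryptedAssetLoader

- (BOOL)resourceLoader:(AVAssetResourceLoader *)resourceLoader
shouldWaitForLoadingOfRequestedResource:(AVAssetResourceLoadingRequest *)loadingRequest
{
    AVAssetResourceLoadingContentInformationRequest *info = loadingRequest.contentInformationRequest;
    if (info) {
        info.contentType = AVFileTypeMPEG4;           // assumption: MP4 container
        info.contentLength = [self plaintextLength];  // hypothetical helper
        info.byteRangeAccessSupported = YES;
    }

    AVAssetResourceLoadingDataRequest *dataRequest = loadingRequest.dataRequest;
    if (dataRequest) {
        // Decrypt only the requested byte range, so the whole file
        // never has to sit in RAM at once.
        NSData *chunk = [self decryptedDataAtOffset:dataRequest.requestedOffset
                                             length:dataRequest.requestedLength]; // hypothetical helper
        if (chunk) {
            [dataRequest respondWithData:chunk];
            [loadingRequest finishLoading];
        } else {
            [loadingRequest finishLoadingWithError:
                [NSError errorWithDomain:NSURLErrorDomain
                                    code:NSURLErrorCannotDecodeContentData
                                userInfo:nil]];
        }
    }
    return YES;
}

@end

The delegate is only consulted for schemes AVFoundation does not handle itself, so point the asset at a made-up scheme:

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL URLWithString:@"encrypted://movie.mp4"]
                                        options:nil];
[asset.resourceLoader setDelegate:loader queue:dispatch_queue_create("loader", NULL)];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];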

I just downloaded the Apple sample project https://developer.apple.com/library/ios/samplecode/sc1791/Listings/ReadMe_txt.html and it seems to do exactly what you want.
The delegate catches each of the AVURLAsset's AVAssetResourceLoader calls and builds a brand-new .m3u8 file with a custom decryption key in it.
Then it feeds the player the URLs of all the .ts files in the m3u8.
The project is a good overview of what it is possible to do with HLS feeds.
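The trick the sample relies on is that the rewritten playlist points the key URI at a scheme only the delegate handles, so AVPlayer asks the delegate for the key instead of fetching it over HTTP. Roughly, the generated playlist contains an entry along these lines (the scheme and values here are placeholders):

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-KEY:METHOD=AES-128,URI="my-custom-scheme://key/1",IV=0x00000000000000000000000000000001
#EXTINF:10,
http://example.com/segment0.ts
#EXT-X-ENDLIST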

Related

Video call through the Telegram API

I would like to know whether it is possible to start a video call with another user by means of the tdlib library and to transfer the picture from a camera connected to a Raspberry Pi into that call. And if so, how do you do that? What methods should I use?
To work with the video-call part of Telegram you need to use Telegram's WebRTC client (https://github.com/TelegramMessenger/tgcalls). With MTProto methods you can get the params needed to start this library; video and audio bytes are passed through it.
There is already a high-level library for Python that works with the official tgcalls library, although support for private calls is still on its TODO list. You can use this project as an example of how to work with the tgcalls library:
https://github.com/MarshalX/tgcalls
Here are Python sources with working code for streaming video from YouTube/m3u8/mp4 sources:
https://github.com/EverythingSuckz/tgvc-video-tests

how to perform continuous speech to text on webrtc communication audio stream in mobile app

I am trying to add a continuous speech to text recognizer in a mobile application during a webrtc audio-only call.
I'm using React Native on the mobile side, with the react-native-webrtc module and a custom web API for the signaling part. I have control over the web API, so I am able to add the feature on its side if that's the only solution, but I'd prefer to perform it on the client side to avoid consuming bandwidth if there is no need.
First, I worked on and tested some ideas with my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
I merged the audio-only WebRTC demo with the audio visualiser demonstration in one page, but I did not find how to connect a mediaElementSourceNode (created via AudioContext.createMediaElementSource(remoteStream) at line 44 of streamvisualizer.js) to a Web Speech API SpeechRecognition instance. In the Mozilla documentation, the audio stream seems to come with the constructor of the class, which may call the getUserMedia() API.
Second, during my research I found two open-source speech-to-text engines: CMU Sphinx and Mozilla's DeepSpeech. The first one has a JS binding and seems great with the audioRecorder that I can feed with my own mediaElementSourceNode from my first experiment. However, how would I embed this in my React Native application?
There are also native Android and iOS WebRTC modules, which I may be able to connect with the CMU Sphinx platform-specific bindings (iOS, Android), but I don't know about native class interoperability. Can you help me with that?
I haven't created any "grammar" or defined "hot words" yet because I am not sure which technologies are involved, but I can do that later once I am able to connect a speech recognition engine to my audio stream.
You need to stream the audio to the ASR server, either by adding another WebRTC party to the call or over some other protocol (TCP/WebSocket/etc.). On the server you perform recognition and send the results back; a sketch of the WebSocket variant follows below.
First, I worked on and tested some ideas with my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
This is experimental and does not really work in Firefox. In Chrome it only takes microphone input directly, not a dual stream from the caller and callee.
The first one has a JS binding and seems great with the audioRecorder that I can feed with my own mediaElementSourceNode from my first experiment.
You will not be able to run this as local recognition inside your React Native app.
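For the server-streaming route on the iOS native side, here is a minimal sketch, assuming a hypothetical ASR server that accepts raw PCM chunks over a WebSocket and returns transcripts as text messages (NSURLSessionWebSocketTask requires iOS 13+):

#import <Foundation/Foundation.h>

@interface ASRStreamer : NSObject
@property (nonatomic, strong) NSURLSessionWebSocketTask *socket;
@end

@implementation ASRStreamer

- (void)connect {
    // Hypothetical endpoint; your ASR server defines the real protocol.
    NSURL *url = [NSURL URLWithString:@"wss://asr.example.com/stream"];
    self.socket = [[NSURLSession sharedSession] webSocketTaskWithURL:url];
    [self.socket resume];
    [self readTranscript];
}

// Call this with each PCM buffer pulled from the remote WebRTC audio track.
- (void)sendAudioChunk:(NSData *)pcm {
    NSURLSessionWebSocketMessage *msg =
        [[NSURLSessionWebSocketMessage alloc] initWithData:pcm];
    [self.socket sendMessage:msg completionHandler:^(NSError *error) {
        if (error) NSLog(@"send failed: %@", error);
    }];
}

// Recognition results arrive back as text messages.
- (void)readTranscript {
    __weak typeof(self) weakSelf = self;
    [self.socket receiveMessageWithCompletionHandler:
            ^(NSURLSessionWebSocketMessage *message, NSError *error) {
        if (message.type == NSURLSessionWebSocketMessageTypeString) {
            NSLog(@"transcript: %@", message.string);
        }
        if (!error) [weakSelf readTranscript]; // keep listening
    }];
}

@end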

a way to upload video to php server with AVFoundation?

I was wondering what would be the best approach to upload video captured with AVFoundation to a server in H.264 format. I will be using NSURL and some form of HTTP request to post data at 30-second intervals. What would be the best way to upload, and are there any established libraries to ease my life?
thank you.
You can just use NSURLConnection with an NSMutableURLRequest. Assign an NSInputStream using the
- (void)setHTTPBodyStream:(NSInputStream *)inputStream
method.
Check the documentation:
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSMutableURLRequest_Class/Reference/Reference.html
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/nsurlconnection_Class/Reference/Reference.html
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSInputStream_Class/Reference/Reference.html
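A minimal sketch of the streaming upload, assuming each 30-second segment has already been written to a local file (the path and upload URL are placeholders, and the PHP side just reads the raw request body):

#import <Foundation/Foundation.h>

NSString *path = @"/path/to/segment.mp4"; // assumption: local H.264/MP4 segment
NSNumber *size = [[NSFileManager defaultManager] attributesOfItemAtPath:path
                                                                  error:NULL][NSFileSize];

NSMutableURLRequest *request =
    [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/upload.php"]];
[request setHTTPMethod:@"POST"];
[request setValue:@"video/mp4" forHTTPHeaderField:@"Content-Type"];
[request setValue:[size stringValue] forHTTPHeaderField:@"Content-Length"];
// The input stream means the file is read from disk as it is sent,
// so a whole segment never has to be loaded into memory.
[request setHTTPBodyStream:[NSInputStream inputStreamWithFileAtPath:path]];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    NSLog(@"upload finished: %@", error ?: response);
}];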

How do I record video to a local disk in AIR?

I'm trying to record a webcam's video and audio to an FLV file stored on the user's local hard disk. I have a version of this code working which uses NetConnection and NetStream to stream the video over a network to an FMS (Red5) server, but I'd like to be able to store the video locally for low-bandwidth/flaky-network situations. I'm using Flex 3.2 and AIR 1.5, so I don't believe there should be any sandbox restrictions which prevent this from occurring.
Things I've seen:
FileStream - Allows reading/writing local files, but has no attachCamera() and attachAudio() methods for creating an FLV.
flvrecorder - Produces screen grabs from the webcam and creates its own FLV file. Doesn't support audio, and its license prohibits commercial use.
SimpleFLVWriter.as - Similar to flvrecorder without the weird license. Doesn't support audio.
This stackoverflow post - Which demonstrates the playback of a video from local disk using a NetConnection/NetStream.
Given that I already have a version which uses NetStream to stream to the server, I thought the last was most promising and went ahead and put together this demo application. The code compiles and runs without errors, but I don't have an FLV file on disk after the stop button is clicked.
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            private var _diskStream:NetStream;
            private var _diskConn:NetConnection;
            private var _camera:Camera;
            private var _mic:Microphone;

            public function cmdStart_Click():void {
                _camera = Camera.getCamera();
                _camera.setQuality(144000, 85);
                _camera.setMode(320, 240, 15);
                _camera.setKeyFrameInterval(60);
                _mic = Microphone.getMicrophone();
                videoDisplay.attachCamera(_camera);

                // null connection, as used for local file playback
                _diskConn = new NetConnection();
                _diskConn.connect(null);
                _diskStream = new NetStream(_diskConn);
                _diskStream.client = this;
                _diskStream.attachCamera(_camera);
                _diskStream.attachAudio(_mic);
                _diskStream.publish("file://c:/test.flv", "record");
            }

            public function cmdStop_Click():void {
                _diskStream.close();
                videoDisplay.close();
            }
        ]]>
    </mx:Script>

    <mx:VideoDisplay x="10" y="10" width="320" height="240" id="videoDisplay" />
    <mx:Button x="10" y="258" label="Start" click="cmdStart_Click()" id="cmdStart"/>
    <mx:Button x="73" y="258" label="Stop" id="cmdStop" click="cmdStop_Click()"/>
</mx:WindowedApplication>
It seems to me that either there's something wrong with the above code which is preventing it from working, or NetStream just can't be used this way to record video.
What I'd like to know is: a) what (if anything) is wrong with the code above? b) If NetStream doesn't support recording to disk, are there any other alternatives which capture audio AND video to a file on the user's local hard disk?
Thanks in advance!
It is not possible to stream video directly to the local disk without using some streaming server like Windows Media Encoder, Red5, Adobe Media Server, or something else.
I have tried all the samples on the internet with no solution to date.
Look at this link for another possibility:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
My solution was to embed Red5 into AIR.
Sharing with you my article
http://mydevrecords.blogspot.com/2012/01/local-recording-in-adobe-air-using-red5.html
In general, the solution is to embed the free media server Red5 into AIR as an asset, so the server is present in the AIR application folder. Then, through NativeProcess, you can run Red5 and keep an instance of it in memory. As a result, you get local video recording without any network issues.
I am also trying to do the same thing, but I have been told by the developers of avchat.net that it is not possible to do this with AIR at the moment. If you do find out how to do it, I would love to know!
I also found this link, not sure how helpful it is http://www.zeropointnine.com/blog/webcam-dvr-for-apollo/
Well, I just think that letting it connect to nothing (null) doesn't work. I've already tried having it connect to localhost, but that didn't work out either. I don't think this is even possible: streaming video works only with Flash Media Server and Red5, not locally. Maybe you could install Red5 on your PC?
Sadly, video support for cameras in Flash is very poor. When you stream, it's raw, so the issue is that you have to encode to FLV, and doing that in real time takes a very fast computer. First-generation concepts would write raw bitmaps to a file (or serialize an array), and then a second pass would convert the file to an FLV. Basically you have to poll the camera and save each frame as a bitmap, then stack them in an array. This is very limited, could not do audio, and it was very hard to get above 5-10 fps.
The gent at zero point nine came up with a new version, and you're on the right path. Look at the new FLV recorder. I spent a lot of time working with it but never quite got it to work for my needs (two cameras); I just could not get the FPS I needed. But it might work for you, as it was much faster than the original method.
The only other working option I know of is to have Red5 save the video and then download it back to the app.

Start playing streaming audio on symbian

The tiny question is:
How do I start (RealPlayer?) playing a given online resource (e.g. http://example.com/file.mp3)?
PyS60, C++ or C# via RedFiveLabs would do.
EDIT1: Title changed from "Start RealPlayer on symbian" to the more appropriate.
I think the title is a little misleading if you just want to play back media content rather than use a particular application for it.
In C++ there is CMdaAudioPlayerUtility::OpenUrlL(), but it's not widely implemented. For example, on S60 it will complete with a KErrNotSupported status. To play files you can use other open functions in CMdaAudioPlayerUtility, such as OpenFileL() or OpenDesL(), but you need a separate mechanism for retrieving the files, or at least the bytes, onto the device.
There is also CVideoPlayerUtility::OpenUrlL(), which supports RTSP audio streams but not HTTP.