Is HLS Streaming Supported on Tizen TV Emulator? - html5-video

I need to play an HTTP live stream (an m3u8 playlist), but it does not work in the sample AVPlay app. I wonder whether the problem is that the emulator does not support HLS. However, I can play the individual streams listed in the playlist.
I am getting the following response from the URL (I have randomly modified the values so that the streams are not accessible):
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=678000,RESOLUTION=768x432,CODECS="avc1.77.30, mp4a.40.2"
https://livestream-f.akamaihd.net/i/2025iz9od4_1#99633/index_123_avp.m3u8?sd=10&dw=100&rebase=on&hdntl=exp=150977~acl=%2fil~hmac=adfe2a9bdd6
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1756000,RESOLUTION=864x486,CODECS="avc1.77.30, mp4a.40.2"
https://livestream-f.akamaihd.net/i/20251386_6385852_lsifljk7m8ab9906ud4_1#99633/index_5432_avp.m3u8?sd=10&dw=100&rebase=on&hdntl=exp=1500980977~acl=%2fi%2f2025135852_lsifljk7m8aw9iz6dd4_1%4099633%2f*~data=hdntl~hmac=add2addcaee
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1756000,RESOLUTION=864x486,CODECS="avc1.77.30, mp4a.40.2"
https://livestream-f.akamaihd.net/i/20251386_6385852_lsifljk7m87w9iz6dd4_1#99633/index_1771_avb.m3u8?sd=10&dw=100&rebase=on&hdntl=exp=1500980977~acl=%2fi%2f2025m8aw9iz6dd4_1%4099633%2f*~data=hdntl~hmac=23dkjddc9579ulp3bd52ce2a9bd
...
...
...

Related

Video creation with Microsoft Media Foundation and Desktop Duplication API

I'm using DDA to capture desktop image frames and send them to the server, where these frames should be used to create a video with MMF. I want to understand what needs to be done in MMF if I use a Source Reader and Sink Writer to render video from the captured frames.
There are two questions:
1) First of all, is there actually a need for a Source Reader with a Media Source if I already receive the video frames from DDA? Can I just send them to the Sink Writer and render the video?
2) As far as I understand, if a Source Reader and Media Source are still needed, the first thing to do is to write my own Media Source that understands the DXGI_FORMAT_B8G8R8A8_UNORM frames captured with DDA. Then I should use the Source Reader and Sink Writer with suitable decoders/encoders and send the media data to the Media Sinks. Could you please explain in more detail what needs to be done in this case?
Implementing a Source Reader is not necessary in your case; you can go ahead and implement one and it will work, but it is not required.
Instead, you can directly feed the input buffer captured through Desktop Duplication to the SinkWriter, as below:
// Configure sink writer attributes for low-latency hardware encoding
CComPtr<IMFAttributes> attribs;
CComPtr<IMFMediaSink> m_media_sink;
IMFSinkWriterPtr m_sink_writer;
MFCreateAttributes(&attribs, 0);
attribs->SetUINT32(MF_LOW_LATENCY, TRUE);
attribs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

// MediaTypeOutput is a helper that builds the H264 output media type
IMFMediaTypePtr mediaTypeOut = MediaTypeOutput(fps, bit_rate);

// Create a fragmented-MP4 media sink over the byte stream and wrap it in a sink writer
MFCreateFMPEG4MediaSink(stream, mediaTypeOut, nullptr, &m_media_sink);
MFCreateSinkWriterFromMediaSink(m_media_sink, attribs, &m_sink_writer);

// Set input media type (uncompressed RGB32 frames from Desktop Duplication)
mediaTypeIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
// Set output media type (H264)
mediaTypeOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);

// Wrap each captured frame in an IMFSample and push it to the sink writer
IMFSamplePtr sample;
MFCreateSample(&sample);
sample->AddBuffer(m_buffer); // m_buffer wraps the B8G8R8A8 desktop frame captured via DDA
sample->SetSampleTime(m_time_stamp);
sample->SetSampleDuration(m_frame_duration);
m_sink_writer->WriteSample(m_stream_index, sample);
Here is a perfectly working sample based on SinkWriter. It supports both a network sink and a file sink. It actually captures the desktop through the GDI approach, but DDA is almost the same, and you can indeed obtain better performance with DDA.
I have also uploaded one more sample here which is based on Desktop Duplication, directly uses IMFTransform instead, and streams the output video as an RTP stream using Live555. I'm able to achieve up to 100 FPS with this approach.
If you decide to follow the SinkWriter approach, you don't have to worry about color conversion, as it is taken care of by the SinkWriter under the hood. With IMFTransform you will have to handle the color conversion yourself, but you get fine-grained control over the encoder.
Here are some more reference links for you.
https://github.com/ashumeow/webrtc4all/blob/master/gotham/MFT_WebRTC4All/test/test_encoder.cc
DXGI Desktop Duplication: encoding frames to send them over the network
Getting green screen in ffplay: Streaming desktop (DirectX surface) as H264 video over RTP stream using Live555
Intel graphics hardware H264 MFT ProcessInput call fails after feeding few input samples, the same works fine with Nvidia hardware MFT
Color conversion from DXGI_FORMAT_B8G8R8A8_UNORM to NV12 in GPU using DirectX11 pixel shaders
GOP setting is not honored by Intel H264 hardware MFT
Encoding a D3D Surface obtained through Desktop Duplication using Media Foundation

WebRTC : attachMediaStream

Suppose that for an incoming remote stream in WebRTC, I do not attach it to a video element using attachMediaStream(<videoElement>, <remoteStream>);.
I want to understand what happens in that case:
1. Is the stream still being sent by the remote peer, and I am just not displaying it because I have not attached it to a video element?
2. Or has the remote peer stopped streaming to me because I have not accepted the stream?
attachMediaStream is part of the adapter.js library. That library is a shim which contains "glue" code to abstract away differences between the browsers (Chrome/Firefox often have experimental APIs prefixed with moz or webkit).
attachMediaStream (now deprecated) was responsible for attaching a MediaStream to an HTML video element. Nothing more.
You can have a look at the source for Firefox here:
attachMediaStream: function(element, stream) {
  logging('DEPRECATED, attachMediaStream will soon be removed.');
  element.srcObject = stream;
},
So: 1. yes, 2. no, since attachMediaStream is purely a "local" helper used to display the stream.
*attachMediaStream is not exclusive to adapter.js; that is just the most common adapter/helper library used.
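To illustrate that, here is a minimal sketch in plain browser JavaScript (no adapter.js; the peerConnection variable and the #remoteVideo element are assumptions for the example): the remote media keeps arriving whether or not you render it, and "attaching" it is just an assignment to srcObject.
// Assumes `peerConnection` is an already-negotiated RTCPeerConnection.
peerConnection.ontrack = (event) => {
  const [remoteStream] = event.streams;
  // The remote peer keeps sending media even if we never display it.
  // Displaying it later is nothing more than:
  const video = document.querySelector('#remoteVideo');
  if (video && !video.srcObject) {
    video.srcObject = remoteStream; // same thing attachMediaStream used to do
  }
};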

WebRTC Changing Media Streams on the Go

Now that device enumeration is present in Chrome, I know I can select a device during "getUserMedia" negotiation. I was also wondering whether I can switch devices in the middle of a call (queue up a local track and switch tracks, or do I have to renegotiate the stream?). I am not sure if this is something that is still blocked or is now "allowable".
I have tried to create a new track, but I can't figure out how to switch tracks on the fly. I know this was previously impossible, but I am wondering whether it is possible now.
I have the same requirement: I have to record video using MediaRecorder, and for this I am using navigator.getUserMedia with audio and video constraints. You can switch the video or audio device dynamically by getting the available devices from navigator.mediaDevices.enumerateDevices(), putting the chosen device into the constraints, and calling navigator.getUserMedia again with the new constraints. The point to note when doing this is that you have to stop the existing tracks using the track.stop() method, as sketched after the links below.
You can see my example here.
StreamTrack's readyState is getting changed to ended, just before playing the stream (MediaStream - MediaStreamTrack - WebRTC)
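As a rough sketch of that approach (illustrative only; the #preview element and the label-matching helper are assumptions, and it uses the promise-based navigator.mediaDevices.getUserMedia rather than the legacy navigator.getUserMedia):
// Switch the capture device by stopping the current tracks and
// requesting a new stream with an explicit deviceId constraint.
async function switchCamera(currentStream, wantedLabel) {
  // Stop the existing tracks first, otherwise the device may stay busy.
  currentStream.getTracks().forEach((track) => track.stop());

  // Find the desired device among the available video inputs.
  const devices = await navigator.mediaDevices.enumerateDevices();
  const camera = devices.find(
    (d) => d.kind === 'videoinput' && d.label.includes(wantedLabel)
  );

  // Call getUserMedia again with the new constraints.
  const newStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: camera ? { deviceId: { exact: camera.deviceId } } : true,
  });

  // Hand the new stream to whatever consumes it (preview element, MediaRecorder, ...).
  document.querySelector('#preview').srcObject = newStream;
  return newStream;
}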
In Firefox, you can use the RTCRtpSender object to call replaceTrack() and replace a track on the fly (with no renegotiation). This should eventually be supported by other browsers as part of the spec.
Without replaceTrack(), you can remove the old stream, add a new one, handle onnegotiationneeded, and let the client process the change in streams.
See the documentation for replaceTrack(): https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack
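A minimal sketch of the replaceTrack() approach (assuming peerConnection is your RTCPeerConnection and newTrack is a freshly captured video track; browser support varied at the time this was written):
// Swap the outgoing video track without renegotiating the connection.
async function swapVideoTrack(peerConnection, newTrack) {
  // Find the sender that is currently transmitting video.
  const sender = peerConnection
    .getSenders()
    .find((s) => s.track && s.track.kind === 'video');
  if (sender) {
    // replaceTrack() keeps the same SDP/transport, so no renegotiation is needed.
    await sender.replaceTrack(newTrack);
  }
}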
Have you tried calling getUserMedia() when you want to change to a different device?
There's an applyConstraints() method in the Media Capture and Streams spec that makes it possible to change constraints on the fly, but it hasn't been implemented yet:
dev.w3.org/2011/webrtc/editor/getusermedia.html#the-model-sources-sinks-constraints-and-states
dev.w3.org/2011/webrtc/editor/getusermedia.html#methods-1
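For reference, a sketch of what applyConstraints() usage looks like on a video track (at the time of that answer it was not yet implemented in browsers, so treat this as illustrative only; the chosen resolution values are arbitrary):
// Adjust an already-running video track's constraints without a new getUserMedia call.
async function lowerResolution(stream) {
  const [videoTrack] = stream.getVideoTracks();
  // The browser may reject constraints the device cannot satisfy (OverconstrainedError).
  await videoTrack.applyConstraints({
    width: { ideal: 640 },
    height: { ideal: 360 },
    frameRate: { max: 15 },
  });
}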

Play encrypted video with AVPlayer

I'm implementing an application that contains a video player. For various reasons the video files are encrypted with AES, and they can be too big to load into RAM in one piece. I'm looking for some way to play them with AVPlayer.
Tried:
1) Custom NSURLProtocol as suggested here http://aptogo.co.uk/2010/07/protecting-resources/
Didn't work; I suspect that AVPlayer uses its own protocol handling and mine does not get called.
2) Use AVAsset to chop the video into small chunks and then feed them to AVPlayer - failed because there's no API in AVPlayer for that.
Any workaround would be greatly appreciated :)
You have 2 options:
1. If targeting iOS 7 and newer, check out AVAssetResourceLoaderDelegate. It allows you to do what you would do with a custom NSURLProtocol, but specifically for AVPlayer.
2. Emulate an HTTP server with support for the Range header and point the AVURLAsset to localhost.
I implemented #2 before and can provide more info if needed.
I just downloaded the Apple sample project https://developer.apple.com/library/ios/samplecode/sc1791/Listings/ReadMe_txt.html and it seems to do exactly what you want.
The delegate catches each of AVURLAsset's AVAssetResourceLoader calls and makes up a brand new .m3u8 file with a custom decryption key in it.
Then it feeds the player with all the .ts file URLs in the .m3u8.
The project is a good overview of what it is possible to do with HLS feeds.

How do I record video to a local disk in AIR?

I'm trying to record a webcam's video and audio to an FLV file stored on the user's local hard disk. I have a version of this code working which uses NetConnection and NetStream to stream the video over a network to an FMS (Red5) server, but I'd like to be able to store the video locally for low-bandwidth/flaky-network situations. I'm using Flex 3.2 and AIR 1.5, so I don't believe there should be any sandbox restrictions which prevent this from occurring.
Things I've seen:
FileStream - Allows reading/writing local files, but has no .attachCamera and .attachAudio methods for creating an FLV.
flvrecorder - Produces screen grabs from the webcam and creates its own FLV file. Doesn't support audio. License prohibits commercial use.
SimpleFLVWriter.as - Similar to flvrecorder without the weird license. Doesn't support audio.
This stackoverflow post - Which demonstrates the playback of a video from local disk using a NetConnection/NetStream.
Given that I already have a version which uses NetStream to stream to the server, I thought the last was most promising and went ahead and put together this demo application. The code compiles and runs without errors, but I don't have an FLV file on disk when the stop button is clicked.
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
<mx:Script>
<![CDATA[
private var _diskStream:NetStream;
private var _diskConn:NetConnection;
private var _camera:Camera;
private var _mic:Microphone;
public function cmdStart_Click():void {
_camera = Camera.getCamera();
_camera.setQuality(144000, 85);
_camera.setMode(320, 240, 15);
_camera.setKeyFrameInterval(60);
_mic = Microphone.getMicrophone();
videoDisplay.attachCamera(_camera);
_diskConn = new NetConnection();
_diskConn.connect(null);
_diskStream = new NetStream(_diskConn);
_diskStream.client = this;
_diskStream.attachCamera(_camera);
_diskStream.attachAudio(_mic);
_diskStream.publish("file://c:/test.flv", "record");
}
public function cmdStop_Click():void {
_diskStream.close();
videoDisplay.close();
}
]]>
</mx:Script>
<mx:VideoDisplay x="10" y="10" width="320" height="240" id="videoDisplay" />
<mx:Button x="10" y="258" label="Start" click="cmdStart_Click()" id="cmdStart"/>
<mx:Button x="73" y="258" label="Stop" id="cmdStop" click="cmdStop_Click()"/>
</mx:WindowedApplication>
It seems to me that either there's something wrong with the above code which is preventing it from working, or NetStream just can't be abused in this way to record video.
What I'd like to know is: a) What (if anything) is wrong with the code above? b) If NetStream doesn't support recording to disk, are there any other alternatives which capture audio AND video to a file on the user's local hard disk?
Thanks in advance!
It is not possible to stream video directly to the local disk without using some streaming service like Windows Media Encoder, Red5, Adobe's media server, or something else.
I have tried all the samples on the internet with no solution to date.
Look at this link for another possibility:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
My solution was to embed Red5 into AIR.
I'm sharing my article with you:
http://mydevrecords.blogspot.com/2012/01/local-recording-in-adobe-air-using-red5.html
In general, the solution is to embed the free media server Red5 into the AIR application as an asset, so that the server is present in the AIR application folder. Then, through NativeProcess, you can run Red5 and have its instance in memory. As a result, you can have local video recording without any network issues.
I am also trying to do the same thing, but I have been told by the developers of avchat.net that it is not possible to do this with AIR at the moment. If you do find out how to do it, I would love to know!
I also found this link, not sure how helpful it is http://www.zeropointnine.com/blog/webcam-dvr-for-apollo/
Well, I just think that letting it connect to nothing (null) doesn't work. I've already had him try to connect to localhost, but that didn't work out either. I don't think this is even possible. Streaming video works only with Flash Media Server and Red5, not locally. Maybe you could install Red5 on your PC?
Sadly, camera video support in Flash is very poor. When you stream, it's raw, so the issue is that you have to encode to FLV, and doing that in real time takes a very fast computer. First-generation concepts would write raw bitmaps to a file (or serialize an array), and then a second pass would convert the file to an FLV. Basically, you have to poll the camera, save each frame as a bitmap, and stack the frames in an array. This is very limited and could not do audio. It was also very hard to get above 5-10 fps.
The gent at Zero Point Nine came up with a new version, and you're on the right path - look at the new FLV recorder. I spent a lot of time working with this but never quite got it to work for my needs (two cameras); I just could not get the FPS I needed. But it might work for you - it was much faster than the original method.
The only other working option I know of is to have Red5 save the video and download it back to the app.