I cannot play an Ant Media Server recorded mp4 file on iOS. How can I fix this? - webrtc

I am using Ant Media Server as a WebRTC media server. I publish my webcam with WebRTC from the browser, and I have also enabled mp4 recording.
I can play the recorded files on Linux and Windows, but I can't play some of them on iOS. How can I solve this?

This can be fixed by re-encoding the file, but it is better to understand why it happens first.
Ant Media Server records WebRTC streams, and WebRTC changes its resolution dynamically in response to network conditions, which means the resolution inside the mp4 file also changes on the fly. Other platforms handle this well, but iOS does not yet.
So, re-encoding the file would likely fix it.
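A minimal sketch of such a re-encode with FFmpeg (the file names and the 720p target are placeholder assumptions); forcing a single output resolution and re-encoding to H.264/AAC produces a file iOS players handle:

ffmpeg -i recorded.mp4 -vf scale=1280:720 -c:v libx264 -c:a aac -movflags +faststart fixed.mp4

The -movflags +faststart option moves the index to the front of the file, which also helps progressive playback in browsers.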

Related

S3 video plays in Edge and React Native but not Chrome or Firefox

One of the users of the app I work on has an issue: none of the videos she uploads play in an HTML5 video player, except on Edge and Safari for iOS (and if they work there, I assume they could work in other browsers).
The videos can be played in our React Native app or after being downloaded, but not directly via the S3 or CloudFront link.
Since the videos from the other users work, I'm assuming it's related to encryption or S3 specifics. Has anyone met this problem and found a solution?
EDIT: forgot to put a sample link
https://video-reetags.s3.eu-west-3.amazonaws.com/compressed/aed0a512a419334fe5d0c0c6fb4094a21610642052.mp4
Since the videos from the other users are working fine, I'm assuming it's related to encryption and/or S3 specifications...
No, the problem is not encryption or an S3 server issue.
Your MP4 container has video in HEVC format (aka H.265), which is not supported in Chrome or Firefox. You should still be able to hear the sound, since AAC audio is supported.
Playing the video is possible in React Native and other (native) video players because they rely on the OS running the player app to decode video. If a browser vendor didn't buy a license for HEVC, then that browser cannot play it.
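You can verify the codec yourself with ffprobe (the file name is a placeholder); for a file like the one above it should report codec_name=hevc:

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=noprint_wrappers=1 input.mp4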
Solution:
Re-encode such videos to an MP4 containing H.264 video with AAC audio (...not H.265 with AAC).
Re-encoding takes time, but it's the only way for now. Either the user does it before uploading, or your own app accepts any file and re-encodes the "not supported" ones server-side (eg: using FFmpeg or GStreamer).
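A minimal FFmpeg sketch of that server-side re-encode (file names are placeholders; since the audio is already AAC it can be copied instead of re-encoded):

ffmpeg -i input_hevc.mp4 -c:v libx264 -c:a copy -movflags +faststart output_h264.mp4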

How to serve DASH video (MPEG-DASH and HLS) over a web server

I am doing a small research project to test DASH streaming on very low bandwidth mobile connections in India.
I have an AWS machine where I can upload video and package it for MPEG-DASH and HLS streaming and create the MPD and m3u8 files.
But I am unable to serve the video.
I have tried with Apache and node.js. I was told that by just placing the folder that contains the mpd and m3u8 files along with the video chunks, I should be able to stream the video.
I am not sure what I am doing wrong. Is there any special configuration I need to do to serve MPEG-DASH video? If there is a tutorial/blog/GitHub project someone could point me towards, that would be brilliant.
TIA.
Setting up DASH media streaming is fairly involved. There are paid services from Bitmovin, Wowza and many others, but don't give up yet: there is a lot of open-source stuff out there that works as well. I have been working on DASH for a while now.
Here's my setup:
OS: Ubuntu 16.04
Server: Apache2 (easy to set up). A few edits I had to make: CORS headers and an alias setting (a sketch of those edits follows this list).
Client: dash.js. Literally just get the dash.js master branch from GitHub. Don't get stuck with the dash.js development branch if you don't want to end up editing stuff.
Content Generation: ffmpeg and MP4Box (a packaging sketch appears after the setup notes below).
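A minimal sketch of the Apache edits mentioned above, assuming Apache 2.4 with mod_headers and mod_alias enabled (the paths are placeholders):

Header set Access-Control-Allow-Origin "*"
Alias /content "/var/www/html/content"
<Directory "/var/www/html/content">
    Require all granted
</Directory>

The CORS header lets the dash.js client fetch segments from a different origin while testing; tighten the allowed origin in production.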
All you have to do for the initial setup is make two directories in your Apache root directory (i.e. inside the folder that contains index.html). The first directory holds the compiled dash.js client and the second is your content directory.
Point a link on your server to the dash.js reference client, and then all you have to do is play your mpd on the client. (Make sure it conforms to the MPD validation norms though.)
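For the content-generation step, a minimal sketch using ffmpeg and MP4Box (file names, bitrate and segment length are placeholder assumptions):

ffmpeg -i source.mp4 -c:v libx264 -c:a aac -b:v 1500k input_dash.mp4
MP4Box -dash 4000 -frag 4000 -rap -out manifest.mpd input_dash.mp4

MP4Box writes the segments plus a manifest.mpd that you can point the dash.js reference client at.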
Now, this might only work on a computer and not a cell phone, but you can disable all the connections (wireless and wired) on your computer and use one of those wireless dongles provided by Airtel/Reliance/any network provider to simulate a low-bandwidth link!
Shall be here to answer more insightful questions, if need be!
You do not need any server-side application. If you are using AWS, a simple S3 bucket behind CloudFront will do the trick nicely, without any EC2 needed at all. Just ensure you have CORS and crossdomain.xml in place.
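A minimal sketch of an S3 CORS configuration for this setup (the wide-open origin is an assumption for testing; restrict it in production):

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>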
Stefen Lederer posted a blog about just this set up.
Also, use your browser's developer tools to catch failed requests and console errors, which might give pointers as to why it is not working for you.

WebRTC -- can getUserMedia use local stream?

I want WebRTC to encode and play an H.264 (NAL) stream from a local file.
In the WebRTC tutorials, getUserMedia is used to get the local camera connected to the system; I don't know whether getUserMedia supports capturing a local file such as an H.264 stream.
If it doesn't work that way, maybe I should modify the WebRTC source code (I'm studying it).
Here is the question: if I change the WebRTC code, how can I integrate the new code into the browser? Make it a plugin?
Firefox supports an extension to the <video> element that you can use to do this.
First, set the source of a video element:
v1.src = "file:///...";
Then you can call the (currently prefixed) mozCaptureStream or mozCaptureStreamUntilEnded function to get a MediaStream.
stream = v1.mozCaptureStream();
The proposed specification.
Note however that you need to ensure that the file is same-origin with respect to the page; otherwise your MediaStream isn't going to be accessible to you. The same-origin rules for file:/// are probably going to cause issues. One way to avoid that is not to set the location directly, but to load the file using an <input type="file"> element.
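A minimal sketch of that approach (the element ids are placeholder assumptions):

var v1 = document.getElementById('v1'); // assumes <video id="v1"> and <input type="file" id="picker"> in the page
document.getElementById('picker').addEventListener('change', function (e) {
  var file = e.target.files[0];
  v1.src = URL.createObjectURL(file); // same-origin blob: URL, avoids the file:/// restrictions
  v1.play();
  var stream = v1.mozCaptureStream(); // prefixed in Firefox; captureStream() in the later spec
  // stream can now be added to an RTCPeerConnection
});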
As noted in other answers, Firefox currently only supports the baseline profile of H.264.
First, you are right: getUserMedia will not work for you. However, there are a couple of options.
Hack a stream together using RTCDataChannel, breaking up the media stream, delivering each packet and then handling it on the client side (a minimal sketch follows this list).
Take a look at this demo for prerecorded media streams. I do not believe that H264 is addressed, but it could help you on your way (probably for Firefox only).
Use some sort of WebRTC breaker/endpoint that is native to stream the file. I know specifically that others (including myself) have streamed H264 to Firefox through the Janus-Gateway.
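A minimal sketch of the RTCDataChannel hack from the first option, assuming an already-negotiated RTCPeerConnection named pc (the chunk size and channel name are arbitrary):

var channel = pc.createDataChannel('media');
var CHUNK = 16 * 1024; // keep messages small for cross-browser interop
function sendFile(file) {
  var offset = 0;
  var reader = new FileReader();
  var readSlice = function () {
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK));
  };
  reader.onload = function () {
    channel.send(reader.result);         // ship one chunk
    offset += CHUNK;
    if (offset < file.size) readSlice(); // then read the next one
  };
  readSlice();
}

The receiving side has to reassemble the chunks and feed them to a decoder itself, which is exactly why this is a hack.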
A couple of asides:
Firefox only supports the baseline profile when streaming H264 over a WebRTC PeerConnection
Chrome does not support H264 for WebRTC at all
Are you trying to have getUserMedia return an H.264-encoded stream?
In that case, today it is only possible with Firefox, in a specific environment (the Cisco H.264 plugin installed), and only for the baseline profile.
Chrome promised in November to add this capability, but there is no timeline that I know of. Expect Q2 2015 at the earliest.
Using our (Temasys) commercial plugin you will soon be able to do that in IE and Safari.
Those are the only options on the client side I can think of. On the server side you can use whatever you want to transcode, including Janus, Kurento, PowerMedia, Licode/Lynckia, ...
Note: using other means like DataChannel or WebSocket is OK for transferring files, but would greatly reduce the user experience, as you would not have the recovery (and security) mechanisms included in SRTP and DTLS, and would also not have the media-specific enhancements that are in WebRTC, like jitter buffers and NetEQ, etc.

Creating a WebRTC receiver

I am new to WebRTC and trying to figure out how to create a program outside a browser which receives a WebRTC audio stream and outputs it on speakers.
Are there any WebRTC libraries for Java or C#?
That receiver will be running on a Linux machine.
--
I've been thinking about using getUserMedia() to access the microphone. But then:
In what format will such a stream be transmitted?
Let's say I use WebRTC2SIP and build a Java endpoint using JSIP;
or I just use a socket and send the stream over http.
What audio format will I get on the receiver side? So far I have read that WebRTC compresses the stream somehow.
I guess there are two ways for you:
Build the whole WebRTC voice engine for Android/iOS or Mac etc., and just use the API provided by the VoE (voice engine).
Build standalone NS/VAD/AECM/AGC modules and use them in your project. For example, to build a standalone NS (noise suppression) module for Android: use AudioRecord (Java layer, Android side) to record sound from the mic, run the noise-suppression processing on that data (JNI layer, WebRTC side), and finally play back the processed data using AudioTrack (Java layer, Android side). A minimal sketch of this loop follows below.
EDIT:
for the 2nd option, the format is raw PCM data.
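A minimal sketch of that second option on Android, where nativeNoiseSuppress is a hypothetical JNI binding to WebRTC's NS module (the 16 kHz rate and 160-sample frame are assumptions; NS works on 10 ms frames):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class NsLoop {
    static { System.loadLibrary("webrtc_ns"); }                    // hypothetical native library
    private static native void nativeNoiseSuppress(short[] frame); // hypothetical JNI hook

    public static void run() {
        final int rate = 16000;
        final int bufSize = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        AudioTrack out = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufSize, AudioTrack.MODE_STREAM);
        short[] frame = new short[160]; // 10 ms of mono PCM at 16 kHz
        rec.startRecording();
        out.play();
        while (!Thread.interrupted()) {
            rec.read(frame, 0, frame.length);  // Java layer: raw PCM from the mic
            nativeNoiseSuppress(frame);        // JNI layer: WebRTC noise suppression
            out.write(frame, 0, frame.length); // Java layer: play back the processed PCM
        }
        rec.stop(); rec.release();
        out.stop(); out.release();
    }
}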
Check out the working Audio demo and code at demo.easyrtc.com
The code is all open source and can be checked out at https://github.com/priologic/easyrtc
You can look for any known issues around easyRTC at our forum at https://groups.google.com/forum/#!forum/easyrtc
Also check out our main site at easyrtc.com

Red5 cutting streams

I have a Flex app that was developed using Wowza as the media server. The purpose of the app is to record the client's microphone. I have to switch over to Red5 now, so I set up a VM with Red5. I had success in creating streams, but only a small part of the audio is saved correctly in the flv files under the stream/ folders. Why is that happening? Could anybody offer some suggestions? I am only aware of the possibility of rewriting it to extend RTMPClient instead of using NetStream and NetConnection.
You should check the version of Red5.
Are/were you using Red5 version 1.0? There is a bug in it which truncates a lot from both ends of the final .flv.
You could downgrade to a lower version or use 1.0.2. It works fine in those.