How to interrupt an RTMP FLV broadcast and publish a live RTMP broadcast on Red5?
I am using the OSMF Strobe player. I have my FLV playlist working, but when I broadcast live from my webcam, what is the procedure to stop the FLV streams, then play an FLV countdown video, and then switch to the live broadcast from the webcam?
Here is how I would do it at a high level, since I don't have the code for what you're asking and "easy" transitions between streams aren't built into the server.
First, create a signaling or event system within your app to accept actions triggered by your broadcaster. Using that signaling system, transition your subscribers/viewers by sending events that tell their players to stop the current video and start a new one. I suggest using Shared Objects to pass these signals around: have server-side methods, called by your broadcaster, send the signals on the Shared Object. The "play" side is the easy part, since you simply provide the stream name in your signal/event.
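To make that concrete, here is a rough Java sketch of a Red5 application that exposes a broadcaster-callable method and relays the signal through a shared object. The shared-object name ("signals"), the client handler name ("onSwitchStream"), and the method name are all illustrative, and the exact imports/signatures may differ between Red5 versions, so treat this as a sketch rather than drop-in code.

```java
import java.util.ArrayList;
import java.util.List;

import org.red5.server.adapter.ApplicationAdapter;
import org.red5.server.api.Red5;
import org.red5.server.api.scope.IScope;
import org.red5.server.api.so.ISharedObject;

/**
 * Sketch: the broadcaster calls switchStream(...) over NetConnection.call,
 * and every viewer connected to the "signals" shared object receives an
 * "onSwitchStream" message telling its player which stream to play next.
 * The names "signals" and "onSwitchStream" are illustrative, not Red5 built-ins.
 */
public class SwitcherApplication extends ApplicationAdapter {

    @Override
    public boolean appStart(IScope scope) {
        // Create the non-persistent shared object viewers will subscribe to.
        createSharedObject(scope, "signals", false);
        return super.appStart(scope);
    }

    /** Called by the broadcaster: tell all viewers to play a new stream. */
    public void switchStream(String streamName) {
        IScope scope = Red5.getConnectionLocal().getScope();
        ISharedObject so = getSharedObject(scope, "signals");

        // Each viewer's client-side "onSwitchStream" handler stops the
        // current NetStream and starts playing streamName.
        List<Object> args = new ArrayList<>();
        args.add(streamName);
        so.sendMessage("onSwitchStream", args);
    }
}
```

On the client side, the broadcaster would call the method by name (e.g. `netConnection.call("switchStream", ...)`) and each viewer would implement the shared-object handler to swap streams, with the FLV countdown video simply being one more stream name sent through the same signal.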
On AWS, how do you play video from MediaLive through the UDP output group?
For my use case, I'm building a live stream pipeline that takes an MPEG-2 transport stream from MediaLive, processes it through a UDP server (configured as an output group), and is consumed by a web client that plays it in an HTML5 video element.
The problem is: the data is flowing, but the video isn't rendering.
Previously, my output group was set to AWS MediaPackage, but because I need the ability to read and update frames in the live stream, I'm trying to feed it through UDP.
Is setting the output group to UDP the right approach?
The documentation is a bit sparse here. I'm wondering if there are resources or examples where others were able to play video this way as opposed to HLS/DASH.
Thanks for your post. Yes, the UDP or RTP output would be the right choice of output from MediaLive. Appropriate routing rules will need to be used on any intermediary routers or firewalls to ensure that the UDP traffic can reach the client.
You wrote that 'the data is flowing, but the video isn't rendering.' This suggests an issue with the web client.
I suggest adding another identical UDP output to your UDP server and sending its output to a computer (or AWS Workspace) running a copy of VLC player. Decoding that new stream will give you a confidence monitor on the output of the entire workflow up to that point. This will help isolate the problem.
You could achieve the same result with a packet capture or a TS stream analyzer instead; if you go that route, I recommend trying to play back one of the captures locally with the web client.
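If you don't have a capture tool handy, even a tiny listener that appends the raw datagram payloads to a .ts file gives you something you can open in VLC or a TS analyzer afterwards. A minimal Java sketch, assuming the UDP output is pointed at this host; the port (5000) is just a placeholder for your actual destination:

```java
import java.io.FileOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

/**
 * Confidence check: listen on the UDP port the transport stream is sent to
 * and append every datagram payload to capture.ts. The resulting file can be
 * opened in VLC or a TS analyzer to verify the stream is intact end to end.
 */
public class UdpTsCapture {
    public static void main(String[] args) throws Exception {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 5000; // placeholder port
        byte[] buf = new byte[65535];
        try (DatagramSocket socket = new DatagramSocket(port);
             FileOutputStream out = new FileOutputStream("capture.ts")) {
            System.out.println("Capturing UDP MPEG-TS on port " + port + "...");
            while (true) { // stop with Ctrl-C; this is only a diagnostic sketch
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                out.write(packet.getData(), packet.getOffset(), packet.getLength());
            }
        }
    }
}
```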
I'm trying to build a web app where there's a broadcaster of a camera stream and viewers who can watch and control the stream. Is it possible for a viewer to control the constraints of the camera being used to broadcast the stream (exposure, brightness, etc.) and possibly pause, rewind, and record footage, using WebRTC? I want to know before I decide to use WebRTC to accomplish this task. Based on my reading of the WebRTC guide and example pages, I think recording is possible, but I wasn't sure about a remote PeerConnection changing the local PeerConnection's settings or vice versa.
In our case, the data transfer is for controlling video pause and record. We are using an iMX8Mini eval board for streaming video to an Android mobile via USB-OTG. We would like to know whether the video stream is affected by commands sent over the same USB-OTG connection.
How do audio and video in a WebRTC PeerConnection stay in sync? I am using an API that publishes audio and video (I assume as one peer connection) to a media server. The audio can occasionally go out of sync by up to 200 ms. I am attributing this to the possibility that the audio and video are separate streams, which would account for why the sync can drift.
In addition to Sean's answer:
The WebRTC player in browsers has a very low tolerance for timestamp differences between arriving audio and video samples. Your audio and video streams must be aligned (interleaved) precisely; that is, the timestamp of the last audio sample received from the network should be within roughly ±200 ms of the timestamp of the last video frame received from the network. Otherwise the WebRTC player will stop using NTP timestamps and will play the streams individually. This is because the WebRTC player tries to keep latency at a minimum; I'm not sure it's a good decision from the WebRTC team. If your bandwidth is not sufficient, or if the live encoder provides streams that are not timestamp-aligned, then you will get out-of-sync playback. In my opinion, the WebRTC player could have a setting for whether to use that tolerance value or always play in sync, using NTP timestamps, at the expense of latency.
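As a rough mental model of that behavior only (the threshold value and the names below are illustrative; this is not an exposed browser API):

```java
/**
 * Conceptual model of the skew-tolerance decision described above:
 * within tolerance, align both tracks on the sender-report clock;
 * outside tolerance, play each track as it arrives to keep latency low.
 * The 200 ms threshold and these names are illustrative only.
 */
public class SyncToleranceModel {

    private static final long TOLERANCE_MS = 200;

    static boolean canPlayInSync(long lastAudioTimestampMs, long lastVideoTimestampMs) {
        long skewMs = Math.abs(lastAudioTimestampMs - lastVideoTimestampMs);
        return skewMs <= TOLERANCE_MS;
    }

    public static void main(String[] args) {
        System.out.println(canPlayInSync(1_000, 1_150)); // true  (150 ms skew)
        System.out.println(canPlayInSync(1_000, 1_400)); // false (400 ms skew)
    }
}
```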
RTP/RTCP (which WebRTC uses) traditionally uses the RTCP Sender Report. That allows each SSRC stream to be synced on an NTP timestamp. Browsers do use them today, so things should work.
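For reference, the Sender Report pairs an NTP wall-clock time with the RTP timestamp that was current at that instant, per SSRC; a receiver can use that pairing to map any later RTP timestamp back to wall-clock time and compare audio against video. A small sketch of that arithmetic (the clock rates shown, 90 kHz for video and 48 kHz for audio, are typical values; 32-bit wrap-around handling is omitted):

```java
/**
 * Sketch of RTCP Sender Report based mapping: the SR pairs an NTP wall-clock
 * time with an RTP timestamp on the same SSRC, so later RTP timestamps can
 * be converted back to wall-clock time and compared across audio and video.
 */
public class SenderReportMapping {

    /** Convert an RTP timestamp to a wall-clock time in milliseconds. */
    static double rtpToWallClockMs(long srNtpMs, long srRtpTimestamp,
                                   long rtpTimestamp, int clockRateHz) {
        long rtpDelta = rtpTimestamp - srRtpTimestamp;      // ticks elapsed since the SR
        double deltaMs = rtpDelta * 1000.0 / clockRateHz;   // ticks -> milliseconds
        return srNtpMs + deltaMs;
    }

    public static void main(String[] args) {
        // Video on a 90 kHz clock, audio on a 48 kHz clock (typical values).
        double videoMs = rtpToWallClockMs(1_700_000_000_000L, 900_000, 909_000, 90_000);
        double audioMs = rtpToWallClockMs(1_700_000_000_000L, 480_000, 484_800, 48_000);
        // If both map to (nearly) the same wall-clock time, the samples are in sync.
        System.out.printf("video @ %.1f ms, audio @ %.1f ms, skew %.1f ms%n",
                videoMs, audioMs, Math.abs(videoMs - audioMs));
    }
}
```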
Are you doing any protocol bridging or anything that could be RTP only? What Media Server are you using?
As the title says, are there any methods I can use to play multiple videos continuously using a simple RTMP client (my RTMP server is Wowza)? Here is what I'm thinking:
When the first video is about to finish, open a new thread to send a new createStream command and a new play command, receive the RTMP packets for the second video, and put them into a buffer list; then, when the first video is finished, play the RTMP packets from the buffer list.
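Roughly, this is the flow I have in mind; RtmpClient and RtmpPacket below are hypothetical placeholders for whatever simple client I end up writing, not a real library:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Sketch of the idea: while the current video is draining, a second thread
 * already issues createStream/play for the next item and buffers its packets,
 * so playback can switch over without a gap.
 * RtmpClient and RtmpPacket are hypothetical placeholders, not a real library.
 */
public class ContinuousPlayback {

    interface RtmpPacket { }

    interface RtmpClient {
        int createStream();                    // RTMP createStream command
        void play(int streamId, String name);  // RTMP play command
        RtmpPacket readPacket(int streamId);   // next media packet, null at end of stream
    }

    private final Queue<RtmpPacket> prefetchBuffer = new ArrayDeque<>();

    void playContinuously(RtmpClient client, String current, String next) throws Exception {
        int currentId = client.createStream();
        client.play(currentId, current);

        // Prefetch the next item on a background thread (a real client would
        // bound this buffer and only start near the end of the first video).
        Thread prefetcher = new Thread(() -> {
            int nextId = client.createStream();
            client.play(nextId, next);
            RtmpPacket p;
            while ((p = client.readPacket(nextId)) != null) {
                synchronized (prefetchBuffer) { prefetchBuffer.add(p); }
            }
        });
        prefetcher.start();

        // Drain the current stream, then hand the buffered packets to the player.
        RtmpPacket p;
        while ((p = client.readPacket(currentId)) != null) {
            render(p);
        }
        prefetcher.join();
        synchronized (prefetchBuffer) {
            for (RtmpPacket buffered : prefetchBuffer) {
                render(buffered);
            }
        }
    }

    private void render(RtmpPacket packet) {
        // Hand the packet to the decoder/player (omitted).
    }
}
```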
Is this approach workable, or are there other recommended methods to achieve it? Any suggestion will be appreciated, thanks!
While the functionality is not built in, Wowza does have a module called StreamPublisher that allows you to implement a server-side playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.