How to play video from MediaLive through UDP? - udp

On AWS, how do you play video from MediaLive through the UDP output group?
For my use case, I'm building a live-stream pipeline that takes an MPEG-2 transport stream from MediaLive, processes it through a UDP server (configured as an output group), and delivers it to a web client that plays it in an HTML5 video element.
The problem is: the data is flowing, but the video isn't rendering.
Previously, my output group was set to AWS MediaPackage, but because I need the ability to read and update frames in the live stream, I'm trying to feed it through UDP.
Is setting the output group to UDP the right approach?
The documentation is a bit sparse here. I'm wondering if there are resources or examples where others were able to play video this way, as opposed to HLS/DASH.

Thanks for your post. Yes, the UDP or RTP output would be the right choice of output from MediaLive. Appropriate routing rules will need to be in place on any intermediary routers or firewalls to ensure that the UDP traffic can reach the client.
You wrote that 'the data is flowing, but the video isn't rendering.' This suggests an issue with the web client.
I suggest adding another identical UDP output to your UDP server and sending its output to a computer (or AWS Workspace) running a copy of VLC player. Decoding that new stream will give you a confidence monitor on the output of the entire workflow up to that point. This will help isolate the problem.
You could achieve the same result with a packet capture or a TS stream analyzer if you prefer. If you go that route, I recommend trying to play back one of the packet captures locally with the web client.
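If a quick scripted check is easier than VLC or a full analyzer in your setup, here is a minimal sketch (Node.js/TypeScript, with the listening port as an assumption) that receives the UDP output and verifies the datagrams look like MPEG-TS, i.e. 188-byte packets starting with the 0x47 sync byte. It is a crude confidence monitor, not a player.

```typescript
// Minimal UDP listener that checks incoming datagrams for MPEG-TS framing.
// Assumes MediaLive's UDP output group is pointed at this host on port 5000 (hypothetical).
import * as dgram from "dgram";

const PORT = 5000; // match your UDP output group destination
const TS_PACKET_SIZE = 188;
const TS_SYNC_BYTE = 0x47;

const socket = dgram.createSocket("udp4");

socket.on("message", (msg, rinfo) => {
  // A UDP datagram from a TS mux typically carries 1..7 whole TS packets.
  const wholePackets = Math.floor(msg.length / TS_PACKET_SIZE);
  let synced = 0;
  for (let i = 0; i < wholePackets; i++) {
    if (msg[i * TS_PACKET_SIZE] === TS_SYNC_BYTE) synced++;
  }
  console.log(
    `${rinfo.address}:${rinfo.port} -> ${msg.length} bytes, ` +
      `${synced}/${wholePackets} TS packets in sync`
  );
});

socket.bind(PORT, () => console.log(`Listening for MPEG-TS on udp://0.0.0.0:${PORT}`));
```

If every datagram reports its packets in sync, the transport path is healthy and the problem can be isolated to the client side of the workflow.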

Related

Is it possible to stream the output of an ffmpeg command to a client with dot net core?

I'm trying to take two videos and transform them with ffmpeg into a single video. It works great if you take the two videos, run them through ffmpeg and then serve that file up via an API. Unfortunately the upper range for these videos is ~20 minutes, and this method takes too long to create the full video (~30 seconds w/ ultrafast).
I had an idea to stream the output of the ffmpeg command to the client, which would eliminate the need to wait for ffmpeg to create the whole video. I've tried to prove this out myself and haven't had much success. It could be my inexperience with streams, or this could be impossible.
Does anyone know if my idea to stream the in-progress output of ffmpeg is possible / feasible?
You should check out Hangfire. I used it for running the process in the background, and if you need a notification, SignalR will help you.
What do you mean by "streaming"? Serving the result of your command to an HTTP client on the fly? Or is your client some video player that plays the video (like a VLC player receiving a TCP stream from 4 IP cameras)?
Dealing with video isn't a simple task, and you need to choose your protocols, tools and even hardware carefully.
Based on the command that you sent as an example, you probably need some jobs that convert your videos.
Here's a complete article on how to use Azure Batch to process videos with ffmpeg. You can use any batching solution you want (another answer suggests Hangfire, and that's OK too).
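If the streaming idea from the question is pursued instead, the core of it is to have ffmpeg write a streamable container (e.g. fragmented MP4) to stdout and copy that pipe into the HTTP response as it is produced. Below is a minimal sketch in Node.js/TypeScript rather than .NET Core (the ASP.NET equivalent would copy the process's standard output into the response body); the input files and the filter are placeholders, not the asker's actual command.

```typescript
// Sketch: pipe ffmpeg's in-progress output to an HTTP client instead of
// waiting for the whole file. Fragmented MP4 keeps the output playable
// while it is still being written.
import { spawn } from "child_process";
import * as http from "http";

http.createServer((req, res) => {
  const ffmpeg = spawn("ffmpeg", [
    "-i", "input-a.mp4",                     // placeholder inputs
    "-i", "input-b.mp4",
    "-filter_complex", "[0:v][1:v]hstack",   // placeholder transform
    "-c:v", "libx264", "-preset", "ultrafast",
    "-movflags", "frag_keyframe+empty_moov", // streamable fragmented MP4
    "-f", "mp4",
    "pipe:1",                                // write to stdout
  ]);

  res.writeHead(200, { "Content-Type": "video/mp4" });
  ffmpeg.stdout.pipe(res);                   // bytes flow to the client as ffmpeg produces them

  ffmpeg.stderr.on("data", (d) => process.stderr.write(d));
  req.on("close", () => ffmpeg.kill("SIGKILL")); // stop encoding if the client disconnects
}).listen(8080);
```

Whether the client can start playback before the file is complete then depends on the player and container; fragmented MP4 or an HLS playlist are the usual choices for that.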

Stream html5 camera output

Does anyone know how to stream HTML5 camera output to other users?
If that's possible, should I use sockets and stream images to the users, or some other technology?
Is there any video tutorial where I can take a look at this?
Many thanks.
The two most common approaches now are most likely:
Stream from the source to a server, and allow users to connect to the server to stream to their devices, typically using some form of Adaptive Bit Rate streaming protocol (ABR - basically creates multiple bit rate versions of your content and chunks them, so the client can choose the next chunk at the best bit rate for the device and current network conditions).
Stream peer to peer, or via a conferencing hub, using WebRTC
In general, the latter is more focused on real time, i.e. any delay should be below the threshold that would interfere with audio and video conferencing, usually less than 200 ms for audio. To achieve this it may sometimes have to sacrifice quality, especially video quality.
There are some good WebRTC samples available online (here at the time of writing): https://webrtc.github.io/samples/
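Either approach starts with capturing the camera in the browser via getUserMedia; for the WebRTC route, the captured tracks are then added to an RTCPeerConnection. A minimal browser-side sketch (TypeScript), assuming the signaling channel that exchanges the offer/answer and ICE candidates is handled elsewhere:

```typescript
// Capture the camera and hand its tracks to a peer connection.
// Signaling (exchanging the offer/answer and ICE candidates) is out of
// scope here and assumed to happen over your own channel, e.g. a WebSocket.
async function startCameraBroadcast(): Promise<RTCPeerConnection> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
    audio: true,
  });

  // Show a local preview (assumes a <video id="preview" autoplay muted> element exists).
  const preview = document.getElementById("preview") as HTMLVideoElement;
  preview.srcObject = stream;

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Send pc.localDescription to the remote peer / conferencing hub via your
  // signaling channel, then apply its answer with pc.setRemoteDescription(...).
  return pc;
}
```

For the ABR route, the same captured stream would instead be sent to a media server that transcodes and packages it (e.g. into HLS/DASH renditions) for viewers.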

Server side real time analysis of video streamed from client

I'm trying to build a system for real-time analysis on the server of video streamed from the client using WebRTC.
Here is what I currently have in mind. I would capture the webcam video stream from the client and send it (compressed using H.264?) to my server.
On my server, I would receive the stream and pass every raw frame to my C++ library for analysis.
The output of the analysis (box coordinates to draw) would then be sent back to the client via WebRTC or a separate WebSocket connection.
I've been looking online and found open-source media servers like Kurento and Mediasoup, but since I only need to read the stream (no dispatching to other clients), do I really need to use an existing server? Or could I build it myself, and if so, where do I start?
I'm fairly new to WebRTC and the video streaming world in general, so I was wondering: does this whole thing sound right to you?
That depends on how real-time your requirements are. If you want 30-60 fps and near real-time, getting the images to the server via RTP is the best solution. Then you'll need things like a jitter buffer, depacketization, video decoders, etc.
If you require only one image per second, grabbing it from the canvas and sending it via WebSockets or HTTP POST is easier. https://webrtchacks.com/webrtc-cv-tensorflow/ shows how to do that in Python.
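For the simpler one-image-per-second route, the browser side can paint the camera video into a canvas on a timer and ship each frame as a JPEG over a WebSocket. A minimal sketch (TypeScript), with the WebSocket URL, frame rate, and the server's reply format as assumptions:

```typescript
// Grab one frame per second from a playing <video> element fed by getUserMedia
// and send it to the analysis server as a JPEG blob over a WebSocket.
async function streamFramesForAnalysis(video: HTMLVideoElement): Promise<void> {
  const ws = new WebSocket("wss://example.com/analyze"); // hypothetical endpoint
  await new Promise((resolve) => (ws.onopen = resolve));

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    canvas.toBlob(
      (blob) => {
        if (blob && ws.readyState === WebSocket.OPEN) ws.send(blob);
      },
      "image/jpeg",
      0.7 // quality: smaller frames, usually good enough for detection
    );
  }, 1000); // one frame per second, as in the answer above

  // The server can reply with box coordinates on the same socket
  // (assuming it sends them back as JSON).
  ws.onmessage = (event) => {
    const boxes = JSON.parse(event.data);
    console.log("boxes to draw:", boxes);
  };
}
```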

Trouble with RTMP ingest chunk stream

I am trying to build my own client RTMP library for an app that I am working on. So far everything has gone pretty successfully, in that I am able to connect to the RTMP server, negotiate the handshake, and then send all the necessary packets (FCPublish, Publish, etc.). From the server I then get the onStatus message of NetStream.Publish.Start, which means that I have successfully got the server to allow me to start publishing my live video broadcast. Wireshark also confirms that the information (/data packetizing) is correct, as it shows up correctly there too.
Where I am having some trouble is RTMP chunking. The Adobe RTMP specification, on pages 17 and 18, shows an example of how a message is chunked, and from that example I can see that it is broken down based on the chunk size (128 bytes). For me the chunk size gets negotiated in the initial connect exchange and is always 4096 bytes. So when I am sending video data larger than 4096 bytes, I need to chunk the message down: I send the RTMP packet header combined with the first 4096 bytes of data, then send a small one-byte RTMP header of 0xC4 (0xC0 | chunk stream ID (0x04)) combined with the next 4096 bytes of video data, until the full payload specified by the header has been sent. Then a new frame comes in and the same process is repeated.
Checking other RTMP client examples written in different languages, this seems to be what they are all doing. Unfortunately, the ingest server that I am trying to stream to is not picking up the broadcast video data; they don't close the connection on me, they just never show video or any sign that the video is right. Wireshark shows that after the video atom packet is sent, most packets are listed as Unknown (0x0) for a little bit, then they switch to Video Data and sort of flip-flop between Unknown (0x0) and Video Data. However, if I restrict my payload max size to 20000 bytes, Wireshark shows everything as Video Data. Obviously the ingest server will not show video in this situation, as I am dropping chunks of data to keep it to only 20k bytes.
Trying to figure out what is going wrong, I started another Xcode project that lets me spoof an RTMP server on my LAN so that I can see what the data looks like from libRTMP iOS as it comes into the server. With libRTMP I can also log the packets it sends, and it seems to insert the byte 0xC4 every 128 bytes, even though I have sent the Change Chunk Size message as the server. When I try to replicate this in my RTMP client library by just using a 128-byte chunk size, even though it has been negotiated to 4096 bytes, the server closes the connection on me. However, if I point libRTMP at the live RTMP server, it still prints within libRTMP that it is sending packets with a chunk size of 128, and the server seems to accept it, as video shows up. When I look at the data coming in on my RTMP server, I can see that it is all there.
Anyone have any idea what could be going on?
While I haven't worked specifically with RTMP, I have worked with RTSP/RTP/RTCP pretty extensively, so, based on that experience and the bruises I picked up along the way, here are some random, possibly applicable tips and things to look for that might be causing an issue:
Does your video encoding match what you're telling the server? In other words, if your video is encoded as H.264, is that what you're specifying to the server?
Does the data match the container format that the server is expecting? For example, if the server expects to receive an MPEG-4 movie (.m4v) file but you're sending only an encoded MPEG-4 (.mp4) stream, you'll need to encapsulate the MPEG-4 video stream into an MPEG-4 movie container. Conversely, if the server is expecting only a single MPEG-4 video stream but you're sending an encapsulated MPEG-4 Movie, you'll need to de-mux the MPEG-4 stream out of its container and send only that content.
Have you taken into account the MTU of your transmission medium? Regardless of chunk size, an MTU mismatch between the client and server can be hard to debug (and is possibly why you're getting some packets listed as "Unknown" type and others as "Video Data" type). Much of this will be taken care of by most OSes' built-in segmentation-and-reassembly (SAR) infrastructure so long as the MTU is consistent, but in cases where you have to do your own SAR logic it's very easy to get this wrong.
Have you tried capturing traffic in Wireshark with libRTMP iOS and your own client and comparing the packets side by side? Sometimes a "reference" packet trace can be invaluable in finding that one little bit (or many) that didn't originally seem important.
Good luck!
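For reference, the chunking scheme described in the question (a full header on the first chunk, then a one-byte type-3 basic header of 0xC0 | chunk stream ID before each continuation) boils down to something like the sketch below (TypeScript), assuming a 4096-byte negotiated chunk size, chunk stream ID 4, and the message header passed in from your existing header-building code.

```typescript
// Split one RTMP message payload into chunks. The first chunk is preceded by
// the full chunk header built elsewhere; every following chunk gets only the
// one-byte type-3 basic header: 0xC0 | chunkStreamId (0xC4 for csid 4).
function chunkRtmpMessage(
  payload: Buffer,
  fullHeader: Buffer,   // type-0/1/2 header for this message, built by your existing code
  chunkSize = 4096,     // value negotiated via the chunk size message
  chunkStreamId = 4
): Buffer {
  const type3Header = Buffer.from([0xc0 | chunkStreamId]); // 0xC4 for csid 4
  const pieces: Buffer[] = [];

  for (let offset = 0; offset < payload.length; offset += chunkSize) {
    const end = Math.min(offset + chunkSize, payload.length);
    pieces.push(offset === 0 ? fullHeader : type3Header);
    pieces.push(payload.subarray(offset, end));
  }
  return Buffer.concat(pieces);
}
```

One detail worth double-checking against a Wireshark capture of a working client is that the continuation header is inserted every chunkSize bytes of the message payload only, without counting the header bytes themselves toward the chunk size.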

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming audio input data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers.
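In case it helps, the raw-PCM approach mentioned above can be sketched roughly like this on the browser side (TypeScript), assuming the server sends 16-bit signed mono PCM frames over a WebSocket at a known sample rate; the endpoint URL and sample rate are placeholders.

```typescript
// Receive raw 16-bit PCM over a WebSocket and schedule it back to back
// through the Web Audio API. Sample rate and channel count are assumptions
// that must match whatever the server is producing.
const SAMPLE_RATE = 44100; // assumed server output rate
const ctx = new AudioContext();
const ws = new WebSocket("wss://example.com/audio"); // hypothetical endpoint
ws.binaryType = "arraybuffer";

let playHead = 0; // when the next buffer should start, in AudioContext time

ws.onmessage = (event) => {
  const int16 = new Int16Array(event.data as ArrayBuffer);

  // Convert signed 16-bit samples to the -1..1 floats Web Audio expects.
  const float32 = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    float32[i] = int16[i] / 32768;
  }

  const buffer = ctx.createBuffer(1, float32.length, SAMPLE_RATE);
  buffer.copyToChannel(float32, 0);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);

  // Schedule gaplessly; keep a small safety margin if playback has fallen behind.
  playHead = Math.max(playHead, ctx.currentTime + 0.05);
  source.start(playHead);
  playHead += buffer.duration;
};
```

With this approach the perceived latency is roughly the size of the PCM frames the server sends plus the scheduling margin, which is exactly the control the audio element's aggressive prefetching takes away.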