Broadcast received video signal via Red5

What needs to be done: broadcast a live event to a webpage. Each webpage visitor sees a JW (or similar) player.
How I understand the logic:
The TV van at the event transmits the video signal to Red5 over the internet.
Red5 rebroadcasts the received signal and offers it to JW players at rtmp://myserver.com/oflaDemo.
JW Player does the rest and displays the stream to visitors.
How do I manage the received signal with Red5?

This is roughly accurate, although I'd recommend using a CDN for this and simply having JW Player fetch the live stream from the CDN.
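For illustration, a minimal player-side sketch in TypeScript, assuming a Flash-capable JW Player version that still supports RTMP playback and a stream published to the oflaDemo application under the hypothetical name "livestream":

    // JW Player is loaded globally by its script tag; declared here for TypeScript.
    declare const jwplayer: (id: string) => { setup(config: object): void };

    // Point the player at the RTMP application exposed by the Red5 server.
    // "livestream" is a hypothetical stream name chosen when publishing to Red5.
    jwplayer("player").setup({
        file: "rtmp://myserver.com/oflaDemo/livestream",
        primary: "flash",   // RTMP playback needs the Flash renderer in older JW versions
        width: 640,
        height: 360,
    });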

Related

SFU with Kinesis video stream SDK js

I built a React.js and Node.js app, inspired by the KVS example.
It works for a few participants: a master can stream their webcam video and audio via WebRTC and receive every viewer's webcam video. But we realized it won't be suitable for 50 people, as the master's CPU and network usage grow with each viewer connection.
For every viewer, a peer connection is created with the master. We would rather have the master send its stream only once to a server, which then forwards it to the viewers.
We would like to go for an SFU solution; would that be possible with the JavaScript SDK?
For the C SDK it is advised to use putMedia and GStreamer; is there an equivalent in JavaScript?
I saw that mediasoup could do the SFU part; could it be used with Kinesis?
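If you go the mediasoup route, the SFU core on the Node.js side could look roughly like the sketch below (mediasoup v3 API; signaling and any Kinesis integration are omitted, and the announced IP is a placeholder). The key point is that the master produces its stream once to the router, and every viewer consumes from the router instead of from the master.

    import * as mediasoup from "mediasoup";

    async function startSfu() {
        // One worker process; a router groups the master and all of its viewers.
        const worker = await mediasoup.createWorker();
        const router = await worker.createRouter({
            mediaCodecs: [
                { kind: "audio", mimeType: "audio/opus", clockRate: 48000, channels: 2 },
                { kind: "video", mimeType: "video/VP8", clockRate: 90000 },
            ],
        });

        // The master uploads its webcam stream exactly once over this transport.
        const masterTransport = await router.createWebRtcTransport({
            listenIps: [{ ip: "0.0.0.0", announcedIp: "203.0.113.10" }], // placeholder public IP
            enableUdp: true,
            enableTcp: true,
        });

        // Driven by your signaling:
        //   const producer = await masterTransport.produce({ kind, rtpParameters });
        //   const consumer = await viewerTransport.consume({
        //       producerId: producer.id, rtpCapabilities });
        // so the master's CPU/bandwidth no longer grows with the audience size.
        return { worker, router, masterTransport };
    }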

Disabling Local stream on Remote Side after Call is connected via WebRtc in Android

I'm trying to isolate video and audio. I am able to control the video feed from the caller side, but I am unable to turn off the local video stream on the remote side when it is an audio call. Any suggestions on how to isolate the video and audio feeds? Simply removing the streams obtained via getStream doesn't work.
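In browser WebRTC terms, the usual way to achieve this is to disable or replace the video track itself rather than removing the whole stream; a rough TypeScript sketch is below (the Android org.webrtc API exposes an analogous MediaStreamTrack.setEnabled on its VideoTrack).

    // pc is an existing RTCPeerConnection; localStream came from getUserMedia.
    function makeAudioOnly(pc: RTCPeerConnection, localStream: MediaStream): void {
        // Option 1: keep the track but stop sending frames (remote side sees black/frozen video).
        localStream.getVideoTracks().forEach((track) => (track.enabled = false));

        // Option 2: stop sending the video track entirely, without renegotiation.
        pc.getSenders()
            .filter((sender) => sender.track?.kind === "video")
            .forEach((sender) => void sender.replaceTrack(null));
    }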

Stream live video from Raspberry Pi Camera to Android App

I have multiple Raspberry Pi devices with the native camera in my home and office (PUBLISHERS). Each publisher (Pi) is on a local network behind a firewall/router and connected to the internet.
I have an EC2 web server (BROKER). It is publicly accessible over a public IP address.
I have an Android app on my phone. It has internet connectivity through a 4G network. (SUBSCRIBER/CONSUMER/CLIENT)
I am trying to view the live feed from each of the Raspberry Pi cameras in my Android app. The problem is more conceptual than technical: I am unable to decide on the right approach and the most efficient way to achieve this in terms of cost and latency.
Approaches I have figured out based on my research:
Approach 1:
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg (see the publish-side sketch after the URL list below)
2. Have code on the Pi device that reads the RTSP stream and saves it to AWS S3
3. Have a middleware that transcodes the RTSP stream and saves it in a format accessible to the mobile app via an S3 URL
Approach 2:
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and pushes it to a remote frame-gathering (ImageZMQ) server. EC2 can be used here.
3. Have a middleware that transcodes the frames to an RTSP stream and saves it in a format on S3 that is accessible to the mobile app via a publicly accessible S3 URL
Approach 3:
1. Stream the camera in WebRTC format by launching a web browser.
2. Send the stream to a media server like Kurento. EC2 can be used here.
3. Generate a unique, publicly accessible WebRTC URL for each stream
4. Access the webrtc video via mobile app
Approach 4:
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream via Amazon Kinesis client installed on the devices.
3. Publish the Kinesis stream to AWS Cloud
4. Have a Lambda function transcode it and store it in S3
5. Have the mobile app access the video stream via publicly accessible S3 url
Approach 5 (fairly complex, involving STUN/TURN servers to bypass NAT):
1. Stream the camera over RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream and send it to a media server like GStreamer. EC2 can be used here.
3. Use a live555 proxy or the nginx RTMP module. EC2 can be used here.
4. Generate a unique, publicly accessible link for each device, but all running on the same port
5. Have the mobile app access the video stream via the device link
I am open to any video format as long as I am not using a third-party commercial solution like Wowza, Ant Media, Dataplicity, or AWS Kinesis. The most important constraint is that all my devices are headless and I can only access them via SSH. As such, I have excluded any option that involves manual setup or interacting with the desktop interface of the PUBLISHERS (Pis). I can create scripts to automate all of this.
The end goal is to have a public URL for each of the Raspberry Pi cams, all running on the same socket/port number, like this:
rtsp://cam1-frontdesk.mycompany.com:554/
rtsp://cam2-backoffice.mycompany.com:554/
rtsp://cam3-home.mycompany.com:554/
rtsp://cam4-club.mycompany.com:554/
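For the common first step of approaches 1, 2, 4 and 5 (raspivid piped into ffmpeg), a rough sketch of how it could be scripted from Node.js on a headless Pi is shown below; the ingest URL and camera settings are placeholders.

    // Rough sketch: pipe raspivid's raw H.264 output into ffmpeg and push it to an
    // RTMP ingest on the BROKER (placeholder URL). Assumes raspivid and ffmpeg are installed.
    import { spawn } from "child_process";

    const cmd =
        "raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 2000000 -o - | " +
        "ffmpeg -f h264 -framerate 25 -i - -c:v copy -an -f flv " +
        "rtmp://BROKER_PUBLIC_IP/live/cam1-frontdesk";

    const publisher = spawn("sh", ["-c", cmd], { stdio: "inherit" });
    publisher.on("exit", (code) => console.log(`publisher exited with code ${code}`));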
Basically, with raspivid/ffmpeg you have a simple IP camera, so any architecture applicable to that case would work for you. As an example, take a look at this architecture where you install Nimble Streamer on your AWS machine, process the stream there, and get a URL for playback (HLS or any other suitable protocol). That URL can be played in any hardware/software player of your choice and can be inserted into any web player as well.
So it's your Approach 3, with HLS instead of WebRTC.
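As a sketch of what that server-side step does (independent of the specific streamer product), repackaging an incoming RTSP feed into HLS could look roughly like this; the RTSP URL and output path are placeholders, and ffmpeg is assumed to be installed on the EC2 machine.

    // Pull the camera's RTSP feed and emit a rolling HLS playlist that any
    // web or mobile player can fetch over plain HTTP.
    import { spawn } from "child_process";

    const ffmpeg = spawn("ffmpeg", [
        "-rtsp_transport", "tcp",
        "-i", "rtsp://CAMERA_HOST:554/stream",   // placeholder source
        "-c", "copy",                            // repackage only, no transcoding
        "-f", "hls",
        "-hls_time", "2",                        // 2-second segments
        "-hls_list_size", "5",                   // short rolling playlist
        "-hls_flags", "delete_segments",
        "/var/www/html/cam1/index.m3u8",         // placeholder output served by a web server
    ], { stdio: "inherit" });

    ffmpeg.on("exit", (code) => console.log(`ffmpeg exited with code ${code}`));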
Which solution is appropriate depends mostly on whether you're viewing the video in a native application (e.g. VLC) and what you mean by "live" -- typically, "live streaming" uses HLS, which typically adds at least 5 and often closer to 30 seconds of latency as it downloads and plays sequences of short video files.
If you can tolerate the latency, HLS is the simplest solution.
If you want something real-time (< 0.300 seconds of latency) and are viewing the video via a native app, RTSP is the simplest solution.
If you would like something real-time and would like to view it in the web browser, Broadway.js, Media Source Extensions (MSE), and WebRTC are the three available solutions. Broadway.js is limited to H.264 Baseline, and only performs decently with GPU-accelerated canvas support -- not supported on all browsers. MSE is likewise not supported on all browsers. WebRTC has the best support, but is also the most complex of the three.
For real-time video from a Raspberry Pi that works in any browser, take a look at Alohacam.io (full disclosure: I am the author).

kurento media server not recording remote audio

I have extended the one-to-one call tutorial to add recording.
Original http://doc-kurento.readthedocs.io/en/stable/tutorials.html#webrtc-one-to-one-video-call
Extended https://github.com/gaikwad411/kurento-tutorial-node
Everything is fine except recording the remote audio.
When the caller and callee videos are recorded, the callee's voice is absent from the caller's recording and vice versa.
I have searched the Kurento docs and mailing lists but did not find a solution.
The workarounds I have in mind:
1. Use ffmpeg to combine the two videos
2. Use composite recording; I will also need to combine the remote audio stream.
My questions are:
1) Why is this happening? I can hear the remote audio in the ongoing call, but not in the recording; in the recording I can hear only my own voice.
2) Is there another solution apart from composite recording?
This is perfectly normal behaviour. When you connect a WebRtcEndpoint to a RecorderEndpoint, you only get the media that the endpoint is pushing into the pipeline. As the endpoint is one peer of a WebRTC connection between the browser and the media server, the media that the endpoint pushes into the pipeline is whatever it receives from the browser that has negotiated that WebRTC connection.
The only options that you have, as you have stated already, are post-processing or composite mixing.
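For the composite-mixing option, a rough sketch of the kurento-client (Node.js) side is below, assuming the promise-style API; names are illustrative and all WebRTC signaling/SDP negotiation is omitted.

    import kurento from "kurento-client";

    // Mix both participants into a single recording via a Composite hub:
    // each WebRtcEndpoint feeds a HubPort, and the hub's mixed output feeds
    // the recorder, so both voices end up in the same file.
    async function recordMixedCall(kurentoWsUri: string, recordingUri: string) {
        const client = await kurento(kurentoWsUri);
        const pipeline = await client.create("MediaPipeline");

        const callerEp = await pipeline.create("WebRtcEndpoint");
        const calleeEp = await pipeline.create("WebRtcEndpoint");

        const composite = await pipeline.create("Composite");
        const callerPort = await composite.createHubPort();
        const calleePort = await composite.createHubPort();
        const recorderPort = await composite.createHubPort();

        await callerEp.connect(callerPort);
        await calleeEp.connect(calleePort);

        const recorder = await pipeline.create("RecorderEndpoint", { uri: recordingUri });
        await recorderPort.connect(recorder);
        await recorder.record();

        return { pipeline, callerEp, calleeEp, recorder };
    }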

Real time live streaming with an iPhone for robotics

For a research project, I developed an app to control a wheeled mobile robot using the gyroscope and accelerometer of an iPhone. The robot has an IP address, and I control it by sending messages through a socket. Since the robot has to be controllable from anywhere in the world, I mounted a camera on top of it. I tried to stream the video from the camera using the HTTP Live Streaming protocol and VLC, but the latency is too high (15-30 seconds) to control the robot properly.
Now, VLC can stream over UDP or HTTP, but the question is: how do I decode the stream on the iPhone? How should I treat the data coming in on the socket in order to display it as continuous live video?