Recording an image through an RTMP server when the host has stopped their camera - rtmp

Context
I have a live-stream application implemented with AgoraWebSDK NG. The application streams to an RTMP server. It uses the setLiveTranscoding method of the Agora client to set the configuration for the live stream, including transcodingUsers, which sets up the layout of the users in the transcoded output. During the live stream, the video is recorded through the RTMP server so it can be accessed later as a video asset.
Problem
If the host stops their camera, the RTMP server records the video with the last frame of the host's feed frozen at the moment the camera was stopped.
So I would like to know what configuration is needed so that the recording made through the RTMP server shows an image (a user avatar) instead, when the host has stopped their camera.
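For reference, this is roughly the kind of call the question describes; a minimal sketch assuming the agora-rtc-sdk-ng 4.x API, with a placeholder uid, layout sizes, and RTMP/avatar URLs. Whether a backgroundImage (or a per-user image) is what actually replaces a stopped camera in the recording is exactly what is being asked.

```typescript
import AgoraRTC from "agora-rtc-sdk-ng";

// Minimal sketch only: field names follow the 4.x LiveStreamingTranscodingConfig
// typings; the uid, sizes, and URLs below are placeholders, not values from the question.
const client = AgoraRTC.createClient({ mode: "live", codec: "h264" });

const transcodingConfig = {
  width: 1280,
  height: 720,
  videoBitrate: 1130,
  videoFrameRate: 24,
  // Layout of the host inside the transcoded picture (transcodingUsers).
  transcodingUsers: [
    { uid: 12345, x: 0, y: 0, width: 1280, height: 720, zOrder: 1, alpha: 1 },
  ],
  // An image layer behind the user regions; whether this (or a per-user image)
  // is what shows up in the recording once the host stops the camera is the open question.
  backgroundImage: {
    url: "https://example.com/host-avatar.png", // hypothetical avatar asset
    x: 0, y: 0, width: 1280, height: 720,
  },
};

async function pushToCdn(rtmpUrl: string): Promise<void> {
  await client.setLiveTranscoding(transcodingConfig);
  await client.startLiveStreaming(rtmpUrl, true); // true = transcoding enabled
}
```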

Related

How to stream video from an Inskam endoscope to a PC via Wi-Fi?

I have an Inskam Wi-Fi endoscope.
After launch, it starts a Wi-Fi network. To see the video stream, you have to connect to that network from a phone with the Inskam application for Android or iOS installed.
But I need to capture the video stream on my PC.
I think the camera runs a streaming application, so my idea is to access the streaming resource directly.
I've tried to access it from my PC by connecting to the camera's Wi-Fi network and opening:
http://192.168.29.102:8080/
http://192.168.29.102:8080/?action=stream
http://192.168.1.1:8080/
http://192.168.1.1:8080/?action=stream
The connection fails.
How can I capture the camera's video stream on a PC?
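The ?action=stream paths tried above look like mjpg-streamer-style endpoints (that is an assumption, not something the question confirms); a quick probe sketch, assuming Node 18+ with global fetch, just to see whether anything answers on those URLs and with which content type:

```typescript
// Candidate URLs from the question; whether the endoscope serves an
// MJPEG/HTTP stream at all is an assumption.
const candidates = [
  "http://192.168.29.102:8080/?action=stream",
  "http://192.168.1.1:8080/?action=stream",
];

async function probe(url: string): Promise<void> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    // An MJPEG stream typically answers with multipart/x-mixed-replace.
    console.log(url, res.status, res.headers.get("content-type"));
  } catch (err) {
    console.log(url, "unreachable:", (err as Error).message);
  }
}

Promise.all(candidates.map(probe)).catch(console.error);
```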
Do you want to use your PC to play the stream from your Wi-Fi endoscope, which only provides an iOS/Android app to watch the stream?
Maybe there is another solution: use an iOS or Android device to play the stream, then use OBS to capture the screen of your mobile phone (see the screen-capture tutorials for iOS or Android); OBS can then stream it anywhere.
It works like this:
Wi-Fi endoscope ---Wi-Fi--> iOS/Android ---USB--> OBS (PC)
OBS runs on your PC, so you can record the stream there.
If you want to broadcast the stream to the internet or to other mobile phones, you can use OBS to publish it to a media server (such as SRS) or a live-streaming platform, like below:
OBS (PC) ---RTMP--> SRS/YouTube/Twitch ---> Players (PC/Mobile)
It enables multiple users to watch the stream.
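If, as in the main question at the top, the goal is also to keep a recording, one option is to pull the published stream back from SRS and remux it to a file; a minimal sketch driven from Node, assuming ffmpeg is installed and SRS is serving its default live/livestream path (both assumptions):

```typescript
import { spawn } from "node:child_process";

// Pull the stream OBS published to SRS and write it to an MP4 without re-encoding.
// The RTMP URL and output name are placeholders, not values from the question.
const ffmpeg = spawn("ffmpeg", [
  "-i", "rtmp://localhost/live/livestream",
  "-c", "copy",          // remux only; keep the original H.264/AAC
  "recording.mp4",
]);

ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk));
ffmpeg.on("exit", (code) => console.log("ffmpeg exited with code", code));
```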

Disabling the local stream on the remote side after the call is connected via WebRTC in Android

I'm trying to isolate video and audio. I am able to control the video feed from the caller side, but I'm unable to turn off the local video stream on the remote side since it's an audio call. Any suggestions on how to isolate the video and audio feeds? It doesn't work just by retrieving the streams with getStream and removing them.

Stream live video from Raspberry Pi Camera to Android App

I have multiple Raspberry Pi devices with the native camera module in my home and office (PUBLISHERS). The publishers (Pis) are on a local network behind a firewall/router and connected to the internet.
I have an EC2 web server (BROKER). It is publicly accessible over a public IP address.
I have an Android app on my phone. It has internet connectivity through a 4G network. (SUBSCRIBER/CONSUMER/CLIENT)
I am trying to view the live feed of each of the Raspberry Pi cameras in my Android app. The problem is more conceptual than technical: I am unable to decide on the right approach and the most efficient way to achieve this in terms of cost and latency.
Here are the approaches I have figured out based on my research (a sketch of the capture/publish step most of them share follows the list):
Approach 1:
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and saves it to AWS S3
3. Have middleware that transcodes the RTSP stream and saves it in a format accessible to the mobile app via an S3 URL
Approach 2:
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and pushes it to a remote frame-gathering (ImageZMQ) server. EC2 can be used here.
3. Have middleware that transcodes the frames to an RTSP stream and saves it in a format on S3 that is accessible to the mobile app via a publicly accessible S3 URL
Approach 3:
1. Stream the camera in WebRTC format by launching a web browser.
2. Send the stream to a media server like Kurento. EC2 can be used here.
3. Generate a unique, publicly accessible WebRTC URL for each stream
4. Access the WebRTC video via the mobile app
Approach 4:
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream via Amazon Kinesis client installed on the devices.
3. Publish the Kinesis stream to AWS Cloud
4. Have a Lambda function transcode it and store it in S3
5. Have the mobile app access the video stream via a publicly accessible S3 URL
Approach 5 (fairly complex, involving STUN/TURN servers to bypass NAT):
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream and send it to a media server like GStreamer. EC2 can be used here.
3. Use a live555 proxy or the nginx RTMP module. EC2 can be used here.
4. Generate a unique, publicly accessible link for each device, all running on the same port
5. Have the mobile app access the video stream via the device link
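As referenced above, here is a minimal sketch of the capture/publish step most approaches share, driven from Node: raspivid writes H.264 to stdout and ffmpeg remuxes it to RTMP. The broker URL, resolution, and frame rate are placeholders, and the exact flags are an assumption rather than a tested recipe.

```typescript
import { spawn } from "node:child_process";

// Capture H.264 from the Pi camera with raspivid and push it to an RTMP
// endpoint with ffmpeg, without re-encoding on the Pi.
const raspivid = spawn("raspivid", [
  "-o", "-",        // write the H.264 elementary stream to stdout
  "-t", "0",        // run until stopped
  "-w", "1280", "-h", "720", "-fps", "25",
]);

const ffmpeg = spawn("ffmpeg", [
  "-framerate", "25",
  "-i", "-",        // read raspivid's output from stdin
  "-c:v", "copy",   // the Pi already produced H.264; just remux
  "-an",            // the camera module has no audio
  "-f", "flv",
  "rtmp://broker.example.com/live/cam1", // hypothetical EC2 broker URL
]);

raspivid.stdout.pipe(ffmpeg.stdin);
ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk));
```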
I am open to any video format as long as I am not using a third-party commercial solution like Wowza, Ant Media, Dataplicity, or AWS Kinesis. The most important constraint is that all my devices are headless and I can only access them via SSH, so I have excluded any option that involves manual setup or interacting with the desktop interface of the PUBLISHERS (Pis). I can create scripts to automate all of this.
The end goal is to have a public URL for each of the Raspberry Pi cams, all running on the same socket/port number, like this:
rtsp://cam1-frontdesk.mycompany.com:554/
rtsp://cam2-backoffice.mycompany.com:554/
rtsp://cam3-home.mycompany.com:554/
rtsp://cam4-club.mycompany.com:554/
Basically, with raspivid/ffmpeg you have a simple IP camera, so any architecture applicable to that case would work for you. As an example, take a look at this architecture, where you install Nimble Streamer on your AWS machine, process the stream there, and get a URL for playback (HLS or any other suitable protocol). That URL can be played in any hardware or software player of your choice and can also be embedded in any web player.
So it's your Approach 3, but with HLS instead of WebRTC.
Which solution is appropriate depends mostly on whether you're viewing the video in a native application (e.g. VLC) and on what you mean by "live" -- typically, "live streaming" uses HLS, which adds at least 5 and often closer to 30 seconds of latency as it downloads and plays sequences of short video files.
If you can tolerate the latency, HLS is the simplest solution.
If you want something real-time (< 0.300 seconds of latency) and are viewing the video via a native app, RTSP is the simplest solution.
If you would like something real-time and would like to view it in the web browser, Broadway.js, Media Source Extensions (MSE), and WebRTC are the three available solutions. Broadway.js is limited to H.264 Baseline, and only performs decently with GPU-accelerated canvas support -- not supported on all browsers. MSE is likewise not supported on all browsers. WebRTC has the best support, but is also the most complex of the three.
For real-time video from a Raspberry Pi that works in any browser, take a look at Alohacam.io (full disclosure: I am the author).
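For the HLS option mentioned above, browser playback is straightforward; a minimal sketch using hls.js, where the playlist URL and the #player element are placeholders (Safari can also play HLS natively without the library):

```typescript
import Hls from "hls.js";

// Hypothetical playlist URL produced by whatever server-side packager you choose.
const src = "https://broker.example.com/live/cam1/index.m3u8";
const video = document.querySelector<HTMLVideoElement>("#player")!;

if (Hls.isSupported()) {
  // hls.js plays HLS via Media Source Extensions.
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video);
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari and iOS browsers support HLS natively.
  video.src = src;
}
```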

Videowhisper - video not pushed to media server

I have installed Red5 on the media server and configured the VideoWhisper RTMP application there, which is accessed from the hosting demo. The application demo shows/enables the webcam, but when I click on record the application only takes a snapshot and doesn't push the video to the media server.
Has anybody experienced this before? Here is my screenshot of the RTMP URL in the VideoWhisper test tool:

Real time live streaming with an iPhone for robotics

For research purposes, I developed an app to control a wheeled mobile robot using the gyro and the accelerometer of an iPhone. The robot has an IP address, and I control it by sending messages through a socket. Since the robot has to be controllable from anywhere in the world, I mounted a camera on top of it. I tried to stream the video from the camera using the HTTP Live Streaming protocol and VLC, but the latency is too high (15-30 s) to control it properly.
Now, VLC can stream over UDP or HTTP, but the question is how I decode the stream on the iPhone. How should I handle the data coming in on the socket in order to display it as continuous live video?