I need to develop functionality for 360 live streaming. I have an 'Insta360 Air' web camera for tests (max video resolution 2560x1280).
I'm trying to use Wowza Streaming Cloud with OBS or Wirecast as the broadcaster for live streaming from a PC.
The video has a lot of lag; I think that's because the broadcaster uses about 90% CPU, 90% RAM, and 80-90% GPU.
Does anyone know alternative ways to live stream 360 video?
Maybe WebRTC to RTMP?
I want to build a mobile application where only one participating user at a time broadcasts audio and the other participants can only listen. Among Ant Media, Jitsi, and Janus, which is the best to start a one-to-many live voice stream in mobile applications?
In my opinion (as a streaming engineer), before trying to choose a WebRTC SFU, decide why you need WebRTC technology at all.
For one-to-many streaming, HLS is better and cheaper, and it is easy to use with a third-party CDN.
Even if you need to publish the stream via WebRTC, you still don't have to force the end user to watch it as WebRTC, because WebRTC has limits on concurrent viewers, for instance:
- Wowza can handle about 750 concurrent endpoints
- Kurento about 200
- Jitsi about 500
- Janus (I tried it in 2017 and at that time it was not stable)
- Ant Media about 1,300
If I were you, I'd prefer Ant Media. Here is the reason; it supports:
- ingest WebRTC -> play WebRTC (ABR enabled)
- ingest WebRTC -> play HLS (without any transcoding; the stream is published with H.264)
- ingest RTMP -> play HLS
- ingest RTMP -> play WebRTC
Wowza can only do WebRTC ingest -> transcode (VP8/VP9 to H.264) -> HLS;
RTMP to WebRTC is not supported by Wowza. Also, Ant Media Enterprise Edition is cheaper than Wowza.
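To make the "ingest RTMP -> play HLS" path concrete, here is a minimal sketch, assuming a hypothetical media server at media.example.com with an application named LiveApp and a stream ID myStream (the actual ingest and HLS URL layout depends on the server you pick). It publishes an H.264/AAC source over RTMP so the server can repackage it to HLS without transcoding:

# Minimal sketch: publish a source over RTMP so the media server can repackage it to HLS.
# The host, application name, and stream ID below are placeholders, not real endpoints.
import subprocess

RTMP_INGEST = "rtmp://media.example.com/LiveApp/myStream"                      # hypothetical ingest URL
HLS_PLAYBACK = "http://media.example.com:5080/LiveApp/streams/myStream.m3u8"   # layout is server-specific

print("Viewers would open the HLS playlist at:", HLS_PLAYBACK)

subprocess.run([
    "ffmpeg",
    "-re",                      # pace the input like a live source
    "-i", "input.mp4",          # any local file or capture source
    "-c:v", "libx264",          # H.264 so the server can repackage without transcoding
    "-preset", "veryfast",
    "-c:a", "aac",
    "-f", "flv",                # RTMP carries an FLV container
    RTMP_INGEST,
])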
I have multiple Raspberry Pi devices with the native camera in my home and office (PUBLISHERS). The publishers (Pis) are on a local network behind a firewall/router and connected to the internet.
I have an EC2 web server (BROKER). It is publicly accessible over a public IP address.
I have an Android app on my phone with internet connectivity through a 4G network (SUBSCRIBER/CONSUMER/CLIENT).
I am trying to view the live feed of each of the Raspberry Pi cameras in my Android app. The problem is more conceptual than technical: I am unable to decide on the right approach and the most efficient way to achieve this in terms of cost and latency.
Approaches I have figured out based on my research:
Approach 1:
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and saves it to AWS S3
3. Have a middleware that transcodes the RTSP stream and saves it in a format accessible to the mobile app via an S3 URL
Approach 2:
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and pushes it to a remote frame-gathering (ImageZMQ) server. EC2 can be used here.
3. Have a middleware that transcodes the frames to an RTSP stream and saves it in a format on S3 that is accessible to the mobile app via a publicly accessible S3 URL
Approach 3:
1. Stream the camera in WebRTC format by launching a web browser.
2. Send the stream to a media server like Kurento. EC2 can be used here.
3. Generate a unique, publicly accessible WebRTC URL for each stream
4. Access the WebRTC video via the mobile app
Approach 4:
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream via the Amazon Kinesis client installed on the devices.
3. Publish the Kinesis stream to the AWS cloud
4. Have a Lambda function transcode it and store it in S3
5. Have the mobile app access the video stream via a publicly accessible S3 URL
Approach 5: (fairly complex, involving STUN/TURN servers to bypass NAT)
1. Stream the camera as RTSP / RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream and send it to a media server like GStreamer. EC2 can be used here.
3. Use a live555 proxy or the nginx RTMP module. EC2 can be used here.
4. Generate a unique publicly accessible link for each device, but all running on the same port
5. Have the mobile app access the video stream via the device link
I am open to any video format as long as I am not using any third-party commercial solution like Wowza, Ant Media, Dataplicity, or AWS Kinesis. The most important constraint is that all my devices are headless and I can only access them via SSH. As such, I excluded any option that involves manual setup or interacting with the desktop interface of the PUBLISHERS (Pis). I can create scripts to automate all of this.
The end goal is to have public URLs for each of the Raspberry Pi cams, but all running on the same socket/port number, like this (see the sketch after this list):
rtsp://cam1-frontdesk.mycompany.com:554/
rtsp://cam2-backoffice.mycompany.com:554/
rtsp://cam3-home.mycompany.com:554/
rtsp://cam4-club.mycompany.com:554/
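Sketching the common first step of the approaches above and the end goal: the sketch below, assuming an RTSP server that accepts published streams (for example MediaMTX, formerly rtsp-simple-server) is listening on the EC2 broker on port 554, pipes the Pi camera's H.264 output from raspivid into ffmpeg and publishes it to a per-camera path without transcoding on the Pi. The host name and path are placeholders.

# Sketch: pipe raw H.264 from raspivid into ffmpeg and publish it to the broker over RTSP.
# Assumes an RTSP server that accepts published streams (e.g. MediaMTX) on port 554;
# the URL below is a placeholder for one camera.
import subprocess

BROKER_URL = "rtsp://cam1-frontdesk.mycompany.com:554/live"   # hypothetical per-camera path

raspivid = subprocess.Popen(
    ["raspivid", "-n", "-t", "0",             # no preview, run until stopped
     "-w", "1280", "-h", "720", "-fps", "25",
     "-b", "2000000",                         # 2 Mbit/s H.264
     "-o", "-"],                              # write the elementary stream to stdout
    stdout=subprocess.PIPE,
)

ffmpeg = subprocess.Popen(
    ["ffmpeg",
     "-f", "h264", "-framerate", "25", "-i", "-",   # read raw H.264 from the pipe
     "-c", "copy",                                  # no transcoding on the Pi
     "-f", "rtsp", "-rtsp_transport", "tcp",        # push to the broker over RTSP/TCP
     BROKER_URL],
    stdin=raspivid.stdout,
)

ffmpeg.wait()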
Basically, with raspivid/ffmpeg you have a simple IP camera, so any architecture applicable to that case would work for you. As an example, take a look at this architecture, where you install Nimble Streamer on your AWS machine, process the stream there, and get a URL for playback (HLS or any other suitable protocol). That URL can be played in any hardware or software player of your choice and can be embedded in any web player as well.
So it's your Approach 3, but with HLS instead of WebRTC.
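As a quick way to see what such a playback URL contains, here is a minimal sketch, with a placeholder playlist URL, that fetches the HLS media playlist and sums the advertised segment durations, which gives a rough idea of how much video the player buffers before it starts:

# Sketch: fetch an HLS media playlist and sum the segment durations listed in it.
# The URL is a placeholder; use the playback URL your streaming server gives you.
# Assumes a media playlist (segment list), not a master/variant playlist.
import urllib.request

PLAYLIST_URL = "https://streaming.example.com/live/cam1/playlist.m3u8"  # hypothetical

with urllib.request.urlopen(PLAYLIST_URL) as resp:
    playlist = resp.read().decode("utf-8")

durations = [
    float(line.split(":", 1)[1].split(",")[0])   # "#EXTINF:4.000," -> 4.0
    for line in playlist.splitlines()
    if line.startswith("#EXTINF:")
]

print(f"{len(durations)} segments listed, ~{sum(durations):.1f} s of video in the live window")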
Which solution is appropriate depends mostly on whether you're viewing the video in a native application (e.g. VLC) and on what you mean by "live" -- typically, "live streaming" uses HLS, which adds at least 5 and often closer to 30 seconds of latency as it downloads and plays sequences of short video files.
If you can tolerate the latency, HLS is the simplest solution.
If you want something real-time (< 0.300 seconds of latency) and are viewing the video via a native app, RTSP is the simplest solution.
If you would like something real-time and would like to view it in the web browser, Broadway.js, Media Source Extensions (MSE), and WebRTC are the three available solutions. Broadway.js is limited to H.264 Baseline, and only performs decently with GPU-accelerated canvas support -- not supported on all browsers. MSE is likewise not supported on all browsers. WebRTC has the best support, but is also the most complex of the three.
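As an illustration of the RTSP route, here is a minimal sketch that opens a camera feed in ffplay with its buffering turned down for near-real-time viewing in a native player; the URL is a placeholder, and the latency you actually see still depends on the encoder and the network:

# Sketch: open an RTSP feed in ffplay with buffering minimised for low-latency viewing.
# The URL is a placeholder for whatever the camera or server exposes.
import subprocess

RTSP_URL = "rtsp://cam1-frontdesk.mycompany.com:554/live"   # hypothetical

subprocess.run([
    "ffplay",
    "-rtsp_transport", "tcp",   # TCP interleaving is friendlier to firewalls/NAT
    "-fflags", "nobuffer",      # don't build up an input buffer
    "-flags", "low_delay",      # prefer low-delay decoding
    "-framedrop",               # drop late frames rather than fall behind
    RTSP_URL,
])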
For real-time video from a Raspberry Pi that works in any browser, take a look at Alohacam.io (full disclosure: I am the author).
I'm a beginner developer,
and I don't speak English very well, sorry.
I want to broadcast live video from the iPhone camera, like an iPhone video call.
In this case, which is the better choice: HLS or DSS?
And what is the functional difference between HLS and DSS?
Can HLS broadcast live video from one iPhone camera to another iPhone?
Darwin Streaming Server is for RTSP streaming. HLS is a streaming technology based on using an HTTP server to host the content.
iPhone-to-iPhone video isn't well served by either technology. It's possible to use an iPhone camera to capture video, upload it to a server, package it for HLS, and serve it to the client viewers, but this has very high latency (around 10-30 seconds), so it's likely not suitable for you.
If you want 1-to-1 communication, you're probably better off using a real-time system like RTP, which is what's used by FaceTime and other video-calling programs.
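To give a sense of what the RTP route looks like at the transport level, here is a minimal sketch that sends an H.264 stream over plain RTP with ffmpeg and writes out the SDP description a receiver needs to decode it; the destination address is a placeholder, and a real calling app such as FaceTime adds signalling, NAT traversal, and congestion control on top of this:

# Sketch: send H.264 over plain RTP and write the SDP file a receiver uses to decode it.
# The destination IP/port are placeholders (documentation address range).
import subprocess

DEST = "rtp://203.0.113.10:5004"   # hypothetical receiver

subprocess.run([
    "ffmpeg",
    "-re", "-i", "input.mp4",      # any source; a live camera capture would replace this
    "-an",                         # audio would need its own RTP session
    "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
    "-f", "rtp",
    "-sdp_file", "stream.sdp",     # hand stream.sdp to the receiver (e.g. ffplay stream.sdp)
    DEST,
])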
I am using an MSP430F5418 controller for an embedded device. I want to live broadcast some content from the device to a Red5 server (the Red5 server is on another computer, not on the embedded device). I have attached a camera and a microphone to my device. Can anybody share some thoughts on how to do that?
Red5 is written in Java, and I am not aware of a Java implementation that runs on the MSP430 family of processors.
The MSP430 running at its top speed of 25 MHz simply does not have the processing power to handle video and audio processing.
The total RAM available on this processor is only 16 KB -- not really enough for this type of application.
....
For a research project, I developed an app to control a wheeled mobile robot using the gyro and the accelerometer of an iPhone. The robot has an IP address, and I control it by sending messages through a socket. Since the robot has to be controlled from anywhere in the world, I mounted a camera on top of it. I tried to stream the video from the camera using the HTTP Live Streaming protocol and VLC, but the latency is too high (15-30 s) to control the robot properly.
Now, VLC can also stream over UDP or HTTP, but the question is: how do I decode that stream on the iPhone? How should I treat the data coming in on the socket in order to display it as a continuous live video?
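The incoming data has to be demultiplexed and decoded before it can be shown as video; on iOS that means feeding it to a decoder (for example via VideoToolbox, or a library built on FFmpeg). As a desktop-side illustration of the idea, here is a minimal sketch, assuming VLC is sending an MPEG-TS stream over UDP to port 1234, that lets OpenCV's FFmpeg backend do the demuxing and decoding and hands back displayable frames:

# Sketch: receive the MPEG-TS stream VLC sends over UDP, let the FFmpeg backend
# demux/decode it, and display the frames. Port 1234 is a placeholder and must
# match VLC's UDP output settings.
import cv2

cap = cv2.VideoCapture("udp://0.0.0.0:1234")   # listen on all interfaces, UDP port 1234

while True:
    ok, frame = cap.read()                      # one decoded frame per call
    if not ok:
        break                                   # stream ended or packets stopped arriving
    cv2.imshow("robot camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()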