How to get the live video feed from my DJI Spark to my PC? (RTMP or RTP)

I want to get the live stream from my DJI Spark to my PC, because I want to run a deep learning model to do object detection in real time. Is there any way to get the live stream using RTMP or RTP servers? I found that by using the Tellopy module in Python you can get the live feed from a DJI Tello.

If you mean built into the SDK, then the answer is no, but you could decode the video stream yourself and package it up accordingly based on the needs of the server you wish to communicate with.
A few have done this work; the GO app has done it in order to support the various streaming sites, so it's known to be possible. It might take some work though.
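If you go that route, here is a minimal sketch of the repackaging step, assuming you can already obtain the drone's raw H.264 bitstream from the SDK (the capture side is SDK-specific and omitted) and that an RTMP server is listening at the placeholder URL:

```typescript
import { spawn } from "child_process";

// Hypothetical RTMP ingest URL -- replace with your server's endpoint.
const RTMP_URL = "rtmp://localhost/live/spark";

// Rewrap a raw H.264 elementary stream (piped in on stdin) as FLV/RTMP.
// "-c copy" avoids re-encoding; ffmpeg only remuxes the bitstream.
const ffmpeg = spawn("ffmpeg", [
  "-f", "h264",   // input is a raw H.264 elementary stream
  "-i", "pipe:0", // read it from stdin
  "-c", "copy",   // no transcoding, just remux
  "-f", "flv",    // RTMP requires an FLV container
  RTMP_URL,
]);

ffmpeg.stderr.on("data", (d) => process.stderr.write(d));

// Wherever your drone frames come from, write them into ffmpeg:
// droneStream.on("data", (chunk) => ffmpeg.stdin.write(chunk));
```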
Best of luck!

Related

Limited device logins, like Netflix

I am trying to build a video streaming platform and I need to implement a limited-devices login feature, just like Netflix. I have seen some people using node-device-detector to get the device type from the user agent, but I don't think this is a good solution since the user agent can be faked. Any ideas on how to implement this effectively?
Found a solution to my own question: I can use fingerprint.js to identify devices and store the fingerprints in MongoDB. I also found an open-source alternative to fingerprint.js called broprint.js.
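For the server side, here is a rough sketch of the enforcement logic, assuming the client sends the fingerprint computed by fingerprint.js/broprint.js at login; the database, collection, and field names are made up for illustration:

```typescript
import { MongoClient } from "mongodb";

const MAX_DEVICES = 4; // Netflix-style cap; pick your own limit.

// Called at login with the fingerprint the client computed.
// Returns true if the device is allowed, false if the account
// is already at its device cap.
async function registerDevice(userId: string, fingerprint: string): Promise<boolean> {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  try {
    const devices = client.db("app").collection("devices");

    // Already-known device: allow and bump last-seen.
    const existing = await devices.findOne({ userId, fingerprint });
    if (existing) {
      await devices.updateOne(
        { userId, fingerprint },
        { $set: { lastSeen: new Date() } }
      );
      return true;
    }

    // New device: allow only if the user is under the cap.
    const count = await devices.countDocuments({ userId });
    if (count >= MAX_DEVICES) return false;

    await devices.insertOne({ userId, fingerprint, lastSeen: new Date() });
    return true;
  } finally {
    await client.close();
  }
}
```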

Send real-time video via wifi

I would like to make a personal application to be installed on two iPhones: the first to be used as a webcam that transmits to the second via wifi.
Having no experience with Xcode, I am looking for a code example to connect two devices via wifi and transmit a real-time video stream.
Unfortunately, the documentation and examples I found are deprecated, partial, or inconsistent.
Where can I find some code examples to help me solve my problem, preferably in Objective-C (but also in Swift)?
Thank you

How can I use Adaptive streaming for VODs in Ant Media Server?

I'm using Ant Media Server for streaming. My use case requires me to record the Live Streams as VODs so the users can access the content later as well.
Like the live streams, I want to apply adaptive settings to the VODs as well, so that users can get the resolution suited to their network.
I can't find any built-in solution for this yet. Can you please suggest how I can do this?
I'm using S3 to store the recordings.
Thanks.
Thank you for the question. As far as I understand it, the Live Streams are recorded as VoD files.
I think the most efficient way is to do this through HLS. That way the VoD files are recorded as HLS and multiple bitrates are available; there is no need to transcode again, and the recording can be played directly. Let me explain this solution step by step.
Set the HLS playlist type to event and settings.deleteHLSFilesOnEnded to false. Edit red5-web.properties for the application and set the following settings:
settings.hlsPlayListType=event
settings.deleteHLSFilesOnEnded=false
Restart the server
sudo service antmedia restart
Add adaptive bitrates on the web panel.
Start live streaming and let Ant Media Server create the HLS (m3u8 and ts) files for each bitrate.
Stop Live Streaming
Then you can watch the stream by requesting the master m3u8 file, which is {STREAM_ID}_adaptive.m3u8. It can even be played directly by the embedded player, even when the stream is no longer live.
For more information, take a look at this wiki about HLS Playing
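To illustrate the playback step, here is a minimal hls.js sketch; the /streams/ path follows Ant Media's usual HLS layout, but the host, app name, and stream id are placeholders to verify against your own setup:

```typescript
import Hls from "hls.js";

// Master playlist written by Ant Media after the steps above.
// Host, app name, and stream id are placeholders.
const src = "http://your-server:5080/LiveApp/streams/myStream_adaptive.m3u8";

const video = document.querySelector<HTMLVideoElement>("#player")!;

if (Hls.isSupported()) {
  const hls = new Hls(); // hls.js handles the bitrate switching
  hls.loadSource(src);
  hls.attachMedia(video);
} else {
  video.src = src;       // Safari plays HLS natively
}
```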
Please let me know if this approach helps you.
antmedia.io

WebRTC - P2P - Server Side Video Recording

I’m planning to build a video conference app. (NodeJS + React Native)
Requirements
One to One Video Conference ( 2 Speakers )
Video / Audio Recording of both the participants.
Store the recorded stream in an S3 bucket and watch the videos directly from it.
Live Streaming (Future Goals, but not at the moment)
Strategies tried so far:
Tried Twilio and Agora, but it wasn’t feasible due to pricing.
Mediasoup (SFU, inspired by dogehouse) was another option, but it's relatively new and development would take much longer.
So I have come to the conclusion to start with peer-to-peer WebRTC in React Native and record videos on a virtual server by connecting as a ghost participant (2 speakers + 1 ghost participant).
I need some strategies to implement WebRTC recording at the server. (Recordings are crucial, so I don't want to depend on the client.)
Should I go with Puppeteer on the server, joining as a ghost participant to record whenever a room is created? If yes, is it possible to run multiple instances of Puppeteer? At times multiple room recordings might happen, so it needs to record concurrently, and I need to confirm the scalability.
Look into Kurento / Jitsi
Any other options?
Great, if you could help me out! Cheers!!
As a developer evangelist for Agora, I want to say thanks for considering Agora. With regard to the pricing: while Agora offers a generous free tier (10k min/month), it is meant for development usage, and once your project is deployed into production, costs will scale similarly to hosting infrastructure (like AWS/GCP).
As with any project, to cover costs you will need to have some monetization strategy or have some free credits to grow the business. Similar to other platforms Agora has a start-ups program for qualified startups.
All that being said, to answer your question about the approach: I can tell you that the ghost-client approach should work; Agora's cloud recording uses similar logic. With regard to scalability, you can run multiple Puppeteer instances.
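As a sketch of that concurrency (not Agora-specific): one headless browser per room, launched in parallel. The room URL and query flag are hypothetical, and tab capture via page.screencast() requires a recent Puppeteer release (v21+); older setups usually record a virtual display with FFmpeg instead.

```typescript
import puppeteer from "puppeteer";

// Join one room as a "ghost" participant and record it.
// The room URL shape is hypothetical -- adapt it to your app's routing.
async function recordRoom(roomId: string): Promise<void> {
  const browser = await puppeteer.launch({
    headless: true,
    args: [
      "--use-fake-ui-for-media-capture",          // auto-accept mic/cam prompts
      "--autoplay-policy=no-user-gesture-required",
    ],
  });
  const page = await browser.newPage();
  await page.goto(`https://your-app.example/room/${roomId}?ghost=1`);

  // page.screencast() captures the tab to WebM in recent Puppeteer releases.
  const recorder = await page.screencast({ path: `${roomId}.webm` });
  await new Promise((r) => setTimeout(r, 60_000)); // record for 60s (demo)
  await recorder.stop();

  await browser.close();
}

// Concurrent room recordings are just concurrent browser instances:
Promise.all(["room-a", "room-b"].map(recordRoom)).catch(console.error);
```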
You can take a look at the HTML5 videocall web application on GitHub for inspiration.
Since it uses Wowza SE as a relay for scaling and reliability, streams can be recorded server-side with FFmpeg. FFmpeg can ingest one or multiple streams, mix/transcode them, and output to a local or external destination.
More advanced setups, like PaidVideochat - Turnkey Videochat Site on WordPress, support mixing multiple streams from conferences/calls into the same video file.
Using a relay streaming server is also great for scaling to multiple viewers.
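To make the FFmpeg part concrete, here is a hedged sketch that pulls two streams from a relay, mixes them side by side, and writes a single file; the rtmp:// URLs are placeholders:

```typescript
import { spawn } from "child_process";

// Pull both participants' streams from the relay, stack them side by
// side, mix the audio, and write one recording. hstack assumes both
// inputs share the same height (scale first if they don't).
const ffmpeg = spawn("ffmpeg", [
  "-i", "rtmp://relay.example/live/callerA",
  "-i", "rtmp://relay.example/live/callerB",
  "-filter_complex",
  "[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]",
  "-map", "[v]", "-map", "[a]",
  "recording.mp4", // upload to S3 afterwards, or pipe to a segmenter
]);

ffmpeg.stderr.on("data", (d) => process.stderr.write(d));
ffmpeg.on("close", (code) => console.log(`ffmpeg exited with ${code}`));
```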
The Galene SFU has native support for server-side recording (disclosure, I'm the main author). However, it is a fairly young project, which might be a problem for you.

SMIL adaptive streaming in Videojs

What is required to use a SMIL file for adaptive streaming in a video.js player? I have created the SMIL file in my Wowza application; it creates my four separate streams and makes them available. However, I cannot get my webpage, which uses video.js, to play the SMIL file correctly. Hints on that coding, or where to find the correct documentation, would be greatly appreciated.
There aren't many implementations of SMIL players. I'm sure I've seen Wowza URLs that suggest it will output the SMIL as other formats, something like smil:whatever.smil/playlist.m3u8. That's HLS, which can be played natively on mobile and Safari, and with videojs-contrib-hls elsewhere.
I know the question is old, but I've been struggling with this recently, so I want to share my experience in case anyone is interested. My scenario is very similar: want to deliver adaptive bitrate streaming from Wowza to clients using videojs.
There is a master link that explains how to set up and run the Wowza Transcoder for live streaming, and how to set up your adaptive bitrate streams using a SMIL file. Following the video there you can get a stream that uses ABS, but the SMIL file is tied to the stream name, so it is not a solution if you have streams that come to Wowza from another media-server origin and need to be transcoded before being served to the clients. The article mentions a few key things (like the Stream Name Groups), but somehow things don't seem entirely clear, at least to me. So here is some clarification of what I understood from all the articles I read, and of what I did to achieve ABS:
You can achieve ABS in Wowza either with SMIL files or with Stream Name Groups (NGRP). An NGRP refers to a block of streams, defined in the Transcoder template, that can be played back using multi-bitrate streaming dynamically (this is what I used). SMIL files, in contrast, are used to create a "static" list of streams for multi-bitrate VOD streaming. If you are using Wowza origin-edge delivery, you'll need the .smil file, because NGRPs do not get forwarded to the edge. (Source for all this information: here.)
In case you need the SMIL file, you probably need to generate a new one for every stream, and you probably want to do that in an automated way, so the best approach would be through an HTTP request (the link above explains how to achieve this).
In case you can live with NGRP, things are a bit easier:
You need to enable Wowza Transcoder (this is pretty easy and steps are in the video I mention above).
You should create your own Transcoder Template with the different stream presets you want to deliver; as an example, you can check the default ones that are already there. The more presets you add, the more work Wowza needs to do whenever a stream comes in, since it will generate a new rendition for every preset you have defined.
Now is when we generate the NGRPs. In your Transcoder Template you can define as many NGRPs as you want (to clarify: these are like groups of streams that you set as the source in the client's video player; each NGRP contains the streams the player can use when doing adaptive bitrate streaming). For instance, the default template ships with groups along the lines of "_all" and "_mobile".
If you play the ngrp "_mobile" in the client's video player, the ABS algorithm in the player will be able to adapt itself to play either the 240p or the 160p stream based on the client's capabilities.
So imagine you have these two NGRPs. In order to play them in video.js, you will need to set the source to:
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_all/playlist.m3u8
or
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_mobile/playlist.m3u8
... based on how many options you want the client player to have for ABS. (For instance, if your targets are old mobile devices, you probably just want to offer a couple of low-bitrate streams.)
(This applies if you're delivering an HLS stream. For other formats the path changes; for instance, for a DASH stream you would have "/manifest.mpd" instead of "/playlist.m3u8".)
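For completeness, here is roughly how such a URL plugs into video.js (HLS playback is built into video.js 7+; older versions need videojs-contrib-hls); the host and names are placeholders as above:

```typescript
import videojs from "video.js";

// Point video.js at the NGRP playlist; the player's ABR logic then
// switches among the renditions Wowza generates for that group.
const player = videojs("player");
player.src({
  src: "http://wowza.example:1935/myApp/ngrp:myStream_all/playlist.m3u8",
  type: "application/x-mpegURL", // HLS
});
```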
That is all. There is also a very helpful page in the video.js documentation explaining how it does the bitrate switching: here.
I hope it helps someone! At least clarifying things! :)