How can I implement live camera streaming using the Janus WebRTC server?

I am totally new to this field and want to implement live camera streaming over WebRTC using the Janus server. But I am directionless: I don't know what to do or where to start. Any help would be highly appreciated.
I tried to explore the official Janus documentation but didn't get much out of it.
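A common starting point is the Janus Streaming plugin: you define a "mountpoint" that listens for RTP, push your camera's encoded video into it, and then any browser can watch it via the plugin. Below is a minimal sketch of such a mountpoint; the ports, payload type, and name are illustrative, not taken from your setup.

```
# Illustrative RTP mountpoint for janus.plugin.streaming.jcfg
camera-stream: {
    type = "rtp"
    id = 1
    description = "Live camera feed"
    audio = false
    video = true
    videoport = 8004
    videopt = 96
    videortpmap = "H264/90000"
    videofmtp = "profile-level-id=42e01f;packetization-mode=1"
}
```

You would then feed the mountpoint from the machine with the camera, for example with `ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -profile:v baseline -tune zerolatency -f rtp rtp://127.0.0.1:8004` (paths and host are assumptions), and test playback with the streaming demo page that ships with Janus.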

Related

Send real-time video via wifi

I would like to make a personal application to be installed on two iPhones: the first to be used as a webcam that transmits to the second via wifi.
Having no experience with Xcode, I am looking for a code example to connect two devices via wifi and transmit a real-time video stream.
Unfortunately, the documentation and examples I have found are deprecated, partial, or inconsistent.
Where can I find some code examples to help me solve my problem, preferably in Objective-C (but Swift is fine too)?
Thank you

WebRTC - P2P - Server Side Video Recording

I’m planning to build a video conference app. (NodeJS + React Native)
Requirements
One to One Video Conference ( 2 Speakers )
Video / Audio Recording of both the participants.
Store the recorded stream in an S3 bucket and watch the videos directly from it.
Live Streaming (Future Goals, but not at the moment)
Strategies tried so far:
Tried Twilio and Agora, but they weren't feasible due to pricing.
Mediasoup (an SFU, inspired by Dogehouse) was another option, but it's relatively new and development would take much longer.
So I have concluded I should start with peer-to-peer WebRTC in React Native and record videos on a virtual server by connecting as a ghost participant (2 speakers + 1 ghost participant).
Need some strategies to implement WebRTC recording at the server. (Recordings are a bit crucial, so I don’t want to depend on the client)
Should I go with Puppeteer on the server, joining as a ghost participant to record whenever a room is created? If yes, is it possible to run multiple Puppeteer instances? At times multiple room recordings might happen, so it needs to record concurrently; I need to confirm the scalability.
Look into Kurento / Jitsi
Any other options?
Great, if you could help me out! Cheers!!
As a developer evangelist for Agora, I want to say thanks for considering Agora. With regard to the pricing: while Agora offers a generous free tier (10k min/month), this is meant for development usage, and once your project is deployed to production, costs will scale similarly to hosting infrastructure (like AWS/GCP).
As with any project, to cover costs you will need some monetization strategy or some free credits to grow the business. Like other platforms, Agora has a start-ups program for qualified startups.
All that being said, to answer your question about approach: the ghost-client approach should work; Agora's cloud recording uses similar logic. With regard to scalability, you could run multiple Puppeteer instances.
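The "multiple Puppeteer instances" pattern boils down to giving each room its own recording session and running them concurrently. Here is a minimal sketch of that orchestration; `recordRoom` is a stand-in that, in a real server, would call `puppeteer.launch()`, join the room as the ghost participant, and capture the streams.

```javascript
// Sketch: one independent recording session per room, run concurrently.
// In production, recordRoom would launch a headless browser, e.g.:
//   const browser = await puppeteer.launch({ args: ['--use-fake-ui-for-media-stream'] });
// Here it just simulates an async session so the pattern is visible.
async function recordRoom(roomId) {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated session
  return `${roomId}.webm`; // path of the finished recording (illustrative)
}

async function recordAll(roomIds) {
  // Promise.all lets every room record at the same time instead of
  // queueing; each entry maps to its own browser instance.
  return Promise.all(roomIds.map((id) => recordRoom(id)));
}
```

The practical scalability limit is memory and CPU per headless-browser instance, so you would typically cap concurrent sessions per machine and shard rooms across workers.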
You can take a look at html5 videocall web application on GitHub for inspiration.
As it uses Wowza SE as a relay for scaling and reliability, streams can be recorded server side with FFmpeg. FFmpeg can take one or multiple streams as input, mix/transcode them, and output to a local or external destination.
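To make that concrete, here is a sketch of the two FFmpeg invocations involved; the stream URLs and filenames are illustrative, and the exact relay URLs depend on your server setup.

```shell
# Record a single relayed RTMP stream server-side without re-encoding
ffmpeg -i rtmp://localhost/live/room1 -c copy room1.mp4

# Mix two participants into one file: video side by side, audio mixed
ffmpeg -i rtmp://localhost/live/alice -i rtmp://localhost/live/bob \
  -filter_complex "[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]" \
  -map "[v]" -map "[a]" -c:v libx264 -c:a aac call.mp4
```

The `-c copy` variant is cheap because nothing is re-encoded; the mixing variant needs a full transcode, so budget CPU per concurrent call accordingly.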
More advanced setups, like PaidVideochat - Turnkey Videochat Site on WordPress, support mixing multiple streams from conferences/calls into the same video file.
Using a relay streaming server is also great for scaling to multiple viewers.
The Galene SFU has native support for server-side recording (disclosure, I'm the main author). However, it is a fairly young project, which might be a problem for you.

How to get the live video feed from my DJI Spark to my PC? (RTMP or RTP)

I want to get the live stream from my DJI Spark to my PC, because I want to run a deep-learning model to do object detection in real time. Are there any ways to get the live stream using RTMP or RTP servers? I found that by using the Tellopy module in Python we can get the live feed from the DJI Tello.
If you mean built into the SDK, then the answer is no, but you could decode the video stream yourself and package it up accordingly based on the needs of the server you wish to communicate with.
A few have done this work; the GO app, in order to support the various streaming sites, has done it, so it's known to be possible. It might take some work though.
Best of luck!
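One hedged way to do that packaging, assuming you can get the raw H.264 elementary stream out of the SDK's video-data callback and onto disk or a pipe, is to let FFmpeg wrap it and push it to a server on your PC. The filename and server URL below are assumptions for illustration.

```shell
# raw_feed.h264: raw H.264 NAL units captured from the SDK's video callback.
# FFmpeg wraps the stream in FLV (no re-encode) and pushes it to an RTMP
# server running on the PC, where the detection model can consume it.
ffmpeg -f h264 -i raw_feed.h264 -c copy -f flv rtmp://my-pc:1935/live/spark
```

On the PC side, your detection pipeline can then read `rtmp://my-pc:1935/live/spark` like any other video source (e.g. with OpenCV's `VideoCapture`).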

Proper way to integrate WebRTC to Electron app?

I'm trying to develop a peer-to-peer desktop app with Electron and WebRTC which transfers only JSON data between peers. I ran into many libraries, such as PeerJS, node-crt and electron-webrtc, but I'm not sure what's the best way to properly integrate one of them. Any ideas? Thanks
Personally, I have chosen https://github.com/andyet/SimpleWebRTC, which is an API that is pretty easy to set up. It is not specific to Electron, it is open source, there is no API key needed, and it works pretty well!
But to transfer JSON data you can just use WebSockets, because you will need them with WebRTC for signalling anyway... (OK, this is not a P2P solution.)
For those who come across this post now, I would recommend https://github.com/feross/simple-peer, as it provides a simpler abstraction on top of WebRTC and is actively maintained. It appears that, as of now, SimpleWebRTC has been deprecated.
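For the JSON-over-data-channel case, a minimal simple-peer sketch looks like the following. This assumes it runs in Electron's renderer process (where the browser WebRTC stack is available), and `signallingSocket` is a hypothetical transport of your choosing, e.g. a WebSocket, for exchanging the offer/answer/candidate payloads.

```javascript
const Peer = require('simple-peer');

// One side is the initiator; the other constructs with { initiator: false }.
const peer = new Peer({ initiator: true });

peer.on('signal', (data) => {
  // Forward this offer/candidate to the remote peer over your own
  // signalling channel (signallingSocket is a placeholder name).
  signallingSocket.send(JSON.stringify(data));
});

// When a signal from the remote peer arrives over that channel:
//   peer.signal(JSON.parse(message));

peer.on('connect', () => {
  // The data channel is open; send JSON as a string.
  peer.send(JSON.stringify({ type: 'hello', payload: 42 }));
});

peer.on('data', (chunk) => {
  // Incoming data arrives as a Buffer/Uint8Array; parse it back to JSON.
  const message = JSON.parse(chunk.toString());
  console.log('received', message);
});
```

The main design decision left to you is the signalling transport; a plain WebSocket server that relays the `signal` payloads between the two peers is usually enough.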

Need Help on Adaptive Streaming with JWPlayer

I'm currently exploring features in JWPlayer that can help me achieve my aim: automatic video-quality adjustment based on bandwidth, usually called adaptive streaming. In the JWPlayer JavaScript API on the official JWPlayer website I have seen a feature that I think can provide this, getQualityLevels(), but there is no documentation there, so I cannot start doing what I want.
Meanwhile, I read on the JWPlayer website, in the Streaming section, that I can use dynamic RTMP to get adaptive streaming. But that needs an RTMP server, and I think RTMP has lots of features I won't need, because I just need adaptive streaming.
My questions are:
Is there any sample code you can provide to help me get adaptive streaming?
For example, how do I get several quality levels (SD/HD, or 720p/1080p) from the original video I upload, so that the player can automatically select one based on the user's bandwidth?
Please help me with this.
Any answer is really appreciated,
Regards,
William
Here is a demo of this API call in action - http://bit.ly/1ix120K
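For reference, a sketch of the quality-levels API in use is below. The file paths and labels are illustrative assumptions. Note that a playlist of separate MP4 sources like this gives the viewer a manual quality menu; for true bandwidth-driven auto-switching you would encode the upload into multiple renditions and serve them as an adaptive format such as HLS, which the player then switches between on its own.

```javascript
// Illustrative setup with several renditions of the same clip.
jwplayer('player').setup({
  playlist: [{
    sources: [
      { file: '/video/clip-1080.mp4', label: '1080p' },
      { file: '/video/clip-720.mp4',  label: '720p'  },
      { file: '/video/clip-480.mp4',  label: '480p'  }
    ]
  }]
});

// Fires once the player knows which quality levels are available.
jwplayer('player').onQualityLevels(function () {
  console.log(jwplayer('player').getQualityLevels()); // array of level objects
});

// Force a specific rendition by index if needed:
// jwplayer('player').setCurrentQuality(0);
```

To produce the renditions themselves, a transcoding step on upload (e.g. FFmpeg scaling the source to 1080p/720p/480p) is the usual approach.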