Record participants' video streams individually in jitsi-meet - webrtc

I'm using Jitsi for one of our requirements. I have followed the quick installation steps to configure and install Jitsi. I am also using https://jitsi.github.io/handbook/docs/dev-guide/dev-guide-iframe to create rooms, along with some config changes.
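For reference, room creation via the IFrame API looks roughly like this (the domain, room name, and container element here are placeholders):

// external_api.js from the deployment must be loaded first
const api = new JitsiMeetExternalAPI('meet.example.com', { // placeholder domain
    roomName: 'myRoom',                                    // placeholder room name
    parentNode: document.querySelector('#meet')            // placeholder container
});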
One of our requirements is to record the video of each participant individually, basically saving each participant's video stream. As far as I know, Jibri records the whole conference session, but we need the participants' video streams separately. There was a library for this, Jirecon, which has since been archived; I tried an enhanced version of it, but was not successful.
Can anybody please advise which approach to follow and how to achieve this?

Jibri joins the conference like a regular participant and records it, which means you cannot record each participant individually via Jibri.
You can try client-side recording and then upload the resulting file wherever you want. Have a look at RecordRTC (https://github.com/muaz-khan/RecordRTC). I am not sure how it behaves with the IFrame API (you mention external_api.js), but you can try.
If you have your own Jitsi instance, a better way is to modify the instance's web project (/usr/share/jitsi/meet): create a JS file, implement client-side recording in it, and load that file from index.html. Then trigger the recording actions via iframe postMessage, as in the sketch below.
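A minimal sketch of that idea, using the browser's standard MediaRecorder API rather than RecordRTC. The message names and the upload endpoint are made up; each participant records their own camera/mic and uploads the result:

// record.js, loaded from index.html on the Jitsi instance
let recorder;

window.addEventListener('message', (event) => {
    // 'start-recording' / 'stop-recording' are hypothetical message names;
    // match them to whatever your outer page sends via postMessage
    if (event.data === 'start-recording') {
        navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
            const chunks = [];
            recorder = new MediaRecorder(stream);
            recorder.ondataavailable = (e) => chunks.push(e.data);
            recorder.onstop = () => {
                // each client uploads its own recording; '/upload' is a hypothetical endpoint
                const blob = new Blob(chunks, { type: 'video/webm' });
                const body = new FormData();
                body.append('file', blob, 'participant.webm');
                fetch('/upload', { method: 'POST', body });
            };
            recorder.start();
        });
    } else if (event.data === 'stop-recording' && recorder) {
        recorder.stop();
    }
});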
I am not sure, but if your purpose is to share the media files with the other participants, you could use Jibri, save the recordings into a tempMedia folder on your local server, and share them from there for a while.

Related

How can I use Adaptive streaming for VODs in Ant Media Server?

I'm using Ant Media Server for streaming. My use case requires me to record the Live Streams as VODs so the users can access the content later as well.
Like the live streams, I want to apply adaptive settings to the VODs as well, so that users can get the resolution best suited to their network.
I can't find any built-in solution for this yet. Can you please suggest a way to do this?
I'm using S3 to store the recordings.
Thanks.
Thank you for the question. As far as I understand, the live streams are recorded as VoD files.
I think the most efficient way is to do this through HLS. This way, the VoD files are recorded as HLS and multiple bitrates are available; there is no need to transcode again, and the files can be played directly. Let me explain this solution step by step.
Set the HLS playlist type to event and settings.deleteHLSFilesOnEnded to false. Edit red5-web.properties for the application and set the following settings:
settings.hlsPlayListType=event
settings.deleteHLSFilesOnEnded=false
Restart the server:
sudo service antmedia restart
Add adaptive bitrates on the web panel.
Start live streaming and let Ant Media Server create the HLS (m3u8 and ts) files for each bitrate.
Stop live streaming.
Then you can watch the stream by requesting the master m3u8 file, which is {STREAM_ID}_adaptive.m3u8. It can even be played directly by an embedded player, even when the stream is no longer live, as in the sketch below.
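For instance, playing that master playlist in a browser with hls.js would look roughly like this (a sketch; the server URL and stream ID are placeholders):

var video = document.getElementById('vod-player');
var src = 'https://example.com:5443/LiveApp/streams/myStream_adaptive.m3u8'; // placeholder URL and stream ID

if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari and iOS play HLS natively
    video.src = src;
} else if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource(src);
    hls.attachMedia(video);
}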
For more information, take a look at this wiki about HLS Playing
Please let me know if this approach helps you.

Accessing the localStorage of one device from another in react-native

Some days ago, I saw a blog post about why we need to keep WhatsApp open on our smartphone to make it work on our PC.
It said that WhatsApp fetches the data (messages) from our smartphone and shows them on our PC, which seems pretty good, as it lowers the load on the database.
So now I want to know if there is a way to do this in React Native, i.e. access the localStorage of one device from another.
Why do I want to do that?
I am building an app where, in the profile, I also take a profile picture from the user, and I don't want to store it in the database but instead store it locally and serve it from there.
The reason is that we would need buckets to store the media files and serve them from there, and I wanted to cut out that part when deploying my app.

YouTube Live-streaming API returns an empty list of streams

I'm trying to make an Electron app that will work together with OBS. For this I need the Stream Name from YouTube, among other things.
But this stream name has proven hard to get from the API.
I'm currently testing with the Python examples from the documentation page.
This is the example that I use.
Before I started, I created an event with all the settings, and I'm able to see the Stream Name/Key on the website.
When I use the API, it just returns an empty list. I'm sure that I have a stream and that it's working, as I created it myself and I am streaming to the example view.
However, I never get anything from that service. The other services work, like the list-broadcasts one; it returns all planned events.
List Broadcasts:
$ python .\list_broadcasts.py
Broadcasts with status 'all':
Test Begivenhed (ib08ZcLQgZA)
List Streams:
$ python .\list_streams.py
Live streams:
The code for the examples can also be found here: https://github.com/youtube/api-samples/tree/master/python
It turns out that when you create a "Broadcast", a random stream is attached, and it's impossible to get the key from that stream.
Instead, you need to create a new "Stream/Customised Ingestion"; this makes it all work. Just make sure that your broadcast is attached to the stream.
You can reuse this stream, which is a good thing for me, as I don't need to change the streaming key all the time.
You can create a new Stream/Customised Ingestion preset in the UI or via the API, as sketched below.
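Since the app is an Electron app, here is roughly what that flow looks like with the Node googleapis client (a sketch: the OAuth setup is omitted, the stream title is a placeholder, and oauth2Client is assumed to be an already-authorised client):

const { google } = require('googleapis');

// assumes an already-authorised OAuth2 client
const youtube = google.youtube({ version: 'v3', auth: oauth2Client });

async function createAndBindStream(broadcastId) {
    // create a reusable stream (the "Customised Ingestion" preset)
    const stream = await youtube.liveStreams.insert({
        part: 'snippet,cdn',
        requestBody: {
            snippet: { title: 'Reusable OBS stream' }, // placeholder title
            cdn: { ingestionType: 'rtmp', resolution: 'variable', frameRate: 'variable' },
        },
    });

    // attach the stream to the broadcast
    await youtube.liveBroadcasts.bind({
        id: broadcastId,
        part: 'id,contentDetails',
        streamId: stream.data.id,
    });

    // the value OBS needs as its stream key
    return stream.data.cdn.ingestionInfo.streamName;
}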

SMIL adaptive streaming in Videojs

What is required to use a SMIL file for adaptive streaming in a videojs player? I have created the SMIL file in my Wowza application, and it is creating my 4 separate streams and making them available. However, I cannot get my webpage, which uses videojs, to correctly play the SMIL file. Hints on that coding, or pointers to the correct documentation, would be greatly appreciated.
There aren't many implementations of SMIL players. I'm sure I've seen Wowza URLs suggesting it will output the SMIL as other formats, something like whatever.smil/manifest.m3u8. That's HLS, which can be played natively on mobile and Safari, and with videojs-contrib-hls elsewhere.
I know the question is old, but I've been struggling with this recently, so I want to share my experience in case anyone is interested. My scenario is very similar: I want to deliver adaptive bitrate streaming from Wowza to clients using videojs.
There is a master link that explains how to set up and run the Wowza Transcoder for live streaming, and how to set up your adaptive bitrate streams using a SMIL file. Following the video there, you can get a stream that uses ABS, but the SMIL file is attached to the stream name, so it is not a solution if you have streams that come to Wowza from another media-server origin and need to be transcoded before being served to the clients. The article mentions a few key things (like Stream Name Groups), but somehow it doesn't seem entirely clear, at least to me. So here is some clarification of what I understood from all the articles I read, and of what I did to achieve ABS:
You can achieve ABS in Wowza either with SMIL files or with Stream Name Groups (NGRP). NGRP refers to a block of streams defined in the Transcoder template that can be played back using multi-bitrate streaming dynamically (this is what I used). SMIL files, on the other hand, are used to create a "static" list of streams for multi-bitrate VOD streaming. If you are using Wowza origin-edge delivery, you'll need the .smil file, because NGRPs do not get forwarded to the edge. (Source for all this information: here.)
In case you need the SMIL file, you probably need to generate a new one for every stream, and you probably want to do that in an automated way, so the best approach is through an HTTP request (the link above explains how to achieve this).
In case you can live with NGRPs, things are a bit easier:
You need to enable the Wowza Transcoder (this is pretty easy; the steps are in the video I mention above).
You should create your own Transcoder template with the different stream presets you want to deliver; as an example, you can check the default ones that are already there. The more presets you add, the more work Wowza has to do whenever a stream comes in, since it will generate a new stream for every preset you have defined.
Now we generate the NGRPs. In your Transcoder template, you can define as many NGRPs as you want (to clarify: these are like groups of streams that you point your clients' video player at; each NGRP contains the streams the player can switch between when doing adaptive bitrate streaming). For instance, the default template defines NGRPs such as _all and _mobile.
If you play the NGRP "_mobile" in the client's video player, the ABS algorithm in the player will adapt to play either the 240p or the 160p stream based on the client's capabilities.
So imagine you have these two NGRPs. In order to play them in videojs, you need to set the source to:
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_all/playlist.m3u8
or
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_mobile/playlist.m3u8
... based on how many options you want the client player to have for ABS. (For instance, if your targets are old mobile devices, you probably just want to offer a couple of low-bitrate streams.)
(This is for delivering an HLS stream. For other formats, the extension changes; for instance, for a DASH stream, you would have /manifest.mpd instead of /playlist.m3u8.)
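In videojs, that setup would be along these lines (a sketch assuming an HLS-capable build, e.g. with videojs-contrib-hls; the host, application, and stream names are placeholders):

// assumes: <video id="my-player" class="video-js" controls></video>
var player = videojs('my-player');
player.src({
    src: 'http://wowza.example.com:1935/live/ngrp:myStream_all/playlist.m3u8', // placeholder URL
    type: 'application/x-mpegURL'
});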
That is all. There is also a very helpful link in the video.js documentation explaining how it does the bitrate switching: here.
I hope it helps someone! At least clarifying things! :)

Simple Red5 chat

I want to create a simple Red5 video chat consisting of two SWFs: one for showing the user's own webcam video, which is easy. The problem I am having is how to get the other user's video from the server into the second SWF file.
Basically, I want to connect the SWF files with PHP and MySQL so that the respective users can connect with each other.
Does anyone have an example or something to work with? I am stuck here, and although I have searched the net on this topic, most of the video chat tutorials are complex ones.
Have a look at http://www.red5chat.com/. This may help you.