Is there a peer client for Java to use for WebRTC? I'm currently using Kurento to stream video/audio between two browser clients. In addition, I want to attach another peer client to receive the video stream in my Java service. Is this possible? I couldn't find any WebRTC peer implementations that use Java.
WebRTC Java/Android, via Gradle: implementation 'org.webrtc:google-webrtc:1.0.+' (also published on Bintray), or
Codename One WebRTC for Android, iOS, JavaScript and Desktop
There is the version for Android, but you can use it anywhere. You can also download the source code and compile it yourself from the first link (though I don't imagine that is necessary in your case).
The version for Codename One works everywhere but depends on the app being a Codename One app.
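To make the first option concrete, here is a minimal sketch of what a receiving peer might look like with the org.webrtc Java bindings from the google-webrtc artifact. Note that the artifact targets Android (initialization wants an Android Context), so a plain JVM service would need its own desktop build of the libwebrtc Java wrapper; the class and method names are the library's, but the overall wiring (and the signaling exchange with Kurento, which is omitted) is only an assumption of how you would use it.

```java
// Minimal, Android-flavored sketch of a receiving peer using org.webrtc.
// Assumes an Android Context is available; a plain JVM needs a desktop
// build of the libwebrtc Java bindings.
import org.webrtc.DataChannel;
import org.webrtc.IceCandidate;
import org.webrtc.MediaStream;
import org.webrtc.PeerConnection;
import org.webrtc.PeerConnectionFactory;
import org.webrtc.RtpReceiver;

import java.util.Collections;

public class ReceivingPeer {

    public PeerConnection create(android.content.Context context) {
        // One-time global initialization of the native library.
        PeerConnectionFactory.initialize(
                PeerConnectionFactory.InitializationOptions.builder(context)
                        .createInitializationOptions());

        PeerConnectionFactory factory = PeerConnectionFactory.builder()
                .createPeerConnectionFactory();

        PeerConnection.RTCConfiguration config =
                new PeerConnection.RTCConfiguration(Collections.emptyList());

        // The observer is where the remote media shows up once the SDP
        // offer/answer and ICE exchange (your signaling) completes.
        return factory.createPeerConnection(config, new PeerConnection.Observer() {
            @Override public void onSignalingChange(PeerConnection.SignalingState s) {}
            @Override public void onIceConnectionChange(PeerConnection.IceConnectionState s) {}
            @Override public void onIceConnectionReceivingChange(boolean receiving) {}
            @Override public void onIceGatheringChange(PeerConnection.IceGatheringState s) {}
            @Override public void onIceCandidate(IceCandidate candidate) {
                // Send the candidate to the remote peer over your signaling channel.
            }
            @Override public void onIceCandidatesRemoved(IceCandidate[] candidates) {}
            @Override public void onAddStream(MediaStream stream) {
                // The remote video/audio tracks arrive here.
            }
            @Override public void onRemoveStream(MediaStream stream) {}
            @Override public void onDataChannel(DataChannel channel) {}
            @Override public void onRenegotiationNeeded() {}
            @Override public void onAddTrack(RtpReceiver receiver, MediaStream[] streams) {}
        });
    }
}
```

Once the SDP offer/answer and ICE candidates have been exchanged over whatever signaling you use with Kurento, the remote media is delivered through the observer callbacks.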
We have a mobile application that historically has used RTSP streaming to allow a user to watch a live stream, which is currently published via Wowza Streaming Engine. We have had a need to lower stream latency, so we have gravitated towards WebRTC to achieve this.
The problem is that there seems to be a lack of documentation or examples regarding the implementation of a react-native WebRTC viewer that connects to a remote stream.
Does anyone out there have any documentation, or code examples for this kind of implementation?
I do note there is a react-native-webrtc library; however, all examples demonstrate connecting two peers on mobile phones with their video cameras, i.e. like FaceTime. We are after an example demonstrating someone on a phone connecting to a remote streaming server with a video feed.
Cheers,
If you want a WebRTC client to connect to a server, you need a server doing WebRTC with the proper signaling that fits your needs. WebRTC doesn't care which signaling you use, so you have to choose it, or choose the platform you need.
There are a lot of different media servers and libraries that support WebRTC on the server side, each with its own specific signaling (e.g. Freeswitch, Kurento) or no signaling at all (e.g. Mediasoup). Few will have a React Native version, as media streaming is not really something handled on the JavaScript/UI side, but you can do something with the react-native-webrtc lib.
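To illustrate that signaling is nothing more than messages you define yourself, here is a rough Java sketch using the JDK's built-in WebSocket client (the same idea applies from the JavaScript side in React Native); the wss:// URL and the JSON fields are made up for illustration, because every server or app defines its own format.

```java
// Rough sketch of custom WebRTC signaling: just JSON messages over a
// WebSocket. The URL and message shapes below are hypothetical; each
// media server defines its own protocol.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class SignalingSketch {
    public static void main(String[] args) {
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("wss://example.com/signaling"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        // Expect an SDP answer or a remote ICE candidate here and
                        // hand it to the local PeerConnection.
                        System.out.println("signaling message: " + data);
                        webSocket.request(1); // ask for the next message
                        return null;
                    }
                })
                .join();

        // Send the local SDP offer; the JSON shape is whatever you and your
        // server agree on.
        ws.sendText("{\"type\":\"offer\",\"sdp\":\"...\"}", true);
    }
}
```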
Twilio supports a lot of platforms and could be a good start if you are looking for a ready-to-use solution.
I am not able to compile the WebRTCTest application in the Ant Media GitHub repository. This is the application I am trying to compile.
I need to test live stream performance over WebRTC. The setup is running Linux and I am using GStreamer to capture the video frames.
Following is the use case:
Stream live video frames captured from the camera to Ant Media using the WebRTC protocol. However, when I tried compiling the WebRTCTest application given in the above link, I got a lot of compilation issues, mainly due to the header file names in the WebRTC source code.
How can I publish the video frames captured from the camera to Ant Media via WebRTC?
Also, does Ant Media provide any WebRTC-based SDK for the Linux platform (C based)?
Actually, WebRTCTest has not been supported for more than a year. But there is another WebRTC test tool developed by the Ant Media team to test Ant Media Server. It is Java based and publishes video files. You can still use it on Linux and publish your webcam with small changes.
https://gitlab.com/Ant-Media/webrtc-test
You should use https://resources.antmedia.io/docs/load-testing as a testing tool.
But if you need to publish or play only with GStreamer, I have written a GStreamer program in C that uses WebRTC to send streams to and receive streams from Ant Media Server: https://github.com/USAMAWIZARD/AntMedia-Gstreamer-Webrtc. It is tested with AMS Enterprise Edition 2.5.1.
I'm trying to build a live video streaming application from a USB camera to an application running on a remote desktop. I've researched protocols like RTMP, RTSP and WebRTC. According to my understanding I can't use WebRTC, since it's only compatible with the browser, and I'm not building my application for a browser here. Please help me choose the right protocol and also the media server.
You can, and many applications do, use WebRTC outside the browser. WebRTC implementations are available for many different platforms including iOS, Android and embedded systems.
You can even use Headless Chrome if you want to use the Chrome APIs without the visual parts of the browser.
I have a Chrome extension running in my browser. I also have a Mac OS X app I wrote in Swift/Objective-C in Xcode. I am wondering how this Chrome extension can talk to the Mac OS X app on the same computer.
I am aware of the Chrome Extension API, but do not know how I can capture the information that is sent by Chrome in Swift. Does anyone know how to do this?
Thanks
There are two broad approaches you can take.
Native Messaging API. This does have the limitation that Chrome must launch the process (and communicate to it via STDIO) - you cannot attach to an existing process. The upside - the communication channel is pretty secure.
Your native app can expose a web server (or better yet, a WebSockets server) on a local port. The extension can then try to connect to this port and talk to your app. The downside is that anything (at least on the machine) can connect to your native app.
This is a frequently used approach; for example, 1Password or various IDE integrations work this way.
You could combine the two approaches to launch the app with a "launcher" Native Host if it's not running.
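To make the first (Native Messaging) approach more concrete, here is a rough sketch of a native messaging host loop: Chrome frames each JSON message on stdin/stdout with a 4-byte length prefix in native byte order (little-endian on typical machines). It is written in Java purely for illustration; the same framing applies to a Swift binary, and the reply content below is made up.

```java
// Minimal sketch of a Native Messaging host: Chrome launches this process
// and exchanges length-prefixed JSON messages over stdin/stdout.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class NativeMessagingHost {
    public static void main(String[] args) throws IOException {
        InputStream in = System.in;
        OutputStream out = System.out;

        while (true) {
            byte[] lenBytes = in.readNBytes(4);
            if (lenBytes.length < 4) break; // Chrome closed the pipe.
            int len = ByteBuffer.wrap(lenBytes).order(ByteOrder.LITTLE_ENDIAN).getInt();

            String message = new String(in.readNBytes(len), StandardCharsets.UTF_8);
            // ... act on the JSON message sent by the extension ...

            // Write a reply using the same 4-byte length prefix.
            byte[] reply = "{\"ok\":true}".getBytes(StandardCharsets.UTF_8);
            out.write(ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN)
                    .putInt(reply.length).array());
            out.write(reply);
            out.flush();
        }
    }
}
```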
In WebRTC, I always see implementations of peer-to-peer connections and how to get video streaming from one client to another. What about server-to-client?
Is it possible for WebRTC to stream a video file from server to client?
(I am thinking about using WebRTC Native C++ API to create my own server application to connect to the current implementation on chrome or firefox browser client application.)
OK, if it is possible, will it be faster than many current video streaming services?
Yes it is possible as the server can be one of the peers in that peer-to-peer session.
If you respect the protocols and send the video in SRTP packets using VP8, the browser will play it. To help you build these components on other applications or servers, you can check this page and this project as a guide.
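As a rough illustration of the server acting as a peer, this is roughly what publishing a video track looks like with the org.webrtc Java bindings (the native C++ API has equivalent calls); the identifiers are placeholders, and the capturer and signaling wiring are omitted.

```java
// Sketch: the "server" peer adds a local video track to its PeerConnection,
// which is then negotiated and sent to the browser like any other peer's
// video. Assumes `factory` and `peerConnection` were created as usual and
// that some frame source (camera, file reader, custom capturer) feeds the
// VideoSource via its CapturerObserver.
import org.webrtc.PeerConnection;
import org.webrtc.PeerConnectionFactory;
import org.webrtc.VideoSource;
import org.webrtc.VideoTrack;

import java.util.Collections;

public class ServerSidePublisher {
    public static VideoTrack publish(PeerConnectionFactory factory, PeerConnection peerConnection) {
        // false = not a screencast source.
        VideoSource videoSource = factory.createVideoSource(false);

        VideoTrack videoTrack = factory.createVideoTrack("video0", videoSource);
        peerConnection.addTrack(videoTrack, Collections.singletonList("server-stream"));
        return videoTrack;
    }
}
```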
Now, comparing WebRTC with other streaming services... It will depend on several variables like the Codec or the protocol. But, for instance, comparing WebRTC (SRTP over UDP with VP8 Codec) against Flash (RTMP over TCP with H264 Codec), I would say that WebRTC wins.
The player will be Flash Player against the native <video> tag.
The transport would be TCP against UDP.
But of course, everything depends on what you are sending to the client.
I have written some apps and plugins using the native WebRTC API, and there isn't a lot of information out there yet, but here are a few useful resources to get you started:
QT Example: http://research.edm.uhasselt.be/jori/qtwebrtc
Native to Browser example: http://sourcey.com/webrtc-native-to-browser-video-streaming-example/
I started with the WebRTC Native C++ to Browser Video Streaming Example, but it does not build anymore against the current WebRTC native code.
Then I made modifications, merging into a standalone process:
management of the peerConnection (the peerconnection_server)
access to Video4Linux capture (the peerconnection_client).
Removing the stream from the browser to the WebRTC native C++ client gives a simple solution for accessing a Video4Linux device through a WebRTC browser; it is available on GitHub as webrtc-streamer.
Live Demo
We are attempting to replace MJPEGs with Webrtc for our server software and have a prototype module for doing this using a smattering of components tied to the Openwebrtc project. It has been an absolute bear to do, and we have frequent ICE negotiation errors (even over a simple LAN), but it mostly works.
We also built a prototype with the Google Webrtc module, but it had many dependencies. I find it easier to work with the Openwebrtc modules because Google's stuff is so tightly tied to general peer-to-peer scenarios on the browser.
I compiled the following from scratch:
libnice 0.1.14
gstreamer-sctp-1.0
usrsctp
Then I have to interact with libnice a bit directly to gather candidates, and I also have to write out the SDP files by hand. But the amount of control (being able to control the source of the pipeline) makes it worthwhile. The resulting pipeline (with two clients off one server source) is below:
Of course. I'm writing a program using the native WebRTC API which can join the conference as a peer and record both video and audio.
See: How to stream audio from browser to WebRTC native C++ application
And you can definitely stream media from a native app.
I'm sure you can use dummy_audio_file to stream audio from a local file, and you can find a way to access the video streaming process on your own.
Yes it is. We have developed a load test tool to publish and play streams for Ant Media Server. This tool can broadcast a media file. We used the same native WebRTC library used in Ant Media Server.
Sure it's possible; it allows converting live streaming to WebRTC, for example:
OBS/FFmpeg ---RTMP---> Server ---WebRTC--> Chrome/Client
For this scenario, it allows ultra-low-latency live streaming, about 600~800 ms, when playing the live stream via WebRTC. Please take a look at this demo.