I want to create a chat with audio, video and text messages. Is it possible using WebRTC? Or does it only allow audio and video chats?
One side of my app will be implemented in the browser. The other one will use the native C++ API.
Does anyone have examples using the native C++ API and/or JavaScript?
The WebRTC specification is still very much in flux, but there's a DataChannel API in the spec that is implemented in an early form in both Firefox and Chrome. DataChannels are intended to allow you to send arbitrary bytes between peers, and the spec provides for both reliable (TCP-like) and unreliable (UDP-like) channels.
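For illustration, here is a minimal browser-side sketch of opening DataChannels for a text chat, using the API shape in the current spec (early implementations differed). The channel labels are placeholders, and the offer/answer/ICE signaling is omitted since that goes through your own server:

```javascript
// Minimal sketch: creating data channels on an RTCPeerConnection.
// Signaling (offer/answer/ICE candidates) is omitted and must be handled
// by your own server; labels below are illustrative only.
const pc = new RTCPeerConnection();

// Reliable, ordered channel (TCP-like) for text chat messages.
const chat = pc.createDataChannel('chat');

// Unreliable, unordered channel (UDP-like), e.g. for frequent status updates.
const status = pc.createDataChannel('status', { ordered: false, maxRetransmits: 0 });

chat.onopen = () => chat.send('hello from the browser side');
chat.onmessage = (event) => console.log('peer says:', event.data);
```

The native WebRTC library exposes an equivalent data-channel interface on the C++ side, so the same channels can be consumed by your native peer.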
I am not sure if WebRTC allows for text chatting. I was able to successfully create an Android application that did all of this, but only with the combination of Google's Libjingle and WebRTC libraries. The Libjingle library ships several example programs/pieces of code that demonstrate its functionality. The call example in Libjingle sounds very similar to what you want to do, and is what I built my Android application from. The only thing is I have not yet ported it to a web browser, so I am not sure whether Libjingle will work there.
I have begun looking into it, and I have found some people on the WebRTC discussion group who have developed a very nice multi-user video chat application for the web browser, built with WebRTC. It is capable of video (along with voice) communication as well as text chatting. I do not know if this matters, but it all occurs within a single interface (it does not seem to offer text-only, voice-only or video-only modes separately). I am sure it would not be too difficult to separate them out if you wanted/needed to. They have posted all of their code on GitHub and seem to be actively updating and improving it.
Related
I would like to make a personal application to be installed on two iPhones. The first is to be used as a webcam that transmits to the second via Wi-Fi.
Having no experience with Xcode, I am looking for a code example to connect two devices via Wi-Fi and transmit a real-time video stream.
Unfortunately, the documentation and examples I found are deprecated or partial and inconsistent.
Where can I find some code examples to help me solve my problem, preferably in Objective-C (but Swift is fine too)?
Thank you
I am using the native Safari player implementation to stream video with the HLS streaming protocol.
My goal is to get time-based metadata (such as EXT-X-DATERANGE) from a live stream manifest.
As far as I know, it is not possible to retrieve this data because the streaming logic is fully controlled by the Safari player, which does not expose it.
For now, I have come up with two possible solutions:
Manually download the manifests and parse out the EXT-X-DATERANGE tag (see the sketch after this list). With this approach the download timing has to be managed manually too, and the number of requests for the playlists increases.
The desktop Safari browser supports MSE. This means it is possible to have full control over manifest retrieval and parsing. There are awesome libraries that already provide this functionality, such as shaka-player or hls.js. It is possible to implement a custom response filter for segments (shaka-player) or listen to the Hls.Events.FRAG_CHANGED event (hls.js) in order to get access to the playlist. The problem is that Safari on iOS still does not support MSE, so this solution cannot be applied on mobile.
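To illustrate option 1, here is a rough sketch of polling the media playlist yourself and pulling out the EXT-X-DATERANGE tags while the native player keeps handling playback. The playlist URL and polling interval are placeholders:

```javascript
// Rough sketch of option 1: poll the media playlist ourselves and extract
// EXT-X-DATERANGE tags, while the native Safari player handles playback.
// The URL and interval below are placeholders.
const PLAYLIST_URL = 'https://example.com/live/stream.m3u8';

async function fetchDateRanges() {
  const response = await fetch(PLAYLIST_URL);
  const manifest = await response.text();
  return manifest
    .split('\n')
    .filter((line) => line.startsWith('#EXT-X-DATERANGE:'))
    .map((line) => {
      // Parse the comma-separated ATTRIBUTE=VALUE list after the tag name.
      const attrs = {};
      const body = line.slice('#EXT-X-DATERANGE:'.length);
      for (const match of body.matchAll(/([A-Z0-9-]+)=("([^"]*)"|[^,]*)/g)) {
        attrs[match[1]] = match[3] !== undefined ? match[3] : match[2];
      }
      return attrs;
    });
}

// Example: poll roughly once per target duration.
setInterval(() => fetchDateRanges().then(console.log), 6000);
```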
Are there any other ways to retrieve time-based metadata (such as EXT-X-DATERANGE) using native Safari player implementation?
Thanks a lot in advance!
I am trying to build an application which works this way: I, as a user, want to start a call with another user. I want the connection to be made at random, so it will connect to one of the many clients out there at random. Likewise, when other clients start a call, they should be connected to another random client, and so on. I want those calls to be made via the application (like WhatsApp), not as regular phone calls.
Now, the question is: is Twilio a good approach for this purpose?
If yes, can you tell me which of their features would fit my app best?
Thanks for any suggestions!
Twilio developer evangelist here.
I can answer that Twilio would be a good approach for you to do this within your own application. I'd recommend using Twilio Video to build it, as it allows cross-platform communication via audio or video (in your case you may not need the video, but this will give you the best audio quality).
As an example, my colleague Dominik built a video roulette application. Its interface was built in JavaScript for the web, but the idea would be the same for a native app. The code for the server-side part of the application should give some insight into how to connect random pairings.
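As a rough illustration of the random-pairing idea (this is not Dominik's actual code), the server side could keep a pool of waiting users and match each new caller with a randomly chosen one, then hand both the same room name to connect to with Twilio Video:

```javascript
// Illustrative sketch only (not the video-roulette code): pair each new
// caller with a randomly chosen waiting user and give both the same room
// name, which the clients then use to join the same Twilio Video room.
const waiting = []; // user IDs waiting for a partner

function requestCall(userId) {
  if (waiting.length === 0) {
    waiting.push(userId);
    return { status: 'waiting' };
  }
  // Pick a random waiting user and remove them from the pool.
  const index = Math.floor(Math.random() * waiting.length);
  const [partnerId] = waiting.splice(index, 1);
  const roomName = `call-${partnerId}-${userId}-${Date.now()}`;
  // Both clients would now fetch an access token and connect to this room.
  return { status: 'matched', partnerId, roomName };
}
```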
It's also possible to integrate Twilio Video with CallKit and ConnectionService so that outbound calls to other devices ring like a real incoming call.
I want to develop a video chat application between a web browser and an Android device. As far as I know I have two prominent options, WebRTC and RTMP. I have tested WebRTC, and for the web app it was quite convenient to use, so I am inclined to go with it. However, I should consider all my options, since I know little about Android development.
Do I have any reason to choose RTMP over WebRTC in the following use case:
Simple 1 to 1 video chat
Between an Android application and a web browser (just Chrome and Firefox is fine)
Recording and storing the call
Or does neither have a clear advantage over the other in this simple case? For peer discovery I have a separate application server.
For a 1:1 video chat, there is no reason whatsoever to use RTMP.
RTMP is good (and even that is debatable in 2015) for streaming - a case where one end produces the content and many on the other end consume it.
For something bidirectional, you should just pick WebRTC - its codecs are better, its availability is better and its technology is better.
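For reference, here is a bare-bones browser-side sketch of the caller in a 1:1 WebRTC video chat; the signaling through your separate application server is stubbed out as a placeholder function, and the Android side would answer analogously:

```javascript
// Bare-bones sketch of the caller side of a 1:1 WebRTC video chat.
// sendToPeerViaServer() is a placeholder for your own signaling server.
async function startCall(sendToPeerViaServer) {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

  // Capture camera and microphone and add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Relay ICE candidates and the offer to the other side through your server.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeerViaServer({ candidate: event.candidate });
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeerViaServer({ offer });

  // Remote media arrives here once the answer and candidates are applied.
  pc.ontrack = (event) => {
    document.querySelector('video#remote').srcObject = event.streams[0];
  };
  return pc;
}
```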
I want to create an application that can play the audio of YouTube videos and also save the downloaded content in a local cache, so that when the user decides to resume or replay a video it doesn't have to download the already-fetched part again, only the remaining part (the user can then decide what to do with the cache and how to organize it).
This is especially convenient on mobile (my main focus), but I'd like to create a desktop version too for experimental purposes.
So, my question is: does YouTube provide any API for this? To cache the downloaded content, my application needs to download the content itself rather than rely on an embedded player (remember it is a native application). I have a third-party application on my Android system that plays YouTube videos, so I think it's possible, unless the developers used some sort of hack; again, this is what I don't know.
Don't confuse this with the gdata web info API or the embed API; that is not what I want. What I want is to handle the video transfer myself.
As far as I know, there is no official API for that. However, you could use libquvi to look up the URLs of the real video data, or you could have a look at how they do it and reimplement it yourself (see here).