I'm trying to use AT&T's WebRTC JavaScript development framework. I don't see documentation for modifying a live call. Is it possible to modify a live call by playing an .mp3 audio file?
http://developer.att.com/enhanced-webrtc
I'm not familiar with the AT&T framework, but you can simply add another AUDIO tag that you play during the call whenever you want. If you also want to interrupt the call, you can mute the AUDIO/VIDEO tag that has the remote stream attached.
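For example, a minimal sketch in the browser, assuming the remote stream is attached to a media element you control (the element ID and file name here are hypothetical):

// "remoteAudio" is a hypothetical element the remote stream is attached to.
const remoteMedia = document.getElementById("remoteAudio") as HTMLAudioElement;
const clip = new Audio("announcement.mp3"); // any .mp3 your page can reach

function playClipDuringCall(interruptCall: boolean): void {
  if (interruptCall) {
    remoteMedia.muted = true; // silence the far end while the clip plays
  }
  clip.onended = () => {
    remoteMedia.muted = false; // restore the call audio afterwards
  };
  clip.play();
}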
I have a few live streams that I play with my video player (JWPlayer). I want a mechanism that automatically loads a live stream to test whether it plays in JWPlayer or not. This needs to happen on the server side, preferably on a Unix-flavoured machine.
For example, the live stream URL may change, or there may be a cross-domain error. Ultimately, if this happens, I want to remove the live stream from my database automatically.
Is it possible to do this automatically? Note that an m3u8 URL may play in QuickTime but not in Flash because of m3u8 errors.
I would also like a similar tracking mechanism linked to an HTML5 player that supports live m3u8 streams, say QuickTime (or maybe ffplay?).
Is this possible? If so, how?
Thanks a lot!
Is it possible to obtain a stream of audio data arriving at the system output (speakers, headphones, etc.) using CoreAudio or another framework?
Example: You're listening to a song on iTunes while watching a YouTube video, all while playing a computer game that makes sounds of its own, all of which are being played through your computer's speakers (Probably terribly annoying). My app would need to receive the entire mix as streaming data.
Thanks in advance.
Not at the level of Core Audio or any other app framework available to a user application. Some audio output capture/snoop apps do this with a kernel extension (kext), or perhaps a replacement audio hardware driver.
I'm building an application that needs to send recorded audio and video data over the network separately.
Currently I'm using QTKit to capture media, but it only allows working with the video and audio data combined.
Is there any way, or a framework, that allows working with video and audio separately?
I'm using QTKit to capture media but it only allows to work with video and audio data combined.
False.
You can add multiple inputs and outputs to your capture session with QTKit. They don't have to combine the A/V. Start here and read the entire document, then post another question if you have trouble.
The AVFoundation framework also supports recording audio and video.
Refer to this sample code: AVRecorder.
Note that this framework is only supported on OS X v10.7 or later.
Does anyone know how, technically, to send videos (i.e. YouTube videos) to a Roku player? There is a "Twonky Beam" app that allows streaming, and what it appears to do is send .mp4 files to the Roku for playback. See the demo here: http://gigaom.com/video/youtube-on-roku-twonky-airplay/
This is done without a "Twonky Beam" Roku app. It looks like something that Roku supports natively, although I cannot find anything documented.
I want to know how they were able to accomplish this without the Roku being a UPnP or DLNA device.
Any insights here would be great!
There are discussions on how to extract the mp4 URL from YouTube here and here.
In terms of how to do AirPlay-style video playback on a Roku, you would use the External Control Protocol (ECP) to launch a channel with the URLs of the videos you wish to play back, or, once your channel is launched, use ECP in combination with the roInput component to send the URLs to your channel. Your channel would then pass the URLs to a video playback component, which would initiate playback from YouTube or whatever source you send it. If you want to play URLs from your device (Android/iOS), you would need to run a web server on the device to serve the videos to the Roku.
Here is an open-source YouTube project referenced in that second thread.
Any unofficial project that plays videos from YouTube is subject to a DMCA takedown by YouTube, should they decide your project does not fit with their goals.
roInput is not really well documented; here is an example that demonstrates both roInput and launch parameters (launch parameters are key/value pairs you include in an HTTP POST):
function main(params as object)
    ' Launch parameters arrive as an associative array
    if params.parameter <> invalid then
        print "This channel was launched with Launch Parameters!"
        print params
    else
        print "launched without input parameters"
    end if
    ' Listen for roInput events (sent via ECP POSTs while the channel runs)
    port = CreateObject("roMessagePort")
    input = CreateObject("roInput")
    input.SetMessagePort(port)
    while true
        msg = wait(100, port) ' wake every 100 ms to check for events
        if type(msg) = "roInputEvent" then
            params = msg.GetInfo()
            print params
        end if
    end while
end function
So your parameters might be "vidurl=http://myserver.com/video300k.mp4&vidurl=http://myserver.com/video600k.mp4" if you wanted to send multiple bitrate videos.
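On the sending side, here is a minimal Node.js/TypeScript sketch (Node 18+ for the built-in fetch; the IP address and channel ID below are assumptions, though a sideloaded development channel does have the ID "dev"):

// Launch a Roku channel via ECP, passing video URLs as launch parameters.
// ECP listens on port 8060: POST /launch/<channelId> starts the channel,
// while POST /input delivers roInput events to an already-running channel.
const ROKU_IP = "192.168.1.50"; // assumption: your Roku's LAN address
const CHANNEL_ID = "dev";       // assumption: a sideloaded dev channel

const params = new URLSearchParams();
params.append("vidurl", "http://myserver.com/video300k.mp4");
params.append("vidurl", "http://myserver.com/video600k.mp4");

await fetch(`http://${ROKU_IP}:8060/launch/${CHANNEL_ID}?${params}`, {
  method: "POST",
});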
There are plenty of examples of how to play video on a Roku in the Roku SDK, the simplest being the simplevideoplayer example.
As to the last part of the question re UPnP: you can find a Roku on your LAN either by brute force (trying port 8060 on every IP) or by using SSDP, which is also documented in the ECP guide linked above.
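For example, a minimal SSDP discovery sketch in Node.js/TypeScript (the search target roku:ecp is the one Roku documents; the rest is standard SSDP):

import dgram from "node:dgram";

// Multicast an M-SEARCH request; each Roku on the LAN replies with a
// LOCATION header pointing at its ECP endpoint (http://<ip>:8060/).
const SSDP_ADDR = "239.255.255.250";
const SSDP_PORT = 1900;
const request = Buffer.from(
  "M-SEARCH * HTTP/1.1\r\n" +
  `HOST: ${SSDP_ADDR}:${SSDP_PORT}\r\n` +
  'MAN: "ssdp:discover"\r\n' +
  "ST: roku:ecp\r\n" +
  "MX: 3\r\n\r\n"
);

const socket = dgram.createSocket("udp4");
socket.on("message", (msg, rinfo) => {
  console.log(`Response from ${rinfo.address}:`);
  console.log(msg.toString());
});
socket.send(request, SSDP_PORT, SSDP_ADDR);
setTimeout(() => socket.close(), 3000); // stop listening after 3 seconds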
I want to create an audio, video, and text message chat. Is it possible using WebRTC, or does it only allow audio and video chats?
One side of my app will be implemented in the browser; the other using the C++ native API.
Does anyone have examples in native C++ API and/or javascript?
The WebRTC specification is still very much in flux, but there's a DataChannel API in the spec that is implemented in an early form in both Firefox and Chrome. DataChannels are intended to allow you to send arbitrary bytes between peers, and the spec provides for both reliable (TCP-like) and unreliable (UDP-like) channels.
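A minimal sketch of that API (signaling is omitted; you still have to exchange the offer/answer and ICE candidates through your own server):

// Create a peer connection and a reliable, ordered channel for text chat.
const pc = new RTCPeerConnection();
const chat = pc.createDataChannel("chat"); // "chat" is an arbitrary label

chat.onopen = () => chat.send("hello from the browser");
chat.onmessage = (event) => console.log("peer says:", event.data);

// An unreliable, UDP-like channel is a matter of init options:
const lossy = pc.createDataChannel("telemetry", {
  ordered: false,
  maxRetransmits: 0, // never retransmit lost messages
});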
I am not sure if WebRTC by itself allows for text chatting. I was able to successfully create an Android application that performed all of this, but only with the combination of Google's Libjingle and WebRTC libraries. The Libjingle library ships several example programs that demonstrate its functionality. The call example in Libjingle sounds very similar to what you are wanting to do, and is what I built my Android application from. The only thing is I have not yet ported it to a web browser, so I am not sure if Libjingle will work with that.
I have begun looking into it, and I have found some people on the WebRTC discussion group who have developed a very nice multi-user video chat application for the browser, built using WebRTC. It is capable of video (along with voice) communication as well as text chatting. I do not know if this matters, but it all occurs within a single interface (it does not seem to allow for text-only, voice-only, or video-only communication). I am sure it would not be too difficult to separate them out if you wanted or needed to. They have posted all of their code on GitHub and seem to be actively updating and improving it.