I'm trying to learn how to use WebRTC in applications, so I wrote a code sample, available at the following link: http://wklej.org/hash/fd599a32e8e/
To start with, I should say that I don't care about browser compatibility; all I need is to support the Chromium web engine, without any external adapters or other libraries.
The web application should let me establish a connection between two browser tabs running on the same host, by manually exchanging the appropriate data (SDP and ICE candidates).
Steps to follow (a rough sketch of the corresponding API calls is included after the list):
Click the "create offer" button and copy the local SDP;
Go to the other tab, paste the previously copied SDP into the "remote SDP" area, then press "create answer";
Copy the generated local SDP, go back to the first tab, paste it into the "remote SDP" area and click the "set remote sdp" button (not the "create answer" button);
Exchange ICE candidates: copy them from one tab, paste them into the second one and press the "addCandidates" button. Then do the same in the other direction.
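The sketch below shows roughly which API calls each step maps to (illustrative only; the button wiring and textarea names in the linked sample differ):

// Tab 1: "create offer"
var pc = new RTCPeerConnection();

function createOffer() {
  return pc.createOffer()
    .then(function (offer) { return pc.setLocalDescription(offer); });
  // the local SDP to copy is then available as pc.localDescription.sdp
}

// Tab 2: "create answer", after pasting tab 1's SDP into the "remote SDP" area
function createAnswer(remoteSdp) {
  return pc.setRemoteDescription({ type: 'offer', sdp: remoteSdp })
    .then(function () { return pc.createAnswer(); })
    .then(function (answer) { return pc.setLocalDescription(answer); });
}

// Tab 1 again: "set remote sdp", after pasting tab 2's answer
function setRemoteAnswer(remoteSdp) {
  return pc.setRemoteDescription({ type: 'answer', sdp: remoteSdp });
}

// Either tab: "addCandidates", one pasted candidate (as JSON) at a time
function addCandidate(candidateJson) {
  return pc.addIceCandidate(new RTCIceCandidate(JSON.parse(candidateJson)));
}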
The main problem is that this function:
peer.iceconnectionstatechange = function(event) {
console.log("ice connection state: " + peer.iceConnectionState)
}
won't be triggered. I tried playing with STUN/TURN servers without success, and the remote video won't run. Could somebody point out where I made a mistake?
Try mine (cut'n'paste): https://jsfiddle.net/7vv2vxtt/
Or automatic (localStorage): https://jsfiddle.net/2v1Lnpmx/
ICE candidates get added to the local offer/answer over time, so the fiddle code simply waits for end-of-candidates before producing SDP with all the candidates embedded.
Should work in all browsers.
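Roughly, the waiting part looks like this (a minimal sketch, not the exact fiddle code):

var pc = new RTCPeerConnection();

function createOfferWithAllCandidates() {
  return pc.createOffer()
    .then(function (offer) { return pc.setLocalDescription(offer); })
    .then(waitForEndOfCandidates)
    .then(function () {
      // every gathered candidate is now embedded in the SDP,
      // so a single copy/paste per side is enough
      return pc.localDescription.sdp;
    });
}

function waitForEndOfCandidates() {
  return new Promise(function (resolve) {
    pc.onicecandidate = function (event) {
      if (!event.candidate) resolve();   // a null candidate signals end-of-candidates
    };
  });
}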
tl;dr: what's the logic behind browsers (Chrome, FF, Safari) behaving as if an app were online after clicking Offline, rather than simply... going offline?
CPP, FOP, STP (Chrome Purple Pulse, Firefox Orange Pulse, Safari Transparent Pulse)
I have a small socket.io app that fetches from Twitter's API to build an image gallery.
I wanted to style the divs that create a frame around the photos, but found out, while the app was running, that when selecting the elements in the dev tools, whenever a new image was added, Chrome emits a "purple pulsing" (hereafter referred to as CPP) that kicked me out of the div I wanted to style and (rudely) put me at its parent div (the Gallery proper, if you will).
I started by shutting off my WiFi, which solved the problem, with two drawbacks:
I then remembered the Offline option in the Network panel;
I needed a connection to read the socket.io docs :~)
Next I tried the offline option and found that, like the production version, CPP reëmerged, the image requests logging net::ERR_INTERNET_DISCONNECTED.
I realized that I could probably set the option reconnection: false in the socket.io bit, but alas, this novella of a question (which contains multitudes) still beckoned:
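For reference, that option would look roughly like this on the client (a sketch; I haven't verified whether it changes the dev tools behavior at all):

// index.html: a sketch of turning off socket.io's automatic reconnection
var socket = io.connect('/', { reconnection: false });

socket.on('disconnect', function () {
  console.log('socket gone, and no reconnection attempts will be made');
});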
The Actual Question(s)
What, chez Google (and Firefox (Orange Pulse), Safari (Transparent Pulse), et al.), is the logic behind this behavior?
Why not Truly Sever the relevant tab's connection?
Better yet, why not let the poor developer both hold fast to their element and acknowledge visually that new elements are being thrown in?
The images are still fetched (!) which makes the Offline option seem even more misleading.
The docs from Google reference PWAs and those with service workers... does
"Check the Offline checkbox to simulate a completely offline network experience."
apply only to them?
The Code that Kinda Could:
Here are the ~20 relevant lines at play (and here's the whole gig):
// app.js (the requires are included here for completeness; config holds the Twitter API keys)
var fs = require('fs')
var Twit = require('twit')
var server = require('http').createServer(handler)
var io = require('socket.io')(server)

var T = new Twit(config)
var stream = T.stream('statuses/filter', { track: '#MyHashtag' })
stream.on('tweet', function(tweet) { io.sockets.emit('tweet', tweet) })

// serve index.html for every request
function handler(request, response) {
  var file = fs.createReadStream(__dirname + '/index.html')
  file.pipe(response)
}
... and the index.html's relevant script:
// index.html
var socket = io.connect('/');
socket.on('tweet', function(tweet) {
  // someConditions / foo stand in for the real filtering logic
  if (someConditions === foo) {
    tweet_container.innerHTML = '<img src="'
      + tweet.user.profile_image_url +
      '" />';
  }
});
Nota bene: I realize this question contains questions germane to polling, streams, networking, and topics whose names I'm not even familiar with, but my primary curiosity is: what's the logic behind behaving as an app that's online after clicking Offline and not simply... going offline (and behaving as it does when disconnecting from WiFi)?
P.P.S. Here's a quote from knee-deep in the socket.io docs:
If a certain client is not ready to receive messages (because of network slowness or other issues, or because they’re connected through long polling and is in the middle of a request-response cycle), if it doesn’t receive ALL the tweets related to bieber your application won’t suffer.
In that case, you might want to send those messages as volatile messages.
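On the server, that would presumably look something like the following (a sketch; whether it has any bearing on the dev tools behavior is exactly what I'm unsure about):

// app.js: a sketch of sending the stream's tweets as volatile messages, so a client
// that isn't ready to receive one simply misses it instead of having it queued up
stream.on('tweet', function (tweet) {
  io.sockets.volatile.emit('tweet', tweet);
});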
I am working on WebRTC peer-to-peer calling and successfully running the AppRTCDemo available on the WebRTC site. I have gone through the code and am stuck on a few points:
1: When I enter the URL, it hits the server and I get a response like:
response return from server{"meta_viewport":"","pc_constraints":"{\"optional\": [{\"googImprovedWifiBwe\": true}]}","opusfec":"true","include_vr_js":"","vsbr":"","audio_receive_codec":"opus\/48000","arbr":"","vrbr":"","vsibr":"","token":"AHRlWroqCeuFFBWs4dLJRahxtwho2ldeH_94M_ZipRkK7aIH3nAiSFfScjb_Opz2LwC9xVeWeQrJkRWQAeTsK5sxdJEPoC3jP8uQXkE23QnSANqoBwsHOM4","initiator":1,"ssr":"","room_key":"95505311","pc_config":"{\"iceServers\": [{\"urls\": \"stun:stun.l.google.com:19302\"}]}","stereo":"false","audio_send_codec":"","turn_url":"https:\/\/computeengineondemand.appspot.com\/turn?username=77294535&key=4080218913","me":"77294535","room_link":"https:\/\/apprtc.appspot.com\/?r=95505311&t=json","error_messages":[],"offer_constraints":"{\"optional\": [], \"mandatory\": {}}","asbr":"","media_constraints":"{\"audio\": true, \"video\": true}"}
Here, I just want to know where exactly they are creating the iceServer? On their server, or is there code for it inside their channel.html file?
Is there any way to generate the iceServer in the application without a server? Or is the iceServer just our STUN/TURN URL sent from the server?
I also have a few questions about channel.html:
How exactly does the channel.html file help this demo run? I have gone through it as well, and it calls onOpen(), which calls the GAECLIENT class method.
Thanks,
Whichever ICE server is going to be used is passed through to the RTCPeerConnection constructor when it is constructed (the object is called pc in the apprtc example). You can see the servers specifically by looking at the pcConfig object.
Once the connection is created (it is not done until a call starts in this example), the localDescription (an RTCSessionDescription object) is set. Once it is set, the WebRTC API will start automatically gathering ICE candidates against the ICE servers first introduced when the peer connection was created. Once a new candidate is created, the onicecandidate event is fired (you can see the function used to transfer the candidates if you look at that callback after the pc object is created).
So the general steps are as follows:
Set what iceServers you want to gather candidates against when you create your RTCPeerConnection object.
Set your localDescription to the local RTCSessionDescription object you create (usually created via the success callback you set in the createOffer or createAnswer functions of the peer connection).
It will start gathering candidates automatically against the servers you set when the peer connection was constructed, and with each candidate the onicecandidate event is fired.
Now, specifically for the apprtc demo page, it uses an open STUN server, stun:stun.l.google.com:19302, and an array of closed TURN servers (hosted on Google's cloud) with dynamic credentials that are gathered at page load.
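A minimal sketch of those steps (the STUN URL mirrors the pc_config in the response above; sendToPeer stands in for whatever signaling channel you use):

var pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// fired once for every candidate gathered against those servers
pc.onicecandidate = function (event) {
  if (event.candidate) {
    sendToPeer(JSON.stringify(event.candidate));   // placeholder signaling helper
  }
};

// gathering starts automatically once a local description has been set
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);
}, function (error) {
  console.log('createOffer failed: ' + error);
});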
I have a very frustrating problem with a client's network environment, and I'm hoping someone can lend a hand in helping me figure this out...
They have an app that for now is written entirely inside of VBA for Excel. (No laughing.)
Part of my helping them improve their product and user experience involved converting their UI from VBA form elements to a single WebBrowser element that houses a rich web app which communicates between Excel and their servers. It does this primarily via a socket.io server/connection.
When the user logs in, a connection is made to a room on the socket server.
Initial "owner" called:
socket.on('create', function (roomName, userName) {
  socket.username = userName;
  socket.join(roomName);
});
Followup "participant" called:
socket.on('adduser', function (userName, roomName) {
  socket.username = userName;
  socket.join(roomName);

  var request = require('request');
  // baseURL is the servlet endpoint defined elsewhere in the server
  var servletparam = roomName;
  request(baseURL + servletparam, function (error, response, body) {
    io.sockets.to(roomName).emit('messages', body);
  });

  servletparam = roomName + '|' + userName;
  request(baseURL + servletparam, function (error, response, body) {
    io.sockets.to(roomName).emit('participantList', body);
  });
});
This all worked beautifully well until we got to the point where their VBA code would lock everything up, causing the socket connection to get lost. When the client surfaces from its forced, VBA-induced pause (which lasts anywhere from 20 seconds to 3 minutes), I try to join the room again by passing an onclick to an HTML element that triggers a script to rejoin. Oddly, that doesn't work. However, if I wait a few seconds and click the object by hand, it does rejoin the room. Yes, the click is getting received from the Excel file... we see the message reach the socket server, but it doesn't allow that call to rejoin the room.
Here's what makes this really hard to debug. There's no ability to see a console in VBA's WebBrowser object, so I use weinre as a remote debugger, but a) it seems to not output logs and errors to the console unless I'm triggering them to happen in the console, and b) it loses its connection when socket.io does, and I'm dead in the water.
Now, for completeness: if I remove the .join() calls and the .to() calls, it all works as we'd expect, except that all messages get written into one big non-private room. So it's an issue with rejoining rooms.
As a long-time user of StackOverflow, I know that a long question with very little code is frowned upon, but there is absolutely nothing special about this setup (which is likely part of the problem). It's just simple emits and broadcasts (from the client). I'm happy to fill anything in based on followup questions.
To anyone that might run across this in the future...
The answer is to manage your room reconnection on the server side of things. If your client can't make reliable connections, or is getting disconnected a lot, the trick is to keep track of the rooms on the server side and join them when the client reconnects.
The other piece of this that was a stumper was that the chat server and the web UI weren't on the same domain, so I couldn't share cookies to know who was connecting. In their case there wasn't a need to have them hosted in two different places, so I merged them and had Express serve the UI. Then, when the client surfaced after a forced disconnect, I'd look at their user ID cookie, match them to the rooms they were in (which I kept track of on the server), and rejoin them.
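A rough sketch of what that looks like on the server (roomsByUser and getUserId are illustrative names, not the actual app's code):

var roomsByUser = {};   // userId -> list of rooms that user should be in

io.sockets.on('connection', function (socket) {
  var userId = getUserId(socket);   // e.g. parse the user ID cookie from socket.handshake.headers.cookie

  // put a reconnecting client straight back into the rooms we remembered for it
  (roomsByUser[userId] || []).forEach(function (room) {
    socket.join(room);
  });

  socket.on('create', function (roomName, userName) {
    socket.username = userName;
    socket.join(roomName);
    roomsByUser[userId] = (roomsByUser[userId] || []).concat(roomName);
  });
});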
I'm creating an application that uses NSNetService to publish a server, and I've come across the NSNetServiceListenForConnections option that can be used in OS X 10.9 with the publishWithOptions: method. This new option is highlighted in Apple's "What's New in OS X 10.9" page. It states: "If you send the NSNetServiceListenForConnections option flag in the options value passed to publishWithOptions:, OS X automatically handles all of the connection management for you." However, I don't see how this is new behavior. I currently just call the publish method and wait for the ServerAcceptCallBack, which is set by the CFSocketCreate method. It doesn't seem to make this any easier.
I'm following some of Apple's code from the CocoaEcho example, which gets a port and opens a CFSocket. I know you can pass 0 as the port parameter for the initWithDomain: name: port: method, but that chooses a "random" port, and I'm guessing that that's not a 100% safe thing to do. I thought that NSNetServiceListenForConnections might have something to do with that, but going by the description, it doesn't.
So to my actual question, after all the rambling:
What does the NSNetServiceListenForConnections option actually do, and (why) should I use it?
Side question: If I should use it, how do I check for availability? I've been told to use if (&NSNetServiceListenForConnections != NULL), but NSNetServiceListenForConnections is an NSUInteger so I can't get the address (via &)
With this option, you don't have to open or manage the sockets at all (i.e. no calling CFSocketCreate); it creates the socket for you. That said, I'm finding that in 10.9.2 it isn't closing the socket properly if you call stop on the net service (it did seem to close them in 10.9.0 and 10.9.1), but I'm still investigating. The socket seems to stay open until you quit the app.
I'm trying to automate the dialing process: dial any number, patch others onto the call, connect to a bridge at a scheduled time. I know how to do this while physically present in the office, but I need to automate it so that people can dial and get patched onto the call without being physically present in the office.
If anyone can help me with how I can dial a number on a VoIP phone from my PC, I would be able to do the rest (the automation part).
Any idea/suggestion will be highly appreciated, especially the first method, "Cisco IP Phone Services XML", which was suggested here: https://stackoverflow.com/questions/2517239/how-can-i-call-from-my-pc-through-my-cisco-ip-phone/. That one seems quite easy to implement, as my automation tool primarily works around XMLs.
I've never tested this with anything other than a SIP version of the Cisco 7940 series, so if you're using SCCP or MGCP then what I present below may or may not work :-)
1) You need to make sure that you have your VoIP phone correctly set up and requesting its config files from a TFTP server; helping you do that is beyond the scope of this reply.
2) Ensure that somewhere in one of those config files (usually SIPXXXXXXXXX.cnf, where XXXXXXXXX is the phone's MAC address) you have 3 lines that configure the device for telnet access (which is disabled by default). The lines should look like this:
telnet_level: 2
phone_prompt: myphone
phone_password: mypassword
The telnet level MUST be 2 (0 disables telnet, 1 makes it read-only), the phone prompt is whatever you want the prompt to be ('>', 'myphone ###', whatever), and the password is the password you'll use to log in.
3) Once you've made those changes to your phone restart it by either pressing '*', '6' and 'Settings' at the same time, or by power cycling it. When it reboots it should obey the new settings in your config.
4) Now point a telnet program at the IP address allocated to your phone and, if all's gone well, you should get asked for a password; enter the password and marvel at the internal world of your Cisco phone ;-)
5) There are a number of commands you can use now: typing ? and pressing return will give you help, and typing a command followed by ? will give you help on that command. Type test ? and press return and you should see the following:
Test Command Definitions
------------------------
onhook , hu - Handset Onhook
offhook , hd - Handset Offhook
key , ky - Simulate Keystrokes
open , op - Open the Test Session
close , cl - Close the Test Session
show , sh - Show Call Feedback
hide , hi - Hide Call Feedback
6) Issue the command:
test open
your phone should reply with:
TEST: Opening Session
you are now in test mode.
7) once in test mode, entering
test key <key>
will activate that key. If you enter
test key ?
the phone should reply with:
Test Key Names
--------------
0-9 # *
line1 line2 navup navdn volup voldn
soft1 soft2 soft3 soft4 serv info dir
msgs set headset spkr mute
Replace <key> above with any of those names to activate that key.
8) Once you're done, remember to call
test close
before disconnecting the telnet session.
I showed you the manual way here, but it doesn't take much to realize that you could easily script this from a PC or server that has access to the same subnet as the phone. I have a set of JSON services running on mine that allow my home security system to do things like call the police if an intruder is detected while I'm not home, or my web-based phone book to auto-dial numbers by clicking on a link.
All you need to know is the exact key sequence; then you can simply open the test console, send the key sequence, and close. ANY key that's pressable on the phone's front panel can be automated this way.
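For example, here's a rough Node.js sketch of driving that sequence over a raw TCP connection to the phone's telnet port. The IP address, password and digits are placeholders, and telnet option negotiation is ignored entirely, so treat it as a starting point rather than tested code:

// dial.js: rough sketch, scripting the phone's telnet test console from Node
var net = require('net');

var lines = [
  'mypassword',        // the phone_password from the config file
  'test open',
  'test offhook',
  'test key 5', 'test key 5', 'test key 5',
  'test key 1', 'test key 2', 'test key 3', 'test key 4',
  'test close'         // close the test session; the call itself stays up
];

var phone = net.connect({ host: '192.168.1.50', port: 23 }, function () {
  // send one line every half second so the phone can keep up
  lines.forEach(function (line, i) {
    setTimeout(function () { phone.write(line + '\r\n'); }, 500 * (i + 1));
  });
  setTimeout(function () { phone.end(); }, 500 * (lines.length + 2));
});

phone.on('data', function (chunk) {
  console.log(chunk.toString());   // echo the phone's replies for debugging
});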