Objective-C - Switching from WiFi to 3G and vice versa

I'm developing an iPhone app that uses the network. The iPhone communicates with my server via HTTP requests and should work on both WiFi and 3G.
I currently use NSURLConnection's initWithRequest: to send asynchronous requests to my server and receive responses (though I will soon move to the ASIHTTPRequest library).
One thing I'm not sure about: what happens when the user's device switches from WiFi to 3G (or vice versa)?
I know that the same connection can't be used and a new connection must be established.
But do I need to do something to handle this situation, or does NSURLConnection handle it automatically?
For example, if I send a request and the connection changes in the middle of receiving the response, what happens to that request?
I saw Apple's Reachability example code, and I know I can use Reachability to detect network changes, but I'm not sure how to handle those changes.

When the connection changes from 3G to WiFi (or vice versa), the in-flight request will fail, and yes, you will need to do something to re-establish the connection (restart, resume, etc.).
Having said that, I'm not certain, but I believe ASIHTTPRequest has a mechanism to recover from this when you, for example, request a file download.
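As a rough illustration of that handling, here is a minimal sketch assuming Apple's Reachability sample class and a plain NSURLConnection delegate; the reachability and pendingRequest properties and the sendRequest: method are placeholders for your own code:

    #import "Reachability.h"   // Apple's sample class

    // Register for change notifications once, e.g. at app launch.
    - (void)startReachabilityMonitoring {
        self.reachability = [Reachability reachabilityForInternetConnection];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(reachabilityChanged:)
                                                     name:kReachabilityChangedNotification
                                                   object:nil];
        [self.reachability startNotifier];
    }

    // NSURLConnection delegate: a WiFi/3G switch mid-transfer surfaces here as an
    // error (often NSURLErrorNetworkConnectionLost), so remember the request
    // instead of treating it as a permanent failure.
    - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
        self.pendingRequest = connection.originalRequest;
    }

    // When the network is reachable again (now on the other interface), retry.
    - (void)reachabilityChanged:(NSNotification *)note {
        if ([self.reachability currentReachabilityStatus] != NotReachable
            && self.pendingRequest != nil) {
            [self sendRequest:self.pendingRequest];   // re-issue on the new interface
            self.pendingRequest = nil;
        }
    }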

Related

Why is the IP address of my remote ice candidate the same as my (Janus) WebRTC signaling server?

I am trying to make video calls with WebRTC and Janus. I am able to make calls using the video call demo page supplied by Janus as well as through an iOS app - this is all working perfectly fine.
However, when inspecting the network flow through both Wireshark and chrome://webrtc-internals/, the connection does not seem to go directly to the public IP of the other device. Instead the data is directed to my Janus signaling server. It seems that the IP of the remoteIceCandidate is equal to the IP of my signaling server - shouldn't this be equal to the public IP of device 2?
Is this correct behavior or not? If so, why is the remote IP not equal to the public IP of device 2? If not, what am I doing wrong?
This is correct behavior, and the mistake was on my part. The Janus video call plugin documentation says the following:
The idea is to provide a similar service as the well known AppRTC demo (https://apprtc.appspot.com), but with the media flowing through a server rather than being peer-to-peer.
Therefore, the media data is supposed to go to the server instead of over a peer-to-peer connection.

NSURLSession - The request timed out

I'm posting data from my app to my server using NSURLSession when a button is pressed. I can successfully send the data to my server and insert it into a database the first two times, but any time after that, the request times out.
I've tried changing the session configuration (connections per host, timeoutInterval, etc.), changing the session configuration type, and changing the way the data is posted.
Has anyone seen this sort of behaviour before and know how I can fix this issue?
Or is it a server issue? I thought my server was down initially. I couldn't connect to it, nor load certain pages. However, it was only down for me. After rebooting my modem, I could connect back to the server. I didn't have any issues connecting to phpMyAdmin.
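For concreteness, the posting code follows roughly this pattern (a simplified sketch; the URL, body, and configuration values below are placeholders, not the real ones):

    // Simplified sketch of the POST; URL, body and values are placeholders.
    NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
    config.timeoutIntervalForRequest = 30.0;      // the kind of setting tweaked above
    config.HTTPMaximumConnectionsPerHost = 1;     // another one

    NSURLSession *session = [NSURLSession sessionWithConfiguration:config];

    NSMutableURLRequest *request =
        [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/insert.php"]];
    request.HTTPMethod = @"POST";
    request.HTTPBody = [@"field=value" dataUsingEncoding:NSUTF8StringEncoding];

    [[session dataTaskWithRequest:request
                completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil) {
            // After the first couple of successful posts, this fires with
            // NSURLErrorTimedOut (-1001) in the scenario described above.
            NSLog(@"POST failed: %@", error);
        }
    }] resume];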
If the problem was reproducible after a reboot of the router, then I would look into whether Apple's captive portal test servers were down at the time.
Otherwise, my suspicion is that it is a network problem rather than anything specific to your app.
It is quite possible that the pages you were loading successfully were coming from cache.
Because you said that rebooting your modem fixed the problem, that likely means that your modem stopped responding to either DHCP requests or DNS lookups (either for your domain or for one of the captive portal test domains).
It is also possible that you have a packet loss problem, and that it gets worse the longer your router has been up and running. This could cause some requests to complete and others to fail.
Occasionally, I've seen weird behavior vaguely similar to this when ICMP is getting blocked too aggressively.
I've also seen this when a stateful firewall loses its mind and forgets the state.
This can also be caused by keeping HTTP/HTTPS connections alive past the point at which the server gives up and drops the connection, if your firewall is blocking the packet that tells you that the connection was closed by the remote end.
But without a packet trace, there's no way to be certain. To get one:
If your network code is running on OS X, you can just do this with tcpdump on your network interface.
If you are doing this on iOS, you can do this by connecting your computer via wired Ethernet, enabling network sharing over Wi-Fi, pointing tcpdump at the bridge interface, and pointing your iPhone at that Wi-Fi network.
Either way, that will tell you if there are requests going out that never come back, and more importantly, what type of requests they are, and who is responsible for replying to them. Once you have that information, if the source of the problem isn't obvious, post a link to the packet trace and we'll add more suggestions.

Use wireshark to detect problems with webRTC

I started working this summer, and the first task I've been given is to use Wireshark to understand why an application that uses WebRTC doesn't use the TURN server.
Can you help me understand which steps I should take to narrow down where the problem is?
I have already run Wireshark and only see STUN protocol packets, bound to a UDP connection.
TURN is a STUN extension so you will only see STUN packets in Wireshark.
You can easily test WebRTC+TURN in isolation using this sample from the WebRTC project. Remove the default STUN server and add the URL and credentials for your own TURN server.
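(The demo page itself is JavaScript; purely for illustration, the same configuration expressed with the WebRTC Objective-C framework looks roughly like the sketch below, where factory is an existing RTCPeerConnectionFactory and the TURN URL and credentials are placeholders for your own server's values.)

    // Illustrative sketch only: configure a peer connection with just your
    // TURN server, so every non-host candidate has to come through it.
    RTCIceServer *turnServer =
        [[RTCIceServer alloc] initWithURLStrings:@[ @"turn:turn.example.com:3478" ]
                                        username:@"user"
                                      credential:@"secret"];

    RTCConfiguration *config = [[RTCConfiguration alloc] init];
    config.iceServers = @[ turnServer ];   // no default STUN server, only TURN

    RTCMediaConstraints *constraints =
        [[RTCMediaConstraints alloc] initWithMandatoryConstraints:nil
                                              optionalConstraints:nil];
    RTCPeerConnection *pc =
        [factory peerConnectionWithConfiguration:config constraints:constraints delegate:self];
    // Once an offer is created and set as the local description, gathered
    // candidates arrive in -peerConnection:didGenerateIceCandidate:; a reachable,
    // correctly authenticated TURN server yields candidates of type "relay".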
Fire up Wireshark and start capturing.
Click the "gather candidates" button on that page. You should see candidates of type host at the very least. If the browser can reach the TURN server, you should usually also see candidates of type srflx.
If the TURN server is working and your credentials are valid, then you will get candidates with type relay. But you probably wouldn't be asking then.
Now go back to Wireshark. Set the display filter to 'stun'. You should see some packets sent to the IP address of the TURN server. Right-click on one of them, then choose 'Follow' and 'UDP Stream'. That should show you all the packets between the browser and the TURN server.
You should be seeing binding requests (message_type=0x001) as well as binding success responses (message_type=0x101) from the server. If you don't see those, your TURN server is not responding or something is blocking the client. You will also not get srflx candidates on the candidate-gathering demo page.
You should also see packets Wireshark interprets as 'allocate request udp' (the message type is 0x003). These are the important ones for TURN.
You should see an error from the TURN server with a message type of 0x113 and an error code of 401 (unauthorized), because the first packets contain no username attribute. In response to those, the browser will start sending allocate requests that contain both a username and a message-integrity attribute.
If things go well, those should be answered with an allocate success response (message type=0x103) indicating a xor-relayed-address.
If not, and you see more 401 errors, that usually means your username or password is wrong.
You might also find the webrtcHacks articles on using Wireshark to reverse-engineer Amazon Mayday and WhatsApp useful.
The WebRTC project has some notes on Wireshark, too.

WebRTC HowTo PeerConnection via LAN with 2 Browsers

For a few days now I've been trying to build a basic WebRTC video chat. I've got some demos running locally, even via LAN. But now I want to build one on my own, starting from the very basics, without all the overhead that some demos come with.
But I still don't get a complete peer connection.
E.g. this example seems to be broken, because I can't call createSignalingChannel(); w3.org/TR/webrtc/#simple-example
Some other examples (https://webrtc-experiment.appspot.com/) want me to link their scripts, but I won't do this, because I want to understand the magic of the peer connection and how to get a handshake between 2 browsers.
I also explored examples with the Google App Engine, but that's not what I want.
I want to run it in really easy JS and HTML, just the minimum of what is necessary.
Here is my code:
https://github.com/mexx91/basicVideoRTC EDIT: Should work now
So what do I have to add to get a handshake and a peer connection, so that we can send e.g. the mediaStream to each other?
Thanks a lot!
createSignalingChannel() is only pseudo-code to illustrate the existence of a separate channel. You need a separate message channel for the initial connection handling.
You can achieve that with hosted services like Pusher, Brightcontext or PubNub, or you can host your own backend with open-source projects like socket.io or SignalR.
Then you just need to send the offers, answers and iceCandidates through your separate channel.
List of Realtime Services: http://www.leggetter.co.uk/real-time-web-technologies-guide
Imagine a video-conferencing web app that users A and B originally access from some web server. Suppose the web app supports presence, so the web server knows who's currently online, and the UI allows A to try to place a video call to B. Via, say, XMLHttpRequest(), A's browser informs the server that this is wanted, and B's JavaScript pops up something saying that A wants to call B. No WebRTC has happened at all yet. But at this stage, A can indirectly communicate with B by sending messages through the server, e.g. using XMLHttpRequest. In WebRTC parlance, this is the "signalling channel".
So A and B can both interact with their ICE agents to discover candidate addresses and SDP descriptions, and send these to each other, via the server, over this signalling channel. E.g. the web app on A calls a WebRTC API to get its ICE candidates and packages these up as it sees fit to send to B. B's browser receives this message from the server (e.g. over a WebSocket or long poll), unpacks it, and formats it as needed to hand to the ICE agent on B, using the RTCPeerConnection object. Similarly, the SDP offer/answer can be sent between the two apps and passed through to the ICE agents in the browsers, to agree on media formats etc.
At that stage, media connections can be set up by the browser. Media streams are added to the RTCPeerConnection initially; they aren't communicating yet, but they have attributes that can be queried to describe the codec etc. When the API is asked to create an SDP description, it does so using these attributes, but adjusts the IP address and port based on how the ICE agent on each local browser has figured out which addresses can reach that browser/port (NAT traversal).
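To make the offer/candidate exchange concrete, here is a hedged sketch. It uses the WebRTC Objective-C framework rather than the browser JavaScript API (the calls are analogous), and self.peerConnection and -sendToPeer: are placeholders for your own connection object and signalling transport (WebSocket, XMLHttpRequest-style polling, etc.):

    // Offer side: create an offer, set it locally, and push it over the
    // signalling channel; -sendToPeer: is a placeholder for your transport.
    - (void)startCall {
        RTCMediaConstraints *constraints =
            [[RTCMediaConstraints alloc] initWithMandatoryConstraints:nil
                                                  optionalConstraints:nil];
        [self.peerConnection offerForConstraints:constraints
                               completionHandler:^(RTCSessionDescription *offer, NSError *error) {
            if (error != nil) { return; }
            [self.peerConnection setLocalDescription:offer completionHandler:^(NSError *e) {
                [self sendToPeer:@{ @"type": @"offer", @"sdp": offer.sdp }];
            }];
        }];
    }

    // RTCPeerConnectionDelegate: every locally gathered ICE candidate is also
    // relayed to the other side through the signalling channel.
    - (void)peerConnection:(RTCPeerConnection *)peerConnection
        didGenerateIceCandidate:(RTCIceCandidate *)candidate {
        [self sendToPeer:@{ @"type": @"candidate",
                            @"sdpMid": candidate.sdpMid ?: @"",
                            @"sdpMLineIndex": @(candidate.sdpMLineIndex),
                            @"candidate": candidate.sdp }];
    }

The receiving side does the mirror image: setRemoteDescription: with the incoming offer, answerForConstraints:completionHandler: to create the answer, setLocalDescription: with that answer (sending it back over the channel), and addIceCandidate: for each candidate that arrives.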

Replicate logmein.com behavior for smart devices

I have several smart devices that run Windows CE5 with our application written in .NETCF 3.5. The smart devices are connected to the internet with integrated GPRS modems. My clients would like a remote support option, but VNC and similar tools don't seem to be able to do the job. I ran into several issues getting VNC to work. First, it has severe performance issues when run on the smart device. The second issue is that the internet provider has a firewall that blocks all incoming requests if they didn't originate from the smart device itself. Therefore I cannot initiate a remote desktop session with the smart devices, since the request didn't originate from the smart device.
We could get our own APN, however they are too expensive and the monthly cost is too great for the number of smart devices we have deployed. It's more economical for us if we can add development costs to the initial product cost, because our customers dislike high monthly costs and would rather pay a large sum up front instead. A remote support solution would also allow us to minimize our onsite support.
That's why we more or less decided to roll our own remote desktop solution. We have code for capturing images on the smart device that only grabs the data that has changed since the last cycle. What we need is a communication solution like logmein.com (which doesn't support WinCE5), where the smart devices connect to a server from which we can then stream the data to our support personnel's clients. Basically, the smart device initiates a connection to our server and starts delivering screen data when the server requests it. A support client connects to the server, gets a list of available streams, and then selects one to listen in on.
Any suggestions for how to do this, considering we have to build the solution in .NETCF 3.5 on the smart devices? We have limited communication experience beyond simple SOAP web services.
Since you're asking for a suggestion, I'll suggest this:
Don't reinvent. Reuse whatever you can. You can perform tunneling with SSH, so make an SSH connection (say, a port of PuTTY or plink, inside a loop) out via GPRS on your smart device; forward remote ports to local ports, bound to the SSH server's local address (127.0.0.1 (sshd):4567 => localhost (smart_device_01):4567). Your clients connect to your SSH server and access the assigned port for each device.
With that said, that's probably not the answer you're looking for. Below is the answer you're probably looking for.
Based on my analysis of how LogMeIn works, you'll want to make an HTTPS or TLS server where your smart devices will push data. Let's call it your tunnel server.
You'll probably want to spawn a new thread that repeatedly attempts to make connections to the tunnel server (outbound connections from smart device to the server, per your specified requirement). With a protocol like BEEP/BXXP, you can encapsulate and multiplex message-oriented or stream-oriented sessions. Wrap BXXP/BEEP into TLS, and tunnel through to your tunnel server. BEEP lets you multiplex streams onto one connection -- if you want the full capabilities of an in-house LogMeIn solution, you'll want to use something like this.
Once a connection is established, make a new BEEP session. With the new session, tell the tunnel server your system identification information (device name, device authentication signature). Write heartbeat data (timestamp periodically) into this new session.
Set up a callback (or another thread) which interfaces to your BEEP control session. Watch for a message requesting service. When such a request comes in, spawn the required threads to copy data from your custom remote-display protocol and push this data back through the same channel.
This sets the basic premise for your smart device's program. You can add functionality to this as you desire, say, to match what LMI's IT Reach subscription provides (remote registry, secure tunneled Telnet, remote filesystem, remote printing, remote sound... you get the idea).
I'll assume you know how to properly secure all of this for authentication and authorization of your clients (is user foo allowed to access smart device bar?).
On your tunnel server, start a server socket (listening for inbound connections, or from the perspective of smart devices, smart device outbound connections) that demultiplexes connections and sessions. Once a connection is opened, fire up BEEP and register a callback / start a thread to wait for the authentication/heartbeat session. Perform the required checks for AAA to smart devices -- are these devices allowed, are they known, how much does it cost, etc. Your tunnel server forwards data on behalf of your smart devices. For each BEEP session, attach a name (device name) to the BEEP session after the AAA procedures succeed; on failure, close the connection and let the AAA mechanism know (to block attackers). Your tunnel server should also set up what's required for interacting with the frontend -- that is, it should have the code to interact with BEEP to demultiplex the stream for your remote display data.
On your frontend server (can be the same box as the tunnel server), install the routine for AAA -- check if the user is known, if the user is allowed, how much the user should be charged, etc. Once all the checks are passed, make a secured connection from the frontend server to tunnel server. Get the device names that the tunnel server knows that the user is allowed to access. At this point, you should be able to get a "plaintext" stream, based on the device name, from the tunnel server. Forward this stream back to the user (via TLS, for example, or again via BEEP over TLS), or send the required configuration for your remote display client to connect to your tunnel server with the required parameters to access the remote display protocol's stream.