Cocoa server with user-friendly automatic port forwarding or external IP lookup - objective-c

I am coding a Mac app that will act as a server, serving files to each user's mobile device.
The issue with this, of course, is getting the actual IP/port of the server host, as it will usually be inside a home network. If the IP/port changes, it's no big deal, as I plan to send that info to a middleman server first and have my mobile app get the info from there.
I have tried UPnP with https://code.google.com/p/tcmportmapper/, but even though I know my router supports UPnP, the library does not work as intended.
I even tried running a TURN server on my Amazon EC2 instance, but I had a very hard time figuring out what messages to exchange with it to get the info I need.
Since last night I've been experimenting with Google's libjingle, but I'm having a hard time even getting the provided iOS example to run.
Any advice on getting this seemingly difficult task accomplished?

The port of your app will not change. The IP change can be handled by posting your server's IP to a web service every hour, or whatever time period you want.
Your server should request a URL like http://your-web-service.com/serverip.php?ip=your-updated-ip and have serverip.php handle the rest (put it into a MySQL DB or something).
When your client starts, it should ask your site for the IP and then connect to your server with that.
This is a pretty common way of handling this type of thing.
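For illustration, here is a minimal Objective-C sketch of both halves of that scheme. The URL, the parameter name, and the plain-text response format of serverip.php are assumptions, not a real service:

#import <Foundation/Foundation.h>

// Mac app (server side): report the current external IP to the
// middleman web service. Endpoint and parameter are hypothetical.
static void ReportExternalIP(NSString *externalIP) {
    NSString *urlString = [NSString stringWithFormat:
        @"http://your-web-service.com/serverip.php?ip=%@", externalIP];
    NSURLSessionDataTask *task = [[NSURLSession sharedSession]
        dataTaskWithURL:[NSURL URLWithString:urlString]
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
          if (error) {
              NSLog(@"Failed to report IP: %@", error);
          }
      }];
    [task resume];
}

// Mobile app (client side): ask the web service for the server's current
// IP before connecting. Assumes the service returns the bare IP string.
static void FetchServerIP(void (^completion)(NSString *ip)) {
    NSURL *url = [NSURL URLWithString:@"http://your-web-service.com/serverip.php"];
    NSURLSessionDataTask *task = [[NSURLSession sharedSession]
        dataTaskWithURL:url
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
          NSString *ip = nil;
          if (data) {
              ip = [[NSString alloc] initWithData:data
                                         encoding:NSUTF8StringEncoding];
          }
          completion(ip);
      }];
    [task resume];
}

The Mac app would call ReportExternalIP from an hourly NSTimer; alternatively, serverip.php can simply record the source address of the incoming request, so the app never has to discover its own external IP at all.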

Related

WebLogic server breach help! How do I find signs of what data, if any, was accessed?

A WebLogic server got hacked, and the problem has now been removed.
I am looking through the infected VMs in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, and so we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging on. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on the WebLogic hosts as well as the PIA logs.
I am not sure how to identify what data, if any, was accessed, and I would like to find out what information or data, if any, went out of our network.
What should I be looking for?
Is there anything I can learn from looking at the WebLogic servers running on Red Hat?
I would suspect that SSH was not the only service being hammered, and that this was largely an attempt to keep eyes on the auth logging while an attempt was made on other services.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells open among the network connections? New users created on services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or to another service, application, or plugin?
Create yourself an Excel spreadsheet and start a timeline.
Look at all the OS-level logging possible, make note of anything that looks suspicious, and then follow each breadcrumb to exhaustion.

Is there a way to combine WebSockets and normal HTTP through Apache?

So I have this server where I host more than one website for professional purposes. But I also like to develop game websites, and I would like to create a roguelike game with HTML5.
The game engine itself would be developed in C++ on the server, and the client would ask the server what changed in the environment after every move.
So, normally, I would send an AJAX request to the server, where Apache would route the request to my C++ application running as a FastCGI service. My C++ application would check the session, verify that the move is valid, update the internal state so that the character moves, change other things in the environment as needed, and then send the changes back to the client.
But AJAX requests can be relatively slow, opening and closing connections all the time. So when I read about WebSockets, I thought I was in heaven, until I saw that they interfere with Apache, and Apache is not really optimized to work with them.
Obviously, I could create a WebSocket on a different port, but with all those firewalls out there, I don't think that's a good option.
So, is there a way to combine the two, where Apache understands that a WebSocket request should be left alone and passed on to my application instead?

NSURLSession - The request timed out

I'm posting data from my app to my server using NSURLSession when a button is pressed. I can successfully send the data to my server and insert it into a database on the first two attempts, but any time after that, the request times out.
I've tried changing the session configuration (connections per host, timeoutInterval, etc.), changing the session configuration type, and changing the way the data is posted.
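For reference, a rough sketch of what that setup looks like; the endpoint and the values here are placeholders, not my real code:

NSURLSessionConfiguration *config =
    [NSURLSessionConfiguration defaultSessionConfiguration];
config.timeoutIntervalForRequest = 30.0;   // per-request timeout (seconds)
config.timeoutIntervalForResource = 60.0;  // whole-transfer timeout (seconds)
config.HTTPMaximumConnectionsPerHost = 1;  // one of the values I've tried varying
NSURLSession *session = [NSURLSession sessionWithConfiguration:config];

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/insert.php"]]; // placeholder URL
request.HTTPMethod = @"POST";
request.HTTPBody = [@"key=value" dataUsingEncoding:NSUTF8StringEncoding];

[[session dataTaskWithRequest:request
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (error) {
        NSLog(@"POST failed: %@", error); // "The request timed out" shows up here
    }
}] resume];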
Has anyone seen this sort of behaviour before and know how I can fix this issue?
Or is it a server issue? I thought my server was down initially; I couldn't connect to it or load certain pages. However, it was only down for me. After rebooting my modem, I could connect to the server again. I didn't have any issues connecting to phpMyAdmin.
If the problem was reproducible after a reboot of the router, then I would look into whether Apple's captive portal test servers were down at the time.
Otherwise, my suspicion is that it is a network problem rather than anything specific to your app.
It is quite possible that the pages you were loading successfully were coming from cache.
Because you said that rebooting your modem fixed the problem, that likely means that your modem stopped responding to either DHCP requests or DNS lookups (either for your domain or for one of the captive portal test domains).
It is also possible that you have a packet loss problem, and that it gets worse the longer your router has been up and running. This could cause some requests to complete and others to fail.
Occasionally, I've seen weird behavior vaguely similar to this when ICMP is getting blocked too aggressively.
I've also seen this when a stateful firewall loses its mind and forgets the state.
This can also be caused by keeping HTTP/HTTPS connections alive past the point at which the server gives up and drops the connection, if your firewall is blocking the packet that tells you that the connection was closed by the remote end.
But without a packet trace, there's no way to be certain. To get one:
If your network code is running on OS X, you can just do this with tcpdump on your network interface.
If you are doing this on iOS, you can do this by connecting your computer via wired Ethernet, enabling network sharing over Wi-Fi, pointing tcpdump at the bridge interface, and pointing your iPhone at that Wi-Fi network.
Either way, that will tell you if there are requests going out that never come back, and more importantly, what type of requests they are, and who is responsible for replying to them. Once you have that information, if the source of the problem isn't obvious, post a link to the packet trace and we'll add more suggestions.
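If capturing a trace isn't practical right away, a rough in-app complement is NSURLSession's task metrics, which at least show which phase of a slow request stalled. A minimal sketch, assuming iOS 10 / macOS 10.12 or later and that you create the session with a delegate (the class name is hypothetical):

// Logs timing for every task so you can spot requests that go out
// and never come back. Requires iOS 10+ / macOS 10.12+.
@interface TimingLogger : NSObject <NSURLSessionTaskDelegate>
@end

@implementation TimingLogger
- (void)URLSession:(NSURLSession *)session
              task:(NSURLSessionTask *)task
didFinishCollectingMetrics:(NSURLSessionTaskMetrics *)metrics {
    NSLog(@"task took %.2f s", metrics.taskInterval.duration);
    for (NSURLSessionTaskTransactionMetrics *t in metrics.transactionMetrics) {
        // A nil date below means that phase never completed.
        NSLog(@"connect start: %@, request start: %@, response end: %@",
              t.connectStartDate, t.requestStartDate, t.responseEndDate);
    }
}
@end

Create the session with [NSURLSession sessionWithConfiguration:config delegate:[TimingLogger new] delegateQueue:nil] and the callback fires once per completed task. It won't replace a packet trace, but it narrows down where the time is going.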

Slow web response on first request

I have a website, deployed on 2 identically configured servers - Ubuntu 14.04 / Apache 2 / MySQL / PHP. One is in a VM, the other is a physical box. Both servers behave the same.
The first request to a web page times out when sent from inside the local network, but the site responds fine from outside. So if I click on a link or a menu item on the web page, or call up a web page from a browser, it times out. If I then repeat the request, it responds immediately, as do all subsequent requests, unless I leave it alone for over 20 seconds, in which case the next request times out again. If I click on one link, wait 2 or 3 seconds, then click on the same or another link, it responds. If I click a link, then click a link in another browser after 2 or 3 seconds, it responds instantly.
My router is set up to redirect requests from outside to the same server. When I make a request to the public address remotely, it always responds instantly, with no latency. This shows it's not the disk, or application pools, or whatever else may take some time to spin up; it's something to do with accessing it locally. The same thing also happens with telnet, MySQL Workbench, and FTP on both machines. There is nothing unusual in the Apache logs; it seems the first request just doesn't get there.
I think it's probably my network config. I have a reason for the Ubuntu servers to be on a separate subnet, but I'm currently combining them. The servers have static IPs at 192.168.0.10 and .11, with a mask of 255.255.254.0.
I'm accessing them from machines on the 192.168.1.x network, also with a mask of 255.255.254.0, so with that /23 mask both ranges sit in the same subnet (192.168.0.0 to 192.168.1.255). Pings seem to go both ways instantly. It's really frustrating trying to test web updates when first the FTP has to be done twice and then the clicks have to be done twice if I leave it for more than 20 seconds.
Not many views of the question so probably nobody is interested anyway, but I found the answer.
I had a VMware virtual network set up at 192.168.0.0 for when I'm on a train (or at least not at home), so that I can communicate with my VM server by connecting the VM's network adapter to the virtual network instead, preserving the static IP address. Even when I have the VM connected directly to the home network, the virtual network is still active on the PC, which means there were two separate networks covering that range; that obviously confuses things and takes a while to sort out. I guess the PC has to wait for one network to time out before trying the other. Anyway, disabling the VMware virtual network when I'm at home sorts out the problem.

All Google API Calls From Our Office Time Out

We have a small office with 20+ computers that are about an 80/20 split of Macs vs. PCs. I am a web developer by trade who manages our little network, but I am by no means a networking/DNS expert.
That being said, we are having trouble in that every single website we visit (stackoverflow.com included) that makes a call to a Google API takes forever to load. They all get stuck with a status bar message such as "Connecting to fonts.googleapis.com, ajax.googleapis.com, developers.google.com, etc." Eventually the API call times out and the site loads without it. Sometimes we get a pop-up error that "accounts.google.com" failed to respond. In fact, when we finally get Stack Overflow to load, this message is at the top of the page: "Stack Overflow requires external JavaScript from another domain, which is blocked or failed to load."
This seems to be happening only on our internal network. For instance, we can connect laptops, phones, and tablets to LTE/mobile networks, and they load the same sites fine.
Oddly enough, google.com itself loads fine, as do Gmail and Google Docs.
When I ping 'fonts.googleapis.com' from both inside the network and from our firewall I get "Request timed out" for 'googleapis.l.google.com' [74.125.70.95].
I have tried deleting all Google entries from our DNS server, an old Windows 2003 Small Business Server, which sometimes results in 'googleapis.l.google.com' getting a different IP address from our ISP, and that alleviates the issue temporarily. But it seems that eventually this same IP of 74.125.70.95 gets attached to the API URL and we're back in the same boat.
I have also tried pointing our Win2003 SBS server's own DNS away from our ISP's address to both OpenDNS and Google's own DNS servers, but this hasn't helped.
This has been happening for about a month.
Any ideas?
I stumbled on this article:
http://www.sophos.com/en-us/support/knowledgebase/2450/2750/4350/120934.aspx
Essentially, it details something I hadn't thought about: my firewall's country-blocking feature. Even though the particular IP I had trouble with seemed to belong to Google here in the US, it may have been routed through China (or my firewall's IP address tables are outdated), so the traffic was being blocked.
I've adjusted the firewall rules to allow this IP and all is well.