We have a WildFly 8.2 instance running on virtualized Ubuntu 14.04 behind a firewall (against DoS attacks, etc.) in a DMZ. (About 1200-3000 requests per hour.)
With Safari, the download of some resource files often fails (roughly every second time; see screenshot, all files are stored locally), while there is rarely a problem with other browsers (Chrome, Firefox).
Is there any plausible reason why Safari behaves differently from the other browsers?
Has anybody had similar problems, perhaps related to some firewall setting?
Is there any other hint as to where we could start looking for the cause (implementation, router, lack of resources...)?
I know the question is a little imprecise; that's probably why it was voted down. But I'll post what the solution was anyway.
On the server side we ran a
tcpdump -i eth0 -n -A dst port 80 | grep 'specificUrlPath'
On some other (client) machine we issued a
curl -X POST http://hostname/specificUrlPath
So we saw that the request did not always reach the server network interface and knew that there had to be a problem with the network in between.
The cause of the problem was that NAT was enabled on the router for the server machine, and the router's NAT implementation was evidently unable to manage that many connections. As soon as NAT was switched off, everything worked as it should.
I am not sure why requests from some browsers were more likely to be served than those from others, but I guess this is also down to the specific router software.
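A rough sketch of how the two commands above can be combined to quantify the loss (hostname and URL path are the placeholders from above; 100 requests is an arbitrary sample size):

# server: capture matching traffic to a file while the client loop runs
$ sudo tcpdump -i eth0 -n -A dst port 80 > /tmp/requests.txt
# client: fire 100 identical requests
$ for i in $(seq 1 100); do curl -s -o /dev/null -X POST http://hostname/specificUrlPath; done
# server: stop tcpdump (Ctrl-C), then count how many requests actually arrived
$ grep -c 'specificUrlPath' /tmp/requests.txt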
Related
I'm working with a WebRTC stack that consists of a (firewall-enabled) embedded Linux device, an iOS mobile app, and self-maintained signaling, STUN, and TURN servers.
In 99% of network configurations, the setup works just fine. However, when the embedded Linux device is connected to a Verizon Jetpack (4G LTE), the device cannot establish a WebRTC connection with the mobile app (regardless of whether the mobile phone is connected to the Jetpack or some other network).
In an effort to debug, I took down the entire firewall on both IPv4 and IPv6, but it made no difference.
Then, I kind of randomly discovered that if I add a masquerading post-routing IPv4 rule to the device's NAT table, it starts working! Specifically, this is the iptables command that I used:
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
Why would this rule get the WebRTC connection working? And is there a more proper way to achieve the same result? The rule above seems too liberal.
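One less liberal variant (a sketch only - the 192.168.1.0/24 subnet and the UDP restriction are assumptions, not part of the original setup) would masquerade just the media traffic from the device's own subnet:

$ sudo iptables -t nat -A POSTROUTING -o wlan0 -s 192.168.1.0/24 -p udp -j MASQUERADE   # NAT only UDP from the local subnet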
I found this question because I was also experiencing a WebRTC ICE timeout when trying to connect to a device on the Jetpack's network. I don't know anything about iptables or firewall/DNS/NAT configurations, but your discovery gave me a clue that it must be some settings in the MiFi itself.
Looking at http://my.jetpack (the Jetpack's web config page/app/thing), I discovered a setting labelled something like "Enable passthrough VPN", which defaulted to true/on. Toggling it to false/off appears to have fixed the connection issue for me.
I'm not 100% sure this is the "real" issue since it seems like two devices on the same VPN should be able to connect. Hopefully it gives you a little bit of a clue in your own search.
I have a CentOS LAMP web server and have finished 97% of the development work. I can test my website anywhere on the LAN, and it loads correctly in different browsers on different machines.
However, when I try wget on the web server itself, it times out for some reason. Please point me in the right direction, as some functionality is not working the way it was designed because of this.
Thanks!
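A first diagnostic sketch (your-hostname is a placeholder): compare a loopback request with a hostname request from the server itself, to separate DNS problems from firewall/routing problems.

$ wget -O /dev/null http://127.0.0.1/        # bypasses DNS entirely
$ wget -O /dev/null http://your-hostname/    # replace with the name your site actually uses
$ getent hosts your-hostname                 # what does the server itself resolve the name to?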
We have a customized Flash/HTML5 video player we use for users on our site. I'm currently fleshing out the experience for users who have 'suboptimal' bandwidth--basically we'd like the client side code to be able to detect poor user experience due to excessive buffering. I would like to test this "poor bandwidth" handling code in my local development environment.
Does anyone know of good techniques for simulating "poor bandwidth" in a local environment for testing purposes?
More specifically I have my local browser connecting to a virtual machine with instances of uWSGI, nginx, and python/django and I would like to be able to inject arbitrary amounts of delay into the delivery of content from these systems. (I'm primarily concerned with doing this with nginx, which does the video content delivery/streaming).
EDIT: It may be relevant that the dev environment is Mac OS X.
Just use nginx's configuration.
While OS X Lion's Network Link Conditioner works as expected it's still annoying to use when I'm really just trying to test a subset of a web app's behavior--i.e., the slow video buffering handling system.
As such, I've found it much more convenient to set rate limiting in my nginx.conf file, e.g.:
location ~ /files/(.*\.(mp4|m4v|mov))$ {
...
limit_rate 50k; # <-- limit download rate per connection to 50 kB/s (note: nginx rates are bytes per second, not bits)
...
}
EDIT: See the nginx HttpCoreModule docs.
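If you want playback to start quickly and then degrade (closer to real-world buffering), nginx also has limit_rate_after; a sketch of the combination (the 500k threshold is arbitrary):

location ~ /files/(.*\.(mp4|m4v|mov))$ {
    limit_rate_after 500k;  # send the first 500 kB at full speed...
    limit_rate       50k;   # ...then throttle to 50 kB/s per connection
}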
FreeBSD is an ancestor of Mac OS X, so you can use the powerful built-in firewall called ipfw.
It can be used in many different cases, for example to simulate low bandwidth - against either your own loopback address (127.0.0.1) or a remote server (8.8.8.8 in the example below).
We build a video-interviewing web application, so I'd like to share our experience simulating a bad connection; see the example below:
$ sudo su                                        # ipfw needs root
$ ipfw show                                      # list the current rules
$ ipfw pipe 1 config delay 600ms bw 256kbit/s    # pipe 1: 600 ms latency, 256 kbit/s bandwidth
$ ipfw add pipe 1 dst-ip 8.8.8.8 dst-port 80     # send matching traffic through pipe 1
$ ipfw flush                                     # remove all rules when you're done
ipfw pipe allows you to simulate a slow and unstable connection using delay, bw, and even prob to simulate packet loss.
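Dummynet pipes also accept plr (packet loss rate) directly; a sketch that additionally drops about 10% of packets (0.1 is just an example value):

$ ipfw pipe 1 config delay 600ms bw 256kbit/s plr 0.1   # ~10% packet loss on top of delay and bandwidth limits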
I just found the Mac OS X Network Link Conditioner but I'm not yet sure it works on loopback, which it would need to for my purposes.
EDIT: This seems to work on loopback, so it seems to solve my problem! This is probably the way to go if you're on OS X 10.7.
I'm using the program NetLimiter to simulate "poor bandwidth". It's not free, but it has a trial version that works well. It's Windows-only, unfortunately. :(
I did some research on how to enable a remote pair-coding environment so someone else on their Mac OS X/Linux box could view my screen (I code using vim + the rails plugin).
I read Evan Light's blog on his setup here, but I don't have an open-source router:
http://evan.tiggerpalace.com/articles/2011/10/17/some-people-call-me-the-remote-pairing-guy-/
So SSH is tricky, since I don't have a static IP.
What is an easy way to do it?
So SSH is tricky, since I don't have a static IP.
There are a bunch of tools to get a DNS name pointing at a dynamic IP (some of them are even free). I've used No-IP.com, but not for several years (and I have no affiliation). You don't necessarily need an open-source router - you can run the update daemon on your computer, and then use port forwarding to get incoming SSH connections to your computer.
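Once the DNS name and the port forward are in place, the remote pair connects with plain ssh; a sketch, where myname.ddns.net and port 2222 are placeholders for whatever name you registered and whatever external port you forwarded to your machine's port 22:

$ ssh -p 2222 pairuser@myname.ddns.net    # pairuser is also a placeholder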
You should take this over to SuperUser.com - it's probably more on-topic there.
Not for pair-programming so far, but I usually do my screen-sharing through TeamViewer. It is extremely easy to set up and passes through routers like a hot knife through butter. However, it transfers the GUI, so it can be somewhat slow (depending on your connection).
I'm using NSURLConnection to access a web service (on a .local host). When I access the host by hostname, I'm seeing a delay of 5+ seconds, but when I access it by IP, the connection completes almost instantly.
Running the app on an actual iPhone, instead of the simulator, does not show any delays at all (testing was done on the same network connection). So this seems to be a problem specific to the iOS Simulator or OS X.
I'm able to simulate the problem using the following terminal commands:
nslookup webservice.myhost.local (which is fast)
dscacheutil -q host -a name webservice.myhost.local (shows the delay)
When analyzing the network traffic using Wireshark of the dscacheutil command, I'm seeing several Standard query AAAA requests which are marked red and get an empty response. Once these are done, I see a Standard query A request which has a response containing the correct IP address. The AAAA requests take up about 5 seconds, which would explain the delay.
Does the web service perhaps have IPv6 enabled, while you can't use IPv6 from the simulator?
I see this on OS X, for example, when running a local IPv4-only DNS service - if I run dig @localhost it hangs for some seconds until the initial IPv6 connection times out, and then it tries IPv4.
This answer solved the problem for me. (Create an IPv6 ::1 loopback entry to go along with each 127.0.0.1.)
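In other words, a pair of /etc/hosts entries along these lines (using this question's example hostname):

127.0.0.1   webservice.myhost.local
::1         webservice.myhost.local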
For anyone else who stumbles across this issue... I myself had to disable IPv6 on my machine to avoid the hang in the simulator while the IPv6 lookup fails. I did so following these instructions: https://discussions.apple.com/message/18097613#18097613
Which were to:
"To disable IPv6 in OS X Lion, you will need to use the Terminal.
Applications > Utilities > Terminal
To determine what all of your Mac's network interfaces are, issue the following command: networksetup -listallnetworkservices
To disable IPv6 for wireless, issue the following command: networksetup -setv6off Wi-Fi
To disable IPv6 for Ethernet, issue the following command: networksetup -setv6off Ethernet
To re-enable IPv6, use -setv6automatic instead"
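If you have several interfaces, a loop over the service list is a possible shortcut (a sketch - note that the first line of -listallnetworkservices output is a legend, and disabled services are prefixed with an asterisk, so this is deliberately naive):

networksetup -listallnetworkservices | tail -n +2 | while read -r svc; do
  networksetup -setv6off "$svc"   # turn IPv6 off for this service
done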