Simulate poor bandwidth in a testing environment (Mac OS X)? - testing

We have a customized Flash/HTML5 video player for users on our site. I'm currently fleshing out the experience for users with 'suboptimal' bandwidth--basically, we'd like the client-side code to detect a poor user experience due to excessive buffering. I would like to test this "poor bandwidth" handling code in my local development environment.
Does anyone know of good techniques for simulating "poor bandwidth" in a local environment for testing purposes?
More specifically I have my local browser connecting to a virtual machine with instances of uWSGI, nginx, and python/django and I would like to be able to inject arbitrary amounts of delay into the delivery of content from these systems. (I'm primarily concerned with doing this with nginx, which does the video content delivery/streaming).
EDIT: It may be relevant that the dev environment is Mac OS X.

Just use nginx's configuration.
While OS X Lion's Network Link Conditioner works as expected, it's still annoying to use when I'm really just trying to test a subset of a web app's behavior--i.e., the slow-video-buffering handling system.
As such, I've found it much more convenient to set rate limiting in my nginx.conf file, for example:
location ~ /files/(.*\.(mp4|m4v|mov))$ {
...
limit_rate 50k; # <-- Limit download rate per connection to 50 kilobytes/s (nginx's limit_rate is in bytes per second, not bits)
...
}
EDIT: See the nginx HttpCoreModule docs.
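If you want playback to start quickly and only then degrade (closer to how real-world buffering problems feel), nginx's limit_rate_after directive lets the first part of each response through unthrottled. A minimal sketch, assuming the same location block as above (the 500k threshold is an arbitrary example):
location ~ /files/(.*\.(mp4|m4v|mov))$ {
    limit_rate_after 500k;  # serve the first 500 KB at full speed (e.g., enough to start playback)
    limit_rate 50k;         # then throttle each connection to 50 kilobytes/s
}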

Mac OS X is descended from FreeBSD, so you can use the powerful built-in firewall called ipfw.
It can be used in many different cases, for example to simulate low bandwidth. Point it at your own loopback address (127.0.0.1) or at a remote server (8.8.8.8 in the example below).
We build a video-interviewing web application, so I'd like to share our experience simulating a bad connection; see the example below:
$ sudo su                                      # ipfw requires root
$ ipfw show                                    # list the current rules
$ ipfw pipe 1 config delay 600ms bw 256kbit/s  # create pipe 1 with 600 ms delay and a 256 kbit/s bandwidth cap
$ ipfw add pipe 1 dst-ip 8.8.8.8 dst-port 80   # send matching traffic through pipe 1
$ ipfw flush                                   # remove all rules when you're done
ipfw pipe allows you to simulate a slow and unstable connection using delay, bw, and even packet loss (via plr on the pipe, or prob on a rule).
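For example, to also drop roughly 5% of packets, add plr (packet loss rate, a probability between 0 and 1) to the pipe config; a sketch building on the pipe above:
$ ipfw pipe 1 config delay 600ms bw 256kbit/s plr 0.05  # ~5% random packet loss on top of the delay and bandwidth cap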

I just found the Mac OS X Network Link Conditioner, but I'm not yet sure it works on loopback, which it would need to for my purposes.
EDIT: This seems to work on loopback, so it solves my problem! This is probably the way to go if you're on OS X 10.7.

I'm using the program NetLimiter to simulate "poor bandwidth". It's not free, but it has a trial version that works well. It's only for Windows :(

Related

Browser cannot load resource files

We have a WildFly 8.2 running on a virtualized Ubuntu 14.04 behind a firewall (against DoS attacks, ...) in a DMZ. (About 1200 - 3000 requests per hour.)
With Safari, the download of some resource files often fails (about every second time; see screenshot, all files are stored locally), while there is rarely a problem with other browsers (Chrome, Firefox).
Is there any plausible cause why there is a different behavior with Safari than with other browsers?
Has anybody ever had similar problems maybe regarding some firewall setting?
Is there any other hint where we could start looking for the cause of the problem? (Implementation, router, lack of resources...)
I know the question is a little imprecise; that's probably why it was voted down. But I'll post what the solution was anyway.
On the server side we ran
tcpdump -i eth0 -n -A dst port 80 | grep 'specificUrlPath'  # dump incoming HTTP traffic and filter for the URL path
and on some other (client) machine we issued
curl -X POST http://hostname/specificUrlPath  # fire a test request at the server
This showed that the request did not always reach the server's network interface, so we knew there had to be a problem with the network in between.
The cause of the problem was that NAT was switched on on the router for the server machine, and the NAT implementation was evidently not able to manage that many connections. As soon as NAT was switched off, everything worked as it should.
I am not sure why requests from some browsers were more likely to be served than those from others, but I guess this is also down to the specific router software.

Is virtualization still relevant with docker?

I've read this article:
How is Docker different from a normal virtual machine?
I have a strong intention of converting all my virtual machine images into Docker instances.
I can't see a case where a VM still makes sense...
So what's the point of VMs now? OK... maybe desktop virtualization, to get PulseAudio working?
Once Docker solves this, what else?
UPDATE
Okay... so I can't run Docker natively on "non-Linux"-flavour hosts...
For one thing, you can't run an operating system within your container that differs from the OS on the host.
On Windows and Mac OS X, boot2docker is used to run Docker: it is VirtualBox running a reduced Linux OS, which in turn runs Docker.
The benefits of containers are clear and well known, but the disadvantages have been glossed over somewhat.
Specifically, you don't just need the same OS type (i.e., Linux); you get the same version of the kernel (including any mods you want). Since containers are an OS construct, there are resource islands per OS kernel version (and separate implementations for Windows, BSD, or any other non-Linux OS, where they exist).
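A quick way to see the shared kernel in practice (a sketch, assuming Docker on a Linux host): any container reports the host's kernel release, not its own.
$ uname -r                         # kernel release on the host
$ docker run --rm alpine uname -r  # prints the same release from inside a container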
VMs are secured with CPU-level isolation; containers are secured with OS-level isolation (with, arguably, a bigger attack surface).
There are many claims out there that containers become as slow and as big as VMs once you load up your container with everything you need for production and add lots of overlays, but these are all anecdotal; no large-scale survey or trustworthy data is available yet.

How can I set up a "remote pair-coding" environment linux to linux?

I did some research on how to set up a remote pair-coding environment so someone else on their Mac OS X/Linux box could view my screen (I code using vim + the Rails plugin).
I read Evan Light's blog on his set up here, but I don't have an open source router:
http://evan.tiggerpalace.com/articles/2011/10/17/some-people-call-me-the-remote-pairing-guy-/
So SSH is tricky, since I don't have a sticky IP.
What is an easy way to do it?
So SSH is tricky, since I don't have a sticky IP.
There are a bunch of tools to get you a DNS name pointing at a dynamic IP (some of them are even free). I've used No-IP.com, but not for several years (and I have no affiliation). You don't necessarily need to have an open-source router - you can run the update daemon on your computer, and then use port-forwarding to get incoming SSH connections to it.
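As a sketch of how the pieces fit together once the DNS name resolves to your machine (the hostname is a placeholder, and tmux is just one common way to share a terminal-based vim session):
# on your machine: start a named terminal session and open vim in it
tmux new -s pair
# your pair connects through the router's forwarded SSH port
ssh you@yourbox.example-ddns.net
# ...and attaches to the same session, seeing exactly what you see
tmux attach -t pair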
You should take this over to SuperUser.com - it's probably more on-topic there.
I haven't used it for pair-programming so far, but I usually do my screen-sharing through TeamViewer. It is extremely easy to set up and passes through routers like a hot knife through butter. However, it transfers the GUI, so it can be somewhat slow (depending on your connection).

How to Test a Network Application with only a Single Computer?

I want to kick myself into learning network programming, starting with implementing existing network protocols. I've finished the (rudimentary) design and will start coding soon. The problem I haven't been able to figure out a solution to is testing: I only have one laptop, running Windows 7 Pro, with only a recovery disc (no installation disc) that obviously cannot be used in a VM.
Hard-coding input/output data clearly isn't a good way to test any sort of program. So, what solutions can I look into?
Thanks for your time.
P.S.: In case this matters, I'll do the coding in C++.
You can run a client and a server on the same machine. When accessing the network layer, just use the loopback address (127.0.0.1 for IPv4 or ::1 for IPv6) to connect to your server when you run the client.
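As a quick illustration of the loopback idea (a sketch using netcat as a stand-in for your own client and server; some netcat variants need -l -p 12345 instead):
$ nc -l 12345 &                      # "server": listen on TCP port 12345
$ echo "hello" | nc 127.0.0.1 12345  # "client": connect over loopback and send a line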
If you provide the APIs that you will be using (wininet, APR, Boost etc) a more detailed answer would be available.
What about a VM with Ubuntu or some other distro of Linux?

Best way to simulate a WAN network

Simplified: I have an application where data is intended to flow over the internet between two servers. Ideally, I'd like to test at what point the software ceases to function - at what lower-bound limits (bandwidth, latency, dropped packets) things stop working - to test the reliability of the software.
What I thought I would do was the following:
Set up 3 machines (VMware instances)
Install the 2 applications on two of the servers.
Set up the 3rd server to sit between the two machines by doing some sort of magic with Routing and Remote Access on Windows 2003
Install either Traffic Shaper XP or NetLimiter to limit the bandwidth
Run something like TMnetSim Network Simulator to simulate a bad connection.
Does this sound like a good idea, or are there easier/better ways of doing this? I'm not that comfortable on Linux, and my teammates are even less so.
WANem does exactly this. We have used it both in a virtual machine on the desktop and on a dedicated old PC, and it worked great. It can simulate all sorts of broken connectivity.
FreeBSD's ipfw has provisions to simulate links with a given bandwidth, latency, or error rate. You could use that FreeBSD machine as your machine "in the middle" in the setup above.
You can probably also run at least one of the endpoints on the same machine if you want to reduce the number of servers involved.
Someone actually packaged up the settings and whatnot necessary for the FreeBSD solution to this problem; it's called DUMMYNET.
It simulates/enforces queue and bandwidth limitations, delays, packet losses, and multipath effects. It also implements a variant of Weighted Fair Queueing called WF2Q+. It can be used on users' workstations, or on FreeBSD machines acting as routers or bridges.
It can simulate exactly what you want, it's free, and it will boot on commodity hardware. They even have a canned install that is small enough to fit on a floppy disk (!), which you can download at that link.
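As a sketch of what the dummynet rules on such a "machine in the middle" might look like (the interface name em0 and the numbers are assumptions):
ipfw pipe 1 config bw 1Mbit/s delay 100ms plr 0.01  # 1 Mbit/s link, 100 ms delay, 1% packet loss
ipfw add pipe 1 ip from any to any via em0          # push all traffic crossing em0 through the pipe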
Maybe it is time to learn a bit about Linux, because adding a 50 ms delay to every outgoing packet can be done by typing just one line:
tc qdisc add dev eth0 root netem delay 50ms
For more see the Linux Traffic Control HOWTO
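netem can also add jitter and packet loss in the same line; a sketch, assuming eth0 is your outgoing interface:
tc qdisc change dev eth0 root netem delay 100ms 20ms loss 1%  # 100 ms delay with ±20 ms jitter, plus 1% packet loss
tc qdisc del dev eth0 root                                    # remove the shaping when you're done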
We had a similar requirement some ten years ago - I'll see if I can recall how we managed it.
If I remember correctly, we wrote a socket proxy program which was controlled by inetd on a UNIX box. This proxy would accept connections from a client and open an equivalent session through to the server. It would then loop, passing messages in both directions.
The way we achieved WAN characteristics was to introduce random delays (with upper and lower limits) in both the connection establishment and the passing of data once the link was up.
It also had a feature to drop the link occasionally, as WAN links were less reliable for us than local traffic.
I recall we had to make it threaded to stop the delays from affecting reverse traffic on the link.
There is a very good (and free) Microsoft solution for this. We have used it for quite some time and it works great; it can very easily simulate everything (packet loss, low bandwidth, disconnection, latency, ...).
It is the best solution I have found for a Windows environment.
More information and a download link can be found here: MARCO blog post
This product has gone through some evolution and is now integrated into Visual Studio as part of its automated testing, but I found the standalone version (which is quite hard to find, so keep a local copy) to work much better. Keep in mind that you need at least two computers (or VMs), since traffic needs to pass through a network adapter for the application to work its magic.