version: erlang R13B
Hi all,
How can I increase the number of SSL ports/handles that my network server is able to create on Windows?
On Linux I was able to successfully create about 1000 connections using:
-env ERL_MAX_PORTS 80000 -P 268435456
and by changing the maximum number of open fds using ulimit.
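Spelled out, the Linux setup amounts to something like the following sketch (it reuses the values above; note that the Erlang emulator flag for the maximum number of processes is normally spelled +P):
$ ulimit -n 80000                              # raise the shell's open-file-descriptor limit first
$ erl -env ERL_MAX_PORTS 80000 +P 268435456    # then start the VM with a higher port (and process) limit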
On Windows the same configuration apparently has no effect, and sadly the number of open connections is VERY small (about 30, and it opens 6 handles for each one). I've noticed that the shell starts two other child processes, inet_gethost.exe and ssl_esock.exe. If these are the processes whose port count I have to increase, how do I do that?
Thanks,
According to this, in Windows NT the maximum number of sockets supported by a particular Windows Sockets supplier is implementation-specific. An application should make no assumptions about the availability of a certain number of sockets.
According to this, you should redefine the value of FD_SETSIZE.
It also suggests having a look at WSADATA.iMaxSockets.
I tried to run an iperf test between two Windows 7 laptops, with one hosting an ad-hoc network. Specifically, I wanted to see whether I could see a visible difference in throughput between the built-in PCI card and a USB wifi adapter.
Unfortunately, under both conditions I saw a total speed of only 2.5 Mbps.
Is Windows throttling my UDP bandwidth in some way or is iperf 2 not compatible with Windows 7?
I tested from a Windows 7 to a Windows 10 PC as well and saw the same issue.
A Wireshark trace shows almost no retries, and most of the packets appear to be using 802.11n rates of 58.5 Mbps and above.
However, it appeared that data was being sent in bursts.
This image shows the graph of packets sent.
I couldn't find any information on this. I will try using iperf 3 in the meantime, and also test the performance via a standard AP and update this question.
This is a screenshot of the cmd prompt
Thanks in Advance!
Update: iperf 3 showed much higher speeds, 60 Mbps or so. I'll probably need to read up on the differences between the two tools; I don't really understand why this difference exists.
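For reference, a typical iperf 2 UDP run where the offered rate is set explicitly (the address is a placeholder); iperf 2 targets only about 1 Mbit/s for UDP unless -b is given, so the target rate is worth double-checking:
$ iperf -s -u                               # on the receiving laptop
$ iperf -c 192.168.0.10 -u -b 50M -t 30     # on the sender: offer 50 Mbit/s of UDP for 30 seconds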
We have a customized Flash/HTML5 video player we use for users on our site. I'm currently fleshing out the experience for users who have 'suboptimal' bandwidth--basically we'd like the client side code to be able to detect poor user experience due to excessive buffering. I would like to test this "poor bandwidth" handling code in my local development environment.
Does anyone know of good techniques for simulating "poor bandwidth" in a local environment for testing purposes?
More specifically I have my local browser connecting to a virtual machine with instances of uWSGI, nginx, and python/django and I would like to be able to inject arbitrary amounts of delay into the delivery of content from these systems. (I'm primarily concerned with doing this with nginx, which does the video content delivery/streaming).
EDIT: It may be relevant that the dev environment is Mac OS X.
Just use nginx's configuration.
While OS X Lion's Network Link Conditioner works as expected, it's still annoying to use when I'm really just trying to test a subset of a web app's behavior--i.e., the slow video buffering handling system.
As such, I've found it much more convenient to set rate limiting in my nginx.conf file, e.g.,:
location ~ /files/(.*\.(mp4|m4v|mov))$ {
...
limit_rate 50k; # <-- Limit download rate per connection to 50 kB/s (limit_rate is specified in bytes per second)
...
}
EDIT: See the nginx HttpCoreModule docs.
FreeBSD is an ancestor of Mac OS X, so you can use the powerful built-in firewall called ipfw.
It can be used in many different cases, for example to simulate low bandwidth. Use your own loopback address (127.0.0.1) or a remote server (8.8.8.8 in the example below).
We build a video-interviewing web application, so I'd like to share our experience of simulating a bad connection; see the example below:
$ sudo su
$ ipfw show                                     # list the current rules
$ ipfw pipe 1 config delay 600ms bw 256kbit/s   # create pipe 1: 600 ms delay, 256 kbit/s bandwidth
$ ipfw add pipe 1 dst-ip 8.8.8.8 dst-port 80    # send matching traffic through pipe 1
$ ipfw flush                                    # remove the rules when you are done
ipfw pipe allows you to simulate a slow and unstable connection using delay, bw, and even prob to simulate packet loss.
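For example, a sketch of random packet loss using prob, dropping roughly 5% of the packets headed to the same test address:
$ ipfw add prob 0.05 deny ip from any to 8.8.8.8   # each matching packet has a 5% chance of being dropped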
I just found the Mac OS X Network Link Conditioner but I'm not yet sure it works on loopback, which it would need to for my purposes.
EDIT: This works on loopback, so it seems to solve my problem! This is probably the way to go if you're on OS X 10.7.
I'm using the program NetLimiter to simulate "poor bandwidth". It's not free, but it has a trial version that works well. It's only for Windows :(
I'm thinking of getting my own dedicated server with the following stats:
Processor: Celeron 440 2.0 GHz
Memory: 1 GB
Primary Hard Drive : 160 GB SATA II
This will be running Windows. I have some experience with my local IIS and playing around with servers, but I have never set one up (at least a Windows one) and I've never dealt with DNS/backup/security issues.
My question has two parts:
Will this server be able to run Windows 2008, SQL Server, and possibly Exchange without trouble? I'm worried about the processor and RAM.
Are there any guides/tutorials that talk about how to admin a Windows server from start to finish? (I'm looking for something like the FAQs Slicehost has for *nix-based servers.)
You WILL run into problems with RAM. Refer to the MS documentation and minimum requirements (SQL Server and Exchange). Also please note that new releases of Exchange run only on 64-bit systems.
Personally, I would recommend installing the Core version of W2K8 if you plan to go with the configuration you describe.
It depends on user load. If you have about 1k unique users per month, that means roughly 30 users per day - about 1 per hour. I think you will use more CPU working on this computer yourself. So it really depends on user load.
If I were you, I would add more RAM to get to about 4 GB. RAM is the cheapest upgrade available.
You state "I've never dealt with DNS/backup/security issues."
I would suggest to you that these are the most important issues. You need to stay on top of security, applying security patches, ensuring firewalls are properly configured, etc.
Having been called after the fact for websites that have been hacked, I can tell you it is not pretty. Learn all you can before you stand up this server on the internet.
Can someone explain the difference between an X server and a Remote Terminal Server in simple terms?
For example, Hummingbird Exceed is an X server and Citrix is a Remote Terminal Server. How do these servers work?
A terminal server runs on the "other" machine while you use a remote desktop client to view that machine's screen.
An X server (of the X11 Window System) runs on your machine, while another machine (or several of them) sends its output to your computer.
The most important difference to the end user is probably "culture": with the X Window System you typically work with windows that run on several hosts. (You often sit in front of a fairly stripped-down workstation and get one application from one computer and another from a second computer.) When working with X, things feel very heterogeneous - a special application only runs on an HP workstation while your company is full of Suns or Linux boxes? No problem, just buy one HP and everyone can use that application over the network as if it were local.
Remote terminal services feel more like another computer sending its complete screen to you - as if you had a 100-mile-long monitor and USB cable (with a little lag built in). You typically use a remote desktop client, and the server sends a complete desktop to you.
However, in recent times the two techniques have been getting closer to one another - Windows Remote Desktop (which is based on Citrix) can send only an application's windows to your desktop, while a lot of programs based on X11 are theoretically network transparent but in practice need to run on the local machine. (Sorry, no 3D shooter over the network - an extreme example.)
Which one is better? I don't dare to say. While X11 is a lot more flexible (it was designed with network transparency in mind - it makes absolutely no difference whether an application runs locally or remotely), it is in many respects more complicated. As long as there was no remote desktop sharing it had a clear advantage, but the gap is slowly closing, for example with terminal services now allowing you to do many things that used to be possible only with X11.
By the way, the main reason many X11 applications still feel a little "snappier" over the network than their Windows counterparts is that many application programmers on Windows still assume they always run locally and dump a lot of bitmap graphics on the screen - like custom toolbars in ZIP tools. X11 applications avoided this for a long time and chose "ugly but fast", because X11 forces you to think about the network. But as X11 applications get prettier and Windows programmers become more aware of terminal services, the difference will dwindle.
Oh, and an important point: X11 is deeply ingrained in the Unix way of things, while Citrix is mainly used on Windows (in the form of Microsoft's Windows Terminal Services, which originated in Citrix code). So lock a terminal services admin and an X11 operator in a cage and step back to watch the bloodshed when they figure out who they are locked in with ...
An X server most likely refers to the X11 windowing system, which is the GUI that most Unix flavors (including Linux) use. It's a client/server setup, and it has been around for a very long time.
A remote Terminal Server, in the case of Citrix, is a remote Windows instance that can be connected to with a special Citrix client. The Citrix environments I'm familiar with are all MS Windows solutions, i.e. they work similarly to X, but are for Windows servers only.
They both operate in a similar fashion, serving a windowing solution to a remote client. That is, they both let a server run the actual application while the display of that application is sent back over the network to a client PC.
A 'Terminal Server', as it's called, basically allows you to connect to a Windows session remotely. It employs a bit of magic to make the experience snappy over connections with latency. The Windows GUI system isn't network transparent like X, so it took a while longer to get this feature. Windows Server 2008 and Citrix products can present a single application, unlike the traditional Terminal Server.
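For what it's worth, the stock Windows client for this is Remote Desktop Connection; a minimal sketch of connecting to a terminal server (the hostname is a placeholder):
mstsc /v:myserver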
X is the GUI protocol for Unix/Linux. The X server accepts connections and displays their windows. The clients are actually the programs themselves. These clients can be local or remote, it doesn't matter to X. X just displays them as requested, on the local screen or over a TCP connection. This is lower level stuff than terminal servers, and allows graphical programs to run on one machine and display on another. X11 doesn't compress or encrypt the traffic like RDP does (although SSH can help you out there).
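As a small illustration of that network transparency, a program on a remote host can draw on your local X server over SSH (a sketch; the hostname is a placeholder):
$ ssh -X user@remotehost xclock    # xclock runs on remotehost but displays on your local screen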
The Linux equivalent of RDP is NX. They provide free software to run NX servers/clients. I've used it and it works pretty well.
Simplified: I have an application where data is intended to flow over the internet between two servers. Ideally, I'd like to test at what point the software ceases to function - at what lower-bound limit (bandwidth, latency, dropped packets) things stop working - in order to test the reliability of the software.
What I thought I would do was the following:
Set up 3 machines (VMware instances)
Install the 2 applications on two of the servers.
Set up the 3rd server to sit between the two machines by doing some sort of magic with Routing and Remote Access on Windows 2003
Install either Traffic Shaper XP or NetLimiter to limit the bandwidth
Run something like TMnetSim Network Simulator to simulate a bad connection.
Does this sound like a good idea, or are there easier/better ways of doing this? I'm not that comfortable on Linux and my teammates are even less so.
WANem does exactly this. We have used it both in a virtual machine on the desktop and on a dedicated old PC, and it worked great. It can simulate all sorts of broken connectivity.
FreeBSD's ipfw has provisions to simulate links with a given bandwidth, latency, or error rate. You could use that FreeBSD machine as the machine "in the middle" in your setup above.
You can probably also run at least one of the endpoints on the same machine if you want to reduce the number of servers involved.
Someone actually packaged up the settings and whatnot necessary for the FreeBSD solution to this problem and they call it DUMMYNET.
It simulates/enforces queue and bandwidth limitations, delays, packet losses, and multipath effects. It also implements a variant of Weighted Fair Queueing called WF2Q+. It can be used on users' workstations, or on FreeBSD machines acting as routers or bridges.
It can simulate exactly what you want, and it's free and will boot onto commodity hardware. They even have a canned install of it that is small enough to put on a floppy disk (!) that you can download at that link.
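A sketch of what that configuration could look like on the FreeBSD box in the middle (the interface name and numbers are illustrative):
$ kldload dummynet                                    # load the dummynet module if it is not compiled in
$ ipfw add pipe 1 ip from any to any via em0          # push all traffic crossing em0 through pipe 1
$ ipfw pipe 1 config bw 1Mbit/s delay 200ms plr 0.01  # 1 Mbit/s, 200 ms delay, 1% packet loss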
Maybe it is time to learn a bit about Linux, because adding a 50 ms delay to every outgoing packet can be done by typing just one line:
tc qdisc add dev eth0 root netem delay 50ms
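netem can express more than a fixed delay; a sketch that also adds jitter, random loss, and a bandwidth cap by stacking tbf under netem (the values are illustrative):
tc qdisc del dev eth0 root                                                            # remove the simple rule above first
tc qdisc add dev eth0 root handle 1: netem delay 50ms 20ms loss 1%                    # 50 ms +/- 20 ms delay, 1% loss
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000   # cap bandwidth at 256 kbit/s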
For more see the Linux Traffic Control HOWTO
We had a similar requirement some ten years ago - I'll see if I can recall how we managed it.
If I remember correctly, we wrote a socket proxy program that was controlled by inetd on a UNIX box. This proxy would accept connections from a client and open equivalent sessions through to the server. It would then loop, passing messages in both directions.
The way we achieved WAN characteristics was to introduce random delays (with upper and lower limits) in both the connection establishment and the passing of data once the link was up.
It could also drop the link occasionally, as WAN links were less reliable for us than local traffic.
I recall we had to make it threaded to stop the delays from affecting reverse traffic on the link.
There is a very good (and free) Microsoft solution for that; we have used it for quite some time and it works great. It can very easily simulate everything (packet loss, low bandwidth, disconnection, latency, ...).
This is the best solution I have found for a Windows environment.
More information and a download link can be found here: MARCO blog post
This product has gone through some evolution and it is now integrated into Visual Studio as part of automated testing, but I found the standalone version (which is quite hard to find, so keep a local copy) to work much better. Keep in mind that you need at least two computers (or VMs), since you need to pass traffic through a network adapter for the application to work its magic.