How to stress test a telnet server application?

We have a Unix application that is essentially a glorified telnet server.
We have no access to the Unix server, yet we need to do load/stress testing.
We really need to be able to simulate a few hundred user sessions logging in and performing certain actions.
I thought I could perhaps use a terminal emulator that performs these actions concurrently.
But I am unsure of the best way to do this.
Is there any software that exists that would allow me to do this?
Or could someone recommend an easy way to do this?

If I were going to do something like this, I'd use a multithreaded Perl program built around Net::Telnet, maybe adding Expect and Benchmark depending on your objectives.
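If Perl isn't a hard requirement, the same idea takes only a few lines of Python using the standard library's telnetlib (present up to Python 3.12) plus threads. This is a minimal sketch, not a finished harness: the host, port, credentials, prompts, and the scripted command are all hypothetical placeholders for whatever your server actually sends and expects.

```python
import telnetlib   # stdlib; deprecated in 3.11, removed in 3.13
import threading

HOST, PORT = "unix-server.example.com", 23   # placeholder target
SESSIONS = 200                               # "a few hundred user sessions"

def one_session(n):
    # Log in, run one scripted action, and log out -- one simulated user.
    tn = telnetlib.Telnet(HOST, PORT, timeout=30)
    tn.read_until(b"login: ", timeout=30)
    tn.write(b"testuser\n")
    tn.read_until(b"Password: ", timeout=30)
    tn.write(b"secret\n")
    tn.read_until(b"$ ", timeout=30)         # application/shell prompt
    tn.write(b"do_something\n")              # the action under test
    tn.read_until(b"$ ", timeout=30)
    tn.write(b"exit\n")
    tn.close()

threads = [threading.Thread(target=one_session, args=(i,)) for i in range(SESSIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Wrap the per-session body with timing calls (or Perl's Benchmark, as suggested above) if you want latency numbers rather than just load.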

I would go for https://github.com/ztellman/aleph/wiki/TCP

Related

Is there a framework/app for testing distributed systems or just network apps?

I want to simulate at least 9 clients to test my P2P engine. I want some of them to be behind NAT and some to have all ports open. I would also like to see the log each of them creates. I can't run 9 VMs simultaneously, so I'm asking the experts: is there something I can use to test this?
I am using the Boost library in my app.
I think this is the perfect solution for Linux users: http://www.nrl.navy.mil/itd/ncs/products/core
I will keep this answer updated if I'm wrong.
UPDATE: It works like a charm. Firewall rules can be set for each node. It doesn't use much memory either, so even large network topologies are possible. There is also an option to run a terminal per node, so you can test many different scenarios against your application. Good luck.

Including the functionality of a tool within another program?

I would like to write an application, for my own interest, that graphically visualizes some network concepts. Basically I would like to show the output from tools like ping, traceroute and nmap.
The most obvious approach seems to be to use pipes to call out to these tools from my C program and process the information they return. However, I would like to avoid this heavy-handed approach if possible. My question is: is it possible to somehow link against these tools, or are there APIs that can be used to gain programmatic access instead? If so, is this behavior available only on a tool-by-tool basis?
One reason for wanting to do this is to keep everything in a single process/address space and to avoid dependence on these external tools. For example, if I wrote an iPhone application, I would not be able to spawn processes to call out to the external tools.
Thanks for any advice or suggestions.
The networking API of your platform of choice is essentially all you need. ping, traceroute, and nmap don't do any magic; all they do is send and receive packets over the network.
I don't know of any pre-existing libraries though (not that I have looked either). If it comes to it, at least ping and traceroute are quite trivial to implement by hand.
It depends on the platform you're developing for. Windows, for example, has an ICMP API that you could use to implement a ping tool.
On the other hand, the source code for ping and traceroute is available on any Linux system, so you could use that (provided the license was compatible with your needs) as the basis of your own programs.
Finally, ping (ICMP) is not hard to implement, and traceroute builds on the same machinery (probes sent with increasing TTLs, plus the resulting ICMP time-exceeded replies). It may be worth it to just roll your own implementation.
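To make that concrete, here is a rough sketch of an ICMP echo in Python; the same logic translates directly to C. Raw sockets require root (or administrator) privileges, and real ping also matches reply identifiers and sequence numbers, which is skipped here for brevity.

```python
import os, socket, struct, time

def checksum(data: bytes) -> int:
    # Standard Internet checksum: one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def ping(host: str, timeout: float = 2.0) -> float:
    # Needs root: raw ICMP socket.
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    sock.settimeout(timeout)
    ident = os.getpid() & 0xFFFF
    payload = struct.pack("!d", time.time())
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)  # type 8 = echo request
    header = struct.pack("!BBHHH", 8, 0, checksum(header + payload), ident, 1)
    start = time.time()
    sock.sendto(header + payload, (host, 0))
    sock.recv(1024)        # echo reply (prefixed with the IP header)
    return (time.time() - start) * 1000.0

print("%.1f ms" % ping("127.0.0.1"))
```

Traceroute is the same trick with the socket's TTL (IP_TTL) raised one hop at a time while listening for the ICMP time-exceeded messages each intermediate router sends back.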

Testing Web Application: "Mirror" ad-hoc testing in another window

I don't even really know if the title is the best way to explain what I'm trying to do, but anyway...
We have a web app that is being ported to a number of DB backends via MDB2. Our unit tests are pretty lacking at the moment, but our internal users are pretty good at knowing what to test to see if things are broken.
What I'm imagining is a browser plug-in (I don't really care which browser it's for) or a similar system that essentially takes every event from one window and 'mirrors' it in the other browser(s). The reason I'd like this is so that I can have various installations that use different DB backends and have the user open a window/tab to each installation. From there, they could 'work' in one window and have that 'work' occur at the same time in each of the 'cloned' windows. Then they could quickly eyeball the information that comes back, without having to worry (very much) about timing differences and the like.
I know it's a big ask, but I figure if anyone knows of a solution, I'd find it here...
Any thoughts?
Do have a look at Selenium http://seleniumhq.org/ to automate the testing.
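Selenium won't mirror ad-hoc clicks live from one window to another, but it can replay one scripted sequence of actions against every installation at (nearly) the same time, which covers the side-by-side eyeballing use case. A sketch, where the installation URLs and element locators are hypothetical placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# One installation per DB backend -- placeholder URLs.
INSTALLS = [
    "http://test-mysql.example.local/app",
    "http://test-pgsql.example.local/app",
    "http://test-sqlite.example.local/app",
]

drivers = [webdriver.Firefox() for _ in INSTALLS]
for drv, url in zip(drivers, INSTALLS):
    drv.get(url)

def mirror(action):
    """Apply the same action to every open browser window."""
    for drv in drivers:
        action(drv)

# Example: log in everywhere, then leave the windows open for comparison.
mirror(lambda d: d.find_element(By.NAME, "username").send_keys("tester"))
mirror(lambda d: d.find_element(By.NAME, "password").send_keys("secret"))
mirror(lambda d: d.find_element(By.ID, "login").click())
```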

Executing remote script - Architecture

I want to make an application that executes a remote script. The user can create a script (probably a Lua script) and store it on the server. Then he can use an API to execute the script. I was thinking the API could be a web service.
So my questions are:
I need high performance to execute the script, so my first choice was a Lua script. Does anyone have another suggestion?
Because I need high performance, I am not sure a web service is the best solution. Maybe I could create a TCP/IP Windows service that handles the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have a concurrency problem.
My scripts will query a database. I will use Tokyo Cabinet or Tokyo Tyrant. I think Tokyo Tyrant is the only solution because I will have many requests. For performance, do I need connection pooling? Is there any way to share variables between web service requests?
To build the web service or the Windows service, I was thinking of using C++.
Can someone help with these questions?
Thanks.
Lua is pretty high performance for a scripting language, especially if you use LuaJIT or something similar.
You speak of high performance, but how much are we talking about? Say you have a very simple web service that executes scripts it receives via POST; the HTTP overhead is then probably small compared to the Lua compile, environment-setup, and execution time.
About the database I cannot tell you anything. There are many possibilities for pooling, and this also depends on how you execute the Lua scripts. Are they running in a common environment? One per session? One per request?
C++ surely is a good choice to host Lua, because Lua fits in pretty well. Though there are other good language bindings as well.
But keep in mind that your job is not over once the scripts are sandboxed. User-submitted scripts can do a lot of other Bad Things(TM), intentionally or by mistake, like allocating a lot of memory or hogging the CPU. In Lua (and I think this is true of many, if not all, sandboxed environments) you cannot do much about this except kill the offending instance or, provided your sandbox disallows scripts from using coroutines themselves, run each script in a coroutine and yield out of the offending one to do something smarter.
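One common mitigation for the CPU-hogging case is a Lua debug hook that errors out after a fixed instruction count. Here is a sketch of that idea, driven from Python through the lupa binding (pip install lupa); the whitelisted globals and the quota value are illustrative, the env argument to load assumes Lua 5.2-style semantics, and note that LuaJIT-compiled loops may not fire count hooks reliably:

```python
from lupa import LuaRuntime

lua = LuaRuntime()
run_sandboxed = lua.eval("""
function(untrusted_code)
    -- Whitelist environment: scripts see only these globals.
    local env = { math = math, string = string, print = print }
    local fn, err = load(untrusted_code, "user_script", "t", env)
    if not fn then return false, err end
    -- Abort after ~1e6 VM instructions to stop runaway scripts.
    debug.sethook(function() error("instruction quota exceeded", 2) end, "", 1e6)
    local ok, result = pcall(fn)
    debug.sethook()
    return ok, result
end
""")

print(run_sandboxed("return 2 + 2"))       # (True, 4)
print(run_sandboxed("while true do end"))  # (False, '...instruction quota exceeded')
```

Memory limits are harder to enforce from inside Lua; they are usually handled on the host side via a custom allocator or by watching collectgarbage("count").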

PyAMF backend choices!

I've been using PyAMF to write a backend for a Flex app that will request different groups of hundreds of different images depending on what the client needs. I have been using the "simple_server" WSGI server that PyAMF supplies while developing the Flex code. Now I'm ready to write a robust backend that can pull images from a MySQL database and send them to many concurrent clients as quickly and efficiently as possible.
The PyAMF documentation is great because it supplies many examples to follow; however, I am confused about what kind of backend I am trying to create.
Do I want a SocketServer or a WSGI server, or something like Twisted, web2py, or Tornado? Are these even all different? :) Should I be using Apache modules instead (mod_wsgi, modjy, or mod_python)?
I realize that this probably touches on many open debates, so maybe you could just point me to any good summaries of these debates?
It's great to have so many options, but how do I choose?
The short answer is, of course, that it depends on the requirements of your project.
How many concurrent connections is "a lot"?
How much programmer time can you throw at the problem?
How much hardware can you throw at the problem?
...etc...
If you plan to have lots of concurrent clients, it's hard to beat Twisted in the Python world. However, you'll have to deal with your database asynchronously to avoid blocking, and depending on how complex your database interactions are, this can be a bit of a pain. You're basically limited to either using twisted.enterprise.adbapi or coming up with your own twisted-ORM integration.
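For reference, the adbapi route looks roughly like this; the connection parameters, table, and query are placeholders, and MySQLdb is assumed as the DB-API driver. adbapi runs the blocking DB-API calls in a thread pool and hands back Deferreds, so the reactor never blocks on the database:

```python
from twisted.enterprise import adbapi
from twisted.internet import reactor

# Thread-pooled wrapper around the blocking MySQLdb driver.
dbpool = adbapi.ConnectionPool(
    "MySQLdb", host="localhost", user="app",
    passwd="secret", db="images", cp_max=10)

def got_rows(rows):
    for image_id, blob in rows:
        print(image_id, len(blob), "bytes")
    reactor.stop()

d = dbpool.runQuery("SELECT id, data FROM images WHERE gallery_id = %s", (42,))
d.addCallback(got_rows)
d.addErrback(lambda failure: (print(failure), reactor.stop()))

reactor.run()
```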
If you'd rather have "easy" database code (i.e. you want to use an ORM), you're better off going with a (TurboGears/Pylons/plain wsgi) project, probably hosted using Apache and mod_wsgi. This can be a pretty scalable solution, and you get a lot of stuff for free using these frameworks, but it may be more than you need.
I would avoid using one of the many plain python wsgi servers out there (wsgiref, paster, etc.) in production if you really want high performance.
Good luck!