I'm looking for some kind of TELNET daemon for Linux to share a single app.
I wrote a BBS/MUD, but it has no networking routines, and I'm looking for a way to "share" the app, in the way Citrix XenApp works for GUI/Windows apps. I remember I once used such a server for console-mode applications, but I cannot recall its name or internet address.
As far as I know, it's pretty much unheard of to rely on a sharing layer on top of a MUD to handle sockets rather than having the socket code within the MUD server, not least because with such a layer it seems very difficult for the MUD to actually be multiplayer.
You might want to look at integrating SocketMUD with your project; SocketMUD is basically a bare-bones socket-handling layer intended for MUD use, so it could be exactly what you need.
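If you end up rolling that layer yourself, the core job is small. Here is a rough Python sketch of the idea (SocketMUD itself is written in C, and port 4000 and the echo-to-everyone behaviour here are just placeholders): accept telnet connections and multiplex line-based I/O across all connected players.

```python
import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 4000))  # 4000 is a common MUD port choice
listener.listen(5)
players = {}  # socket -> receive buffer

while True:
    readable, _, _ = select.select([listener] + list(players), [], [])
    for sock in readable:
        if sock is listener:
            conn, _ = listener.accept()
            players[conn] = b""
            conn.sendall(b"Welcome to the MUD!\r\n")
            continue
        data = sock.recv(4096)
        if not data:  # player disconnected
            del players[sock]
            sock.close()
            continue
        players[sock] += data
        while b"\n" in players[sock]:
            line, players[sock] = players[sock].split(b"\n", 1)
            # Hand the completed command line to the game logic here;
            # this sketch just echoes it to every connected player.
            for other in players:
                other.sendall(line.strip() + b"\r\n")
```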
I want to simulate at least 9 clients to test my p2p engine. I want to put some of them behind NAT and give others all ports open. I would also like to see the log that each of them creates. I am not able to run 9 VMs simultaneously, so I'm here to ask the experts: is there something I can use to test this?
I am using the Boost library in my app.
I think this is the perfect solution for Linux users: http://www.nrl.navy.mil/itd/ncs/products/core
I will keep this answer updated if I'm wrong.
UPDATE: It works like a charm. Firewall rules can be set for each node. It does not use much memory either, so it is possible to build even large network topologies. It is also possible to run a terminal per node, so you can test out your application under many different scenarios. Good luck.
I have decided to use Twisted for a project and have developed a server that can push data to clients on other computers. At the moment I am using dummy data for testing speed requirements but I now need to interface Twisted to my other Python DAQ application which basically collects real-time data (500 Hz) from various external devices over different transports (e.g. Bluetooth). (note: the DAQ (data acquisition) application is on the same computer as the Twisted server)
Since the DAQ application is not part of the Twisted framework I am wondering what is the most efficient (fastest, robust, minimal latency) way to pass the data to the Twisted server. I have considered using a light-weight database, memcache, Queue or even the Twisted plugins but it is hard to tell which would be the most appropriate and best fit. I should add that the DAQ application was developed before deciding on using Twisted so I have so far considered it as separate from the Twisted network.
On the other side of the system, the client side, which resides on multiple computers, I have a similar problem. As the data streams in (I am sending lines of data, about 100 bytes each), I want to hand the data off to another application which will process it for a web application (I would prefer to use a Twisted web service for this, but that is not my choice!). The web application is being written in Java. Once again I have considered the choices above, but since I am new to Twisted I am not sure which is the best approach. (Note: the web application is on the same computers as the Twisted clients.)
Any advice or thoughts would be greatly appreciated.
My suggestion would be to build a simple protocol with Twisted's built-in support for AMP; you can hook this into any other languages or frameworks using one of the implementations of AMP in other languages. AMP is designed to be as easy as possible to implement, as it's just a socket carrying some length-prefixed strings arranged into key/value pairs.
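For illustration, here is a minimal sketch of the receiving side in Twisted; the command name and fields (PushSample, source, value) and the port number are made up for the example, not part of any existing protocol.

```python
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols import amp

class PushSample(amp.Command):
    """One DAQ reading pushed to the server (hypothetical command)."""
    arguments = [(b"source", amp.Unicode()),
                 (b"value", amp.Float())]
    response = [(b"accepted", amp.Boolean())]

class DAQReceiver(amp.AMP):
    @PushSample.responder
    def push_sample(self, source, value):
        # Hand the reading to the rest of the Twisted server here.
        print(f"{source}: {value}")
        return {"accepted": True}

reactor.listenTCP(8750, Factory.forProtocol(DAQReceiver))  # example port
reactor.run()
```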
There are obviously a zillion different ways you could go about this, but I would first look at using a queue to pass the data to your Twisted server. If you deploy one of the many open-source queueing tools (e.g. RabbitMQ, ZeroMQ, OpenMQ, and loads of others), you should be able to write from your DAQ product using something generic like HTTP, then read into your Twisted server also using HTTP. If you don't like HTTP, there are plenty of alternative transports to choose from; just identify which one you want, then use that as a basis for selecting your queueing tool.
This would give you an extremely flexible solution, in that you could upgrade or change any of these products with minimal impact to anything else in the whole solution.
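To make the handoff concrete, here is a hedged sketch of the DAQ side pushing one data line over HTTP using only the Python standard library; the host, port, and /enqueue path are hypothetical stand-ins for whatever endpoint your chosen queueing tool actually exposes.

```python
import http.client

def enqueue(line: str) -> bool:
    """POST one ~100-byte data line to the queue's HTTP endpoint."""
    conn = http.client.HTTPConnection("queue.example.com", 8080, timeout=2)
    try:
        conn.request("POST", "/enqueue", body=line.encode("utf-8"),
                     headers={"Content-Type": "text/plain"})
        return conn.getresponse().status == 200
    finally:
        conn.close()

enqueue("sensor1,1467812345.120,3.14")  # example data line
```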
I need to build a cross-platform (Linux/Windows) application that runs in the background on a server. It monitors the local system via network sockets, monitors CPU usage, uses some external commands to find out the state of the system, starts/stops non-responsive or faulty processes on the same server, reports results to a remote MySQL cluster, and exposes a web API and GUI so one can see what is happening and configure it.
If this process faults, it needs to auto-restart, send a mail notification, etc.
I know it may sound weird, but the app could be designed so that it runs, say, 100 monitoring threads (each check as a separate thread), none of which blocks the others. This is easier to implement than a single thread with all non-blocking I/O.
Note that simplicity of implementation matters more than coding philosophy here. It also matters because of the real-time requirements: each check must be performed every second and act instantly.
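As a sketch of that one-thread-per-check design in Python (run_check is a hypothetical stand-in for a real probe):

```python
import threading
import time

def run_check(check_id):
    pass  # hypothetical stand-in: one real probe goes here

def monitor(check_id):
    """Run one check every second, forever; exceptions never kill the thread."""
    while True:
        started = time.monotonic()
        try:
            run_check(check_id)
        except Exception as exc:
            print(f"check {check_id} failed: {exc}")
        # Sleep only the remainder of the 1-second period.
        time.sleep(max(0.0, 1.0 - (time.monotonic() - started)))

for i in range(100):
    threading.Thread(target=monitor, args=(i,), daemon=True).start()

threading.Event().wait()  # keep the main thread alive
```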
I have written such an app in C#, and it took me only one day. It uses Windows-specific stuff, so it is not portable to Mono at all, but it has now run for 6 months on 40 production systems and has never faulted, because it handles all exceptions properly. That is why I would prefer a language with nice try/catch statements like C#; it's just plain effective.
This Windows app also uses a standard WinForms/WPF interface for configuration and so on, but for Linux I guess I would need a web-based GUI.
Now I need to do the same for Linux, but I would prefer something that runs on Windows too, so both share at least part of the same code.
There are some performance requirements: the app receives UDP messages on 50 ports (it needs to make sure that the server is receiving the streams), each at 10 Mbit/s, and C# has no problem handling that. It really just checks whether the streams are on; it does not analyze the packets.
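As a sketch of that check in Python (BASE_PORT is a hypothetical placeholder; the real app would watch its 50 configured ports):

```python
import select
import socket
import time

BASE_PORT = 5000  # hypothetical; substitute the real configured ports
socks = []
for port in range(BASE_PORT, BASE_PORT + 50):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", port))
    s.setblocking(False)
    socks.append(s)

last_seen = {s: time.monotonic() for s in socks}

while True:
    readable, _, _ = select.select(socks, [], [], 0.1)
    now = time.monotonic()
    for s in readable:
        try:
            while True:
                s.recv(65535)  # drain; the payload itself is ignored
        except BlockingIOError:
            pass
        last_seen[s] = now
    for s, seen in last_seen.items():
        if now - seen > 1.0:
            print(f"stream on port {s.getsockname()[1]} silent for > 1 s")
            last_seen[s] = now  # reset so the alert is not repeated each loop
```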
What is the best architecture/language for designing and implementing this kind of app? For example, one could do a PHP GUI and a C/C++ (or Perl, or Python) monitoring backend, but to run PHP I would need to run Apache, which adds complexity.
Since these 10 Mbit/s streams must be handled so that exactly 1 second of missing UDP traffic is detected, I was wondering whether Java, with a garbage collector that likes to pause, is a good choice (these pauses seem proportional to memory usage, which would not be huge in this case). I guess it could be a viable option, but I don't know how to design such an app so that it runs as a proper background process as well as a web interface.
Do I need to run GlassFish or Tomcat? Do I need to use any libraries?
These servers have lots of memory and CPU.
It's mainly for managing availability in the clusters.
I used to program Java 10 years ago in Eclipse and NetBeans, but I have lots of time for learning :D I have both installed on my laptop; they are both very nice.
I would like to write an application, for my own interest, that graphically visualizes some network concepts. Basically I would like to show the output from tools like ping, traceroute and nmap.
The most obvious approach seems to be to use pipes to call out to these tools from my C program and process the information they return. However, I would like to avoid this heavy-handed approach if possible. My question is: is it possible to somehow link against these tools, or are there APIs that can be used to gain programmatic access instead? If so, is this behavior available on a tool-by-tool basis only?
One reason for wanting to do this is to keep everything in a single process/address space and to avoid depending on these external tools. For example, if I wrote an iPhone application, I would not be able to spawn processes to call out to the external tools at all.
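For reference, the pipe-based approach I would like to avoid looks something like this (sketched in Python for brevity, with Linux-style ping flags; the output parsing is naive and for illustration only):

```python
import subprocess

def ping_times(host, count=4):
    """Run the system ping and pull the per-packet times out of its output."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    # Lines look like: "64 bytes from ...: icmp_seq=1 ttl=56 time=12.3 ms"
    return [float(tok.split("=", 1)[1])
            for line in out.splitlines()
            for tok in line.split() if tok.startswith("time=")]

print(ping_times("example.com"))
```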
Thanks for any advice or suggestions.
The networking API on your platform of choice is essentially all you need. ping, traceroute and nmap don't do any magic; all they do is send and receive packets over the network.
I don't know of any pre-existing libraries though (not that I have looked either). If it comes to it, at least ping and traceroute are quite trivial to implement by hand.
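To illustrate, here is a rough sketch of a hand-rolled ICMP echo ("ping") in Python; it needs root for the raw socket, assumes a 20-byte IP header on the reply, and omits timeouts and reply-ID matching:

```python
import os
import socket
import struct
import time

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def ping(host: str) -> float:
    """Send one echo request; return the round-trip time in milliseconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.getprotobyname("icmp"))
    ident = os.getpid() & 0xFFFF
    payload = struct.pack("!d", time.time())           # send timestamp
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)  # type 8 = echo request
    header = struct.pack("!BBHHH", 8, 0,
                         checksum(header + payload), ident, 1)
    sock.sendto(header + payload, (host, 0))
    data, _ = sock.recvfrom(1024)
    sent = struct.unpack("!d", data[28:36])[0]  # 20 B IP hdr + 8 B ICMP hdr
    return (time.time() - sent) * 1000

print(f"{ping('127.0.0.1'):.2f} ms")
```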
It depends on the platform you're developing for. Windows, for example, has an ICMP API that you could use to implement a ping tool.
On the other hand, the source code for ping and traceroute is available on any Linux system, so you could use that (provided the license was compatible with your needs) as the basis of your own programs.
Finally, ping (ICMP) is not hard to implement, and traceroute builds on the same idea: it sends probes with increasing TTL values and reads the ICMP time-exceeded replies from each router along the path. It may be worth it to just roll your own implementations.
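A matching sketch of the classic UDP-probe traceroute (Linux-style, root required for the raw receiving socket; 33434 is the conventional starting probe port):

```python
import socket

def traceroute(host: str, max_hops: int = 30, port: int = 33434):
    dest = socket.gethostbyname(host)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv.settimeout(2.0)
        send.sendto(b"", (dest, port))
        try:
            # Intermediate routers answer with ICMP time-exceeded;
            # the destination answers with ICMP port-unreachable.
            _, addr = recv.recvfrom(512)
            print(ttl, addr[0])
            if addr[0] == dest:
                break
        except socket.timeout:
            print(ttl, "*")
        finally:
            send.close()
            recv.close()

traceroute("example.com")
```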
Most applications these days provide an API, be it Twitter, Gmail, Facebook, or millions of others.
I understand API design cannot be explained in just one answer, but I would like some suggestions on how to get started with it. Maybe a tutorial or book that builds an application and has some chapters on how to go about providing APIs for it. I'm mostly a Java developer (learning Groovy), but I am open to other languages too, if it is easier to get started with API design in that language.
As a side note, I used to wonder about the difference between an API and a web service, but as I now understand it, a web service is just one form of API.
I don't have any great resources; however, I want to stress that API really does mean Application Programming Interface: it is simply a mechanism for exposing your application to be consumed by others, be it from a script, a web service (SOAP or REST), Win32-API-style calls, and so on.
About 10 years ago, when we talked about APIs, it seemed everyone assumed all APIs were like Win32, and that was it. One of the more interesting ones I've worked on was an API for a PICK-based management system; in that case we wrote an XML processor in PICK and screen-scraped XML back and forth over a telnet session.
The first thing you need to decide is how you want to expose your data. Are you going to expose it over the web, or is your application a desktop application? How I would structure an API for cross-machine communication tends to differ from how I would structure one running in a single process, or even on a single machine.
I would also start by writing a test client. You have to understand how your API will be used first, and try to make it as simple as possible. If you dive right in with the implementation, you might lose perspective and make assumptions that a client developer would not.
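As a tiny sketch of that client-first habit (every name here, MonitorAPI and cpu_load, is a hypothetical stand-in): write the call sites you wish existed, and only then implement them.

```python
class MonitorAPI:
    """Imaginary API surface, shaped by what the client code needs."""

    def __init__(self, host: str):
        self.host = host

    def cpu_load(self) -> float:
        raise NotImplementedError("implement once the shape feels right")

# The test client: if this reads awkwardly, redesign before implementing.
api = MonitorAPI("server01")
try:
    print(f"cpu load on {api.host}: {api.cpu_load():.1f}%")
except NotImplementedError:
    pass
```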