How to know a packet of data is fully received in telnet?

I'm writing a toy MUD client which uses a TCP/IP socket to make a connection to a telnet server.
As a common feature in MUD clients, I should be able to run a bunch of regular expressions on the responses from the server and do stuff when they are triggered.
Now the problem arises when the response is long and arrives in 2 or more TCP/IP packets: the regular expressions won't match when I run them on the partial responses, as they are not complete yet (neither the first nor the second part matches alone).
So the question is: how do I know the server is done sending a chunk of data before running my regular expressions on it?

The short answer is: you don't.
TCP/IP is a stream protocol that has no notion of packets.
If your application layer protocol uses packets (most do), then you have two options:
use a transport layer that supports packets natively (UDP, SCTP,...)
add packetizing information to your data stream
The simplest way to add packetizing info is by adding delimiter characters (usually \n); obviously you cannot use the delimiter in the payload then, as it is already reserved for other purposes.
If you need to be able to transmit any character in the payload (so you cannot reserve a delimiter), use something like SLIP on top of TCP/IP.
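
To illustrate, here is a minimal sketch of SLIP-style byte stuffing over a stream, using the framing constants from RFC 1055 (the class and method names are invented for this example):

    import java.io.ByteArrayOutputStream;
    import java.util.ArrayList;
    import java.util.List;

    // SLIP-style framing (RFC 1055 constants): every frame ends with END, and
    // END/ESC bytes inside the payload are escaped so any byte value can be sent.
    public class SlipFraming {
        static final int END = 0xC0, ESC = 0xDB, ESC_END = 0xDC, ESC_ESC = 0xDD;

        // Turn one payload into a self-delimiting frame.
        static byte[] encode(byte[] payload) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (byte b : payload) {
                int v = b & 0xFF;
                if (v == END)      { out.write(ESC); out.write(ESC_END); }
                else if (v == ESC) { out.write(ESC); out.write(ESC_ESC); }
                else               { out.write(v); }
            }
            out.write(END); // frame delimiter
            return out.toByteArray();
        }

        // Feed received bytes in, get back the complete frames they contain.
        // (A real decoder would keep buffer and escape state across calls.)
        static List<byte[]> decode(byte[] received) {
            List<byte[]> frames = new ArrayList<>();
            ByteArrayOutputStream cur = new ByteArrayOutputStream();
            boolean escaped = false;
            for (byte b : received) {
                int v = b & 0xFF;
                if (escaped) {
                    cur.write(v == ESC_END ? END : v == ESC_ESC ? ESC : v);
                    escaped = false;
                } else if (v == ESC) {
                    escaped = true;
                } else if (v == END) {
                    frames.add(cur.toByteArray());
                    cur.reset();
                } else {
                    cur.write(v);
                }
            }
            return frames;
        }
    }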

You can keep a buffer, append the incoming data to it, and keep testing until you get a full response.
If the MUD is to be played (almost) exclusively through your client (not through bare telnet), you can add delimiters: keep the same buffer, but don't test blindly; test only when you receive a delimiter.
If there is a command you can send that has no gameplay effect but gets a constant reply from the server (e.g. a ping), you could use it as a delimiter of sorts.

You may be overthinking it. Nearly all muds delimit lines with LF, i.e. \n (some oddball servers use CRLF, \r\n, or even \n\r). So buffer your input and scan for the delimiter \n. When you find one, move the line out of the input buffer and then run your regexps.
A special case is the telnet command IAC GA, which some muds use to denote prompts. Read the Telnet RFC for more details, https://www.rfc-editor.org/rfc/rfc854 , and do some research on mud-specific issues, for example http://cryosphere.net/mud-protocol.html .
Practically speaking, with muds you will never have a problem waiting for a long line. If there's a lot of lag between mud and client, there's not a whole lot you can do about that.
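
A sketch of that buffering approach; the trigger pattern is a placeholder, not something from the question:

    import java.util.regex.Pattern;

    // Accumulates raw socket data and only runs the regexps on complete lines,
    // no matter how the server's output was split across TCP segments.
    public class LineBuffer {
        private final StringBuilder buf = new StringBuilder();
        private static final Pattern TRIGGER = Pattern.compile("You are hungry"); // placeholder

        // Call with whatever chunk the socket handed you.
        public void feed(String chunk) {
            buf.append(chunk);
            int nl;
            while ((nl = buf.indexOf("\n")) >= 0) {
                String line = buf.substring(0, nl).replace("\r", "");
                buf.delete(0, nl + 1); // move the line out of the input buffer
                if (TRIGGER.matcher(line).find()) {
                    System.out.println("trigger matched: " + line); // fire the trigger action
                }
            }
            // whatever remains in buf is an incomplete line; keep it for the next chunk
        }
    }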

Related

Handling Telnet negotiation

I'm trying to implement a Telnet client using C++ with Qt for the GUI.
I have no idea how to handle the telnet negotiations.
Every telnet command is preceded by IAC, e.g.
IAC WILL SUPPRESS_GO_AHEAD
The following is how I handle the negotiation:
Search for the IAC character in the received buffer
According to the command and option, respond to the request
My questions are described as follows:
It seems that the telnet server won't wait for a client response after a negotiation command is sent,
e.g. it sends two or more commands without waiting for a client response:
IAC WILL SUPPRESS_GO_AHEAD
IAC WILL ECHO
How should I handle such a situation? Handle both requests, or just the last one?
What would the option values be if I don't respond to the request? Are they set to defaults?
Why isn't the IAC character (255) treated as data instead of as a command?
Yes, it is allowed to send out several negotiations for different options without synchronously waiting for a response after each of them.
Actually, it's important for each side to try to continue (possibly after some timeout, if you did decide to wait for a response) even if it didn't receive a reply. There are legitimate situations according to the RFC in which there shouldn't or mustn't be a reply, and the other side might also just ignore the request for whatever reason; you don't have control over that.
You need to consider both negotiation requests the server sent, as they are both valid requests (you may choose to deny one or both, of course).
I suggest you handle both of them (whatever "handling" means in your case) as soon as you notice them, so as not to risk getting the server stuck if it decides to wait for your replies.
One possible way to go about it is presented by Daniel J. Bernstein in RFC 1143. It uses a finite state machine (FSM) and is quite robust against negotiation loops.
A compliant server (the same goes for a compliant client) defaults all negotiable options to WON'T and DON'T (i.e. disabled) at the start of the connection and doesn't consider them enabled until a DO or WILL request has been acknowledged by a WILL or DO reply, respectively.
Not all servers (or clients, for that matter) behave properly, of course, but you cannot anticipate all the ways a peer might misbehave, so just assume that all options are disabled until enabling them was requested and the reply was positive.
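
A stripped-down sketch of that bookkeeping (the supported option numbers are examples; RFC 1143's Q method additionally tracks requests that are still in flight, so treat this as an illustration, not a complete implementation):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Tracks negotiable telnet options, all disabled by default, and answers each
    // incoming DO/WILL/DONT/WONT as soon as it is parsed out of the input stream.
    public class TelnetNegotiator {
        static final int IAC = 255, DONT = 254, DO = 253, WONT = 252, WILL = 251;
        private final Map<Integer, Boolean> enabled = new HashMap<>(); // absent = disabled
        private final Set<Integer> supported = Set.of(1, 3); // ECHO, SUPPRESS-GO-AHEAD (examples)
        private final OutputStream out;

        TelnetNegotiator(OutputStream out) { this.out = out; }

        // Handle one negotiation triple (IAC verb option); call once per request found.
        void onCommand(int verb, int option) throws IOException {
            boolean cur = enabled.getOrDefault(option, false); // everything starts disabled
            boolean accept = supported.contains(option);
            switch (verb) {
                case WILL -> { if (!accept) reply(DONT, option);
                               else if (!cur) { enabled.put(option, true); reply(DO, option); } }
                case DO   -> { if (!accept) reply(WONT, option);
                               else if (!cur) { enabled.put(option, true); reply(WILL, option); } }
                case WONT -> { if (cur) { enabled.put(option, false); reply(DONT, option); } }
                case DONT -> { if (cur) { enabled.put(option, false); reply(WONT, option); } }
            }
            // Requests that match the mode we are already in get no acknowledgement
            // (RFC 854), which is what prevents endless negotiation loops.
        }

        private void reply(int verb, int option) throws IOException {
            out.write(new byte[]{(byte) IAC, (byte) verb, (byte) option});
            out.flush();
        }
    }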
I'll assume here that what you're actually asking is how the server is going to send you a byte of 255 as data without you misinterpreting it as an IAC control sequence (and vice versa, how you should send a byte of 255 as data to the server without it misinterpreting it as a telnet command).
The answer is simply that instead of a single byte of 255, the server (and your client in the opposite direction) sends IAC followed by another byte of 255, so in effect doubling all values of 255 that are part of the data stream.
Upon receiving an IAC followed by 255 over the network, your client (and the server in the opposite direction) must replace that with a single data byte of 255 in the data stream it returns.
This is also covered in RFC 854.
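
A sketch of that doubling in isolation, applied after negotiation sequences have been stripped from the inbound stream (method names are invented):

    import java.io.ByteArrayOutputStream;

    // Escapes/unescapes the telnet IAC byte (255) in the data stream, per RFC 854.
    public class IacEscaping {
        static final int IAC = 255;

        // Before sending data: double every 255 in the payload.
        static byte[] escape(byte[] data) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (byte b : data) {
                out.write(b);
                if ((b & 0xFF) == IAC) out.write(b); // IAC IAC = one literal 255
            }
            return out.toByteArray();
        }

        // After receiving data: collapse IAC IAC back into a single 255.
        static byte[] unescape(byte[] data) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (int i = 0; i < data.length; i++) {
                out.write(data[i]);
                if ((data[i] & 0xFF) == IAC && i + 1 < data.length
                        && (data[i + 1] & 0xFF) == IAC) {
                    i++; // skip the doubled byte
                }
            }
            return out.toByteArray();
        }
    }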

UDP Client and Server Buffer Agreement

Hi, I am writing a program that will send a file from client to server over a UDP socket, using different packet sizes, for example 512 B, 1 KB and 2 KB, and I don't want to use a fixed buffer size in the receiver (server). I need some code in Java that will allow both server and client to agree upon a packet size before the transfer starts. Many thanks.
Don't forget that UDP packets may be fragmented, duplicated and lost. There is a whole bunch of things to take care of, starting with lost-packet retransmission.
I hate to give a "don't do this" kind of answer, but for this one, just use TCP. And if you want some user-level "packets", you can have them with TCP as well (prefix each one with its length; that's enough).
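
A sketch of that length-prefixing in Java; the 4-byte big-endian prefix written by DataOutputStream.writeInt is one common choice, not anything mandated:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // User-level "packets" on top of TCP: every message is preceded by its length.
    public class LengthPrefixFraming {

        // Sender: write the length, then the payload.
        static void writePacket(DataOutputStream out, byte[] payload) throws IOException {
            out.writeInt(payload.length); // 4-byte big-endian length prefix
            out.write(payload);
            out.flush();
        }

        // Receiver: read the length, then exactly that many bytes.
        static byte[] readPacket(DataInputStream in) throws IOException {
            int len = in.readInt();
            byte[] payload = new byte[len];
            in.readFully(payload); // blocks until the whole packet has arrived
            return payload;
        }
    }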

How to distinguish data sent over a TCP connection?

Hi guys, I'm making client-server software and this is my first question.
I'd like to ask: how do I distinguish the data sent over a TCP connection?
Here is my situation: we need to determine what kind of data we receive over the connection.
For example, we have 3 ListViews in our form:
the first ListView is for the biodata of the client,
the second ListView is for the values obtained from the clients,
and the third ListView is for the pictures obtained from the clients.
So we have 3 kinds of data that must be processed, but in fact we only have 1 connection in our system.
This is where I'm confused: how do I determine whether the data we received is for the first, the second or the third ListView?
Remember, the data for the third ListView is a picture received over the TCP connection.
How do we do that with 1 connection in our system?
Do I have to make 3 connections, one for each ListView?
With socket communication, both the client and the server must use the same agreed-upon protocol so that they can understand each other. There are many standard protocols that have already been created, so for most tasks, creating your own protocol is unnecessary. However, if you must, you can always define your own protocol. The nature of your protocol will obviously depend completely on your specific needs, so it would be impossible to tell you what the protocol should be.

Generally speaking, though, the first thing your protocol must define is how to know where each complete message begins and ends. This is often accomplished by separating each message with a delimiter (e.g. new line, EOF, null). As Francois recommended, you could alternatively specify the length of the message at the beginning of each message. Within each message, you then will need a header section which, among other things, would specify the type (the format) of the data stored in the body of the message.
A simple implementation might be to send each message as a JSON or XML document. Doing so makes it very easy to define and modify the format of the message.
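
To make the idea concrete, here is a dependency-free sketch that uses a one-byte type tag plus a length prefix instead of a JSON document; the tag values and handler bodies are invented for the example:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    // One TCP connection, three kinds of data: a type byte in each message's
    // header tells the receiver which ListView the body is destined for.
    public class TaggedMessages {
        static final byte TYPE_BIODATA = 1, TYPE_VALUE = 2, TYPE_PICTURE = 3;

        static void writeMessage(DataOutputStream out, byte type, byte[] body) throws IOException {
            out.writeByte(type);       // which kind of data follows
            out.writeInt(body.length); // where the message ends
            out.write(body);
            out.flush();
        }

        static void readMessage(DataInputStream in) throws IOException {
            byte type = in.readByte();
            byte[] body = new byte[in.readInt()];
            in.readFully(body);
            switch (type) { // dispatch to the right ListView (handlers are placeholders)
                case TYPE_BIODATA -> System.out.println("biodata: " + new String(body, StandardCharsets.UTF_8));
                case TYPE_VALUE   -> System.out.println("value: " + new String(body, StandardCharsets.UTF_8));
                case TYPE_PICTURE -> System.out.println("picture: " + body.length + " bytes");
                default -> throw new IOException("unknown message type " + type);
            }
        }
    }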
However, unless you really need to, I would recommend using one of the built-in communication frameworks that are provided with .NET. For simple tasks, often a simple asmx web service is sufficient. For more complex tasks, often WCF is a good choice. An asmx web service uses SOAP via HTTP via TCP/IP. WCF uses SOAP, but the lower level connection is configurable so it doesn't have to use TCP/IP, but it can easily do so.

Netty SSL mode strange behavior

I am trying to understand why Netty's SSL mode works in such a strange way.
The problem is the following: when any SSL client (an HTTPS browser, a Java client using SSL, or any other SSL client application) connects to the Netty server, I initially get the full message, where I can correctly recognize the protocol used; but as long as the channel stays connected, all following messages have a strange structure, which does not happen in non-SSL mode.
As an example, in the messageReceived method when an HTTPS browser connects to my server:
I have used PortUnificationServerHandler to switch protocols (without using Netty's HTTP handler; this is just an example, because I use SSL mode for my own protocol too).
the first message is OK; I get the full header, beginning with GET or POST
then I send a response...
the second message is only one byte long and contains "G" or "P" only
the third message is then the rest, beginning with either ET or OST, followed by the rest of the HTTP header and body
here again my response follows...
the fourth message is again one byte long and again contains only one byte
the fifth message is again the rest... and the game goes on this way
It does not matter which subprotocol is used, HTTP or anything else: after the first message, I first get a single byte, and in the next message the rest of the request.
I wanted to build a kind of proxy: receive the SSL data and forward it unencrypted to another listener. But when I forward it directly, without waiting for the full request, the target listener (an HTTP server, for example) cannot handle such data; if the target gets only a single byte first (even if the next message contains the rest), the channel gets closed immediately and the request is abandoned.
OK, my first thought was the following: cache the first byte temporarily, wait for the next message, join the two messages, and only then respond. That works fine, but sometimes it is not the correct approach, because sometimes that single byte really is the last byte of a message; if I cache it and wrongly wait for a next message, I can wait forever, because the HTTPS browser expects a response at that point and does not send any more data.
Now the question: is it possible to fix this problem with SSL? Maybe there are special settings that influence this behavior?
I want the fully joined message at once, as is, and not first a single byte and then the rest.
Can you please confirm that newer Netty versions behave the same way when using PortUnificationServerHandler (without the Netty HTTP handler; try some handler of your own)?
Is this behavior really OK? I do not believe it was designed to work this way.
What you're experiencing is likely due to the countermeasures against the BEAST attack: to defend against it, clients split each block of application data into a 1-byte TLS record followed by a record with the remaining bytes (so-called 1/n-1 record splitting).
This isn't a problem. What seems to be the problem is that you're assuming that you're meant to read data in terms of messages/packets. This is not the case: TCP (and TLS/SSL) are meant to be used as streams of continuous data. You should keep reading data while data is available. Where to split incoming data where it's meaningful is guided by the application protocol. For HTTP, the indications are the blank line after the header and the Content-Length or chunked transfer encoding for the entity.
If you define your own protocol, you'll need a similar mechanism, whether you use plain HTTP or SSL/TLS. Assuming you don't need it only works by chance.
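
In Netty, this is exactly what a frame decoder in front of your protocol handler is for: it hides how SSL/TCP happened to chunk the bytes. A sketch assuming Netty 4's ByteToMessageDecoder and a 4-byte length prefix (in Netty 3, which the question appears to use, FrameDecoder plays the same role):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.ByteToMessageDecoder;
    import java.util.List;

    // Reassembles length-prefixed frames no matter how the transport chunked them.
    // Netty calls decode() whenever data arrives; we only emit complete frames.
    public class LengthFieldDecoder extends ByteToMessageDecoder {
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
            if (in.readableBytes() < 4) {
                return; // not even the length prefix yet; wait for more bytes
            }
            in.markReaderIndex();
            int length = in.readInt();
            if (in.readableBytes() < length) {
                in.resetReaderIndex(); // frame still incomplete; retry on the next read
                return;
            }
            out.add(in.readBytes(length)); // one complete message for the next handler
        }
    }

Netty in fact ships this logic as LengthFieldBasedFrameDecoder, and LineBasedFrameDecoder does the same job for line-delimited protocols.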
I experienced this issue and found it was caused by using JDK 1.7. Moving back to JDK 1.6 solved it. I did not have time to investigate further, but have assumed for now that the SSLEngine implementation changed in the JDK. I will investigate further when time permits.

Sending data packets over udp

I am creating an app that acts as a remote control for a lighting console and I need to send commands to the console over UDP. The protocol that I am using has its own custom header. How do I create the data packet with header and message to send over UDP? Thanks!
If you are trying to test the protocol without writing any code, I suggest you use Wireshark.
The probably most powerful solution you can use is scapy, a Python module that allows very advanced packet crafting and manipulation. See its documentation or search the interwebs for examples to find out how to generate arbitrary packets and transmit them.
If you can't use Python for some reason, there are multiple command-line tools for packet generation, one other example being nping (documentation), the brother of nmap, the popular network scanner. nping has options to generate UDP packets with arbitrary payloads, which can be specified as a hex string, for example.
There may be other options as well. It would help to know more details, like the operating system you're on, where you get your input data from, and in which format.
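
If you do end up writing it in code rather than using those tools, the whole trick is to assemble the header bytes and the message into one buffer and hand it to a UDP socket. A minimal Java sketch with a made-up 4-byte header and placeholder address, port and opcode (your console's protocol defines the real layout):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    // Builds [custom header][message] into one buffer and sends it as a UDP datagram.
    public class UdpCommandSender {
        public static void main(String[] args) throws Exception {
            byte[] message = "set channel 1 @ 50%".getBytes(StandardCharsets.US_ASCII); // placeholder command
            // Hypothetical header: 2-byte magic, 1-byte version, 1-byte opcode.
            ByteBuffer packet = ByteBuffer.allocate(4 + message.length);
            packet.putShort((short) 0xABCD); // magic number (made up)
            packet.put((byte) 1);            // protocol version (made up)
            packet.put((byte) 0x10);         // opcode (made up)
            packet.put(message);

            try (DatagramSocket socket = new DatagramSocket()) {
                InetAddress console = InetAddress.getByName("192.0.2.10"); // example address
                socket.send(new DatagramPacket(packet.array(), packet.position(), console, 9999)); // example port
            }
        }
    }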