Synchronizing a variable between client and server - error-handling

Say I've got a client-side counter variable. When I send a request to the server, I send this variable's value, then increment it by one (for the next request to the server). The server keeps track of this counter independently, and checks to make sure that the count sent by the client is 1 more than its own (server-side) copy of the count. This is all fine and dandy, but consider the following situation:
The client sends, say, 23 to the server.
The server receives 23, validates it, and increments its own counter to 23.
The server returns the All-Okay code to the client
-BUT-
Along the way from the server to the client, the return code gets corrupted. The client thus thinks that the server did not update its counter, and so leaves the client-side counter at 23. From this point onwards, the client and server are out of sync.
Does anybody know any robust schemes that would work in the face of such possible corruption/errors?
Thanks,
Cameron

Instead of using a linearly increasing counter, you can use a random "nonce" value with a good 64 bits of entropy or more. When the server receives a request from the client, the server checks whether the nonce matches the last one it sent the client. If so, the request is processed and the server generates a new random nonce value to send in the response.
The server can keep the last two nonce values it sent to the client. If the client sends the older value, the server assumes that the most recent message to the client might have been lost.
The above method assumes that your goal is to prevent two different clients from using the same credentials to communicate with the server. The advantage of using the nonce method is that the next value can't easily be predicted.
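For illustration, a minimal Python sketch of that server-side bookkeeping, assuming in-memory state and a simple dict-based response (the class and method names are made up for the example):

```python
import secrets

class NonceTracker:
    """Keeps the current nonce plus the previous one, so a lost
    response can be detected when the client replays the old value."""

    def __init__(self):
        self.current = secrets.token_hex(8)   # 8 bytes = 64 bits of entropy
        self.previous = None

    def handle_request(self, client_nonce):
        if client_nonce == self.current:
            # Normal case: rotate to a fresh nonce for the next exchange.
            self.previous = self.current
            self.current = secrets.token_hex(8)
            return {"status": "ok", "next_nonce": self.current}
        if client_nonce == self.previous:
            # Our last response probably never arrived; resend the current nonce.
            return {"status": "ok", "next_nonce": self.current}
        return {"status": "rejected"}
```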

The easy answer is to make either the client or the server the owner of the resource, instead of having each keep its own copy of it.
If you're using a reliable protocol like TCP though, you don't have to worry about the message not getting to the client.
A good thing to follow when doing client/server work, though, is to make all operations idempotent. That is to say, calling each function once or many times has the same effect. In this case you wouldn't have an 'increment' function at all. Instead you'd have a 'set' function.
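A quick sketch of the difference, using a toy counter resource in Python (names are illustrative):

```python
class CounterServer:
    def __init__(self):
        self.value = 0

    def increment(self):
        # Not idempotent: if the response is lost and the client retries,
        # the counter is bumped twice.
        self.value += 1
        return self.value

    def set_value(self, new_value):
        # Idempotent: applying the same 'set' once or many times
        # leaves the counter in the same state.
        self.value = new_value
        return self.value
```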

How about just not having the client update its local copy until the server acknowledges that it has updated the server-side counter?
So:
Client calculates next value
Client sends next value to server
Server checks that next value is valid (not already seen)
Server updates counter to next value (if needed)
Server notifies client that next value has been received
Client updates local counter to next value
If the server does not receive the client's update, the client merely resends the calculated next value. If the client does not receive the confirmation, it resends the next value; the server, having already seen it, does not update again but merely acknowledges. Eventually the client sees the server's message and continues. This covers the case of lost messages.
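As a rough sketch of the client side of that exchange in Python, assuming some transport call that can time out (`server_send` here is a placeholder, not a real API):

```python
def sync_counter(client_state, server_send, retries=5):
    """Client side of the exchange: the local counter is advanced only after
    the server's acknowledgement actually arrives.  `server_send` stands in
    for whatever transport you use and may raise TimeoutError."""
    next_value = client_state["counter"] + 1
    for _ in range(retries):
        try:
            reply = server_send({"counter": next_value})
        except TimeoutError:
            continue                                   # lost request or lost ack: resend
        if reply.get("status") in ("ok", "already-seen"):
            client_state["counter"] = next_value       # commit only after the ack
            return True
    return False
```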
If you are concerned about corruption, calculate a checksum on the message and send that as well. Recalculate the checksum on receipt and compare it to the one sent. Generally the network stack does this for you, though, so I wouldn't worry too much about it unless you are running your own protocol.
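If you do roll your own protocol, a minimal sketch of wrapping each message with a CRC-32 checksum might look like this (the JSON framing is just an assumption for the example):

```python
import json
import zlib

def wrap(message: dict) -> bytes:
    payload = json.dumps(message, sort_keys=True).encode()
    envelope = {"payload": payload.decode(), "crc": zlib.crc32(payload)}
    return json.dumps(envelope).encode()

def unwrap(raw: bytes) -> dict:
    envelope = json.loads(raw)
    payload = envelope["payload"].encode()
    if zlib.crc32(payload) != envelope["crc"]:
        # Corrupted in transit: caller should ask the peer to resend.
        raise ValueError("checksum mismatch")
    return json.loads(payload)
```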

Related

About losing HTTP Requests

I have a server to which my client sends an HTTP GET request with some values. The server on its end simply stores these values in a database.
Now, I am noticing that sometimes these values do not show up in the database. One of the following could have happened:
The client never sent it
The server never received it
The server failed in writing to the database
My strongest suspicion is reason 2 - but I am unable to explain it completely. Since this is an HTTP request (which means there is TCP underneath), reliable delivery of the GET request should be guaranteed, right? Is it possible that even though I send a GET request to the server, it was never received by the server? If yes, what is TCP doing there?
Or, can I confidently assert that if the server is up and running and everything sent to the server is written to the database, then the absence of the details of the GET request in the database means the client never sent it?
Not sure if the details will help - but I am running a Tomcat server and I am just sending a name-value pair through the GET request.
There are a few things you seem to be missing. First of all, yes, if TCP finishes successfully, you pretty much have a guarantee that your message (i.e. the TCP payload) has reached the other side: TCP ensures that it will take care of lost packets and the order in which packets arrive. However, this is not universally failproof, as there are still things beyond the powers of TCP (think of a physical disconnect by cutting through an ethernet cable). There is also no assertion regarding the syntactical correctness of the protocol "above". Any checks beyond delivering a bit-perfect copy are simply not TCP's concern.
So, there is a chance that the requests issued by your client are faulty, or that they are indeed correct but not parsed correctly by your server. The former strikes me as more likely than the latter, as Tomcat is a very mature piece of software. I think it would help tremendously if you recorded and analysed some of your generated traffic with, e.g., Wireshark.
You do not really mention what database you are using, but there are some that sacrifice ACID compliance in favour of increased write speed. The nature of these databases means you can never be entirely sure whether something actually got written to disk or is still residing in some buffer in memory. Should you happen to use such a database, this would be another line of investigation.
Programmatically, I advise you take the following steps when dealing with HTTP traffic:
Did writing to the socket finish without error?
Could a response be read from the socket?
Does the response carry a code in the 2xx range (indicating a successful operation)?
If any of these fail, you should really log something (see the sketch below).
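A hedged sketch of those three checks in Python using the `requests` library (the URL and parameter names are placeholders for your own endpoint):

```python
import logging
import requests

log = logging.getLogger("uploader")

def send_value(name, value):
    try:
        # URL and parameter names are placeholders for your endpoint.
        resp = requests.get("http://example.com/store",
                            params={"name": name, "value": value},
                            timeout=5)
    except requests.RequestException as exc:
        log.error("request never completed: %s", exc)   # checks 1 and 2 failed
        return False
    if not 200 <= resp.status_code < 300:               # check 3
        log.error("server answered %s: %s", resp.status_code, resp.text[:200])
        return False
    return True
```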
On a related note, what you are doing there does not call for the GET method but for POST, as you are changing application state. Consider it a nice-to-have ;)
Without knowing the specifics, you can break it down into two parts: the HTTP request and the DB write. The client will receive a 200 OK response from the server when its GET request has been acknowledged. I've written code under Tomcat to connect to a MySQL DB using DAO; in the case of a failure an exception would be thrown and logged. Whichever method you're using, you'll want to figure out how failures are logged.

Long polling on a penny auction site?

On a penny auction site, there are a few fundamental requests that happen over time, namely:
Bidding request (when someone places a bid)
Timer updates
Leading bidder updates
I am trying to understand long polling a bit better and I'm stuck with this. As far as I know, long polling is there to reduce Ajax requests, i.e. by only having one for visual updates and one for actions. So, therefore:
bidding request (to place bids) will remain as is, but all the visual update requests will be combined into one "long poll" request, right?
If the user connects to the site for the first time, he will request the current state of the page, also passing in the last state of the page he was told about. The server will compare it with what the state should be, and if they differ, it will pass the new state back to the user, correct?
When passing the state back, the LONG POLL will effectively stop, the screen will be updated, and a new LONG POLL will be started, correct?
Is this understanding correct so far?
If that is so, how will this in any way decrease the number of requests to the backend if the server still has to compare the state?
How will this help if the page is opened in 50 different windows by one user?
Long polling is used to simulate a connection in which the server pushes data to the client (rather than what is actually happening - which is the client requesting the information from the server). Basically the client requests data from the server, but rather than returning data to the client immediately the server 'holds' the request open - it can then return data to the client at a later time point - which can be used to simulate the server updating the client in 'real time'.
So in your example of an auction site, the client might long-poll the server for an item's bid amount. The server would hold this request open, and when the bid value on that item changes it can return the updated amount to the client. This can be used to give the impression of the server updating the client as the bid amount changes.
As far as requests to the server go, this very much depends on how it is implemented. Obviously using long polling will reduce the number of requests made to the server compared with, say, getting the client to issue a new 'standard' request every second to check for updates. Multiple instances of the client will still result in multiple requests to the server, and moreover the server still has to deal with the overhead of holding the long-polling requests open and responding to them when appropriate. Apparently different servers, and server architectures, deal with this more effectively than others.
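For illustration, a minimal client-side long-poll loop sketched in Python (a browser would do the same with Ajax; the URL, parameters and JSON fields are assumptions, not a real API):

```python
import requests

def watch_item(item_id, last_bid=None):
    """Client side of a long poll: each request is held open by the server
    until the bid changes or the poll window expires."""
    while True:
        try:
            # URL, parameters and response fields are illustrative only.
            resp = requests.get("http://example.com/poll",
                                params={"item": item_id, "since": last_bid},
                                timeout=60)        # allow the server to hold the request
        except requests.Timeout:
            continue                               # nothing changed this window; poll again
        if resp.ok:
            last_bid = resp.json()["bid"]
            print("new leading bid:", last_bid)    # update the page here
```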

Netty SSL mode strange behavior

I am trying to understand why Netty's SSL mode works in such a strange way.
The problem is the following: when any SSL client (an HTTPS browser, a Java client using SSL, or any other SSL client application) connects to the Netty server, the first message arrives complete, so I can correctly recognize the protocol used. But as long as the channel stays connected, the following messages arrive with a strange structure, which does not happen in non-SSL mode.
As an example, here is what happens in the messageReceived method when an HTTPS browser connects to my server.
I have used PortUnificationServerHandler to switch protocols (without using Netty's HTTP handler; that is just an example, because I use SSL mode for my own protocol too):
The first message is fine: I get the full header, beginning with GET or POST.
Then I send my response...
The second message is only one byte long and contains only "G" or "P".
The third message is then the rest, beginning with either ET or OST, followed by the remainder of the HTTP header and body.
Here again follows my response...
The fourth message is again only one byte long.
The fifth message is again the rest... and the game continues this way.
It does not matter which sub-protocol is used, HTTP or anything else: after the first message I first receive a single byte, and in the next message the rest of the request.
I wanted to build a sort of proxy: receive the SSL data and forward it unencrypted to another listener. But when I forward it directly, without waiting for the full request, the target listener (an HTTP server, for example) cannot handle such data; if the target receives only one byte first (even if the next message contains the rest), the channel is closed immediately and the request is abandoned.
OK, the first thought would be the following: cache the first byte temporarily, wait for the next message, join the two, and only then respond. That works fine, but sometimes it is not the correct approach, because that one byte is sometimes really the last byte of the message; if I cache it and wrongly wait for another message, I can wait forever, because the HTTPS browser expects a response at that point and does not send any more data.
Now the question: is it possible to fix this problem with SSL? Maybe there are special settings that influence this behavior?
I want the fully joined message at once, as is, not first one byte and then the rest.
Can you please confirm that with newer Netty versions you see the same behavior when using PortUnificationServerHandler (but without Netty's HTTP handler; try a handler of your own)?
Is this behavior really OK? I do not believe it was designed to work this way.
What you're experiencing is likely to be due to the countermeasures against the BEAST attack.
This isn't a problem. What seems to be the problem is that you're assuming that you're meant to read data in terms of messages/packets. This is not the case: TCP (and TLS/SSL) are meant to be used as streams of continuous data. You should keep reading data while data is available. Where to split incoming data where it's meaningful is guided by the application protocol. For HTTP, the indications are the blank line after the header and the Content-Length or chunked transfer encoding for the entity.
If you define your own protocol, you'll need a similar mechanism, whether you use plain HTTP or SSL/TLS. Assuming you don't need it only works by chance.
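As a sketch of what such a framing mechanism looks like, here is a minimal Python version that accumulates bytes until a complete HTTP-style message (headers plus Content-Length body) is available; a Netty frame decoder would do the same job on the Java side, and chunked transfer encoding is ignored here for brevity:

```python
def extract_message(buffer: bytearray):
    """Return one complete HTTP-style message from the buffer, or None if
    more bytes are needed.  Works the same whether the transport delivered
    the request in one read or one byte at a time."""
    header_end = buffer.find(b"\r\n\r\n")
    if header_end == -1:
        return None                      # header not complete yet
    header = bytes(buffer[:header_end])
    length = 0
    for line in header.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            length = int(value.strip())
    total = header_end + 4 + length
    if len(buffer) < total:
        return None                      # body not complete yet
    message = bytes(buffer[:total])
    del buffer[:total]                   # leave any following bytes in place
    return message
```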
I had experienced this issue and found it was caused by using JDK 1.7. Moving back to JDK 1.6 solved it. I did not have time to investigate further, but have assumed for now that the SSLEngine implementation has changed in the JDK. I will investigate further when time permits.

How can a low-power microcontroller authenticate itself against a server?

Currently I am working on an embedded project. The client side is an 8-bit MCU and the server side is a computer.
As part of the goal, I want to minimize the chance that people copy our product. During the initialization phase, the server sends its serial number to the client, the client does some simple calculation with the serial number, and then sends the result back to the server. The server checks the result against a pre-calculated, hardcoded value; if they match, the client is authentic.
The problem is that the calculated serial number sent back to the server is always fixed. Any copycat company can figure it out quite easily with a logic analyzer. I want the transmitted serial number to look like random bits from time to time, but still be decryptable back to its original value. A good example is AES encryption (notice that every time you press the Encrypt It button a seemingly random text is generated, but when you decrypt it, it reverts to the original text).
Due to the ROM/RAM and processing power limitations of an 8-bit MCU, I can't fit a complete AES routine in it, so AES is not an option. Is there an easy and efficient algorithm just to randomize the transmission?
Use a key pair. On initialization:
Client tells server "I am online"
Server encrypts a verification message, which only the client will be able to decode
Client sends back the decrypted message
There should be no need for the server's key to be hardcoded - it can be generated based on a timestamp (only an answer within an acceptable range is accepted) or the codes can be generated on an as-needed basis with a timeout to prevent them from being stored for a long term.
Have the server send either a monotonically-increasing counter or a timestamp to the client, alongside the serial number. The client then includes that in the calculation it performs.
Because the server always sends a different request, the response will always be different (of course, if the market is lucrative enough your competitors can always disassemble your MCU code and figure out how to replicate it, but there's really no stopping that).
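A minimal sketch of that exchange, written in Python only to show the protocol shape (on the MCU you would implement the same idea in C with a lightweight keyed hash or cipher); the secret, the field names, and the use of HMAC-SHA256 are assumptions for the example, not a prescription:

```python
import hashlib
import hmac
import os

SECRET = b"device-specific-secret"        # shared key, illustrative value

def server_make_challenge() -> bytes:
    # A timestamp or a monotonically increasing counter works just as well;
    # the only requirement is that the value never repeats.
    return os.urandom(8)

def client_response(serial_number: bytes, challenge: bytes) -> bytes:
    # The reply depends on the challenge, so it looks different every time
    # even though the serial number is fixed.
    return hmac.new(SECRET, serial_number + challenge, hashlib.sha256).digest()

def server_verify(serial_number: bytes, challenge: bytes, reply: bytes) -> bool:
    expected = hmac.new(SECRET, serial_number + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, reply)
```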
A different idea might be to require the 8-bit controller to send a CRC of the date, time and serial number to the server. The server can verify it is a unique serial and send back a CRC with the date, time and an authorization code.
You might also look into the rolling-code algorithms used for garage door openers to see if they could be applied to your application.

Is it possible to have asynchronous processing?

I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions of this implementation at the server end. Basically what I need is this:
1. client connects to server. I maintain the socket and metadata about the socket. The metadata describes what updates need to be sent to this client
2. server process now waits for new client connections
3. One other process will have the list of all the sockets opened and will go through each of them and send the updates if required.
Can we do something like this in an Apache module:
1. An Apache process gets the new connection. It maintains the state for the connection, keeps the state in some global memory, and returns to the root process to signify that it is done, so that the root process can accept new connections
2. Although the Apache process has returned its status to the root process, it also keeps executing in parallel, going through its global store and sending updates to the client, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it
2. Asynchronously wait for new connections while at the same time processing the previous connections?
This is a complicated and inefficient model of updating. Your server will try to update clients that have closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, for continuous updates ajax is used in a polling model. The client has a javascript timer that when it fires, hits a service that provides updated data. The client continues to get updates at regular intervals without having to write an apache module.
Would this model work for your scenario?
More reasons to opt for poll instead of push
Periodic_Refresh
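For illustration, a hedged sketch of that plain polling loop (shown in Python; in the browser it would be a JavaScript timer hitting the same service, and the endpoint name is a placeholder):

```python
import time
import requests

def apply_update(data):
    print("update:", data)                 # placeholder for updating the page

def poll_updates(interval=1.0):
    """Plain periodic polling: ask the service for the latest data every
    `interval` seconds, whether or not anything has changed."""
    while True:
        # The endpoint is a placeholder for your update service.
        resp = requests.get("http://example.com/updates", timeout=5)
        if resp.ok:
            apply_update(resp.json())
        time.sleep(interval)
```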
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do the improved polling:
javascript connects to Apache and asks for an update;
if there's no updates, then instead of answering immediately the module uses SUSPENDED;
some time later, after an update or a timeout happens, callback fires somewhere;
callback gives an update (or a "no updates" message) to the client and resumes the connection;
client goes to step 1, repeating the poll which with Keep-Alive will use the same connection.
That way the number of roundtrips between the client and the server can be decreased and the client receives the update immediately. (This is known as Comet's Reverse Ajax, AFAIK).