I'm pondering a client-server auth protocol based on sending and validating a nonce|HMAC(nonce|datetime, shared_secret). I want to allow for small discrepancies between machine times.
I want to introduce a datetime so that nonces can't be reused indefinitely, but I don't want to store a list of used nonces on the server. However, one can't expect machine clocks to be in perfect sync. If I round the time to, say, nearest five minutes, that would cause a false negative if the client's time is 10:59 and the server time is 11:01. So, basically, I'm looking for a way to uniquely identify a time interval that won't be subject to rollover.
Is there a standard solution to this?
So here's what I was hoping I could avoid:
nonce/HMAC combinations that last forever (i.e. replay attacks)
storing nonces on the server between auth requests
more than one round trip (e.g. challenge/response)
If there is no solution, that's also a valid answer. I'll go to one of those approaches.
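One common workaround for the boundary false negative (a sketch, not a named standard): instead of MACing the raw datetime, MAC the interval index floor(t / window), and have the server accept the current interval plus its immediate neighbours. The window length and the token layout below are assumptions for illustration.

```python
import hashlib
import hmac

WINDOW = 300  # seconds per interval; the width is an assumption, tune to expected skew

def make_token(secret: bytes, nonce: bytes, now: float) -> bytes:
    # Bind the MAC to the interval index rather than the raw time.
    interval = int(now // WINDOW)
    return hmac.new(secret, nonce + b"|" + str(interval).encode(), hashlib.sha256).digest()

def verify_token(secret: bytes, nonce: bytes, tag: bytes, now: float) -> bool:
    # Accept the current interval and its immediate neighbours, so a client
    # just across an interval boundary (e.g. 10:59 vs 11:01) still validates.
    interval = int(now // WINDOW)
    for i in (interval - 1, interval, interval + 1):
        expected = hmac.new(secret, nonce + b"|" + str(i).encode(), hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            return True
    return False
```

Note this only bounds the replay window to roughly two intervals; within that window a captured token can still be replayed unless nonces are also tracked.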
Related
In order to prevent replay attacks, I'm implementing a mechanism where the client has to send the server a nonce token, which is composed of a UUID and a timestamp. Both are generated by the client.
However, I'm having concerns regarding the timestamp. I understand that for this to work, the clocks of the server and the clients must be in sync. I do not have control over the clients and, intuitively, it seems unrealistic to expect the server and clients' clocks to be fully in sync. As such, I expect that a client's clock might be a few seconds too early or too late.
Moreover, I expect a few seconds' difference between the time the client sends the nonce token and the time the server receives it. I expect the gap to be larger if the client's connection is poor.
Because of those concerns, I have decided to:
Reject timestamps more than 2 minutes old;
Reject timestamps more than 10 seconds in the future.
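Those two bounds reduce to simple arithmetic; a minimal sketch using the 2-minute and 10-second limits proposed above:

```python
import time

MAX_AGE = 120     # reject timestamps more than 2 minutes old
MAX_FUTURE = 10   # reject timestamps more than 10 seconds in the future

def timestamp_ok(client_ts: float, now=None) -> bool:
    """Return True if the client's timestamp falls inside the accepted window."""
    if now is None:
        now = time.time()
    age = now - client_ts  # positive: in the past; negative: in the future
    return -MAX_FUTURE <= age <= MAX_AGE
```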
I would like the input of programmers who've dealt with timestamp validation. Do you see issues with the choices I've made regarding timestamp validation? What are the issues you have encountered?
Thanks!
If a load balancer can use a round-robin algorithm to distribute incoming requests evenly across the nodes, why do we need consistent hashing to distribute the load? What are the best scenarios for using consistent hashing versus round robin?
From this blog,
With traditional “modulo hashing”, you simply consider the request hash as a very large number. If you take that number modulo the number of available servers, you get the index of the server to use. It’s simple, and it works well as long as the list of servers is stable. But when servers are added or removed, a problem arises: the majority of requests will hash to a different server than they did before. If you have nine servers and you add a tenth, only one-tenth of requests will (by luck) hash to the same server as they did before.
Then
there’s consistent hashing. Consistent hashing uses a more elaborate scheme, where each server is assigned multiple hash values based on its name or ID, and each request is assigned to the server with the “nearest” hash value. The benefit of this added complexity is that when a server is added or removed, most requests will map to the same server that they did before. So if you have nine servers and add a tenth, about 1/10 of requests will have hashes that fall near the newly-added server’s hashes, and the other 9/10 will have the same nearest server that they did before. Much better! So consistent hashing lets us add and remove servers without completely disturbing the set of cached items that each server holds.
Similarly, the round-robin algorithm suits the scenario where the list of servers is stable and any server can serve any request. Consistent hashing suits the scenario where backend servers need to scale out or in, and most requests should still map to the same server they did before. Consistent hashing can also achieve well-distributed uniformity.
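The one-tenth figure from the quote is easy to check numerically; a quick sketch hashing synthetic request keys modulo 9 and then modulo 10 servers:

```python
import hashlib

def server_for(key: str, n_servers: int) -> int:
    # Treat the request hash as a very large number, then take it
    # modulo the number of available servers.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_servers

keys = [f"request-{i}" for i in range(10_000)]
kept = sum(server_for(k, 9) == server_for(k, 10) for k in keys)
# Only about 1 in 10 keys keeps the same server after adding a tenth.
print(f"{kept / len(keys):.0%} of requests kept the same server")
```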
Let's say we want to maintain user sessions on servers, so we would want all requests from a user to go to the same server. Round robin won't help here, as it blindly forwards requests in a circular fashion among the available servers.
To achieve 1:1 mapping between a user and a server, we need to use hashing based load balancers. Consistent hashing works on this idea and it also elegantly handles cases when we want to add or remove servers.
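A minimal hash ring with virtual nodes shows both properties: a given user key always maps to the same server, and adding a server only remaps a fraction of the keys. This is an illustrative sketch, not production code.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes."""

    def __init__(self, servers, vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, server)
        for s in servers:
            self.add(s)

    def _hash(self, key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add(self, server: str):
        # Each server gets `vnodes` positions on the ring to smooth the load.
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{server}#{i}"), server))

    def remove(self, server: str):
        self.ring = [(h, s) for h, s in self.ring if s != server]

    def get(self, key: str) -> str:
        # Route to the server with the nearest hash clockwise from the key.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

With this structure, removing a server only remaps the keys that pointed at its virtual nodes; everything else keeps its previous server.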
References: check out Gaurav Sen's videos below for further explanation.
https://www.youtube.com/watch?v=K0Ta65OqQkY
https://www.youtube.com/watch?v=zaRkONvyGr8
For completeness, I want to point out one other important feature of Consistent Hashing that hasn't yet been mentioned: DOS mitigation.
If a load balancer is getting spammed with requests (whether from too many customers, an attack, or a haywire local service), a round-robin approach will spread the request spam evenly across all upstream services. Even spread out, this load might be too much for each service to handle. So what happens? Your load balancer, in trying to be helpful, has brought down your entire system.
If you use a modulus or consistent hashing approach, then only a small subset of services will be DOS'd by the barrage.
Being able to "limit the blast radius" in this manner is a critical feature of production systems.
Consistent hashing fits well for stateful systems, where the context of previous requests is required by the current request. In a stateful system, if the previous and current requests land on different servers, the context is lost and the system won't be able to fulfil the request. With consistent hashing, we can route all requests from a particular user to the same server. Round robin cannot achieve this; it is a good fit for stateless systems.
I need to get an understanding of the ISO-8583 message platform. Let's say I want to perform authorization of a card transaction, and in real time at a particular instant I get 100,000 requests from the network (VISA/Mastercard), all for authorization. How do I define the priority of these requests and responses? Can the connection pool handle it (in my case it's Hikari)? How is this done at banks/financial institutions when authorizing a request? Please provide me some insights on how to manage all these requests. Should I go for an MQ?
Tech used: Spring Boot, Hibernate, spring-tcp-starter.
Your question doesn't seem to be very well researched, as there are a ton of switch platforms out there that do this today, and many of their technology guides can be found on the web, including those of major vendors like ACI, FIS, AJB, etc., if you look hard enough.
I have worked with several iso-interface specifications, commercial switches, and home grown platforms and it is actually pretty consistent in how they do the core realtime processing.
This information on prioritization is generally in each ISO-8583 message processing specification, and is made explicitly clear in almost every specification I've ever read that was written by someone who is familiar with ISO-8583 and not just making up their own variant or copying someone else's.
That said, in general and at a high level, authorization/financial requests (0100, 0200) always have higher priority than force-post (0x20) messages.
Administrative messages in the 05xx, 06xx and 08xx ranges sometimes also get bumped above other advices, but these are still advices; auths/financials are almost always processed first, as they (a) impact the customer and (b) have much tighter timers than any advice, usually by a factor of two or more.
Most switches I have seen do it entirely in memory, without going to MQ or some other disk-based queue for the core authorization process. That's not to say there isn't sometimes some home-grown middleware involved, but non-realtime processes regularly use an MQ or disk queue to feed work that is not in line with the approval, such as store-and-forward (SAF) processing, and even many of these still use memory-only processing for the front of their queue.
It is also important to differentiate between 100,000 requests and 100,000 transactions. The various exchanges, both internal and external, make a big difference in the number of actual requests/responses in flight at any given time: a basic transaction can be accomplished in as few as two messages, but some of the more complex ones can easily exceed 20 messages just for the pre-authorization or completion component.
If you are dealing with largely batch transaction bursts, I can see the challenge of queuing, but almost every application I have seen has separate max-in-flight limits for advices and requests, sometimes even with different timers, and the apps pumping the transactions almost always wait for the response before sending more. This tends to work fine for just about everyone, including big posting batches from retailers and card networks. So if your app doesn't have such limits, you probably need to add them.
In fact, your 100,000 requests should be sorted by (Terminal ID and/or Merchant ID) + (timestamp/local timestamp) + (STAN and/or RRN).
Duplicate transaction requests are expected to be rejected.
If you are simulating multiple requests from a single terminal (or host) with the same test card details, incrementing the STAN/RRN for each request would be the way to go.
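A sketch of that ordering and de-duplication rule (the dictionary keys here are illustrative, not from any particular ISO-8583 library):

```python
def order_and_dedupe(requests):
    """Sort requests by (terminal, local timestamp, STAN) and reject duplicates.

    `requests` is a list of dicts with illustrative keys:
    'terminal_id', 'local_ts', 'stan', 'rrn'.
    """
    ordered = sorted(
        requests,
        key=lambda r: (r["terminal_id"], r["local_ts"], r["stan"]),
    )
    seen, unique, rejected = set(), [], []
    for r in ordered:
        key = (r["terminal_id"], r["stan"], r["rrn"])
        if key in seen:
            rejected.append(r)  # duplicate transaction request: rejected
        else:
            seen.add(key)
            unique.append(r)
    return unique, rejected
```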
Please refer to previous answers about STAN and RRN ISO 8583 fields.
In ISO message, what's the use of stan and rrn ?
Tamper data
There is a terrible tool called Tamper Data. It intercepts all the POST data sent from Flash to PHP and gives the user the ability to change the values.
Imagine a Flash game (written in ActionScript 3) with score points and time. After a match completes, the score and time variables are sent to PHP and inserted into a database.
But the user can easily change those values with Tamper Data after the match completes, so the changed values will be inserted into the database.
My idea seems that won't work
I had the idea of updating the data in the database on every change: if the player gets +10 score points, I instantly write it to the database. But what about time? Would I need to update my table every millisecond? Is that a protection solution at all? If the user can change POST data, he can change it every time, including the last time when the game completes.
So how to avoid 3rd party software like Tamper Data?
Tokens. I've read an article about tokens, which talks about creating a random string as a token and comparing it against the database, but it isn't detailed and I have no idea how to implement it. Is that a good idea? If so, can someone explain how to implement it in practice?
In my opinion, a better way is to send both the parameter and the value in encrypted form; for example, score=12 would be sent as c2NvcmU9MTI=, which is Base64.
// Assumes SALTKEY is a secret constant defined elsewhere.
// Note: the mcrypt extension is deprecated and was removed in PHP 7.2;
// on modern PHP, use openssl_encrypt()/openssl_decrypt() instead.
function encrypt($str)
{
    // Serialize, encrypt with Rijndael-256 in CBC mode, then Base64-encode
    // using a URL-safe alphabet ('+/=' replaced by '-_,').
    $s = strtr(base64_encode(mcrypt_encrypt(MCRYPT_RIJNDAEL_256, md5(SALTKEY), serialize($str), MCRYPT_MODE_CBC, md5(md5(SALTKEY)))), '+/=', '-_,');
    return $s;
}
function decrypt($str)
{
    // Reverse the steps: Base64-decode, decrypt, strip null padding, unserialize.
    $s = unserialize(rtrim(mcrypt_decrypt(MCRYPT_RIJNDAEL_256, md5(SALTKEY), base64_decode(strtr($str, '-_,', '+/=')), MCRYPT_MODE_CBC, md5(md5(SALTKEY))), "\0"));
    return $s;
}
In general, there is no way to protect the content generated in Flash and sent to server.
Even if you encrypt the data with a secret key, both the key and the encryption algorithm are contained in the swf file and can be decompiled. It is somewhat harder than simply faking plain data, so it is a partly usable solution, but it will not always help.
To have full security, you need to run the whole game simulation on the server. For example, if the player jumped and caught a coin, Flash does not send "score +10" to the server. Instead, it sends the player's coordinates and speed, and the server does the check: where is the coin, where is the player, what is the player's speed, and can the player reach the coin or not.
If you cannot run the full simulation on the server, you can do a partial check by sending data to server at some intervals.
First, never send a "final" score or any other score. It is very easy to fake. Instead, send an event every time the player does something that changes his score.
For example, every time the player catches a coin, you send this event to the server. You may not track the player's or the coins' coordinates, but you know the level contains only 10 coins, so the player cannot catch more than 10 anyway. The player also can't catch coins too fast, because you know the minimum distance between coins and the maximum player speed.
You should not write the data to the database each time you receive it. Instead, keep each player's data in memory and change it there. You can use a NoSQL store such as Redis for that.
First, cheaters will always cheat. There's really no easy solution (or difficult one) to completely prevent it. There are lots of articles on the great lengths developers have gone to discourage cheating, yet it is still rampant in nearly every game with any popularity.
That said, here are a few suggestions to hopefully discourage cheating:
Encrypt your data. This is not unbeatable, but will discourage many lazy hackers since they can't just tamper with plain http traffic, they first have to find your encryption keys. Check out as3corelib for AS3 encryption.
Obfuscate your SWFs. There are a few tools out there to do this for you. Again, this isn't unbeatable, but it is an easy way to make it harder for cheaters to find your encryption keys.
Move all your timing logic to the server. Instead of your client telling the server about time, tell the server about actions like "GAME_STARTED" and "SCORED_POINTS". The server then tracks the user's time and calculates the final score. The important thing here is that the client does not tell the server anything related to time, but merely the action taken and the server uses its own time.
If you can establish any rules about maximum possible performance (for example 10 points per second) you can detect some types of cheating on the server. For example, if you receive SCORED_POINTS=100 but the maximum is 10, you have a cheater. Or, if you receive SCORED_POINTS=10, then SCORE_POINTS=10 a few milliseconds later, and again a few milliseconds later, you probably have a cheater. Be careful with this, and know that it's a back and forth battle. Cheaters will always come up with clever ways to get around your detection logic, and you don't want your detection logic to be so strict that you accidentally reject an honest player (perhaps a really skilled player who is out-performing what you initially thought possible).
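The rule-based detection described above can be sketched as a small server-side validator. The thresholds (10 points per event, a minimum gap between events) are illustrative assumptions, not rules from the original game:

```python
MAX_POINTS_PER_EVENT = 10        # assumed game rule: one event awards at most 10 points
MIN_SECONDS_BETWEEN_EVENTS = 0.5 # assumed fastest humanly-possible pace

class ScoreValidator:
    """Server-side plausibility check on SCORED_POINTS events (a sketch)."""

    def __init__(self):
        self.last_event_at = None  # server-side time of the last accepted event

    def accept(self, points: int, now: float) -> bool:
        if points > MAX_POINTS_PER_EVENT:
            return False  # a single event worth more than the maximum: cheater
        if (self.last_event_at is not None
                and now - self.last_event_at < MIN_SECONDS_BETWEEN_EVENTS):
            return False  # events arriving implausibly fast
        self.last_event_at = now
        return True
```

Note the caveat from above applies: keep the thresholds loose enough that a genuinely skilled player is not rejected.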
When you detect a cheater, "honey pot" them. Don't tell them they are cheating, as this will only encourage them to find ways to avoid detection.
I'm writing a peer to peer network protocol based on private/public key pair trust. To verify and deduplicate messages sent by a host, I use timestamp verification. A host does not trust another host's message if the signed timestamp has a delta (to the current) of greater than 30 seconds or so.
I just ran into the interesting problem that my test server and my second client are about 40 seconds out of sync (fixed by updating ntp).
I was wondering what an acceptable time difference would be, and whether there is a better way of preventing replay attacks. Conceivably I could have one client supply a random text to hash and sign, but unfortunately this won't work, as in this situation I have to write messages once.
A host does not trust another host's message if the signed timestamp has a delta (to the current) of greater than 30 seconds or so.
Time based is notoriously difficult. I can't tell you the problems I had with mobile devices that would not or could not sync their clock with the network.
Counter based is usually easier and does not DoS itself.
I was wondering what an acceptable time difference would be...
Microsoft's Active Directory uses 5 minutes.
if there is a better way of preventing replay attacks
Counter based with a challenge/response.
I could have one client supply a random text to hash and sign, but unfortunately this won't work as in this situation I have to write messages once...
Perhaps you could use a {time, nonce} pair. If the nonce has not been previously recorded, act on the message if it is within the time delta. Then hold the message's {time, nonce} for a window (5 minutes?).
If you encounter the same nonce again, don't act on it. If you encounter an unseen nonce that is outside the time delta, don't act on it either. Purge your list of nonces on occasion (every 5 minutes?).
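A sketch of that {time, nonce} scheme, with the 5-minute window from the suggestion above as an assumed parameter:

```python
WINDOW = 300  # hold nonces for 5 minutes, per the suggestion above

class ReplayCache:
    """Accept a message only if its timestamp is fresh and its nonce is unseen."""

    def __init__(self, window: float = WINDOW):
        self.window = window
        self.seen = {}  # nonce -> server time when first seen

    def accept(self, nonce: bytes, msg_ts: float, now: float) -> bool:
        self.purge(now)
        if abs(now - msg_ts) > self.window:
            return False  # outside the time delta: don't act on it
        if nonce in self.seen:
            return False  # nonce seen before: replay, don't act on it
        self.seen[nonce] = now
        return True

    def purge(self, now: float):
        # Drop nonces older than the window; any replay of them would
        # also fail the time-delta check.
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t <= self.window}
```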
I'm writing a peer to peer network protocol based...
If you look around, then you will probably find a protocol in the academic literature.