In challenge-response mechanisms (and other systems), it is advised not to use time-based nonces.
Why should they be avoided?
(Disclaimer: I have no degree in crypto, everything I wrote is just a layman's opinion.)
Using time-based nonces is discouraged because they are likely to collide accidentally and are easy to implement incorrectly.
Nonces (“numbers used only once”) are not the same thing as secret keys or initialization vectors. The ciphers that use them are usually designed bearing in mind that:
exposing nonces to the attacker doesn't harm security as long as the secret key is not compromised;
nonces don't have to be random at all; all they have to be is unique for a given secret key.
So, it's perfectly okay to select zero as the starting nonce and increment it before sending each successive message. Nonce predictability is not an issue at all.
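For instance, here is a minimal C# sketch of a counter nonce driving AES-GCM (purely illustrative; any nonce-based AEAD behaves the same way, and the 8-byte little-endian counter layout is my own choice):

```csharp
using System;
using System.Buffers.Binary;
using System.Security.Cryptography;
using System.Text;

class CounterNonceDemo
{
    static void Main()
    {
        byte[] key = RandomNumberGenerator.GetBytes(32);   // AES-256 secret key
        using var aes = new AesGcm(key);

        ulong counter = 0;                                 // start at zero, never reuse a value
        foreach (string msg in new[] { "first", "second", "third" })
        {
            byte[] nonce = new byte[12];                   // 96-bit AES-GCM nonce
            BinaryPrimitives.WriteUInt64LittleEndian(nonce, counter++);

            byte[] plaintext  = Encoding.UTF8.GetBytes(msg);
            byte[] ciphertext = new byte[plaintext.Length];
            byte[] tag        = new byte[16];
            aes.Encrypt(nonce, plaintext, ciphertext, tag);
            // Send (nonce, ciphertext, tag); the nonce may travel in the clear.
        }
    }
}
```

The only invariant the code has to uphold is that the counter never repeats for the same key; there is no randomness requirement at all.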
The main reason why time-based nonces are discouraged is the possibility of backward clock adjustments. If your system's NTP service rewinds your clock two seconds, then you are likely to send two encrypted messages with the same nonce within a short period of time. If you can guarantee that no clock rewinds will ever happen, then go ahead.
Another point against time-based nonces is that the clock resolution may not be fine enough to give each message a unique number.
UPD:
Using counter-based or time-based nonces is safe in terms of encryption strength. However, they may weaken your security system by providing the attacker with additional information, namely: how many messages your system has already sent, what the average message rate is, how many clients it serves simultaneously, and so on. The attacker may be able to use this information to their advantage. That's called a side-channel attack.
See also:
https://crypto.stackexchange.com/questions/37903
https://crypto.stackexchange.com/questions/53153
https://download.libsodium.org/doc/secret-key_cryptography/encrypted-messages.html, section “Nonce-misuse resistance”
a time- or counter-based nonce could lead to a scenario where an attacker can prepare in advance ... that alone usually won't break a system, but it is one step in the wrong direction... unpredictable nonces usually don't hurt...
Related
In order to prevent replay attacks, I'm implementing a mechanism where the client has to send the server a nonce token, which is composed of a UUID and a timestamp. Both are generated by the client.
However, I'm having concerns regarding the timestamp. I understand that for this to work, the clocks of the server and the clients must be in sync. I do not have control over the clients and, intuitively, it seems unrealistic to expect the server and clients' clocks to be fully in sync. As such, I expect that a client's clock might be a few seconds too early or too late.
Moreover, I expect a few seconds' difference between the time the client sends the nonce token and the time the server receives it. I expect the gap to be larger if the client's connection is poor.
Because of those concerns, I have decided to do the following (sketched in code below):
Reject timestamps more than 2 minutes old;
Reject timestamps set more than 10 seconds into the future.
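In code, those two checks would look roughly like this (a C# sketch; the names and the use of UTC server time are my own choices):

```csharp
using System;

static class NonceTokenValidator
{
    static readonly TimeSpan MaxAge  = TimeSpan.FromMinutes(2);   // reject if older
    static readonly TimeSpan MaxSkew = TimeSpan.FromSeconds(10);  // reject if further in the future

    public static bool IsTimestampValid(DateTimeOffset clientTimestamp)
    {
        DateTimeOffset now = DateTimeOffset.UtcNow;
        if (clientTimestamp < now - MaxAge)  return false;  // more than 2 minutes old
        if (clientTimestamp > now + MaxSkew) return false;  // more than 10 seconds ahead
        // The UUID half of the token would additionally be checked against
        // a store of recently seen UUIDs covering the 2-minute window.
        return true;
    }
}
```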
I would like the input of programmers who've dealt with timestamp validation. Do you see issues with the choices I've made regarding timestamp validation? What are the issues you have encountered?
Thanks!
When a load balancer can use the round-robin algorithm to distribute incoming requests evenly across the nodes, why do we need consistent hashing to distribute the load? What are the best scenarios for using consistent hashing versus round robin?
From this blog,
With traditional “modulo hashing”, you simply consider the request hash as a very large number. If you take that number modulo the number of available servers, you get the index of the server to use. It’s simple, and it works well as long as the list of servers is stable. But when servers are added or removed, a problem arises: the majority of requests will hash to a different server than they did before. If you have nine servers and you add a tenth, only one-tenth of requests will (by luck) hash to the same server as they did before.
Then there’s consistent hashing. Consistent hashing uses a more elaborate scheme, where each server is assigned multiple hash values based on its name or ID, and each request is assigned to the server with the “nearest” hash value. The benefit of this added complexity is that when a server is added or removed, most requests will map to the same server that they did before. So if you have nine servers and add a tenth, about 1/10 of requests will have hashes that fall near the newly-added server’s hashes, and the other 9/10 will have the same nearest server that they did before. Much better! So consistent hashing lets us add and remove servers without completely disturbing the set of cached items that each server holds.
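To make the remapping concrete, here is a small C# sketch (illustrative only; FNV-1a is just a convenient stable hash) that counts how many keys change servers under modulo hashing when a tenth server is added:

```csharp
using System;

class ModuloRemapDemo
{
    // Stable FNV-1a hash so results are reproducible across runs.
    static uint Fnv1a(string s)
    {
        uint h = 2166136261;
        foreach (char c in s) { h ^= c; h *= 16777619u; }
        return h;
    }

    static void Main()
    {
        int moved = 0, total = 10000;
        for (int i = 0; i < total; i++)
        {
            string key = "request-" + i;
            if (Fnv1a(key) % 9 != Fnv1a(key) % 10) moved++;
        }
        // With modulo hashing, roughly 90% of keys change servers going 9 -> 10.
        Console.WriteLine($"{moved} of {total} keys moved");
    }
}
```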
Similarly, the round-robin algorithm suits the scenario where the list of servers is stable and the traffic is effectively random. Consistent hashing suits the scenario where backend servers need to scale out or in, since most requests will still map to the same server as before, while keeping the load well distributed.
Let's say we want to maintain user sessions on servers, so we would want all requests from a user to go to the same server. Round robin won't be of help here, as it blindly forwards requests in circular fashion among the available servers.
To achieve a 1:1 mapping between a user and a server, we need to use hashing-based load balancers. Consistent hashing works on this idea, and it also elegantly handles the cases where we want to add or remove servers (see the sketch below).
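A minimal hash-ring sketch in C# (purely illustrative; the virtual-node count and MD5 are arbitrary choices, and a real implementation would binary-search the ring instead of scanning it):

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class ConsistentHashRing
{
    private readonly SortedDictionary<uint, string> ring = new();
    private const int VirtualNodes = 100; // replicas per server smooth out the distribution

    private static uint Hash(string s) =>
        BitConverter.ToUInt32(MD5.HashData(Encoding.UTF8.GetBytes(s)), 0);

    public void AddServer(string server)
    {
        for (int i = 0; i < VirtualNodes; i++)
            ring[Hash($"{server}#{i}")] = server;
    }

    public void RemoveServer(string server)
    {
        for (int i = 0; i < VirtualNodes; i++)
            ring.Remove(Hash($"{server}#{i}"));
    }

    // First server clockwise from the key's position on the ring.
    public string ServerFor(string requestKey)
    {
        uint h = Hash(requestKey);
        foreach (var entry in ring)     // SortedDictionary iterates in key order
            if (entry.Key >= h) return entry.Value;
        foreach (var entry in ring)     // wrap around to the smallest hash
            return entry.Value;
        throw new InvalidOperationException("no servers in the ring");
    }
}
```

Requests from the same user hash to the same point on the ring, so they keep landing on the same server; removing a server only reassigns the keys that fell on its virtual nodes.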
References: check out Gaurav Sen's videos below for further explanation.
https://www.youtube.com/watch?v=K0Ta65OqQkY
https://www.youtube.com/watch?v=zaRkONvyGr8
For completeness, I want to point out one other important feature of Consistent Hashing that hasn't yet been mentioned: DOS mitigation.
If a load balancer is getting spammed with requests (whether from too many customers, an attack, or a haywire local service), a round-robin approach will apply the request spam evenly across all upstream services. Even spread out, this load might be too much for each service to handle. So what happens? Your load balancer, in trying to be helpful, has brought down your entire system.
If you use a modulus or consistent hashing approach, then only a small subset of services will be DOS'd by the barrage.
Being able to "limit the blast radius" in this manner is a critical feature of production systems.
Consistent hashing fits well for stateful systems (where the context of the previous request is required by the current request). In a stateful system, if the previous and current requests land on different servers, the context for the current request is lost and the system won't be able to fulfil it. With consistent hashing we can route all requests from a particular user to the same server; round robin cannot achieve this. Round robin is good for stateless systems.
My project uses the Presets plugin with the flag onlyAllowPresets=true.
The reason for this is to close a potential vulnerability where a script might request an image thousands of times, resizing with 1px increments or something like that.
My question is: Is this a real vulnerability? Or does ImageResizer have some kind of protection built-in?
I kind of want to set the onlyAllowPresets to false, because it's a pain in the butt to deal with all the presets in such a large project.
I only know of one instance where this kind of attack was performed. If you're that valuable of a target, I'd suggest using a firewall (or CloudFlare) that offers DDOS protection.
An attack that targets cache-misses can certainly eat a lot of CPU, but it doesn't cause paging and destroy your disk queue length (bitmaps are locked to physical ram in the default pipeline). Cached images are still typically served with a reasonable response time, so impact is usually limited.
That said, run a test, fake an attack, and see what happens under your network/storage/cpu conditions. We're always looking to improve attack handling, so feedback from more environments is great.
Most applications or CMSes will have multiple endpoints that are storage- or CPU-intensive (often a wildcard search). Not to say that this is good - it's not - but the most cost-effective layer to handle this is often the firewall or CDN. And today, most CMSes include some (often poor) form of dynamic image processing, so remember to test or disable that as well.
Request signing
If your image URLs are originating from server-side code, then there's a clean solution: sign the URLs before spitting them out, and validate during the Config.Current.Pipeline.Rewrite event. We'd planned to have a plugin for this shipping in v4, but it was delayed - and we've only had ~3 requests for the functionality in the last 5 years.
The sketch for signing would be (a rough code version follows the verification steps below):
Sort querystring by key
Concatenate path and pairs
HMACSHA256 the result with a secret key
Append to end of querystring.
For verification:
Parse the query
Remove the hmac
Sort query and concatenate path as before
HMACSHA256 the result and compare to the value we removed.
Raise an exception if it's wrong.
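A rough C# sketch of both halves (illustrative only; this is not the actual ImageResizer plugin, and the "hmac" parameter name is my own):

```csharp
using System;
using System.Collections.Specialized;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Web; // for HttpUtility.ParseQueryString

static class UrlSigner
{
    // Steps 1-2: sort the pairs by key and concatenate with the path.
    static string Canonicalize(string path, NameValueCollection query)
    {
        var pairs = query.AllKeys
            .Where(k => k != null && k != "hmac")
            .OrderBy(k => k, StringComparer.Ordinal)
            .Select(k => $"{k}={query[k]}");
        return path + "?" + string.Join("&", pairs);
    }

    public static string Sign(string path, string queryString, byte[] secret)
    {
        var query = HttpUtility.ParseQueryString(queryString);
        using var hmac = new HMACSHA256(secret);           // step 3: HMAC the result
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(Canonicalize(path, query)));
        query["hmac"] = Convert.ToHexString(mac);          // step 4: append to querystring
        return path + "?" + query;
    }

    public static void Verify(string path, string queryString, byte[] secret)
    {
        var query = HttpUtility.ParseQueryString(queryString);
        string presented = query["hmac"]
            ?? throw new CryptographicException("missing signature");
        using var hmac = new HMACSHA256(secret);
        byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(Canonicalize(path, query)));
        if (!CryptographicOperations.FixedTimeEquals(expected, Convert.FromHexString(presented)))
            throw new CryptographicException("signature mismatch");
    }
}
```

In this setup, Verify would run inside the Rewrite handler and reject any unsigned or tampered URL before the resize pipeline does any work.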
Our planned implementation would permit 'whitelisted' variations - certain values that a signature would permit to be modified by the client - say, for breakpoint-based width values. This would be done by replacing the targeted key/value pairs with a serialized whitelist policy prior to signing. For validation, pairs targeted by a policy would be removed prior to signature verification, and policy enforcement would happen if the signature was otherwise a match.
Perhaps you could add more detail about your workflow and what is possible?
To be honest I don't know if this is the appropriate title since I am completely new to this area, but I will try my best to explain below.
The scenario can be modeled as a group of functionally identical servers and a group of functionally identical clients. Assume each client knows the endpoints of all the servers (possibly from a broker or some kind of name service), and randomly chooses one to talk to.
Problem 1: The client and the server first need to authenticate themselves to each other (i.e. the client must show the server that it's a valid client, and vice versa).
Problem 2: After that, the client and server talk to each other over some kind of encryption.
For Problem 1, I don't know what the best solution is. For Problem 2, I'm thinking about letting each client create a private key and give the corresponding public key to the server it talks to right after authentication, so that no one else can decrypt its messages; and letting all servers share a private key and distribute the corresponding public key to all clients, so that the external world (including the clients) can't decrypt what the clients send to the servers.
These are probably very naive approaches though, so I'd really appreciate any help & thoughts on the problems. Thank you.
I asked a similar question about half a year ago here, and I was redirected to Information Security.
After reading through my answer and rethinking your question: if you still have questions this broad, I suggest asking there. Stack Overflow, from what I know, is more about programming (and thus security in programming) than security concepts. Either way, you will probably have to migrate there at some point during your project.
To begin with, you need to seriously consider what needs protecting in your system. Like here (check Gilles' comment and others), one of the first and most important things to do is to think over what security measures you have to take. You just mentioned authentication and encryption, but there are many more things that are important, like data integrity. Check the wiki page on security measures. After knowing more about these, you can choose what (if any) encryption algorithms, hashing functions and others you need.
For example, forgetting about data integrity means forgetting about hashing, which is the most popular security measure I encounter on SO. By applying encryption, you can 'merely' expect that no one else is able to read the message. But you cannot be sure it reaches the destination unchanged (if at all), whether because of interceptors or signal loss. I assume you need to be sure.
A typical architecture I am aware of assumes asymmetric encryption for exchanging a secret (symmetric) key and then communicating using that symmetric key. This is because public-key infrastructure (PKI) assumes that the key of one of the sides is publicly known, making communication easier, but certainly slower (e.g. due to key length: RSA [asymmetric] starts at 512 bits, but a typical key length now is 2048, which I can compare to the weakest but still secure AES [symmetric], whose key lengths start at 128 bits). The problem is, as you stated, that the server and user are not authenticated to each other, so the server does not really know if the person sending the data really is who they claim to be. Also, the data could have been changed in transit.
To prevent that, you need a so-called 'key exchange algorithm', such as one of the Diffie-Hellman schemes (so DH might be the 'raw' answer to both of your problems).
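A minimal C# sketch of such an exchange, using .NET's built-in elliptic-curve Diffie-Hellman (illustrative; in reality the public keys travel over the network and must themselves be authenticated):

```csharp
using System;
using System.Security.Cryptography;

class DiffieHellmanDemo
{
    static void Main()
    {
        // Each side generates an ephemeral EC key pair.
        using var alice = ECDiffieHellman.Create();
        using var bob   = ECDiffieHellman.Create();

        // Each side derives the same shared secret from the other's public key.
        byte[] aliceKey = alice.DeriveKeyMaterial(bob.PublicKey);
        byte[] bobKey   = bob.DeriveKeyMaterial(alice.PublicKey);

        Console.WriteLine(Convert.ToHexString(aliceKey) == Convert.ToHexString(bobKey)); // True

        // The shared secret can now key a symmetric cipher such as AES.
        // Note: raw DH alone does not authenticate the parties; combine it with
        // certificates or signatures to stop man-in-the-middle attacks.
    }
}
```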
Taking all of the above into consideration, you might want to use one (or more) of the popular protocols and/or services to define your architecture. Popular ones are SSH, SSL/TLS and IPsec. Read about them, define what properties you need, and check whether they are present in one of the protocols above and whether you are willing to use it. If not, you can always design your own using raw crypto algorithms and digests (hashes).
I need to implement a very secure web service using WCF. I have read a lot of documents about security in WCF concerning authorization, authentication and message encryption. The web service will use https, Windows Authentication for access to the WS, SQL Server Membership/Role Provider for user authentication and authorization on WS operations, and finally message encryption.
I read in one of the documents that it is good to consider security on each layer independently, i.e. Transport Layer security must be considered without regard to the Message Layer. Therefore, using SSL through https in combination with message encryption (using public/private key encryption and signatures) would be good practice, since https concerns the Transport Layer and message encryption concerns the Message Layer.
But a friend told me that [https + message encryption] is too much; https is sufficient.
What do you think?
Thanks.
If you have SSL, then you still need to encrypt your messages if you don't really trust the server which stores them (it could have its files stolen), so this is all good practice.
There comes a point where you have a weakest link problem.
What is your weakest link?
Example: I spend $100,000,000 defending an airport from terrorists, so they go after a train station instead. Money and effort both wasted.
Ask yourself what the threat model is and design your security for that. TLS is a bare minimum for any Internet-based communications, but it doesn't matter if somebody can install a keystroke logger.
As you certainly understand, the role of Transport-Level Security is to secure the transmission of the message, whereas Message-Level Security is about securing the message itself.
It all depends on the attack vectors (or more generally the purpose) you're considering.
In both cases, the security models involved can serve two purposes: protection against eavesdropping (relying on encryption) and integrity protection (ultimately relying on signatures, since these are based on public-key cryptography in most cases).
TLS with a server certificate only will provide you with transport security, and the client will know that the communication really comes from the server it expects (if configured properly, of course). In addition, if you use a client certificate, this will also guarantee to the server that the communication comes from a client that has the private key for that client certificate.
However, when the data is no longer in transit, you rely on the security of the machine where it's used and stored. You might no longer be able to assert with certainty where the data came from, for example.
Message-level security doesn't rely on how the communication was made. Message-level signature allows you to know where the messages came from at a later date, independently of how they've been transferred. This can be useful for audit purposes. Message-level encryption would also reduce the risks of someone getting hold of the data if it's stored somewhere where some data could be taken (e.g. some intranet storage systems).
Basically, if the private key used to decrypt the messages has the same protection as the private key used for SSL authentication, and if the messages are not stored for longer than the connection lasts, then it is certainly overkill.
OTOH, if you've got different servers, or if the key is stored e.g. using hardware security of sorts, or is only made available by user input, then it is good advice to secure the messages themselves as well. Application-level security also makes sense for auditing purposes and against configuration mistakes, although personally I think signing the data (integrity protection) is more important in this respect.
Of course, the question can also become: if you're already using a web service that uses SOAP/WSDL, why not use XML encrypt/sign? It's not that hard to configure. Note that it does certainly take more processor time and memory. Oh, one warning: don't even try it if the other side does not know what they are doing - you'll spend ages explaining it, and even then you'll run into trouble if you want to change a single parameter later on.
Final hint: use standards and standardized software or you'll certainly run into crap. Spend some time getting to know how things work, and make sure you don't accept ill-formatted messages when you call verify (e.g. XML-signing the wrong node, or accepting MD5 and such things).