How to get Authorization Token - api

I secured my Web API using token-based authentication. It all works well, but at the moment, before making an API call, I request a token via a separate API call, which returns the token I use for all further requests. What I am wondering is: is there any possible way I can generate the token on the client side so that it can be decrypted on the server?
OR
What do you think - am I on the right track?
Here is my jQuery Code
$.when($.get("/api/service/GetToken"))
    .done(function (token) {
        doAjaxCall("GET", "/api/service/GetAllJobsStatusCount/1/admin", "{}", "json", token, function (data) {
            console.log(data);
        });
    });
Here is the method that returns the token:
[Api.HttpGet]
public string GetToken()
{
    var authorizeToken = "apikey";
    return Rsa.Encrypt(authorizeToken);
}
Please suggest

A single token only works for authentication if an encrypted connection (HTTPS) is used - otherwise, MITMs can read the token and steal the identity. If everything is done over secure connections, it doesn't matter who calculates the token, as long as it is exchanged before any authentication, so both partners know it beforehand - otherwise there would be no way to know whether it's valid. Note, however, that the token can only ever prove that the client is the one the token was given to (or received from) in the first place; you still don't know anything else (for example, whether the other party was who they claimed to be when the exchange was made).
If not using secure connections, I'd suggest using public-key encryption. If you don't encrypt the whole request, calculate a signature and sign it with the private key client-side, then verify it server-side. In this case, the client MUST generate the private and public keys and hand the public key over to the server, as the server should never know the client's private key (and it shouldn't ever be sent over an unencrypted connection either - after all, that's the whole point of using PKI).
For another approach (that is widely used, but not extraordinarily secure), see OAuth.
[Edit]: Addition based on comment:
To verify a client, every request that accesses valuable data has to be authenticated. That means the client needs to send something that proves it is who it claims to be along with EVERY request (as HTTP is a stateless protocol).
The usual way is to exchange some secret once - valid for either a limited time or forever, and known only to the client and the server. That way, the server can verify identities by checking whether the client sent the token it was given. Expiring tokens after a while improves security, but when a client requests a new one, there is no way to tell whether it is the same client that used the old one (unless the request is validated by the old token and that one is still valid).
If the client calculates the token, that doesn't change anything - it sends it once and tells the server who it is, and then every request that carries the same token most likely comes from the same client. There is only one additional thing the server has to do here: if two clients calculate the same token, the server has to reject the token from the second client and request another one. Otherwise, the clients can no longer be distinguished.
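The duplicate-rejection rule described above can be sketched as a minimal in-memory registry (the class and names here are illustrative, not from the question's code):

```python
# Minimal sketch of a server-side token registry that rejects a
# client-supplied token if a *different* client already registered it.
class TokenRegistry:
    def __init__(self):
        self._owners = {}  # token -> client id

    def register(self, client_id, token):
        """Accept the token unless another client already uses it."""
        owner = self._owners.get(token)
        if owner is not None and owner != client_id:
            return False  # collision: ask this client for another token
        self._owners[token] = client_id
        return True

    def identify(self, token):
        """Map an incoming token back to the client that registered it."""
        return self._owners.get(token)

registry = TokenRegistry()
assert registry.register("alice", "t1")        # first registration: accepted
assert not registry.register("bob", "t1")      # same token, second client: rejected
assert registry.identify("t1") == "alice"
```

A real implementation would persist this mapping and expire entries, but the collision check is the only server-side addition the answer calls for.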

Related

How to securely generate a session token that can be independently verified by Auth instances

I'm currently working on an authentication gRPC microservice using Rust and Tonic. The simple idea is that my service generates a token that can later be used to reference back to the UserID. I save this token-to-user relationship in a Redis database so I can run multiple of these authentication services in tandem. I'm currently generating my tokens like this:
// Create a session for a username
use sha2::{Digest, Sha256};
use uuid::Uuid;

pub fn create_session_id(username: &String) -> String {
    // Generate unique ID
    let id = Uuid::new_v4();
    // Hash the ID and username for a unique session token
    let mut hasher = Sha256::new();
    hasher.update(id.to_string() + username);
    // Return hexadecimal string encoding of hash
    format!("{:X}", hasher.finalize())
}
I will be the first to admit it's a bit primitive. My problem is that there's nothing stopping a potential attacker with access to this database from inserting their own arbitrary token with an associated UserID. This means I have to be able to verify that my token came from my authentication service. How do I generate a session token that can be independently verified by any one of my auth instances without making it vulnerable to forgery?
I have thought about implementing OAuth2, but I'm struggling to integrate that with the gRPC microservice architecture.
Any help would be appreciated.
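One widely used pattern for this kind of problem (a sketch, not from the thread, and assuming all auth instances can share one secret key) is to append an HMAC tag to the random session id; any instance holding the key can then verify that the token was minted by the service, so a row inserted directly into the database without a valid tag fails verification:

```python
import hashlib
import hmac
import uuid

# Illustrative shared key; in practice load this from a secret store.
SHARED_KEY = b"replace-with-a-real-secret"

def create_session_token(key=SHARED_KEY):
    """Random session id plus an HMAC tag that proves who minted it."""
    session_id = uuid.uuid4().hex
    tag = hmac.new(key, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify_session_token(token, key=SHARED_KEY):
    """Any instance with the key can check the tag without a DB lookup."""
    try:
        session_id, tag = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(key, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

token = create_session_token()
assert verify_session_token(token)
assert not verify_session_token("forged-id.deadbeef")
```

The same idea (a signed, self-verifying token) is what JWTs standardise; this is only the minimal form of it.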

Can "context" from HTTPS callable Cloud Functions be trusted?

I am using Cloud Functions to handle read/write to Cloud Firestore on the server side. The Cloud Functions are triggered by clients in the web app using HTTPS callable function.
When calling a Cloud Function over HTTPS, there is a parameter sent from the client called "context" that carries user auth information. For example, a Cloud Function on the server can look like this:
// Saves a message to the Firebase Realtime Database but sanitizes the text by removing swearwords.
exports.addMessage = functions.https.onCall((data, context) => {
    // ...
});
However, since context is passed by the client, and the client could pass in a manipulated ID token, do I need to always perform an ID token verification before trusting and using something like context.auth.uid to interact with my database?
The ID token verification I am talking about is this:
// idToken comes from the client app
admin.auth().verifyIdToken(idToken)
    .then(function (decodedToken) {
        var uid = decodedToken.uid;
        // ...
    }).catch(function (error) {
        // Handle error
    });
Essentially, I want to know if Firebase performs ID token verification automatically when passing context using https call and therefore I can go ahead and trust that if the client has manipulated context, the https call will fail due to token verification failing. Or, do I need to explicitly do a manual ID token verification on the server every single time to check the integrity of context, since the client can easily insert a manipulated token using the browser's devtools or something like that.
Yes, the ID token is automatically included in the request and verified in the function. You don't have to write code to verify the token when using callable functions.

JAX-RS security and Authentication

I am developing a REST web service, and some clients will use my web services. To identify genuine clients, I have decided to give each one a unique application token. The client will encode this token and put it in the request header, and I have configured a REST filter in my web service to verify the token. I don't want to use HTTPS. My problem is that anyone can take that token from my client's site and consume my REST web service. How can I stop this?
Since you don't want to use HTTPS, I assume confidentiality is not an issue here, and that you only want to authorize requests based on who is making them. Instead of passing a plain token, which could get stolen, you should ask your clients to sign their requests. There are good explanations here:
Implementing HMAC authentication for REST API with Spring Security
Using HMAC to authenticate Web service requests
websec.io - API Authentication
In short, and taken from Implementing HMAC authentication for REST API with Spring Security:
Client and server share a secret access key and a public access key.
The client creates a request that contains three fundamental elements: the public key header (in plain text), a date header, and a signature string calculated by hashing some data of the request with the secret access key. This hash usually covers the HTTP method, the URI path, the value of the date header (to prevent replay attacks), the whole content of the request (for POST and PUT methods), and the content type.
The client sends the request to the server.
The server reads the public key header and uses it to retrieve the corresponding secret access key.
The server uses the secret access key to calculate the signature in the same way the client did.
The server checks whether the just-calculated signature matches the one sent by the client.
(OPTIONAL) To prevent replay attacks, the server checks that the value in the date header is within an acceptable limit (usually between 5 and 15 minutes, to account for clock discrepancy). The value cannot be manipulated by a malicious attacker because the date is used as part of the signature: if someone changes the date header, the server will calculate a different signature than the one calculated by the client, so the previous step will fail.
This logic can be implemented using any programming language. The following is a signature example in Java:
//clientId is the public client identifier
//secretKey is the key shared between server and client
//requestContent is a string representation of the HTTP request (HTTP verb, body, etc.)
//init signing key and mac
SecretKeySpec signingKey = new SecretKeySpec(secretKey.getBytes(), "HmacSHA1");
Mac mac = Mac.getInstance("HmacSHA1");
mac.init(signingKey);
//sign the request content
byte[] rawHmac = mac.doFinal(requestContent.getBytes());
//encode to base64
String result = base64(rawHmac);
//store in header
request.setHeader("Authorization", "MyAPI " + clientId + ":" + result);
On the server side, when you receive that request, you extract the clientId and signature from the header, retrieve the secret key corresponding to the clientId received, re-compute the signature (exactly as above) and compare the results. If it matches, the client is authorized; if not, you return an HTTP 403 (or whatever error you want).
There is then no more "secrets" to steal for a potential man in the middle, BUT there are still keys that need to be securely stored on both the clients and the server. Leaking those keys will compromise the whole system.
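The server-side check described in the last two paragraphs might be sketched like this (the names and header format are illustrative, matching the Java example above; a constant-time comparison is used for the signatures):

```python
import base64
import hashlib
import hmac

# Illustrative key store; in practice a database or secure vault.
SECRETS = {"client-1": b"shared-secret"}

def sign(secret, request_content):
    """HMAC-SHA1 over the request content, base64-encoded (as in the Java example)."""
    raw = hmac.new(secret, request_content.encode(), hashlib.sha1).digest()
    return base64.b64encode(raw).decode()

def authorize(header, request_content):
    """Parse 'MyAPI <clientId>:<signature>', recompute, and compare."""
    scheme, _, credentials = header.partition(" ")
    client_id, _, signature = credentials.partition(":")
    secret = SECRETS.get(client_id)
    if scheme != "MyAPI" or secret is None:
        return False
    # compare_digest avoids leaking the signature via timing differences
    return hmac.compare_digest(signature, sign(secret, request_content))

content = "GET /api/jobs 2024-01-01T00:00:00Z"
header = "MyAPI client-1:" + sign(SECRETS["client-1"], content)
assert authorize(header, content)
assert not authorize(header, content + " tampered")
```

Any tampering with the signed content changes the recomputed signature, which is exactly why step 6 in the list above catches modified requests.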
As the token cannot be securely transmitted at the HTTP layer, anyone can easily get hold of it. You can ask the genuine client to encrypt the token using some logic that incorporates a timestamp, so that the token is encrypted differently every time, and on the server side you follow a similar algorithm to decrypt it. This way, even if someone gets hold of a token, it can't be reused. One option is to combine this encryption logic with Google Authenticator. (http://www.techrepublic.com/blog/google-in-the-enterprise/use-google-authenticator-to-securely-login-to-non-google-sites/)
Use a checksum to secure the messages, as below.
An MD5 or SHA-1 checksum can be used to validate a password without passing the actual password:
The server sends a random string to the client.
The client appends its password to the random string and returns an MD5/SHA-1 sum of the result to the server.
The server does the same and compares the MD5/SHA-1 sums.
If both sums are identical, the password is correct and the message has not been changed.
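The challenge-response steps above can be sketched as follows (using SHA-256 rather than MD5/SHA-1, both of which are weak by modern standards; the names are illustrative):

```python
import hashlib
import secrets

# Illustrative server-side password store.
PASSWORDS = {"alice": "s3cret"}

def make_challenge():
    """Step 1: the server sends a random string to the client."""
    return secrets.token_hex(16)

def client_response(challenge, password):
    """Step 2: the client hashes the challenge plus its password."""
    return hashlib.sha256((challenge + password).encode()).hexdigest()

def server_check(user, challenge, response):
    """Step 3: the server repeats the computation and compares the sums."""
    expected = hashlib.sha256((challenge + PASSWORDS[user]).encode()).hexdigest()
    return secrets.compare_digest(expected, response)

challenge = make_challenge()
assert server_check("alice", challenge, client_response(challenge, "s3cret"))
assert not server_check("alice", challenge, client_response(challenge, "wrong"))
```

Because the challenge is random per attempt, a captured response cannot be replayed against a future challenge.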

PKE REST Auth using SHA-1 Hash

I'm designing my first RESTful API and am trying to figure out how I'm going to authenticate API calls. I've worked with the Gengo API (dev docs) in the past and had great luck with it, so admittedly, am basing a lot of my auth design on their algorithm described in that link.
To sum their process up, to create a valid/authenticated API call:
Register for an account with them and generate a public/private key set. Then for each API call:
Obtain the UNIX epoch timestamp that the call is being made at.
Calculate the SHA-1 hash of your timestamp "against" your private key.
Make sure that your public key, private key and the calculated hash (above) are present as 3 separate HTTP parameters with every single API call.
At first this was a little confusing to me, but I was able to get authentication working pretty quickly with their API. But I never fully understood why I had to generate this SHA-1 hash, and I had no clue what they were doing on the server-side to actually authenticate my API calls.
Now that I'm writing my own authenticated API, I need to understand these things. So I ask:
What purpose does the timestamp and its derived SHA-1 hash serve? Why is it less secure to just require users send me their public/private keys with each API call?
Is this pubkey + privkey + hashed_timestamp method that Gengo is using a standardized practice for API auth? If so, does it have a name/algorithm? Are there other, equally-secure competitors to it?
I'm confused by the whole HMAC/SHA-1 stuff (see the link above for a concrete example). I always thought SHA-1 was a one-way function that turned a string into a unique, encoded string, similar to what MD5 offers. But in that example (see link), it looks like it's passing SHA-1 and the string to some HMAC algorithm. What purpose does this HMAC serve and why does it require 3 arguments (SHA-1, the timestamp and the private key)?
Finally, what do I do with the 3 parameters (pub key, priv key, hashed timestamp) on the server-side to perform authentication? If I was designing a system that only used the pub/priv keys, then I would treat them like a username/password combo and would check the database to see if that combo existed or not. But the hashed timestamp is really throwing me off here.
What purpose does the timestamp and its derived SHA-1 hash serve? Why is it less secure to just require users send me their public/private keys with each API call?
To clear any misunderstanding you seem to have up front, the user should never send the private key over the network. The private key is to stay private. It is a secret shared between you and the user. Reread the Gengo link, you'll see that it is only used as a parameter to the HMAC function. It is up to the user to find a way to secure it, but your API does not need it to verify calls.
The timestamp serves two purposes. First it is a piece of data for which you will get both the plaintext and the HMAC. You will be recomputing the HMAC on your side with the private key of the user. If the HMAC checks, it means that not only the timestamp was not tampered with, but also that only someone knowing the private key could have sent it. It provides integrity and authenticity for that piece of data.
If it was a simple SHA-1, an attacker could have intercepted the message, changed the timestamp, and recomputed the hash. By using a keyed hash, you ensure that the sender is who you think they are.
The second purpose for the timestamp is to prevent replay attacks. Even if using a keyed hash, the attacker could have captured an old request and send it again, possibly triggering unwanted actions. If your users hash the time and you test it and reject requests that are unreasonably old, you can prevent such replay attacks.
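The freshness check described above might look like this (the 15-minute window is an assumption; the answer only says to reject requests that are "unreasonably old"):

```python
import time

# Sketch of the replay defence: reject requests whose timestamp falls
# outside an acceptance window. The window size is a policy choice that
# trades clock-skew tolerance against the replay exposure period.
MAX_AGE_SECONDS = 15 * 60

def timestamp_is_fresh(request_ts, now=None):
    """Accept the request only if its timestamp is within the window."""
    now = time.time() if now is None else now
    return abs(now - request_ts) <= MAX_AGE_SECONDS

now = 1_700_000_000.0
assert timestamp_is_fresh(now - 60, now)        # one minute old: accepted
assert not timestamp_is_fresh(now - 3600, now)  # one hour old: rejected
```

Note that this check only has teeth if the timestamp is covered by the keyed hash, as the answer explains; otherwise an attacker could simply refresh it.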
Is this pubkey + privkey + hashed_timestamp method that Gengo is using a standardized practice for API auth? If so, does it have a name/algorithm? Are there other, equally-secure competitors to it?
Again the privkey is not sent through the pipe. Using HMAC for API authentication is quite common. It is used for Amazon Web Services for example. When used in the Gengo way, the fact that there is seemingly a public/private key pair can be confusing, it is really still symmetric cryptography, and the private key is used as a shared secret.
However I think it is better to include more than just the timestamp in the data that is HMAC'ed. Otherwise an attacker could tamper with other parts of the request. The headers, the HTTP verb, and a hash of the content of the request should be included as well.
Another scheme is to use the private key on client side to sign (encrypt with the private key) a piece of data, so the server only needs to verify it with the public key of the client and needs not know the private key of the client. Embedding a time information is still needed to prevent replays. I do not know much about this scheme, it might be hard to reliably link clients with a given public key in the first place.
What purpose does this HMAC serve and why does it require 3 arguments (SHA-1, the timestamp and the private key)?
An HMAC is a keyed hash. Consider the simplest form of message authentication: hash(key + message). It was found that this was not secure (see length extension attack) and a nested structure fixes the vulnerability.
HMAC is a generic name of that structure: hash(k1 + hash(k2 + message)), where k1 and k2 are derived from the actual secret key. So when we do an HMAC we need to pass the name of the actual hash algorithm that will be used (here SHA-1), the message (here, the timestamp), and the secret key.
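The difference between the naive keyed hash and the HMAC construction can be illustrated like this (Python's hmac module implements the nested structure internally, so the three inputs from the question map directly onto its arguments):

```python
import hashlib
import hmac

key = b"private-key"
message = b"1700000000"  # e.g. a UNIX epoch timestamp, as in the Gengo scheme

# Naive construction hash(key + message): vulnerable to length-extension
# attacks with SHA-1.
naive = hashlib.sha1(key + message).hexdigest()

# HMAC construction: the three inputs the question asks about are the
# hash algorithm (SHA-1), the message (the timestamp), and the key.
tag = hmac.new(key, message, hashlib.sha1).hexdigest()

assert naive != tag    # the two constructions produce different values
assert len(tag) == 40  # SHA-1 yields a 160-bit (40 hex digit) digest
```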
Finally, what do I do with the 3 parameters (pub key, priv key, hashed timestamp) on the server-side to perform authentication? If I was designing a system that only used the pub/priv keys, then I would treat them like a username/password combo and would check the database to see if that combo existed or not. But the hashed timestamp is really throwing me off here.
Hopefully it's clearer by now. You use the public key as an identifier to retrieve the private key. You take the ts header and recompute the HMAC of it with the private key. If it matches the hmac header sent, the request is authentic. You then check the actual timestamp to see that it's not an old request replayed by some attacker. If all checks pass, the call can go through. I think it's better to embed all the important information in the HMAC, not just a timestamp, though.
You need either public key cryptography or an HMAC, not both.
Let's come back to the timestamp later. Also, you're confusing authentication with integrity, which we'll come back to later as well.
Authentication: in your case this is where the client proves knowledge of some secret to the server. Two common ways to do this are via public key cryptography and using an HMAC.
PKC: before using the service a public/private key pair is generated. The client has the private key; the server has the public key. Important: the private key never leaves the client. In particular, the server does not have access to the private key. To authenticate, the client encrypts some random value N (called a nonce), and sends N and its encrypted form to the server. The server uses the public key to decrypt the encrypted nonce and confirms that it equals the supplied nonce. This proves to the server that the client has the private key.
HMAC: client and server agree a shared secret K beforehand. To authenticate, the client creates a nonce N, computes HMAC(K, N), and sends N and HMAC(K, N) to the server. The server also computes HMAC(K, N) since it knows the shared secret and has received N from the client. If the computed and received HMAC(K, N) values are the same then the server knows that the client has the shared secret K.
The HMAC approach has one significant weakness compared with PKC: if the server is compromised, the attacker gains knowledge of K and can then use that to masquerade as the client.
If using PKC, ideally generate the keypair on the client and send the public key to the server. That way the server never has the private key.
However, unless the communication channel is confidential (e.g. uses SSL/TLS), both approaches have a problem: replay attacks. A passive observer can record the N+encrypted form, or N+HMAC(K,N) and replay them to the server. The server will then think that the observer is a valid client.
Two standard defences are:
Use a time-based nonce.
The server remembers previously-seen nonces, and rejects new requests that use a previously-seen nonce.
That's where the timestamp comes in, and is discussed in more detail here: Should I sign (with a public/private key) a form based authentication (no SSL)?
Integrity: we've proved to the server that we're a valid client, but we haven't provided any protection of the request itself. An attacker could modify our request in flight: we'd authenticate correctly but then execute the attacker's request rather than the client's original request.
To resolve this we need to protect the integrity of the request. We can do this with the same mechanism as above: rather than just using a nonce (N) or nonce+timestamp, include the entire request in the data being encrypted or hashed. An important consideration here is that encryption and hashing operate on bytes, not REST requests. You therefore need a reliable way to convert the REST request (i.e. HTTP method, URL, request parameters) into bytes. This is often called "canonicalisation": the client and server must both canonicalise the request in exactly the same way, so that they are encrypting/hashing the same bytes for a given request.
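The canonicalisation idea can be sketched as follows (the exact recipe used here - method, path, and sorted query string joined by newlines - is an illustrative convention, not a standard; what matters is that both sides fix one recipe):

```python
import hashlib
import hmac
from urllib.parse import urlencode

def canonicalize(method, path, params):
    """Turn the request into a deterministic byte string.

    Sorting the parameters means the signature does not depend on the
    order in which the client happened to build the query string."""
    sorted_query = urlencode(sorted(params.items()))
    return "\n".join([method.upper(), path, sorted_query]).encode()

def sign_request(key, method, path, params):
    """HMAC-SHA256 over the canonical form of the request."""
    return hmac.new(key, canonicalize(method, path, params),
                    hashlib.sha256).hexdigest()

key = b"shared-secret"
# Parameter order and method case must not matter once canonicalised:
a = sign_request(key, "get", "/jobs", {"page": "1", "user": "admin"})
b = sign_request(key, "GET", "/jobs", {"user": "admin", "page": "1"})
assert a == b
```

The server runs the same two functions over the request it received and compares signatures; any divergence in the canonicalisation recipe between client and server shows up as a spurious authentication failure.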
This whole process is standardised in things like OAuth, e.g. https://dev.twitter.com/docs/auth/authorizing-request
To answer your specific questions:
The timestamp defends against replay attacks: passive observers can't replay a client's session. The SHA-1 hash is used as a component of the HMAC.
Yes, to a point. But I'd use a fully-fledged implementation of it rather than rolling your own, such as something OAuth-based.
An HMAC is a keyed hash: it's like a standard cryptographic hash (such as SHA-1), except that you also include a shared secret key in the hash. Simply concatenating the key with the data being hashed has cryptographic weaknesses that the HMAC construct avoids. (https://en.wikipedia.org/wiki/HMAC.)
If you're using PKC then you look up the client's public key on the server (based on some client ID, which is not the client's private key), use that to decrypt the encrypted request, and verify that that request matches the received request. If you're using HMAC then you look up the client's shared secret, canonicalise the request, compute HMAC(K, R) and verify that it matches the received HMAC(K, R). In both cases you must also verify timestamps/nonces to protect against replays.
BUT: rule #1 of crypto: don't roll your own. Use an established mechanism, such as OAuth. You probably also want to use SSL/TLS, which would then also let you use client certificates as a third authentication option. If you used those then you could also rely on SSL/TLS to give you integrity and replay protection. However, implementing SSL/TLS certificate validation correctly seems to fox many developers...

Enabling OAuth1 Support on a Jersey Jax-rs web service

I'd like to enable OAuth1 Provider support on my restful web service. Jersey supports this as described here Jersey OAuth1 Provider support.
I've been trying to register it as so:
public ApplicationConfig() {
    super();
    addRestResourceClasses(getMyResourceClasses());
    register(new OAuth1ServerFeature(new DefaultOAuth1Provider(), "/oauth/access_token", "/oauth/request_token"));
}
But, when I register the OAuth1ServerFeature, I get a 404 when trying to access my resources.
Can't seem to find any examples/tutorials implementing jersey oauth support anywhere!
Is there a simple component I can plug into my jax-rs service to enable oauth support?
I realise this thread is somewhat old - but having just got it working myself, I felt a reply was in order! Given time, I may even create a blog post with a fuller example. Be warned - this is not a short answer!
There is an absolute lack of examples and information on using the OAuth1 server (aka provider) feature in Jersey - I can't remember a tech topic that revealed so little useful Google information. I almost passed on it and looked for another solution, since it led me to think perhaps it didn't work. But, with some perseverance, I can say that not only is it usable, it seems to work rather well. Plus, of course, if you're already using Jersey for your REST API, you don't need any extra libs.
I am not an OAuth1 expert - and I'd strongly recommend some background reading for those attempting this. I am also assuming here you have Jersey working, understand things like ContainerRequestFilters, and also have some internal means to authorize users.
My examples also use the excellent JAX-RS OSGi connector - the only real difference is that where we use an OSGi bundle context to register the OAuth1 feature via an OSGI service, regular Jersey users will need to configure via their normal Application / Server config model.
Initialisation
You must create your OAuth1 feature - and give it a provider:
DefaultOAuth1Provider oap = new DefaultOAuth1Provider();
Feature oaFeature = new OAuth1ServerFeature(oap, "oauth1/request_token", "oauth1/access_token");
Don't forget to register oaFeature into Jersey!
The DefaultOAuth1Provider is entirely memory based - which was fine for us to start with. Many will want to persist access tokens for use across server restarts, which will require an extended subclass (or clean implementation)
Add in your Consumers Keys and Secrets
It took me a while to realise that consumers are not users but clients, i.e. applications. The Jersey implementation will not work if you don't register keys and secrets for each consumer (aka client app) that wishes to connect.
oap.registerConsumer("some-owner-id",
    "abcdef",
    "123456",
    new MultivaluedHashMap<String, String>());
You obviously would never hard-code these, and further would use some form of secure store for the secret (param 3).
If you do not add these you will not get any further.
OAuth protocol step 1 - get a request token
At this stage you are ready client side to get a request token - and here there is a perfectly good example on GitHub.
ConsumerCredentials consumerCredentials = new ConsumerCredentials("abcdef", "123456");
//TODO - use proper client builder with real location + any ssl context
OAuth1AuthorizationFlow authFlow = OAuth1ClientSupport.builder(consumerCredentials)
    .authorizationFlow(
        "http://myhost:8080/myapi/oauth1/request_token",
        "http://myhost:8080/myapi/oauth1/access_token",
        "http://myhost:8080/myapi/oauth1/authorize")
    .build();
String authorizationUri = authFlow.start();
System.out.println("Auth URI: " + authorizationUri);
Obviously you would change the URLs to point to your server and - crucially - the client needs to use the same Consumer Key and Secret you registered in the server.
You will get back a response with an oauth_token string in it, e.g.
http://myhost:8080/myapi/oauth/authorize?oauth_token=a1ec37598dab47f6b9d770b1b23a5f99
OAuth protocol step 2 - authorize the user
As you will read in any article, actual user authorization is outside the scope of OAuth1 - at this stage you must invoke your server's auth process, whatever that is.
However!!!! What is not outside the OAuth1 scope is what your server needs to do if the user authorizes successfully. You must tell your DefaultOAuth1Provider about the successful auth:
// Dummy code - make out like we're auth'd
Set<String> dummyRoles = new HashSet<>(Arrays.asList(new String[] { "my-role-1", "my-role-2" }));
DefaultOAuth1Provider.Token tok1 = oap.getRequestToken("a1ec37598dab47f6b9d770b1b23a5f99");
String verifier = oap.authorizeToken(tok1, new Principal()
{
    public String getName()
    {
        return "my-user";
    }
},
dummyRoles);
System.out.println("***** verifier: " + verifier);
Note the request token string is that from step 1. Obviously a real implementation would pass a real Principal and set of roles for the authorized user.
Also, of course, printing out the verifier is not much use - you need to get that back to your client in some way, either via an independent channel or possibly as a header in the auth response - which maybe would need to be encrypted for added protection.
OAuth protocol step 3 - swap the request token for an access token
Once the client receives or has the verifier entered manually, it can finalize the process and swap the request token for an access token e.g.
String verifier = System.console().readLine("%s", "Verifier: ");
final AccessToken accessToken = authFlow.finish(verifier);
System.out.println("Access token: " + accessToken.getToken());
Again, not a realistic example - but it shows the process.
If your OAuth1Provider saves access tokens to some persistent store on the server, you can re-use any access token returned here on a future session without going through all the previous steps.
That's it - you then just need to make sure every request the client creates from this point on in the process makes use of that access token.