I am trying to implement a signature verification endpoint (or an ASP.NET Web API action filter) to verify that a token has in fact come from AWS Cognito, i.e. to validate its signature.
I am using the following code, but it always reports the signature as invalid. The JavaScript example further below works perfectly with the same keys/token.
Can anyone help?
Thanks,
KH
C#
public IHttpActionResult Verify([FromBody] string accessToken)
{
string[] parts = accessToken.Split('.');
//From the Cognito JWK set:
//{"alg":"RS256","e":"myE","kid":"myKid","kty":"RSA","n":"myN","use":"sig"}
var n = Base64UrlDecode("q7ocE2u-JSe1P4AF6_Nasae7e7wUoUxJq058CueDFs9R5fvWQTtAN1rMxBCeLQ7Q8Q0u-vqxr83b6N9ZR5zWUU2stgYzrDTANbIn9zMGDZvSR1tMpun5eAArKW5fcxGFj6klQ0bctlUATSGU5y6xmYoe_U9ycLlPxh5mDluR7V6GbunE1IXJHqcyy-s7dxYdGynTbsLemwmyjDaInGGsM3gMdPAJc29PXozm87ZKY52U7XQN0TMB9Ipwsix443zbE_8WX2mvKjU5yvucFdc4WZdoXN9SGs3HGAeL6Asjc0S6DCruuNiKYj4-MkKh_hlTkH7Rj2CeoV7H3GNS0IOqnQ");
var e = Base64UrlDecode("AQAB");
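// Note: BigInteger / ToByteArrayUnsigned below come from BouncyCastle (Org.BouncyCastle.Math), not System.Numerics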
RSACryptoServiceProvider provider = new RSACryptoServiceProvider();
provider.ImportParameters(new RSAParameters
{
Exponent = new BigInteger(e).ToByteArrayUnsigned(),
Modulus = new BigInteger(n).ToByteArrayUnsigned()
});
SHA512Managed sha512 = new SHA512Managed();
byte[] hash = sha512.ComputeHash(Encoding.UTF8.GetBytes(parts[0] + "." + parts[1]));
RSAPKCS1SignatureDeformatter rsaDeformatter = new RSAPKCS1SignatureDeformatter(provider);
rsaDeformatter.SetHashAlgorithm(sha512.GetType().FullName);
if (!rsaDeformatter.VerifySignature(hash, Base64UrlDecode(parts[2])))
throw new ApplicationException(string.Format("Invalid signature"));
return Ok(true);
}
// from JWT spec
private static byte[] Base64UrlDecode(string input)
{
var output = input;
output = output.Replace('-', '+'); // 62nd char of encoding
output = output.Replace('_', '/'); // 63rd char of encoding
switch (output.Length % 4) // Pad with trailing '='s
{
case 0: break; // No pad chars in this case
case 2: output += "=="; break; // Two pad chars
case 3: output += "="; break; // One pad char
default: throw new System.Exception("Illegal base64url string!"); // Length % 4 == 1 is never valid base64
}
var converted = Convert.FromBase64String(output); // Standard base64 decoder
return converted;
}
JavaScript
var jwkToPem = require('jwk-to-pem');
var jwt = require('jsonwebtoken');
var jwks = ...; // JWK set file, which you can find at https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json
//Decode token
var decoded = jwt.decode(token, {complete: true});
//Get the correct key from the jwks based on the kid
var jwk = jwks.keys.filter(function(v) {
return v.kid === decoded.header.kid;
})[0];
//Convert the key to pem
var pem = jwkToPem(jwk);
//Verify the token with the pem
jwt.verify(token, pem, function(err, decoded) {
//if decoded exists, it's valid
});
Cognito signs its tokens with RS256, i.e. RSA with SHA-256, so the token must be hashed with SHA-256, not SHA-512. Replace
SHA512Managed sha512 = new SHA512Managed();
with
SHA256CryptoServiceProvider sha256 = new SHA256CryptoServiceProvider();
Don't forget to set the hash algorithm properly as well:
rsaDeformatter.SetHashAlgorithm("SHA256");
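Applied to the original Verify action, the signature check then looks roughly like this (a sketch reusing provider, parts and Base64UrlDecode from the code in the question):
SHA256CryptoServiceProvider sha256 = new SHA256CryptoServiceProvider();
// Hash the signing input (header.payload) with SHA-256 to match RS256
byte[] hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(parts[0] + "." + parts[1]));
RSAPKCS1SignatureDeformatter rsaDeformatter = new RSAPKCS1SignatureDeformatter(provider);
rsaDeformatter.SetHashAlgorithm("SHA256");
if (!rsaDeformatter.VerifySignature(hash, Base64UrlDecode(parts[2])))
    throw new ApplicationException("Invalid signature");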
Flo's answer works, but this is built into .NET now. Using https://rafpe.ninja/2017/07/30/net-core-jwt-authentication-using-aws-cognito-user-pool/ which has more details on how to build it into the .NET Core middleware:
public RsaSecurityKey SigningKey(string Key, string Expo)
{
return new RsaSecurityKey(new RSAParameters()
{
Modulus = Base64UrlEncoder.DecodeBytes(Key),
Exponent = Base64UrlEncoder.DecodeBytes(Expo)
});
}
public TokenValidationParameters TokenValidationParameters()
{
// Basic settings - signing key to validate with, audience and issuer.
return new TokenValidationParameters
{
IssuerSigningKey = this.SigningKey(CognitoConstants.key,CognitoConstants.expo),
ValidIssuer = CognitoConstants.Issuer,
ValidAudience = CognitoConstants.clientid,//Same value you send in the cognito request url
// when receiving a token, check that the signing key
ValidateIssuerSigningKey = true,
// When receiving a token, check that it was issued by the expected issuer.
ValidateIssuer = true,
// When receiving a token, check that it is still valid.
ValidateLifetime = true,
// Cognito "access" tokens do not carry an aud claim (only "id" tokens do), so set this to false if you validate access tokens
ValidateAudience = true,
// This defines the maximum allowable clock skew - i.e. provides a tolerance on the token expiry time
// when validating the lifetime. Since these tokens are issued externally by Cognito, a little leeway
// can be useful; here it is set to zero.
ClockSkew = TimeSpan.FromMinutes(0)
};
}
private bool ValidateToken(string token)
{
var tokenHandler = new JwtSecurityTokenHandler();
if (tokenHandler.CanReadToken(token))
{
var validationParams = TokenValidationParameters();
SecurityToken validatedToken;
// ValidateToken throws if validation fails, so wrap it in a try/catch if you want to return false instead
var principal = tokenHandler.ValidateToken(token, validationParams, out validatedToken);
return validatedToken != null;
}
return false;
}
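If you host this in ASP.NET Core and use the JWT bearer middleware rather than calling ValidateToken yourself, the same settings can be plugged in at startup. A minimal sketch, assuming the TokenValidationParameters() helper above is reachable from Startup:
services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Reuse the validation settings built above; the middleware then validates the token on every request.
        options.TokenValidationParameters = TokenValidationParameters();
    });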
Related
I am trying to do AES encryption on the server side and decryption on the client side. I have followed an example where CryptoJS is used for encryption and SubtleCrypto for decryption, both on the client side, but in my case the encryption and decryption are separated.
Suppose I have the following encryption function within React Native:
const crypto = require('crypto'); // Node-style crypto API (needs a Node-compatible crypto module in React Native)
const encrypt = (str: string) => {
const iv = crypto.randomBytes(12); // 12-byte IV, as recommended for GCM
const myHexToken = "0x...."
const cipher = crypto.createCipheriv('aes-256-gcm', myHexToken.slice(0,32), iv)
let encrypted = cipher.update(str, 'utf8', 'hex')
encrypted += cipher.final('hex');
const tag = cipher.getAuthTag();
return {
message: encrypted,
tag: tag.toString('hex'),
iv: iv.toString('hex'),
};
};
This JSON is then posted to the client through a WebView postMessage.
The client side has the following javascript injected:
var myHexToken = "0x....";
window.addEventListener("message", async function (event) {
var responseData = JSON.parse(event.data);
try {
var decryptedData = await decrypt(responseData.iv, responseData.message, responseData.tag);
} catch (e) {
alert(e);
}
// ...
How can I decrypt responseData.message within the WebView through SubtleCrypto of the Web Crypto API?
I have tried various things with the following methods, but I keep getting an "OperationError":
function fromHex(hexString) {
return new Uint8Array(hexString.match(/.{1,2}/g).map(byte => parseInt(byte, 16)));
}
function str2ab(str) {
const buf = new ArrayBuffer(str.length);
const bufView = new Uint8Array(buf);
for (let i = 0, strLen = str.length; i < strLen; i++) {
bufView[i] = str.charCodeAt(i);
}
return buf;
}
function fromBase64(base64String) {
return Uint8Array.from(window.atob(base64String), c => c.charCodeAt(0));
}
async function importKey(rawKey) {
var key = await crypto.subtle.importKey(
"raw",
rawKey,
"AES-GCM",
true,
["encrypt", "decrypt"]
);
return key;
}
async function decrypt(iv, data, tag) {
var rawKey = fromHex(myHexToken.slice(0,32));
var iv = fromHex(iv);
var ciphertext = str2ab(data + tag);
var cryptoKey = await importKey(rawKey)
var decryptedData = await window.crypto.subtle.decrypt(
{
name: "AES-GCM",
iv: iv
},
cryptoKey,
ciphertext
)
var decoder = new TextDecoder();
var plaintext = decoder.decode(decryptedData);
return plaintext;
}
UPDATE 1: Added the getAuthTag implementation server side. Changed IV to have length of 12 bytes. Attempt to concatenate ciphertext and tag client side.
I have verified that "myHexToken" is the same both client and server side. Also, the return values of the server side "encrypt()" method are correctly sent to the client.
In the WebCrypto code the key must not be hex decoded with fromHex(), but must be converted to an ArrayBuffer with str2ab().
Also, the concatenation of ciphertext and tag must not be converted to an ArrayBuffer with str2ab(), but must be hex decoded with fromHex().
With these fixes decryption works:
Test:
For the test, the following hex encoded key and plaintext are used on the NodeJS side:
const myHexToken = '000102030405060708090a0b0c0d0e0ff0f1f2f3f4f5f6f7f8f9fafbfcfdfeff';
const plaintext = "The quick brown fox jumps over the lazy dog";
const encryptedData = encrypt(plaintext);
console.log(encryptedData);
This results, for example, in the following output:
{
message: 'cc4beae785cda5c9413f49cf9449a6ae17fdc0f7435b9a8fd954602bdb4f4b825793f6b561c0d9a709007c',
tag: '046c8e56bbd13db2faed82d1b19c665e',
iv: '11f87b0eaf006373ae8bc94d'
}
The ciphertext created this way can be successfully decrypted with the fixed JavaScript code:
(async () => {
function fromHex(hexString) {
return new Uint8Array(hexString.match(/.{1,2}/g).map(byte => parseInt(byte, 16)));
}
function str2ab(str) {
const buf = new ArrayBuffer(str.length);
const bufView = new Uint8Array(buf);
for (let i = 0, strLen = str.length; i < strLen; i++) {
bufView[i] = str.charCodeAt(i);
}
return buf;
}
async function importKey(rawKey) {
var key = await crypto.subtle.importKey(
"raw",
rawKey,
"AES-GCM",
true,
["encrypt", "decrypt"]
);
return key;
}
async function decrypt(iv, data, tag) {
//var rawKey = fromHex(myHexToken.slice(0,32)); // Fix 1
var rawKey = str2ab(myHexToken.slice(0,32));
var iv = fromHex(iv);
//var ciphertext = str2ab(data + tag); // Fix 2
var ciphertext = fromHex(data + tag);
var cryptoKey = await importKey(rawKey)
var decryptedData = await window.crypto.subtle.decrypt(
{
name: "AES-GCM",
iv: iv
},
cryptoKey,
ciphertext
);
var decoder = new TextDecoder();
var plaintext = decoder.decode(decryptedData);
return plaintext;
}
var myHexToken = '000102030405060708090a0b0c0d0e0ff0f1f2f3f4f5f6f7f8f9fafbfcfdfeff'
var data = {
message: 'cc4beae785cda5c9413f49cf9449a6ae17fdc0f7435b9a8fd954602bdb4f4b825793f6b561c0d9a709007c',
tag: '046c8e56bbd13db2faed82d1b19c665e',
iv: '11f87b0eaf006373ae8bc94d'
}
var plaintext = await decrypt(data.iv, data.message, data.tag);
console.log(plaintext);
})();
A remark about the key: In the posted NodeJS code, const myHexToken = "0x...." is set. It's not clear to me whether the 0x prefix is just supposed to symbolize a hex encoded string or is really contained in the string. If the latter, it should be removed before the implicit UTF-8 encoding (by createCipheriv()). In the case of hex decoding it must be removed anyway.
In the posted example a valid hex encoded 32 bytes key is used (i.e. without 0x prefix).
With regard to the key encoding, also note the following:
The conversion of the key from a hex encoded string by a UTF-8 (or ASCII) encoding results in only half of the key being considered, in the example: 000102030405060708090a0b0c0d0e0f. This reduces security, because the value range per byte is reduced from 256 to 16 values.
In order for the entire key to be considered, the correct conversion on the NodeJS side would be: Buffer.from(myHexToken, 'hex') and on the WebCrypto side: var rawKey = fromHex(myHexToken).
Because of its implicit UTF-8 encoding, crypto.createCipheriv(..., myHexToken.slice(0,32), ...) creates a 32 byte key and is functionally identical to str2ab(myHexToken.slice(0,32)) only as long as the characters in the substring myHexToken.slice(0,32) are ASCII characters (which is true for a hex encoded string).
I'm trying to connect my Google Sheet to Coinbase API using apps script. When I try to use the authentication with my keys, I keep getting the same error:
{"errors":[{"id":"authentication_error","message":"invalid timestamp"}]}
(Code 401).
I checked the time difference between my request and the Coinbase server (to see if it is more than 30 seconds) and it isn't (1589465439 (mine) / 1589465464 (server)).
My code:
var timestamp = Math.floor(Date.now() / 1000) + 15;
Logger.log(timestamp);
var req = {
method: 'GET',
path: '/v2/accounts',
body: ''
};
var message = timestamp + req.method + req.path + req.body;
var secret = Utilities.base64Decode(apiKey);
secret = Utilities.newBlob(secret).getDataAsString();
//create a hexadecimal encoded SHA256 signature of the message
var hmac = Utilities.computeHmacSignature(Utilities.MacAlgorithm.HMAC_SHA_256, message, secret);
var signature = Utilities.base64Encode(hmac);
Logger.log(signature);
var signatureStr = '';
for (i = 0; i < signature.length; i++) {
var byte = signature[i];
if (byte < 0)
byte += 256;
var byteStr = byte.toString(16);
// Ensure we have 2 chars in our byte, pad with 0
if (byteStr.length == 1) byteStr = '0' + byteStr;
signatureStr += byteStr;
}
Logger.log(signatureStr);
var options = {
baseUrl: 'https://api.coinbase.com/',
url: req.path,
method: req.method,
headers: {
'CB-ACCESS-SIGN': signatureStr,
'CB-ACCESS-TIMESTAMP': timestamp,
'CB-ACCESS-KEY': apiKey
}
};
var response = UrlFetchApp.fetch("https://api.coinbase.com/v2/accounts", options);
This is an old question, but in case it's still unsolved, I see a few changes that will fix this. I ran into a similar issue and had to solve it.
You should have 2 keys total (1 API key, 1 secret key). The secret key is a separate key that comes from Coinbase; it is not a decoded variant of the API key and does not need explicit decoding.
Where you're passing timestamp as a header, convert that value to a string to fix the timestamp error:
headers: {
'CB-ACCESS-SIGN': signatureStr,
'CB-ACCESS-TIMESTAMP': timestamp.toString(),
'CB-ACCESS-KEY': apiKey
}
You can collapse the hmac and signature variables into this one-liner before converting to hex:
let signature = Utilities.computeHmacSha256Signature(message, secret);
With these three changes your code started working with my keys.
I am using AspNet Core to build a web api and JWT tokens to authenticate users.
I see that in TokenValidationParameters the default value of ValidateIssuerSigningKey property is false.
Does it make any difference if we set it to true when using an HMAC-SHA256 symmetric key to sign and verify tokens (where no public key is embedded in the token, unlike with RSA)?
services
.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(cfg =>
{
cfg.RequireHttpsMetadata = false;
cfg.SaveToken = true;
string jwtIssuer = configuration["JwtIssuer"];
SymmetricSecurityKey securityKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(configuration["JwtKey"]));
cfg.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuer = jwtIssuer,
ValidAudience = jwtIssuer,
ValidateIssuerSigningKey = true,
IssuerSigningKey = securityKey,
ClockSkew = TimeSpan.Zero
};
});
Or is it necessary to set ValidateIssuerSigningKey to true only when using RSA keys?
Here is the code level documentation of this property:
//
// Summary:
// Gets or sets a boolean that controls if validation of the Microsoft.IdentityModel.Tokens.SecurityKey
// that signed the securityToken is called.
//
// Remarks:
// It is possible for tokens to contain the public key needed to check the signature.
// For example, X509Data can be hydrated into an X509Certificate, which can be used
// to validate the signature. In these cases it is important to validate the SigningKey
// that was used to validate the signature.
[DefaultValue(false)]
public bool ValidateIssuerSigningKey { get; set; }
Based on looking at the Microsoft.IdentityModel.Tokens source code, I could find only one place where the ValidateIssuerSigningKey boolean property is used, here:
https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/blob/dev/src/Microsoft.IdentityModel.Tokens/Validators.cs
Which ultimately causes this code block to be executed:
X509SecurityKey x509SecurityKey = securityKey as X509SecurityKey;
if (x509SecurityKey?.Certificate is X509Certificate2 cert)
{
DateTime utcNow = DateTime.UtcNow;
var notBeforeUtc = cert.NotBefore.ToUniversalTime();
var notAfterUtc = cert.NotAfter.ToUniversalTime();
if (notBeforeUtc > DateTimeUtil.Add(utcNow, validationParameters.ClockSkew))
throw LogHelper.LogExceptionMessage(new SecurityTokenInvalidSigningKeyException(LogHelper.FormatInvariant(LogMessages.IDX10248, notBeforeUtc, utcNow)));
LogHelper.LogInformation(LogMessages.IDX10250, notBeforeUtc, utcNow);
if (notAfterUtc < DateTimeUtil.Add(utcNow, validationParameters.ClockSkew.Negate()))
throw LogHelper.LogExceptionMessage(new SecurityTokenInvalidSigningKeyException(LogHelper.FormatInvariant(LogMessages.IDX10249, notAfterUtc, utcNow)));
LogHelper.LogInformation(LogMessages.IDX10251, notAfterUtc, utcNow);
}
That is, the flag relates to X509 certificates only, specifically checking the time period for which they are valid. So I suspect it does not affect tokens validated using HMAC256... unless the HMAC key was obtained from an X509 certificate!
The Azure Active Directory people clarify this on this GitHub Wiki page created in December 2020:
https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki/Use-of-TokenValidationParameters.ValidateIssuerSigningKey
I'll quote key parts for convenience:
Normally this [ValidateIssuerSigningKey] is not required because the user / runtime must set IssuerSigningKey or IssuerSigningKeys or in the case of custom security key retrieval the delegate IssuerSigningKeyResolver ( Definition ) for keys to be available for validating the signature on the token. ... It is assumed that only keys from trusted sources are set.
If you need custom validation of the security key that signed the token you can ... set TokenValidationParameters.ValidateIssuerSigningKey to true. ...
The default behavior is applicable to X509SecurityKey and checks that the certificate is not expired. No CRL or other checks are made.
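If you do need custom validation of the signing key, TokenValidationParameters also exposes an IssuerSigningKeyValidator delegate that you can combine with ValidateIssuerSigningKey = true. A minimal sketch in the AddJwtBearer setup from the question (the minimum-key-size rule is purely illustrative):
cfg.TokenValidationParameters = new TokenValidationParameters
{
    ValidIssuer = jwtIssuer,
    ValidAudience = jwtIssuer,
    IssuerSigningKey = securityKey,
    ValidateIssuerSigningKey = true,
    // Invoked for the key that signed the token; returning false rejects the token
    IssuerSigningKeyValidator = (key, token, parameters) =>
        key is SymmetricSecurityKey symmetricKey && symmetricKey.KeySize >= 256
};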
We need to update users' claims after they log in to our website. This is caused by changes to the users' licenses made by another part of our system.
However, I cannot figure out how to update the claims without a logout/login.
Right now this is our client setup:
app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
//user validation host
Authority = UrlConstants.BaseAddress,
//Client that the user is validating against
ClientId = guid, // if not converted to a Guid the comparison on the server fails
RedirectUri = UrlConstants.RedirectUrl,
PostLogoutRedirectUri = UrlConstants.RedirectUrl,
ResponseType = "code id_token token",
Scope = "openid profile email roles licens umbraco_api umbracoaccess",
UseTokenLifetime = false,
SignInAsAuthenticationType = "Cookies",
Notifications = new OpenIdConnectAuthenticationNotifications
{
SecurityTokenValidated = async n =>
{
_logger.Info("ConfigureAuth", "Token valdidated");
var id = n.AuthenticationTicket.Identity;
var nid = new ClaimsIdentity(
id.AuthenticationType,
Constants.ClaimTypes.GivenName,
Constants.ClaimTypes.Role);
// get userinfo data
var uri = new Uri(n.Options.Authority + "/connect/userinfo");
var userInfoClient = new UserInfoClient(uri,n.ProtocolMessage.AccessToken);
var userInfo = await userInfoClient.GetAsync();
userInfo.Claims.ToList().ForEach(ui => nid.AddClaim(new Claim(ui.Item1, ui.Item2)));
var licens = id.FindAll(LicenseScope.Licens);
nid.AddClaims(licens);
// keep the id_token for logout
nid.AddClaim(new Claim("id_token", n.ProtocolMessage.IdToken));
n.AuthenticationTicket = new AuthenticationTicket(
nid,
n.AuthenticationTicket.Properties);
_logger.Info("ConfigureAuth", "AuthenticationTicket created");
},
RedirectToIdentityProvider = async n =>
{
// if signing out, add the id_token_hint
if (n.ProtocolMessage.RequestType == OpenIdConnectRequestType.LogoutRequest)
{
var idTokenHint = n.OwinContext.Authentication.User.FindFirst("id_token").Value;
_logger.Debug("ConfigureAuth", "id_token for logout set on request");
_logger.Debug("ConfigureAuth", "Old PostLogoutRedirectUri: {0}", n.ProtocolMessage.PostLogoutRedirectUri.ToString());
n.ProtocolMessage.IdTokenHint = idTokenHint;
var urlReferrer = HttpContext.Current.Request.UrlReferrer.ToString();
if (!urlReferrer.Contains("localhost"))
{
n.ProtocolMessage.PostLogoutRedirectUri = GetRedirectUrl();
}
else
{
n.ProtocolMessage.PostLogoutRedirectUri = urlReferrer;
}
_logger.Debug("ConfigureAuth", string.Format("Setting PostLogoutRedirectUri to: {0}", n.ProtocolMessage.PostLogoutRedirectUri.ToString()));
}
if (n.ProtocolMessage.RequestType == OpenIdConnectRequestType.AuthenticationRequest)
{
n.ProtocolMessage.RedirectUri = GetRedirectUrl2();
n.ProtocolMessage.AcrValues = GetCurrentUmbracoId();
_logger.Debug("ConfigureAuth", string.Format("Setting RedirectUri to: {0}", n.ProtocolMessage.RedirectUri.ToString()));
}
},
}
});
We get our custom claims in SecurityTokenValidated
var licens = id.FindAll(LicenseScope.Licens);
nid.AddClaims(licens);
I do not see how to get these updated claims without doing a login. Any help is highly appreciated.
That's a reminder that you should not put claims that might change during the lifetime of the session into tokens.
That said - you can set a new cookie at any point in time.
Reach into the OWIN authentication manager and call the SignIn method. Pass the claims identity that you want to serialize into the cookie.
e.g.
Request.GetOwinContext().Authentication.SignIn(newIdentity);
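For example, in an MVC action (a sketch only: the "licens" claim type mirrors the question's naming and GetCurrentLicenses is a hypothetical lookup against your own license store):
public ActionResult RefreshLicenseClaims()
{
    var authentication = Request.GetOwinContext().Authentication;

    // Copy the current identity; the authentication type must match the cookie
    // middleware ("Cookies" in the setup above) so the new cookie replaces the old one.
    var currentIdentity = (ClaimsIdentity)User.Identity;
    var newIdentity = new ClaimsIdentity(currentIdentity.Claims, "Cookies");

    // Drop the stale license claims and add the refreshed ones.
    foreach (var stale in newIdentity.FindAll("licens").ToList())
        newIdentity.RemoveClaim(stale);
    foreach (var licens in GetCurrentLicenses(newIdentity.Name)) // hypothetical lookup
        newIdentity.AddClaim(new Claim("licens", licens));

    // Re-issue the authentication cookie with the updated identity.
    authentication.SignIn(newIdentity);

    return RedirectToAction("Index");
}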
I am attempting to create a token validation method that returns true if a JWT token is valid based on its signature. I don't think I really need to validate everything in the token, but what actually signifies that a token is valid after calling ValidateToken()? The existence of a principal? That the out-parameter token contains certain values? I'm not sure when to return true from this method.
public bool ValidateToken(string tokenString)
{
var validationParameters = new TokenValidationParameters()
{
ValidIssuer = "My Company",
ValidAudience = ApplicationId,
IssuerSigningKey = JsonWebTokenSecretKey
};
SecurityToken token = new JwtSecurityToken();
var tokenHandler = new JwtSecurityTokenHandler();
var principal = tokenHandler.ValidateToken(tokenString, validationParameters, out token);
return principal != null;
}
I check all of the claims values manually. I've been searching for a definitive answer to this same question, but the only thing I have seen is that the ValidateToken function will throw an exception if something is wrong, so I begin by wrapping the call in a try-catch and return false from the catch.
That's just my "first-pass" at validating the token, though. Afterwards I do a little more heavy lifting to check certain values manually. For example, I make sure that the unique_name value in the claims section actually exists as a user in my database, that the user has not been deactivated, and other proprietary system stuff like that.
public static bool VerifyToken(string token)
{
var validationParameters = new TokenValidationParameters()
{
IssuerSigningToken = new BinarySecretSecurityToken(_key),
ValidAudience = _audience,
ValidIssuer = _issuer,
ValidateLifetime = true,
ValidateAudience = true,
ValidateIssuer = true,
ValidateIssuerSigningKey = true
};
var tokenHandler = new JwtSecurityTokenHandler();
SecurityToken validatedToken = null;
try
{
tokenHandler.ValidateToken(token, validationParameters, out validatedToken);
}
catch(SecurityTokenException)
{
return false;
}
catch(Exception e)
{
log(e.ToString()); //something else happened
throw;
}
//... manual validations return false if anything untoward is discovered
return validatedToken != null;
}
The last line, return validatedToken != null, is purely superstition on my part. I've never seen the validatedToken be null.