What is the strategy of data-protection key rotation with multiple pods? - asp.net-core

I used services.AddDataProtection().PersistKeysToFileSystem(path).ProtectKeysWithAzureKeyVault(authData) to encrypt the data-protection keys. In the 24 hours since deployment, no new data-protection key has been generated, which means that until the current data-protection key expires, no encryption of the keys at rest is in place.
Now, to force data-protection key generation, I can delete the latest data-protection key and restart the pods, but that leads to the race condition described here: https://github.com/dotnet/aspnetcore/issues/28475 so I would need to restart them again. Will users whose cookies were encrypted with the now-deleted data-protection key be logged out?
This also bothers me about the regular rotation: what exactly happens when the data-protection key rotates every 180 days? Users' cookies are encrypted with the current key, so if they are signed in, would their cookies no longer be valid?
Additionally, if one of, say, 6 pods generates a new data-protection key, when do the rest sync up? Is it possible to fetch a form from one pod and then submit it via another while the two are using different data-protection keys?
How to deal with all that?

This issue is still open; there is a meta issue that links to the other open issues on the subject:
https://github.com/dotnet/aspnetcore/issues/36157
I had the same problem, but instead of pods I have AWS Lambda functions.
I solved the problem by disabling automatic key generation:
services.AddDataProtection()
    .DisableAutomaticKeyGeneration();
And managing the keys myself. I have at least two keys:
The default key. It expires 190 days after activation and is the default key for 180 days.
The next key. It activates 10 days before the current key expires and expires 190 days after its own activation. It will then be the default key for 180 days.
This is the code I execute before deploying the Lambda function, and then once a month:
public class KeyringUpdater
{
    private readonly ILogger<KeyringUpdater> logger;
    private readonly IKeyManager keyManager;

    public KeyringUpdater(IKeyManager keyManager, ILogger<KeyringUpdater> logger)
    {
        this.logger = logger;
        this.keyManager = keyManager;
    }

    // The key that is currently in effect: activated, not yet expired, not revoked.
    private IKey? GetDefaultKey(IReadOnlyCollection<IKey> keys)
    {
        var now = DateTimeOffset.UtcNow;
        return keys.FirstOrDefault(x => x.ActivationDate <= now && x.ExpirationDate > now && x.IsRevoked == false);
    }

    // The key staged to take over: activates before the current key expires and outlives it.
    private IKey? GetNextKey(IReadOnlyCollection<IKey> keys, IKey key)
    {
        return keys.FirstOrDefault(x => x.ActivationDate > key.ActivationDate && x.ActivationDate < key.ExpirationDate && x.ExpirationDate > key.ExpirationDate && x.IsRevoked == false);
    }

    public void Update()
    {
        var keys = this.keyManager.GetAllKeys();
        logger.LogInformation("Found {Count} keys", keys.Count);

        var defaultKey = GetDefaultKey(keys);
        if (defaultKey == null)
        {
            logger.LogInformation("No default key found");
            var now = DateTimeOffset.UtcNow;
            defaultKey = this.keyManager.CreateNewKey(now, now.AddDays(190));
            logger.LogInformation("Default key created. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", defaultKey.ActivationDate, defaultKey.ExpirationDate);
            keys = this.keyManager.GetAllKeys();
        }
        else
        {
            logger.LogInformation("Found default key. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", defaultKey.ActivationDate, defaultKey.ExpirationDate);
        }

        var nextKey = GetNextKey(keys, defaultKey);
        if (nextKey == null)
        {
            logger.LogInformation("No next key found");
            // Activate 10 days before the default key expires; expire 190 days after that activation.
            nextKey = this.keyManager.CreateNewKey(defaultKey.ExpirationDate.AddDays(-10), defaultKey.ExpirationDate.AddDays(180));
            logger.LogInformation("Next key created. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", nextKey.ActivationDate, nextKey.ExpirationDate);
        }
        else
        {
            logger.LogInformation("Found next key. ActivationDate: {ActivationDate}, ExpirationDate: {ExpirationDate}", nextKey.ActivationDate, nextKey.ExpirationDate);
        }
    }
}
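For completeness, here is one way the updater could be wired up and run as a one-off console job. This wiring is a hypothetical sketch, not part of the original answer: the key-store path is a placeholder, and AddConsole assumes the Microsoft.Extensions.Logging.Console package. AddDataProtection registers IKeyManager, so the updater's dependencies resolve from the same configuration the application uses.

using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

var services = new ServiceCollection();
services.AddLogging(builder => builder.AddConsole());
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo("/path/to/keys")) // placeholder: point at the app's key store
    .DisableAutomaticKeyGeneration();
services.AddSingleton<KeyringUpdater>();

using var provider = services.BuildServiceProvider();
provider.GetRequiredService<KeyringUpdater>().Update();

Running this on a schedule (the "once a month" above) keeps a valid default key and a staged successor in the ring at all times.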

Related

Azure container shared access signature expiring

I'm having trouble with Azure Blobs and Shared Access Signatures when they expire. I need to grant access to a blob for longer than 1 hour (1 year), so I'm using a named container policy, but unfortunately it's still expiring after 1 hour.
SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy();
sharedAccessPolicy.Permissions = SharedAccessPermissions.Read;
sharedAccessPolicy.SharedAccessStartTime = DateTime.UtcNow;
//sharedAccessPolicy.SharedAccessExpiryTime = DateTime.UtcNow.AddYears(1); // no need to define the expiry time here

BlobContainerPermissions blobContainerPermissions = new BlobContainerPermissions();
blobContainerPermissions.SharedAccessPolicies.Add("default", sharedAccessPolicy);
container.SetPermissions(blobContainerPermissions);

Console.WriteLine("Press any key to continue....");
Console.ReadLine();

CloudBlob blob = container.GetBlobReference(path);
string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy()
{
    // add the expiry date only when you're creating the signed URL
    SharedAccessExpiryTime = DateTime.UtcNow.AddDays(7),
}, "default");

Console.WriteLine(blob.Uri.AbsoluteUri + sas);
Process.Start(new ProcessStartInfo(blob.Uri.AbsoluteUri + sas));
Console.WriteLine("Press any key to continue....");
Console.ReadLine();
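The snippet above uses the legacy WindowsAzure.Storage client. For reference, here is a hedged sketch of the same split with the current Azure.Storage.Blobs / Azure.Storage.Sas packages (the method and parameter names below are illustrative assumptions, not from the original answer): the named policy carries the permissions with no expiry, and the expiry is supplied only when the SAS is signed.

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

static class Sas
{
    public static Uri CreateReadSas(string connectionString, string containerName, string blobName)
    {
        var blobClient = new BlobClient(connectionString, containerName, blobName);
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerName,
            BlobName = blobName,
            Resource = "b",                               // blob-level SAS
            Identifier = "default",                       // the stored container policy defines Read
            ExpiresOn = DateTimeOffset.UtcNow.AddDays(7)  // expiry supplied only at signing time
        };
        return blobClient.GenerateSasUri(sasBuilder);
    }
}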

Why Redis keys are not expiring?

I have checked these questions, but they did not help me fix my issue. I am using Redis as a key-value store for rate limiting in my Spring REST application, via the spring-data-redis library. Testing under heavy load, I use the following code to store a key and set its expire time as well. Most of the time the key expires as expected, but sometimes the key does not expire!
code snippet
RedisAtomicInteger counter = new RedisAtomicInteger("mykey", template.getConnectionFactory());
counter.expire(1, TimeUnit.MINUTES);
I checked the availability of the keys using redis-cli tool
keys *
and
ttl keyname
redis.conf has the default values.
Any suggestions?
Edit 1:
Full code:
The function is in an Aspect
public synchronized Object checkLimit(ProceedingJoinPoint joinPoint) throws Throwable {
    boolean isKeyAvailable = false;
    List<String> keysList = new ArrayList<>();
    Object[] obj = joinPoint.getArgs();
    String randomKey = (String) obj[1];
    int randomLimit = (Integer) obj[2];
    // RedisTemplate is already injected into this class as
    // @Autowired
    // private RedisTemplate template;
    Set<String> redisKeys = template.keys(randomKey + "_" + randomLimit + "*");
    Iterator<String> it = redisKeys.iterator();
    while (it.hasNext()) {
        String data = it.next();
        keysList.add(data);
    }
    if (keysList.size() > 0) {
        isKeyAvailable = keysList.get(0).contains(randomKey + "_" + randomLimit);
    }
    RedisAtomicInteger counter = null;
    if (!isKeyAvailable) {
        // the key is not there yet: create it with the expiry timestamp encoded in its name
        int timePeriodInMinutes = 1;
        long expiryTimeStamp = new Date(System.currentTimeMillis() + timePeriodInMinutes * 60 * 1000).getTime();
        counter = new RedisAtomicInteger(randomKey + "_" + randomLimit + "_" + expiryTimeStamp, template.getConnectionFactory());
        counter.incrementAndGet();
        counter.expire(timePeriodInMinutes, TimeUnit.MINUTES);
    } else {
        String[] keys = keysList.get(0).split("_");
        String rLimit = keys[1];
        counter = new RedisAtomicInteger(keysList.get(0), template.getConnectionFactory());
        int count = counter.get();
        // if the count exceeds the limit, throw an error
        if (count != 0 && count >= Integer.parseInt(rLimit)) {
            throw new Exception("Error");
        } else {
            counter.incrementAndGet();
        }
    }
    return joinPoint.proceed();
}
When these lines run:
RedisAtomicInteger counter = new RedisAtomicInteger("mykey", template.getConnectionFactory());
counter.expire(1, TimeUnit.MINUTES);
I can see
75672562.380127 [0 10.0.3.133:65462] "KEYS" "mykey_1000*"
75672562.384267 [0 10.0.3.133:65462] "GET" "mykey_1000_1475672621787"
75672562.388856 [0 10.0.3.133:65462] "SET" "mykey_1000_1475672621787" "0"
75672562.391867 [0 10.0.3.133:65462] "INCRBY" "mykey_1000_1475672621787" "1"
75672562.395922 [0 10.0.3.133:65462] "PEXPIRE" "mykey_1000_1475672621787" "60000"
...
75672562.691723 [0 10.0.3.133:65462] "KEYS" "mykey_1000*"
75672562.695562 [0 10.0.3.133:65462] "GET" "mykey_1000_1475672621787"
75672562.695855 [0 10.0.3.133:65462] "GET" "mykey_1000_1475672621787"
75672562.696139 [0 10.0.3.133:65462] "INCRBY" "mykey_1000_1475672621787" "1"
in the Redis log when I run MONITOR on it.
Edit:
Now, with the updated code, I believe your methodology is fundamentally flawed, aside from what you're reporting.
The way you've implemented it, you require running KEYS in production - this is bad. As you scale out, you will be putting a growing, and unnecessary, blocking load on the server. As every bit of documentation on it says, do not use KEYS in production. Note that encoding the expiration time in the key name gives you no benefit. If you made that part of the key name the creation timestamp, or even a random number, nothing would change. Indeed, if you removed that bit entirely, nothing would change.
A more sane route would instead be to use a keyname which is not time-dependent. The use of expiration handles that function for you. Let us call your rate-limited thing a "session". Your key name sans the timestamp is the "session ID". By setting an expiration of 60s on it, it will no longer be available at the 61s mark. So you can safely increment and compare the result to your limit without needing to know the current time or expiry time. All you need is a static key name and an appropriate expiration set on it.
If you INCR a non-existent key, Redis will return 1, meaning it created the key and incremented it in a single step/call. So basically the logic goes like this:
1. create "session" ID
2. increment counter using ID
3. compare result to limit
3.1. if count == 1, set expiration to 60s
3.2. if count > limit, reject
Step 3.1 is important. A count of 1 means this is a new key in Redis, and you want to set your expiration on it. Anything else means the expiration should already have been set. If you set it in step 3.2 you will break the process, because the counter would be preserved for more than 60s.
With this you don't need dynamic key names based on expiration time, and thus don't need to use KEYS to find out whether there is an existing "session" for the rate-limited object. It also makes your code much simpler and more predictable, and reduces round trips to Redis - meaning lower load on Redis and better performance. As to how to do that with the client library you're using, I can't say, because I'm not that familiar with it. But the basic sequence should be translatable to it, as it is fairly basic and simple.
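For illustration, here is a minimal sketch of that sequence. The question's code is Java/Spring, but to keep a single language across this page the sketch uses C# with StackExchange.Redis; the key prefix and client wiring are assumptions, not part of the original answer.

using System;
using StackExchange.Redis;

class RateLimiter
{
    private readonly IDatabase db;
    public RateLimiter(IConnectionMultiplexer mux) => db = mux.GetDatabase();

    // Returns true while the caller is within the limit for the current window.
    public bool TryAcquire(string sessionId, int limit)
    {
        string key = "rate:" + sessionId;      // static key name, no timestamp baked in
        long count = db.StringIncrement(key);  // INCR creates the key at 1 if it is absent
        if (count == 1)
            db.KeyExpire(key, TimeSpan.FromSeconds(60)); // step 3.1: start the window on the first hit
        return count <= limit;                 // step 3.2: reject once over the limit
    }
}

Note that the expiration is set only when INCR returns 1, exactly as step 3.1 requires; setting it anywhere else would, as noted above, preserve the counter past the 60s window.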
What you haven't shown, however, is anything to support the assertion that the expiration isn't happening. All you've done is show that Redis is indeed being told to set an expiration, and is setting one. To support your claim you need to show that the key does not expire, which means showing retrieval of the key after the expiration time, and that the counter was not "reset" by being recreated after the expiration. One way you can see that expiration is happening is to use keyspace notifications; with those you will be able to see Redis announce that a key was expired.
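As a hedged sketch of that check, again in C# with StackExchange.Redis (the endpoint and database number are assumptions, and allowAdmin is only needed for the CONFIG SET convenience call):

using System;
using StackExchange.Redis;

class ExpiryWatcher
{
    static void Main()
    {
        var mux = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
        // make Redis publish expired-key events (the "Ex" flags)
        mux.GetServer("localhost", 6379).ConfigSet("notify-keyspace-events", "Ex");
        mux.GetSubscriber().Subscribe(
            new RedisChannel("__keyevent@0__:expired", RedisChannel.PatternMode.Literal),
            (channel, key) => Console.WriteLine($"expired: {key}"));
        Console.ReadLine(); // keep the process alive while watching
    }
}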
Where this process will fall a bit short is if you use multiple windows for rate-limiting, or if you have a much larger window (i.e. 10 minutes), in which case sorted sets might be a saner option to prevent front-loading of requests, if desired. But as your example is written, the above will work just fine.
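For the larger-window case, here is a sketch of the sorted-set variant, under the same assumptions as the sketch above; the 10-minute window matches the example in the text.

using System;
using StackExchange.Redis;

class SlidingWindowLimiter
{
    private readonly IDatabase db;
    public SlidingWindowLimiter(IConnectionMultiplexer mux) => db = mux.GetDatabase();

    // Sliding 10-minute window: every request is a timestamped member of a sorted set,
    // so bursts cannot front-load the whole allowance at the edge of a fixed window.
    public bool TryAcquire(string sessionId, int limit)
    {
        string key = "rate:z:" + sessionId;
        double now = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        double windowStart = now - TimeSpan.FromMinutes(10).TotalMilliseconds;
        db.SortedSetRemoveRangeByScore(key, double.NegativeInfinity, windowStart); // drop requests outside the window
        db.SortedSetAdd(key, $"{now}:{Guid.NewGuid():N}", now);                    // unique member per request
        db.KeyExpire(key, TimeSpan.FromMinutes(10));                               // garbage-collect idle sessions
        return db.SortedSetLength(key) <= limit;
    }
}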

Creating a private/public key with 64 characters that are already known using bitcoinjs

So I'm trying to create a private/public key from 64 characters that I already know using bitcoinjs with the code below:
key = Bitcoin.ECKey.makeRandom();
// Print your private key (in WIF format)
document.write(key.toWIF());
// => Kxr9tQED9H44gCmp6HAdmemAzU3n84H3dGkuWTKvE23JgHMW8gct
// Print your public key (toString defaults to a Bitcoin address)
document.write(key.pub.getAddress().toString());
// => 14bZ7YWde4KdRb5YN7GYkToz3EHVCvRxkF
If I try to set "key" to my 64 characters instead of "Bitcoin.ECKey.makeRandom();", it fails. Is there a method or library that I have overlooked that would allow me to use the known 64 characters to generate the private key in WIF format and the public address?
Thanks in advance to anyone that may be able to offer some help.
You should use the fromWIF method to pass in your own data.
From the source code of eckey.js:
// Static constructors
ECKey.fromWIF = function(string) {
  var payload = base58check.decode(string)
  var compressed = false

  // Ignore the version byte
  payload = payload.slice(1)

  if (payload.length === 33) {
    assert.strictEqual(payload[32], 0x01, 'Invalid compression flag')

    // Truncate the compression flag
    payload = payload.slice(0, -1)
    compressed = true
  }
To create a WIF from your key, please follow https://en.bitcoin.it/wiki/Wallet_import_format
Here is an interactive tool: http://gobittest.appspot.com/PrivateKey
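The wiki's steps are mechanical enough to sketch. Below is a minimal illustration in C# (not the bitcoinjs API; it assumes .NET 5+ for Convert.FromHexString, and that your 64 known characters are a hex-encoded 32-byte private key): prepend the 0x80 mainnet version byte, optionally append the 0x01 compression flag, append the first 4 bytes of a double SHA-256 checksum, and Base58-encode.

using System;
using System.Linq;
using System.Numerics;
using System.Security.Cryptography;

static class Wif
{
    const string Alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

    public static string FromPrivateKeyHex(string hex64chars, bool compressed = true)
    {
        byte[] key = Convert.FromHexString(hex64chars); // 64 hex chars -> 32 bytes
        byte[] payload = new byte[] { 0x80 }            // mainnet version byte
            .Concat(key)
            .Concat(compressed ? new byte[] { 0x01 } : Array.Empty<byte>())
            .ToArray();
        using var sha = SHA256.Create();
        byte[] checksum = sha.ComputeHash(sha.ComputeHash(payload)).Take(4).ToArray();
        return Base58Encode(payload.Concat(checksum).ToArray());
    }

    static string Base58Encode(byte[] data)
    {
        int zeros = data.TakeWhile(b => b == 0).Count(); // leading zero bytes encode as '1'
        // interpret the bytes as one big-endian unsigned integer (extra 0x00 keeps it positive)
        var value = new BigInteger(data.Reverse().Concat(new byte[] { 0 }).ToArray());
        var sb = new System.Text.StringBuilder();
        while (value > 0)
        {
            value = BigInteger.DivRem(value, 58, out var rem);
            sb.Insert(0, Alphabet[(int)rem]);
        }
        return new string('1', zeros) + sb;
    }
}

Deriving the public address additionally requires a secp256k1 point multiplication, which is what the library's ECKey type handles, so that part is best left to bitcoinjs.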
The solution to generate the private and public key:
// public key (toString defaults to the Bitcoin address)
var address = eckey.getBitcoinAddress().toString();

// private key in compressed WIF: append the 0x01 compression flag,
// then base58check-encode with the 0x80 mainnet version byte
var privateKeyBytesCompressed = privateKeyBytes.slice(0);
privateKeyBytesCompressed.push(0x01);
var privateKeyWIFCompressed = new Bitcoin.Address(privateKeyBytesCompressed);
privateKeyWIFCompressed.version = 0x80;
privateKeyWIFCompressed = privateKeyWIFCompressed.toString();

Take a look at moneyart.info for beautifully designed paper wallets.

Wrap a secret key with a public key using PKCS#11

In my C program, I generate a public/private key pair with the function C_GenerateKeyPair and a sensitive (secret) key with C_GenerateKey. The aim is to wrap the secret key with the public key, but when I call the function C_WrapKey, I get the error CKR_KEY_TYPE_INCONSISTENT. The code works if I instead wrap with another secret key that has the Wrap and Encrypt attributes set.
The template used for the public key is the one proposed in PKCS#11 documentation:
CK_SESSION_HANDLE hSession;
CK_OBJECT_HANDLE hPublicKey, hPrivateKey;
CK_MECHANISM mechanism = {
    CKM_RSA_PKCS_KEY_PAIR_GEN, NULL_PTR, 0
};
CK_ULONG modulusBits = 768;
CK_BYTE publicExponent[] = { 3 };
CK_BYTE id[] = { 123 };
CK_BBOOL true = CK_TRUE;
CK_ATTRIBUTE publicKeyTemplate[] = {
    {CKA_ENCRYPT, &true, sizeof(true)},
    {CKA_VERIFY, &true, sizeof(true)},
    {CKA_WRAP, &true, sizeof(true)},
    {CKA_MODULUS_BITS, &modulusBits, sizeof(modulusBits)},
    {CKA_PUBLIC_EXPONENT, publicExponent, sizeof(publicExponent)}
};
The Wrap and Encrypt attributes are correctly specified, and for the secret key to be wrapped I add the CKA_EXTRACTABLE attribute.
Thanks in advance for your help.
The error CKR_KEY_TYPE_INCONSISTENT was due to a wrong CK_MECHANISM being passed to the function C_WrapKey. If you want to wrap a secret key with an RSA public key, set the following mechanism:
CK_MECHANISM dec_mec = {CKM_RSA_PKCS, NULL_PTR, 0};
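If you are on .NET, the same fix looks roughly like this with the Pkcs11Interop HighLevelAPI. This is a hedged sketch under the assumption that the session is already logged in and the two handles come from C_GenerateKeyPair / C_GenerateKey, as in the question; it is not part of the original answer.

using Net.Pkcs11Interop.Common;
using Net.Pkcs11Interop.HighLevelAPI;

static class Wrapping
{
    // Wrap a secret key under an RSA public key with CKM_RSA_PKCS,
    // the mechanism the answer above arrives at.
    public static byte[] WrapWithRsaPublicKey(ISession session, IObjectHandle rsaPublicKey, IObjectHandle secretKey)
    {
        IMechanism mechanism = session.Factories.MechanismFactory.Create(CKM.CKM_RSA_PKCS);
        return session.WrapKey(mechanism, rsaPublicKey, secretKey);
    }
}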

matching and verifying Express 3/Connect 2 session keys from socket.io connection

I have a good start on a technique similar to this one for Express 3:
http://notjustburritos.tumblr.com/post/22682186189/socket-io-and-express-3
The idea is to let me grab the session object from within a socket.io connection callback, storing sessions via connect-redis in this case.
So, in app.configure we have
var db = require('connect-redis')(express)
....
app.configure(function(){
  ....
  app.use(express.cookieParser(SITE_SECRET));
  app.use(express.session({ store: new db }));
And in the app code there is
var redis_client = require('redis').createClient()
io.set('authorization', function(data, accept) {
  if (!data.headers.cookie) {
    return accept('Session cookie required.', false)
  }
  data.cookie = require('cookie').parse(data.headers.cookie);
  /* verify the signature of the session cookie. */
  //data.cookie = require('cookie').parse(data.cookie, SITE_SECRET);
  data.sessionID = data.cookie['connect.sid']

  redis_client.get(data.sessionID, function(err, session) {
    if (err) {
      return accept('Error in session store.', false)
    } else if (!session) {
      return accept('Session not found.', false)
    }
    // success! we're authenticated with a known session.
    data.session = session
    return accept(null, true)
  })
})
The sessions are being saved to redis, the keys look like this:
redis 127.0.0.1:6379> KEYS *
1) "sess:lpeNPnHmQ2f442rE87Y6X28C"
2) "sess:qsWvzubzparNHNoPyNN/CdVw"
and the values are unencrypted JSON. So far so good.
The cookie header, however, contains something like
{ 'connect.sid': 's:lpeNPnHmQ2f442rE87Y6X28C.obCv2x2NT05ieqkmzHnE0VZKDNnqGkcxeQAEVoeoeiU' }
So now the SessionStore key and the connect.sid value don't match, because the signature part (after the .) is stripped in the SessionStore version.
The question is: is it safe to just extract the SID part of the cookie (lpeNPnHmQ2f442rE87Y6X28C) and match on that, or should the signature part be verified? If so, how?
Rather than hacking around with private methods and internals of Connect that were NOT meant to be used this way, this npm module does a good job of wrapping socket.on in a method that pulls in the session, parsing and verifying it:
https://github.com/functioncallback/session.socket.io
Just use the cookie-signature module, as recommended by the comment lines in Connect's utils.js.
var cookie = require('cookie-signature');

// assuming you already have the session id from the client in a var called "sid"
var sid = cookies['connect.sid'];
sid = cookie.unsign(sid.slice(2), yourSecret);
if (sid === false) {
  // cookie validation failure
  // uh oh. Handle this error
} else {
  sid = "sess:" + sid;
  // proceed to retrieve from store
}
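For reference, the signature check itself is small: cookie-signature computes an HMAC-SHA256 over the SID with your secret, base64-encodes it, and strips the '=' padding. Below is a hedged sketch of the same check in C# (C# is used for the added examples on this page; the type and method names are illustrative, not from any library on this page).

using System;
using System.Security.Cryptography;
using System.Text;

static class ConnectCookie
{
    // Returns the bare session ID when the signature checks out, otherwise null.
    // The cookie value looks like "s:<sid>.<base64 HMAC-SHA256 signature>".
    public static string? Unsign(string cookieValue, string secret)
    {
        if (!cookieValue.StartsWith("s:")) return null;
        string body = cookieValue.Substring(2);
        int dot = body.LastIndexOf('.');
        if (dot < 0) return null;
        string sid = body.Substring(0, dot);
        string signature = body.Substring(dot + 1);
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
        string expected = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(sid))).TrimEnd('=');
        // a production version should compare in constant time
        return expected == signature ? sid : null;
    }
}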