I have checked these questions but they did not help me fix my issue. I am using Redis as a key-value store for rate limiting in my Spring REST application, using the spring-data-redis library. I test with a huge load. For that I use the following code to store a key, and I set the expire time as well. Most of the time the key expires as expected, but sometimes the key does not expire!
code snippet
RedisAtomicInteger counter = new RedisAtomicInteger("mykey", template.getConnectionFactory());
counter.expire(1, TimeUnit.MINUTES);
I checked the availability of the keys using redis-cli tool
keys *
and
ttl keyname
redis.conf has the default values.
Any suggestions?
Edit 1:
Full code:
The function is in an Aspect
public synchronized Object checkLimit(ProceedingJoinPoint joinPoint) throws Throwable {
    boolean isKeyAvailable = false;
    List<String> keysList = new ArrayList<>();
    Object[] obj = joinPoint.getArgs();
    String randomKey = (String) obj[1];
    int randomLimit = (Integer) obj[2];
    // RedisTemplate is already injected in this class as
    // @Autowired
    // private RedisTemplate template;
    Set<String> redisKeys = template.keys(randomKey + "_" + randomLimit + "*");
    Iterator<String> it = redisKeys.iterator();
    while (it.hasNext()) {
        String data = it.next();
        keysList.add(data);
    }
    if (keysList.size() > 0) {
        isKeyAvailable = keysList.get(0).contains(randomKey + "_" + randomLimit);
    }
    RedisAtomicInteger counter = null;
    // if the key is not there
    if (!isKeyAvailable) {
        int timePeriodInMinutes = 1;
        long expiryTimeStamp = new Date(System.currentTimeMillis() + timePeriodInMinutes * 60 * 1000).getTime();
        counter = new RedisAtomicInteger(randomKey + "_" + randomLimit + "_" + expiryTimeStamp, template.getConnectionFactory());
        counter.incrementAndGet();
        counter.expire(timePeriodInMinutes, TimeUnit.MINUTES);
    } else {
        String[] keys = keysList.get(0).split("_");
        String rLimit = keys[1];
        counter = new RedisAtomicInteger(keysList.get(0), template.getConnectionFactory());
        int count = counter.get();
        // if the count exceeds the limit, throw an error
        if (count != 0 && count >= Integer.parseInt(rLimit)) {
            throw new Exception("Error");
        } else {
            counter.incrementAndGet();
        }
    }
    return joinPoint.proceed();
}
When these lines run
RedisAtomicInteger counter = new RedisAtomicInteger("mykey", template.getConnectionFactory());
counter.expire(1, TimeUnit.MINUTES);
I can see
75672562.380127 [0 10.0.3.133:65462] "KEYS" "mykey_1000*"
75672562.384267 [0 10.0.3.133:65462] "GET" "mykey_1000_1475672621787"
75672562.388856 [0 10.0.3.133:65462] "SET" "mykey_1000_1475672621787" "0"
75672562.391867 [0 10.0.3.133:65462] "INCRBY" "mykey_1000_1475672621787" "1"
75672562.395922 [0 10.0.3.133:65462] "PEXPIRE" "mykey_1000_1475672621787" "60000"
...
75672562.691723 [0 10.0.3.133:65462] "KEYS" "mykey_1000*"
75672562.695562 [0 10.0.3.133:65462] "GET" "mykey_1000_1475672621787"
75672562.695855 [0 10.0.3.133:65462] "GET" "mykey_1000_1475672621787"
75672562.696139 [0 10.0.3.133:65462] "INCRBY" "mykey_1000_1475672621787" "1"
in the Redis log when I MONITOR it in redis-cli.
Edit:
Now, with the updated code, I believe your methodology is fundamentally flawed, aside from what you're reporting.
The way you've implemented it, you require running KEYS in production - this is bad. As you scale out you will be causing a growing, and unnecessary, blocking load on the server. As every bit of documentation on it says, do not use KEYS in production. Note that encoding the expiration time in the key name gives you no benefit. If you made that part of the key name the creation timestamp, or even a random number, nothing would change. Indeed, if you removed that bit, nothing would change.
A more sane route would instead be to use a keyname which is not time-dependent. The use of expiration handles that function for you. Let us call your rate-limited thing a "session". Your key name sans the timestamp is the "session ID". By setting an expiration of 60s on it, it will no longer be available at the 61s mark. So you can safely increment and compare the result to your limit without needing to know the current time or expiry time. All you need is a static key name and an appropriate expiration set on it.
If you INCR a non-existing key, Redis will return "1", meaning it created the key and incremented it in a single step/call. So basically the logic goes like this:
create "session" ID
increment counter using ID
compare result to limit
if count == 1, set expiration to 60s
id count > limit, reject
Step 3.1 is important. A count of 1 means this is a new key in Redis, and you want to set your expiration on it. Anything else means the expiration should already have been set. If you set it in 3.2 you will break the process because it will preserve the counter for more than 60s.
With this you don't need dynamic key names based on expiration time, and thus don't need to use KEYS to find out if there is an existing "session" for the rate-limited object. It also makes your code much simpler and more predictable, and reduces round trips to Redis - meaning lower load on Redis and better performance. As to how to do that with the client library you're using I can't say, because I'm not that familiar with it. But the basic sequence should translate to it, as it is fairly basic and simple.
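For illustration, a rough sketch of that sequence using spring-data-redis might look like the following (the class and field names here are made up for the example, not taken from the question):

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class SimpleRateLimiter {
    private final StringRedisTemplate template;

    public SimpleRateLimiter(StringRedisTemplate template) {
        this.template = template;
    }

    /** Returns true if this request is allowed, false once the limit is hit. */
    public boolean isAllowed(String sessionId, int limit) {
        // Step 2: INCR creates the key with value 1 if it does not exist yet.
        Long count = template.opsForValue().increment(sessionId, 1);
        // Step 3.1: first hit in this window, so start the 60s expiration clock.
        if (count != null && count == 1) {
            template.expire(sessionId, 60, TimeUnit.SECONDS);
        }
        // Step 3.2: reject once the counter passes the limit.
        return count != null && count <= limit;
    }
}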
What you haven't shown, however, is anything to support the assertion that the expiration isn't happening. All you've done is show that Redis is indeed being told to and setting an expiration. In order to support your claim you need to show that the key does not expire. Which means you need to show retrieval of the key after the expiration time, and that the counter was not "reset" by being recreated after the expiration. One way you can see the expiration is happening is to use keyspace notifications. With that you will be able to see Redis saying a key was expired.
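For example, you can enable and watch expired-key events from redis-cli like this (assuming database 0; the "Ex" flags turn on keyevent notifications for expired keys):

CONFIG SET notify-keyspace-events Ex
SUBSCRIBE __keyevent@0__:expired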
Where this process will fall a bit short is if you do multiple windows for rate-limiting, or if you have a much larger window (i.e. 10 minutes), in which case sorted sets might be a more sane option to prevent front-loading of requests - if desired. But as your example is written, the above will work just fine.
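If you do go the sorted-set route for a larger window, a rough sketch (again assuming spring-data-redis and illustrative names) could look like this: each request is added with its timestamp as the score, old entries are trimmed, and the remaining cardinality is compared to the limit. Note the check-then-add below is not atomic; a Lua script would make it so.

import java.util.UUID;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class SlidingWindowLimiter {
    private final StringRedisTemplate template;
    private final long windowMillis;
    private final int limit;

    public SlidingWindowLimiter(StringRedisTemplate template, long windowMillis, int limit) {
        this.template = template;
        this.windowMillis = windowMillis;
        this.limit = limit;
    }

    public boolean isAllowed(String sessionId) {
        long now = System.currentTimeMillis();
        // Drop entries that have fallen out of the window.
        template.opsForZSet().removeRangeByScore(sessionId, 0, now - windowMillis);
        Long current = template.opsForZSet().size(sessionId);
        if (current != null && current >= limit) {
            return false;
        }
        // Record this request with its timestamp as the score.
        template.opsForZSet().add(sessionId, UUID.randomUUID().toString(), now);
        // Let the whole key expire if the caller goes quiet.
        template.expire(sessionId, windowMillis, TimeUnit.MILLISECONDS);
        return true;
    }
}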
Related
I am currently trying to implement a login to Shopify over the Storefront API via Multipass.
However, it isn't clear to me from the documentation on that page how the "created_at" field is used, since it states that this field should be filled with the current timestamp.
But what if the same user logs in a second time via Multipass - should it be filled with the timestamp of the second login?
Or should the original Multipass token be stored somewhere, and reused at a second login, instead of generating a new one?
Yes, you always need to set it to the current time. I guess it stands for "token created at".
This is the code I use in Python:
# Crypto.* comes from the pycryptodome (or legacy pycrypto) package
import datetime
import json
from base64 import urlsafe_b64encode

from Crypto.Cipher import AES
from Crypto.Hash import HMAC, SHA256
from Crypto.Random import get_random_bytes


class Multipass:
    def __init__(self, secret):
        key = SHA256.new(secret.encode('utf-8')).digest()
        self.encryptionKey = key[0:16]
        self.signatureKey = key[16:32]

    def generate_token(self, customer_data_hash):
        customer_data_hash['created_at'] = datetime.datetime.utcnow().isoformat()
        cipher_text = self.encrypt(json.dumps(customer_data_hash))
        return urlsafe_b64encode(cipher_text + self.sign(cipher_text))

    def generate_url(self, customer_data_hash, url):
        token = self.generate_token(customer_data_hash).decode('utf-8')
        return '{0}/account/login/multipass/{1}'.format(url, token)

    def encrypt(self, plain_text):
        plain_text = self.pad(plain_text)
        iv = get_random_bytes(AES.block_size)
        cipher = AES.new(self.encryptionKey, AES.MODE_CBC, iv)
        return iv + cipher.encrypt(plain_text.encode('utf-8'))

    def sign(self, secret):
        return HMAC.new(self.signatureKey, secret, SHA256).digest()

    @staticmethod
    def pad(s):
        return s + (AES.block_size - len(s) % AES.block_size) * chr(AES.block_size - len(s) % AES.block_size)
And so
...
customer_object = {
    **user,  # customer data
    "verified_email": True
}
multipass = Multipass(multipass_secret)
return multipass.generate_url(customer_object, environment["url"])
How can someone log in a second time? If they are already logged in, they would essentially not be able to re-login without logging out. If they logged out, Multipass would assign a new timestamp. When would this flow occur, where a user logs in a second time without being issued a brand new login? How would they do this?
Now I am using this code to increment a value in Spring Boot:
String loginFailedKey = "admin-login-failed:" + request.getPhone();
Object loginFailedCount = loginFailedTemplate.opsForValue().get(loginFailedKey);
if (loginFailedCount != null && Integer.valueOf(loginFailedCount.toString()) > 3) {
    throw PostException.REACH_MAX_RETRIES_EXCEPTION;
}
List<Users> users = userService.list(request);
if (CollectionUtils.isEmpty(users)) {
    loginFailedTemplate.opsForValue().increment(loginFailedKey, 1);
    throw PostException.LOGIN_INFO_NOT_MATCH_EXCEPTION;
}
Is it possible to set an expire time when incrementing the key? If a new increment command happens, the expire time should be updated. I read the docs and did not find an implementation.
There is no direct way in Spring Boot.
One of the indirect ways is to use a Lua script.
For example:
RedisScript<Long> script = RedisScript.of(
        "local i = redis.call('INCRBY', KEYS[1], ARGV[1])"
      + " redis.call('EXPIRE', KEYS[1], ARGV[2])"
      + " return i", Long.class);
redisTemplate.execute(script, Collections.singletonList(key),
        String.valueOf(increment), String.valueOf(expiration));
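Applied to the login-failure counter from the question, the call might look like this (the 600-second expiry is just an illustrative value, and it assumes loginFailedTemplate serializes keys and script arguments as strings):

loginFailedTemplate.execute(script, Collections.singletonList(loginFailedKey),
        "1", "600"); // INCRBY by 1 and refresh the EXPIRE to 600 seconds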
I am new to ADF (EJB/JPA, not Business Components). When the user is using our new app developed on JDeveloper "12.2.1.2.0", after an hour of activity the system loses the current record. Note that the object lost is the parent object.
I tried to change the session-timeout (knowing that it will affect the inactivity time).
public List<SelectItem> getSProvMasterSelectItemList() {
    List<SelectItem> sProvMasterSelectItemList = new ArrayList<SelectItem>();
    DCIteratorBinding lBinding = ADFUtils.findIterator("pByIdIterator"); /* After 1 hour lBinding is still not null */
    Row pRow = lBinding.getCurrentRow(); /* But lBinding.getCurrentRow() is null */
    DCDataRow objRow = (DCDataRow) pRow;
    Prov prov = (Prov) objRow.getDataProvider();
    if (!StringUtils.isEmpty(prov)) {
        String code = prov.getCode();
        if (StringUtils.isEmpty(code)) {
            return sProvMasterSelectItemList;
        } else {
            List<Lov> mProvList = getSessionEJBBean().getProvFindMasterProv(code);
            sProvMasterSelectItemList.add(new SelectItem(null, " "));
            for (Lov pMaster : mProvList) {
                sProvMasterSelectItemList.add(new SelectItem(pMaster.getId(), pMaster.getDescription()));
            }
        }
    }
    return sProvMasterSelectItemList;
}
I expect to be able to read the current record at any time, especially since it is the master block and one record is available.
This looks like a classic issue of a misconfigured Application Module.
Cause: Your application module is timing out and releasing its transaction before the official adfc-config timeout value.
To fix:
Go to the application module containing this VO > Configuration > Edit the default > modify the Idle Instance Timeout to be the same as your ADF session timeout (take the time to validate the other configuration as well).
I have implemented Redis's reliable queue pattern using BRPOPLPUSH because I want to avoid polling.
However this results in a network request for each item. How can I augment this so that a worker BRPOPLPUSH'es multiple entries at once?
BRPOPLPUSH is the blocking version of RPOPLPUSH; it does not support transactions, and you can't handle multiple entries with it. You also can't use a Lua script for this purpose because of the nature of Lua execution: the server would be blocked for new requests until the Lua script has finished.
You can use application-side logic to implement the queue pattern you need. In pseudo-language:
func MyBRPOPLPUSH(source, dest, maxItems = 1, timeOutTime = 0) {
    items = []
    timeOut = time() + timeOutTime
    while (items.count < maxItems && (timeOutTime == 0 || time() < timeOut)) {
        item = redis.RPOPLPUSH(source, dest)
        if (item == nil) {
            sleep(someTimeHere)
            continue
        }
        items.add(item)
    }
    return items
}
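A rough Java translation of that pseudocode, assuming a spring-data-redis StringRedisTemplate is available (the class and method names are illustrative, not from the question):

import java.util.ArrayList;
import java.util.List;
import org.springframework.data.redis.core.StringRedisTemplate;

public class BatchPopper {
    private final StringRedisTemplate template;

    public BatchPopper(StringRedisTemplate template) {
        this.template = template;
    }

    public List<String> popBatch(String source, String dest, int maxItems, long timeoutMillis) {
        List<String> items = new ArrayList<>();
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (items.size() < maxItems && (timeoutMillis == 0 || System.currentTimeMillis() < deadline)) {
            // Non-blocking RPOPLPUSH; returns null when the source list is empty.
            String item = template.opsForList().rightPopAndLeftPush(source, dest);
            if (item == null) {
                try {
                    Thread.sleep(50); // back off briefly before trying again
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
                continue;
            }
            items.add(item);
        }
        return items;
    }
}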
My SORT command is
"SORT hot_ids by no_keys GET # GET msg:*->msg GET msg:*->count GET msg:*->comments"
It works fine in redis-cli, but it doesn't return data in RedisClient: the result is a byte[][], the length of the result is correct, but every element of the array is null.
The response from Redis is
...
$-1
$-1
...
The C# code is
data = redis.Sort("hot_ids ", new SortOptions()
{
GetPattern = "# GET msg:*->msg GET msg:*->count GET msg:*->comments",
Skip = skip,
Take = take,
SortPattern = "not-key"
});
Redis Sort is used in IRedisClient.GetSortedItemsFromList, e.g. from RedisClientListTests.cs:
[Test]
public void Can_AddRangeToList_and_GetSortedItems()
{
Redis.PrependRangeToList(ListId, storeMembers);
var members = Redis.GetSortedItemsFromList(ListId,
new SortOptions { SortAlpha = true, SortDesc = true, Skip = 1, Take = 2 });
AssertAreEqual(members,
storeMembers.OrderByDescending(s => s).Skip(1).Take(2).ToList());
}
You can use the MONITOR command in redis-cli to help diagnose and see what requests the ServiceStack Redis client is sending to redis-server.