How is a cache entry's validity period calculated in Mule 4?

If I cache a payload, how long will it be valid?
There are two settings in the caching strategy:
Entry TTL and
Expiration Interval.
If I want to invalidate my cached value after 8 hours, how should I set these parameters?
What is the 'invalidate cache' processor used for?

Entry TTL is how long an entry should live in the cache. Expiration interval is how frequently the object store checks its entries to see whether any should be deleted. In your case entryTTL should be 8 hours; be mindful of the units used for each attribute. Expiration interval is a bit trickier: you may want to check entries much more frequently so they don't live well past 8 hours before expiring. It may be 10 minutes, 30 minutes, 1 hour, or whatever works for you.
I explained it more in my blog: https://medium.com/@adobni/configuring-an-object-store-in-mule-4-5da609e3456a
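To make the interaction between the two settings concrete, here is a pure-Python sketch (not Mule itself, just the arithmetic the answer describes): because the background sweep only runs every expiration interval, an entry can survive up to entryTTL plus one interval.

```python
# Pure-Python sketch (not Mule itself) of how entryTTL interacts with
# expirationInterval: the sweep that deletes expired entries only runs
# every `expiration_interval_s` seconds, so in the worst case an entry
# expires just after one sweep and is only removed by the next.

def worst_case_lifetime(entry_ttl_s: int, expiration_interval_s: int) -> int:
    """Upper bound on how long an entry can remain in the store."""
    return entry_ttl_s + expiration_interval_s

eight_hours = 8 * 3600

# entryTTL = 8 h, expirationInterval = 1 h -> entry may linger up to 9 h
print(worst_case_lifetime(eight_hours, 3600))   # 32400 seconds
# a 10-minute interval keeps the overshoot down to 10 minutes
print(worst_case_lifetime(eight_hours, 600))    # 29400 seconds
```

This is why the answer suggests a sweep interval much shorter than the TTL: the interval bounds how far past 8 hours an entry can linger.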

Related

Is there a way to increase TTL in Redis?

I know there are several ways to set a specific TTL for a key, but is there a way to add some extra time to a key whose TTL is already counting down?
There's no built-in way to extend a TTL. You need to get the current TTL and then add some more time to it.
Wrap these two steps in a Lua script so they run atomically:
-- extend by 300 seconds
eval 'local ttl = redis.call("TTL", KEYS[1]) + 300; return redis.call("EXPIRE", KEYS[1], ttl)' 1 key
Good question
There is no such command.
I think it would be a bad idea to have a command like that; you would have to be careful when using it.
You would probably end up adding more time to the TTL than you expect. If you set it to 5 minutes, the actual expiry stays close to 5 minutes away even if you set it multiple times in one request. But if you add 5 minutes multiple times, you can no longer be sure of the actual expiry time.
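The difference the commenter describes can be shown with a small simulation (a simulated clock, no real Redis): repeatedly *setting* a 5-minute TTL keeps the expiry about 5 minutes from now, while repeatedly *extending* by 5 minutes accumulates.

```python
# Simulated-clock sketch (no real Redis) of "set TTL" vs "extend TTL".
FIVE_MIN = 300

def apply(call_times, mode):
    """call_times: timestamps at which the TTL command runs.
    mode='set'    -> EXPIRE key 300 each time
    mode='extend' -> the TTL+300 Lua script each time
    Returns the final absolute expiry time."""
    expiry = None
    for now in call_times:
        if mode == "set" or expiry is None:
            expiry = now + FIVE_MIN                  # EXPIRE key 300
        else:
            remaining = expiry - now                 # TTL key
            expiry = now + remaining + FIVE_MIN      # EXPIRE key ttl+300
    return expiry

calls = [0, 10, 20]               # three calls within one request
print(apply(calls, "set"))        # 320: still ~5 min after the last call
print(apply(calls, "extend"))     # 900: 15 minutes accumulated
```

Three "set" calls leave the key expiring 5 minutes after the last call; three "extend" calls leave it alive for 15 minutes.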

How to expire a HyperLogLog in Redis?

HyperLogLogs take up 12KB of space. I don't see anything in the docs about when that storage is freed up.
My current plan is to call EXPIRE every time I call PFADD, but I can't find much discussion about expiring HLLs, so I'm wondering if I'm doing it wrong...
I'm planning on using HLLs to count the number of active visitors on my site in real-time. I only want to keep the counts for the past hour around, freeing up anything older than that.
No, you cannot expire individual items added to a HyperLogLog; the EXPIRE command expires the whole HLL key.
To achieve your goal, create one HLL per hour and expire each whole HLL after some time:
# for the hour 2019082200
PFADD user:2019082200 user1
# also set an expiration for this HLL: expire it after 10 hours
EXPIRE user:2019082200 36000
# add more users
PFADD user:2019082200 user2
# when the next hour starts, create a new HLL for it
PFADD user:2019082201 user1
EXPIRE user:2019082201 36000
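The key-per-hour pattern above can be sketched in Python. This uses plain sets as a stand-in for HLLs (a set's exact cardinality is what PFCOUNT approximates) and a dict in place of Redis keys; the key format `user:YYYYMMDDHH` follows the example above.

```python
from datetime import datetime, timedelta

def hll_key(ts: datetime) -> str:
    # one HLL per hour, e.g. "user:2019082200"
    return ts.strftime("user:%Y%m%d%H")

# Plain sets stand in for HLLs: len(set) is the exact count that
# PFCOUNT approximates, and dict keys play the role of Redis keys.
store: dict = {}

def pfadd(ts: datetime, user: str) -> None:
    store.setdefault(hll_key(ts), set()).add(user)
    # in real Redis you would also EXPIRE the key here, as above

def active_last_hours(now: datetime, hours: int) -> int:
    # union the last `hours` hourly sets, like PFMERGE + PFCOUNT
    merged = set()
    for h in range(hours):
        merged |= store.get(hll_key(now - timedelta(hours=h)), set())
    return len(merged)

t = datetime(2019, 8, 22, 0, 30)
pfadd(t, "user1")
pfadd(t, "user2")
pfadd(t + timedelta(hours=1), "user1")                 # next hour, new key
print(active_last_hours(t + timedelta(hours=1), 2))    # 2 distinct users
```

For the asker's "active visitors in the last hour", you would PFMERGE the current and previous hourly HLLs, since the last 60 minutes can span two hourly buckets.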

Set Data Limit in Loggly

Is there any way to set a limit in Loggly so that it keeps only the last seven days of log information, automatically deleting anything older, so that Loggly doesn't fill up?
Thanks
If you are using a free account, Loggly automatically deletes data after 7 days. You should not worry about how much data is in storage as long as your volume is under 200 MB per day, measured from 00:00 UTC.

Bitcoin Exchange API - more frequent high low

Is there any way to get high-low values more frequently than every 24 hours from, say, the Bitstamp API ticker?
This link only tells you how to get the value for the last 24 hours:
https://www.bitstamp.net/api/
(this also seems to be a problem with every other exchange I've tried)
The 24-hour high/low is computed over a rolling window (time_now minus 24 hours), so the values themselves are updated continuously, every second or every minute depending on how the API is configured, even though the window length is fixed at 24 hours.
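If the exchange only exposes the fixed 24-hour window, a workaround is to poll the ticker's last price yourself and maintain your own shorter rolling window. A sketch, assuming you supply (timestamp, price) samples from whatever ticker you poll:

```python
from collections import deque

class RollingHighLow:
    """Track the high/low of prices seen in the last `window_s` seconds.

    Feed it (timestamp, price) samples polled from any ticker; it
    answers high/low over a shorter window than the exchange's fixed
    24-hour one.
    """

    def __init__(self, window_s: float):
        self.window_s = window_s
        self.samples = deque()          # (timestamp, price) pairs

    def add(self, ts: float, price: float) -> None:
        self.samples.append((ts, price))
        # drop samples that have fallen out of the window
        while self.samples and self.samples[0][0] <= ts - self.window_s:
            self.samples.popleft()

    def high_low(self):
        prices = [p for _, p in self.samples]
        return max(prices), min(prices)

hl = RollingHighLow(window_s=3600)    # 1-hour window
hl.add(0, 100.0)
hl.add(1800, 120.0)
hl.add(4000, 110.0)                   # the t=0 sample drops out here
print(hl.high_low())                  # (120.0, 110.0)
```

This works against any exchange, which sidesteps the "every other exchange has the same problem" issue: the rolling window lives on your side.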

Calculate number of events per last 24 hours with Redis

This seems like a common task, but I haven't found a solution.
I need to count a user's events (for example, how many comments they left) over the last 24 hours. Older data doesn't interest me, so information about comments added a month ago should be removed from Redis.
So far I see only one solution. We can make keys that include the user's ID and the hour of day, and increment their values. Then we read 24 values and calculate their sum. Each key has a 24-hour expiration.
For example,
Event at Jun 22, 13:27 -> creating key _22_13 = 1
Event at Jun 22, 13:40 -> incrementing key _22_13 = 2
Event at Jun 22, 18:45 -> creating key _22_18 = 1
Event at Jun 23, 16:00 -> creating key _23_16 = 1
Getting the sum of events at Jun 23, 16:02 -> sum of keys _22_17 - _22_23 and _23_00 - _23_16: among our keys only _22_18 and _23_16 exist, so the result is 2. Key _22_13 has already expired.
This method is not accurate (it counts events over somewhere between 24 and 25 hours) and not very universal (which keys would I choose if I needed the number of events over the last 768 minutes, or over 2.5 months?).
Do you have better solution with Redis data types?
Your model seems fine. Of course it's not universal, but that's what you have to sacrifice in order to be performant.
I suggest you keep doing this. If you need another timespan for reporting (say, 768 minutes), you can always get it from MySQL, where your comments are stored (you do store them there, right?). This will be slower, but at least the query will be served.
If you need a faster response or higher precision, you can store counters in Redis with one-minute resolution (one counter per minute).
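The minute-resolution idea can be sketched like this, with a plain dict standing in for Redis and a hypothetical key layout `events:<user>:YYYYMMDDHHMM` (in real Redis each bucket would be INCRed and given a TTL of the window length plus a little slack):

```python
from datetime import datetime, timedelta

# Dict stands in for Redis; keys follow a hypothetical layout
# "events:<user>:YYYYMMDDHHMM", one counter per user per minute.
store: dict = {}

def minute_key(user: str, ts: datetime) -> str:
    return ts.strftime(f"events:{user}:%Y%m%d%H%M")

def record_event(user: str, ts: datetime) -> None:
    key = minute_key(user, ts)
    store[key] = store.get(key, 0) + 1    # INCR in Redis
    # real Redis: EXPIRE key 86460  (24 h window + 1 min slack)

def count_last_minutes(user: str, now: datetime, minutes: int) -> int:
    # sum the last `minutes` buckets (MGET + sum in Redis)
    return sum(
        store.get(minute_key(user, now - timedelta(minutes=m)), 0)
        for m in range(minutes)
    )

t = datetime(2024, 6, 22, 13, 27)
record_event("u1", t)
record_event("u1", t + timedelta(minutes=1))
record_event("u1", t + timedelta(minutes=1))
# last 24 hours = last 1440 minute buckets
print(count_last_minutes("u1", t + timedelta(minutes=1), 1440))  # 3
```

This shrinks the inaccuracy from up to an hour down to up to a minute, and it also answers the asker's 768-minute case directly: any window expressed in minutes is just a different number of buckets to sum.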
You can use the Redis EXPIRE command after creating each key:
SET comment "Hello, world"
EXPIRE comment 1000    # TTL in seconds
PEXPIRE comment 1000   # or set the TTL in milliseconds