I can't lock a value in Redis with redlock. How do I lock a value in Redis?

I'd like to lock a value in Redis with redlock, but the value is not locked even though the lock is acquired successfully.
This is my test code. It pushes the data 'refId:1234' and locks it for 4 seconds.
So the pop should only succeed after 4 seconds, but it is popped after 1 second.
const redis = require('ioredis');
const redLock = require('redlock');

// Connect to Redis
const client = redis.createClient({
  port: 6379,
  host: '127.0.0.1',
});

const redlock = new redLock([client]);

let count = 0;
let lockT;

function listPush() {
  let value = 'refId:1234';
  client.rpush('events', value, (err, reply) => {
    if (err) {
      console.error(err);
    } else {
      let now = Date();
      let time = Date.parse(now);
      console.log('Successfully pushed : ', time);
    }
  });
}

function listPop() {
  let ret;
  client.lpop('events', (err, reply) => {
    if (err) {
      console.error(err);
    } else {
      let now = Date();
      let time = Date.parse(now);
      console.log(`Data read from list: ${reply}`, ' ', time);
    }
  });
}

function lockValue(value, ttl) {
  return new Promise((resolve, reject) => {
    redlock.lock(`${value}`, ttl)
      .then((lock) => {
        let now = Date();
        let time = Date.parse(now);
        console.log("Successfully acquired lock on : ", time);
        resolve(lock);
      })
      .catch((error) => {
        console.error(`Failed to acquire lock on "${value}"`);
        reject(error);
      });
  });
}

listPush();
lockValue("refId:1234", 4000);
setTimeout(listPop, 1000);
This is the log:
Successfully pushed : 1676204641000
Successfully acquired lock on : 1676204641000
Data read from list: refId:1234 1676204642000
In the log, the data is read 1 second after the lock is acquired.
The TTL of the lock is 4 seconds, so the data should only become accessible after the 4-second lock time.
This shows that the lock is not working.
This is part of the lock instance after locking:
resource: [ 'refId:1234' ],
value: 'f9f40e68e9ce3a97fa8b820727317b94',
expiration: 1676205843381,
attempts: 1,
attemptsRemaining: 9
If the expiration value represents the TTL of the lock, it does not match, which is strange.
But I'm not sure that expiration means the TTL.
I'm confused about why redlock is not working.
Did I misconfigure redlock and Redis for distributed locking, or am I misunderstanding how redlock operates?
My expectation is that if a value is locked in Redis, the value is not accessible until it is unlocked: access blocks until the lock is released or its timeout expires.
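As far as I understand, a Redlock-style lock is cooperative: acquiring it only creates a separate lock key, and it does not by itself block other commands (such as LPOP) from touching the list. A minimal sketch of what a cooperating consumer would have to do, assuming the same redlock instance and the v4-style redlock.lock(resource, ttl) API used above:

// Hypothetical cooperating consumer: it only pops once it can acquire the
// same lock resource ('refId:1234') that the producer is holding.
function listPopWithLock() {
  redlock.lock('refId:1234', 1000)
    .then((lock) => {
      client.lpop('events', (err, reply) => {
        if (err) {
          console.error(err);
        } else {
          console.log(`Data read from list: ${reply}`);
        }
        // Release the lock once we are done with the value.
        lock.unlock().catch(console.error);
      });
    })
    .catch(() => {
      // Lock is still held by the producer; back off and retry.
      console.log('Lock busy, retrying in 1s');
      setTimeout(listPopWithLock, 1000);
    });
}

setTimeout(listPopWithLock, 1000);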

Related

Fetching sessions saga refactor to RXJS epic

I'm trying to refactor saga code into an RxJS epic. It fetches sessions from the server:
yield race([
  call(fetchSessions, sessionsList), // complete retrieving sessions
  call(sessionWatchdog, 20000),      // or time out after 20 seconds of inactivity
]);

function* fetchSessions(list: Array<number>): Generator<*, *, *> {
  console.log('FETCH ALL', list.length);
  for (let id of list) {
    yield fork(fetchSession, id);
  }
}

function* sessionWatchdog(duration: number): Generator<*, *, *> {
  while (true) {
    let { retrieved, timeout } = yield race({
      retrieved: take('SESSION_RETRIEVED'),
      timeout: delay(duration),
    });
    if (timeout) {
      return 'TIMEOUT';
    }
  }
}
fetchSession is an async function that retrieves a single session. I'm not sure how to build an equivalent epic: after each session is fetched I need to confirm that it was retrieved (or timed out) and handle that.
This is what I have now, but I don't understand how to make it do the same thing as the saga code with sessionWatchdog:
export const fetch_all_sessions = (
  action$: Obs<*>,
  state$: typeof StateObservable
): Obs<*> =>
  action$.pipe(
    filter(action => action.type === 'FETCH_ALL_SESSIONS'),
    switchMap(action => {
      let list = action.list;
      from(list).pipe(
        map(id => {
          return fetchSession(id);
        })
      );
      return of({ type: '' });
    })
  );
Thanks for your help or advice
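One possible direction, offered only as a sketch: merge the per-session fetches into a single stream and let RxJS's timeout operator play the watchdog role, since timeout errors when the gap between emissions exceeds the given duration. SESSION_RETRIEVED matches the saga's action type; SESSIONS_TIMEOUT is a placeholder action type, and fetchSession is assumed to return a promise:

import { from, of } from 'rxjs';
import { catchError, filter, map, mergeMap, switchMap, timeout } from 'rxjs/operators';

export const fetch_all_sessions = action$ =>
  action$.pipe(
    filter(action => action.type === 'FETCH_ALL_SESSIONS'),
    switchMap(action =>
      from(action.list).pipe(
        mergeMap(id => from(fetchSession(id))),
        // Errors if more than 20 s pass between fetched sessions (inactivity watchdog).
        timeout(20000),
        map(session => ({ type: 'SESSION_RETRIEVED', session })),
        catchError(() => of({ type: 'SESSIONS_TIMEOUT' }))
      )
    )
  );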

expo-location startLocationUpdatesAsync triggered too rarely

This is where I trigger Location.startLocationUpdatesAsync:
useEffect(() => {
  if (isPermissionReady && destination && !intervalHandle.current) {
    // Register location fetch to task manager
    Location.startLocationUpdatesAsync(TASK_NAME, {
      accuracy: Location.Accuracy.Balanced,
      // activityType: Location.ActivityType.AutomotiveNavigation,
      // deferredUpdatesTimeout: INTERVAL_MS,
    })
    // Repeatedly read local storage to update currentPos state
    intervalHandle.current = setInterval(updateCurrentPos, INTERVAL_MS)
  }
}, [isPermissionReady, destination])
And this is my TaskManager task (declared separately in index.tsx, not inside a lifecycle method or hook):
TaskManager.defineTask(TASK_NAME, ({ data, error }) => {
  if (error) {
    console.error('Task Manager Failed')
    return
  }
  if (data) {
    const { locations } = (data as any) ?? { locations: [] }
    const { latitude, longitude } = locations[0]?.coords ?? {}
    try {
      AsyncStorage.setItem(STORAGE_KEY, JSON.stringify({ latitude, longitude }))
    } catch {
      console.error('SetItem Failed')
    }
  }
})
My task reads location data from the device and saves it to local storage (AsyncStorage), and the React Native app fetches this data with setInterval. However, the data set by the TaskManager task is updated too rarely: it is not called at a regular, constant interval (as far as I can tell), and it is never called if I stay in the same place, even with all of the deferred update options disabled.
Does startLocationUpdatesAsync trigger only after some change in distance, even with the default options, or am I doing something wrong? (I want it to be called on a regular basis, or at least to have a clear understanding of when it is called.)
Also, is it normal for the TaskManager task not to show any console.log output?
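For reference, expo-location's task options include timeInterval (documented as Android-only) and distanceInterval; below is a sketch of a more aggressive configuration to experiment with. The values are arbitrary and pausesUpdatesAutomatically is an iOS-only option, so this is an assumption about tuning rather than a known fix:

// Experimental tuning of the same task registration; the values are placeholders.
Location.startLocationUpdatesAsync(TASK_NAME, {
  accuracy: Location.Accuracy.Highest,
  timeInterval: 5000,                 // Android: ask for an update at least every 5 s
  distanceInterval: 0,                // don't require any movement between updates
  pausesUpdatesAutomatically: false,  // iOS: don't pause updates when stationary
})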

Redis (node using ioredis) clusterRetryStrategy exception handling

I have a local setup on a Mac running a Redis cluster with 6 nodes. I have been trying to handle connection errors and limit the number of retries whenever the cluster is offline (purposefully trying to provoke a fallback to Mongo), but it keeps retrying regardless of the cluster / redis options I pass in, unless I return null from clusterRetryStrategy. Here is a snapshot of my Redis service:
import Redis from 'ioredis';

export class RedisService {
  public readonly redisClient: Redis.Cluster;
  public readonly keyPrefix = 'MyRedisKey';

  public readonly clusterOptions: Redis.ClusterOptions = {
    enableReadyCheck: true,
    retryDelayOnClusterDown: 3000,
    retryDelayOnFailover: 30000,
    retryDelayOnTryAgain: 30000,
    clusterRetryStrategy: (times: number) => {
      // Note: this is the only way I have figured out how to throw errors without
      // retrying forever when the connection to the cluster fails.
      // I also tried maxRetriesPerRequest=1 and different variations of retryStrategy.
      return null;
    },
    slotsRefreshInterval: 10000,
    redisOptions: {
      keyPrefix: this.keyPrefix,
      autoResubscribe: true,
      autoResendUnfulfilledCommands: false,
      password: process.env?.REDIS_PASSWORD,
      enableOfflineQueue: false,
      maxRetriesPerRequest: 1
    }
  };

  constructor() {
    const nodes: any = ['127.0.0.1:30001', '127.0.0.1:30002', '127.0.0.1:30003', '127.0.0.1:30004', '127.0.0.1:30005', '127.0.0.1:30006'];
    this.redisClient = new Redis.Cluster(nodes, this.clusterOptions);
  }

  public hget = async (field: string): Promise<any> => {
    const response = await this.redisClient.hget(this.keyPrefix, field);
    return response;
  }
}
If my redis cluster is stopped and I make a call to this service like:
public readonly redisService: RedisService = new RedisService();

try {
  const item = await this.redisService.hget('test');
} catch (e) {
  console.error(e);
}
I endlessly get the error "[ioredis] Unhandled error event: ClusterAllFailedError: Failed to refresh slots cache." and it never falls into the catch block.
By the way, I tried the solution listed here, but it did not work:
Redis (ioredis) - Unable to catch connection error in order to handle them gracefully.
Below are the versions of the ioredis npm packages I am using:
"ioredis": "4.19.4"
"@types/ioredis": "4.17.10"
Thank you for your help.
Refreshing the slots cache is done automatically by ioredis.
Setting slotsRefreshInterval to 10000 ms is a bad idea, because it means that any new shards added to your cluster will not be visible to your Node.js server for 10 seconds (which could create some nasty errors at runtime).
In order to catch those errors, you can listen for error events:
this.redisClient.on('error', async (error: any) => {
  logger.error(`[EventHandler] error`, { errorMessage: error.message });
});
About clusterRetryStrategy, it is true that it is a bit odd.
From what I have observed, ioredis calls the clusterRetryStrategy function as follows:
- a couple of times without an error in the second parameter
- many times with an actual error passed in the parameter (getaddrinfo ENOTFOUND)
- forever, without the error, once the connection is back to normal
In order to stop the infinite loop when the connection is back, I use the following:
import Redis from 'ioredis';

export class RedisService {
  public readonly redisClient: Redis.Cluster;
  public readonly keyPrefix = 'MyRedisKey';
  private retryStrategyErrorDetected: boolean = false;

  public readonly clusterOptions: Redis.ClusterOptions = {
    clusterRetryStrategy: (times: number, reason?: Error) => {
      tLogger.warn(`[EventHandler] clusterRetryStrategy`, { count: times, name: reason?.message });
      if (this.retryStrategyErrorDetected && !reason) {
        this.retryStrategyErrorDetected = false;
        return null;
      }
      if (reason) {
        this.retryStrategyErrorDetected = true;
      }
      return Math.min(100 + times * 2, 2000);
    },
  };

  constructor() {
    const nodes: any = ['127.0.0.1:30001', '127.0.0.1:30002', '127.0.0.1:30003', '127.0.0.1:30004', '127.0.0.1:30005', '127.0.0.1:30006'];
    this.redisClient = new Redis.Cluster(nodes, this.clusterOptions);
    this.redisClient.on('error', async (error: any) => {
      logger.error(`[EventHandler] error`, { errorMessage: error.message });
    });
  }
}
Hope it will help someone.
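As a usage note on the setup above: with enableOfflineQueue: false and maxRetriesPerRequest: 1 (as in the question) and a retry strategy that eventually returns null, commands issued while the cluster is down should reject rather than hang, so the Mongo fallback can live in the calling code. A rough sketch, where fetchFromMongo is a hypothetical placeholder:

// Hypothetical fallback wrapper; fetchFromMongo stands in for the Mongo read path.
const redisService = new RedisService();

const getValue = async (field) => {
  try {
    // Should reject quickly if the cluster is unreachable and offline queueing is off.
    return await redisService.hget(field);
  } catch (err) {
    console.error('Redis unavailable, falling back to Mongo:', err.message);
    return fetchFromMongo(field);
  }
};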

Redis client won't set values upon message received

The initial set of the values happens on the server (\complex\server\index.js):
app.post('/values', async (req, res) => {
  const index = req.body.index;

  if (parseInt(index) > 40) {
    return res.status(422).send('Index too high');
  }

  redisClient.hset('values', index, 'Nothing yet!');
  redisPublisher.publish('insert', index);
  pgClient.query('INSERT INTO values(number) VALUES($1)', [index]);

  res.send({ working: true });
});
On value submit in component (\complex\client\src\Fib.js):
handleSubmit = async (event) => {
  event.preventDefault();

  await axios.post('/api/values', {
    index: this.state.index
  });

  this.setState({ index: '' });
};
Then the worker sets the value for the Redis client:
sub.on('message', (channel, message) => {
  redisClient.hset('values', message, fib(parseInt(message)));
});

sub.subscribe('insert');
However, when I list all the values inside the Fib.js component, the component receives 'Nothing yet!' for each submitted index.
Why does it not receive the calculated values?
The complete repo is on https://github.com/ElAnonimo/docker-complex
redisClient.hset('values', index, 'Nothing yet!'); is asynchronous -- it needs to connect to Redis, send a message, wait for a response, etc.
So what probably happens is a race condition: redisPublisher.publish('insert', index); runs before hset completes.
I haven't gone over the whole code, so you'd also want to make sure you avoid a similar race condition with subscribe() being called after publish() (see the sketch after the snippet below).
Try this:
app.post('/values', async (req, res) => {
  const index = req.body.index;

  if (parseInt(index) > 40) {
    return res.status(422).send('Index too high');
  }

  redisClient.hset('values', index, 'Nothing yet!', () => redisPublisher.publish('insert', index));
  pgClient.query('INSERT INTO values(number) VALUES($1)', [index]);

  res.send({ working: true });
});
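As for the subscribe()-after-publish() race mentioned above, here is a minimal sketch of the worker side, assuming the callback-style node_redis client used elsewhere in this code (which emits a 'subscribe' event once the subscription is confirmed):

// Register the message handler first, then subscribe; anything published after
// the 'subscribe' confirmation will be delivered to this worker.
sub.on('message', (channel, message) => {
  redisClient.hset('values', message, fib(parseInt(message)));
});

sub.on('subscribe', (channel, count) => {
  console.log(`Worker subscribed to ${channel} (${count} active subscriptions)`);
});

sub.subscribe('insert');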
The problem is with your docker-compose.yml file.
You have to specify the environment variables for the worker container, giving it the Redis host and port (just like you did for the server container):

worker:
  environment:
    - REDIS_HOST=redis
    - REDIS_PORT=6379

TransactionInactiveError: Failed to execute 'get' on 'IDBObjectStore': The transaction is inactive or finished

This seems to be a Safari-only bug; it does not occur in Chrome as far as I can tell. I have a very standard IndexedDB setup: I call initDb, save the result, and that gives me a nice way to make calls to the DB.
var initDb = function() {
  // Setup DB. whenDb is a promise we use before executing any DB requests so we know the DB is fully set up.
  parentDb = null;
  var whenDb = new Promise(function(resolve, reject) {
    var DBOpenRequest = window.indexedDB.open('groceries');
    DBOpenRequest.onsuccess = function(event) {
      parentDb = DBOpenRequest.result;
      resolve();
    };
    DBOpenRequest.onupgradeneeded = function(event) {
      var localDb = event.target.result;
      localDb.createObjectStore('unique', {
        keyPath: 'id'
      });
    };
  });

  // makeRequest needs to return an IndexedDB Request object.
  // This function just wraps that in a promise.
  var request = function(makeRequest, key) {
    return new Promise(function(resolve, reject) {
      var request = makeRequest();
      request.onerror = function() {
        reject('Request error');
      };
      request.onsuccess = function() {
        if (request.result == undefined) {
          reject(key + ' not found');
        } else {
          resolve(request.result);
        }
      };
    });
  };

  // Open a very typical transaction
  var transact = function(type, storeName) {
    // Make sure DB is set up, then open transaction
    return whenDb.then(function() {
      var transaction = parentDb.transaction([storeName], type);
      transaction.oncomplete = function(event) {
        console.log('transcomplete');
      };
      transaction.onerror = function(event) {
        console.log('Transaction not opened due to error: ' + transaction.error);
      };
      return transaction.objectStore(storeName);
    });
  };

  // Shortcut function to open a transaction and return a standard JavaScript promise that waits for the DB query to finish
  var read = function(storeName, key) {
    return transact('readonly', storeName).then(function(transactionStore) {
      return request(function() {
        return transactionStore.get(key);
      }, key);
    });
  };

  // A test function that combines the previous transaction, request and read functions into one.
  var test = function() {
    return whenDb.then(function() {
      var transaction = parentDb.transaction(['unique'], 'readonly');
      transaction.oncomplete = function(event) {
        console.log('transcomplete');
      };
      transaction.onerror = function(event) {
        console.log('Transaction not opened due to error: ' + transaction.error);
      };
      var store = transaction.objectStore('unique');
      return new Promise(function(resolve, reject) {
        var request = store.get('groceryList');
        request.onerror = function() {
          console.log(request.error);
          reject('Request error');
        };
        request.onsuccess = function() {
          if (request.result == undefined) {
            reject(key + ' not found');
          } else {
            resolve(request.result);
          }
        };
      });
    });
  };

  // Return an object for db interactions
  return {
    read: read,
    test: test
  };
};

var db = initDb();
When I call db.read('unique', 'test') in Safari I get the error:
TransactionInactiveError: Failed to execute 'get' on 'IDBObjectStore': The transaction is inactive or finished
The same call in Chrome gives no error; it just returns the expected promise. Oddly enough, calling the db.test function in Safari works as expected as well. It really seems that splitting the work across two functions is somehow causing this error in Safari.
In all cases transcomplete is logged AFTER either the error is thrown (in the case of the Safari bug) or the proper value is returned (as should happen). So the transaction has NOT closed before the error saying the transaction is inactive or finished is thrown.
Having a hard time tracking down the issue here.
Hmm, not confident in my answer, but my first guess is that the pause between creating the transaction and starting a request allows the transaction to time out and become inactive because it finds no active requests, so a request that tries to start later is started on an inactive transaction. This can easily be solved by starting requests in the same epoch of the JavaScript event loop (the same tick) instead of deferring the start of a request.
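To illustrate the same-tick idea, a minimal sketch (reusing the 'unique' store from the question): the transaction is created and the get() request is issued synchronously, before control returns to the event loop.

// Sketch: open the transaction and issue the request in the same tick.
function readSameTick(db, key) {
  return new Promise(function(resolve, reject) {
    var transaction = db.transaction(['unique'], 'readonly');
    var store = transaction.objectStore('unique');
    var request = store.get(key); // issued before yielding to the event loop
    request.onsuccess = function() { resolve(request.result); };
    request.onerror = function() { reject(request.error); };
  });
}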
The error is most likely in these lines:
var store = transaction.objectStore('unique');
return new Promise(function(resolve, reject) {
  var request = store.get('groceryList');
You need to create the request immediately to avoid this error:
var store = transaction.objectStore('unique');
var request = store.get('groceryList');
One way to solve this might be simply to approach the code differently. Promises are intended to be composable. Code that uses promises generally wants to return control to the caller, so that the caller can control the flow. Some of your functions as they are currently written violate this design pattern. It is possible that by simply using a more appropriate design pattern, you will not run into this error, or at least you will be able to identify the problem more readily.
An additional point is your mixed use of global variables. Variables like parentDb and db are potentially going to cause problems on certain platforms unless you really are an expert at async code.
For example, start with a simple connect or open function that resolves to an open IDBDatabase variable.
function connect(name) {
return new Promise(function(resolve, reject) {
var openRequest = indexedDB.open(name);
openRequest.onsuccess = function() {
var db = openRequest.result;
resolve(db);
};
});
}
This will let you easily compose an open promise together with code that should run after it, like this:
connect('groceries').then(function(db) {
  // do stuff with db here
});
Next, use a promise to encapsulate an operation. This is not a promise per request. Pass along the db variable instead of using a global one.
function getGroceryList(db, listId) {
  return new Promise(function(resolve, reject) {
    var txn = db.transaction('unique');
    var store = txn.objectStore('unique');
    var request = store.get(listId);
    request.onsuccess = function() {
      var list = request.result;
      resolve(list);
    };
    request.onerror = function() {
      reject(request.error);
    };
  });
}
Then compose it all together:

connect('groceries').then(function(db) {
  return getGroceryList(db, 'asdf');
}).catch(console.error);