Issue with loading sqlite entries in reverse in expo sqlite - react-native

I am working on a feature in an app where users can log journal entries, and I need to implement lazy loading. However, I need to load the entries in reverse, since I need to display the most recent entries first. I have not been able to do this, even with the COUNT(*) aggregate function, because I do not want to use GROUP BY. Here is my code:
export const lazyLoadEntriesByGoalId = (goalId, amountLastLoaded) => {
  // load journal entries
  const transactionPromise = new Promise((resolve, reject) => {
    database.transaction((tx) => {
      tx.executeSql(
        `SELECT * FROM journal
         WHERE goalId = ?
         AND id < ?
         ORDER BY id DESC
         LIMIT 5;`,
        [goalId, amountLastLoaded],
        (_, result) => {
          resolve(result.rows._array);
        },
        (_, err) => {
          reject(err);
        }
      );
    });
  });
  return transactionPromise;
};
The amountLastLoaded parameter tracks how many entries are already loaded. I am considering using "COUNT(*) - ?", but expo-sqlite throws an error if I do that. What can I do?
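One direction that avoids COUNT(*) entirely is to treat amountLastLoaded as an OFFSET: order by id descending and skip the rows already shown. A minimal sketch of that idea, assuming the same journal schema and wrapper as above (the query is the only change):

export const lazyLoadEntriesByGoalId = (goalId, amountLastLoaded) => {
  return new Promise((resolve, reject) => {
    database.transaction((tx) => {
      tx.executeSql(
        // newest first; skip the rows that are already on screen
        `SELECT * FROM journal
         WHERE goalId = ?
         ORDER BY id DESC
         LIMIT 5 OFFSET ?;`,
        [goalId, amountLastLoaded],
        (_, result) => resolve(result.rows._array),
        (_, err) => reject(err)
      );
    });
  });
};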

Related

Redis getting all fields of sorted set

I'm trying to make a freelancing website that will hold gigs in Redis for caching. In order to categorise them, there are two fields called "categoryId" and "skillId", and I want to keep the gigs sorted by the "createdAt" field, which is a date. So I have two options, and I have some blank spots about the first one.
Option 1
I'm holding my gigs in a sorted set and building the key from two parameters, categoryId and skillId. The problem is that a user may want to select gigs with a specific category only, where skill doesn't matter, but a user may also want to select gigs by both categoryId and skillId. So for that reason I used a key like
`gigs:${categoryId}:${skillId != null ? skillId : "*"}`
Here's my full code:
export const addGigToSortedSet = async (value) => {
  return new Promise<string>((resolve, reject) => {
    let date =
      value.gigCreatedAt != null && value.createdAt != undefined
        ? Math.trunc(Date.parse(<string>value.createdAt) / 1000)
        : Date.now();
    redisClient
      .zAdd(`gigs:${value.gigCategory}:${value.gigSkill}`, {
        score: date,
        value: JSON.stringify(value),
      })
      .then((res) => {
        if (res == 1) {
          resolve("Success");
        } else {
          reject("Error");
          return;
        }
      });
  });
};
export const multiAddGigsToSortedSet = async (gigs: any[]) => {
  return new Promise((resolve, reject) => {
    let multiClient = redisClient.multi();
    for (const gig of gigs) {
      let date =
        gig.gigCreatedAt != null && gig.createdAt != undefined
          ? Math.trunc(Date.parse(<string>gig.createdAt) / 1000)
          : Date.now();
      multiClient.zAdd(`gigs:${gig.gigCategory}:${gig.gigSkill}`, {
        score: date,
        value: JSON.stringify(gig),
      });
    }
    multiClient.exec().then((replies) => {
      if (replies.length > 0) {
        resolve(replies);
      } else {
        reject("Error");
        return;
      }
    });
  });
};
export const getGigsFromSortedSet = async (
  categoryId: string,
  page: number,
  limit: number,
  skillId?: string
) => {
  return new Promise<string[]>((resolve, reject) => {
    redisClient
      .zRange(
        `gigs:${categoryId}:${skillId != null ? skillId : "*"}`,
        (page - 1) * limit,
        page * limit
      )
      .then((res) => {
        if (res) {
          resolve(res.reverse());
        } else {
          reject("Error");
          return;
        }
      });
  });
};
Option 2
Option two is much simpler but less effective in terms of storage usage.
I'll create two sorted sets, one for category and one for skill, and then use ZINTERSTORE to get my values; I can also easily get gigs by category alone, since I have a separate set for it.
So my question is: which approach is the more effective solution, and will this line give me gigs with a given category when the skill parameter is omitted?
gigs:${categoryId}:${skillId != null ? skillId : "*"}
Your approach #2 is the most common implementation. See https://redis.io/docs/reference/patterns/indexes/
But...
Indexes created with sorted sets are able to index only a single numerical value. Because of this you may think it is impossible to index something which has multiple dimensions using this kind of index, but actually this is not always true. If you can efficiently represent something multi-dimensional in a linear way, then it is often possible to use a simple sorted set for indexing.
For example the Redis geo indexing API uses a sorted set to index places by latitude and longitude using a technique called Geo hash. The sorted set score represents alternating bits of longitude and latitude.
Therefore, if you can find an encoding scheme that maps your "categoryId" and "skillId" into a single value, then you could use a single sorted set.
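As for your direct question: the "*" fallback in your option 1 key will not act as a wildcard, because ZRANGE looks up the literal key gigs:<category>:*, so it only matches if something was actually written under that exact key. For the two-set variant (option 2), a minimal sketch with node-redis v4 could look like the following; the key names and the tmp: destination key are illustrative, not part of your existing schema:

// assumes the same redisClient as above (node-redis v4)
export const addGigToIndexes = async (gig: any, date: number) => {
  // one sorted set per dimension, both scored by createdAt
  await redisClient.zAdd(`gigs:category:${gig.gigCategory}`, {
    score: date,
    value: JSON.stringify(gig),
  });
  await redisClient.zAdd(`gigs:skill:${gig.gigSkill}`, {
    score: date,
    value: JSON.stringify(gig),
  });
};

export const getGigsByCategoryAndSkill = async (
  categoryId: string,
  page: number,
  limit: number,
  skillId?: string
) => {
  let key = `gigs:category:${categoryId}`;
  if (skillId != null) {
    // intersect the two indexes into a temporary key; identical members
    // get their scores summed, which preserves the createdAt ordering
    key = `tmp:gigs:${categoryId}:${skillId}`;
    await redisClient.zInterStore(key, [
      `gigs:category:${categoryId}`,
      `gigs:skill:${skillId}`,
    ]);
  }
  // REV: true walks the set from highest score, i.e. newest first
  return redisClient.zRange(key, (page - 1) * limit, page * limit - 1, {
    REV: true,
  });
};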

Sqlite very slow on for loop delete

I have this local db that I'm playing with and it pulls a list of users, does something with each and then deletes the records. The delete is VERY slow:
db.all("select id, username from users", (err, rows) => {
rows.forEach((row) => {
// do stuff with row
db.run("delete from users where id = ?", row.id, (err) => {
if (err) {
throw err;
}
});
});
});
It is a simple db: CREATE TABLE IF NOT EXISTS users(id INTEGER PRIMARY KEY, username text NOT NULL)
Deleting a record takes even 20 seconds on a list of 100k records. What am I doing wrong here and how can I speed this up?
db.all will fetch all the rows at once. This is slow, consumes a lot of memory, and all rows must be fetched before any processing starts.
Instead, use db.each. This will fetch a row and act on it immediately.
There's also no need to use where id in (?); for a single value, use where id = ?. This may or may not affect performance.
db.each("select id, username from users", (err, row) => {
  // do stuff with row
  db.run("delete from users where id = ?", row.id, (err) => {
    if (err) {
      throw err;
    }
  });
});
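If the per-row deletes are still slow, the usual culprit is that SQLite runs each statement in its own auto-commit transaction. A sketch of batching them into one explicit transaction, assuming the node-sqlite3 API used above (db.each accepts an optional completion callback as its last argument):

db.serialize(() => {
  db.run("begin transaction");
  db.each(
    "select id, username from users",
    (err, row) => {
      // do stuff with row
      db.run("delete from users where id = ?", row.id);
    },
    () => {
      // completion callback: commit once after every row has been handled
      db.run("commit");
    }
  );
});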

How to repeat SQL insertion until successful with pg-promise?

In my program I insert some data into a table and get back its id, and I need to enter that id into another table along with a unique, randomly generated string. But in case the insertion fails because the random string already exists, how could I repeat the insertion until it succeeds?
I'm using pg-promise to talk to PostgreSQL. I can run a program like this that inserts the data into both tables, provided the random string doesn't already exist:
db.none(
  `
  WITH insert_post AS
  (
    INSERT INTO table_one(text) VALUES('abcd123')
    RETURNING id
  )
  INSERT INTO table_two(id, randstr)
  VALUES((SELECT id FROM insert_post), '${randStrFn()}')
  `
)
  .then(() => console.log("Success"))
  .catch(err => console.log(err));
I'm unsure if there is any easy SQL/JS/pg-promise based solution that I could make use of.
I would encourage the author of the question to seek a pure-SQL solution to the problem, as in terms of performance it would be significantly more efficient than anything else.
But since the question was about how to re-run queries with pg-promise, I will provide an example, in addition to the one already published, except without acquiring and releasing a connection for every attempt, and with proper data integrity.
db.tx(t => {
  // BEGIN;
  return t.one('INSERT INTO table_one(text) VALUES($1) RETURNING id', 'abcd123', a => +a.id)
    .then(id => {
      var f = attempts => t.none('INSERT INTO table_two(id, randstr) VALUES($1, $2)', [id, randStrFn()])
        .catch(error => {
          if (--attempts) {
            return f(attempts); // try again
          }
          throw error; // give up
        });
      return f(3); // try up to 3 times
    });
})
  .then(data => {
    // COMMIT;
    // success, data = null
  })
  .catch(error => {
    // ROLLBACK;
  });
Since you are trying to re-run a dependent query, you should not let the first query remain successful if all your attempts with the second query fail; you should roll all the changes back, i.e. use a transaction - method tx, as shown in the code.
This is why we split your WITH query inside the transaction: to ensure that integrity.
UPDATE
Below is a better version of it though. Because errors inside the transaction need to be isolated, in order to avoid breaking the transaction stack, each attempt should be inside its own SAVEPOINT, which means using another transaction level:
db.tx(t => {
  // BEGIN;
  return t.one('INSERT INTO table_one(text) VALUES($1) RETURNING id', 'abcd123', a => +a.id)
    .then(id => {
      var f = attempts => t.tx(sp => {
        // SAVEPOINT level_1;
        return sp.none('INSERT INTO table_two(id, randstr) VALUES($1, $2)', [id, randStrFn()]);
      })
        .catch(error => {
          // ROLLBACK TO SAVEPOINT level_1;
          if (--attempts) {
            return f(attempts); // try again
          }
          throw error; // give up
        });
      return f(3); // try up to 3 times
    });
})
  .then(data => {
    // 1) RELEASE SAVEPOINT level_1;
    // 2) COMMIT;
  })
  .catch(error => {
    // ROLLBACK;
  });
I would also suggest using pg-monitor, so you can see and understand what is happening underneath, and which queries are in fact being executed.
P.S. I'm the author of pg-promise.
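For reference, the pure-SQL direction mentioned at the top can lean on the fact that a single statement is atomic in PostgreSQL: if the second INSERT hits a unique violation, the CTE's insert rolls back with it, so the whole statement can simply be re-run. A sketch, assuming a UNIQUE constraint on table_two.randstr; the insertPost helper name is illustrative, and 23505 is PostgreSQL's unique_violation error code:

const insertPost = attempts =>
  db.none(
    `WITH insert_post AS
     (
       INSERT INTO table_one(text) VALUES($1)
       RETURNING id
     )
     INSERT INTO table_two(id, randstr)
     VALUES((SELECT id FROM insert_post), $2)`,
    ['abcd123', randStrFn()]
  )
    .catch(err => {
      // 23505 = unique_violation; the statement is atomic, so nothing
      // from the CTE insert survives and a plain retry is safe
      if (err.code === '23505' && --attempts) {
        return insertPost(attempts); // fresh randStrFn() on each call
      }
      throw err;
    });

insertPost(3).then(() => console.log("Success"));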
The easiest way is to put it into a method then re-call that in the catch:
const insertPost = (post, numRetries) => {
  return db.none(
    `
    WITH insert_post AS
    (
      INSERT INTO table_one(text) VALUES('abcd123')
      RETURNING id
    )
    INSERT INTO table_two(id, randstr)
    VALUES((SELECT id FROM insert_post), '${randStrFn()}')
    `
  )
    .then(() => console.log("Success"))
    .catch(err => {
      console.log(err)
      if (numRetries < 3) {
        return insertPost(post, numRetries + 1);
      }
      throw err;
    });
}

Fetching records corresponding to a user, when queried from admin role/user which has access to all records

I need to fetch the records of a particular class (say Class A) corresponding to a given user in my Parse Server, when querying from an admin role/user which has access to all the records in that class (Class A).
How can I do that?
Quick help would be greatly appreciated. :-)
I'm assuming that you want these records on the client, but the client doesn't have "permission" to get all Class A records?
If I've got the problem right, then here's a solution: create a Cloud Code function that can use the master key to query objects of Class A.
// this is the cloud function that you can call with
// whichever client SDK you are using....
const fetchClassA = function (request, response) {
  const result = [];
  const userId = request.params.fetchForUser;
  // the test here should be against role, just an example....
  if (request.user.get('username') !== 'admin') {
    response.error('you are not authorized.');
    return;
  }
  if (!userId) {
    response.error('no user supplied');
    return;
  }
  const user = new Parse.User();
  user.id = userId;
  new Parse.Query('ClassA')
    .equalTo('user', user)
    // depending on the use case, you may want to use
    // find here instead?
    .each((object) => {
      result.push(object);
    }, { useMasterKey: true })
    .then(() => response.success(result))
    .catch(response.error);
}
// the rest of this is just a unit test to "lightly" test
// our cloud function....
describe('fetch record with a cloud function', () => {
  const userA = new Parse.User();
  const userB = new Parse.User();
  beforeEach((done) => {
    userA.setUsername('userA');
    userA.setPassword('abc');
    userB.setUsername('userB');
    userB.setPassword('def');
    Parse.Object.saveAll([userA, userB])
      .then(() => Parse.Object.saveAll([
        new Parse.Object('ClassA').set('user', userA),
        new Parse.Object('ClassA').set('user', userA),
        new Parse.Object('ClassA').set('user', userA),
        new Parse.Object('ClassA').set('user', userB),
        new Parse.Object('ClassA').set('user', userB),
        new Parse.Object('ClassA').set('user', userB),
      ]))
      .then(() => Parse.User.signUp('admin', 'foo'))
      .then(done)
      .catch(done.fail);
  });
  it('should fetch class a', (done) => {
    Parse.Cloud.define('fetchClassA', fetchClassA);
    Parse.Cloud.run('fetchClassA', { foo: 'bar', fetchForUser: userA.id })
      .then(result => expect(result.length).toBe(3))
      .then(done)
      .catch(done.fail);
  });
});
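For completeness, calling the function from a client could look like this (a sketch using the JS SDK; someUserId is a placeholder, and other SDKs have an equivalent cloud-run entry point):

// client side: the master key never leaves the server
Parse.Cloud.run('fetchClassA', { fetchForUser: someUserId })
  .then(records => console.log('fetched', records.length, 'records'))
  .catch(err => console.error(err));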

UAT > How to check the ordering of data in a table using Selenium?

I need to test that a resulting list is ordered date-descending using Selenium.
this.Then(/^the list items should be ordered by date descending$/, (arg1): CucumberStepOutput => {
  // actions
  return 'pending';
});
I'm sure this is something that has been done many times by people in the selenium community - I am hoping someone will share best practice.
If anyone is interested in the answer, or at least the solution I ended up with, see the snippet below.
It's a bit ugly (comment with suggestions to clean it up), but it works!
this.Then(/^the list items should be in descending$/, (): CucumberStepOutput => {
  return new Promise<void>((resolve, reject) => {
    let expectedValues = [
      'value1',
      'value2',
      'value3'
    ];
    client.elements('.element-class').then(elements => {
      let resultValues: Array<string> = [];
      elements.value.reduce<Promise<void>>((chain, nextElement) => {
        return chain.then(() => {
          return client.elementIdText(nextElement.ELEMENT).then(text => {
            resultValues.push(text.value);
          });
        });
      }, Promise.resolve()).then(() => {
        JSON.stringify(resultValues).should.equal(JSON.stringify(expectedValues));
        resolve();
      });
    });
  });
});
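As a possible cleanup (a sketch, assuming the same client API as above): collecting the texts with map/Promise.all removes the reduce chain, and comparing against a sorted copy of the actual values avoids hard-coding the expected list. Note that a plain sort() compares lexically, so this suits ISO-formatted date strings.

this.Then(/^the list items should be in descending$/, (): CucumberStepOutput => {
  return client.elements('.element-class')
    .then(elements => Promise.all(
      // fetch every element's text; array order is preserved
      elements.value.map(el => client.elementIdText(el.ELEMENT).then(text => text.value))
    ))
    .then(resultValues => {
      // descending order means the list equals its own sorted-then-reversed copy
      const sorted = [...resultValues].sort().reverse();
      JSON.stringify(resultValues).should.equal(JSON.stringify(sorted));
    });
});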