Serializable and atomic transactions in RethinkDB

I want to know if performing an update in this way guarantees me an ATOMIC and SERIALIZABLE transaction.
I need to insert some data into a table Y before performing the update
in a table X, and no one should be able to do anything in table X until I release
it.
r.table('TableX', { readMode: 'majority' }).get(IdUser).update(function () {
  r.table('TableY').insert({
    SomeData: 'Some data will be inserted in Table Y',
    SomeData2: 'Some data 2 will be inserted in Table Y',
  }).run(conn, (err, results) => {
    console.log(results)
  })
  return { SomeData3: 'Some Data will be updated in TableX' }
}, { nonAtomic: false }).run(conn, function (err, result) {
  if (!err && result.replaced > 0)
    resolve('OK');
  else
    reject(new Error(err));
})

Related

sqlite deletion query not working in react-native

I am new to mobile development and I would like to perform simple queries like deleting a specific row from my SQLite table. But it doesn't work: the row still exists in my database table.
This is my code:
export default class App extends Component {
  constructor(props) {
    super(props);
    db = SQlite.openDatabase(
      {
        name: 'gad.db',
        createFromLocation: 1,
      },
      this.successToOpenDB,
      this.failToOpenDB,
    );
  }

  successToOpenDB() {
    db.transaction(tx => {
      tx.executeSql("DELETE FROM songs WHERE content='content2' ", [], (tx, results) => {
        console.log('DELETION OK');
      },
      (tx, error) => {
        console.log("DELETION KO");
      });
    });
  }

  failToOpenDB(err) {
    console.log(err);
    alert("not connected to database");
  }
}
Can anyone please help? Thanks in advance.
There must be some issue with the value used in your delete condition, whether that is the primary key or, as here, the content column.
First, try to retrieve the primary key of the row you want to delete, and store it (in component state, for example):
var deleted = results.rows.item(results.rows.length - 1).ID;
My primary key is ID, so I am now able to get the required row.
Also, after the delete query you can check whether any rows were affected in the success callback, as follows:
(tx, results) => {
  if (results.rowsAffected > 0) {
    // then proceed ahead
  }
}
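Putting both suggestions together, here is a minimal sketch of the whole flow, assuming an ID primary key and the react-native-sqlite-storage API used in the question (table and column names are taken from the question; adjust them to your schema):
db.transaction(tx => {
  // 1) Retrieve the primary key of the row you want to delete.
  tx.executeSql("SELECT ID FROM songs WHERE content = ?", ['content2'], (tx, results) => {
    if (results.rows.length === 0) {
      console.log('No matching row found');
      return;
    }
    const id = results.rows.item(results.rows.length - 1).ID;
    // 2) Delete by primary key and verify via rowsAffected.
    tx.executeSql("DELETE FROM songs WHERE ID = ?", [id], (tx, delResults) => {
      if (delResults.rowsAffected > 0) {
        console.log('DELETION OK');
      } else {
        console.log('DELETION KO');
      }
    });
  });
});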

Sqlite very slow on for loop delete

I have this local db that I'm playing with and it pulls a list of users, does something with each and then deletes the records. The delete is VERY slow:
db.all("select id, username from users", (err, rows) => {
rows.forEach((row) => {
// do stuff with row
db.run("delete from users where id = ?", row.id, (err) => {
if (err) {
throw err;
}
});
});
});
It is a simple db: CREATE TABLE IF NOT EXISTS users(id INTEGER PRIMARY KEY, username text NOT NULL)
Deleting a record can take as long as 20 seconds on a list of 100k records. What am I doing wrong here and how can I speed this up?
db.all will fetch all the rows at once. This is slow, consumes a lot of memory, and all rows must be fetched before any processing starts.
Instead, use db.each. This will fetch a row and act on it immediately.
There's also no need to use where id in (?); for a single value, use where id = ?. This may or may not affect performance.
db.each(
  "select id, username from users", (err, row) => {
    // do stuff with row
    db.run("delete from users where id = ?", row.id, (err) => {
      if (err) {
        throw err;
      }
    });
  }
);
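If you also need to know when the loop has finished, node-sqlite3's db.each accepts an optional completion callback that runs after the last row; a small sketch:
db.each(
  "select id, username from users",
  (err, row) => {
    if (err) throw err;
    // do stuff with row
    db.run("delete from users where id = ?", row.id, (err) => {
      if (err) throw err;
    });
  },
  (err, count) => {
    // called once after all row callbacks have run
    console.log(`finished iterating over ${count} rows`);
  }
);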

Node Schedule, delete function deleting users instead of targeted data

schedule.scheduleJob('0 0 4 * * *', function (fireDate) {
  console.log(`fireDate: ${fireDate}`);
  console.log(`now: ${new Date()}`);
  resetUserData()
    .then(result => {
      console.log(`data removed at ${new Date()}`);
      console.log('result:\n', result);
    })
    .catch(reason => {
      console.error(`removing data failed at ${new Date()}`);
      console.error(`reason: ${reason}`);
    });
});

function resetUserData() {
  return db('users')
    .select('water', 'exercise', 'sleep', 'breaks', 'daily_points')
    .del();
}
Shouldn't the reset function delete only the data picked out by the .select? Instead, it is deleting the entire user record.
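For what it's worth, if db here is a knex-style query builder, .del() always issues a DELETE of whole rows and ignores the column list from .select(). If the goal is to reset those fields rather than remove the users, an .update() along these lines may be closer to the intent (column names from the question; the reset values are assumptions):
function resetUserData() {
  // Sketch only: resets the tracked fields instead of deleting rows.
  // Adjust the reset values to match your schema and defaults.
  return db('users').update({
    water: 0,
    exercise: 0,
    sleep: 0,
    breaks: 0,
    daily_points: 0,
  });
}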

How to repeat SQL insertion until successful with pg-promise?

In my program I insert some data into a table and get back its id, and I need to enter that id into another table together with a unique, randomly generated string. But in case the insertion fails because it attempts to insert an already-existing random string, how can I repeat the insertion until it succeeds?
I'm using pg-promise to talk to PostgreSQL. I can run a program like this that inserts the data into both tables, provided the random string doesn't already exist:
db.none(
  `
  WITH insert_post AS
  (
    INSERT INTO table_one(text) VALUES('abcd123')
    RETURNING id
  )
  INSERT INTO table_two(id, randstr)
  VALUES((SELECT id FROM insert_post), '${randStrFn()}')
  `
)
  .then(() => console.log("Success"))
  .catch(err => console.log(err));
I'm unsure if there is any easy SQL/JS/pg-promise based solution that I could make use of.
I would encourage the author of the question to seek a pure-SQL solution to his problem, as in terms of performance it would be significantly more efficient than anything else.
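For example, one way such a pure-SQL solution could look (assuming a UNIQUE constraint on table_two.randstr, and moving the random-string generation into PostgreSQL itself) is a PL/pgSQL DO block that retries internally:
db.none(`
  DO $$
  DECLARE
    new_id int;
  BEGIN
    INSERT INTO table_one(text) VALUES ('abcd123') RETURNING id INTO new_id;
    LOOP
      BEGIN
        INSERT INTO table_two(id, randstr)
        VALUES (new_id, md5(random()::text));
        EXIT; -- success, stop retrying
      EXCEPTION WHEN unique_violation THEN
        NULL; -- collision, loop again with a fresh random string
      END;
    END LOOP;
  END
  $$;
`);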
But since the question was about how to re-run queries with pg-promise, I will provide an example, in addition to the one already published, except without acquiring and releasing a connection for every attempt, and with proper data integrity.
db.tx(t => {
  // BEGIN;
  return t.one('INSERT INTO table_one(text) VALUES($1) RETURNING id', 'abcd123', a => +a.id)
    .then(id => {
      var f = attempts => t.none('INSERT INTO table_two(id, randstr) VALUES($1, randStrFn())', id)
        .catch(error => {
          if (--attempts) {
            return f(attempts); // try again
          }
          throw error; // give up
        });
      return f(3); // try up to 3 times
    });
})
  .then(data => {
    // COMMIT;
    // success, data = null
  })
  .catch(error => {
    // ROLLBACK;
  });
Since you are trying to re-run a dependent query, you should not let the first query remain committed if all your attempts with the second query fail; you should roll all the changes back, i.e. use a transaction (method tx), as shown in the code.
This is why we split your WITH query into two statements inside the transaction: to ensure such integrity.
UPDATE
Below is a better version, though. Because errors inside the transaction need to be isolated, to avoid breaking the transaction stack, each attempt should run inside its own SAVEPOINT, which means using a nested transaction level:
db.tx(t => {
  // BEGIN;
  return t.one('INSERT INTO table_one(text) VALUES($1) RETURNING id', 'abcd123', a => +a.id)
    .then(id => {
      var f = attempts => t.tx(sp => {
        // SAVEPOINT level_1;
        return sp.none('INSERT INTO table_two(id, randstr) VALUES($1, randStrFn())', id);
      })
        .catch(error => {
          // ROLLBACK TO SAVEPOINT level_1;
          if (--attempts) {
            return f(attempts); // try again
          }
          throw error; // give up
        });
      return f(3); // try up to 3 times
    });
})
  .then(data => {
    // 1) RELEASE SAVEPOINT level_1;
    // 2) COMMIT;
  })
  .catch(error => {
    // ROLLBACK;
  });
I would also suggest using pg-monitor, so you can see and understand what is happening underneath, and which queries are in fact being executed.
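Attaching pg-monitor is a one-liner against the same initialization options object; a sketch, assuming a standard pg-promise setup (connection details below are placeholders):
const initOptions = { /* your pg-promise initialization options */ };
const pgp = require('pg-promise')(initOptions);
const monitor = require('pg-monitor');
monitor.attach(initOptions); // logs queries, transactions and errors to the console
const db = pgp('postgres://user:password@host:5432/database');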
P.S. I'm the author of pg-promise.
The easiest way is to put it into a method and then call that method again in the catch:
const insertPost = (post, numRetries) => {
  // note: keep "return" on the same line as db.none, otherwise automatic
  // semicolon insertion makes the function return undefined
  return db.none(
    `
    WITH insert_post AS
    (
      INSERT INTO table_one(text) VALUES('abcd123')
      RETURNING id
    )
    INSERT INTO table_two(id, randstr)
    VALUES((SELECT id FROM insert_post), '${randStrFn()}')
    `
  )
    .then(() => console.log("Success"))
    .catch(err => {
      console.log(err);
      if (numRetries < 3) {
        return insertPost(post, numRetries + 1); // retry, up to 3 attempts
      }
      throw err;
    });
};
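A usage sketch for the helper above, assuming post is whatever data object you want to insert and the retry counter starts at zero:
insertPost(post, 0)
  .then(() => console.log('post inserted'))
  .catch(err => console.error('giving up after 3 attempts', err));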

Waterline ORM equivalent of insert on duplicate key update

I have a table user_address and it has some fields like
attributes: {
user_id: 'integer',
address: 'string' //etc.
}
Currently I'm doing this to insert a new record, or update it if one already exists for this user:
UserAddress
  .query(
    'INSERT INTO user_address (user_id, address) VALUES (?, ?) ' +
    'ON DUPLICATE KEY UPDATE address=VALUES(address);',
    params,
    function (err) {
      // error handling logic if err exists
    }
  );
Is there any way to use the Waterline ORM instead of straight SQL queries to achieve the same thing? I don't want to do two queries because it's inefficient and hard to maintain.
The other answer here is less than ideal. It also puts the method inside the model's attributes, which is not correct behavior.
Here is what an ideal native solution looks like; it returns a promise just like any other Waterline model function would:
module.exports = {
  attributes: {
    user_id: 'integer',
    address: 'string'
  },

  updateOrCreate: function (user_id, address) {
    return UserAddress.findOne().where({ user_id: user_id }).then(function (ua) {
      if (ua) {
        return UserAddress.update({ user_id: user_id }, { address: address });
      } else {
        // UserAddress does not exist. Create.
        return UserAddress.create({ user_id: user_id, address: address });
      }
    });
  }
};
Then you can just use it like:
UserAddress.updateOrCreate(id, address).then(function (ua) {
  // ... success logic here
}).catch(function (e) {
  // ... error handling here
});
Make a custom model method that does what you want using Waterline queries instead of raw SQL. You will be doing two queries, but with Waterline syntax.
An example is below (if you don't know about deferred objects, just use callback syntax; the logic is the same):
var Q = require('q');

module.exports = {
  attributes: {
    user_id: 'integer',
    address: 'string',

    updateOrCreate: function (user_id, address) {
      var deferred = Q.defer();
      UserAddress.findOne().where({ user_id: user_id }).then(function (ua) {
        if (ua) {
          // UserAddress exists. Update.
          ua.address = address;
          ua.save(function (err) { deferred.resolve(); });
        } else {
          // UserAddress does not exist. Create.
          UserAddress.create({ user_id: user_id, address: address }).done(function (e, ua) { deferred.resolve(); });
        }
      }).fail(function (err) { deferred.reject(); });
      return deferred.promise;
    }
  }
};
@Eugene's answer is good, but it always runs two operations: findOne plus update or create. I believe we can optimize it further, because if the record exists we only need to run update. Example:
module.exports = {
  attributes: {
    user_id: 'integer',
    address: 'string'
  },

  updateOrCreate: function (user_id, address) {
    return UserAddress.update({ user_id: user_id }, { address: address })
      .then(function (ua) {
        if (ua.length === 0) {
          // No records updated; UserAddress does not exist. Create.
          return UserAddress.create({ user_id: user_id, address: address });
        }
      });
  }
};
BTW, there is an open request to implement .updateOrCreate in waterline: #790