Best way of getting the row just added. Heroku + Node + Postgres

What is the best way of getting the row that was just added? I am working with Heroku, Node, Postgres, and Express.js. I want to be able to do something like this:
app.post('/', function(req, res) {
  client.query("INSERT INTO ..", function(err, result) {
    res.send(result.id);
  });
});
Ideally the callback would have information about the row it just inserted, but its content is just an object that looks like:
{ rows:[] }
Is there a good way of getting the row that I just added? Thanks.

You probably want to use INSERT ... RETURNING:
The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted. This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of the RETURNING list is identical to that of the output list of SELECT.
So something like this:
client.query('insert into your_table (...) values (...) returning *', function(err, result) {
  // result.rows[0] will contain the newly inserted row
});
should get you the newly inserted row in your callback function.
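For example, with the node-postgres (pg) client the inserted row is exposed as result.rows[0] in the callback (a minimal sketch; the users table, its name column, and the $1 parameter are hypothetical):
app.post('/', function(req, res) {
  client.query(
    'INSERT INTO users (name) VALUES ($1) RETURNING *',
    ['alice'],
    function(err, result) {
      if (err) { return res.status(500).send(err.message); }
      // result.rows[0] is the newly inserted row, including
      // defaulted columns such as a serial id
      res.send(String(result.rows[0].id));
    }
  );
});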

extracting data from MariaDB using a raw KNEX statement

Former Sequelize user here.
What is the best method to extract data from MariaDB using a raw Knex statement? Here are the two methods I have tried:
MariaDB:
SELECT JSON_OBJECT(
...........
) 'JSON_OBJECT'
JavaScript to interpret the results:
for (const tmp of result[0]) { console.log(JSON.parse(tmp.JSON_OBJECT)); }
and this MariaDB:
SELECT JSON_ARRAYAGG(
JSON_OBJECT(
..........
)
) 'JSON_ARRAYAGG'
JavaScript:
for (const tmp of JSON.parse(result[0][0].JSON_ARRAYAGG)) { console.log(tmp); }
Both of these methods work, but there may be a much cleaner way than using JSON.parse.
Suggestions?
Note: I certainly understand there is some controversy around using raw; I have a number of very large SQL statements from a previous application, and I would rather (for now) use them as-is than rewrite them in pure Knex.
EDIT: this may be "good enough", since it appears that knex.raw does not necessarily return objects.
Node:
SELECT JSON_ARRAYAGG(
JSON_OBJECT(
..........
)
) 'JSON_ARRAYAGG'
.....................................
const result = await knex.raw(sqlStatement, sqlQuery);
return result[0][0].JSON_ARRAYAGG;
browser:
(async () => {
  try {
    const result = await app.service('raw-service').find({ query: sqlQuery });
    // parse inside the async function, after the result has actually arrived
    console.log(JSON.parse(result));
  } catch (e) {
    console.log(e);
  }
})();
But at least the really ugly stuff is hidden away inside Node.
SQL databases are designed to return rows of columns; any query where you route everything through JSON instead is not going to be "the best method" you ask for.
Your first try at least uses rows, so it is better than the second.
That said, if you are just modifying existing code to work with Knex rather than rewriting it, whatever requires the fewest changes will lead to fewer bugs and should be preferred.
To extract data from a MariaDB database using Knex, you can use the .select() method on a Knex query builder object. This method lets you specify the columns you want to retrieve from the database, as well as any conditions or filters via the .where() method.
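For example (a minimal sketch; the users table and its columns are hypothetical), the query builder returns plain row objects directly, with no JSON.parse step:
const rows = await knex('users')
  .select('id', 'name', 'email')
  .where({ status: 'active' });
// each element of rows is a plain object such as { id: 1, name: '...', email: '...' }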

Does gorm interpret the content of a struct with a logical OR?

New to SQL, I am writing, as an exercise, an API middleware that checks whether the information contained in some headers matches a database entry ("token-based authentication"). Database access is based on GORM.
To do this, I have defined my model as follows:
type User struct {
    ID       uint
    UserName string
    Token    string
}
In my middleware I retrieve the content of the relevant headers and end up with the variables userHeader and tokenHeader. They are supposed to be matched against the database in order to do the authentication.
The user table has a single entry:
select * from users
// 1,admin,admintoken
The authentication code is:
var auth User
res := db.Where(&User{UserName: userHeader, Token: tokenHeader}).Find(&auth)
if res.RowsAffected == 1 {
    // authentication succeeded
}
When testing this, I end up with the following two incorrect results (other combinations are correct):
with only one header set to a correct value (and the other one not present), the authentication is successful (adding the other header with an incorrect value behaves correctly, i.e. auth fails)
no headers set → authentication goes through
I expected my query to mean (in the context of the incorrect results above)
select * from users where users.user_name = 'admin' and users.token = ''
select * from users where users.user_name = '' and users.token = ''
and these queries are correct on the console, i.e. they produce zero results (ran against the database).
GORM, however, seems to discard the empty fields and treat them as matching anything (this is at least my understanding).
I also tried to chain the Where clauses via
db.Where(&User{UserName: userHeader}).Where(&User{Token: tokenHeader}).Find(&auth)
but the result is the same.
What should be the correct query?
The gorm.io documentation says the following on the use of structs in Where conditionals:
When querying with struct, GORM will only query with non-zero fields,
that means if your field’s value is 0, '', false or other zero
values, it won’t be used to build query conditions ...
The suggested solution to this is:
To include zero values in the query conditions, you can use a map,
which will include all key-values as query conditions ...
So, when the token header or both headers are empty, but you still want to include them in the WHERE clause of the generated query, you need to use a map instead of the struct as the argument to the Where method.
db.Where(map[string]interface{}{"user_name": userHeader, "token": tokenHeader}).Find(&auth)
You can use Debug() to check the generated SQL (it gets printed to stderr); use it if you are unsure what SQL your code generates.

How to handle caching counters with Redis?

I am using Postgres as the main DB and Redis for caching. I am working on a caching mechanism for one DB query which takes too much time (it's about 5-6 JOINs plus nested SELECTs). For now I am caching the results of this query using SET 'some key' JSON.stringify(query.result). This works fine; however, I have one column that cannot be cached. It is called commentsCount and it has to be always up to date. As a temporary solution, I am querying the DB just for this one particular field, like this:
app.get('/post/getBySlug/:slug',function(req,res,next){
var cacheKey = req.params.slug+'|'+req.params.language; // "my-post-slug|en-us" for example
cache.get(cacheKey, function(err, post){
throw err if err;
if(post) {
db.getPostCommentsCount({ where: { id: post.id }}).done(function(err,commentsCount){
throw err if err;
post.commentsCount = commentsCount;
res.json(post);
next()
})
} else {
db.getFullPostBySlug(req.params.slug, req.params.language).done(function(err, post){
throw err if err;
cache.set(cacheKey, post);
res.json(post);
next();
})
}
})
})
But it is still not what I want, because the main DB is still queried. Is there any standard/good practice for storing counters in Redis? My comment insert function looks like this:
START TRANSACTION
INSERT INTO "Comments" VALUES (...) -- insert the comment
UPDATE "Posts" SET "commentsCount" = "commentsCount" + 1 WHERE "Posts"."id" = 123456 -- update the counter on the post
COMMIT TRANSACTION
I am using a transaction because I don't want a comment to be inserted without incrementing the comments count. As a "side" question: is it better to make two SQL queries in a transaction, or to write a trigger to handle incrementing the counter?
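One common pattern (a hedged sketch, not an answer from this thread; it assumes the cache object is a node_redis client, which exposes Redis's atomic INCR as cache.incr, and the postId variable and key name are hypothetical) is to mirror the counter in Redis and bump it in the same code path that inserts the comment:
// after the SQL transaction commits, increment the cached counter;
// INCR is atomic, so concurrent comment inserts cannot race each other
cache.incr('post:' + postId + ':commentsCount', function(err, newCount) {
  if (err) return console.error(err);
  console.log('comments count is now', newCount);
});
On a cache miss you can re-seed the key from the commentsCount column, so Postgres remains the source of truth.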
Regarding my query (I posted a link to the gist in the comments):
We don't plan more than 2 languages (though it is possible).
I made those counters because I have to keep counters separate per language, be able to order by those separate counters, and also be able to order by the sum of the counters (total for all languages). I found it hard to write a query that would order by the sum of columns from separate rows while still returning those rows... (At the beginning the counters were stored in the language translations.)
Generally this query looks for a post where a translation exists with a specific 'slug' and 'language' (slug+language on the post translation is a unique index). Moreover, the post has to be published (isPublished = boolean) and post.status has to be 'published' (status = enum) or post.isComingSoon has to be true (isComingSoon = boolean). Do you have an idea what index/ordering I could add to this query? Or should I just remove the limit?
In every translation table I keep the language as TEXT. It can be, for example, en-us or zh-cn, etc. Do you think I should make it an enum, or maybe I should make another table to store languages and just keep language_id in the translations?
The author actually can be null :)

fnReloadAjax(url): two requests

I'd like to refresh my table when a new item is added. I use this code:
$("#frm_create_user").submit(function() {
var formData = getFormData($("#frm_create_user"));
$.ajax({
type: "POST",
url: getApiUrl("/user"),
dataType: "json",
contentType: 'application/json',
data: JSON.stringify({user:{user_ref: formData.user_ref}}),
}).done(function(r) {
oTable.fnReloadAjax(getApiUrl("/users?sSearch=" + r.user.userid));
});
return false;
});
But for some reason, I can see two requests instead of one.
The first one is correct - http://symfony/app_dev.php/api/users?sSearch=kZoh1s23&_=1394204041433
And the second one is confusing - http://symfony/app_dev.php/api/users?sSearch=kZoh1s23&sEcho=3&iColumns=8&sColumns=&iDisplayStart=0&iDisplayLength=25&mDataProp_0=userid&mDataProp_1=user_ref&mDataProp_2=password&mDataProp_3=vpn_password&mDataProp_4=status_id&mDataProp_5=expire_account&mDataProp_6=created&mDataProp_7=&sSearch=&bRegex=false&sSearch_0=&bRegex_0=false&bSearchable_0=true&sSearch_1=&bRegex_1=false&bSearchable_1=true&sSearch_2=&bRegex_2=false&bSearchable_2=true&sSearch_3=&bRegex_3=false&bSearchable_3=true&sSearch_4=&bRegex_4=false&bSearchable_4=true&sSearch_5=&bRegex_5=false&bSearchable_5=true&sSearch_6=&bRegex_6=false&bSearchable_6=true&sSearch_7=&bRegex_7=false&bSearchable_7=true&iSortCol_0=0&sSortDir_0=asc&iSortingCols=1&bSortable_0=true&bSortable_1=false&bSortable_2=true&bSortable_3=true&bSortable_4=true&bSortable_5=true&bSortable_6=true&bSortable_7=true&_=1394204041505
If I remove the fnReloadAjax() line, these two requests are gone, so it looks like they are caused by fnReloadAjax().
How can I fix it so that only the http://symfony/app_dev.php/api/users?sSearch=kZoh1s23&_=1394204041433 request is made?
All these confusing parameters are information that your server-side script might need to limit the data that has to be returned to DataTables.
Since I don't know your server-side code, I can only break down what they are for:
&sEcho=3 //No need to react to this, it's just the result of the last ajax call
&iColumns=8 //Your table has 8 columns
&iDisplayStart=0 //You are on page 1
&iDisplayLength=25 //you want to display up to 25 entrys per page
&mDataProp_0=userid //Your first colum gets the value of [userid]
&mDataProp_1=user_ref //Your first colum gets the value of [user_ref]
&mDataProp_2=password //Your first colum gets the value of [password]
etc...
&sSearch=12345&bRegex=true//Your first column is filtered by userid 12345 and this value should be treated as a regex by your datasource
&sSearch_0=&bRegex_0=false//Your second column is not filtered and should not be treated as a regex by your datasource
etc...
&iSortCol_0=0&sSortDir_0=asc //your first column should be sorted ascending
&iSortingCols=1 //you have one column that is sortable
&bSortable_0=true //Column 0 is sortable
&bSortable_1=false //Column 1 is not sortable
etc..
Your server-side script should react to these values. In the case of a MySQL data source, it should build its WHERE clause from the filtering parameters, limit the results by page number and items per page, and sort according to the sort info.
All this is needed if you want to use the luxury features of DataTables like pagination, sorting, and individual column filtering, clamping the ajax return values to minimize server load when working with thousands of entries.
If you don't need that, just ignore the additional parameters in your server script and react only to the data you need. But leave them in, you might need them later :-)
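For illustration, a minimal Express sketch of such a server-side script (hypothetical; the route, variable names, and data source are assumptions) that reads the paging parameters and answers in the legacy DataTables response format:
app.get('/api/users', function(req, res) {
  var start = parseInt(req.query.iDisplayStart, 10) || 0;    // row offset
  var length = parseInt(req.query.iDisplayLength, 10) || 25; // page size
  var search = req.query.sSearch || '';                      // global filter
  // ...query your data source with the offset, limit and filter, then:
  res.json({
    sEcho: req.query.sEcho,  // echoed back so DataTables can match the draw
    iTotalRecords: 0,        // total rows before filtering (fill in)
    iTotalDisplayRecords: 0, // total rows after filtering (fill in)
    aaData: []               // the page of rows (fill in)
  });
});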
Hope this helps

grid filter in dojo

Can anyone help with filtering on multiple conditions in a Dojo grid?
I'm using grid.DataGrid and JSON data.
data1 = { items: [
  { "id": 1, "media": "PRINT", "pt": "Yellow Directory" },
  { "id": 2, "media": "DIGITAL", "pt": "Social Media" },
  { "id": 3, "media": "DIGITAL", "pt": "Yellow Online" }
], identifier: "id" };
a = 1, b = 2;
grid.filter({ id: a, id: b });
The above line only displays the record with the b value (in an object literal, a duplicate key overwrites the earlier one, so only id: b survives).
I need the records with both values.
Can anyone help me with this?
So you want the records that have any of the specified ids?
It comes down to the capabilities of the store you're using. If you're using a Memory store with SimpleQueryEngine, then you can specify a regex or an object with a test function instead:
grid.filter({
  id: {
    test: function(x) {
      // compare against the id variables, not the string literals 'a' and 'b'
      return x === a || x === b;
    }
  }
});
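The regex form mentioned above could look like this (a sketch, assuming the Memory store's SimpleQueryEngine, which calls the regex's test() on each value and coerces numeric ids to strings):
grid.filter({ id: /^(1|2)$/ }); // matches rows whose id is 1 or 2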
If you're using a JsonRest store, then you get to choose how your queries are processed server-side, so you could potentially pass in an array of interesting values and handle that in your own way on the server (i.e. filter({id: [a, b]})).
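A hypothetical server-side counterpart in Express (a sketch; it assumes dojo encodes filter({ id: [1, 2] }) as ?id=1&id=2, which Express parses back into an array, and reuses the sample data from the question):
var items = [
  { id: 1, media: 'PRINT', pt: 'Yellow Directory' },
  { id: 2, media: 'DIGITAL', pt: 'Social Media' },
  { id: 3, media: 'DIGITAL', pt: 'Yellow Online' }
];
app.get('/items', function(req, res) {
  var ids = [].concat(req.query.id || []).map(Number); // normalize to an array of numbers
  res.json(items.filter(function(item) {
    return ids.indexOf(item.id) !== -1; // keep rows whose id is in the list
  }));
});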