How to handle caching counters with Redis?

I am using Postgres as my main DB and Redis for caching. I am working on a caching mechanism for one DB query which takes too much time (it's about 5-6 JOINs plus nested SELECTs). For now I am caching the results of this query using SET 'some key' JSON.stringify(query.result). This works fine; however, I have one column that cannot be cached - it is called commentsCount. It has to be always up to date. As a temporary solution, I am querying the DB just for this one particular field, like this:
app.get('/post/getBySlug/:slug/:language', function(req, res, next){ // :language added so req.params.language is defined
  var cacheKey = req.params.slug + '|' + req.params.language; // "my-post-slug|en-us" for example
  cache.get(cacheKey, function(err, cached){
    if (err) throw err;
    if (cached) {
      var post = JSON.parse(cached); // values are stored as JSON strings
      db.getPostCommentsCount({ where: { id: post.id } }).done(function(err, commentsCount){
        if (err) throw err;
        post.commentsCount = commentsCount;
        res.json(post);
        next();
      });
    } else {
      db.getFullPostBySlug(req.params.slug, req.params.language).done(function(err, post){
        if (err) throw err;
        cache.set(cacheKey, JSON.stringify(post));
        res.json(post);
        next();
      });
    }
  });
});
But it is still not what I want, because the main DB is still queried. Is there any standard/good practice for storing counters in Redis? My comment insert function looks like this:
START TRANSACTION;
INSERT INTO "Comments" VALUES (...); -- insert the comment
UPDATE "Posts" SET "commentsCount" = "commentsCount" + 1 WHERE "Posts"."id" = 123456; -- bump the counter on the post
COMMIT;
I am using a transaction because I don't want a comment to be inserted without incrementing the comments count. As a "side" question - is it better to make two SQL queries in a transaction, or to write a trigger that handles incrementing the counter?
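As for the Redis side of the question, one common approach (a minimal sketch, assuming the node_redis client; onCommentInserted and getCachedPost are hypothetical helper names) is to keep the live value in a dedicated Redis key and bump it with the atomic INCR command whenever a comment is inserted, instead of re-querying Postgres on every read:
var redis = require('redis');
var client = redis.createClient();

// Called right after the INSERT/UPDATE transaction commits:
// INCR is atomic, so concurrent comment inserts cannot lose updates.
function onCommentInserted(postId, callback) {
  client.incr('post:' + postId + ':commentsCount', callback);
}

// Called on the read path: merge the cached post with the live counter.
function getCachedPost(cacheKey, callback) {
  client.get(cacheKey, function(err, cached) {
    if (err) return callback(err);
    if (!cached) return callback(null, null); // cache miss - fall back to Postgres
    var post = JSON.parse(cached);
    client.get('post:' + post.id + ':commentsCount', function(err, count) {
      if (err) return callback(err);
      post.commentsCount = parseInt(count, 10) || 0;
      callback(null, post);
    });
  });
}
The Redis key would need to be seeded from the commentsCount column on a cache miss, and written back to Postgres periodically (or inside the existing transaction) if the column must remain authoritative.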
Regarding my query (I posted a link to the gist in the comments):
We don't plan on more than 2 languages (though it is possible).
I made those counters because I have to keep the counters separate per language, be able to order by those separate counters, and also be able to order by the sum of the counters (the total for all languages) - I found it hard to write a query that would order by the sum of columns from separate rows while still returning those rows... (At the beginning the counters were stored in the language translations.)
Generally this query looks for a post where a translation with a specific 'slug' and 'language' exists (slug+language is a unique index on the post translation). Moreover, the post has to be published (isPublished = boolean) and post.status has to be 'published' (status = enum) or post.isComingSoon has to be true (isComingSoon = boolean). Do you have an idea what index/ordering I could add to this query? Or should I just remove the limit?
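One thing worth trying for the visibility filter is a partial index (a sketch only - the table and column names below are guesses based on the description above, and db.query stands in for whatever client runs the migration):
// Hypothetical migration: index only the rows the query can actually return,
// so the planner can skip unpublished posts entirely.
db.query(
  'CREATE INDEX "Posts_visible_idx" ON "Posts" ("id") ' +
  'WHERE "isPublished" AND ("status" = \'published\' OR "isComingSoon")'
);
A query can only use such an index if it repeats the same WHERE predicate, so this helps most when the visibility condition is identical everywhere.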
In every translation table I keep the language as TEXT. It can be, for example, en-us or zh-cn, etc. Do you think I should make it an enum, or maybe I should make another table to store languages and just keep a language_id in the translations?
The author actually can be null :)

Related

Very slow read from a database

I use Spring Boot with Spring Data JPA, Hibernate and Oracle.
In my table I have around 10 million records. For each record I need to do some operation, write info to a file and afterwards delete the record.
It's a basic SQL query:
select * from zzz where status = 2;
I did a test without doing the operation and without deleting the records:
long start = System.nanoTime();
int page = 0;
Pageable pageable = PageRequest.of(page, LIMIT);
Page<Billing> pageBilling = billingRepository.findAllByStatus(pageable);
while (true) {
    for (Billing billing : pageBilling.getContent()) {
        // process
        // write to file
        // delete element
    }
    if (!pageBilling.hasNext()) {
        break;
    }
    pageable = pageBilling.nextPageable();
    pageBilling = billingRepository.findAllByStatus(pageable);
}
long end = System.nanoTime();
long microseconds = (end - start) / 1000;
System.out.println(microseconds + " to write");
The result is bad: with a limit of 10,000 it took 157 minutes; with 100,000, 28 minutes; with millions, 19 minutes.
Is there a better solution to increase performance?
The following are likely to improve the performance significantly:
You should not iterate past the first page. Instead, delete the processed data and select the first page again. You actually don't need a Page for that; you can encode the limit in the method name. Selecting late pages is rather inefficient.
The process of loading, processing and deleting one batch of items should run in a separate transaction. Otherwise the EntityManager will hold on to every entity ever loaded, which will make things really slow.
If that still isn't sufficient, you may look into the following:
Inspect the SQL being executed. Does it look sensible? If not, consider switching to JdbcTemplate or NamedParameterJdbcTemplate; with a query method that takes a RowCallbackHandler you should be able to load and process all rows with a single select statement, and at the end issue one delete statement to remove all the rows. This requires that the status you filter on does not change in the meantime.
What do the execution plans look like? If they seem off, inspect your indices.

How to convert a for loop to NHibernate Futures for performance

NHibernate Version: 3.4.0.4000
I'm currently working on optimizing our code so that we can reduce the number of round trips to the database and am looking at a for loop that is one of the culprits. I'm having a hard time figuring out how to batch all of these iterations into a future that gets executed once when sent to SQL Server. Essentially each iteration of the loop causes 2 queries to hit the database!
foreach (var choice in lineItem.LineItemChoices)
{
    choice.OptionVersion = _session.Query<OptionVersion>()
        .Where(x => x.Option.Id == choice.OptionId)
        .OrderByDescending(x => x.OptionVersionNumber)
        .FirstOrDefault();
    choice.ChoiceVersion = _session.Query<ChoiceVersion>()
        .OrderByDescending(x => x.ChoiceVersionIdentity.ChoiceVersionNumber)
        .Where(x => x.Choice.Id == choice.ChoiceId)
        .FirstOrDefault();
}
One option is to extract the OptionId and ChoiceId values from all the LineItemChoices into two lists in local memory. Then issue just two queries, one for options and one for choices, passing these lists in .Where(x => optionIds.Contains(x.Option.Id)). This corresponds to the SQL IN operator. It requires some postprocessing: you will get two result lists (transform them to a dictionary or lookup if you expect many results) that you need to process to populate the choice objects. This postprocessing is local and tends to be very cheap compared to database round trips. This option can be a bit tricky if the existing FirstOrDefault part is strictly necessary. Do you expect there to be more than one result for a single optionId? If not, this code could have used SingleOrDefault instead, which could simply be dropped when converting to IN-queries.
The other option is to use futures (https://nhibernate.info/doc/nhibernate-reference/performance.html#performance-future). For Linq this means appending ToFuture or ToFutureValue at the end, which also conflicts with FirstOrDefault, I believe. The important thing is that you need to loop over all line item choices to initialize ALL the queries BEFORE you access the value of any of them. So this is likely to also require some postprocessing, where you would first store the future values in a list, and then in a second loop access the real value from each query to populate the line item choice.
If you expect that the queries can yield more than one result (before applying FirstOrDefault), I think you can just use Take(1) instead, as that will still return an IQueryable to which you can apply the future method.
The first option is probably the most efficient, since it will just be two queries and allow the database engine to make just one pass over the tables.
Keep in mind the limit on the maximum number of parameters that can be passed in an SQL query. If there can be thousands of line item choices, you may need to split them into batches and query for at most about 2000 identifiers per round trip.
Adding on to Oskar's answer: NHibernate Futures were implemented in NHibernate 2.1. They are available via the Future method for collections and FutureValue for single values.
In your case, you could extract the IDs from the list in memory ...
var optionIds = lineItem.LineItemChoices.Select(x => x.OptionId);
var choiceIds = lineItem.LineItemChoices.Select(x => x.ChoiceId);
... and execute two queries using Future<T> to get the two lists in one hit on the database.
var optionVersions = _session.Query<OptionVersion>()
    .Where(x => optionIds.Contains(x.Option.Id))
    .OrderByDescending(x => x.OptionVersionNumber)
    .Future<OptionVersion>();
var choiceVersions = _session.Query<ChoiceVersion>()
    .Where(x => choiceIds.Contains(x.Choice.Id))
    .OrderByDescending(x => x.ChoiceVersionIdentity.ChoiceVersionNumber)
    .Future<ChoiceVersion>();
Then, with everything you need in memory, you can loop over the original collection and look up the data in memory to fill up the choice objects.
foreach (var choice in lineItem.LineItemChoices)
{
    choice.OptionVersion = optionVersions
        .OrderByDescending(x => x.OptionVersionNumber)
        .FirstOrDefault(x => x.Option.Id == choice.OptionId);
    choice.ChoiceVersion = choiceVersions
        .OrderByDescending(x => x.ChoiceVersionIdentity.ChoiceVersionNumber)
        .FirstOrDefault(x => x.Choice.Id == choice.ChoiceId);
}

Best way of getting the row just added. Heroku + Node + Postgres

What is the best way of getting the row that was just added? I am working with Heroku, Node, Postgres and Express. I want to be able to do something like this:
app.post('/', function(req, res){
  client.query("INSERT into ..", function(err, result){
    res.send(result.id);
  });
});
Ideally the callback would have information about the row it just inserted, but its content is just an object that looks like
{ rows: [] }
Is there a good way of getting the row that I just added? Thanks.
You probably want to use INSERT ... RETURNING:
The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted. This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of the RETURNING list is identical to that of the output list of SELECT.
So something like this:
client.query('insert into your_table (...) values (...) returning *', function(err, result) {
  // ...
});
should get you the newly inserted row in your callback function.
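For completeness, here is how that looks end to end with the pg module (a minimal sketch - the posts table and its title column are made up for the example):
var pg = require('pg');
var client = new pg.Client(process.env.DATABASE_URL); // Heroku provides DATABASE_URL

client.connect(function(err) {
  if (err) throw err;
  client.query(
    'INSERT INTO posts (title) VALUES ($1) RETURNING *',
    ['hello world'],
    function(err, result) {
      if (err) throw err;
      console.log(result.rows[0]); // the full row, including the generated id
      client.end();
    }
  );
});
Note that the inserted row lands in result.rows[0], not directly on the result object.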

jqGrid/NHibernate/SQL: navigate to selected record

I use jqGrid to display data which is retrieved using NHibernate. jqGrid does paging for me; I just tell NHibernate to get "count" rows starting from "n".
Also, I would like to highlight a specific record. For example, in a list of employees I'd like a specific employee (id) to be shown and pre-selected in the table.
The problem is that this employee may not be on the current page. E.g. I display 20 rows from 0, but the "highlighted" employee is #25 and is on the second page.
It is possible to pass an initial page to jqGrid, so if I somehow use NHibernate to find out what page the "highlighted" employee is on, the grid will just navigate to that page and then I'll use jqGrid's .setSelection(id) method.
So the problem is narrowed down to this: given a specific search query like the one below, how do I tell NHibernate to calculate the page the "highlighted" employee is on?
A sample query (simplified):
var query = Session.CreateCriteria<T>();
foreach (var sr in request.SearchFields)
    query = query.Add(Expression.Like(sr.Key, "%" + sr.Value + "%"));
query.SetFirstResult((request.Page - 1) * request.Rows);
query.SetMaxResults(request.Rows);
Here, I need to alter (calculate) request.Page so that it points to the page where request.SelectedId is.
Also, one interesting thing: if the sort order is not defined, will I get the same results when I run the search query twice? I'd say SQL Server may optimize the query because the order is not defined... in which case I'd only get predictable results by pulling ALL the query data once and then programmatically slicing out the specified portion in C# - so that no second query occurs. But that would be much slower, of course.
Or is there another way?
Pretty sure you'd have to figure out the page with another query. This would surely require you to define the column to order by. You'll need to get the order-by and the restrictions working together to count the rows before that particular id. Once you have the number of rows before your id, the page is simply rowsBefore / pageSize + 1 (integer division), and you can perform the usual paging query.
OK, so currently I do this:
var iquery = GetPagedCriteria<T>(request, true)
    .SetProjection(Projections.Property("Id"));
var ids = iquery.List<Guid>();
var index = ids.IndexOf(new Guid(request.SelectedId));
if (index >= 0)
    request.Page = index / request.Rows + 1;
and in the jqGrid setup options:
url: "${Url.Href<MyController>(c => c.JsonIndex(null))}?_SelectedId=${Id}",
// remove _SelectedId from url once loaded because we only need to find its page once
gridComplete: function() {
    $("#grid").setGridParam({url: "${Url.Href<MyController>(c => c.JsonIndex(null))}"});
},
loadComplete: function() {
    $("#grid").setSelection("${Id}");
}
That is, in the request I look up the index of the id and set the page if it is found (jqGrid even knows to display the appropriate page number in the pager, because I return the page number to it in the json data). In the grid setup I initially include the lookup id in the url, but once the grid is loaded I remove it from the url so that the prev/next buttons work. However, I always try to highlight the selected id in the grid.
And of course I always use sorting, or the method won't work.
One problem still exists: I pull all the ids from the db, which is a bit of a performance hit. If someone can tell me how to find the index of the id in the filtered/sorted query, I'd accept that answer (since that's the real problem); if not, I'll accept my own answer ;-)
UPDATE: hm, if I sort by id initially I'll be able to use a technique like "SELECT COUNT(*) ... WHERE id < selectedid". This would eliminate the "pull ids" problem... but I'd like to sort by name initially, anyway.
UPDATE: after implementing it, I've found a neat side effect of this technique... when sorting, the active/selected item is preserved ;-) This works if _SelectedId is reset only when the page is changed, not when the grid is loaded.
UPDATE: here's sources that include the above technique: http://sprokhorenko.blogspot.com/2010/01/jqgrid-mvc-new-version-sources.html

Are transactions possible with HTML5 Storage in Safari

Instead of doing an each loop over a JSON file containing a list of SQL statements and passing them one at a time, is it possible with Safari client-side storage to simply wrap the data in "BEGIN TRANSACTION" / "COMMIT TRANSACTION" and pass that to the database system in a single call? Looping over 1,000+ statements takes too much time.
Currently iterating one transaction at a time:
$j.getJSON("update1.json", function(data){
  $j.each(data, function(i, item){
    testDB.transaction(
      function (transaction) {
        transaction.executeSql(data[i], [], nullDataHandler, errorHandler);
      }
    );
  });
});
Trying to figure out how to make just one call:
$j.getJSON("update1.json", function(data){
  testDB.transaction(
    function (transaction) {
      transaction.executeSql(data, [], nullDataHandler, errorHandler);
    }
  );
});
Has anybody tried this yet and succeeded?
Every example I could find in the documentation seems to show only one SQL statement per executeSql command. I would suggest showing an "ajax spinner" loading graphic and executing your SQL in a loop. You can keep it all within one transaction, but the loop still needs to be there:
$j.getJSON("update1.json", function(data){
  testDB.transaction(
    function (transaction) {
      for (var i = 0; i < data.length; i++){
        transaction.executeSql(data[i], [], nullDataHandler, errorHandler);
      }
    }
  );
});
Moving the loop inside the transaction and using the for (var i = ...) form should help get a little more speed out of your loop. $.each is fine for fewer than about 1000 iterations; beyond that, the native for loop will probably be faster.
Note: using my code, if any of your SQL statements throws an error, the entire transaction will fail. If that is not your intention, you will need to keep the loop outside the transaction.
I haven't ever messed with HTML5 database storage (I have with local/sessionStorage, though), but I would assume it's possible to run one huge string of statements. Use data.join(separator) to get the string representation of the data array.
Yes, it is possible to process a whole group of statements within a single transaction with WebSQL. You actually don't even need to issue BEGIN or COMMIT; that is taken care of for you automatically, as long as you make all your executeSql calls from within the same transaction callback. As long as you do this, every statement is included in the transaction.
This makes the process much faster, and it also means that if one of your statements has an error, the entire transaction is rolled back.
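Whether the transaction actually rolls back is controlled by the per-statement error callback, so an errorHandler along these lines (a sketch; the logging is illustrative) decides between aborting everything and skipping one bad statement:
function errorHandler(transaction, error) {
  console.log('SQL error ' + error.code + ': ' + error.message);
  // Returning true aborts the transaction and rolls back every statement;
  // returning false skips just this statement and lets the rest continue.
  return true;
}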