Neo4j query response times and indexing

Testing query response times returns interesting results:
1. When executing the same query several times in a row, the response times at first get better up to a certain point; after that, each execution gets a little slower or jumps inconsistently.
2. Running the same query with and without the USING INDEX hint returns almost the same response-time range (as described in point 1), although the profile improves (fewer db hits when USING INDEX is used).
3. Dropping the index and re-running the query returns the same profile as executing the query with the index in place but without the USING INDEX hint.
Is there an explanation for the above results?
What is the best way to know whether a query has improved when the db hits get better but the response times don't?

The best way to understand how a query executes is probably to use the PROFILE command, which will actually explain how the database goes about executing the query. This should give you feedback on what cypher does with USING INDEX hints. You can also compare different formulations of the same query to see which result in fewer dbHits.
There probably is no comprehensive answer to why the query takes a variable amount of time in various situations. You haven't provided your model, your data, or your queries. It's dependent on a whole host of factors outside of just your query, for example your data model, your cache settings, whether or not the JVM decides to garbage collect at certain points, how full your heap is, what kind of indexes you have (whether or not you use USING INDEX hints) -- and those are only the factors at the neo4j/java level. At the OS level there are many other possibilities/contingencies that make precise performance measurement difficult.
In general, when I'm concerned about these things, I find it's good to gather a large data sample (run the query 10,000 times) and then take an average. All of the factors that are outside of your control tend to average out in a sample like that. But if you're looking for a concrete prediction of exactly how long the next query will take, down to the millisecond, that may not be realistically possible.
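As a sketch of that averaging approach (the run_query callable here is a hypothetical stand-in for whatever executes your Cypher statement, e.g. a session from the official neo4j driver):

```python
import time

def benchmark(run_query, warmup=50, runs=1000):
    """Average the wall-clock time of run_query over many executions,
    discarding warm-up runs so cache effects settle first."""
    for _ in range(warmup):
        run_query()
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        run_query()
        total += time.perf_counter() - start
    return total / runs  # mean seconds per execution

# stand-in workload; replace the lambda with your actual query call
avg = benchmark(lambda: sum(range(1000)), warmup=5, runs=100)
print(f"average: {avg * 1e6:.1f} microseconds")
```

The warm-up runs matter: they absorb the initial improvement you observed in point 1, so the average reflects steady-state behaviour rather than cold caches.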

Related

The performance of retrieving all columns vs. specific columns

I am learning SQL from "SQL in 10 Minutes".
Regarding the use of wildcards to retrieve all columns, it states:
As a rule, you are better off not using the * wildcard unless you really do need every column in the table. Even though use of wildcards may save you the time and effort needed to list the desired columns explicitly, retrieving unnecessary columns usually slows down the performance of your retrieval and your application.
However, it takes less time to retrieve all the columns than to retrieve specific fields:
As the results indicate, the wildcard query took 0.02 seconds vs. 0.1 seconds.
I tested several times; the wildcard was consistently faster than specifying columns, even though the time consumed varied on every run.
Kudos to you for attempting to validate advice you get in a book! A single test neither validates nor invalidates the advice. It is worthwhile to dive further.
The advice provided in SQL in 10 Minutes is sound, and it explicitly states that its purpose relates to performance. (Another consideration is that SELECT * makes the code unstable when the database changes.) As a note: I regularly use select t.* for ad-hoc queries.
Why are the results different? There can be multiple reasons for this:
Databases do not have deterministic performance, so other considerations -- such as other processes running on the machine or resource contention -- can affect the performance.
As mentioned in a comment, caching can be the reason. Specifically, running the first query may require loading the data from disk, while it is already in memory for the second.
Another form of caching is for the execution plan, so perhaps the first execution plan is cached but not the second.
You don't mention the database, but perhaps your database has a really, really slow compiler and compiling the first takes longer than the second.
Fundamentally, the advice is sound from a common-sense perspective. Moving less data around should be more efficient. That is really what the advice is saying.
In any case, the difference between 100 milliseconds and 20 milliseconds is very short. I would not generalize this performance to larger data and say that one query is 5 times faster than the other in general. For whatever reason, it is 80 milliseconds shorter on a very small data set, one so small that performance would not be a consideration anyway.
For manual testing of the data that's in a table or tables?
Then it doesn't matter much whether you use a * or the column names.
Sure, if the table has, say, 100 columns and you are only interested in a few, then explicitly listing the column names will give you a less convoluted result.
Plus, you can choose the order in which they appear in the result.
And using a * in a sub-query would drag all the fields into the result set, while selecting only the columns you need could improve performance.
For manual testing, that normally doesn't matter much.
Whether a test SQL runs 1 second or 2 seconds, if it's a test or an ad-hoc query then it won't bother you.
What the suggestion is really intended for is coding SQL that will be used in a production environment.
When you use * in a SQL, a change in the tables used by the query can affect the output of that query.
Possibly leading to errors. Your boss would frown upon that!
For example, a SQL with a select * from tableA union select * from tableB that you coded a year ago suddenly starts crashing because a column was added to tableB. Ouch.
But by explicitly listing the column names, adding a column to one of the tables wouldn't make any difference to that SQL.
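That union breakage is easy to reproduce. A small sketch using Python's sqlite3 (table names as in the example; other databases behave similarly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE tableB (id INTEGER, name TEXT)")
conn.execute("SELECT * FROM tableA UNION SELECT * FROM tableB")  # works fine

conn.execute("ALTER TABLE tableB ADD COLUMN added TEXT")         # schema changes a year later
try:
    conn.execute("SELECT * FROM tableA UNION SELECT * FROM tableB")
    crashed = False
except sqlite3.OperationalError:
    crashed = True   # the old SELECT * query now fails: column counts differ

# the explicit column list keeps working after the schema change
conn.execute("SELECT id, name FROM tableA UNION SELECT id, name FROM tableB")
print("old query crashed:", crashed)
```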
In other words: in production, stability and performance matter much more than code golf.
Another thing to keep in mind is the effect of caching.
Some databases can temporarily store metadata or even data in memory, which can speed up the retrieval of a query that returns the same results as a query run just before it.
So try running the following SQLs, in a different order than in the question, and check whether there's still a speed difference.
select * from products;
select prod_id, prod_name, prod_price from products;
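To reduce caching and ordering effects further, average each statement over many runs. A minimal sketch with Python's sqlite3 and a made-up products table (your table, sizes, and engine will differ):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (prod_id INTEGER, prod_name TEXT,"
             " prod_price REAL, notes TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?, ?)",
                 [(i, f"p{i}", i * 0.5, "x" * 100) for i in range(5000)])

def avg_seconds(sql, runs=50):
    """Mean wall-clock time of executing sql and fetching every row."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql).fetchall()
        total += time.perf_counter() - start
    return total / runs

star = avg_seconds("SELECT * FROM products")
cols = avg_seconds("SELECT prod_id, prod_name, prod_price FROM products")
print(f"SELECT *: {star * 1000:.2f} ms, explicit columns: {cols * 1000:.2f} ms")
```

Note the wide notes column: the column list skips it, so it moves less data per row, which is exactly the effect the book's advice is about.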

How do I improve performance when querying on a column that changes frequently in SQL Azure using LINQ to SQL

I have an SQL Azure database, and one of the tables contains over 400k objects. One of the columns in this table is a count of the number of times that the object has been downloaded.
I have several queries that include this particular column (call it timesdownloaded), sorted descending, in order to find the results.
Here's an example query in LINQ to SQL (I'm writing all this in C# .NET):
var query = from t in db.tablename
where t.textcolumn.StartsWith(searchfield)
orderby t.timesdownloaded descending
select t.textcolumn;
// grab the first 5
var items = query.Take(5);
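A LINQ query of that shape translates to roughly SELECT TOP (5) textcolumn ... WHERE textcolumn LIKE @p0 + '%' ORDER BY timesdownloaded DESC. A sketch of the equivalent shape using Python's sqlite3, with the (made-up) table and column names from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (textcolumn TEXT, timesdownloaded INTEGER)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [("apple", 10), ("apply", 50), ("apex", 5), ("banana", 99)])

searchfield = "app"
rows = conn.execute(
    "SELECT textcolumn FROM tablename "
    "WHERE textcolumn LIKE ? || '%' "       # StartsWith -> prefix LIKE
    "ORDER BY timesdownloaded DESC LIMIT 5",  # Take(5) -> TOP/LIMIT
    (searchfield,),
).fetchall()
print(rows)  # -> [('apply',), ('apple',)]
```

A prefix LIKE can use an index on textcolumn, but the ORDER BY on the frequently updated timesdownloaded column is what forces the sort or the fragile covering index described below.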
This query is called perhaps 90 times per minute on average.
Objects are downloaded perhaps 10 times per minute on average, so this timesdownloaded column is updated that frequently.
As you can imagine, any index involving the timesdownloaded column gets over 30% fragmented in a matter of hours. I have implemented an index maintenance plan that checks and rebuilds these indexes when necessary every few hours. This helps, but of course adds spikes in query response times whenever the indexes are rebuilt which I would like to avoid or minimize.
I have tried a variety of indexing schemes.
The best performing indexes are covering indexes that include both the textcolumn and timesdownloaded columns. When these indexes are rebuilt, the queries are amazingly quick of course.
However, these indexes fragment badly and I end up with pretty frequent delay spikes due to rebuilding indexes and other factors that I don't understand.
I have also tried simply not indexing the timesdownloaded column. This seems to perform more consistently overall, though slower of course. And when I check the SQL query execution plan, it seems to be pretty inconsistent in how SQL tries to optimize this query. Of course it ends up with a lot of logical reads, as it has to fetch the timesdownloaded column from the table rather than from an organized index. So this isn't optimal.
What I'm trying to figure out is if I am fundamentally missing something in how I have configured or manage this database.
I'm no SQL expert, and I've yet to find a good answer for how to do this.
I've seen some suggestions that Stored Procedures could help, but I don't understand why and haven't tried to get those going with LINQ just yet.
As commented below, I have considered caching but haven't taken that step yet either.
For some context, this query is a part of a search suggestion feature. So it is called frequently with many different search terms.
Any suggestions would be appreciated!
Based on the comments to my question and further testing, I ended up using an Azure Table to cache my results. This is working really well and I get a lot of hits off of my cache and many fewer SQL queries. The overall performance of my API is much better now.
I did try Azure In-Role Caching, but that method doesn't appear to work well for my needs. It ended up using too much memory (no matter how I configured it, which I don't understand), swapping to disk like crazy, and brought my little Small instances to their knees. I don't want to pay more at the moment, so Tables it is.
Thanks for the suggestions!
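For anyone landing here later, the general caching idea is independent of Azure Tables: key the results by search term with a time-to-live, so repeated terms never reach SQL. A hedged sketch (all names below are made up, not the poster's actual code):

```python
import time

class TtlCache:
    """Tiny in-memory cache keyed by search term, with a time-to-live."""
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.store = {}  # term -> (expiry, results)

    def get(self, term):
        entry = self.store.get(term)
        if entry and entry[0] > time.monotonic():
            return entry[1]          # fresh hit: no SQL round trip
        return None

    def put(self, term, results):
        self.store[term] = (time.monotonic() + self.ttl, results)

def suggest(term, cache, query_db):
    """Return suggestions, querying the database only on a cache miss."""
    results = cache.get(term)
    if results is None:
        results = query_db(term)
        cache.put(term, results)
    return results

db_calls = []
def fake_query(term):                # stand-in for the real SQL query
    db_calls.append(term)
    return [term + "le"]

cache = TtlCache()
first = suggest("app", cache, fake_query)
second = suggest("app", cache, fake_query)  # served from cache
print(first, second, "db calls:", len(db_calls))
```

A TTL also caps how stale the timesdownloaded ordering can get, which is usually acceptable for search suggestions.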

Optimize query to fetch tuples from index directly

I want to optimize a large SQL query that is around 500 lines long and a little slow; it takes 1 to 5 seconds to execute in an interactive system.
I saw this Munin graph, which is not the same as this second graph.
What I understand from the first graph (showing scans) is that the indexes are being used in WHERE or ORDER BY clauses, only to search for a tuple that matches some rules (a boolean expression).
I'm not really sure what the second graph means by "tuple access".
Question 1: What is the meaning of "tuple access"?
So I'm thinking that I could take a step forward in optimization if I rewrote parts of this big query to fetch more tuples via the indexes and fewer sequentially, using the information in the second graph.
Question 2: Am I correct? Would it be better if the second graph showed more index fetches and fewer sequential reads?
Question 3: If so, could you provide a SQL example in which the tuples are index-fetched, as opposed to one in which they are read sequentially?
Note: In the questions, I'm only referring to the second graph
In general, trying to optimize graphs like this is a mistake unless you have a specific performance problem. It is not in fact always better to retrieve tuples from the indexes. These things are very complex decisions which depend on specifics of table, table access, the sort of material you are retrieving and more.
The fact is that a query plan that works for one quantity of data may not work as well for another.
For example, if you have a lot of small tables, sequential scans will essentially always beat index scans.
So what you want to do is to start by finding the slow queries, running them under EXPLAIN ANALYZE and looking for opportunities to add appropriate indexes. You can't do this without looking at the query plan and the actual query, which is why you always want to look at that.
In other words, your graph just gives you a sense of access patterns. It does not give you enough information to do any sort of real performance optimizations.
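To see the scan-vs-index distinction directly in a query plan, here is a small sketch with Python's sqlite3 (the question concerns PostgreSQL and EXPLAIN ANALYZE, but the plan output reads the same way in spirit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, str(i)) for i in range(1000)])

def plan(sql):
    # the last column of EXPLAIN QUERY PLAN output is a readable detail string
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan("SELECT val FROM t WHERE id = 42")  # sequential read: "SCAN t"
conn.execute("CREATE INDEX idx_t_id ON t(id)")
after = plan("SELECT val FROM t WHERE id = 42")   # index fetch: "SEARCH t USING INDEX ..."
print(before, after)
```

The same query flips from a full scan to an index search once the index exists, which is exactly the difference between the two lines on the access-pattern graph. Whether the flip actually helps still depends on table size, as the answer notes.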

Oracle ORDERED hint cost vs speed

So, a few weeks ago, I asked about Oracle execution plan cost vs speed in relation to the FIRST_ROWS(n) hint. I've run into a similar issue, but this time around the ORDERED hint. When I use the hint, my execution time improves dramatically (upwards of 90%), but the EXPLAIN PLAN for the query reports an enormous cost increase. In this particular query, the cost goes from 1500 to 24000.
The query is parameterized for pagination and joins 19 tables to get the data out. I'd post it here, but it is 585 lines long and is written against a vendor's messy, godawful schema. Unless you happened to be intimately familiar with the product it is used for, it wouldn't be much help to see it. However, I gathered the schema stats at 100% shortly before starting work on tuning the query, so the CBO is not working in the dark here.
I'll try to summarize what the query does. The query essentially returns objects and their children in the system, and is structured as a large subquery block joined directly to several tables. The first part returns object IDs and is paginated inside its query block, before the joins to other tables. Then, it is joined to several tables that contain child IDs.
I know that the CBO is not all-knowing or infallible, but it really bothers me to see such a costly execution plan perform so well; it goes against a lot of what I've been taught. With the FIRST_ROWS hint, the solution was to provide a value n such that the optimizer could reliably generate the execution plan. Is something similar happening with the ORDERED hint in my query?
The reported cost is for the execution of the complete query, not just the first set of rows. (PostgreSQL does the costing slightly differently, in that it provides the cost for the initial return of rows and for the complete set).
For some plans the majority of the cost is incurred prior to returning the first rows (eg where a sort-merge is used), and for others the initial cost is very low but the cost per row is relatively high thereafter (eg. nested loop join).
So if you are optimising for the return of the first few rows and joining 19 tables, you may get a very low cost for the return of the first 20 with a nested-loop-based plan. However, for the complete set of rows, the cost of that plan might be very much higher than that of other plans optimised for returning all rows at the expense of a delay in returning the first.
You should not rely on the execution cost to optimize a query. What matters is the execution time (and in some cases resource usages).
From the concept guide:
The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan.
When the estimate is off, it is most often because the statistics available to the optimizer are misleading. You can correct that by giving the optimizer more accurate statistics. Check that the statistics are up to date. If they are, you can gather additional statistics, for example by enabling dynamic statistics gathering or by manually creating a histogram on a data-skewed column.
Another factor that can explain the disparity between relative cost and execution time is that the optimizer is built upon simple assumptions. For example:
Without a histogram, every value in a column is assumed to be uniformly distributed
An equality operator will select 5% of the rows (without a histogram or dynamic stats)
The data in each column is independent of the data in every other column
Furthermore, for queries with bind variables, a single cost is computed for subsequent executions (even if the bind values change, possibly modifying the cardinality of the query)
...
These assumptions are made so that the optimizer can return an execution cost that is a single figure (and not an interval). For most queries these approximations don't matter much and the result is good enough.
However, you may find that sometimes the situation is simply too complex for the optimizer and even gathering extra statistics doesn't help. In that case you'll have to manually optimize the query, either by adding hints yourself, by rewriting the query or by using Oracle tools (such as SQL profiles).
If Oracle could devise a way to accurately determine the execution cost, we would never need to optimize a query manually in the first place!
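The uniform-distribution assumption is easy to illustrate numerically: without a histogram, the estimated row count for an equality predicate is total rows divided by distinct values, and a skewed column breaks that badly. A sketch with made-up data (not Oracle internals):

```python
from collections import Counter

# a skewed column: one value dominates, a few are rare
column = ["common"] * 990 + ["rare%d" % i for i in range(10)]

total_rows = len(column)                  # 1000
distinct = len(set(column))               # 11
uniform_estimate = total_rows / distinct  # ~90.9 rows for ANY equality predicate

actual = Counter(column)
print(uniform_estimate, actual["common"], actual["rare0"])
# estimate ~91 rows, vs. actual 990 for 'common' and 1 for 'rare0'
```

An estimate that is off by a factor of 10 in either direction can easily push the optimizer toward the wrong join order or access path, which is where a histogram on the skewed column pays off.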

How to improve query performance

I have a lot of records in the table. When I execute the following query it takes a lot of time. How can I improve the performance?
SET ROWCOUNT 10
SELECT StxnID
,Sprovider.description as SProvider
,txnID
,Request
,Raw
,Status
,txnBal
,Stxn.CreatedBy
,Stxn.CreatedOn
,Stxn.ModifiedBy
,Stxn.ModifiedOn
,Stxn.isDeleted
FROM Stxn,Sprovider
WHERE Stxn.SproviderID = SProvider.Sproviderid
AND Stxn.SProviderid = ISNULL(@pSProviderID, Stxn.SProviderid)
AND Stxn.status = ISNULL(@pStatus, Stxn.status)
AND Stxn.CreatedOn BETWEEN ISNULL(@pStartDate, getdate()-1) AND ISNULL(@pEndDate, getdate())
AND Stxn.CreatedBy = ISNULL(@pSellerId, Stxn.CreatedBy)
ORDER BY StxnID DESC
The stxn table has more than 100,000 records.
The query is run from a report viewer in asp.net c#.
This is my go-to article when I'm trying to do a search query that has several search conditions which might be optional.
http://www.sommarskog.se/dyn-search-2008.html
The biggest problem with your query is the column = ISNULL(@column, column) syntax. MSSQL won't use an index for that. Consider changing it to (@column IS NULL OR column = @column).
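The article's core idea can be sketched in any client language (the real article does this in T-SQL with sp_executesql; the parameter and column names below are copied from the question): build the WHERE clause only from the parameters that were actually supplied, so each combination gets its own, index-friendly plan.

```python
def build_query(p_sprovider_id=None, p_status=None, p_seller_id=None):
    """Assemble a parameterized search query from only the supplied filters."""
    sql = "SELECT StxnID FROM Stxn WHERE 1 = 1"
    params = []
    # append each condition only when its parameter was actually supplied
    if p_sprovider_id is not None:
        sql += " AND Stxn.SProviderID = ?"
        params.append(p_sprovider_id)
    if p_status is not None:
        sql += " AND Stxn.status = ?"
        params.append(p_status)
    if p_seller_id is not None:
        sql += " AND Stxn.CreatedBy = ?"
        params.append(p_seller_id)
    return sql + " ORDER BY StxnID DESC", params

sql, params = build_query(p_status="OK")
print(sql)   # only the status filter appears in the WHERE clause
```

Because unused filters never appear in the SQL text, the optimizer never has to plan around ISNULL(@param, column) predicates at all.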
You should consider looking at the execution plan and checking for missing indexes. Also, how long does it take to execute? What is slow for you?
Maybe you could also return fewer rows, but that is just a guess. Really, we would need to see your table and indexes plus the execution plan.
Check sql-tuning-tutorial
For one, use SELECT TOP (n) instead of SET ROWCOUNT; the optimizer will have a much better chance that way. Another suggestion is to use a proper INNER JOIN instead of the old-style table,table join syntax, which can end up in a cartesian product (not the case here, but it happens much more easily with the old syntax). It should be:
...
FROM Stxn INNER JOIN Sprovider
ON Stxn.SproviderID = SProvider.Sproviderid
...
And if you think 100K rows is a lot, or that this volume is a reason for slowness, you're sorely mistaken. Most likely you have really poor indexing strategies in place, possibly some parameter sniffing, possibly some implicit conversions... hard to tell without understanding the data types, indexes and seeing the plan.
There are a lot of things that could impact the performance of a query, although 100k records really isn't all that many.
Items to consider (in no particular order)
Hardware:
Is SQL Server memory constrained? In other words, does it have enough RAM to do its job? If it is swapping memory to disk, then this is a sure sign that you need an upgrade.
Is the machine disk constrained? In other words, are the drives fast enough to keep up with the queries you need to run? If it's memory constrained, then disk speed becomes a larger factor.
Is the machine processor constrained? For example, when you execute the query, does the processor spike for long periods of time? Or are there already lots of other queries running that are taking resources away from yours?
Database Structure:
Do you have indexes on the columns used in your where clause? If the tables do not have indexes then it will have to do a full scan of both tables to determine which records match.
Eliminate the ISNULL function calls. If this is a direct query, have the calling code validate the parameters and set default values before executing. If it is in a stored procedure, do the checks at the top of the proc. Unless you execute this WITH RECOMPILE so the plan benefits from parameter sniffing, those functions may have to be evaluated for each row.
Network:
Is the network slow between you and the server? Depending on the amount of data pulled, you could be moving GBs of data across the wire. I'm not sure what is stored in the "raw" column. The first question to ask here is "how much data is going back to the client?" For example, if each record is 1 MB+ in size, then you'll probably have disk and network constraints at play.
General:
I'm not sure what "slow" means in your question. Does it mean that the query is taking around 1 second to process or does it mean it's taking 5 minutes? Everything is relative here.
Basically, it is going to be impossible to give a hard answer without a lot of questions being asked of you. All of this will bear out if you profile the queries, understand what and how much is going back to the client, and watch the interactions among the various parts.
Finally depending on the amount of data going back to the client there might not be a way to improve performance short of hardware changes.
Make sure Stxn.SproviderID, Stxn.status, Stxn.CreatedOn, Stxn.CreatedBy, Stxn.StxnID and SProvider.Sproviderid all have indexes defined.
(NB: you might not need all of them, but it can't hurt.)
I don't see much that can be done on the query itself, but I can see things being done on the schema:
Create an index / PK on Stxn.SproviderID
Create an index / PK on SProvider.Sproviderid
Create indexes on status, CreatedOn, CreatedBy, StxnID
Something to consider: when ROWCOUNT or TOP is used with an ORDER BY clause, the entire result set is created and sorted first, and only then are the top 10 results returned.
How does this run without the ORDER BY clause?