I have a class (that reflects a db row) with more than 200 instances created during the code's bootstrap. Each one runs a single SELECT query whose only condition is WHERE `tblA`.`AID` = #, but I was thinking of building one query that strings the 200 WHERE clauses together with OR logic and then creating the 200 objects from the data already in the result, so only 1 query is issued.
I am implementing this on a test server at the moment, but I was wondering whether this is a bad step for efficiency, and at what point it would be better to split it into 2 sets of queries, each taking care of half the clauses (or however many more need to be made)?
Additionally, I am writing a performance enhancement into it to replace something like
WHERE `tblA`.`AID` = 2 OR `tblA`.`AID` = 3 OR `tblA`.`AID` = 5 OR `tblA`.`AID` = 6 OR `tblA`.`AID` = 7
with
WHERE (`tblA`.`AID` >= 2 AND `tblA`.`AID` <= 3) OR (`tblA`.`AID` >= 5 AND `tblA`.`AID` <= 7)
or even
WHERE `tblA`.`AID` >= 2 AND `tblA`.`AID` <= 7 AND `tblA`.`AID` <> 4
If you have a discrete list, then just use IN...
where AID in (2, 3, 5, 6, 7, ...)
And let the SQL engine worry about the optimization.
The biggest hit is likely to be the time to parse the query and to send a large query string to the engine. If your list gets really long, then consider putting the list in a temporary table, building an index on the table, and doing a join.
You don't specify what database you are using, but this advice is pretty database-agnostic.
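A minimal sketch of that temporary-table approach, assuming MySQL/PostgreSQL-style syntax and reusing the tblA/AID names from the question:
-- Load the id list once, with an index (the primary key provides it)
CREATE TEMPORARY TABLE wanted_ids (AID INT PRIMARY KEY);
INSERT INTO wanted_ids (AID) VALUES (2), (3), (5), (6), (7);  -- ...the full list of 200 ids
-- One query, one round trip
SELECT a.*
FROM tblA AS a
JOIN wanted_ids AS w ON w.AID = a.AID;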
You still haven't specified the DBMS.
For SQL Server this might be modestly worthwhile (though you may well want to consider joining on a table-valued parameter or similar rather than having a lengthy IN list anyway; a sketch of that approach follows the examples below).
SQL Server will do separate individual seeks rather than collapse them into contiguous ranges. This is covered thoroughly in the article When is a Seek not a Seek?, but some examples are below.
CREATE TABLE T (X int PRIMARY KEY)
INSERT INTO T
SELECT TOP 1000000 ROW_NUMBER() OVER (ORDER BY @@SPID)
FROM master..spt_values v1, master..spt_values v2
SET STATISTICS IO ON;
SELECT *
FROM T
WHERE X IN ( 1, 2, 3, 4, 5, 6, 7, 8, 9,10,
11,12,13,14,15,16,17,18,19,20,
21,22,23,24,25,26,27,28,29,30,
31,32,33,34,35,36,37,38,39,40,
41,42,43,44,45,46,47,48,49,50,
51,52,53,54,55,56,57,58,59,60,
61,62,63,64)
Table 'T'. Scan count 64, logical reads 192
SELECT *
FROM T
WHERE X BETWEEN 1 AND 64
Table 'T'. Scan count 1, logical reads 3
As mentioned in the comments on the article, for more than 64 values you will get a slightly different plan that adds a table of constants and a nested loops join into the mix.
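For reference, a rough sketch of the table-valued-parameter approach mentioned earlier; the type name dbo.IntIdList is an assumption, and a local table variable stands in for the parameter the application would actually pass:
-- One-time setup: a table type the client can pass as a parameter
CREATE TYPE dbo.IntIdList AS TABLE (Id int PRIMARY KEY);
GO
-- In a real call, @Ids would arrive from the application as a READONLY table-valued parameter;
-- here it is filled inline just to show the join against the demo table T
DECLARE @Ids dbo.IntIdList;
INSERT INTO @Ids (Id) VALUES (1), (2), (3), (63), (64);
SELECT t.X
FROM T AS t
JOIN @Ids AS i ON i.Id = t.X;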
I have a database at the ten-million-row level. The client needs to read the data and perform calculations.
Because of the large amount of data, keeping it all in the application cache would overflow memory and crash the application.
If I query the data from the database in real time with SELECT statements, it may take too long and the database may be hit too frequently.
Is there a better way to read the database data? I use C++ and C# to access a SQL Server database.
My database statement is similar to the following:
SELECT TOP 10 y.SourceName, MAX(y.EndTimeStamp - y.StartTimeStamp) AS ProcessTimeStamp
FROM
(
SELECT x.SourceName, x.StartTimeStamp, IIF(x.EndTimeStamp IS NOT NULL, x.EndTimeStamp, 134165256277210658) AS EndTimeStamp
FROM
(
SELECT
SourceName,
Active,
LEAD(Active) OVER(PARTITION BY SourceName ORDER BY TicksTimeStamp) NextActive,
TicksTimeStamp AS StartTimeStamp,
LEAD(TicksTimeStamp) OVER(PARTITION BY SourceName ORDER BY TicksTimeStamp) EndTimeStamp
FROM Table1
WHERE Path = N'App1' and TicksTimeStamp >= 132165256277210658 and TicksTimeStamp < 134165256277210658
) x
WHERE (x.Active = 1 and x.NextActive = 0) OR (x.Active = 1 and x.NextActive = null)
) y
GROUP BY y.SourceName
ORDER BY ProcessTimeStamp DESC, y.SourceName
The database structure is roughly as follows:
ID Path SourceName TicksTimeStamp Active
1 App1 Pipe1 132165256277210658 1
2 App1 Pipe1 132165256297210658 0
3 App1 Pipe1 132165956277210658 1
4 App2 Pipe2 132165956277210658 1
5 App2 Pipe2 132165956277210658 0
I use ExecuteReader from C#. The same SQL statement runs in SQL Server Management Studio in 4 s, but ExecuteReader takes 8-9 s to return. Does the slow time have anything to do with this interface?
I don't really 'get' the entire query but I'm wondering about this part:
WHERE (x.Active = 1 and x.NextActive = 0) OR (x.Active = 1 and x.NextActive = null)
SQL doesn't really like ORs, so why not convert this to
WHERE x.Active = 1 and ISNULL(x.NextActive, 0) = 0
This might cause a completely different query plan. (or not)
As CharlieFace mentioned, probably best to share the query plan so we might get an idea of what's going on.
PS: I'm also not sure what those 'ticksTimestamps' represent, but it looks like you're fetching a pretty wide range there; bigger volumes will also cause longer processing times. Even though you only return the top 10, it still has to go through the entire range to calculate those durations.
I agree with @Charlieface. I think the index you want is as follows:
CREATE INDEX idx ON Table1 (Path, TicksTimeStamp) INCLUDE (SourceName, Active);
You can add both indexes (with different names of course) and see which one the execution engine chooses.
I can suggest adding the following index which should help the inner query using LEAD:
CREATE INDEX idx ON Table1 (SourceName, TicksTimeStamp, Path) INCLUDE (Active);
The key point of the above index is that it should allow the lead values to be rapidly computed. It also has an INCLUDE clause for Active, to cover the entire select.
Suppose I have a table with a column that takes values from 1 to 10. I need to select rows with all values except 9 and 10. Will there be a difference (performance-wise) between using this query:
SELECT * FROM tbl WHERE col NOT IN (9, 10)
and this one?
SELECT * FROM tbl WHERE col IN (1, 2, 3, 4, 5, 6, 7, 8)
Use "IN" as it will most likely make the DBMS use an index on the corresponding column.
"NOT IN" could in theory also be translated into an index usage, but in a more complicated way which DBMS might not "spend overhead time" using.
When it comes to performance you should always profile your code (i.e. run your queries a few thousand times and measure each loop's performance using some kind of stopwatch; sample).
But here I highly recommend using the first query for better future maintainability. The logic is that you need all records except 9 and 10. If you add value 11 to your table and use the second query, the logic of your application will break, which of course leads to a bug.
Edit: I remember this being tagged as php, which is why I provided the sample in php, but I might be mistaken. I guess it won't be hard to rewrite that sample in the language you're using.
I have seen Oracle have trouble optimizing some queries with NOT IN if columns are nullable. If you can write your query either way, IN is preferred as far as I'm concerned.
For a list of constants, MySQL will internally expand your code to:
SELECT * FROM tbl WHERE ((col <> 9 and col <> 10))
The same goes for the other one, with 8 equality (=) comparisons instead.
So yes, the first one will be faster: fewer comparisons to be done. The chance that it is measurable is negligible though; the overhead of a handful of constant comparisons is nothing compared to the general overhead of parsing SQL and retrieving data.
"IN" statement works internally like a serie of "OR" statements.
For example:
SELECT * FROM tbl WHERE col IN (1, 2, 3)
It is equal to
SELECT * FROM tbl WHERE col = 1 OR col = 2 OR col = 3
"OR" statements could cause some performance issues as explained in this article:
https://bertwagner.com/2018/02/20/or-vs-union-all-is-one-better-for-performance/
When you use a NOT IN statement, it works the same way, but the result is negated. However, you can write an equivalent query that performs much better. In your example:
SELECT * FROM tbl WHERE col NOT IN (9, 10)
It is equal to
SELECT * FROM tbl WHERE col <> 9 AND col <> 10
With an "AND" statement, the database stop analizing when one of all conditionals its false, so, its much better in performance than "OR" used in "IN" statement.
I need to use the InnoDB storage engine on a table with about 1 million records in it at any given time. Records are inserted into it at a very fast rate and are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000.
My statement is this:
select url
from website ws, ping pi
where ws.idproxy = pi.idproxy and pi.entrytime > curdate() - 3 and contentping+tcpping is not null
group by url
having sum(contentping+tcpping)/(count(*)-count(errortype)) < 500 and count(*) > 3 and
count(errortype)/count(*) < .15
order by sum(contentping+tcpping)/(count(*)-count(errortype)) asc;
I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.
In the absence of the schemas of the relations, I'll have to make some guesses.
If you're making WHERE a.attrname = b.attrname clauses, that cries out for a JOIN instead.
Using COUNT(*) is both redundant and sometimes less efficient than COUNT(some_specific_attribute). The primary key is a good candidate.
Why would you test contentping+tcpping IS NOT NULL, asking for a calculation that appears unnecessary, instead of just testing whether the attributes individually are null?
Here's my attempt at an improvement:
SELECT url
FROM website AS ws
JOIN ping AS pi
ON ws.idproxy = pi.idproxy
WHERE
pi.entrytime > CURDATE() - 3
AND pi.contentping IS NOT NULL
AND pi.tcpping IS NOT NULL
GROUP BY url
HAVING
SUM(pi.contentping + pi.tcpping) / (COUNT(pi.idproxy) - COUNT(pi.errortype)) < 500
AND COUNT(pi.idproxy) > 3
AND COUNT(pi.errortype) / COUNT(pi.idproxy) < 0.15
ORDER BY
SUM(pi.contentping + pi.tcpping) / (COUNT(pi.idproxy) - COUNT(pi.errortype)) ASC;
Performing lots of identical calculations in both the HAVING and ORDER BY clauses is likely costing you performance. You could either put them in the SELECT clause, or create a view that has those calculations as attributes and use that view for accessing the values.
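For example, here is a sketch of the first option, leaning on MySQL's extension that lets HAVING and ORDER BY refer to a select-list alias (the alias name avg_ping is an assumption):
SELECT
    url,
    SUM(pi.contentping + pi.tcpping) / (COUNT(pi.idproxy) - COUNT(pi.errortype)) AS avg_ping
FROM website AS ws
JOIN ping AS pi ON ws.idproxy = pi.idproxy
WHERE pi.entrytime > CURDATE() - 3
  AND pi.contentping IS NOT NULL
  AND pi.tcpping IS NOT NULL
GROUP BY url
HAVING avg_ping < 500
   AND COUNT(pi.idproxy) > 3
   AND COUNT(pi.errortype) / COUNT(pi.idproxy) < 0.15
ORDER BY avg_ping ASC;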
I'm writing a web application that should show very large results on a search query.
Say some queries will return 10,000 items.
I'd like to show those to users paginated; no problem so far: each page will be the result of a query with an appropriate LIMIT statement.
But I'd like to show clues about results in each page of the paginated query: some data from the first item and some from the last.
This means that, for example, with a result of 10,000 items and a page size of 50 items, if the user asks for the first page I will need:
the first 50 items (the page requested by the user)
items 51 and 100 (the first and last of the second page)
items 101 and 150 (the first and last of the third page)
etc.
For efficiency reasons I want to avoid one query per row.
[edit] I also would prefer not to download 10,000 results if I only need 50 + 10000/50*2 = 450
The question is: is there a single query I can issue to the RDBMS (mysql, by the way, but I'd prefer a cross-db solution) that will return only the data I need?
I can't use server side cursor, because not all dbs support it and I want my app to be database-agnostic.
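For context, the plain per-page query the question has in mind would be something like this (MySQL syntax; the table and column names are assumptions):
SELECT id, title
FROM results
ORDER BY created
LIMIT 50 OFFSET 100;  -- third page with a page size of 50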
Just for fun, here is the MSSQL version of it.
declare @pageSize as int; set @pageSize = 10;
declare @pageIndex as int; set @pageIndex = 0; /* first page */
WITH x AS
(
select
ROW_NUMBER() OVER (ORDER BY (created) ASC) AS RowNumber,
*
from [table]
)
SELECT * FROM x
WHERE
((RowNumber <= (@pageIndex+1)*@pageSize) AND (RowNumber >= @pageIndex*@pageSize+1))
OR
RowNumber % @pageSize = 1
OR
RowNumber % @pageSize = 0
Note that an ORDER BY is provided in the OVER clause.
Also note that if you have a gazillion rows, your result set will have millions of rows; you need to cap the number of result rows for practical reasons.
I have no idea how this could be solved in generic SQL. (My bet: no way. Even simple paging cannot be solved without DB-specific operators.)
UPDATE: I completely misread the initial question. You can do this using UNION and the LIMIT clause in MySQL, although it might be what you meant by "one query per row". The syntax would look something like this:
(select FOO from BAZ limit 50)
union
(select FOO from BAZ limit 50, 1)
union
(select FOO from BAZ limit 99, 1)
union
(select FOO from BAZ limit 100, 1)
union
(select FOO from BAZ limit 149, 1)
and so on and so forth. Since you're using UNION, you'll only need one roundtrip to the database. I'm not sure how MySQL will treat the various SELECT statements, though. It should be able to recognize that they are essentially the same query and use a cached query plan, but I don't work with MySQL enough to know if that's a reasonable expectation for its optimizer.
Obviously, to build this query in a general fashion, you'll first need to run a count query so you can calculate what your offsets will be.
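Sticking with the BAZ table from the sketch above, that count query is simply:
SELECT COUNT(*) AS total_rows FROM BAZ;
-- total_rows divided by the page size gives the number of pages, and the LIMIT offsets follow from that.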
This is definitely not a tractable problem for standard SQL, since the paging logic requires nonstandard features.
The problem: we have a very complex search query. If its result yields too few rows, we expand the result by UNIONing the query with a less strict version of the same query.
We are discussing whether a different approach would be faster and/or better in quality. Instead of UNIONing, we would create a custom SQL function that returns a matching score. Then we could simply order by that matching score.
Regarding performance: will it be slower than a UNION?
We use PostgreSQL.
Any suggestions would be greatly appreciated.
Thank you very much
Max
A definitive answer can only be given if you measure the performance of both approaches in realistic environments. Everything else is guesswork at best.
There are so many variables at play here - the structure of the tables and the types of data in them, the distribution of the data, what kind of indices you have at your disposal, how heavy the load on the server is - it's almost impossible to predict any outcome, really.
So really - my best advice is: try both approaches, on the live system, with live data, not just with a few dozen test rows - and measure, measure, measure.
Marc
You want to order by the "return value" of your custom function? Then the database server can't use an index for that. The score has to be calculated for each record in the table (that hasn't been excluded by a WHERE clause) and stored in some temporary storage/table, and the ORDER BY is then performed on that temporary table. So this can easily get slower than your UNION queries (depending on your UNION statements, of course).
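To make that concrete, here is a hypothetical sketch of the scoring approach in PostgreSQL; match_score, the documents table and its columns are all invented for illustration:
SELECT d.*,
       match_score(d.title, d.body, 'search terms') AS score  -- evaluated once per surviving row
FROM documents AS d
WHERE d.language = 'en'                -- whatever strict filters remain
ORDER BY score DESC                    -- no plain index can serve this; all matching rows must be sorted
LIMIT 50;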
To add my little bit...
+1 to marc_s, completely agree with what he said. I would only add that you need a test DB server with realistic data volumes to test on, as opposed to the production server.
For the function approach, the function would be executed for each record and the results then ordered by it; this will not be an indexed column, so I'd expect to see a negative impact on performance. However, how big that impact is, and whether it actually comes out worse than the cumulative time of the other approach, is only going to be known by testing.
In PostgreSQL 8.3 and below, UNION implied DISTINCT, which implied sorting. That means ORDER BY, UNION and DISTINCT were always of the same efficiency, since the latter two always used sorting.
On PostgreSQL 8.3, this query returns the sorted results:
SELECT *
FROM generate_series(1, 10) s
UNION
SELECT *
FROM generate_series(5, 15) s
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Since PostgreSQL 8.4 it has been possible to use HashAggregate for UNION, which may be faster (and almost always is), but does not guarantee ordered output.
The same query returns the following on PostgreSQL 8.4:
SELECT *
FROM generate_series(1, 10) s
UNION
SELECT *
FROM generate_series(5, 15) s
10
15
8
6
7
11
12
2
13
5
4
1
3
14
9
As you can see, the results are not sorted.
PostgreSQL change list mentions this:
SELECT DISTINCT and UNION/INTERSECT/EXCEPT no longer always produce sorted output (Tom)
So in newer PostgreSQL versions, I'd advise using UNION, since it's more flexible.
In older versions, the performance will be the same.
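If the output order matters, the version-proof approach is to state it explicitly instead of relying on how UNION happens to be implemented:
SELECT *
FROM generate_series(1, 10) s
UNION
SELECT *
FROM generate_series(5, 15) s
ORDER BY 1;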