Performance analysis of a SQL query

Which will perform better?
Query 1:
select (select i.a from innertable i where i.val=o.val)
, val1, val2
from outertable o
Query 2:
select i.a
,o.val1
,o.val2
from outertable o
join innertable i on i.val=o.val
Why? Please advise.

As Ollie suggests, the only definitive way to determine which of two queries is more efficient is to benchmark the two approaches against your data, since the performance of the two alternatives is likely to depend on data volumes, data structures, what indexes are present, and so on.
In general, the two queries that you posted will return different results. Unless you are guaranteed that every row in outertable has exactly one corresponding row in innertable, the two queries will return a different number of rows. The first query will return a row for every row in outertable with a NULL as the first column if there is no matching row in innertable. The second query will not return anything if there is no matching row in innertable. Similarly, if there are multiple matching rows in innertable for any particular row in outertable, the first query will return an error while the second query will return multiple rows for that row in outertable.
If you are confident that the two queries return identical result sets in your particular case because you can guarantee that there is exactly one row in innertable for every row in outertable (in which case it is at least somewhat odd that your data model separates the tables), the second option would be the much more natural way to write the query and thus the one for which the optimizer is most likely to find the more efficient plan.
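If what you actually want is the outer behaviour of Query 1 (keep every row of outertable and return NULL when there is no match), the explicit equivalent would be a LEFT JOIN. A sketch, assuming at most one matching row in innertable per value of val:
select i.a
,o.val1
,o.val2
from outertable o
left join innertable i on i.val=o.val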

Related

How does a DELETE FROM with a SELECT in the WHERE work?

I am looking at an application and I found this SQL:
DELETE FROM Phrase
WHERE Modified < (SELECT Modified FROM PhraseSource WHERE Id = Phrase.PhraseId)
The intention of the SQL is to delete rows from Phrase where there are more recent rows in the PhraseSource table.
Now I know the tables Phrase and PhraseSource have the same columns and that Modified holds the number of seconds since 1970, but I cannot understand how/why this works or what it is doing. When I look at it, it seems like on the left of the < there is just one column, while on the right of the < there would be many rows. Does it even make any sense?
The two tables are identical and have the following structure
Id - GUID primary key
...
...
...
Modified int
The ... columns are about ten columns containing text and numeric data. The PhraseSource table may or may not contain more recent rows with a higher number in the Modified column and different text and numeric data.
The SELECT statement in parentheses is a sub-query, or nested query.
What happens is that, for each row in Phrase, the Modified column value is compared with the result of the sub-query (which is run once per row of the Phrase table).
The sub-query has a WHERE clause, so it finds the row in PhraseSource whose Id matches the PhraseId of the Phrase row currently being evaluated and returns its Modified value (a single scalar value, since it comes from a single row).
The two Modified values are compared, and if the Phrase row was modified before the corresponding row in PhraseSource, it is deleted.
As you can see this approach is not efficient, because it requires the database to run a separate query for each of the rows in the Phrase table (although I imagine that some databases might be smart enough to optimize this a little bit).
A better solution
The more efficient solution would be to use INNER JOIN:
DELETE p FROM Phrase p
INNER JOIN PhraseSource ps
ON p.PhraseId=ps.Id
WHERE p.Modified < ps.Modified
This should do exactly the same thing as your query, but using an efficient JOIN. The INNER JOIN uses the ON clause to decide how to "match" rows in the two tables (which the database can do very efficiently), and the WHERE clause then compares the Modified values of the matched rows.
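Note that DELETE ... FROM ... INNER JOIN is SQL Server/MySQL syntax; other databases spell the same idea differently. On PostgreSQL, for example, a sketch of the equivalent (keeping your table and column names) would be:
DELETE FROM Phrase p
USING PhraseSource ps
WHERE ps.Id = p.PhraseId
AND p.Modified < ps.Modified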

Inconsistent results from BigQuery: same query, different number of rows

I noticed today that one of my queries was returning inconsistent results: every time I run it, a different number of rows is returned (cache deactivated).
Basically the query looks like this:
SELECT *
FROM mydataset.table1 AS t1
LEFT JOIN EACH mydataset.table2 AS t2
ON t1.deviceId=t2.deviceId
LEFT JOIN EACH mydataset.table3 AS t3
ON t2.email=t3.email
WHERE t3.email IS NOT NULL
AND (t3.date IS NULL OR DATE_ADD(t3.date, 5000, 'MINUTE')<TIMESTAMP('2016-07-27 15:20:11') )
The tables are not updated between each query. So I'm wondering if you also have noticed that kind of behaviour.
I usually write queries that return a lot of rows (>1000), so a few missing rows here and there are hardly noticeable. But this query returns only a few rows, and the count varies every time, somewhere between 10 and 20 rows :-/
If a Google engineer is reading this, here are two Job IDs of the same query with different results:
picta-int:bquijob_400dd739_1562d7e2410
picta-int:bquijob_304f4208_1562d7df8a2
Unless I'm missing something, the query that you provided is completely deterministic and so should give the same result every time you execute it. But you say it's "basically" the same as your real query, so the difference may be due to something you changed.
There are a couple of things you can do to try to find the cause:
replace select * with an explicit list of fields from your tables (a combination of fields that uniquely determines each row)
order the result by these fields, so that the row order is the same each time you execute the query
simplify your query. In the above query, you can remove the first condition and turn the two left outer joins into inner joins and get the same result (a simplified version is sketched below). After that, you could start removing tables and conditions one by one.
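For example, applying the first three suggestions to the query above might give something like this (the selected fields here are an assumption; use whichever combination uniquely identifies your rows):
SELECT t1.deviceId, t2.email, t3.date
FROM mydataset.table1 AS t1
JOIN EACH mydataset.table2 AS t2
ON t1.deviceId=t2.deviceId
JOIN EACH mydataset.table3 AS t3
ON t2.email=t3.email
WHERE t3.date IS NULL OR DATE_ADD(t3.date, 5000, 'MINUTE')<TIMESTAMP('2016-07-27 15:20:11')
ORDER BY t1.deviceId, t2.email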
After each step, check if you still get different result sets. Then when you have found the critical step, try to understand why it causes your problem. (Or ask here.)

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is my hardware isn't so great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the below code, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts and then filtering/analysing/combining them as needed going forward. I think that because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the code below:
df <- querySql(conn,"SELECT TOP 5000000 * FROM Table1")
And then where I get stuck is how to create a second dataframe containing the remaining n - 5000000 rows, from row 5,000,001 to the final row retrieved from Table1.
I'm open to suggestions but I think there are two potential answers to this question. The first is to work within the querySql to get it working. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs do impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records to be returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (let's call these the "selected records") and if you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
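To make that concrete, such a follow-up query would have to look something like the sketch below, which is clearly impractical with five million literal values (id here stands for whatever uniquely identifies your rows):
select * from table1
where id not in ( /* the 5,000,000 id values returned by the first query */ );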
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the partition. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>0 and rn<=5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>5000000 and rn<=10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>10000000 and rn<=15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
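For example, on SQL Server 2012 or later the first two pages of the limit/offset approach might look like this (a sketch; as before, id1, id2, ... stand for columns that fully order the result):
select * from table1 order by id1, id2, ... offset 0 rows fetch next 5000000 rows only;
select * from table1 order by id1, id2, ... offset 5000000 rows fetch next 5000000 rows only;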

Join 50 million rows, one to one

I have two tables having 50 million unique rows each.
Row number from one table corresponds to row number in the second table.
i.e. the 1st row in the 1st table joins with the 1st row in the second table, the 2nd row in the first table joins with the 2nd row in the second table, and so on. Doing an inner join is costly.
It takes more than 5 hours on clusters. Is there an efficient way to do this in SQL?
To start with: tables are just sets. So the row number of a record can be considered pure coincidence. You must not join two tables based on row numbers. So you would join on IDs rather than on row numbers.
There is nothing more efficient than a simple inner join. As the whole of both tables must be read, you might not even gain anything from indexes (but as we are talking about IDs, there will be indexes anyhow, so nothing to ponder there).
Depending on the DBMS you may be able to parallelize the query. In Oracle for example you would use a hint such as /*+ parallel( tablename , parallel_factor ) */.
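For example, on Oracle a parallelized version of the join might look like the sketch below (the table names, the id column, the selected columns, and the degree of parallelism 8 are placeholders):
select /*+ parallel(t1, 8) parallel(t2, 8) */
       t1.id, t1.col_a, t2.col_b
from table1 t1
join table2 t2 on t2.id = t1.id;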
Try to sort both tables by rows (if they aren't sorted already), then use a normal SELECT (maybe you could use LIMIT to get the data part by part) for both tables, and connect the data line by line wherever you want.

Fast Way To Estimate Rows By Criteria

I have seen a few posts detailing fast ways to "estimate" the number of rows in a given SQL table without using COUNT(*). However, none of them seem to really solve the problem if you need to estimate the number of rows which satisfy a given criteria. I am trying to get a way of estimating the number of rows which satisfy a given criteria, but the information for these criteria is scattered around two or three tables. Of course a SELECT COUNT(*) with the NOLOCK hint and a few joins will do, and I can afford under- or over-estimating the total records. The problem is that this kind of query will be running every 5-10 minutes or so, and since I don't need the actual number, only an estimate, I would like to trade off accuracy for speed.
The solution, if any, may be "SQL Server"-specific. In fact, it must be compatible with SQL Server 2005. Any hints?
There is no easy way to do this. You can get an estimate for the total number of rows in a table, e.g. from system catalog views.
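On SQL Server specifically, a cheap estimate of the total row count can be read from the partition statistics instead of counting; a sketch, assuming SQL Server 2005 or later and a table named dbo.YourTable:
SELECT SUM(ps.row_count) AS EstimatedRowCount
FROM sys.dm_db_partition_stats ps
WHERE ps.object_id = OBJECT_ID('dbo.YourTable')
AND ps.index_id IN (0, 1)   -- 0 = heap, 1 = clustered index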
But there's no way to do this for a given set of criteria in a WHERE clause - either you would have to keep counts for each set of criteria and values, or you'd have to use black magic to find that out. The only place where SQL Server keeps something that goes in that direction is the statistics it maintains on indices. Those contain certain information about which values occur, and how frequently, in an index - but I quite honestly have no idea whether (and how) you could leverage the information in those statistics in your own queries...
If you really must know the number of rows matching a certain criteria, you need to do a count of some sort - either a SELECT COUNT(*) FROM dbo.YourTable WHERE (yourcriteria) or something else.
Something else could be something like this:
wrap your SELECT statement into a CTE (Common Table Expression)
define a ROW_NUMBER() in that CTE ordering your data by some column (or set of columns)
add a second ROW_NUMBER() to that CTE that orders your data by the same column (or columns) - but in the opposite direction (DESC vs. ASC)
Something like this:
;WITH YourDataCTE AS
(
SELECT (list of columns you need),
ROW_NUMBER() OVER(ORDER BY <your column>) AS 'RowNum',
ROW_NUMBER() OVER(ORDER BY <your column> DESC) AS 'RowNum2'
FROM
dbo.YourTable
WHERE
<your conditions here>
)
SELECT *
FROM YourDataCTE
Doing this, you would get the following effect:
your first row in your result set will contain your usual data columns
the first ROW_NUMBER() will contain the value 1
the second ROW_NUMBER() will contain the total number of rows that match that criteria set
It's surprisingly good at dealing with small to mid-size result sets - I haven't tried yet how it'll hold up with really large result sets - but it might be something to investigate and see if it works.
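If all you ever read from this is the estimate itself, you could replace the final SELECT * above with something that picks it off the first row; a sketch:
SELECT TOP 1 RowNum2 AS TotalMatchingRows
FROM YourDataCTE
ORDER BY RowNum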
Possible solutions:
If the number of matching rows is small in comparison to the total number of rows in the table, then adding indexes that cover the WHERE condition will help, and the query will be very fast.
If the result number is close to the total number of rows in the table, indexes will not help much. You could instead implement a trigger that maintains a 'conditional count table': whenever a row matching the condition is added you increment the value in that table, and when such a row is deleted you decrement it. You then query this small 'summary count table' instead of counting.
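A minimal sketch of such a trigger on SQL Server (the summary table, the trigger name, and the <your conditions here> placeholder are assumptions, not part of the original question; an UPDATE is also covered, because its old rows appear in deleted and its new rows in inserted):
CREATE TABLE dbo.ConditionalCount (MatchingRows INT NOT NULL);
INSERT INTO dbo.ConditionalCount (MatchingRows) VALUES (0);
GO
CREATE TRIGGER dbo.trg_YourTable_ConditionalCount
ON dbo.YourTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- net change = rows that now match the criteria minus rows that no longer do
    UPDATE dbo.ConditionalCount
    SET MatchingRows = MatchingRows
        + (SELECT COUNT(*) FROM inserted WHERE <your conditions here>)
        - (SELECT COUNT(*) FROM deleted WHERE <your conditions here>);
END;
GO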