Strange ordering bug (is it a bug?) when ordering two columns with identical values - sql

I have the following query in postgres:
SELECT *
FROM "bookings"
WHERE ("bookings".client_id = 50)
ORDER BY session_time DESC
LIMIT 20 OFFSET 0
The record in 20th place has an identical session_time to the 21st record.
This query returns 20 results; however, if you compare the results to the whole table, the query returns the 1st-19th records and the 21st, skipping over the 20th.
This query can be fixed by adding "id" to the ORDER BY:
SELECT *
FROM "bookings"
WHERE ("bookings".client_id = 50)
ORDER BY session_time DESC, id
LIMIT 20 OFFSET 0
However, I was wondering how this bug occurred. How does Postgres order rows with identical values when using offsets and limits? Is it random? Is it a bug in Postgres?

This is not a bug. The limit and offset are applied after ordering, and when the ordering leaves ties it is not deterministic which rows are selected in one case vs. another. In general you want a tiebreaker so that your ordering is stable and deterministic (I prefer to use unique tiebreakers even when I don't have limit or offset issues, in order to ensure the query returns the same result each time it is run).
If you are doing pagination, add the primary key or surrogate key to the sort as a tiebreaker. That is really the best way.
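For example, with the tiebreaker in place the second page is guaranteed to pick up exactly where the first left off. A sketch based on the corrected query above:

SELECT *
FROM "bookings"
WHERE ("bookings".client_id = 50)
ORDER BY session_time DESC, id
LIMIT 20 OFFSET 20

Because id is unique, the (session_time, id) ordering is total, and each page returns the same rows no matter how the query plan changes.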

Related

SQL Top clause with order by clause

I am a bit new to SQL, and I want to write a query with a TOP clause and an ORDER BY clause.
So, to return all the records I wrote the query below:
select PatientName, PlanDate as Date, *
from OPLMLA21..Exams
order by PlanDate desc
And I need the top few rows from the same query, so I modified it to:
select top (5) PatientName, PlanDate as Date, *
from OPLMLA21..Exams
order by PlanDate desc
In my understanding it will give the top 5 results from the previous query, but I see ambiguity there. I have attached a screenshot of the query results.
Maybe my understanding is wrong; I have read a lot but am still not able to understand this. Please help me out.
I stated this in a comment; however, to repeat:
 TOP (5) doesn't give the "top results" of the prior query though, no. It gives the top (first) rows from the dataset defined in the query it is in. If there are multiple rows that have the same "rank", then the row(s) returned for that rank are arbitrary. So, for example, for your query, if you have 100 rows all with the same value for PlanDate, which 5 rows you get is completely arbitrary and could be different (including the order they are in) every time you run said query.
What I mean by arbitrary is that, effectively, SQL Server is free to choose whatever rows, of those applicable, are returned. Sometimes this might be the same every time you run the query, but that is by luck more than anything. As your database gets larger, you have more users querying the data, you involve joins, and things like locks, indexes, parallelism, etc. all affect the "order" in which SQL Server processes said data, and will affect an ambiguous TOP clause.
Take the example data below:
ID | SomeDate
---|-----------
 1 | 2020-01-01
 2 | 2020-01-01
 3 | 2020-01-01
 4 | 2020-01-01
 5 | 2020-01-01
 6 | 2020-01-02
Now, what would you expect if I ran a TOP (2) against that table with an ORDER BY clause of SomeDate DESC? Well, certainly, you'd expect the "last" row (with an ID of 6) to be returned, but what about the next row? The other 5 rows all have the same value for SomeDate. Perhaps, because you're under the impression that data in a table is pre-sorted, you might expect the row with a value of 5 for ID. What if I told you that there was a CLUSTERED INDEX on ID ASC; that might well end up meaning that the row with a value of 1 is returned. What if there is also an index on SomeDate DESC?
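Concretely, a query like the following (SomeTable is a hypothetical name for the sample data above) is guaranteed to return the ID 6 row, but the second row could be any of IDs 1 through 5:

SELECT TOP (2) ID, SomeDate
FROM SomeTable
ORDER BY SomeDate DESC;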
What if the table was 10,000 rows in size, and you also have a JOIN to another table, which also has a CLUSTERED INDEX, and some user is running a query that takes row locks on it while you run your query? What would you expect then?
Without your ORDER BY being specific enough to ensure that each row has a distinct ordering position, SQL Server is free to return the tied rows in an arbitrary order, and when that is mixed with a TOP clause the "top" rows will also be arbitrary.
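The fix is the same tiebreaker idea as in the Postgres question above: make the ORDER BY unique. A sketch against the hypothetical SomeTable:

SELECT TOP (2) ID, SomeDate
FROM SomeTable
ORDER BY SomeDate DESC, ID DESC;

Now the result is always ID 6 followed by ID 5, regardless of which indexes exist or how the plan changes.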
Side note: I note in your image (of what appears to be SSMS), your "dates" are in the format yyyyMMdd. This strongly implies that you are storing a date value as a varchar or int type. This is a design flaw and needs to be fixed. There are 6 date and time data types, and 5 of them are far superior to using a string or numerical data type for storing the data.
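If the column really is a varchar holding well-formed yyyyMMdd strings, the conversion can be as simple as the sketch below; this is an assumption about the schema, and an int column would first need a cast through char:

-- hypothetical fix, assuming PlanDate is varchar holding yyyyMMdd values
ALTER TABLE OPLMLA21..Exams ALTER COLUMN PlanDate date;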

Are big-query results always ordered, that is: using OFFSET makes sense to skip rows?

In other words, does a select query order results the same way every time, so that these two will always produce non-overlapping rows:
select *
from `bigquery-public-data.crypto_ethereum.balances`
limit 10 OFFSET 100

select *
from `bigquery-public-data.crypto_ethereum.balances`
limit 10 OFFSET 2000
Assuming of course the table has unique values... I am just curious if, without using an "order" clause, the table is always deterministic/consecutive or whether the results can duplicate if they're returned indeed at random. Thanks!
I am just curious if, without using an "order" clause, the table is always deterministic/consecutive or whether the results can duplicate if they're returned indeed at random.
No. SQL tables represent unordered sets of rows. There is no inherent ordering of the rows. Unless an order by clause is specified, there is no guarantee that two consecutive executions of the same query would yield identical results. The database is free to return the rows in whatever order it likes.
As a consequence, the results of a query with a row-limiting clause but no order by clause are not deterministic. Do add an order by clause to these queries, or you will sooner or later run into surprising and hard-to-debug behavior.
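For example, ordering on a unique column makes the two pages deterministic and disjoint. A sketch, assuming the address column of the balances table is unique:

select *
from `bigquery-public-data.crypto_ethereum.balances`
order by address
limit 10 OFFSET 100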

PostgreSQL Query without WHERE only ORDER BY and LIMIT doesn't use index

I have a table that contains an 'id' column of type BIGSERIAL. I also have an index for this one column (sort order descending, BTREE, unique).
I often need to retrieve the last 10, 20, 30 entries from a table of millions of entries, like this:
SELECT * FROM table ORDER BY id DESC LIMIT 10
I would have thought it's a pretty clear case: there's an index for this particular field, the sort order matches, and I need only 10 entries compared to the millions in the whole table, so this query definitely uses an index scan.
But it doesn't; it does a sequential scan over the whole table.
I tried to dig deeper, but didn't find anything unusual. The Postgres docs at https://www.postgresql.org/docs/9.6/static/indexes-ordering.html say:
An important special case is ORDER BY in combination with LIMIT n: an explicit sort will have to process all the data to identify the first n rows, but if there is an index matching the ORDER BY, the first n rows can be retrieved directly, without scanning the remainder at all.
But it still doesn't work. Does anybody have any pointers for me? Maybe I'm just not seeing the forest for the trees anymore... :-(
OK, saying it out loud and trying to gather more information to put into my question apparently made me see the forest again; I found the actual problem. Further down in the doc I mentioned above is this sentence:
An index stored in ascending order with nulls first can satisfy either ORDER BY x ASC NULLS FIRST or ORDER BY x DESC NULLS LAST depending on which direction it is scanned in.
This was the problem. I specified the sort order in the index but I ignored the NULLS FIRST vs. LAST.
Postgres defaults to NULLS FIRST for descending order (and NULLS LAST for ascending) if you don't mention it explicitly in your query. So what Postgres was looking for was the combination ORDER BY ... DESC NULLS FIRST, which wasn't covered by my index. The combination of both SORT ORDER and NULLS placement is what matters.
The 2 possible solutions:
Either mention NULLS FIRST/LAST accordingly in the query so that it matches the index
...or change the index to NULLS FIRST (which is what I did)
Now Postgres is doing a proper index scan and only touches 10 elements during the query, not all of them.
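For reference, a sketch of both options, keeping the question's placeholder name table (the index name is made up):

-- option 1: make the query match the existing index
SELECT * FROM table ORDER BY id DESC NULLS LAST LIMIT 10;

-- option 2: recreate the index so it matches the query's default ordering
CREATE UNIQUE INDEX table_id_desc_idx ON table (id DESC NULLS FIRST);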
If you need to get last 10 entries in table you can use this:
SELECT *
FROM table
WHERE id > (SELECT MAX(id) FROM table) - 10
ORDER BY id DESC
And similarly for 20 and 30 entries.
This doesn't look as clean, but it works fast as long as you have an index on the 'id' column. Note that it assumes the ids are contiguous; if there are gaps it can return fewer than 10 rows.

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is my hardware isn't so great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the below code, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts and then filtering/analysing/combining as needed going forward. I think because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I started with the below code:
df <- querySql(conn,"SELECT TOP 5000000 * FROM Table1")
And then where I get stuck is: how do I create a second dataframe starting at row 5,000,001 and ending at the final row retrieved from Table1?
I'm open to suggestions but I think there are two potential answers to this question. The first is to work within the querySql to get it working. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records to be returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (let's call these the "selected records") and if you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the partition. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 0 and rn <= 5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 5000000 and rn <= 10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 10000000 and rn <= 15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
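For instance, the SQL Server 2012+ spelling of the first page would look something like this (with the same placeholder column names as above):

select * from table1 order by id1, id2 offset 0 rows fetch next 5000000 rows only;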

Is MySQL LIMIT applied before or after ORDER BY?

Which one comes first when MySQL processes the query?
An example:
SELECT pageRegions
FROM pageRegions
WHERE (pageID = ?) AND (published = true) AND (publishedOn <= ?)
ORDER BY publishedOn DESC
LIMIT 1;
Will that return the last published pageRegion, even if the record does not match the revision datetime, if LIMIT is applied after ORDER BY?
Yes, it's after the ORDER BY. For your query, you'd get the record with the highest publishedOn, since you're ordering DESC, making the largest value first in the result set, of which you pick out the first one.
The limit is always applied at the end of result gathering, therefore after order by.
Given all your clauses, the order of processing will be
FROM
WHERE
SELECT
ORDER BY
LIMIT
So you will get the closest record <= publishedOn matching all the conditions in the WHERE clause.
Just wanted to point out that in the case of MySQL, ordering is applied before limiting the results. But this is not true for every DB.
For example, classic Oracle (using ROWNUM) first limits the results and then applies ordering to them, which is why the ordered query traditionally has to be wrapped in a subquery. It makes sense when you think about it from a performance point of view: in MySQL you are actually ordering the entire result set (possibly > 1000 records) just to get 2 rows.
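For the curious, a sketch of that classic pre-12c Oracle pattern: order inside a subquery first, then let ROWNUM limit the already-ordered rows (published = 1 is assumed here because Oracle SQL has no boolean literal):

SELECT *
FROM (
  SELECT * FROM pageRegions
  WHERE pageID = ? AND published = 1 AND publishedOn <= ?
  ORDER BY publishedOn DESC
)
WHERE ROWNUM <= 1;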