Splitting a large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is that my hardware isn't great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the code below, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the table into two parts and filtering/analysing/combining as needed going forward. I think because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the code below:
df <- querySql(conn,"SELECT TOP 5000000 * FROM Table1")
Where I get stuck is how to create a second dataframe containing the remaining n - 5000000 rows, from row 5,000,001 to the final row of Table1.
I'm open to suggestions, but I think there are two potential answers to this question. The first is to make it work within querySql. The second is to use an R function other than querySql (I have no idea what this would look like). I'm limited to R due to my work environment.

The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique items, although some DBMSs do impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records returned by a query (let's call these the "returned records") below the number of records the query could return (let's call these the "selected records"), and you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
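Back in R, you can wrap this pattern in a loop over increasing offsets. The following is a minimal sketch, not a definitive recipe: it assumes your DBMS accepts LIMIT/OFFSET syntax (e.g. PostgreSQL; for SQL Server see the OFFSET ... FETCH variant mentioned at the end of this answer) and uses a hypothetical id1 column that fully orders the table - substitute whatever columns give you a unique order:
library(DatabaseConnector)
conn <- connect(connectionDetails)
chunkSize <- 5000000L
offset <- 0L
chunks <- list()
repeat {
  sql <- sprintf("SELECT * FROM Table1 ORDER BY id1 LIMIT %d OFFSET %d",
                 chunkSize, offset)
  chunk <- querySql(conn, sql)
  if (nrow(chunk) == 0) break
  # Reduce each chunk (filter/aggregate) before keeping it; storing the
  # raw chunks would eventually hit the same RAM limit as before.
  chunks[[length(chunks) + 1]] <- chunk
  offset <- offset + chunkSize
}
Note that if you simply accumulate the raw chunks you will run out of memory just as before; the point of chunking is to cut each piece down to something small before combining.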
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the window. This is unavoidable under any conceivable solution; there must always be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>0 and rn<=5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>5000000 and rn<=10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>10000000 and rn<=15000000;
...
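If you go this route, each page is again just a querySql call from R. A sketch, using the same hypothetical id1 and id2 ordering columns (this form also works on SQL Server, which allows expressions alongside * in the select list):
df1 <- querySql(conn, "SELECT * FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY id1, id2) rn FROM Table1) t1 WHERE rn > 0 AND rn <= 5000000")
df2 <- querySql(conn, "SELECT * FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY id1, id2) rn FROM Table1) t1 WHERE rn > 5000000 AND rn <= 10000000")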
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
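For example, against SQL Server 2012 or later, the paging queries from the first solution would look something like this from R (again with a hypothetical id1 ordering column):
df1 <- querySql(conn, "SELECT * FROM Table1 ORDER BY id1 OFFSET 0 ROWS FETCH NEXT 5000000 ROWS ONLY")
df2 <- querySql(conn, "SELECT * FROM Table1 ORDER BY id1 OFFSET 5000000 ROWS FETCH NEXT 5000000 ROWS ONLY")
As for "an R function other than querySql": if your connection exposes the standard DBI interface (recent versions of DatabaseConnector do), you may be able to stream a single result-set in batches instead of paging in SQL, which sidesteps the ordering problem entirely because only one query is executed. A sketch, assuming standard DBI semantics; check that your driver supports incremental fetching:
res <- DBI::dbSendQuery(conn, "SELECT * FROM Table1")
while (!DBI::dbHasCompleted(res)) {
  chunk <- DBI::dbFetch(res, n = 1000000)  # fetch the next batch of rows
  # reduce/process each chunk here before moving on
}
DBI::dbClearResult(res)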

Related

How does Oracle SQL order rows when there is no ORDER BY

I have currently written SQL to bring back data in a select statement without using ORDER BY. From what I have read, the selection seems to be random. As I am trying to run tests on data that should be updated once selected, is there a way to 'predict' which entries will be returned by the select statement (and therefore should be modified)? My select statement is:
SELECT x.FIELD_A,
       x.FIELD_B,
       x.FIELD_C,
       y.FIELD_D,
       y.FIELD_E,
       y.FIELD_F
FROM TABLE_X x
LEFT JOIN TABLE_Y y ON x.FIELD_A = y.FIELD_G
WHERE y.FIELD_G IN ('xxx', 'yyy', 'zzz', 'www')
AND y.FIELD_E = 'vvvv'
AND y.FIELD_G IS NULL
AND y.FIELD_H IS NULL
AND y.FIELD_I IS NULL
AND x.FIELD_J IN ('uuu', 'ttt', 'sss', 'rrrr')
AND y.FIELD_K_DATE > ADD_MONTHS(SYSDATE, -12)
AND ROWNUM <= 20
How does Oracle SQL order rows when there is no ORDER BY?
It doesn't order them at all.
From a classic Tom Kyte article, and quoting his own book:
You should think of a heap organized table as a big unordered collection of rows. These rows will come out in a seemingly random order, and depending on other options being used (parallel query, different optimizer modes and so on), they may come out in a different order with the same query. Do not ever count on the order of rows from a query unless you have an ORDER BY statement on your query!
If you run your query several times today you may see the same results in the same order, but might not. You could run it a hundred times and perhaps get the same result each time, and think that means that's what it will always return. But if you run it again tomorrow it may or may not be the same. If you run it weeks or months or years in the future it still could be the same as you see today, but is increasingly likely not to be as other things change - including data volumes, statistics gathering, database upgrades or even potentially patching, and so on.
Without an order-by clause you are at the mercy of the optimiser, and even of things the optimiser can't control, potentially right down to the order in which blocks of data are read from disk.
So no, you can't predict which entries will be returned by your query. You will get 20 rows but which ones you get and what order those are in is non-deterministic.
If you need a predictable result you need to order the results before applying the ROWNUM filter, i.e. put the ORDER BY in a subquery and the ROWNUM condition in the outer query (or use FETCH FIRST, available from Oracle 12c).

Are BigQuery results always ordered, that is: does using OFFSET to skip rows make sense?

In other words, does a select query order results every time, so these two will always produce unique values:
select *
from `bigquery-public-data.crypto_ethereum.balances`
limit 10 OFFSET 100
select *
from `bigquery-public-data.crypto_ethereum.balances`
limit 10 OFFSET 2000
Assuming of course the table has unique values... I am just curious whether, without using an "order" clause, the results are always deterministic/consecutive, or whether they can duplicate if they're returned at random? Thanks!
I am just curious whether, without using an "order" clause, the results are always deterministic/consecutive, or whether they can duplicate if they're returned at random.
No. SQL tables represent unordered sets of rows. There is no inherent ordering of the rows. Unless an order by clause is specified, there is no guarantee that two consecutive executions of the same query will yield identical results. The database is free to return the rows in whatever order it likes.
As a consequence, the results of a query with a row-limiting clause but no order by clause are not deterministic. Do add an order by clause to these queries, or you will sooner or later run into surprising and hard-to-debug behavior.

If you do a simple SELECT-WHERE on a CTE that is already sorted, are your results guaranteed to still be in that same order, just filtered?

Wondering about expected/deterministic ordering output from Oracle 11g for queries based on sorted CTEs.
Consider this (extremely oversimplified) example SQL query. Again, note how the CTE has an ORDER BY clause in it.
WITH SortedArticles as (
    SELECT *
    FROM Articles
    ORDER BY DatePublished
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe';
Can it be assumed that the outputted rows are guaranteed to be in the same order as the CTE, or do I have to re-sort them a second time?
Again, this is an extremely over-simplified example but it contains the important parts of what I'm asking. They are...
The CTE is sorted
The final SELECT statement selects only against the CTE, nothing else (no joins, etc.), and
The final SELECT statement only specifies a WHERE clause. It is purely a filtering statement.
The short answer is no. The only way to guarantee ordering is with an ORDER BY clause on your outer query. But there is no need to sort the results in the CTE in that situation.
However, if the sort expression is complex, and you need sorting in the derived CTEs (e.g. because of using OFFSET/FETCH or ROWNUM), you could simplify the subsequent sorting by adding a row number field to the original CTE based on its sort criteria and then just sorting the derived CTEs by that row number. For your example:
WITH SortedArticles as (
    SELECT a.*,
           ROW_NUMBER() OVER (ORDER BY DatePublished) AS rn
    FROM Articles a
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe'
ORDER BY rn
No, the results are not guaranteed to be in the same order as in the subquery. Never was, never will be. You may observe a certain behaviour, especially if the CTE is materialized, which you can try to influence with optimizer hints like /*+ MATERIALIZE */ and /*+ INLINE */. However, the behaviour of the query optimizer also depends on data volume, IO vs CPU speed, and most importantly on the database version. For instance, Oracle 12.2 introduces a feature called "In-Memory Cursor Duration Temp Table" that tries to speed up queries like yours, without preserving the order of the subquery.
I'd go along with @Nick's suggestion of adding a row number field in the subquery.

Using limit in sqlite SQL statement in combination with order by clause

Will the following two SQL statements always produce the same result set?
1. SELECT * FROM MyTable where Status='0' order by StartTime asc limit 10
2. SELECT * FROM (SELECT * FROM MyTable where Status='0' order by StartTime asc) limit 10
Yes, but ordering subqueries is probably a bad habit to get into. You could feasibly add a further ORDER BY outside the subquery in your second example, e.g.
SELECT *
FROM (SELECT *
      FROM Test
      ORDER BY ID ASC
     ) AS A
ORDER BY ID DESC
LIMIT 10;
SQLite still performs the ORDER BY on the inner query, before sorting them again in the outer query. A needless waste of resources.
I've done an SQL Fiddle to demonstrate so you can view the execution plans for each.
No. First, because the StartTime column may not have a UNIQUE constraint. So even the first query may not always produce the same result - with itself!
Second, even if there are never two rows with same StartTime, the answer is still negative.
The first statement will always order on StartTime and produce the first 10 rows. The second query may produce the same result set but only with a primitive optimizer that doesn't understand that the ORDER BY in the subquery is redundant. And only if the execution plan includes this ordering phase.
The SQLite query optimizer may (at the moment) not be very bright and do just that (no idea really, we'll have to check the source code of SQLite*). So, it may appear that the two queries are producing identical results all the time. Still, it's not a good idea to count on it. You never know what changes will be made in a future version of SQLite.
I think it's not good practice to use LIMIT without ORDER BY, in any DBMS. It may work now, but you never know how long these queries will be used by the application. And you may not be around when SQLite is upgraded or the DBMS is changed.
(*) @Gareth's link provides the execution plan, which suggests that current SQLite code is dumb enough to execute the redundant ordering.

Fast Way To Estimate Rows By Criteria

I have seen a few posts detailing fast ways to "estimate" the number of rows in a given SQL table without using COUNT(*). However, none of them seem to really solve the problem if you need to estimate the number of rows which satisfy given criteria. I am trying to get a way of estimating the number of rows which satisfy given criteria, but the information for these criteria is scattered around two or three tables. Of course a SELECT COUNT(*) with the NOLOCK hint and a few joins will do, and I can afford under- or over-estimating the total records. The problem is that this kind of query will be running every 5-10 minutes or so, and since I don't need the actual number - only an estimate - I would like to trade off accuracy for speed.
The solution, if any, may be "SQL Server"-specific. In fact, it must be compatible with SQL Server 2005. Any hints?
There is no easy way to do this. You can get an estimate for the total number of rows in a table, e.g. from system catalog views.
But there's no way to do this for a given set of criteria in a WHERE clause - either you would have to keep counts for each set of criteria and values, or you'd have to use black magic to find it out. The only place SQL Server keeps something that goes in that direction is the statistics it maintains on indices. Those contain information about which values occur how frequently in an index - but I quite honestly have no idea whether (and how) you could leverage that information in your own queries.
If you really must know the number of rows matching a certain criteria, you need to do a count of some sort - either a SELECT COUNT(*) FROM dbo.YourTable WHERE (yourcriteria) or something else.
Something else could be something like this:
wrap your SELECT statement into a CTE (Common Table Expression)
define a ROW_NUMBER() in that CTE ordering your data by some column (or set of columns)
add a second ROW_NUMBER() to that CTE that orders your data by the same column (or columns) - but in the opposite direction (DESC vs. ASC)
Something like this:
;WITH YourDataCTE AS
(
    SELECT (list of columns you need),
           ROW_NUMBER() OVER (ORDER BY <your column>) AS 'RowNum',
           ROW_NUMBER() OVER (ORDER BY <your column> DESC) AS 'RowNum2'
    FROM dbo.YourTable
    WHERE <your conditions here>
)
SELECT *
FROM YourDataCTE
Doing this, you would get the following effect:
your first row in your result set will contain your usual data columns
the first ROW_NUMBER() will contain the value 1
the second ROW_NUMBER() will contain the total number of rows that match that criteria set
It's surprisingly good at dealing with small to mid-size result sets - I haven't yet tried how it holds up with really large result sets - but it might be something to investigate and see if it works.
Possible solutions:
If the matching count is small in comparison to the total number of rows in the table, then adding indexes that cover the where condition will help and the query will be very fast.
If the result number is close to the total number of rows in the table, indexes will not help much. In that case you could implement a trigger that maintains a 'conditional count table': whenever a row matching the condition is added you increment the value in that table, and when such a row is deleted you decrement it. Then you query this small 'summary count table' instead.