Related: I have my table (CTE) definitions and result set here.
The CTE may look strange, but it has been tested and returns the correct results in the most efficient manner I've found yet. The query below is meant to find the number of person IDs (patid) who are taking two or more drugs at the same time. Currently, the query works insofar as it returns the patIDs of the people taking both drugs, but not necessarily both drugs at the same time. Taking both drugs at the same time is indicated by the fillDate of one drug falling before the scriptEndDate of the other drug.
You can see in this partial result set that on line 18 the scriptFillDate is 2009-07-19, which falls between the fillDate and scriptEndDate of the same patID from row 2. What constraint do I need to add so I can filter out these unneeded results?
--PatientDrugList is a CTE because eventually parameters might be passed to it
--to alter the selection population
;with PatientDrugList(patid, filldate, scriptEndDate,drugName,strength)
as
(
select rx.patid,rx.fillDate,rx.scriptEndDate,rx.drugName,rx.strength
from rx
),
--the row constructor here will eventually be parameters for a stored procedure
DrugList (drugName)
as
(
select x.drugName
from (values ('concerta'),('fentanyl'))
as x(drugName)
where x.drugName is not null
)
--the row number here is so that I can find the largest date range
--(the largest datediff means the person was on a given drug for a larger
--amount of time). Obviously not an optimal solution.
--celko inspired relational division!
select distinct row_number() over(partition by pd.patid, drugname order by datediff(day,pd.fillDate,pd.scriptEndDate)desc) as rn
,pd.patid
,pd.drugname
,pd.fillDate
,pd.scriptEndDate
from PatientDrugList as pd
where not exists
(select * from DrugList
where not exists
(select * from PatientDrugList as pd2
where(pd.patid=pd2.patid)
and (pd2.drugName = DrugList.drugName)))
and exists
(select *
from DrugList
where DrugList.drugName=pd.drugName
)
group by pd.patid, pd.drugName,pd.filldate,pd.scriptEndDate
Wrap your original query in a CTE or, better yet, for performance and stability of the query plan and result, store it into a temp table.
The query below (assuming CTE option) will give you the overlapping times when both drugs are being taken.
;with tmp as (
.. your query producing the columns shown ..
)
select *
from tmp a
join tmp b on a.patid = b.patid and a.drugname <> b.drugname
where a.filldate < b.scriptenddate
and b.filldate < a.scriptenddate;
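For instance (a sketch reusing the column names above; the count alias is just illustrative), the original ask of counting patients on two or more drugs at the same time could then be answered with:
;with tmp as (
.. your query producing the columns shown ..
)
select count(distinct a.patid) as patients_on_both
from tmp a
join tmp b on a.patid = b.patid and a.drugname <> b.drugname
where a.filldate < b.scriptenddate
and b.filldate < a.scriptenddate;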
I want a random selection of rows in PostgreSQL. I tried this:
select * from table where random() < 0.01;
But some others recommend this:
select * from table order by random() limit 1000;
I have a very large table with 500 million rows, and I want the query to be fast.
Which approach is better? What are the differences? What is the best way to select random rows?
Fast ways
Given your specifications (plus additional info in the comments),
You have a numeric ID column (integer numbers) with only few (or moderately few) gaps.
Obviously no or few write operations.
Your ID column has to be indexed! A primary key serves nicely.
The query below does not need a sequential scan of the big table, only an index scan.
First, get estimates for the main query:
SELECT count(*) AS ct -- optional
, min(id) AS min_id
, max(id) AS max_id
, max(id) - min(id) AS id_span
FROM big;
The only possibly expensive part is the count(*) (for huge tables). Given above specifications, you don't need it. An estimate to replace the full count will do just fine, available at almost no cost:
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint AS ct
FROM pg_class
WHERE oid = 'big'::regclass; -- your table name
Detailed explanation:
Fast way to discover the row count of a table in PostgreSQL
As long as ct isn't much smaller than id_span, the query will outperform other approaches.
WITH params AS (
SELECT 1 AS min_id -- minimum id <= current min id
, 5100000 AS id_span -- rounded up. (max_id - min_id + buffer)
)
SELECT *
FROM (
SELECT p.min_id + trunc(random() * p.id_span)::integer AS id
FROM params p
, generate_series(1, 1100) g -- 1000 + buffer
GROUP BY 1 -- trim duplicates
) r
JOIN big USING (id)
LIMIT 1000; -- trim surplus
Generate random numbers in the id space. You have "few gaps", so add 10 % (enough to easily cover the blanks) to the number of rows to retrieve.
Each id can be picked multiple times by chance (though very unlikely with a big id space), so group the generated numbers (or use DISTINCT).
Join the ids to the big table. This should be very fast with the index in place.
Finally trim surplus ids that have not been eaten by dupes and gaps. Every row has a completely equal chance to be picked.
Short version
You can simplify this query. The CTE in the query above is just for educational purposes:
SELECT *
FROM (
SELECT DISTINCT 1 + trunc(random() * 5100000)::integer AS id
FROM generate_series(1, 1100) g
) r
JOIN big USING (id)
LIMIT 1000;
Refine with rCTE
Especially if you are not so sure about gaps and estimates.
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * 5100000)::int AS id
FROM generate_series(1, 1030) -- 1000 + few percent - adapt to your needs
LIMIT 1030 -- hint for query planner
) r
JOIN big b USING (id) -- eliminate miss
UNION -- eliminate dupe
SELECT b.*
FROM (
SELECT 1 + trunc(random() * 5100000)::int AS id
FROM random_pick r -- plus 3 percent - adapt to your needs
LIMIT 999 -- less than 1000, hint for query planner
) r
JOIN big b USING (id) -- eliminate miss
)
TABLE random_pick
LIMIT 1000; -- actual limit
We can work with a smaller surplus in the base query. If there are too many gaps so we don't find enough rows in the first iteration, the rCTE continues to iterate with the recursive term. We still need relatively few gaps in the ID space or the recursion may run dry before the limit is reached - or we have to start with a large enough buffer which defies the purpose of optimizing performance.
Duplicates are eliminated by the UNION in the rCTE.
The outer LIMIT makes the CTE stop as soon as we have enough rows.
This query is carefully drafted to use the available index, generate actually random rows and not stop until we fulfill the limit (unless the recursion runs dry). There are a number of pitfalls here if you are going to rewrite it.
Wrap into function
For repeated use with the same table with varying parameters:
CREATE OR REPLACE FUNCTION f_random_sample(_limit int = 1000, _gaps real = 1.03)
RETURNS SETOF big
LANGUAGE plpgsql VOLATILE ROWS 1000 AS
$func$
DECLARE
_surplus int := _limit * _gaps;
_estimate int := ( -- get current estimate from system
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint
FROM pg_class
WHERE oid = 'big'::regclass);
BEGIN
RETURN QUERY
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * _estimate)::int
FROM generate_series(1, _surplus) g
LIMIT _surplus -- hint for query planner
) r (id)
JOIN big USING (id) -- eliminate misses
UNION -- eliminate dupes
SELECT *
FROM (
SELECT 1 + trunc(random() * _estimate)::int
FROM random_pick -- just to make it recursive
LIMIT _limit -- hint for query planner
) r (id)
JOIN big USING (id) -- eliminate misses
)
TABLE random_pick
LIMIT _limit;
END
$func$;
Call:
SELECT * FROM f_random_sample();
SELECT * FROM f_random_sample(500, 1.05);
Generic function
We can make this generic to work for any table with a unique integer column (typically the PK): Pass the table as polymorphic type and (optionally) the name of the PK column and use EXECUTE:
CREATE OR REPLACE FUNCTION f_random_sample(_tbl_type anyelement
, _id text = 'id'
, _limit int = 1000
, _gaps real = 1.03)
RETURNS SETOF anyelement
LANGUAGE plpgsql VOLATILE ROWS 1000 AS
$func$
DECLARE
-- safe syntax with schema & quotes where needed
_tbl text := pg_typeof(_tbl_type)::text;
_estimate int := (SELECT (reltuples / relpages
* (pg_relation_size(oid) / 8192))::bigint
FROM pg_class -- get current estimate from system
WHERE oid = _tbl::regclass);
BEGIN
RETURN QUERY EXECUTE format(
$$
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * $1)::int
FROM generate_series(1, $2) g
LIMIT $2 -- hint for query planner
) r(%2$I)
JOIN %1$s USING (%2$I) -- eliminate misses
UNION -- eliminate dupes
SELECT *
FROM (
SELECT 1 + trunc(random() * $1)::int
FROM random_pick -- just to make it recursive
LIMIT $3 -- hint for query planner
) r(%2$I)
JOIN %1$s USING (%2$I) -- eliminate misses
)
TABLE random_pick
LIMIT $3;
$$
, _tbl, _id
)
USING _estimate -- $1
, (_limit * _gaps)::int -- $2 ("surplus")
, _limit -- $3
;
END
$func$;
Call with defaults (important!):
SELECT * FROM f_random_sample(null::big); --!
Or more specifically:
SELECT * FROM f_random_sample(null::"my_TABLE", 'oDD ID', 666, 1.15);
About the same performance as the static version.
Related:
Refactor a PL/pgSQL function to return the output of various SELECT queries - chapter "Various complete table types"
Return SETOF rows from PostgreSQL function
Format specifier for integer variables in format() for EXECUTE?
INSERT with dynamic table name in trigger function
This is safe against SQL injection. See:
Table name as a PostgreSQL function parameter
SQL injection in Postgres functions vs prepared queries
Possible alternative
If your requirements allow identical sets for repeated calls (and we are talking about repeated calls), consider a MATERIALIZED VIEW. Execute the above query once and write the result to a table. Users get a quasi-random selection at lightning speed. Refresh your random pick at intervals or events of your choosing.
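For example, a minimal sketch based on the "short version" query above (the view name random_pick_mv is just an illustration):
CREATE MATERIALIZED VIEW random_pick_mv AS
SELECT *
FROM  (
   SELECT DISTINCT 1 + trunc(random() * 5100000)::integer AS id
   FROM   generate_series(1, 1100) g
   ) r
JOIN   big USING (id)
LIMIT  1000;

-- later, at intervals or events of your choosing:
REFRESH MATERIALIZED VIEW random_pick_mv;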
Postgres 9.5 introduces TABLESAMPLE SYSTEM (n)
where n is a percentage. The manual:
The BERNOULLI and SYSTEM sampling methods each accept a single
argument which is the fraction of the table to sample, expressed as a
percentage between 0 and 100. This argument can be any real-valued expression.
Bold emphasis mine. It's very fast, but the result is not exactly random. The manual again:
The SYSTEM method is significantly faster than the BERNOULLI method
when small sampling percentages are specified, but it may return a
less-random sample of the table as a result of clustering effects.
The number of rows returned can vary wildly. For our example, to get roughly 1000 rows:
SELECT * FROM big TABLESAMPLE SYSTEM ((1000 * 100) / 5100000.0);
Related:
Fast way to discover the row count of a table in PostgreSQL
Or install the additional module tsm_system_rows to get the number of requested rows exactly (if there are enough) and allow for the more convenient syntax:
SELECT * FROM big TABLESAMPLE SYSTEM_ROWS(1000);
See Evan's answer for details.
But that's still not exactly random.
You can examine and compare the execution plan of both by using
EXPLAIN select * from table where random() < 0.01;
EXPLAIN select * from table order by random() limit 1000;
A quick test on a large table [1] shows that the ORDER BY first sorts the complete table and then picks the first 1000 items. Sorting a large table not only reads that table but also involves reading and writing temporary files. The where random() < 0.01 only scans the complete table once.
For large tables this might not be what you want, as even one complete table scan might take too long.
A third proposal would be
select * from table where random() < 0.01 limit 1000;
This one stops the table scan as soon as 1000 rows have been found and therefore returns sooner. Of course this bogs down the randomness a bit, but perhaps this is good enough in your case.
Edit: Besides these considerations, you might check out the questions already asked about this. A search for [postgresql] random returns quite a few hits.
quick random row selection in Postgres
How to retrieve randomized data rows from a postgreSQL table?
postgres: get random entries from table - too slow
And a linked article by depesz outlining several more approaches:
http://www.depesz.com/index.php/2007/09/16/my-thoughts-on-getting-random-row/
1 "large" as in "the complete table will not fit into the memory".
PostgreSQL ORDER BY random(), select rows in random order:
These are all slow because they do a tablescan to guarantee that every row gets an exactly equal chance of being chosen:
select your_columns from your_table ORDER BY random()
select * from
(select distinct your_columns from your_table) table_alias
ORDER BY random()
select your_columns from your_table ORDER BY random() limit 1
If you know how many rows are in the table (N):
An offset by a floored random number is constant time. However, I am NOT convinced that OFFSET produces a true random sample. It simulates it by getting 'the next bunch' and table-scanning that, so you can step through, which isn't quite the same as above.
SELECT myid FROM mytable OFFSET floor(random() * N) LIMIT 1;
Roll your own constant-time "select random N rows" with a periodic table scan to be absolutely sure of a random row:
If your table is huge, then the above table scans are a show stopper, taking up to 5 minutes to finish.
To go faster, you can schedule a behind-the-scenes nightly table-scan reindexing which guarantees a perfectly random selection in O(1) constant time, except during the nightly reindexing table scan, where it must wait for maintenance to finish before you may receive another random row.
--Create a demo table with lots of random nonuniform data, big_data
--is your huge table you want to get random rows from in constant time.
drop table if exists big_data;
CREATE TABLE big_data (id serial unique, some_data text );
CREATE INDEX ON big_data (id);
--Fill it with ten million rows which simulates your beautiful data:
INSERT INTO big_data (some_data) SELECT md5(random()::text) AS some_data
FROM generate_series(1,10000000);
--This delete statement puts holes in your index
--making it NONuniformly distributed
DELETE FROM big_data WHERE id IN (2, 4, 6, 7, 8);
--Do the nightly maintenance task on a schedule at 1AM.
drop table if exists big_data_mapper;
CREATE TABLE big_data_mapper (id serial, big_data_id int);
CREATE INDEX ON big_data_mapper (id);
CREATE INDEX ON big_data_mapper (big_data_id);
INSERT INTO big_data_mapper(big_data_id) SELECT id FROM big_data ORDER BY id;
--We have to use a function because the big_data_mapper might be out-of-date
--in between nightly tasks, so to solve the problem of a missing row,
--you try again until you succeed. In the event the big_data_mapper
--is broken, it tries 25 times then gives up and returns -1.
CREATE or replace FUNCTION get_random_big_data_id()
RETURNS int language plpgsql AS $$
declare
response int;
BEGIN
--Loop is required because big_data_mapper could be old
--Keep rolling the dice until you find one that hits.
for counter in 1..25 loop
SELECT big_data_id
FROM big_data_mapper OFFSET floor(random() * (
select max(id) biggest_value from big_data_mapper
)
) LIMIT 1 into response;
if response is not null then
return response;
end if;
end loop;
return -1;
END;
$$;
--get a random big_data id in constant time:
select get_random_big_data_id();
--Get 1 random row from big_data table in constant time:
select * from big_data where id in (
select get_random_big_data_id() from big_data limit 1
);
┌─────────┬──────────────────────────────────┐
│ id │ some_data │
├─────────┼──────────────────────────────────┤
│ 8732674 │ f8d75be30eff0a973923c413eaf57ac0 │
└─────────┴──────────────────────────────────┘
--Get 3 random rows from big_data in constant time:
select * from big_data where id in (
select get_random_big_data_id() from big_data limit 3
);
┌─────────┬──────────────────────────────────┐
│ id │ some_data │
├─────────┼──────────────────────────────────┤
│ 2722848 │ fab6a7d76d9637af89b155f2e614fc96 │
│ 8732674 │ f8d75be30eff0a973923c413eaf57ac0 │
│ 9475611 │ 36ac3eeb6b3e171cacd475e7f9dade56 │
└─────────┴──────────────────────────────────┘
--Test what happens when big_data_mapper stops receiving
--nightly reindexing.
delete from big_data_mapper where 1=1;
select get_random_big_data_id(); --It tries 25 times, and returns -1
--which means wait N minutes and try again.
Adapted from: https://www.gab.lc/articles/bigdata_postgresql_order_by_random
Alternatively, if all of the above is too much work:
A simpler, good-enough solution for constant-time random row selection is to add a new column to your big table, big_data.mapper_int, and make it NOT NULL with a unique index. Every night, reset the column with unique integers between 1 and max(n). To get a random row, choose a random integer in that range and return the row whose mapper_int equals it. If there is no row with that value (because the row changed since the last re-index), choose another random number and try again. If a row is added to big_data, populate its mapper_int with max(mapper_int) + 1.
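A rough sketch of that idea (column and index names are illustrative, and the nightly renumbering is deliberately simplistic):
--one-time setup
ALTER TABLE big_data ADD COLUMN mapper_int int;
CREATE UNIQUE INDEX ON big_data (mapper_int);

--nightly: renumber the rows 1..N
UPDATE big_data b
SET    mapper_int = t.rn
FROM  (SELECT id, row_number() OVER (ORDER BY id) AS rn FROM big_data) t
WHERE  b.id = t.id;

--pick one random row; if nothing comes back (row changed since the re-index), retry
SELECT *
FROM   big_data
WHERE  mapper_int = 1 + floor(random() * (SELECT max(mapper_int) FROM big_data))::int;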
Alternatively TableSample to the rescue:
If you have postgresql version > 9.5 then tablesample can do a constant time random sample without a heavy tablescan.
https://wiki.postgresql.org/wiki/TABLESAMPLE_Implementation
--Select 1 percent of rows from yourtable,
--display the first 100 rows, order by column a_column
select * from yourtable TABLESAMPLE SYSTEM (1)
order by a_column
limit 100;
TableSample does some stuff behind the scenes that takes some time and I don't like it, but it is faster than order by random(). Good, fast, cheap: choose any two on this job.
Starting with PostgreSQL 9.5, there's a new syntax dedicated to getting random elements from a table:
SELECT * FROM mytable TABLESAMPLE SYSTEM (5);
This example will give you 5% of elements from mytable.
See more explanation on the documentation: http://www.postgresql.org/docs/current/static/sql-select.html
The one with the ORDER BY is going to be the slower one.
select * from table where random() < 0.01; goes record by record, and decides to randomly filter it or not. This is going to be O(N) because it only needs to check each record once.
select * from table order by random() limit 1000; is going to sort the entire table, then pick the first 1000. Aside from any voodoo magic behind the scenes, the order by is O(N * log N).
The downside to the random() < 0.01 one is that you'll get a variable number of output records.
Note: there is a better way to shuffle a set of data than sorting by random: the Fisher-Yates shuffle, which runs in O(N). Implementing the shuffle in SQL sounds like quite the challenge, though.
select * from table order by random() limit 1000;
If you know how many rows you want, check out tsm_system_rows.
The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in the TABLESAMPLE clause of a SELECT command.
This table sampling method accepts a single integer argument that is the maximum number of rows to read. The resulting sample will always contain exactly that many rows, unless the table does not contain enough rows, in which case the whole table is selected. Like the built-in SYSTEM sampling method, SYSTEM_ROWS performs block-level sampling, so that the sample is not completely random but may be subject to clustering effects, especially if only a small number of rows are requested.
First install the extension
CREATE EXTENSION tsm_system_rows;
Then your query,
SELECT *
FROM table
TABLESAMPLE SYSTEM_ROWS(1000);
Here is a solution that works for me. I guess it's very simple to understand and execute.
SELECT
field_1,
field_2,
field_3,
random() as ordering
FROM
big_table
WHERE
some_conditions
ORDER BY
ordering
LIMIT 1000;
If you want just one row, you can use a calculated offset derived from count.
select * from table_name limit 1
offset floor(random() * (select count(*) from table_name));
One lesson from my experience:
offset floor(random() * N) limit 1 is not faster than order by random() limit 1.
I thought the offset approach would be faster because it should save the time of sorting in Postgres. Turns out it wasn't.
I think the best and simplest way in postgreSQL is:
SELECT * FROM tableName ORDER BY random() LIMIT 1
A variation of the materialized view "Possible alternative" outlined by Erwin Brandstetter is possible.
Say, for example, that you don't want duplicates in the randomized values that are returned. An example use case is to generate short codes which can only be used once.
The primary table containing your (non-randomized) set of values must have some expression that determines which rows are "used" and which aren't — here I'll keep it simple by just creating a boolean column with the name used.
Assume this is the input table (additional columns may be added as they do not affect the solution):
id_values:

 id | used
----+--------
  1 | FALSE
  2 | FALSE
  3 | FALSE
  4 | FALSE
  5 | FALSE
...
Populate the ID_VALUES table as needed. Then, as described by Erwin, create a materialized view that randomizes the ID_VALUES table once:
CREATE MATERIALIZED VIEW id_values_randomized AS
SELECT id
FROM id_values
ORDER BY random();
Note that the materialized view does not contain the used column, because this will quickly become out-of-date. Nor does the view need to contain other columns that may be in the id_values table.
In order to obtain (and "consume") random values, use an UPDATE-RETURNING on id_values, selecting ids from id_values_randomized with a join, and applying the desired criteria to obtain only relevant possibilities. For example:
UPDATE id_values
SET used = TRUE
WHERE id_values.id IN
(SELECT i.id
FROM id_values_randomized r INNER JOIN id_values i ON i.id = r.id
WHERE (NOT i.used)
LIMIT 1)
RETURNING id;
Change LIMIT as necessary -- if you need multiple random values at a time, change LIMIT to n where n is the number of values needed.
With the proper indexes on id_values, I believe the UPDATE-RETURNING should execute very quickly with little load. It returns randomized values with one database round-trip. The criteria for "eligible" rows can be as complex as required. New rows can be added to the id_values table at any time, and they will become accessible to the application as soon as the materialized view is refreshed (which can likely be run at an off-peak time). Creation and refresh of the materialized view will be slow, but it only needs to be executed when new id's added to the id_values table need to be made available.
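For instance, the off-peak refresh could look like this (a sketch; the CONCURRENTLY option additionally requires a unique index on the materialized view, created once):
CREATE UNIQUE INDEX ON id_values_randomized (id);
REFRESH MATERIALIZED VIEW CONCURRENTLY id_values_randomized;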
Add a column called r with type serial. Index r.
Assume we have 200,000 rows; we are going to generate a random number n, where 0 < n <= 200,000.
Select rows with r > n, sort them ASC and select the smallest one.
Code:
select * from YOUR_TABLE
where r > (
select (
select reltuples::bigint AS estimate
from pg_class
where oid = 'public.YOUR_TABLE'::regclass) * random()
)
order by r asc limit 1;
The code is self-explanatory. The subquery in the middle is used to quickly estimate the table row count, as described at https://stackoverflow.com/a/7945274/1271094.
At the application level you need to execute the statement again if n is greater than the number of rows or if you need to select multiple rows.
I know I'm a little late to the party, but I just found this awesome tool called pg_sample:
pg_sample - extract a small, sample dataset from a larger PostgreSQL database while maintaining referential integrity.
I tried this with a 350M-row database and it was really fast; I don't know about the randomness.
./pg_sample --limit="small_table = *" --limit="large_table = 100000" -U postgres source_db | psql -U postgres target_db
In MySQL, you can select X random rows with the following statement:
SELECT * FROM table ORDER BY RAND() LIMIT X
This does not, however, work in SQLite. Is there an equivalent?
For a much better performance use:
SELECT * FROM table WHERE id IN (SELECT id FROM table ORDER BY RANDOM() LIMIT x)
SQL engines first load the projected fields of rows into memory and then sort them. Here we just do a random sort on the id field of each row, which is in memory because it's indexed, then take X of them, and find the whole rows using these X ids.
So this consumes less RAM and CPU as the table grows!
SELECT * FROM table ORDER BY RANDOM() LIMIT X
SELECT * FROM table ORDER BY RANDOM() LIMIT 1
All answers here are based on ORDER BY. This is very inefficient (i.e. unusable) for large sets because you will evaluate RANDOM() for each record and then ORDER BY, which is a resource-expensive operation.
Another approach is to place abs(CAST(random() AS REAL))/9223372036854775808 < 0.5 in the WHERE clause to get, in this case, for example, a 0.5 hit chance.
SELECT *
FROM table
WHERE abs(CAST(random() AS REAL))/9223372036854775808 < 0.5
The large number is the maximum absolute value that random() can produce. The abs() is there because the result is signed. The result is a uniformly distributed random variable between 0 and 1.
This has its drawbacks. You cannot guarantee a result, and if the threshold is large compared to the table, the selected data will be skewed towards the start of the table. But in some carefully designed situations, it can be a feasible option.
This one handles the negative RANDOM() integers and keeps good performance on large datasets:
SELECT * FROM table LIMIT 1 OFFSET abs(random() % (select count(*) from table));
where:
abs(random() % n) gives you a non-negative integer in the range [0, n).
The accepted answer works, but requires a full table scan per query. This will get slower and slower as your table grows large, making it risky for queries that are triggered by end-users.
The following solution takes advantage of indexes to run in O(log(N)) time.
SELECT * FROM table
WHERE rowid > (
ABS(RANDOM()) % (SELECT max(rowid) FROM table)
)
LIMIT 1;
To break it down
SELECT max(rowid) FROM table - Returns the largest valid rowid for the table. SQLite is able to use the index on rowid to run this efficiently.
ABS(RANDOM()) % ... - Returns a random number between 0 and max(rowid) - 1. SQLite's random function generates a number between -9223372036854775808 and +9223372036854775807. The ABS makes sure it's positive, and the modulus operator gates it between 0 and max(rowid) - 1.
rowid > ... - Rather than using =, use > in case the random number generated corresponds to a deleted row. Using strictly greater than ensures that we return a row with a rowid between 1 (greater than 0) and max(rowid) (greater than max(rowid) - 1). SQLite uses the primary key index to efficiently return this result as well.
This also works for queries with WHERE clauses. Apply the WHERE clause to both the output and the SELECT max(rowid) subquery. I'm not sure under which conditions this will run efficiently, however.
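For example, a sketch with a hypothetical active flag applied in both places:
SELECT * FROM table
WHERE active = 1                 -- hypothetical filter column
  AND rowid > (
    ABS(RANDOM()) % (SELECT max(rowid) FROM table WHERE active = 1)
  )
LIMIT 1;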
Note: This was derived from an answer in a similar question.
I can do SELECT TOP (200) ... but why not BOTTOM (200)?
Well, not to get into philosophy, what I mean is: how can I do the equivalent of TOP (200) but in reverse (from the bottom, like you'd expect BOTTOM to do)?
SELECT
columns
FROM
(
SELECT TOP 200
columns
FROM
My_Table
ORDER BY
a_column DESC
) SQ
ORDER BY
a_column ASC
It is unnecessary. You can use an ORDER BY and just change the sort to DESC to get the same effect.
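For example, reusing the names from the query above:
SELECT TOP 200
    columns
FROM
    My_Table
ORDER BY
    a_column DESC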
Sorry, but I don't think I see any correct answers in my opinion.
The TOP x function shows the records in undefined order. From that definition it follows that a BOTTOM function cannot be defined.
This is independent of any index or sort order. When you do an ORDER BY y DESC, you get the rows with the highest y value first. If y is an autogenerated ID, this should show the records last added to the table, as suggested in the other answers. However:
This only works if there is an autogenerated id column
It has a significant performance impact if you compare that with the TOP function
The correct answer should be that there is not, and cannot be, an equivalent to TOP for getting the bottom rows.
Logically,
BOTTOM (x) is all the records except TOP (n - x), where n is the count; x <= n
E.g. Select Bottom 1000 from Employee:
In T-SQL,
DECLARE
@bottom int,
@count int
SET @bottom = 1000
SET @count = (select COUNT(*) from Employee)
select * from Employee emp where emp.EmployeeID not in
(
SELECT TOP (@count-@bottom) Employee.EmployeeID FROM Employee
)
It would seem that any of the answers which implement an ORDER BY clause in the solution is missing the point, or does not actually understand what TOP returns to you.
TOP returns an unordered query result set which limits the record set to the first N records returned. (From an Oracle perspective, it is akin to adding a WHERE ROWNUM < (N+1) clause.)
Any solution which uses an order may return rows which are also returned by the TOP clause (since that data set was unordered in the first place), depending on what criteria were used in the ORDER BY.
The usefulness of TOP is that once the dataset reaches a certain size N, it stops fetching rows. You can get a feel for what the data looks like without having to fetch all of it.
To implement BOTTOM accurately, it would need to fetch the entire dataset unordered and then restrict the dataset to the final N records. That will not be particularly effective if you are dealing with huge tables. Nor will it necessarily give you what you think you are asking for. The end of the data set may not necessarily be "the last rows inserted" (and probably won't be for most DML intensive applications).
Similarly, the solutions which implement an ORDER BY are, unfortunately, potentially disastrous when dealing with large data sets. If I have, say, 10 Billion records and want the last 10, it is quite foolish to order 10 Billion records and select the last 10.
The problem here, is that BOTTOM does not have the meaning that we think of when comparing it to TOP.
When records are inserted, deleted, inserted, deleted over and over and over again, some gaps will appear in the storage and later, rows will be slotted in, if possible. But what we often see, when we select TOP, appears to be sorted data, because it may have been inserted early on in the table's existence. If the table does not experience many deletions, it may appear to be ordered. (e.g. creation dates may be as far back in time as the table creation itself). But the reality is, if this is a delete-heavy table, the TOP N rows may not look like that at all.
So -- the bottom line here (pun intended) is that someone who is asking for the BOTTOM N records doesn't actually know what they're asking for. Or, at least, what they're asking for and what BOTTOM actually means are not the same thing.
So -- the solution may meet the actual business need of the requestor...but does not meet the criteria for being the BOTTOM.
First, create an index in a subquery according to the table's original order using:
ROW_NUMBER () OVER (ORDER BY (SELECT NULL) ) AS RowIndex
Then order the table descending by the RowIndex column you've created in the main query:
ORDER BY RowIndex DESC
And finally use TOP with your wanted quantity of rows:
SELECT TOP 1 * --(or 2, or 5, or 34)
FROM (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL) ) AS RowIndex, *
FROM MyTable) AS SubQuery
ORDER BY RowIndex DESC
All you need to do is reverse your ORDER BY. Add or remove DESC to it.
The problem with ordering the other way is that it often does not make good use of indices. It is also not very extendable if you ever need to select a number of rows that are not at the start or the end. An alternative way is as follows.
DECLARE @NumberOfRows int;
SET @NumberOfRows = (SELECT COUNT(*) FROM TheTable);
SELECT col1, col2,...
FROM (
SELECT col1, col2,..., ROW_NUMBER() OVER (ORDER BY col1) AS intRow
FROM TheTable
) AS T
WHERE intRow > @NumberOfRows - 20;
The currently accepted answer by "Justin Ethier" is not a correct answer as pointed out by "Protector one".
As far as I can see, as of now, no other answer or comment provides the equivalent of BOTTOM(x) the question author asked for.
First, let's consider a scenario where this functionality would be needed:
SELECT * FROM Split('apple,orange,banana,apple,lime',',')
This returns a table of one column and five records:
apple
orange
banana
apple
lime
As you can see: we don't have an ID column; we can't order by the returned column; and we can't select the bottom two records using standard SQL like we can do for the top two records.
Here is my attempt to provide a solution:
SELECT * INTO #mytemptable FROM Split('apple,orange,banana,apple,lime',',')
ALTER TABLE #mytemptable ADD tempID INT IDENTITY
SELECT TOP 2 * FROM #mytemptable ORDER BY tempID DESC
DROP TABLE #mytemptable
And here is a more complete solution:
SELECT * INTO #mytemptable FROM Split('apple,orange,banana,apple,lime',',')
ALTER TABLE #mytemptable ADD tempID INT IDENTITY
DELETE FROM #mytemptable WHERE tempID <= ((SELECT COUNT(*) FROM #mytemptable) - 2)
ALTER TABLE #mytemptable DROP COLUMN tempID
SELECT * FROM #mytemptable
DROP TABLE #mytemptable
I am by no means claiming that this is a good idea to use in all circumstances, but it provides the desired results.
You can use the OFFSET FETCH clause.
SELECT COUNT(1) FROM COHORT; --Number of results to expect
SELECT * FROM COHORT
ORDER BY ID
OFFSET 900 ROWS --Assuming you expect 1000 rows
FETCH NEXT 100 ROWS ONLY;
(This is for Microsoft SQL Server)
Official documentation:
https://www.sqlservertutorial.net/sql-server-basics/sql-server-offset-fetch/
"Tom H" answer above is correct and it works for me in getting Bottom 5 rows.
SELECT [KeyCol1], [KeyCol2], [Col3]
FROM
(SELECT TOP 5 [KeyCol1],
[KeyCol2],
[Col3]
FROM [dbo].[table_name]
ORDER BY [KeyCol1],[KeyCol2] DESC) SOME_ALIAS
ORDER BY [KeyCol1],[KeyCol2] ASC
Thanks.
try this.
declare @floor int --this is the offset from the bottom, the number of results to exclude
declare @resultLimit int --the number of results actually retrieved for use
declare @total int --just adds them up, the total number of results fetched initially
--following is for gathering top 60 results total, then getting rid of top 50. We only keep the last 10
set @floor = 50
set @resultLimit = 10
set @total = @floor + @resultLimit
declare @tmp0 table(
--table body
)
declare @tmp1 table(
--table body
)
--this line will drop the wanted results from whatever table we're selecting from
insert into @tmp0
select top (@total) --what to select (the where, from, etc)
--using floor, insert the part we don't want into the second tmp table
insert into @tmp1
select top (@floor) * from @tmp0
--using select except, exclude top x results from the query
select * from @tmp0
except
select * from @tmp1
I've come up with a solution to this that doesn't require you to know the number of rows returned.
For example, if you want to get all the locations logged in a table, except the latest 1 (or 2, or 5, or 34):
SELECT *
FROM
(SELECT ROW_NUMBER() OVER (ORDER BY CreatedDate) AS Row, *
FROM Locations
WHERE UserId = 12345) AS SubQuery
WHERE Row > 1 -- or 2, or 5, or 34
Querying a simple subquery sorted descending, followed by sorting on the same column ascending does the trick.
SELECT * FROM
(SELECT TOP 200 * FROM [table] t2 ORDER BY t2.[column] DESC) t1
ORDER BY t1.[column]
SELECT TOP 10 * FROM TABLE1 ORDER BY ID DESC
Where ID is the primary key of the TABLE1.
SELECT columns FROM My_Table LIMIT 200 OFFSET (SELECT COUNT(*)-200 FROM My_Table)
How can I request a random row (or as close to truly random as is possible) in pure SQL?
See this post: SQL to Select a random row from a database table. It goes through methods for doing this in MySQL, PostgreSQL, Microsoft SQL Server, IBM DB2 and Oracle (the following is copied from that link):
Select a random row with MySQL:
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
Select a random row with PostgreSQL:
SELECT column FROM table
ORDER BY RANDOM()
LIMIT 1
Select a random row with Microsoft SQL Server:
SELECT TOP 1 column FROM table
ORDER BY NEWID()
Select a random row with IBM DB2
SELECT column, RAND() as IDX
FROM table
ORDER BY IDX FETCH FIRST 1 ROWS ONLY
Select a random record with Oracle:
SELECT column FROM
( SELECT column FROM table
ORDER BY dbms_random.value )
WHERE rownum = 1
Solutions like Jeremie's:
SELECT * FROM table ORDER BY RAND() LIMIT 1
work, but they need a sequential scan of the whole table (because the random value associated with each row needs to be calculated so that the smallest one can be determined), which can be quite slow for even medium-sized tables. My recommendation would be to use some kind of indexed numeric column (many tables have these as their primary keys), and then write something like:
SELECT * FROM table WHERE num_value >= RAND() *
( SELECT MAX(num_value) FROM table )
ORDER BY num_value LIMIT 1
This works in logarithmic time, regardless of the table size, if num_value is indexed. One caveat: this assumes that num_value is equally distributed in the range 0..MAX(num_value). If your dataset strongly deviates from this assumption, you will get skewed results (some rows will appear more often than others).
I don't know how efficient this is, but I've used it before:
SELECT TOP 1 * FROM MyTable ORDER BY newid()
Because GUIDs are pretty random, the ordering means you get a random row.
ORDER BY NEWID()
takes 7.4 milliseconds
WHERE num_value >= RAND() * (SELECT MAX(num_value) FROM table)
takes 0.0065 milliseconds!
I will definitely go with the latter method.
You didn't say which server you're using. In older versions of SQL Server, you can use this:
select top 1 * from mytable order by newid()
In SQL Server 2005 and up, you can use TABLESAMPLE to get a random sample that's repeatable:
SELECT FirstName, LastName
FROM Contact
TABLESAMPLE (1 ROWS) ;
For SQL Server
newid()/order by will work, but will be very expensive for large result sets because it has to generate an id for every row, and then sort them.
TABLESAMPLE() is good from a performance standpoint, but you will get clumping of results (all rows on a page will be returned).
For a better-performing true random sample, the best way is to filter out rows randomly. I found the following code sample in the SQL Server Books Online article Limiting Result Sets by Using TABLESAMPLE:
If you really want a random sample of individual rows, modify your query to filter out rows randomly, instead of using TABLESAMPLE. For example, the following query uses the NEWID function to return approximately one percent of the rows of the Sales.SalesOrderDetail table:
SELECT * FROM Sales.SalesOrderDetail
WHERE 0.01 >= CAST(CHECKSUM(NEWID(),SalesOrderID) & 0x7fffffff AS float)
/ CAST (0x7fffffff AS int)
The SalesOrderID column is included in the CHECKSUM expression so that NEWID() evaluates once per row to achieve sampling on a per-row basis. The expression CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float) / CAST(0x7fffffff AS int) evaluates to a random float value between 0 and 1.
When run against a table with 1,000,000 rows, here are my results:
SET STATISTICS TIME ON
SET STATISTICS IO ON
/* newid()
rows returned: 10000
logical reads: 3359
CPU time: 3312 ms
elapsed time = 3359 ms
*/
SELECT TOP 1 PERCENT Number
FROM Numbers
ORDER BY newid()
/* TABLESAMPLE
rows returned: 9269 (varies)
logical reads: 32
CPU time: 0 ms
elapsed time: 5 ms
*/
SELECT Number
FROM Numbers
TABLESAMPLE (1 PERCENT)
/* Filter
rows returned: 9994 (varies)
logical reads: 3359
CPU time: 641 ms
elapsed time: 627 ms
*/
SELECT Number
FROM Numbers
WHERE 0.01 >= CAST(CHECKSUM(NEWID(), Number) & 0x7fffffff AS float)
/ CAST (0x7fffffff AS int)
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
If you can get away with using TABLESAMPLE, it will give you the best performance. Otherwise use the newid()/filter method. newid()/order by should be last resort if you have a large result set.
If possible, use stored statements to avoid the inefficiency of both indexes on RND() and creating a record number field.
PREPARE RandomRecord FROM "SELECT * FROM table LIMIT ?,1";
SET @n=FLOOR(RAND()*(SELECT COUNT(*) FROM table));
EXECUTE RandomRecord USING @n;
The best way is putting a random value in a new column just for that purpose, and using something like this (pseudocode + SQL):
randomNo = random()
execSql("SELECT TOP 1 * FROM MyTable WHERE MyTable.Randomness > $randomNo")
This is the solution employed by the MediaWiki code. Of course, there is some bias against smaller values, but they found that it was sufficient to wrap the random value around to zero when no rows are fetched.
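In T-SQL terms, a rough sketch might look like this (the column and index names are illustrative; RAND(CHECKSUM(NEWID())) is one way to get a per-row random value):
ALTER TABLE MyTable ADD Randomness float NOT NULL DEFAULT 0;
UPDATE MyTable SET Randomness = RAND(CHECKSUM(NEWID()));   -- fill once, and for newly added rows
CREATE INDEX IX_MyTable_Randomness ON MyTable (Randomness);

DECLARE @randomNo float = RAND();
SELECT TOP 1 * FROM MyTable
WHERE Randomness > @randomNo
ORDER BY Randomness;
-- if no row is returned, wrap around and retry with @randomNo = 0, as described above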
The newid() solution may require a full table scan so that each row can be assigned a new GUID, which will be much less performant.
The rand() solution may not work at all (i.e. with MSSQL) because the function will be evaluated just once, and every row will be assigned the same "random" number.
For SQL Server 2005 and 2008, if we want a random sample of individual rows (from Books Online):
SELECT * FROM Sales.SalesOrderDetail
WHERE 0.01 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
/ CAST (0x7fffffff AS int)
I'm late, but got here via Google, so for the sake of posterity, I'll add an alternative solution.
Another approach is to use TOP twice, with alternating orders. I don't know if it is "pure SQL", because it uses a variable in the TOP, but it works in SQL Server 2008. Here's an example I use against a table of dictionary words, if I want a random word.
SELECT TOP 1
word
FROM (
SELECT TOP(@idx)
word
FROM
dbo.DictionaryAbridged WITH(NOLOCK)
ORDER BY
word DESC
) AS D
ORDER BY
word ASC
Of course, @idx is some randomly-generated integer that ranges from 1 to COUNT(*) on the target table, inclusive. If your column is indexed, you'll benefit from it too. Another advantage is that you can use it in a function, since NEWID() is disallowed.
Lastly, the above query runs in about 1/10 of the exec time of a NEWID()-type of query on the same table. YMMV.
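As a sketch of the "use it in a function" point (the function name is hypothetical; the random @idx must still be generated by the caller, since side-effecting functions such as NEWID() and RAND() cannot be called inside a user-defined function):
CREATE FUNCTION dbo.GetNthWord (@idx int)
RETURNS TABLE
AS
RETURN
(
    SELECT TOP 1
        word
    FROM (
        SELECT TOP(@idx)
            word
        FROM
            dbo.DictionaryAbridged
        ORDER BY
            word DESC
    ) AS D
    ORDER BY
        word ASC
);
-- caller supplies the random index, e.g. derived from RAND() * COUNT(*):
-- SELECT word FROM dbo.GetNthWord(@idx);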
Instead of using RAND(), as it is not encouraged, you may simply get the max ID (=Max):
SELECT MAX(ID) FROM TABLE;
Get a random number between 1..Max (=My_Generated_Random):
My_Generated_Random = rand_in_your_programming_lang_function(1..Max);
and then run this SQL:
SELECT ID FROM TABLE WHERE ID >= My_Generated_Random ORDER BY ID LIMIT 1
Note that it will check for any rows whose IDs are EQUAL to or HIGHER than the chosen value.
It's also possible to hunt down in the table for a row with an ID equal to or lower than My_Generated_Random, in which case you modify the query like this:
SELECT ID FROM TABLE WHERE ID <= My_Generated_Random ORDER BY ID DESC LIMIT 1
As pointed out in @BillKarwin's comment on @cnu's answer...
When combining with a LIMIT, I've found that it performs much better (at least with PostgreSQL 9.1) to JOIN with a random ordering rather than to directly order the actual rows: e.g.
SELECT * FROM tbl_post AS t
JOIN ...
JOIN ( SELECT id, CAST(-2147483648 * RANDOM() AS integer) AS rand
FROM tbl_post
WHERE create_time >= 1349928000
) r ON r.id = t.id
WHERE create_time >= 1349928000 AND ...
ORDER BY r.rand
LIMIT 100
Just make sure that the 'r' generates a 'rand' value for every possible key value in the complex query which is joined with it but still limit the number of rows of 'r' where possible.
The CAST to integer is especially helpful for PostgreSQL 9.2, which has specific sort optimisation for integer and single-precision floating types.
For MySQL, to get a random record:
SELECT name
FROM random AS r1 JOIN
(SELECT (RAND() *
(SELECT MAX(id)
FROM random)) AS id)
AS r2
WHERE r1.id >= r2.id
ORDER BY r1.id ASC
LIMIT 1
More detail http://jan.kneschke.de/projects/mysql/order-by-rand/
With SQL Server 2012+ you can use the OFFSET FETCH query to do this for a single random row
select * from MyTable ORDER BY id OFFSET n ROW FETCH NEXT 1 ROWS ONLY
where id is an identity column, and n is the row you want, calculated as a random number between 0 and count()-1 of the table (offset 0 is the first row, after all).
This works with holes in the table data, as long as you have an index to work with for the ORDER BY clause. It's also very good for the randomness, as you work that out yourself to pass in, and the niggles of other methods are not present. In addition, the performance is pretty good; on a smaller dataset it holds up well, though I've not tried serious performance tests against several million rows.
The random function from SQL could help. Also, if you would like to limit the result to just one row, just add that at the end.
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
For SQL Server and needing "a single random row"..
If not needing a true sampling, generate a random value [0, max_rows) and use the ORDER BY..OFFSET..FETCH from SQL Server 2012+.
This is very fast if the COUNT and ORDER BY are over appropriate indexes - such that the data is 'already sorted' along the query lines. If these operations are covered it's a quick request and does not suffer from the horrid scalability of using ORDER BY NEWID() or similar. Obviously, this approach won't scale well on a non-indexed HEAP table.
declare @rows int
select @rows = count(1) from t
-- Other issues if row counts in the bigint range..
-- This is also not 'true random', although such is likely not required.
declare @skip int = convert(int, @rows * rand())
select t.*
from t
order by t.id -- Make sure this is clustered PK or IX/UCL axis!
offset (@skip) rows
fetch first 1 row only
Make sure that the appropriate transaction isolation levels are used and/or account for 0 results.
For SQL Server and needing a "general row sample" approach..
Note: This is an adaptation of the answer as found on a SQL Server specific question about fetching a sample of rows. It has been tailored for context.
While a general sampling approach should be used with caution here, it's still potentially useful information in context of other answers (and the repetitious suggestions of non-scaling and/or questionable implementations). Such a sampling approach is less efficient than the first code shown and is error-prone if the goal is to find a "single random row".
Here is an updated and improved form of sampling a percentage of rows. It is based on the same concept of some other answers that use CHECKSUM / BINARY_CHECKSUM and modulus.
It is relatively fast over huge data sets and can be efficiently used in/with derived queries. Millions of pre-filtered rows can be sampled in seconds with no tempdb usage and, if aligned with the rest of the query, the overhead is often minimal.
Does not suffer from CHECKSUM(*) / BINARY_CHECKSUM(*) issues with runs of data. When using the CHECKSUM(*) approach, the rows can be selected in "chunks" and not "random" at all! This is because CHECKSUM prefers speed over distribution.
Results in a stable/repeatable row selection and can be trivially changed to produce different rows on subsequent query executions. Approaches that use NEWID() can never be stable/repeatable.
Does not use ORDER BY NEWID() of the entire input set, as ordering can become a significant bottleneck with large input sets. Avoiding unnecessary sorting also reduces memory and tempdb usage.
Does not use TABLESAMPLE and thus works with a WHERE pre-filter.
Here is the gist. See this answer for additional details and notes.
Naïve try:
declare @sample_percent decimal(7, 4)
-- Looking at this value should be an indicator of why a
-- general sampling approach can be error-prone to select 1 row.
select @sample_percent = 100.0 / count(1) from t
-- BAD!
-- When choosing appropriate sample percent of "approximately 1 row"
-- it is very reasonable to expect 0 rows, which definitely fails the ask!
-- If choosing a larger sample size the distribution is heavily skewed forward,
-- and is very much NOT 'true random'.
select top 1
t.*
from t
where 1=1
and ( -- sample
@sample_percent = 100
or abs(
convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
) % (1000 * 100) < (1000 * @sample_percent)
)
This can be largely remedied by a hybrid query, by mixing sampling and ORDER BY selection from the much smaller sample set. This limits the sorting operation to the sample size, not the size of the original table.
-- Sample "approximately 1000 rows" from the table,
-- dealing with some edge-cases.
declare @rows int
select @rows = count(1) from t
declare @sample_size int = 1000
declare @sample_percent decimal(7, 4) = case
when @rows <= 1000 then 100 -- not enough rows
when (100.0 * @sample_size / @rows) < 0.0001 then 0.0001 -- min sample percent
else 100.0 * @sample_size / @rows -- everything else
end
-- There is a statistical "guarantee" of having sampled a limited-yet-non-zero number of rows.
-- The limited rows are then sorted randomly before the first is selected.
select top 1
t.*
from t
where 1=1
and ( -- sample
@sample_percent = 100
or abs(
convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
) % (1000 * 100) < (1000 * @sample_percent)
)
-- ONLY the sampled rows are ordered, which improves scalability.
order by newid()
SELECT * FROM table ORDER BY RAND() LIMIT 1
Most of the solutions here aim to avoid sorting, but they still need to make a sequential scan over a table.
There is also a way to avoid the sequential scan by switching to an index scan. If you know the index value of your random row, you can get the result almost instantly. The problem is: how to guess an index value.
The following solution works on PostgreSQL 8.4:
explain analyze select * from cms_refs where rec_id in
(select (random()*(select last_value from cms_refs_rec_id_seq))::bigint
from generate_series(1,10))
limit 1;
In the above solution you guess 10 various random index values from the range 0 .. [last value of id].
The number 10 is arbitrary - you may use 100 or 1000 as it (amazingly) doesn't have a big impact on the response time.
There is also one problem - if you have sparse ids you might miss. The solution is to have a backup plan :) In this case, a plain old order by random() query. When combined, it looks like this:
explain analyze select * from cms_refs where rec_id in
(select (random()*(select last_value from cms_refs_rec_id_seq))::bigint
from generate_series(1,10))
union all (select * from cms_refs order by random() limit 1)
limit 1;
Note the UNION ALL clause. In this case, if the first part returns any data, the second one is NEVER executed!
You may also try using the NEWID() function.
Just write your query and use ORDER BY NEWID(). It's quite random.
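For example (MyTable is just a placeholder name):
SELECT TOP 1 * FROM MyTable ORDER BY NEWID()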
Didn't quite see this variation in the answers yet. I had an additional constraint where I needed, given an initial seed, to select the same set of rows each time.
For MS SQL:
Minimum example:
select top 10 percent *
from table_name
order by rand(checksum(*))
Normalized execution time: 1.00
NewId() example:
select top 10 percent *
from table_name
order by newid()
Normalized execution time: 1.02
NewId() is insignificantly slower than rand(checksum(*)), so you may not want to use it against large record sets.
Selection with Initial Seed:
declare @seed int
set @seed = Year(getdate()) * month(getdate()) /* any other initial seed here */
select top 10 percent *
from table_name
order by rand(checksum(*) % @seed) /* any other math function here */
If you need to select the same set given a seed, this seems to work.
In MSSQL (tested on 11.0.5569) using
SELECT TOP 100 * FROM employee ORDER BY CRYPT_GEN_RANDOM(10)
is significantly faster than
SELECT TOP 100 * FROM employee ORDER BY NEWID()
For Firebird:
Select FIRST 1 column from table ORDER BY RAND()
In SQL Server you can combine TABLESAMPLE with NEWID() to get pretty good randomness and still have speed. This is especially useful if you really only want 1, or a small number, of rows.
SELECT TOP 1 * FROM [table]
TABLESAMPLE (500 ROWS)
ORDER BY NEWID()
I have to agree with CD-MaN: Using "ORDER BY RAND()" will work nicely for small tables or when you do your SELECT only a few times.
I also use the "num_value >= RAND() * ..." technique, and if I really want to have random results I have a special "random" column in the table that I update once a day or so. That single UPDATE run will take some time (especially because you'll have to have an index on that column), but it's much faster than creating random numbers for every row each time the select is run.
Be careful because TABLESAMPLE doesn't actually return a random sample of rows. It directs your query to look at a random sample of the 8KB pages that make up your table. Then, your query is executed against the data contained in these pages. Because of how data may be grouped on these pages (insertion order, etc.), this could lead to data that isn't actually a random sample.
See: http://www.mssqltips.com/tip.asp?tip=1308
This MSDN page for TABLESAMPLE includes an example of how to generate an actually random sample of data.
http://msdn.microsoft.com/en-us/library/ms189108.aspx
It seems that many of the ideas listed still use ordering.
However, if you use a temporary table, you are able to assign a random index (like many of the solutions have suggested), and then grab the first one that is greater than an arbitrary number between 0 and 1.
For example (for DB2):
WITH TEMP AS (
SELECT COLUMN, RAND() AS IDX FROM TABLE)
SELECT COLUMN FROM TEMP WHERE IDX > .5
FETCH FIRST 1 ROW ONLY
A simple and efficient way from http://akinas.com/pages/en/blog/mysql_random_row/
SET @i = (SELECT FLOOR(RAND() * COUNT(*)) FROM table);
PREPARE get_stmt FROM 'SELECT * FROM table LIMIT ?, 1';
EXECUTE get_stmt USING @i;
There is a better solution for Oracle than using dbms_random.value, since ordering rows by dbms_random.value requires a full scan and is quite slow for large tables.
Use this instead:
SELECT *
FROM employee sample(1)
WHERE rownum=1
For SQL Server 2005 and above, extending @GreyPanther's answer for the cases when num_value does not have continuous values. This also works for cases when the dataset is not evenly distributed and when num_value is not a number but a unique identifier.
WITH CTE_Table (SelRow, num_value)
AS
(
SELECT ROW_NUMBER() OVER(ORDER BY ID) AS SelRow, num_value FROM table
)
SELECT * FROM table Where num_value = (
SELECT TOP 1 num_value FROM CTE_Table WHERE SelRow >= RAND() * (SELECT MAX(SelRow) FROM CTE_Table)
)
select r.id, r.name from table AS r
INNER JOIN(select CEIL(RAND() * (select MAX(id) from table)) as id) as r1
ON r.id >= r1.id ORDER BY r.id ASC LIMIT 1
This will require less computation time.