Efficient repeated sampling with replacement of a table in PostgreSQL or the like? - sql

I'm trying to check the distribution of numbers in a column of a table. Rather than calculate on the entire table (which is large - tens of gigabytes) I want to estimate via repeated sampling. I think the typical Postgres method for this is
select COLUMN
from TABLE
order by RANDOM()
limit 1;
but this is slow for repeated sampling, especially since (I suspect) it manipulates the entire column each time I run it.
Is there a better way?
EDIT: Just to make sure I expressed it right, I want to do the following:
for(i in 1:numSamples)
draw 500 random rows
end
without having to reorder the entire massive table each time. Perhaps I could get all of the table row IDs, sample from them in R or something, and then just request those rows?
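For example, something along these lines (just a sketch of that idea; the table and column names are placeholders and id is assumed to be an indexed key):
select id from big_table;        -- pull the IDs once, sample 500 of them in R

select *
from big_table
where id = any(:sampled_ids);    -- :sampled_ids = the 500 IDs chosen client-side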

As you want a sample of the data, what about using the estimated size of the table and then calculating a percentage of that as the sample?
The table pg_class stores an estimate of the number of rows for each table (updated by the vacuum process if I'm not mistaken).
So the following would select 1% of all rows from that table:
with estimated_rows as (
select reltuples as num_rows
from pg_class t
join pg_namespace n on n.oid = t.relnamespace
where t.relname = 'some_table'
and n.nspname = 'public'
)
select *
from some_table
limit (select 0.01 * num_rows from estimated_rows)
;
If you do that very often you might want to create a function so you could do something like this:
select *
from some_table
limit (select estimate_percent(0.01, 'public', 'some_table'))
;
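A minimal sketch of such a function, assuming the same pg_class lookup as above (the name and signature simply mirror the hypothetical call shown):
-- Returns <fraction> of the estimated row count of schema_name.table_name.
create or replace function estimate_percent(fraction numeric, schema_name text, table_name text)
  returns bigint
  language sql
  stable
as $$
  select (fraction * t.reltuples::numeric)::bigint
  from pg_class t
  join pg_namespace n on n.oid = t.relnamespace
  where t.relname = table_name
    and n.nspname = schema_name;
$$;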

Create a temporary table from the target table adding a row number column
drop table if exists temp_t;
create temporary table temp_t as
select *, (row_number() over())::int as rn
from t
Create a lighter temporary table by selecting only the columns that will be used in the sampling and filtering as necessary.
Index it by the row number column
create index temp_t_rn on temp_t(rn);
analyze temp_t;
Issue this query for each sample
with r as (
select ceiling(random() * (select max(rn) from temp_t))::int as rn
from generate_series(1, 500) s
)
select *
from temp_t
where rn in (select rn from r)


Efficiently selecting distinct (a, b) from big table

I have a table with around 54 million rows in a Postgres 9.6 DB and would like to find all distinct pairs of two columns (there are around 4 million such values). I have an index over the two columns of interest:
create index ab_index on tbl (a, b)
What is the most efficient way to get such pairs? I have tried:
select a,b
from tbl
where a>$previouslargesta
group by a,b
order by a,b
limit 1000
And also:
select distinct(a,b)
from tbl
where a>previouslargesta
order by a,b
limit 1000
Also this recursive query:
with recursive t AS (
select min(a) AS a from tbl
union all
select (select min(a) from tbl where a > t.a)
FROM t)
select a FROM t
But all are slooooooow.
Is there a faster way to get this information?
Your table has 54 million rows and ...
there are around 4 million such values
7.4% of all rows is a high percentage. An index can mostly only help by providing pre-sorted data, ideally in an index-only scan. There are more sophisticated techniques for smaller result sets (see below), and there are much faster ways for paging, which returns far fewer rows at a time (see below), but for the general case a plain DISTINCT may be among the fastest:
SELECT DISTINCT a, b -- *no* parentheses
FROM tbl;
-- ORDER BY a, b -- ORDER BY wasn't mentioned as a requirement ...
Don't confuse it with DISTINCT ON, which would require parentheses. See:
Select first row in each GROUP BY group?
The B-tree index ab_index you have on (a, b) is already the best index for this. It has to be scanned in its entirety, though. The challenge is to have enough work_mem to process all in RAM. With standard settings it occupies at least 1831 MB on disk, typically more with some bloat. If you can afford it, run the query with a work_mem setting of 2 GB (or more) in your session. See:
Configuration parameter work_mem in PostgreSQL on Linux
SET work_mem = '2 GB';
SELECT DISTINCT a, b ...
RESET work_mem;
A read-only table helps. Otherwise you need aggressive enough VACUUM settings to allow an index-only scan. And more RAM would help (with appropriate settings) to keep the index cached.
Also upgrade to the latest version of Postgres (11.3 as of writing). There have been many improvements for big data.
Paging
If you want to add paging as indicated by your sample query, urgently consider ROW value comparison. See:
Optimize query with OFFSET on large table
SQL syntax term for 'WHERE (col1, col2) < (val1, val2)'
SELECT DISTINCT a, b
FROM tbl
WHERE (a, b) > ($previous_a, $previous_b) -- !!!
ORDER BY a, b
LIMIT 1000;
Recursive CTE
This may or may not be faster for the general big query as well. For the small subset, it becomes much more attractive:
WITH RECURSIVE cte AS (
( -- parentheses required due to LIMIT 1
SELECT a, b
FROM tbl
WHERE (a, b) > ($previous_a, $previous_b) -- !!!
ORDER BY a, b
LIMIT 1
)
UNION ALL
SELECT x.a, x.b
FROM cte c
CROSS JOIN LATERAL (
SELECT t.a, t.b
FROM tbl t
WHERE (t.a, t.b) > (c.a, c.b) -- lateral reference
ORDER BY t.a, t.b
LIMIT 1
) x
)
TABLE cte
LIMIT 1000;
This can make perfect use of your index and should be as fast as it gets.
Further reading:
Optimize GROUP BY query to retrieve latest row per user
For repeated use and no or little write load on the table, consider a MATERIALIZED VIEW, based on one of the above queries - for much faster read performance.
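For example, a rough sketch (the view name is made up):
-- Materialize the distinct pairs once, then read from the view.
CREATE MATERIALIZED VIEW tbl_distinct_ab AS
SELECT DISTINCT a, b
FROM   tbl;

CREATE INDEX ON tbl_distinct_ab (a, b);  -- supports fast paging on the view

-- Re-run the underlying DISTINCT when the base table has changed enough to matter:
REFRESH MATERIALIZED VIEW tbl_distinct_ab;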
I cannot vouch for the performance in Postgres, but this is a technique I have used on SQL Server in a similar case, and it proved faster than the others (a rough Postgres sketch of the steps follows below):
get distinct A into a temp table a
get distinct B into a temp table b
cross join the a and b temps into a Cartesian temp abALL
rank abALL (optionally)
create a view myview as select top 1 a, b from tbl (your main table)
join temp abALL with myview into a temp abCLEAN
rank abCLEAN here if you haven't ranked above
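A rough translation of those steps into Postgres syntax (temp table names are illustrative; the optional ranking steps are omitted):
-- Distinct values of each column, then all candidate pairs ...
create temporary table temp_a as select distinct a from tbl;
create temporary table temp_b as select distinct b from tbl;

create temporary table ab_all as
select ta.a, tb.b
from temp_a ta
cross join temp_b tb;

-- ... reduced to the pairs that actually occur in the big table.
create temporary table ab_clean as
select ab.a, ab.b
from ab_all ab
where exists (select 1 from tbl t where t.a = ab.a and t.b = ab.b);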

Fetch the number of rows that can be returned by a select query

I'm trying to fetch data and show it in a table with pagination, so I use limit and offset for that, but I also need to show the number of rows that the query could return in total. Is there any way to get that?
I tried
resultset.last() and getRow()
select count(*) from (query) myNewTable;
In both cases I'm getting the correct answer, but is this the right way to do it? Performance is a concern.
We can get the limited records using the code below.
First, we need to set how many records we want, like below:
var limit = 10;
After that, pass this limit to the statement below:
WITH
Temp AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY primaryKey DESC) AS RowNumber,
        *
    FROM
        myNewTable
),
Temp2 AS (
    SELECT COUNT(*) AS TotalCount FROM Temp
)
SELECT TOP (:limit) * FROM Temp, Temp2 WHERE RowNumber > :offset ORDER BY RowNumber
This runs on MSSQL; for MySQL, replace TOP with a LIMIT clause.
There is no easy way of doing this.
1. As you found out, it usually boils down to executing 2 queries:
Executing SELECT with limit and offset in order to fetch the data that you need.
Executing a COUNT(*) in order to count the total number of pages.
This approach might work for tables that don't have a lot of rows, or when you filter the data (in the COUNT and SELECT queries) on a column that is indexed.
2. If your table is large, but the data that you need to show represents a smaller percentage of the data from the table and the data shares a common trait (for example, the data in all of your pages is created on a single day) you can use partitioning. Executing COUNT and SELECT on a single partition will be much faster than executing them on the whole table.
3. You can create another table which will store the value of the COUNT query.
For example, let's say that your big_table table looks like this:
id | user_id | timestamp_column | text_column | another_text_column
Now, your SELECT query looks like this:
SELECT * FROM big_table WHERE user_id = 4 ORDER BY timestamp_column LIMIT 20 OFFSET 20;
And your count query:
SELECT COUNT(*) FROM big_table WHERE user_id = 4;
You could create a count_table that will have the following format:
user_id | count
Once you fill this table with the current data in the system, you will create a trigger which will update this table on every insert or update of the big_table.
This way, the count query will be really fast, because it will be executed on the count_table, for example:
SELECT count FROM count_table WHERE user_id = 4
The drawback of this approach is that the insert in the big_table will be slower, since the trigger will fire and update the count_table on every insert.
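A rough sketch of such a trigger in PostgreSQL (names follow the example above; the function and trigger names are invented, count_table is assumed to have a unique key on user_id so ON CONFLICT works, and deletes or updates of user_id would need extra handling):
CREATE OR REPLACE FUNCTION bump_user_count()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
  -- Maintain one running counter per user_id in count_table.
  INSERT INTO count_table AS c (user_id, count)
  VALUES (NEW.user_id, 1)
  ON CONFLICT (user_id) DO UPDATE SET count = c.count + 1;
  RETURN NEW;
END;
$$;

CREATE TRIGGER big_table_count_trg
AFTER INSERT ON big_table
FOR EACH ROW EXECUTE FUNCTION bump_user_count();  -- EXECUTE PROCEDURE on Postgres 10 and older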
These are the approaches that you can try, but in the end it all depends on the size and type of your data.

Query on the first n rows from a large table without checking all rows - oracle sql

Every query takes a lot of time on my table, which is very large. For testing purposes I want my queries to run against only the first few rows of the table. For example, in select * from table where ROWNUM=1, every row would be checked to see whether its ROWNUM is 1 or not. But I want to test my queries on a few rows only, to save time.
If you want to select only the top n rows then you can use:
SELECT *
FROM TABLE
WHERE ROWNUM <= N;
rownum is a pseudocolumn: it does not exist in the table records and is assigned at runtime once the predicate (where clause) phase of the query is completed. Because of this, only the first of the following queries returns a result:
select * from hr.employees where employee_id >190 and rownum<2;-- Will return one row
select * from hr.employees where employee_id >190 and rownum>2;-- Won't return any resultset
select * from hr.employees where employee_id >190 and rownum=3;-- Won't return any resultset
The reason the last two queries return no result set is that once the predicate (employee_id > 190) is evaluated and a rownum is assigned to the first candidate row, then for query 2 (rownum > 2) the check 1 > 2 is false and for query 3 (rownum = 3) the check 1 = 3 is false; the row is rejected, rownum is never incremented, and so no data is returned.
How about creating a mini test-table for your testing queries?
create table my_test_table as select * from big_table where rownum <= n;
Now you could run something like
select count(*) from my_test_table where color='red';
and divide that result by n to get your estimate for what fraction of the rows have color='red' in your big database. Of course, note that you could get really unlucky (i.e. your small table could be a poor sample of the total table population), in which case you can probably just increase n to achieve a better sample.
If you can't or would rather not create a new table, you can certainly just use a nested query:
select * from (select * from big_table where rownum <= n)
where <condition>;

Best way to select random rows PostgreSQL

I want a random selection of rows in PostgreSQL, I tried this:
select * from table where random() < 0.01;
But some other recommend this:
select * from table order by random() limit 1000;
I have a very large table with 500 Million rows, I want it to be fast.
Which approach is better? What are the differences? What is the best way to select random rows?
Fast ways
Given your specifications (plus additional info in the comments),
You have a numeric ID column (integer numbers) with only few (or moderately few) gaps.
Obviously no or few write operations.
Your ID column has to be indexed! A primary key serves nicely.
The query below does not need a sequential scan of the big table, only an index scan.
First, get estimates for the main query:
SELECT count(*) AS ct -- optional
, min(id) AS min_id
, max(id) AS max_id
, max(id) - min(id) AS id_span
FROM big;
The only possibly expensive part is the count(*) (for huge tables). Given above specifications, you don't need it. An estimate to replace the full count will do just fine, available at almost no cost:
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint AS ct
FROM pg_class
WHERE oid = 'big'::regclass; -- your table name
Detailed explanation:
Fast way to discover the row count of a table in PostgreSQL
As long as ct isn't much smaller than id_span, the query will outperform other approaches.
WITH params AS (
SELECT 1 AS min_id -- minimum id <= current min id
, 5100000 AS id_span -- rounded up. (max_id - min_id + buffer)
)
SELECT *
FROM (
SELECT p.min_id + trunc(random() * p.id_span)::integer AS id
FROM params p
, generate_series(1, 1100) g -- 1000 + buffer
GROUP BY 1 -- trim duplicates
) r
JOIN big USING (id)
LIMIT 1000; -- trim surplus
Generate random numbers in the id space. You have "few gaps", so add 10 % (enough to easily cover the blanks) to the number of rows to retrieve.
Each id can be picked multiple times by chance (though very unlikely with a big id space), so group the generated numbers (or use DISTINCT).
Join the ids to the big table. This should be very fast with the index in place.
Finally trim surplus ids that have not been eaten by dupes and gaps. Every row has a completely equal chance to be picked.
Short version
You can simplify this query. The CTE in the query above is just for educational purposes:
SELECT *
FROM (
SELECT DISTINCT 1 + trunc(random() * 5100000)::integer AS id
FROM generate_series(1, 1100) g
) r
JOIN big USING (id)
LIMIT 1000;
Refine with rCTE
Especially if you are not so sure about gaps and estimates.
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * 5100000)::int AS id
FROM generate_series(1, 1030) -- 1000 + few percent - adapt to your needs
LIMIT 1030 -- hint for query planner
) r
JOIN big b USING (id) -- eliminate miss
UNION -- eliminate dupe
SELECT b.*
FROM (
SELECT 1 + trunc(random() * 5100000)::int AS id
FROM random_pick r -- plus 3 percent - adapt to your needs
LIMIT 999 -- less than 1000, hint for query planner
) r
JOIN big b USING (id) -- eliminate miss
)
TABLE random_pick
LIMIT 1000; -- actual limit
We can work with a smaller surplus in the base query. If there are too many gaps so we don't find enough rows in the first iteration, the rCTE continues to iterate with the recursive term. We still need relatively few gaps in the ID space or the recursion may run dry before the limit is reached - or we have to start with a large enough buffer which defies the purpose of optimizing performance.
Duplicates are eliminated by the UNION in the rCTE.
The outer LIMIT makes the CTE stop as soon as we have enough rows.
This query is carefully drafted to use the available index, generate actually random rows and not stop until we fulfill the limit (unless the recursion runs dry). There are a number of pitfalls here if you are going to rewrite it.
Wrap into function
For repeated use with the same table with varying parameters:
CREATE OR REPLACE FUNCTION f_random_sample(_limit int = 1000, _gaps real = 1.03)
RETURNS SETOF big
LANGUAGE plpgsql VOLATILE ROWS 1000 AS
$func$
DECLARE
_surplus int := _limit * _gaps;
_estimate int := ( -- get current estimate from system
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint
FROM pg_class
WHERE oid = 'big'::regclass);
BEGIN
RETURN QUERY
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * _estimate)::int
FROM generate_series(1, _surplus) g
LIMIT _surplus -- hint for query planner
) r (id)
JOIN big USING (id) -- eliminate misses
UNION -- eliminate dupes
SELECT *
FROM (
SELECT 1 + trunc(random() * _estimate)::int
FROM random_pick -- just to make it recursive
LIMIT _limit -- hint for query planner
) r (id)
JOIN big USING (id) -- eliminate misses
)
TABLE random_pick
LIMIT _limit;
END
$func$;
Call:
SELECT * FROM f_random_sample();
SELECT * FROM f_random_sample(500, 1.05);
Generic function
We can make this generic to work for any table with a unique integer column (typically the PK): Pass the table as polymorphic type and (optionally) the name of the PK column and use EXECUTE:
CREATE OR REPLACE FUNCTION f_random_sample(_tbl_type anyelement
, _id text = 'id'
, _limit int = 1000
, _gaps real = 1.03)
RETURNS SETOF anyelement
LANGUAGE plpgsql VOLATILE ROWS 1000 AS
$func$
DECLARE
-- safe syntax with schema & quotes where needed
_tbl text := pg_typeof(_tbl_type)::text;
_estimate int := (SELECT (reltuples / relpages
* (pg_relation_size(oid) / 8192))::bigint
FROM pg_class -- get current estimate from system
WHERE oid = _tbl::regclass);
BEGIN
RETURN QUERY EXECUTE format(
$$
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * $1)::int
FROM generate_series(1, $2) g
LIMIT $2 -- hint for query planner
) r(%2$I)
JOIN %1$s USING (%2$I) -- eliminate misses
UNION -- eliminate dupes
SELECT *
FROM (
SELECT 1 + trunc(random() * $1)::int
FROM random_pick -- just to make it recursive
LIMIT $3 -- hint for query planner
) r(%2$I)
JOIN %1$s USING (%2$I) -- eliminate misses
)
TABLE random_pick
LIMIT $3;
$$
, _tbl, _id
)
USING _estimate -- $1
, (_limit * _gaps)::int -- $2 ("surplus")
, _limit -- $3
;
END
$func$;
Call with defaults (important!):
SELECT * FROM f_random_sample(null::big); --!
Or more specifically:
SELECT * FROM f_random_sample(null::"my_TABLE", 'oDD ID', 666, 1.15);
About the same performance as the static version.
Related:
Refactor a PL/pgSQL function to return the output of various SELECT queries - chapter "Various complete table types"
Return SETOF rows from PostgreSQL function
Format specifier for integer variables in format() for EXECUTE?
INSERT with dynamic table name in trigger function
This is safe against SQL injection. See:
Table name as a PostgreSQL function parameter
SQL injection in Postgres functions vs prepared queries
Possible alternative
If your requirements allow identical sets for repeated calls (and we are talking about repeated calls), consider a MATERIALIZED VIEW. Execute the above query once and write the result to a table. Users get a quasi-random selection at lightning speed. Refresh your random pick at intervals or events of your choosing.
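A rough example, assuming the plain function defined above (the view name is invented):
-- Persist one quasi-random pick; the volatile function is re-run on every refresh.
CREATE MATERIALIZED VIEW mv_random_pick AS
SELECT * FROM f_random_sample();  -- default: 1000 rows

-- Whenever a new selection is wanted (e.g. from a scheduled job):
REFRESH MATERIALIZED VIEW mv_random_pick;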
Postgres 9.5 introduces TABLESAMPLE SYSTEM (n)
Where n is a percentage. The manual:
The BERNOULLI and SYSTEM sampling methods each accept a single
argument which is the fraction of the table to sample, expressed as a
percentage between 0 and 100. This argument can be any real-valued expression.
Bold emphasis mine. It's very fast, but the result is not exactly random. The manual again:
The SYSTEM method is significantly faster than the BERNOULLI method
when small sampling percentages are specified, but it may return a
less-random sample of the table as a result of clustering effects.
The number of rows returned can vary wildly. For our example, to get roughly 1000 rows:
SELECT * FROM big TABLESAMPLE SYSTEM ((1000 * 100) / 5100000.0);
Related:
Fast way to discover the row count of a table in PostgreSQL
Or install the additional module tsm_system_rows to get the number of requested rows exactly (if there are enough) and allow for the more convenient syntax:
SELECT * FROM big TABLESAMPLE SYSTEM_ROWS(1000);
See Evan's answer for details.
But that's still not exactly random.
You can examine and compare the execution plan of both by using
EXPLAIN select * from table where random() < 0.01;
EXPLAIN select * from table order by random() limit 1000;
A quick test on a large table [1] shows that the ORDER BY first sorts the complete table and then picks the first 1000 items. Sorting a large table not only reads that table but also involves reading and writing temporary files. The where random() < 0.01 only scans the complete table once.
For large tables this might not be what you want, as even one complete table scan might take too long.
A third proposal would be
select * from table where random() < 0.01 limit 1000;
This one stops the table scan as soon as 1000 rows have been found and therefore returns sooner. Of course this skews the randomness a bit, but perhaps that is good enough in your case.
Edit: Besides these considerations, you might check out the questions already asked on this topic. Searching for [postgresql] random returns quite a few hits.
quick random row selection in Postgres
How to retrieve randomized data rows from a postgreSQL table?
postgres: get random entries from table - too slow
And a linked article by depesz outlining several more approaches:
http://www.depesz.com/index.php/2007/09/16/my-thoughts-on-getting-random-row/
1 "large" as in "the complete table will not fit into the memory".
postgresql order by random(), select rows in random order:
These are all slow because they do a tablescan to guarantee that every row gets an exactly equal chance of being chosen:
select your_columns from your_table ORDER BY random()
select * from
(select distinct your_columns from your_table) table_alias
ORDER BY random()
select your_columns from your_table ORDER BY random() limit 1
If you know how many rows N are in the table:
offset by floored random is constant time. However I am NOT convinced that OFFSET is producing a true random sample. It's simulating it by getting 'the next bunch' and tablescanning that, so you can step through, which isn't quite the same as above.
SELECT myid FROM mytable OFFSET floor(random() * N) LIMIT 1;
Roll your own constant-time "select random N rows" with a periodic table scan, to be absolutely sure of a random row:
If your table is huge then the above table scans are a show-stopper, taking up to 5 minutes to finish.
To go faster you can schedule a behind-the-scenes nightly table-scan reindexing which will guarantee a perfectly random selection in O(1) constant time, except during the nightly reindexing table scan, where it must wait for maintenance to finish before you may receive another random row.
--Create a demo table with lots of random nonuniform data, big_data
--is your huge table you want to get random rows from in constant time.
drop table if exists big_data;
CREATE TABLE big_data (id serial unique, some_data text );
CREATE INDEX ON big_data (id);
--Fill it with ten million rows which simulates your beautiful data:
INSERT INTO big_data (some_data) SELECT md5(random()::text) AS some_data
FROM generate_series(1,10000000);
--This delete statement puts holes in your index
--making it NONuniformly distributed
DELETE FROM big_data WHERE id IN (2, 4, 6, 7, 8);
--Do the nightly maintenance task on a schedule at 1AM.
drop table if exists big_data_mapper;
CREATE TABLE big_data_mapper (id serial, big_data_id int);
CREATE INDEX ON big_data_mapper (id);
CREATE INDEX ON big_data_mapper (big_data_id);
INSERT INTO big_data_mapper(big_data_id) SELECT id FROM big_data ORDER BY id;
--We have to use a function because the big_data_mapper might be out-of-date
--in between nightly tasks, so to solve the problem of a missing row,
--you try again until you succeed. In the event the big_data_mapper
--is broken, it tries 25 times then gives up and returns -1.
CREATE or replace FUNCTION get_random_big_data_id()
RETURNS int language plpgsql AS $$
declare
response int;
BEGIN
--Loop is required because big_data_mapper could be old
--Keep rolling the dice until you find one that hits.
for counter in 1..25 loop
SELECT big_data_id
FROM big_data_mapper OFFSET floor(random() * (
select max(id) biggest_value from big_data_mapper
)
) LIMIT 1 into response;
if response is not null then
return response;
end if;
end loop;
return -1;
END;
$$;
--get a random big_data id in constant time:
select get_random_big_data_id();
--Get 1 random row from big_data table in constant time:
select * from big_data where id in (
select get_random_big_data_id() from big_data limit 1
);
┌─────────┬──────────────────────────────────┐
│ id │ some_data │
├─────────┼──────────────────────────────────┤
│ 8732674 │ f8d75be30eff0a973923c413eaf57ac0 │
└─────────┴──────────────────────────────────┘
--Get 3 random rows from big_data in constant time:
select * from big_data where id in (
select get_random_big_data_id() from big_data limit 3
);
┌─────────┬──────────────────────────────────┐
│ id │ some_data │
├─────────┼──────────────────────────────────┤
│ 2722848 │ fab6a7d76d9637af89b155f2e614fc96 │
│ 8732674 │ f8d75be30eff0a973923c413eaf57ac0 │
│ 9475611 │ 36ac3eeb6b3e171cacd475e7f9dade56 │
└─────────┴──────────────────────────────────┘
--Test what happens when big_data_mapper stops receiving
--nightly reindexing.
delete from big_data_mapper where 1=1;
select get_random_big_data_id(); --It tries 25 times, and returns -1
--which means wait N minutes and try again.
Adapted from: https://www.gab.lc/articles/bigdata_postgresql_order_by_random
Alternatively, if all the above is too much work:
A simpler, good-enough solution for constant-time random row selection is to add a new column to your big table, big_data.mapper_int, and make it not null with a unique index. Every night, reset the column with a unique integer between 1 and max(n). To get a random row you choose a random integer between 0 and max(id) and return the row whose mapper_int is that value. If there's no row with that value (because the row has changed since the re-index), choose another random number. If a row is added to big_data, populate its mapper_int with max(id) + 1.
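A rough sketch of that idea (the renumbering query is just one possible way to do the nightly reset; it assumes the big_data table from above):
-- One-time setup: the mapper column plus a unique index for fast equality lookups.
ALTER TABLE big_data ADD COLUMN mapper_int int;
CREATE UNIQUE INDEX ON big_data (mapper_int);

-- Nightly reset: renumber all rows gap-free from 1 to N.
UPDATE big_data b
SET    mapper_int = s.rn
FROM  (SELECT id, row_number() OVER (ORDER BY id) AS rn FROM big_data) s
WHERE  b.id = s.id;

-- Pick: one random integer, one index lookup.
-- If no row comes back (the numbering is stale), roll again in the application.
SELECT b.*
FROM  (SELECT 1 + floor(random() * (SELECT max(mapper_int) FROM big_data))::int AS pick) p
JOIN   big_data b ON b.mapper_int = p.pick;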
Alternatively TableSample to the rescue:
If you have PostgreSQL version 9.5 or newer then tablesample can do a constant-time random sample without a heavy tablescan.
https://wiki.postgresql.org/wiki/TABLESAMPLE_Implementation
--Select 1 percent of rows from yourtable,
--display the first 100 rows, order by column a_column
select * from yourtable TABLESAMPLE SYSTEM (1)
order by a_column
limit 100;
TableSample is doing some stuff behind the scenes that takes some time and I don't like it, but is faster than order by random(). Good, fast, cheap, choose any two on this job.
Starting with PostgreSQL 9.5, there's a new syntax dedicated to getting random elements from a table:
SELECT * FROM mytable TABLESAMPLE SYSTEM (5);
This example will give you 5% of elements from mytable.
See more explanation on the documentation: http://www.postgresql.org/docs/current/static/sql-select.html
The one with the ORDER BY is going to be the slower one.
select * from table where random() < 0.01; goes record by record, and decides to randomly filter it or not. This is going to be O(N) because it only needs to check each record once.
select * from table order by random() limit 1000; is going to sort the entire table, then pick the first 1000. Aside from any voodoo magic behind the scenes, the order by is O(N * log N).
The downside to the random() < 0.01 one is that you'll get a variable number of output records.
Note, there is a better way to shuffle a set of data than sorting by random: the Fisher-Yates Shuffle, which runs in O(N). Implementing the shuffle in SQL sounds like quite the challenge, though.
select * from table order by random() limit 1000;
If you know how many rows you want, check out tsm_system_rows.
The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in the TABLESAMPLE clause of a SELECT command.
This table sampling method accepts a single integer argument that is the maximum number of rows to read. The resulting sample will always contain exactly that many rows, unless the table does not contain enough rows, in which case the whole table is selected. Like the built-in SYSTEM sampling method, SYSTEM_ROWS performs block-level sampling, so that the sample is not completely random but may be subject to clustering effects, especially if only a small number of rows are requested.
First install the extension
CREATE EXTENSION tsm_system_rows;
Then your query,
SELECT *
FROM table
TABLESAMPLE SYSTEM_ROWS(1000);
Here is a solution that works for me. I guess it's very simple to understand and execute.
SELECT
field_1,
field_2,
field_3,
random() as ordering
FROM
big_table
WHERE
some_conditions
ORDER BY
ordering
LIMIT 1000;
If you want just one row, you can use a calculated offset derived from count.
select * from table_name limit 1
offset floor(random() * (select count(*) from table_name));
One lesson from my experience:
offset floor(random() * N) limit 1 is not faster than order by random() limit 1.
I thought the offset approach would be faster because it should save the time of sorting in Postgres. Turns out it wasn't.
I think the best and simplest way in postgreSQL is:
SELECT * FROM tableName ORDER BY random() LIMIT 1
A variation of the materialized view "Possible alternative" outlined by Erwin Brandstetter is possible.
Say, for example, that you don't want duplicates in the randomized values that are returned. An example use case is to generate short codes which can only be used once.
The primary table containing your (non-randomized) set of values must have some expression that determines which rows are "used" and which aren't — here I'll keep it simple by just creating a boolean column with the name used.
Assume this is the input table (additional columns may be added as they do not affect the solution):
id_values

 id | used
----+--------
  1 | FALSE
  2 | FALSE
  3 | FALSE
  4 | FALSE
  5 | FALSE
...
Populate the ID_VALUES table as needed. Then, as described by Erwin, create a materialized view that randomizes the ID_VALUES table once:
CREATE MATERIALIZED VIEW id_values_randomized AS
SELECT id
FROM id_values
ORDER BY random();
Note that the materialized view does not contain the used column, because this will quickly become out-of-date. Nor does the view need to contain other columns that may be in the id_values table.
In order to obtain (and "consume") random values, use an UPDATE-RETURNING on id_values, selecting id_values from id_values_randomized with a join, and applying the desired criteria to obtain only relevant possibilities. For example:
UPDATE id_values
SET used = TRUE
WHERE id_values.id IN
(SELECT i.id
FROM id_values_randomized r INNER JOIN id_values i ON i.id = r.id
WHERE (NOT i.used)
LIMIT 1)
RETURNING id;
Change LIMIT as necessary -- if you need multiple random values at a time, change LIMIT to n where n is the number of values needed.
With the proper indexes on id_values, I believe the UPDATE-RETURNING should execute very quickly with little load. It returns randomized values with one database round-trip. The criteria for "eligible" rows can be as complex as required. New rows can be added to the id_values table at any time, and they will become accessible to the application as soon as the materialized view is refreshed (which can likely be run at an off-peak time). Creation and refresh of the materialized view will be slow, but it only needs to be executed when new id's added to the id_values table need to be made available.
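The refresh itself is a single statement (note that REFRESH ... CONCURRENTLY would additionally require a unique index on the materialized view):
-- Re-randomize the view; run at an off-peak time.
REFRESH MATERIALIZED VIEW id_values_randomized;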
Add a column called r with type serial. Index r.
Assume we have 200,000 rows; we are going to generate a random number n, where 0 < n <= 200,000.
Select rows with r > n, sort them ASC and select the smallest one.
Code:
select * from YOUR_TABLE
where r > (
select (
select reltuples::bigint AS estimate
from pg_class
where oid = 'public.YOUR_TABLE'::regclass) * random()
)
order by r asc limit(1);
The code is self-explanatory. The subquery in the middle is used to quickly estimate the table row count, as described in https://stackoverflow.com/a/7945274/1271094.
At the application level you need to execute the statement again if n > the number of rows, or if you need to select multiple rows.
I know I'm a little late to the party, but I just found this awesome tool called pg_sample:
pg_sample - extract a small, sample dataset from a larger PostgreSQL database while maintaining referential integrity.
I tried this with a 350M-row database and it was really fast; I don't know about the randomness.
./pg_sample --limit="small_table = *" --limit="large_table = 100000" -U postgres source_db | psql -U postgres target_db

Update n random rows in SQL

I have a table which has about 1000 rows. I have to update a column ("X") in the table to 'Y' for n random rows. For this I can use the following query:
update xyz set X='Y' where m in (
  select m from (
    select m
    from xyz
    order by dbms_random.value
  ) RNDM
  where rownum < n+1
);
Is there a more efficient way to write this query? The table has no index.
Please help?
I would use the ROWID:
UPDATE xyz SET x='Y' WHERE rowid IN (
SELECT r FROM (
SELECT ROWID r FROM xyz ORDER BY dbms_random.value
) RNDM WHERE rownum < n+1
)
The actual reason I would use ROWID isn't for efficiency though (it will still do a full table scan) - your SQL may not update the number of rows you want if column m isn't unique.
With only 1000 rows, you shouldn't really be worried about efficiency (maybe with a hundred million rows). Without any index on this table, you're stuck doing a full table scan to select random records.
[EDIT:] "But what if there are 100,000 rows"
Well, that's still 3 orders of magnitude less than 100 million.
I ran the following:
create table xyz as select * from all_objects;
[created about 50,000 rows on my system - non-indexed, just like your table]
UPDATE xyz SET owner='Y' WHERE rowid IN (
SELECT r FROM (
SELECT ROWID r FROM xyz ORDER BY dbms_random.value
) RNDM WHERE rownum < 10000
);
commit;
This took approximately 1.5 seconds. Maybe it was 1 second, maybe up to 3 seconds (didn't formally time it, it just took about enough time to blink).
You can improve performance by replacing the full table scan with a sample.
The first problem you run into is that you can't use SAMPLE in a DML subquery, ORA-30560: SAMPLE clause not allowed. But logically this is what is needed:
UPDATE xyz SET x='Y' WHERE rowid IN (
SELECT r FROM (
SELECT ROWID r FROM xyz sample(0.15) ORDER BY dbms_random.value
) RNDM WHERE rownum < 100/*n*/+1
);
You can get around this by using a collection to store the rowids, and then update the rows using the rowid collection. Normally breaking a query into separate parts and gluing them together with PL/SQL leads to horrible performance. But in this case you can still save a lot of time by significantly reducing the amount of data read.
declare
type rowid_nt is table of rowid;
rowids rowid_nt;
begin
--Get the rowids
SELECT r bulk collect into rowids
FROM (
SELECT ROWID r
FROM xyz sample(0.15)
ORDER BY dbms_random.value
) RNDM WHERE rownum < 100/*n*/+1;
--update the table
forall i in 1 .. rowids.count
update xyz set x = 'Y'
where rowid = rowids(i);
end;
/
I ran a simple test with 100,000 rows (on a table with only two columns), and N = 100.
The original version took 0.85 seconds, @Gerrat's answer took 0.7 seconds, and the PL/SQL version took 0.015 seconds.
But that's only one scenario, I don't have enough information to say my answer will always be better. As N increases the sampling advantage is lost, and the writing will be more significant than the reading. If you have a very small amount of data, the PL/SQL context switching overhead in my answer may make it slower than @Gerrat's solution.
For performance issues, the size of the table in bytes is usually much more important than the size in rows. 1000 rows that use a terabyte of space is much larger than 100 million rows that only use a gigabyte.
Here are some problems to consider with my answer:
Sampling does not always return exactly the percent you asked for. With 100,000 rows and a 0.15% sample size the number of rows returned was 147, not 150. That's why I used 0.15 instead of 0.10. You need to over-sample a little bit to ensure that you get more than N. How much do you need to over-sample? I have no idea, you'll probably have to test it and pick a safe number.
You need to know the approximate number of rows to pick the percent.
The percent must be a literal, so as the number of rows and N change, you'll need to use dynamic SQL to change the percent.
The following solution works just fine. It's performant and seems to be similar to sample():
create table t1 as
select level id, cast ('item'||level as varchar2(32)) item
from dual connect by level<=100000;
Table T1 created.
update t1 set item='*'||item
where exists (
select rnd from (
select dbms_random.value() rnd
from t1
) t2 where t2.rowid = t1.rowid and rnd < 0.15
);
14,858 rows updated.
Elapsed: 00:00:00.717
Note that the alias rnd must be included in the select clause. Otherwise the optimizer changes the filter predicate from rnd < 0.15 to DBMS_RANDOM.VALUE() < 0.15, and in that case dbms_random.value would be executed only once.
As mentioned in @JonHeller's answer, the best solution remains the PL/SQL code block because it allows avoiding a full table scan. Here is my suggestion:
create or replace type rowidListType is table of varchar(18);
/
create or replace procedure updateRandomly (prefix varchar2 := '*') is
rowidList rowidListType;
begin
select rowidtochar (rowid) bulk collect into rowidList
from t1 sample(15)
;
update t1 set item=prefix||item
where exists (
select 1 from table (rowidList) t2
where chartorowid(t2.column_value) = t1.rowid
);
dbms_output.put_line ('updated '||sql%rowcount||' rows.');
end;
/
begin updateRandomly; end;
/
Elapsed: 00:00:00.293
updated 14892 rows.