MySQL: Alternatives to ORDER BY RAND()

I've read about a few alternatives to MySQL's ORDER BY RAND() function, but most of them apply only where a single random result is needed.
Does anyone have any idea how to optimize a query that returns multiple random results, such as this:
SELECT u.id,
p.photo
FROM users u, profiles p
WHERE p.memberid = u.id
AND p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
ORDER BY RAND()
LIMIT 18

UPDATE 2016
This solution works best using an indexed column.
Here is a simple example of an optimized query benchmarked with 100,000 rows.
OPTIMIZED: 300ms
SELECT
g.*
FROM
table g
JOIN
(SELECT
id
FROM
table
WHERE
RAND() < (SELECT
((4 / COUNT(*)) * 10)
FROM
table)
ORDER BY RAND()
LIMIT 4) AS z ON z.id = g.id
note about limit amount: LIMIT 4 and 4 / COUNT(*). The 4s need to be the same number. Changing how many rows you return doesn't affect the speed much: benchmarks at LIMIT 4 and LIMIT 1000 are the same, and LIMIT 10,000 took it up to 600ms.
note about join: Randomizing just the id is faster than randomizing a whole row, since otherwise the entire row has to be copied into memory before it is randomized. The join can be to any table that is linked to the subquery; it is there to prevent table scans.
note about where clause: The WHERE condition cuts down the number of results being randomized. It takes a percentage of the results and sorts them, rather than the whole table.
note about sub query: If you have joins or extra WHERE conditions, you need to put them in both the subquery and the sub-subquery, so the count is accurate and the correct data comes back.
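For instance, a minimal sketch with a hypothetical extra condition (a status column that is not in the original example), repeated in both places:
SELECT g.*
FROM table g
JOIN
(SELECT id
FROM table
WHERE status = 1 -- hypothetical extra condition
AND RAND() < (SELECT ((4 / COUNT(*)) * 10)
FROM table
WHERE status = 1) -- repeated so the count stays accurate
ORDER BY RAND()
LIMIT 4) AS z ON z.id = g.id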
UNOPTIMIZED: 1200ms
SELECT
g.*
FROM
table g
ORDER BY RAND()
LIMIT 4
PROS
4x faster than ORDER BY RAND(). This solution works with any table that has an indexed column.
CONS
It gets complicated with complex queries, and you need to maintain the same conditions in two places (the subquery and the sub-subquery).

Here's an alternative, but it is still based on using RAND():
SELECT u.id,
p.photo,
ROUND(RAND() * x.m_id) 'rand_ind'
FROM users u,
profiles p,
(SELECT MAX(t.id) 'm_id'
FROM USERS t) x
WHERE p.memberid = u.id
AND p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
ORDER BY rand_ind
LIMIT 18
This is slightly more complex, but gave a better distribution of rand_ind values:
SELECT u.id,
p.photo,
FLOOR(1 + RAND() * x.m_id) 'rand_ind'
FROM users u,
profiles p,
(SELECT MAX(t.id) - 1 'm_id'
FROM USERS t) x
WHERE p.memberid = u.id
AND p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
ORDER BY rand_ind
LIMIT 18

It is not the fastest, but faster than the common ORDER BY RAND() way:
ORDER BY RAND() is not so slow when you use it to select only an indexed column. You can take all your ids in one query like this:
SELECT id
FROM testTable
ORDER BY RAND();
to get a sequence of random ids, and JOIN the result to another query with other SELECT or WHERE parameters:
SELECT t.*
FROM testTable t
JOIN
(SELECT id
FROM `testTable`
ORDER BY RAND()) AS z ON z.id = t.id
WHERE t.isVisible = 1
LIMIT 100;
in your case it would be:
SELECT u.id, p.photo
FROM users u
JOIN profiles p ON p.memberid = u.id
JOIN
(SELECT id
FROM users
ORDER BY RAND()) AS z ON z.id = u.id
WHERE p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
LIMIT 18
It's a blunt method and may not be appropriate for very big tables, but it's still faster than the common ORDER BY RAND(). I got 20 times faster execution time selecting 3000 random rows from almost 400,000.

ORDER BY RAND() is very slow on large tables. I found the following workaround in a PHP script:
SELECT MIN(id) AS min, MAX(id) AS max FROM table;
Then pick a random id in PHP:
$rand = rand($min, $max);
Then:
'SELECT * FROM table WHERE id > '.$rand.' LIMIT 1';
Seems to be quite fast....
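A single-statement sketch of the same idea, assuming an integer id column; note that gaps in the id sequence bias the selection toward rows that follow a gap:
SELECT t.*
FROM table t
JOIN (SELECT MIN(id) + FLOOR(RAND() * (MAX(id) - MIN(id))) AS rid -- random point between min and max
FROM table) r ON t.id >= r.rid
ORDER BY t.id
LIMIT 1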

I ran into this today while trying to use DISTINCT along with JOINs, but was getting duplicates, I assume because RAND() was making each joined row distinct. I muddled around a bit and found a solution that works, like this:
SELECT DISTINCT t.id,
t.photo
FROM (SELECT u.id,
p.photo,
RAND() as rand
FROM users u, profiles p
WHERE p.memberid = u.id
AND p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
ORDER BY rand) t
LIMIT 18

Create a column, or join to a SELECT, with random numbers (generated in PHP, for example) and order by this column.
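A sketch of that approach against the question's tables, with the random column coming from a derived table rather than PHP:
SELECT u.id, p.photo
FROM users u
JOIN profiles p ON p.memberid = u.id
JOIN (SELECT id, RAND() AS rnd FROM users) r ON r.id = u.id -- the generated random column
WHERE p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
ORDER BY r.rnd
LIMIT 18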

The solution I am using is also posted in the link below:
How can i optimize MySQL's ORDER BY RAND() function?
I am assuming your users table is going to be larger than your profiles table; if not, then it's 1-to-1 cardinality.
If so, I would first do a random selection on the users table before joining with the profiles table.
First, do the selection:
SELECT *
FROM users
WHERE users.ownership = 1 OR users.stamp = 1
Then from this pool, pick out random rows through calculated probability. If your table has M rows and you want to pick out N random rows, the probability of random selection should be N/M. Hence:
SELECT *
FROM
(
SELECT *
FROM users
WHERE users.ownership = 1 OR users.stamp = 1
) as U
WHERE
rand() <= $limitCount / (SELECT count(*) FROM users WHERE users.ownership = 1 OR users.stamp = 1)
Here N is $limitCount and M is the result of the subquery that counts the table rows. However, since we are working with probability, it is possible to get FEWER than $limitCount rows back. Therefore we should multiply N by a factor to increase the size of the random pool.
i.e:
SELECT *
FROM
(
SELECT *
FROM users
WHERE users.ownership = 1 OR users.stamp = 1
) as U
WHERE
rand() <= $limitCount * $factor / (SELECT count(*) FROM users WHERE users.ownership = 1 OR users.stamp = 1)
I usually set $factor = 2. You can set the factor to a lower value to further reduce the random pool size (e.g. 1.5). For example, with M = 100,000 qualifying rows, N = 18 and $factor = 2, the predicate becomes rand() <= 36 / 100000, which keeps roughly 36 rows.
At this point, we have already cut the M-row table down to roughly 2N rows. From here we can do a JOIN and then LIMIT.
SELECT *
FROM
(
SELECT *
FROM
(
SELECT *
FROM users
WHERE users.ownership = 1 OR users.stamp = 1
) as U
WHERE
rand() <= $limitCount * $factor / (SELECT count(*) FROM users WHERE users.ownership = 1 OR users.stamp = 1)
) as randUser
JOIN profiles
ON randUser.id = profiles.memberid AND profiles.photo != ''
LIMIT $limitCount
On a large table, this query will outperform a normal ORDER BY RAND() query.
Hope this helps!

SELECT
a.id,
mod_question AS modQuestion,
mod_answers AS modAnswers
FROM
b_ask_material AS a
INNER JOIN
(SELECT id
FROM b_ask_material
WHERE industry = 2
ORDER BY RAND()
LIMIT 100) AS b ON a.id = b.id

I had the same issue today; I fixed it by using LIMIT and OFFSET.
You can do it by iterating 18 times over a random set of offsets.
To avoid duplicates, you can create the random set of offsets like this in Python: sample(range(1, rows_count), random_rows_count).
Then, for each offset, fetch the corresponding row using OFFSET and LIMIT 1 and add it to a list (see the sketch below).
rows_count can be cached, to avoid the performance cost of counting the total number of rows on every request.
EDIT: it's actually what this post https://stackoverflow.com/a/40398306/3045926 proposes.
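A sketch of the per-offset query against the question's tables; the ? placeholder stands for one offset from the sampled set, and the ORDER BY keeps each offset pointing at a stable row:
SELECT u.id, p.photo
FROM users u
JOIN profiles p ON p.memberid = u.id
WHERE p.photo != ''
AND (u.ownership=1 OR u.stamp=1)
ORDER BY u.id -- stable order so offsets are repeatable
LIMIT 1 OFFSET ?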

Related

How to cast SELECT result to a typed value

Is there any common practice to use SELECT result as a typed value, e.g. for functions arguments?
Could it be something like this?
func((SELECT number FROM numbers WHERE user_id = 1 LIMIT 1).number::numeric)
I thought about CURSOR for such a task but I'm not really sure. Thank you for any advice!
I'm using PostgreSQL so if there is any specific solution feel free to share.
Use the FROM clause or a common table expression:
SELECT func(a.x, b.y)
FROM (SELECT ... LIMIT 1) AS a
CROSS JOIN (SELECT ... LIMIT 1) AS b;
or
WITH a AS (SELECT ... LIMIT 1),
b AS (SELECT ... LIMIT 1)
SELECT func(a.x, b.y)
FROM a CROSS JOIN b;
Doesn't this do what you want?
func( (SELECT number FROM numbers WHERE user_id = 1 LIMIT 1)::numeric )
The subquery is returning (at most) one value so it is a scalar subquery and is equivalent to a scalar reference in the query.
That said, there are many other ways to express this. For instance:
func( (SELECT number::numeric FROM numbers WHERE user_id = 1 LIMIT 1) )
or:
(SELECT func(number::numeric) FROM numbers WHERE user_id = 1 LIMIT 1)
or moving it to the FROM clause and using a lateral join, or calculating the result in a CTE or subquery.
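A sketch of the lateral variant, assuming func is an ordinary scalar function (PostgreSQL allows any function call in the FROM clause):
SELECT f.*
FROM (SELECT number
FROM numbers
WHERE user_id = 1
LIMIT 1) n
CROSS JOIN LATERAL func(n.number::numeric) AS f;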

Is there a better way to retrieve a random row from an Oracle table?

Not so long ago I needed to fetch a random row from a table in an Oracle database. The most widespread solution that I've found was this:
SELECT * FROM
( SELECT * FROM tabela WHERE warunek
ORDER BY dbms_random.value )
WHERE rownum = 1
However, this is very performance heavy for large tables, as it sorts the table in random order first, then grabs the first row.
Today, one of my colleagues suggested a different way:
SELECT * FROM (
SELECT * FROM MAIN_PRODUCT
WHERE ROWNUM <= CAST((SELECT COUNT(*) FROM MAIN_PRODUCT)*dbms_random.value AS INTEGER)
ORDER BY ROWNUM DESC
) WHERE ROWNUM = 1;
It works way faster and seems to return random values, but does it really? Could someone give an insight into whether it is really random and behaves the way as expected? I'm really curious why I haven't found this approach anywhere else while looking, and if it is indeed random and way better performance wise, why isn't it more widespread?
This is (possibly) the simplest query that gets the result, but the SELECT COUNT(*) FROM MAIN_PRODUCT will do a table scan; I doubt you can get a query that avoids that.
P.S. This query assumes there are no deleted records.
Query
SELECT *
FROM
MAIN_PRODUCT
WHERE
ROWNUM = FLOOR(
(dbms_random.value * (SELECT COUNT(*) FROM MAIN_PRODUCT)) + 1
)
FLOOR(
(dbms_random.value * (SELECT COUNT(*) FROM MAIN_PRODUCT)) + 1
)
will generate a number between 1 and the row count of the table; see the demo for how that works when you refresh it.
Oracle12c+ Query
SELECT *
FROM
MAIN_PRODUCT
WHERE
ROWNUM <= FLOOR(
(dbms_random.value * (SELECT COUNT(*) FROM MAIN_PRODUCT)) + 1
)
ORDER BY
ROWNUM DESC
FETCH FIRST ROW ONLY
The second query you have
SELECT * FROM (
SELECT * FROM MAIN_PRODUCT
WHERE ROWNUM <= CAST((SELECT COUNT(*) FROM MAIN_PRODUCT)*dbms_random.value AS INTEGER)
ORDER BY ROWNUM DESC
) WHERE ROWNUM = 1;
is excellent, except that it will get subsequent elements. dbms_random.value returns a real number between 0 and 1. Multiplying it by the number of rows gives you a truly random number, and the bottleneck here is counting the rows rather than generating a random value for each row.
Proof
Consider a number x with
0 <= x < 1
If we multiply it by n, we get
0 <= n * x < n
which is exactly what you need if you want to load a single element: with n = 1000 rows and x = 0.4237, for example, FLOOR(n * x + 1) picks row 424. The reason this is not widespread is that in many cases the performance issue is not felt, since there are only a few thousand records.
EDIT
If you need k records, not just the first one, it is slightly more difficult, but still solvable. The algorithm would be something like this (I do not have Oracle installed to test it, so I only describe the algorithm):
randomize(n, k)
randomized <- empty_set
while (k > 0) do
newValue <- random(n)
n <- n - 1
k <- k - 1
//find out how many elements are lower than newValue
//increase newValue with that amount
//find out if newValue became larger than some values which were larger than new value
//increase newValue with that amount
//repeat until there is no need to increase newValue
while end
randomize end
If you randomize k elements from n, then you will be able to use those values in your filter.
The key to improving performance is to lessen the load of the ORDER BY.
If you know about how many rows match the conditions, then you can filter before the sort. For instance, the following takes about 1% of the rows:
SELECT *
FROM (SELECT *
FROM tabela
WHERE warunek AND dbms_random.value < 0.01
ORDER BY dbms_random.value
)
WHERE rownum = 1;
A variation is to calculate the number of matching values. Then randomly select a smaller sample. The following gets about 100 matching rows and then sorts them for the random selection:
SELECT a.*
FROM (SELECT *
FROM (SELECT a.*, COUNT(*) OVER () as cnt
FROM tabela a
WHERE warunek
) a
WHERE dbms_random.value < 100 / cnt
ORDER BY dbms_random.value
) a
WHERE rownum = 1;

Selectivity estimation error on a simple query

Let us have a simple table tt created like this
WITH x AS (SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) v(n)), t1 AS
(
SELECT ones.n + 10 * tens.n + 100 * hundreds.n + 1000 * thousands.n + 10000 * tenthousands.n as id
FROM x ones, x tens, x hundreds, x thousands, x tenthousands, x hundredthousands
)
SELECT id,
id % 100 groupby,
row_number() over (partition by id % 100 order by id) orderby,
row_number() over (partition by id % 100 order by id) / (id % 100 + 1) local_search
INTO tt
FROM t1
I have a simple query Q1:
select distinct g1.groupby,
(select count(*) from tt g2
where local_search = 1 and g1.groupby = g2.groupby) as orderby
from tt g1
option(maxdop 1)
I would like to know why SQL Server estimates the result size so badly for Q1 (see the screenshot). Most of the operators in the query plan are estimated precisely; however, the root Hash Match operator introduces a completely insane guess.
To make it more interesting I have tried different rewritings of the Q1. If I apply decorrelation of the subquery I obtain an equivalent query Q2:
select main.groupby,
coalesce(sub1.orderby,0) orderby
from
(
select distinct g1.groupby
from tt g1
) main
left join
(
select groupby, count(*) orderby
from tt g2
where local_search = 1
group by groupby
) sub1 on sub1.groupby = main.groupby
option(maxdop 1)
This query is interesting in two respects: (1) the estimate is accurate (see the screenshot), and (2) it has a different query plan, which is more efficient than the query plan of Q1.
So the question is: why is the estimate for Q1 incorrect, whereas the estimate for Q2 is precise? Please do not post other rewritings of this SQL (I know it can be written even without subqueries); I'm only interested in an explanation of the selectivity estimator's behaviour. Thanks.
It doesn't recognize that the orderby value will be the same for all rows with the same groupby, so it thinks distinct groupby, orderby will have more combinations than just distinct groupby.
It multiplies the estimate for DISTINCT orderby (for me this is 35.0367) by the estimate for DISTINCT groupby (for me this is 100) as if they were uncorrelated.
So I get an estimate of 35.0367 * 100 = 3503.67 for the root node in Q1.
This rewrite avoids it as it now only groups by the single groupby column.
SELECT groupby,
max(orderby) AS orderby
FROM (SELECT g1.groupby,
(SELECT count(*)
FROM tt g2
WHERE local_search = 1
AND g1.groupby = g2.groupby) AS orderby
FROM tt g1) d
GROUP BY groupby
OPTION(maxdop 1)
This is not the optimal approach to this query, though, as shown by your Q2 and the comment @GarethD makes about the inefficiency of running the correlated subquery multiple times and discarding the duplicates.

SQL select elements where sum of field is less than N

Given that I've got a table with the following, very simple content:
# select * from messages;
id | verbosity
----+-----------
1 | 20
2 | 20
3 | 20
4 | 30
5 | 100
(5 rows)
I would like to select N messages whose sum of verbosity is lower than Y (for testing purposes let's say it should be 70; the correct result will then be the messages with ids 1, 2 and 3).
It's really important to me, that solution should be database independent (it should work at least on Postgres and SQLite).
I was trying with something like:
SELECT * FROM messages GROUP BY id HAVING SUM(verbosity) < 70;
However, it doesn't work as expected: each id forms its own single-row group, so SUM(verbosity) is just that row's verbosity rather than a running total.
I would be very grateful for any hints/help.
SELECT m.id, sum(m1.verbosity) AS total
FROM messages m
JOIN messages m1 ON m1.id <= m.id
WHERE m.verbosity < 70 -- optional, to avoid pointless evaluation
GROUP BY m.id
HAVING SUM(m1.verbosity) < 70
ORDER BY total DESC
LIMIT 1;
This assumes a unique, ascending id like you have in your example.
In modern Postgres - or generally with modern standard SQL (SQLite added window functions only in version 3.25):
Simple CTE
WITH cte AS (
SELECT *, sum(verbosity) OVER (ORDER BY id) AS total
FROM messages
)
SELECT *
FROM cte
WHERE total < 70
ORDER BY id;
Recursive CTE
Should be faster for big tables where you only retrieve a small set.
WITH RECURSIVE cte AS (
( -- parentheses required
SELECT id, verbosity, verbosity AS total
FROM messages
ORDER BY id
LIMIT 1
)
UNION ALL
SELECT c1.id, c1.verbosity, c.total + c1.verbosity
FROM cte c
JOIN LATERAL (
SELECT *
FROM messages
WHERE id > c.id
ORDER BY id
LIMIT 1
) c1 ON c1.verbosity < 70 - c.total
WHERE c.total < 70
)
SELECT *
FROM cte
ORDER BY id;
All standard SQL, except for LIMIT.
Strictly speaking, there is no such thing as "database-independent". There are various SQL-standards, but no RDBMS complies completely. LIMIT works for PostgreSQL and SQLite (and some others). Use TOP 1 for SQL Server, rownum for Oracle. Here's a comprehensive list on Wikipedia.
The SQL:2008 standard would be:
...
FETCH FIRST 1 ROWS ONLY
... which PostgreSQL supports - but hardly any other RDBMS.
The pure alternative that works with more systems would be to wrap it in a subquery and
SELECT max(total) FROM <subquery>
But that is slow and unwieldy.
This will work...
select *
from messages
where id<=
(
select MAX(id) from
(
select m2.id, SUM(m1.verbosity) sv
from messages m1
inner join messages m2 on m1.id <= m2.id
group by m2.id
) v
where sv<70
)
However, you should understand that SQL is designed as a set-based language rather than an iterative one, so it is meant to treat data as a set rather than row by row.

SQL Return Random Numbers Not In Table

I have a table with user_ids that we've gathered from a streaming datasource of active accounts. Now I'm looking to go through and fill in the information about the user_ids that don't do much of anything.
Is there a SQL (postgres if it matters) way to have a query return random numbers not present in the table?
Eg something like this:
SELECT RANDOM(count, lower_bound, upper_bound) as new_id
WHERE new_id NOT IN (SELECT user_id FROM user_table) AS user_id_table
Is this possible, or would it be best to generate a bunch of random numbers with a scripted wrapper and pass those into the DB to figure out the nonexistent ones?
It is possible. If you want the IDs to be integers, try:
SELECT trunc((random() * (upper_bound - lower_bound)) + lower_bound) AS new_id
FROM generate_series(1,upper_bound)
WHERE new_id NOT IN (
SELECT user_id
FROM user_table)
The alias new_id can't be referenced directly in the WHERE clause, so wrap the query above in a subselect, i.e.
SELECT * FROM (SELECT trunc(random() * (upper - lower) + lower) AS new_id
FROM generate_series(1, count)) AS x
WHERE x.new_id NOT IN (SELECT user_id FROM user_table)
I suspect you want a random sampling. I would do something like:
SELECT s
FROM generate_series(1, (select max(user_id) from users)) s
LEFT JOIN users ON s.s = user_id
WHERE user_id IS NULL
order by random() limit 5;
I haven't tested this, but the idea should work. If you have a lot of users and not a lot of missing ids, it will perform better than the other options, though performance may be a problem no matter what you do.
My pragmatic approach would be: generate 500 random numbers and then pick one which is not in the table:
WITH fivehundredrandoms AS ( SELECT trunc(random() * (upper_bound - lower_bound) + lower_bound) AS onerandom -- bounds as in the question
FROM generate_series(1,500) )
SELECT onerandom FROM fivehundredrandoms
WHERE onerandom NOT IN (SELECT user_id FROM user_table WHERE user_id > 0) LIMIT 1;
There is a way to do what you want with recursive queries; alas, it is not nice.
Suppose that you have the following table:
CREATE TABLE test (a int)
To simplify, say you want to insert random numbers from 0 to 4, i.e. (random() * 5)::int, that are not in the table.
WITH RECURSIVE rand (i, r, is_new) AS (
SELECT
0,
null::int, -- cast so both branches of the recursive UNION agree on the column type
false
UNION ALL
SELECT
i + 1,
next_number.v,
NOT EXISTS (SELECT 1 FROM test WHERE test.a = next_number.v)
FROM
rand r,
(VALUES ((random() * 5)::int)) next_number(v)
-- safety check to make sure we do not go into an infinite loop
WHERE i < 500
)
SELECT * FROM rand WHERE rand.is_new LIMIT 1
I'm not super sure, but PostgreSQL should be able to stop iterating once it has one result, since it knows that the query has limit 1.
A nice thing about this query is that you can replace (random() * 5)::int with any id-generating function you want.
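For example, to draw from the question's range instead, only the VALUES line changes (lower_bound and upper_bound are the question's placeholders):
(VALUES ((random() * (upper_bound - lower_bound) + lower_bound)::int)) next_number(v)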