PostgreSQL random in subquery

I'm trying to generate several thousand rows of random test data for some synthetic load testing, but I've run into a weird bug I don't understand.
Here's the minimal reproducible example I managed to narrow it down to. Let's create a table that has some unique rows:
CREATE TABLE vals (
    value INT PRIMARY KEY
);
INSERT INTO vals SELECT generate_series(1, 10);
Let's check there are unique values:
SELECT array(SELECT * FROM vals);
>> {1,2,3,4,5,6,7,8,9,10}
Yep, that's good. Now let's create a table that has lots of user data referencing the vals table:
CREATE TABLE tmp (
    a INT REFERENCES vals,
    b INT[]
);
And fill it with lots of random data:
WITH test_count AS (SELECT generate_series(1, 10000))
-- some more CTEs, so I cannot give up on them
INSERT INTO tmp
SELECT (SELECT value FROM vals ORDER BY random() LIMIT 1),
       array(SELECT value FROM vals WHERE random() > 0.85)
FROM test_count;
But when we check it, all 10000 rows are filled with the same values:
SELECT DISTINCT a, b FROM tmp;
>> a | b
---------
2 | {8,5}
I've found out that Postgres sometimes optimizes calls to random() within the same row to the same value, e.g. SELECT random(), random() would return 0.345, 0.345: link.
But in this case the random values within a row are different; it's the randomness across rows that is the same.
What is the way to fix it?

The problem is premature optimization: since the subqueries don't reference the outer query, Postgres evaluates them only once for the whole statement. Although there are other ways to phrase the query, adding a (nonsensical) correlation clause forces the subqueries to be re-run for each row:
WITH test_count AS (
    SELECT generate_series(1, 10000) AS id
)
INSERT INTO tmp
SELECT (SELECT value FROM vals WHERE tc.id IS NOT NULL ORDER BY random() LIMIT 1),
       array(SELECT value FROM vals WHERE tc.id IS NOT NULL AND random() > 0.85)
FROM test_count tc;
Here is a db<>fiddle.
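A quick sanity check after the corrected insert (a minimal sketch against the tables above): a should now take roughly all 10 values, and b should vary from row to row.
-- expect distinct_a close to 10 and distinct_b much greater than 1
SELECT count(DISTINCT a) AS distinct_a,
       count(DISTINCT b) AS distinct_b
FROM tmp;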

Related

Randomly choosing a row

I have a table s1, which has 3 rows. How can I RANDOMLY pick a row from s1 and INSERT its corresponding value into d1?
I don't want a hard-coded solution. Can ROWNUM be used together with dbms_random? Let's say I want 10 rows in d1.
An example would be appreciated.
Create table s1 (
    val NUMBER(4)
);
INSERT INTO s1 (val) VALUES (30);
INSERT INTO s1 (val) VALUES (40);
INSERT INTO s1 (val) VALUES (50);
Create table d1 (
    val NUMBER(4)
);
You can sort by a random value and select one row:
insert into d1 (val)
select val
from (select s1.*
      from s1
      order by dbms_random.value
     ) s1
where rownum = 1;
In Oracle 12c+, you don't need the subquery:
insert into d1 (val)
select val
from s1
order by dbms_random.value
fetch first 1 row only;
Note: This assumes that you really mean random and not arbitrary. A random row means that any row in the table has an equal chance of being chosen in any given invocation of the query.
For huge tables, the standard approach of sorting by dbms_random.value is not efficient, because you need to scan the whole table, and dbms_random.value is a pretty slow function that requires context switches. For such cases, there are 2 well-known methods:
Use sample clause:
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/SELECT.html#GUID-CFA006CA-6FF1-4972-821E-6996142A51C6
for example:
select *
from s1 sample block(1)
order by dbms_random.value
fetch first 1 rows only
i.e. get 1% of all blocks, then sort them randomly and return just 1 row.
If you have an index/primary key on a column with a normal distribution, you can get the min and max values, generate a random value in that range, and fetch the first row with a value greater than or equal to the randomly generated value.
Example:
--big table with 1 mln rows with primary key on ID with normal distribution:
Create table s1(id primary key,padding) as
select level, rpad('x',100,'x')
from dual
connect by level<=1e6;
select *
from s1
where id >= (select dbms_random.value(
                        (select min(id) from s1),
                        (select max(id) from s1)
                    )
             from dual)
order by id
fetch first 1 rows only;
Update
And a 3rd variant: get a random table block, generate a rowid, and get the row from the table by this rowid:
select *
from s1
where rowid = (
    select
        DBMS_ROWID.ROWID_CREATE(1, objd, file#, block#, 1)
    from (
        select /*+ rule */ file#, block#, objd
        from v$bh b
        where b.objd in (select o.data_object_id from user_objects o where object_name = 'S1' /* table_name */)
        order by dbms_random.value
        fetch first 1 rows only
    )
);

Numeric Overflow in Recursive Query : Teradata

I'm new to Teradata. I want to insert numbers 1 to 1000 into the table test_seq, which is created as below.
create table test_seq (
    seq_id integer
);
After searching on this site, I came up with a recursive query to insert the numbers.
insert into test_seq (seq_id)
with recursive cte (id) as (
    select 1 from test_dual
    union all
    select id + 1 from cte
    where id + 1 <= 1000
)
select id from cte;
test_dual is created as follows and contains just a single value (something like DUAL in Oracle):
create table test_dual (
    test_dummy varchar(1)
);
insert into test_dual values ('X');
But when I run the insert statement, I get the error Failure 2616 Numeric overflow occurred during computation.
What did I do wrong here? Isn't the INTEGER datatype enough to hold the numeric value 1000?
Also, is there a way to write the query so that I can do away with the test_dual table?
When you simply write 1, the parser assigns the best matching datatype to it, which is a BYTEINT. The valid range of values for a BYTEINT is -128 to 127, so just add a typecast to INT :-)
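Applied to your original statement, that is (a minimal sketch, still using your test_dual):
insert into test_seq (seq_id)
with recursive cte (id) as (
    select cast(1 as int) from test_dual  -- seed is now INT, so id + 1 can exceed 127
    union all
    select id + 1 from cte
    where id + 1 <= 1000
)
select id from cte;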
Usually you don't need a dummy DUAL table in Teradata; "SELECT 1;" is valid, but in some cases the parser still insists on a FROM (don't ask me why). This trick should work:
SEL * FROM (SELECT 1 AS x) AS dt;
You can create a view on this:
REPLACE VIEW oDUAL AS SELECT * FROM (SELECT 'X' AS dummy) AS dt;
Explain "SELECT 1 FROM oDUAL;" is a bit stupid, so a real table might be better. But to get efficient access (= single AMP/single row) it must be defined as follows:
CREATE TABLE dual_tbl (
    dummy VARCHAR(1) CHECK (dummy = 'X')
) UNIQUE PRIMARY INDEX(dummy); -- I remember having fun when you inserted another row into Oracle's DUAL :_)
INSERT INTO dual_tbl VALUES ('X');
REPLACE VIEW oDUAL AS SELECT dummy FROM dual_tbl WHERE dummy = 'X';
insert into test_seq (seq_id)
with recursive cte (id) as (
    select cast(1 as int) from oDUAL
    union all
    select id + 1 from cte
    where id + 1 <= 1000
)
select id from cte;
But recursion is not an appropriate way to get a range of numbers, as it's sequential and always an "all-AMP step" even if the data resides on a single AMP, like in this case.
If it's less than 73414 values (201 years), better to use sys_calendar.calendar (or any other table with a known sequence of numbers):
SELECT day_of_calendar
FROM sys_calendar.CALENDAR
WHERE day_of_calendar BETWEEN 1 AND 1000;
Otherwise use CROSS joins, e.g. to get numbers from 1 to 1,000,000:
WITH cte (i) AS
( SELECT day_of_calendar
FROM sys_calendar.CALENDAR
WHERE day_of_calendar BETWEEN 1 AND 1000
)
SELECT
(t2.i - 1) * 1000 + t1.i
FROM cte AS t1 CROSS JOIN cte AS t2;
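The same pattern extends (a sketch, not part of the original answer): each additional cross join multiplies the range by 1000, so a third alias covers 1 to 1,000,000,000:
WITH cte (i) AS
 ( SELECT day_of_calendar
   FROM sys_calendar.CALENDAR
   WHERE day_of_calendar BETWEEN 1 AND 1000
 )
SELECT
   (t3.i - 1) * 1000000 + (t2.i - 1) * 1000 + t1.i
FROM cte AS t1 CROSS JOIN cte AS t2 CROSS JOIN cte AS t3;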

Return id if a row exists, INSERT otherwise

I'm writing a function in node.js to query a PostgreSQL table.
If the row exists, I want to return the id column from the row.
If it doesn't exist, I want to insert it and return the id (insert into ... returning id).
I've been trying variations of case and if else statements and can't seem to get it to work.
A solution in a single SQL statement. Requires PostgreSQL 8.4 or later though.
Consider the following demo:
Test setup:
CREATE TEMP TABLE tbl (
  id  serial PRIMARY KEY
, txt text UNIQUE  -- obviously there is a unique column (or set of columns)
);
INSERT INTO tbl(txt) VALUES ('one'), ('two');
INSERT / SELECT command:
WITH v AS (SELECT 'three'::text AS txt)
,s AS (SELECT id FROM tbl JOIN v USING (txt))
,i AS (
INSERT INTO tbl (txt)
SELECT txt
FROM v
WHERE NOT EXISTS (SELECT * FROM s)
RETURNING id
)
SELECT id, 'i'::text AS src FROM i
UNION ALL
SELECT id, 's' FROM s;
The first CTE v is not strictly necessary, but it means you only have to enter your values once.
The second CTE s selects the id from tbl if the "row" exists.
The third CTE i inserts the "row" into tbl if (and only if) it does not exist, returning id.
The final SELECT returns the id. I added a column src indicating the "source": whether the "row" pre-existed and the id comes from a SELECT, or the "row" was new and so is its id.
This version should be as fast as possible as it does not need an additional SELECT from tbl and uses the CTEs instead.
To make this safe against possible race conditions in a multi-user environment, and for updated techniques using the new UPSERT in Postgres 9.5 or later, see:
Is SELECT or INSERT in a function prone to race conditions?
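For reference, a minimal sketch of the 9.5+ route applied to the demo table above. The no-op DO UPDATE matters: with DO NOTHING, RETURNING produces no row when the row already exists.
INSERT INTO tbl (txt)
VALUES ('three')
ON CONFLICT (txt)
DO UPDATE SET txt = EXCLUDED.txt  -- no-op update, so RETURNING also yields pre-existing ids
RETURNING id;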
I would suggest doing the checking on the database side and just returning the id to nodejs.
Example:
CREATE OR REPLACE FUNCTION foo(p_param1 tableFoo.attr1%TYPE, p_param2 tableFoo.attr2%TYPE) RETURNS tableFoo.id%TYPE AS $$
DECLARE
    v_id tableFoo.id%TYPE;
BEGIN
    SELECT id
    INTO v_id
    FROM tableFoo
    WHERE attr1 = p_param1
      AND attr2 = p_param2;
    IF v_id IS NULL THEN
        INSERT INTO tableFoo(id, attr1, attr2) VALUES (DEFAULT, p_param1, p_param2)
        RETURNING id INTO v_id;
    END IF;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;
And then on the Node.js side (I'm using node-postgres in this example):
var pg = require('pg');
pg.connect('someConnectionString', function(connErr, client){
    // do some error checking here
    // alias the scalar function result to "id" so result.rows[0].id resolves
    client.query('SELECT foo($1, $2) AS id;', ['foo', 'bar'], function(queryErr, result){
        // error checking
        var id = result.rows[0].id;
    });
});
Something like this, if you are on PostgreSQL 9.1:
with test_insert as (
    insert into foo (id, col1, col2)
    select 42, 'Foo', 'Bar'
    where not exists (select * from foo where id = 42)
    returning foo.id, foo.col1, foo.col2
)
select id, col1, col2
from test_insert
union
select id, col1, col2
from foo
where id = 42;
It's a bit longish and you need to repeat the id you are testing for several times, but I can't think of a different solution that involves a single SQL statement.
If a row with id=42 exists, the writeable CTE will not insert anything and thus the existing row will be returned by the second union part.
When testing this I actually thought the new row would be returned twice (therefore a union, not a union all), but it turns out that the result of the second select statement is evaluated before the whole statement is run, so it does not see the newly inserted row. So in case a new row is inserted, it will be taken from the "returning" part.
create table t (
    id serial primary key,
    a integer
);
-- insert a row for a = 2 only if none exists yet
insert into t (a)
select 2
from (
    select count(*) as s
    from t
    where a = 2
) s
where s.s = 0;
-- then fetch the id, whether it pre-existed or was just inserted
select id
from t
where a = 2;

Recursive MySQL query

My database schema looks like this:
Table t1:
id
valA
valB
Table t2:
id
valA
valB
What I want to do, is, for a given set of rows in one of these tables, find rows in both tables that have the same valA or valB (comparing valA with valA and valB with valB, not valA with valB). Then, I want to look for rows with the same valA or valB as the rows in the result of the previous query, and so on.
Example data:
t1 (id, valA, valB):
1, a, B
2, b, J
3, d, E
4, d, B
5, c, G
6, h, J
t2 (id, valA, valB):
1, b, E
2, d, H
3, g, B
Example 1:
Input: Row 1 in t1
Output:
t1/4, t2/3
t1/3, t2/2
t2/1
...
Example 2:
Input: Row 6 in t1
Output:
t1/2
t2/1
I would like to have, in the result, the level of the search at which each row was found (e.g. in Example 1: level 1 for t1/2 and t2/1, level 2 for t1/5, ...). A limited depth of recursion is okay. Over time, I may want to include more tables following the same schema in the query, so it would be nice if the query were easy to extend for that purpose.
But what matters most, is the performance. Can you tell me the fastest possible way to accomplish this?
Thanks in advance!
Try this. Although it's not fully tested, it looked like it was working :P (http://pastie.org/1140339)
drop table if exists t1;
create table t1
(
id int unsigned not null auto_increment primary key,
valA char(1) not null,
valB char(1) not null
)
engine=innodb;
drop table if exists t2;
create table t2
(
id int unsigned not null auto_increment primary key,
valA char(1) not null,
valB char(1) not null
)
engine=innodb;
drop view if exists t12;
create view t12 as
select 1 as tid, id, valA, valB from t1
union
select 2 as tid, id, valA, valB from t2;
insert into t1 (valA, valB) values
('a','B'),
('b','J'),
('d','E'),
('d','B'),
('c','G'),
('h','J');
insert into t2 (valA, valB) values
('b','E'),
('d','H'),
('g','B');
drop procedure if exists find_children;
delimiter #
create procedure find_children
(
in p_tid tinyint unsigned,
in p_id int unsigned
)
proc_main:begin
declare done tinyint unsigned default 0;
declare dpth smallint unsigned default 0;
create temporary table children(
tid tinyint unsigned not null,
id int unsigned not null,
valA char(1) not null,
valB char(1) not null,
depth smallint unsigned default 0,
primary key (tid, id, valA, valB)
)engine = memory;
insert into children select p_tid, t.id, t.valA, t.valB, dpth from t12 t where t.tid = p_tid and t.id = p_id;
create temporary table tmp engine=memory select * from children;
/* http://dev.mysql.com/doc/refman/5.0/en/temporary-table-problems.html */
while done <> 1 do
if exists(
select 1 from t12 t
inner join tmp on (tmp.valA = t.valA or tmp.valB = t.valB) and tmp.depth = dpth) then
insert ignore into children
select
t.tid, t.id, t.valA, t.valB, dpth+1
from t12 t
inner join tmp on (tmp.valA = t.valA or tmp.valB = t.valB) and tmp.depth = dpth;
set dpth = dpth + 1;
truncate table tmp;
insert into tmp select * from children where depth = dpth;
else
set done = 1;
end if;
end while;
select * from children order by depth;
drop temporary table if exists children;
drop temporary table if exists tmp;
end proc_main #
delimiter ;
call find_children(1,1);
call find_children(1,6);
You can do it with stored procedures (see listings 7 and 7a):
http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html
You just need to figure out a query for the step of the recursion - taking the already-found rows and finding some more rows.
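For the procedural route, that step might look something like this (a sketch only: found is a hypothetical work table keyed on (tid, id) with a depth column, t12 is the union view from the other answer, and @d holds the current depth; found must be a regular table, not TEMPORARY, since it appears as both target and source):
-- one wave of the search: pull in rows sharing valA or valB with anything found at depth @d
INSERT IGNORE INTO found (tid, id, valA, valB, depth)
SELECT t.tid, t.id, t.valA, t.valB, @d + 1
FROM t12 t
JOIN found f
  ON (f.valA = t.valA OR f.valB = t.valB)
 AND f.depth = @d;
-- the primary key on (tid, id) makes INSERT IGNORE skip rows already found at a lower depth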
If you had a database which supported SQL-99 recursive common table expressions (like PostgreSQL or Firebird, hint hint), you could take the same approach as in the above link, but using a rCTE as the framework, so avoiding the need to write a stored procedure.
EDIT: I had a go at doing this with an rCTE in PostgreSQL 8.4, and although I can find the rows, I can't find a way to label them with the depth at which they were found. First, I create a view to unify the tables:
create view t12 (tbl, id, vala, valb) as (
(select 't1', id, vala, valb from t1)
union
(select 't2', id, vala, valb from t2)
)
Then do this query:
with recursive descendants (tbl, id, vala, valb) as (
(select *
from t12
where tbl = 't1' and id = 1) -- the query that identifies the seed rows, here just t1/1
union
(select c.*
from descendants p, t12 c
where (p.vala = c.vala or p.valb = c.valb)) -- the recursive term
)
select * from descendants;
You would imagine that capturing depth would be as simple as adding a depth column to the rCTE, set to zero in the seed query, then somehow incremented in the recursive step. However, I couldn't find any way to do that, given that you can't write subqueries against the rCTE in the recursive step (so nothing like select max(depth) + 1 from descendants in the column list), and you can't use an aggregate function in the column list (so no max(p.depth) + 1 in the column list coupled with a group by c.* on the select).
You would also need to add a restriction to the query to exclude already-selected rows; you don't need to do that in the basic version, because of the distincting effect of the union, but if you add a count column, then a row can be included in the results more than once with different counts, and you'll get a Cartesian explosion. But you can't easily prevent it, because you can't have subqueries against the rCTE, which means you can't say anything like and not exists (select * from descendants d where d.tbl = c.tbl and d.id = c.id)!
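That said, one workaround within those constraints (a sketch, leaning on the asker's allowance for a limited recursion depth): carry a depth column incremented per row from the parent, cap the recursion so the explosion cannot run away, and collapse duplicates with min(depth) afterwards:
with recursive descendants (tbl, id, vala, valb, depth) as (
    (select tbl, id, vala, valb, 0
     from t12
     where tbl = 't1' and id = 1)
  union
    (select c.tbl, c.id, c.vala, c.valb, p.depth + 1
     from descendants p, t12 c
     where (p.vala = c.vala or p.valb = c.valb)
       and p.depth < 3)  -- hard cap keeps the rCTE finite despite re-found rows
)
select tbl, id, min(depth) as depth
from descendants
group by tbl, id
order by depth;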
I know all this stuff about recursive queries is of no use to you, but I find it riveting, so please do excuse me.

Make SQL Select same row multiple times

I need to test my mail server. How can I make a SELECT statement
that selects, say, ID=5469 a thousand times?
If I get your meaning, then a very simple way is to cross join onto a derived query on a table with more than 1000 rows in it and put a TOP 1000 on that. This would duplicate your results 1000 times.
EDIT: As an example (this is MSSQL, I don't know if Access is much different):
SELECT MyTable.*
FROM MyTable
CROSS JOIN (
    SELECT TOP 1000 *
    FROM sysobjects
) [BigTable]
WHERE MyTable.ID = 1234
You can use the UNION ALL statement.
Try something like:
SELECT * FROM tablename WHERE ID = 5469
UNION ALL
SELECT * FROM tablename WHERE ID = 5469
You'd have to repeat the SELECT statement a bunch of times, but you could write a bit of VB code in Access to create a dynamic SQL statement and then execute it. Not pretty, but it should work.
Create a helper table for this purpose:
JUST_NUMBER (NUM INT PRIMARY KEY)
Insert numbers from 1 to N (with the help of some (VB) script), then execute this query (a join with no join condition):
SELECT MYTABLE.*
FROM MYTABLE,
JUST_NUMBER
WHERE MYTABLE.ID = 5469
AND JUST_NUMBER.NUM <= 1000
Here's a way of using a recursive common table expression to generate some empty rows, then to cross join them back onto your desired row:
declare @myData table (val int);
insert @myData values (666),(888),(777) -- some dummy data
;with cte as
(
    select 100 as a
    union all
    select a-1 from cte where a>0
    -- generate 101 rows; 100 is the default max recursion depth
)
,someRows as
(
    select top 1000 0 a from cte, cte x1, cte x2
    -- cross join the hundred-odd rows a few times
    -- to generate 1030301 rows, then select the top n rows
)
select m.* from @myData m, someRows where m.val = 666
Substitute @myData for your real table, and alter the final predicate to suit.
An easy way...
Only one row like this exists in the DB:
sku = 52, description = Skullcandy Inkd Green, price = 50,00
Relate another table that has no key constraint to the main table.
Original Query
SELECT Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod WHERE Prod_SKU = N'52'
The functional query, adding an unrelated table called 'dbo.TB_Labels' (replace 'times' below with the number of copies you want):
SELECT TOP ('times') Prod_SKU , Prod_Descr , Prod_Price FROM dbo.TB_Prod,dbo.TB_Labels WHERE Prod_SKU = N'52'
In Postgres there is a nice function called generate_series. So in PostgreSQL it is as simple as:
select information from test_table, generate_series(1, 1000) where id = 5469
This way, the matching row is returned 1000 times.
Example for postgreSQL:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; --To be able to use function uuid_generate_v4()
--Create a test table
create table test_table (
id serial not null,
uid UUID NOT NULL,
CONSTRAINT uid_pk PRIMARY KEY(id));
-- Insert 10000 rows
insert into test_table (uid)
select uuid_generate_v4() from generate_series(1, 10000);
-- Read the data from id=5469 one thousand times
select id, uid, uuid_generate_v4() from test_table, generate_series(1, 1000) where id = 5469;
As you can see in the result below, the data from uid is read 1000 times as confirmed by the generation of a new uuid at every new row.
id |uid |uuid_generate_v4
----------------------------------------------------------------------------------------
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5630cd0d-ee47-4d92-9ee3-b373ec04756f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"ed44b9cb-c57f-4a5b-ac9a-55bd57459c02"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"3428b3e3-3bb2-4e41-b2ca-baa3243024d9"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7c8faf33-b30c-4bfa-96c8-1313a4f6ce7c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"b589fd8a-fec2-4971-95e1-283a31443d73"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"8b9ab121-caa4-4015-83f5-0c2911a58640"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"7ef63128-b17c-4188-8056-c99035e16c11"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"5bdc7425-e14c-4c85-a25e-d99b27ae8b9f"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"9bbd260b-8b83-4fa5-9104-6fc3495f68f3"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"c1f759e1-c673-41ef-b009-51fed587353c"
5469|"10791df5-ab72-43b6-b0a5-6b128518e5ee"|"4a70bf2b-ddf5-4c42-9789-5e48e2aec441"
Of course other DBs won't necessarily have the same function but it could be done:
See here.
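For engines without generate_series but with recursive CTEs, the same series can be generated inline (a sketch in standard SQL; works in e.g. PostgreSQL, MySQL 8+, and SQLite, while SQL Server drops the RECURSIVE keyword, as in the answers below):
WITH RECURSIVE series(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM series WHERE n < 1000
)
SELECT t.*
FROM test_table t, series
WHERE t.id = 5469;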
If you are doing this in SQL Server:
declare @cnt int
set @cnt = 0
while @cnt < 1000
begin
    select '12345'
    set @cnt = @cnt + 1
end
select '12345' can be any expression
This may be another solution: repeat rows based on the Repeats column value of TestTable. First run the CREATE TABLE and INSERT statements, then run the following query for the desired result.
CREATE TABLE TestTable
(
    ID INT IDENTITY(1,1),
    Col1 varchar(10),
    Repeats INT
)
INSERT INTO TestTable
VALUES ('A',2), ('B',4), ('C',1), ('D',0)
WITH x AS
(
SELECT TOP (SELECT MAX(Repeats)+1 FROM TestTable) rn = ROW_NUMBER()
OVER (ORDER BY [object_id])
FROM sys.all_columns
ORDER BY [object_id]
)
SELECT * FROM x
CROSS JOIN TestTable AS d
WHERE x.rn <= d.Repeats
ORDER BY Col1;
This trick helped me in my requirement. Here, PRODUCTDETAILS is my data table and orderid is my column:
declare @Req_Rows int = 12
;WITH cte AS
(
    SELECT 1 AS Number
    UNION ALL
    SELECT Number + 1 FROM cte WHERE Number < @Req_Rows
)
SELECT PRODUCTDETAILS.*
FROM cte, PRODUCTDETAILS
WHERE PRODUCTDETAILS.orderid = 3
create table #tmp1 (id int, fld varchar(max))
insert into #tmp1 (id, fld)
values (1,'hello!'),(2,'world'),(3,'nice day!')
select * from #tmp1
go
select * from #tmp1 where id=3
go 1000
drop table #tmp1
In SQL Server, try:
print 'wow'
go 5
output:
Beginning execution loop
wow
wow
wow
wow
wow
Batch execution completed 5 times.
The easy way is to create a table with 1000 rows. Let's call it BigTable. Then you would query for the data you want and join it with the big table, like this:
SELECT MyTable.*
FROM MyTable, BigTable
WHERE MyTable.ID = 5469