with
table1 as (
select 'joe' as name, 17 as age, 25 as speed
),
table2 as (
select 'nick' as name, 21 as speed, 23 as strength
)
select * from table1
union all
select * from table2
In Google BigQuery, this UNION ALL does not throw an error because both tables have the same number of columns (3 each). However, I get bad output because the columns do not line up. Rather than producing a table with 4 columns (name, age, speed, strength) with the correct values plus NULLs for the missing ones (which would probably be preferred), the UNION ALL keeps the 3 column names from the first SELECT and matches columns by position.
Is there a good way to catch that the columns do not match, rather than the query silently returning bad data? Is there any way for this to return an error perhaps, as opposed to a successful table? I'm not sure how to check in SQL that the columns in the 2 tables match.
Edit: in this example it is easy to see that the columns do not match; however, our real data has 100+ columns, and we want to avoid a situation where we make a mistake in a UNION ALL.
Below is for BigQuery Standard SQL, using BigQuery's scripting feature.
DECLARE statement STRING;

-- Build a UNION ALL statement over the union of both tables' columns,
-- selecting NULL for any column a table does not have
SET statement = (
WITH table1_columns AS (
  -- extract table1's column names from the JSON representation of one row
  SELECT column FROM (SELECT * FROM `project.dataset.table1` LIMIT 1) t,
  UNNEST(REGEXP_EXTRACT_ALL(TRIM(TO_JSON_STRING(t), '{}'), r'"([^"]*)":')) column
), table2_columns AS (
  -- same for table2
  SELECT column FROM (SELECT * FROM `project.dataset.table2` LIMIT 1) t,
  UNNEST(REGEXP_EXTRACT_ALL(TRIM(TO_JSON_STRING(t), '{}'), r'"([^"]*)":')) column
), all_columns AS (
  -- full set of column names across both tables
  SELECT column FROM table1_columns UNION DISTINCT SELECT column FROM table2_columns
)
SELECT (
  SELECT 'SELECT ' || STRING_AGG(IF(t.column IS NULL, 'NULL as ', '') || a.column, ', ') || ' FROM `project.dataset.table1` UNION ALL '
  FROM all_columns a LEFT JOIN table1_columns t USING(column)
) || (
  SELECT 'SELECT ' || STRING_AGG(IF(t.column IS NULL, 'NULL as ', '') || a.column, ', ') || ' FROM `project.dataset.table2`'
  FROM all_columns a LEFT JOIN table2_columns t USING(column)
)
);

-- run the generated query
EXECUTE IMMEDIATE statement;
When applied to the sample data from your question, the output is:
Row name age speed strength
1 joe 17 25 null
2 nick null 21 23
After saving table1 and table2 as two tables in a BigQuery dataset, I used the INFORMATION_SCHEMA metadata to check that the columns matched.
SELECT *
FROM models.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'table1';

SELECT *
FROM models.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'table2';
INFORMATION_SCHEMA.COLUMNS returns information including the column names and their ordinal positions. I can then join these two result sets to check that the names match, for example as sketched below.
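For example, a sketch of that comparison (same dataset name, models, as above): a FULL OUTER JOIN surfaces any column that exists in only one of the two tables, and BigQuery's ASSERT can turn a mismatch into a hard error instead of a silently wrong result.

-- List columns that exist in only one of the two tables; no rows means the schemas match
SELECT
  COALESCE(t1.column_name, t2.column_name) AS column_name,
  t1.column_name IS NOT NULL AS in_table1,
  t2.column_name IS NOT NULL AS in_table2
FROM (
  SELECT column_name
  FROM models.INFORMATION_SCHEMA.COLUMNS
  WHERE table_name = 'table1'
) t1
FULL OUTER JOIN (
  SELECT column_name
  FROM models.INFORMATION_SCHEMA.COLUMNS
  WHERE table_name = 'table2'
) t2
ON t1.column_name = t2.column_name
WHERE t1.column_name IS NULL OR t2.column_name IS NULL;

-- ...or fail loudly before running the UNION ALL (BigQuery scripting)
ASSERT NOT EXISTS (
  SELECT column_name
  FROM models.INFORMATION_SCHEMA.COLUMNS
  WHERE table_name IN ('table1', 'table2')
  GROUP BY column_name
  HAVING COUNT(DISTINCT table_name) < 2
) AS 'table1 and table2 do not have the same set of columns';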
"Table1" structure is as shown below:
source table: table1

Player_NAME || Player_NUMBER || Client_name || Client_country || Player_country || Rating
GERALD      || A1234         || BENFIELD    || IND            || IND            || 76
GERALD      || A6578         || ROTFIELD    || USA            || USA            || 64
KUMAR       || P1234         || LFV         || ARG            || ARG            || -24
KUMAR       || P5678         || JEURASIN    || ARG            || TUR            || -32
KUMAR       || P0101         || ARGENIA     || ARG            || POL            || -16
ANDREW      || R1234         || GENMAD      || GER            || GER            || 23
I need to select records from the above table “Table1” and copy them to “Table2”.
I need to select the player records from table1 that satisfy the conditions below:
If a player has multiple client_names or multiple client_country values, select the record that has the maximum rating by absolute value. If the rating is negative, use its absolute value for the comparison; e.g. for ratings -10 and -34 the absolute values are 10 and 34, 34 is the greater, so the -34 record wins.
For example: Kumar has 3 different client_names and 3 different client_country values, so for Kumar the record with rating -32 should be selected, because its absolute value (32) is the largest.
Below is the expected output:
Player_NAME || Player_NUMBER || Client_name || Client_country || Player_country || Rating
GERALD      || A1234         || BENFIELD    || IND            || IND            || 76
KUMAR       || P5678         || JEURASIN    || ARG            || TUR            || -32
ANDREW      || R1234         || GENMAD      || GER            || GER            || 23
destination table: 'table2'
You can try something like this:
INSERT INTO Table2
(
    Player_Name,
    Player_Number,
    Client_Name,
    Client_country,
    Player_country,
    Rating
)
SELECT
    Player_Name,
    Player_Number,
    Client_Name,
    Client_country,
    Player_country,
    MAX(ABS(Rating)) OVER (PARTITION BY Player_Name) as Rating
FROM
    table1
If your DBMS supports analytic functions you can utilize ROW_NUMBER:
select ... -- all columns but rn
from
 (
   select ... -- all columns
         ,row_number()
          over (partition by player_name
                order by abs(Rating) desc) as rn
   from table1
 ) as dt
where rn = 1;
Otherwise use a Correlated Subquery:
select *
from table1 as t1
where abs(rating) =
( select max(abs(rating))
from table1 as t2
where t1.player_name = t2.player_name
)
If you get multiple rows with the same max(abs(rating)), #1 will select one of them arbitrarily, while #2 will return all of them.
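If a deterministic choice is wanted for #1, a tie-breaking column can be added to the ORDER BY of row_number(); a sketch, using player_number as an arbitrary tie-breaker:

select *
from
 (
   select t.*
         ,row_number()
          over (partition by player_name
                order by abs(rating) desc,
                         player_number -- tie-breaker: lowest player_number wins
               ) as rn
   from table1 t
 ) dt
where rn = 1;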
I guess this query will work:
select
    Player_NAME,
    max(abs(Rating))
from Table1
group by Player_NAME
To insert data into Table2, you can do it like so:
INSERT INTO Table2 (
Player_Name,
Player_Number,
Client_Name,
Client_country,
Player_country,
Rating
)
SELECT
t1.Player_Name,
t1.Player_Number,
t1.Client_Name,
t1.Client_country,
t1.Player_country,
t1.Rating
FROM Table1 t1
INNER JOIN (
SELECT
Player_NAME,
MAX(ABS(Rating)) as Rating
FROM Table1
GROUP BY Player_NAME
) t2 ON t2.Player_NAME = t1.Player_NAME AND ABS(t1.Rating) = t2.Rating
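One thing to watch with the ABS() join above: if a player ever has two rows whose ratings differ only in sign (e.g. 32 and -32), both would match the maximum and both would be inserted. A quick sanity check after the insert (a sketch):

-- players that ended up with more than one row in Table2
SELECT Player_Name, COUNT(*) AS row_count
FROM Table2
GROUP BY Player_Name
HAVING COUNT(*) > 1;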
I need your help with the regexp_replace function. I have a table with a column of concatenated string values that contain duplicates. How do I eliminate them?
Example:
Ian,Beatty,Larry,Neesha,Beatty,Neesha,Ian,Neesha
I need the output to be
Ian,Beatty,Larry,Neesha
The duplicates are random and not in any particular order.
Update--
Here's how my table looks
ID Name1 Name2 Name3
1 a b c
1 c d a
2 d e a
2 c d b
I need one row per ID, with the distinct name1, name2, name3 values combined into one comma-separated string.
ID Name
1 a,c,b,d,c
2 d,c,e,a,b
I have tried using listagg with distinct but I'm not able to remove the duplicates.
The easiest option I would go with -
SELECT ID, LISTAGG(NAME_LIST, ',') WITHIN GROUP (ORDER BY NAME_LIST)
FROM (SELECT ID, NAME1 NAME_LIST FROM DATA UNION
      SELECT ID, NAME2 FROM DATA UNION
      SELECT ID, NAME3 FROM DATA
     )
GROUP BY ID;
So, try this out...
([^,]+),(?=.*[A-Za-z],[] ]*\1)
I don't think you can do it just with regexp_replace if the repeated values are not next to each other. One approach is to split the values up, eliminate the duplicates, and then put them back together.
The common method to tokenize a delimited string is with regexp_substr and a connect by clause. Using a bind variable with your string to make the code a bit clearer:
var value varchar2(100);
exec :value := 'Ian,Beatty,Larry,Neesha,Beatty,Neesha,Ian,Neesha';
select regexp_substr(:value, '[^,]+', 1, level) as value
from dual
connect by regexp_substr(:value, '[^,]+', 1, level) is not null;
VALUE
------------------------------
Ian
Beatty
Larry
Neesha
Beatty
Neesha
Ian
Neesha
You can use that as a subquery (or CTE), get the distinct values from it, then reassemble it with listagg:
select listagg(value, ',') within group (order by value) as value
from (
select distinct value from (
select regexp_substr(:value, '[^,]+', 1, level) as value
from dual
connect by regexp_substr(:value, '[^,]+', 1, level) is not null
)
);
VALUE
------------------------------
Beatty,Ian,Larry,Neesha
It's a bit more complicated if you're looking at multiple rows in a table, as that confuses the connect-by syntax, but you can use a non-deterministic reference to avoid loops:
with t42 (id, value) as (
select 1, 'Ian,Beatty,Larry,Neesha,Beatty,Neesha,Ian,Neesha' from dual
union all select 2, 'Mary,Joe,Mary,Frank,Joe' from dual
)
select id, listagg(value, ',') within group (order by value) as value
from (
select distinct id, value from (
select id, regexp_substr(value, '[^,]+', 1, level) as value
from t42
connect by regexp_substr(value, '[^,]+', 1, level) is not null
and id = prior id
and prior dbms_random.value is not null
)
)
group by id;
ID VALUE
---------- ------------------------------
1 Beatty,Ian,Larry,Neesha
2 Frank,Joe,Mary
Of course this wouldn't be necessary if you were storing relational data properly; having a delimited string in a column is not a good idea.
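For completeness, a sketch of what a normalized layout might look like (the names here are illustrative, not from the question): one name per row, so duplicates can be prevented with a constraint and the comma-separated list is only built at query time.

CREATE TABLE person_names (
    id    NUMBER        NOT NULL,
    name  VARCHAR2(30)  NOT NULL,
    CONSTRAINT person_names_uq UNIQUE (id, name)  -- no duplicate name per id
);

-- build the delimited list only when it is needed
SELECT id,
       LISTAGG(name, ',') WITHIN GROUP (ORDER BY name) AS names
FROM   person_names
GROUP  BY id;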
There is a way to find duplicates in this case, but it is a problem to remove them if there is more than one duplicated name within a string per id. Here is code that can deal with one duplicate per id.
Sample data:
WITH
tbl AS
(
Select 1 "ID", 'a' "NAME_1", 'b' "NAME_2", 'c' "NAME_3" From Dual Union All
Select 1 "ID", 'c' "NAME_1", 'd' "NAME_2", 'a' "NAME_3" From Dual Union All
Select 2 "ID", 'd' "NAME_1", 'e' "NAME_2", 'a' "NAME_3" From Dual Union All
Select 2 "ID", 'c' "NAME_1", 'd' "NAME_2", 'b' "NAME_3" From Dual
),
lists AS
(
Select 1 "ID", 'a,c,b,d,c' "NAME" From Dual Union All
Select 2 "ID", 'd,c,e,a,b' "NAME" From Dual
),
Creating a CTE that compares your LISTAGG string with the original data, finding duplicate values:
grid AS
(
Select DISTINCT l.ID, l.NAME,
CASE WHEN ( Length(l.NAME || ',') - Length(Replace(l.NAME || ',', t.NAME_1 || ',', '')) ) / Length(t.NAME_1 || ',') > 1 THEN NAME_1 END "NAME_1",
CASE WHEN ( Length(l.NAME || ',') - Length(Replace(l.NAME || ',', t.NAME_2 || ',', '')) ) / Length(t.NAME_2 || ',') > 1 THEN NAME_2 END "NAME_2",
CASE WHEN ( Length(l.NAME || ',') - Length(Replace(l.NAME || ',', t.NAME_3 || ',', '')) ) / Length(t.NAME_3 || ',') > 1 THEN NAME_3 END "NAME_3"
From
lists l
Inner Join
tbl t ON(t.ID = l.ID)
)
ID NAME NAME_1 NAME_2 NAME_3
---------- --------- ------ ------ ------
2 d,c,e,a,b
1 a,c,b,d,c c
1 a,c,b,d,c c
The main SQL, using UNION ALL, builds a new string (with the second appearance removed) wherever a duplicate was found, and then returns that new string in place of the old one.
SELECT DISTINCT l.ID, Nvl(g.NAME, l.NAME) NAME
FROM
lists l
LEFT JOIN
(
SELECT ID, CASE WHEN NAME_1 Is Not Null
THEN REPLACE(NAME, NAME, COALESCE( REPLACE( SubStr(NAME, 1, InStr(NAME, NAME_1, 1, 2) - 1) || SubStr(NAME, InStr(NAME, NAME_1, 1, 2) + Length(NAME_1)), ',,', ','), NULL ) )
END "NAME"
FROM grid
WHERE COALESCE(NAME_1, NAME_2, NAME_3) IS NOT NULL
UNION ALL
SELECT ID, CASE WHEN NAME_2 Is Not Null
THEN REPLACE(NAME, NAME, COALESCE( REPLACE( SubStr(NAME, 1, InStr(NAME, NAME_2, 1, 2) - 1) || SubStr(NAME, InStr(NAME, NAME_2, 1, 2) + Length(NAME_2)), ',,', ','), NULL ) )
END "NAME"
FROM grid
WHERE COALESCE(NAME_1, NAME_2, NAME_3) IS NOT NULL
UNION ALL
SELECT ID, CASE WHEN NAME_3 Is Not Null
THEN REPLACE(NAME, NAME, COALESCE( REPLACE( SubStr(NAME, 1, InStr(NAME, NAME_3, 1, 2) - 1) || SubStr(NAME, InStr(NAME, NAME_3, 1, 2) + Length(NAME_3)), ',,', ','), NULL ) )
END "NAME"
FROM grid
WHERE COALESCE(NAME_1, NAME_2, NAME_3) IS NOT NULL
) g ON(g.ID = l.ID And Length(g.NAME) < Length(l.NAME))
Result:
ID NAME
---------- -------------
2 d,c,e,a,b
1 a,c,b,d
For multiple occurrences within a string, or for several different duplicated names, you would need some recursion or additional levels of nesting to get it done...
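For what it's worth, here is a sketch (not part of the answer above) that handles any number of duplicates and keeps each name's first position, using the same tokenize-and-reaggregate idea as the earlier answer; the lists CTE is just the sample data again:

WITH lists AS (
  SELECT 1 AS id, 'a,c,b,d,c' AS name FROM dual UNION ALL
  SELECT 2, 'd,c,e,a,b' FROM dual
), tokens AS (
  SELECT id,
         REGEXP_SUBSTR(name, '[^,]+', 1, LEVEL) AS token,
         LEVEL AS pos
  FROM   lists
  CONNECT BY REGEXP_SUBSTR(name, '[^,]+', 1, LEVEL) IS NOT NULL
         AND id = PRIOR id
         AND PRIOR DBMS_RANDOM.VALUE IS NOT NULL
)
SELECT id,
       LISTAGG(token, ',') WITHIN GROUP (ORDER BY first_pos) AS name
FROM  (SELECT id, token, MIN(pos) AS first_pos
       FROM   tokens
       GROUP  BY id, token)
GROUP BY id;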
I have a PostgreSQL function (or table) which gives me the following output:
Sl.no username Designation salary etc..
1 A XYZ 10000 ...
2 B RTS 50000 ...
3 C QWE 20000 ...
4 D HGD 34343 ...
Now I want the Output as below:
Sl.no 1 2 3 4 ...
Username A B C D ...
Designation XYZ RTS QWE HGD ...
Salary 10000 50000 20000 34343 ...
How to do this?
SELECT
  unnest(array['Sl.no', 'username', 'Designation', 'salary']) AS "Columns",
  unnest(array[sl_no::text, username, designation, salary::text]) AS "Values"
FROM view_name
ORDER BY "Columns"
Reference : convertingColumnsToRows
Basing my answer on a table of the form:
CREATE TABLE tbl (
sl_no int
, username text
, designation text
, salary int
);
Each row results in a new column to return. With a dynamic return type like this, it's hardly possible to make this completely dynamic with a single call to the database. Demonstrating solutions with two steps:
Generate query
Execute generated query
Generally, this is limited by the maximum number of columns a table can hold. So not an option for tables with more than 1600 rows (or fewer). Details:
What is the maximum number of columns in a PostgreSQL select query
Postgres 9.4+
Dynamic solution with crosstab()
Use the first one you can. Beats the rest.
SELECT 'SELECT *
FROM crosstab(
$ct$SELECT u.attnum, t.rn, u.val
FROM (SELECT row_number() OVER () AS rn, * FROM '
|| attrelid::regclass || ') t
, unnest(ARRAY[' || string_agg(quote_ident(attname)
|| '::text', ',') || '])
WITH ORDINALITY u(val, attnum)
ORDER BY 1, 2$ct$
) t (attnum bigint, '
|| (SELECT string_agg('r'|| rn ||' text', ', ')
FROM (SELECT row_number() OVER () AS rn FROM tbl) t)
|| ')' AS sql
FROM pg_attribute
WHERE attrelid = 'tbl'::regclass
AND attnum > 0
AND NOT attisdropped
GROUP BY attrelid;
Operating with attnum instead of actual column names. Simpler and faster. Join the result to pg_attribute once more or integrate column names like in the pg 9.3 example.
Generates a query of the form:
SELECT *
FROM crosstab(
$ct$
SELECT u.attnum, t.rn, u.val
FROM (SELECT row_number() OVER () AS rn, * FROM tbl) t
, unnest(ARRAY[sl_no::text,username::text,designation::text,salary::text]) WITH ORDINALITY u(val, attnum)
ORDER BY 1, 2$ct$
) t (attnum bigint, r1 text, r2 text, r3 text, r4 text);
This uses a whole range of advanced features. Just too much to explain.
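As a sketch of the "join the result to pg_attribute once more" suggestion above (transposed is a placeholder for wherever you materialized the output of the generated query, e.g. a temporary table):

SELECT a.attname, t.*
FROM   transposed t
JOIN   pg_attribute a ON a.attrelid = 'tbl'::regclass
                     AND a.attnum   = t.attnum
ORDER  BY t.attnum;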
Simple solution with unnest()
One unnest() can now take multiple arrays to unnest in parallel.
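A minimal standalone illustration of that parallel form (the values are made up):

SELECT *
FROM   unnest(ARRAY['a','b','c'], ARRAY[1,2,3]) AS t(letter, num);

 letter | num
--------+-----
 a      |   1
 b      |   2
 c      |   3

The statement below generates such a call for the sample table: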
SELECT 'SELECT * FROM unnest(
''{sl_no, username, designation, salary}''::text[]
, ' || string_agg(quote_literal(ARRAY[sl_no::text, username::text, designation::text, salary::text])
|| '::text[]', E'\n, ')
|| E') \n AS t(col,' || string_agg('row' || sl_no, ',') || ')' AS sql
FROM tbl;
Result:
SELECT * FROM unnest(
'{sl_no, username, designation, salary}'::text[]
,'{10,Joe,Music,1234}'::text[]
,'{11,Bob,Movie,2345}'::text[]
,'{12,Dave,Theatre,2356}'::text[])
AS t(col,row1,row2,row3);
Postgres 9.3 or older
Dynamic solution with crosstab()
Completely dynamic, works for any table. Provide the table name in two places:
SELECT 'SELECT *
FROM crosstab(
''SELECT unnest(''' || quote_literal(array_agg(attname))
|| '''::text[]) AS col
, row_number() OVER ()
, unnest(ARRAY[' || string_agg(quote_ident(attname)
|| '::text', ',') || ']) AS val
FROM ' || attrelid::regclass || '
ORDER BY generate_series(1,' || count(*) || '), 2''
) t (col text, '
|| (SELECT string_agg('r'|| rn ||' text', ',')
FROM (SELECT row_number() OVER () AS rn FROM tbl) t)
|| ')' AS sql
FROM pg_attribute
WHERE attrelid = 'tbl'::regclass
AND attnum > 0
AND NOT attisdropped
GROUP BY attrelid;
Could be wrapped into a function with a single parameter ...
Generates a query of the form:
SELECT *
FROM crosstab(
'SELECT unnest(''{sl_no,username,designation,salary}''::text[]) AS col
, row_number() OVER ()
, unnest(ARRAY[sl_no::text,username::text,designation::text,salary::text]) AS val
FROM tbl
ORDER BY generate_series(1,4), 2'
) t (col text, r1 text,r2 text,r3 text,r4 text);
Produces the desired result:
col r1 r2 r3 r4
-----------------------------------
sl_no 1 2 3 4
username A B C D
designation XYZ RTS QWE HGD
salary 10000 50000 20000 34343
Simple solution with unnest()
SELECT 'SELECT unnest(''{sl_no, username, designation, salary}''::text[]) AS col
, ' || string_agg('unnest('
|| quote_literal(ARRAY[sl_no::text, username::text, designation::text, salary::text])
|| '::text[]) AS row' || sl_no, E'\n , ') AS sql
FROM tbl;
Slow for tables with more than a couple of columns.
Generates a query of the form:
SELECT unnest('{sl_no, username, designation, salary}'::text[]) AS col
, unnest('{10,Joe,Music,1234}'::text[]) AS row1
, unnest('{11,Bob,Movie,2345}'::text[]) AS row2
, unnest('{12,Dave,Theatre,2356}'::text[]) AS row3
, unnest('{4,D,HGD,34343}'::text[]) AS row4
Same result.
If (like me) you need this information from a bash script, note there is a simple command-line switch to tell psql to output table columns as rows:
psql mydbname -x -A -F= -c "SELECT * FROM foo WHERE id=123"
The -x option is the key to getting psql to output columns as rows.
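With -x -A -F= each column comes back on its own line as column=value, roughly like this (the column names and values here are made up):

id=123
username=A
designation=XYZ
salary=10000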
I have a simpler approach than the one Erwin pointed out above, which worked for me with Postgres (and I think it should work with all major relational databases that support the SQL standard).
You can simply use UNION instead of crosstab:
SELECT text 'a' AS "text" UNION SELECT 'b';
text
------
a
b
(2 rows)
Of course that depends on the case in which you are going to apply this. Given that you know beforehand which fields you need, you can take this approach even for querying different tables. For example:
SELECT 'My first metric' as name, count(*) as total from first_table UNION
SELECT 'My second metric' as name, count(*) as total from second_table
name | Total
------------------|--------
My first metric | 10
My second metric | 20
(2 rows)
It's a more maintainable approach, IMHO. Look at this page for more information: https://www.postgresql.org/docs/current/typeconv-union-case.html
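As a sketch of how the same idea could be applied to the table from the question (assuming a table named tbl with the columns shown earlier; each value is cast to text and aggregated so the UNION branches share one shape):

SELECT 'sl_no'       AS col, string_agg(sl_no::text,  '  ' ORDER BY sl_no) AS vals FROM tbl
UNION ALL
SELECT 'username',           string_agg(username,     '  ' ORDER BY sl_no)         FROM tbl
UNION ALL
SELECT 'designation',        string_agg(designation,  '  ' ORDER BY sl_no)         FROM tbl
UNION ALL
SELECT 'salary',             string_agg(salary::text, '  ' ORDER BY sl_no)         FROM tbl;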
There is no proper way to do this in plain SQL or PL/pgSQL.
It will be way better to do this in the application that gets the data from the DB.
I have a very big table (more than 1 million rows). The rows contain product names and prices from different sources.
There are many products that have the same name but differ in price.
Here is the problem:
We have the same product in many rows, but the names are not identical. For example:
Row     Product name               price
-----   ------------------------   -----
Row 1:  XYZ - size information     $a
Row 2:  XYZ - Brand information    $b
Row 3:  xyz                        $c
I want to get all products whose prices differ. If the names were identical from row to row, I could easily do a self join on t1.Product_Name = t2.Product_Name and t1.Price != t2.Price.
But that will not work in this case :(
Can anyone suggest a solution?
You could try to use regexp_replace to head in the right direction:
create table tq84_products (
name varchar2(50),
price varchar2( 5)
);
Three products:
xyz
ABCD
efghi
of which ABCD has two records with the same price and all others have a different price.
insert into tq84_products values (' XYZ - size information', '$a');
insert into tq84_products values ('XYZ - brand information', '$b');
insert into tq84_products values ('xyz' , '$c');
insert into tq84_products values ('Product ABCD' , '$d');
insert into tq84_products values ('Abcd is the best' , '$d');
insert into tq84_products values ('efghi is cheap' , '$f');
insert into tq84_products values ('no, efghi is expensive' , '$g');
The select statement splits each name into words and uses stop words (here: IS, BRAND, INFORMATION) to exclude words that are typically found in product names but don't identify a product:
with split_into_words as (
select
name,
price,
upper (
regexp_replace(name,
'\W*' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?\W?+' ||
'(\w+)?' ||
'.*',
'\' || submatch.counter
)
) word
from
tq84_products,
(select
rownum counter
from
dual
connect by
level < 10
) submatch
),
stop_words as (
select 'IS' word from dual union all
select 'BRAND' word from dual union all
select 'INFORMATION' word from dual
)
select
w1.price,
w2.price,
w1.name,
w2.name
-- substr(w1.word, 1, 30) common_word,
-- count(*) over (partition by w1.name) cnt
from
split_into_words w1,
split_into_words w2
where
w1.word = w2.word and
w1.name < w2.name and
w1.word is not null and
w2.word is not null and
w1.word not in (select word from stop_words) and
w2.word not in (select word from stop_words) and
w1.price != w2.price;
This then selects
$a   $b   XYZ - size information    XYZ - brand information
$b   $c   XYZ - brand information   xyz
$a   $c   XYZ - size information    xyz
$f   $g   efghi is cheap            no, efghi is expensive
So, abcd is not returned while the others are.