How to use ANY_VALUE in BigQuery - SQL

I am looking to derive columns 2 and 3 from columns 1 and 4 using SQL; I need advice.
Output logic for col_2 and col_3:
If any of the values in col_1, grouped by col_4, is true, then col_2 is true and col_3 is false.
If all of the values in col_1, grouped by col_4, are false, then col_2 is false and col_3 is true.
If all of the values in col_1, grouped by col_4, are true, then col_2 is false and col_3 is false.

You seem to want the LOGICAL_OR() and LOGICAL_AND() functions in BigQuery.
WITH sample_table AS (
  -- build sample data: one boolean (col_1) per character key (col_4)
  SELECT col_4, col_1
  FROM UNNEST(SPLIT('aaaaabbbccc', '')) col_4 WITH OFFSET
  JOIN UNNEST([false, true, false, true, false, false, false, false, true, true, true]) col_1 WITH OFFSET
  USING (offset)
)
SELECT *,
  CASE
    -- all values in the col_4 group are true -> col_2 false, col_3 false
    WHEN LOGICAL_AND(col_1) OVER (PARTITION BY col_4) IS TRUE THEN STRUCT(false AS col_2, false AS col_3)
    -- no value in the col_4 group is true -> col_2 false, col_3 true
    WHEN LOGICAL_OR(col_1) OVER (PARTITION BY col_4) IS FALSE THEN (false, true)
    -- mixed group: some true, some false -> col_2 true, col_3 false
    ELSE (true, false)
  END.*  -- expand the STRUCT into separate col_2 and col_3 columns
FROM sample_table;

One way would be to count true and false values for each col_4 key inside a CTE or a subquery (whichever way works for you).
Then perform a join with the original table.
That way, you can then derive col_2 and col_3 directly by checking the counts of True and False in the final select statement.
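A sketch of that counting approach, reusing sample_table from the first answer (COUNTIF is BigQuery-specific, and the column definitions below are my reading of the rules in the question, mirroring the CASE expression above):
WITH counts AS (
  SELECT col_4,
         COUNTIF(col_1) AS true_cnt,       -- rows where col_1 is true
         COUNTIF(NOT col_1) AS false_cnt   -- rows where col_1 is false
  FROM sample_table
  GROUP BY col_4
)
SELECT t.col_4, t.col_1,
       (c.true_cnt > 0 AND c.false_cnt > 0) AS col_2,  -- mixed group
       (c.true_cnt = 0) AS col_3                       -- all false
FROM sample_table t
JOIN counts c USING (col_4);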

Related

Update a single column in a table with multiple conditions in the WHERE statement

I am trying to write an UPDATE statement to update a column in a table based on multiple WHERE conditions. See query below,
UPDATE table_new
SET col_4 = 'new value'
WHERE ?
(SELECT col_1, col_2, col_3 FROM table_new
EXCEPT
SELECT col_1, col_2, col_3 FROM table_old);
I am trying to update col_4 with the new value for each unique col_1 + col_2 + col_3 combination coming from the EXCEPT SQL query. I am not sure what should follow WHERE, since the WHERE clause in an UPDATE statement usually references just one column. I am thinking of doing a CONCAT of the unique col_1 + col_2 + col_3 combination both in the EXCEPT query and in the expression following the WHERE clause, but I am not sure if that would help my case.
My possible solution:
UPDATE table_new
SET col_4 = 'new value'
WHERE CONCAT(col_1, '-', col_2, '-', col_3) IN
(SELECT CONCAT(col_1, '-', col_2, '-', col_3) FROM table_new
EXCEPT
SELECT CONCAT(col_1, '-', col_2, '-', col_3) FROM table_old
);
Sample Data in table_new (Before running the UPDATE statement):
Col_1 Col_2 Col_3 Col_4 (old value)
123456 123XYZ 456ABC 100
654321 ZYX321 CBA654 200
Desired Result in table_new (After running the UPDATE statement):
Col_1 Col_2 Col_3 Col_4 (new value)
123456 123XYZ 456ABC 300
654321 ZYX321 CBA654 400
I think you would be better off just using a join clause:
UPDATE tn
SET col_4 = 'new value'
FROM table_new tn
LEFT JOIN table_old t_old
  ON  tn.col_1 = t_old.col_1
  AND tn.col_2 = t_old.col_2
  AND tn.col_3 = t_old.col_3
WHERE t_old.col_1 IS NULL;
The LEFT JOIN gives you matching and non-matching records, so you can find the rows in the new table that aren't in the old table by looking for NULLs on the right side (table_old) of those results. If you need it to be stricter, you could add IS NULL checks in the WHERE clause for all of the joined columns, as in the sketch below.
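A sketch of that stricter version (t_old is just an alias chosen here because TO is a reserved word; adjust the names to your schema):
UPDATE tn
SET col_4 = 'new value'
FROM table_new tn
LEFT JOIN table_old t_old
  ON  tn.col_1 = t_old.col_1
  AND tn.col_2 = t_old.col_2
  AND tn.col_3 = t_old.col_3
WHERE t_old.col_1 IS NULL
  AND t_old.col_2 IS NULL
  AND t_old.col_3 IS NULL;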
There is no need to concatenate the columns; you can compare (and group by) multiple columns at the same time with something like WHERE (b.x, b.y, b.z) = (a.x, a.y, a.z):
create temp table a_new as
select 'a' x, 'b' y, 'c' z, 100 value
union all
select 'a1', 'b1', 'c1', 200 value;
create temp table a_old as
select 'a' x, 'b' y, 'c' z, 300 value
union all
select 'a1', 'b1', 'c1', 500 value;
update a_new as a
set value = b.value
from a_old b
where (b.x,b.y,b.z) = (a.x,a.y,a.z);

Remove character from column based on condition on another column in Redshift sql

I have a table where I want to do the following:
If col_1 has values "sakc" or "cosc", remove occurrences of character "_" from those rows of col_2.
Example:
Given table_1
col_1 col_2
sakc abc_aw
sakc asw_12
cosc absd12
dasd qwe_32
cosc dasd_1
Desired table_1
col_1 col_2
sakc abcaw
sakc asw12
cosc absd12
dasd qwe_32
cosc dasd1
I tried using something along the lines of:
select case when col_1 in ('sakc', 'cosc') then trim("_" from col_2) end col_2 from table_1;
But I am sure it's not the right way and is giving me errors.
You can use REPLACE():
SELECT
  col_1,
  CASE
    WHEN col_1 IN ('sakc', 'cosc') THEN REPLACE(col_2, '_', '')
    ELSE col_2
  END AS col_2
FROM table_1;
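If the goal is to change the stored data rather than just the query output, an UPDATE along the same lines should work in Redshift (a sketch, assuming table_1 can be modified in place):
UPDATE table_1
SET col_2 = REPLACE(col_2, '_', '')
WHERE col_1 IN ('sakc', 'cosc');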

Include NULL values in unpivot

I have been looking for a solution to this problem for quite a long time, but couldn't find any.
I have a table as below:
Month Col_1 Col_2 Col_3 Col_4 Col_5
---------------------------------------------
Jan NULL NULL 1 1 1
I want to unpivot this table in order to join it with another table on the field names (Col_1, Col_2, etc.).
My query:
select Month,Name,value from
TableName
unpivot
(
Value
for Name in (Col_1,Col_2,Col_3,Col_4,Col_5)
) u
Current Result:
This gives me the result below, without the NULL values:
Month Name Value
-----------------------
Jan Col_3 1
Jan Col_4 1
Jan Col_5 1
Expected Result:
I want the NULLs to be included in the result.
Month Name Value
-----------------------
Jan Col_1 NULL
Jan Col_2 NULL
Jan Col_3 1
Jan Col_4 1
Jan Col_5 1
Any help would be appreciated.
SELECT name,value
FROM #Table1
CROSS APPLY (VALUES ('Col_1', Col_1),
('Col_2', Col_2),
('Col_3', Col_3),
('Col_4', Col_4),
('Col_5', Col_5))
CrossApplied (name, value)
output
name value
Col_1 NULL
Col_2 NULL
Col_3 1
Col_4 1
Col_5 1
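For anyone who wants to run the snippet above, a minimal setup matching the sample row in the question might look like this (the column types are an assumption); carrying [Month] through the same CROSS APPLY reproduces the exact expected result:
CREATE TABLE #Table1 ([Month] VARCHAR(3), Col_1 INT, Col_2 INT, Col_3 INT, Col_4 INT, Col_5 INT);
INSERT INTO #Table1 VALUES ('Jan', NULL, NULL, 1, 1, 1);

SELECT [Month], name, value
FROM #Table1
CROSS APPLY (VALUES ('Col_1', Col_1),
                    ('Col_2', Col_2),
                    ('Col_3', Col_3),
                    ('Col_4', Col_4),
                    ('Col_5', Col_5)) CrossApplied (name, value);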
Another option is to start from INFORMATION_SCHEMA.COLUMNS and LEFT JOIN the unpivoted data to it; column names that have no unpivoted row (because their value was NULL) still come back, just with a NULL value.
Query:
select c.COLUMN_NAME, t2.value
from INFORMATION_SCHEMA.COLUMNS c
left join (select * from TableName) t1
  unpivot (value for column_name in (Col_1, Col_2, Col_3, Col_4, Col_5)) t2
  on t2.column_name = c.COLUMN_NAME
where c.TABLE_NAME = 'TableName'
You can use the following query as a work-around, in case Col_1, Col_2, ... are guaranteed not to take a specific value, say -1:
select [Month], Name, NULLIF(value, -1) AS value
from (
select [Month],
coalesce(Col_1, -1) AS Col_1,
coalesce(Col_2, -1) AS Col_2,
coalesce(Col_3, -1) AS Col_3,
coalesce(Col_4, -1) AS Col_4,
coalesce(Col_5, -1) AS Col_5
from TableName) AS t
unpivot
(
Value
for Name in (Col_1,Col_2,Col_3,Col_4,Col_5)
) AS u
I had the same problem, and this is my quick and dirty solution.
Your query:
select Month, Name, value
from TableName
unpivot
(
  Value for Name in (Col_1, Col_2, Col_3, Col_4, Col_5)
) u
Replace it with:
select Month, Name, value
from
( select
    Month,
    isnull(cast(Col_1 as varchar(10)), 'no-data') as Col_1,
    isnull(cast(Col_2 as varchar(10)), 'no-data') as Col_2,
    isnull(cast(Col_3 as varchar(10)), 'no-data') as Col_3,
    isnull(cast(Col_4 as varchar(10)), 'no-data') as Col_4,
    isnull(cast(Col_5 as varchar(10)), 'no-data') as Col_5
  from TableName
) as T1
unpivot
(
  Value
  for Name in (Col_1, Col_2, Col_3, Col_4, Col_5)
) u
OK, the NULL value is replaced with a string (the CAST keeps all five columns the same type), but all rows will be returned!

Conditional SELECT within a Query in SQL

I have a table with the rows below, which I need to join in a complex query:
COL_1 COL_2 COL_3 COL_4 COL_5
----- ----- ----- ----- ----
1 A X Y
1 * * *
.............
.......
COL_2, COL_3, and COL_4 can either have a specific value or '*', which means ALL.
I need to select only one row. If a row is found with all the specific values, that row should be selected:
COL_2 = 'A' and COL_3 = 'X' and COL_4 = 'Y' and COL_1 = '1'
If no such row is found, the row matching the condition below should be selected instead:
COL_2 = '*' and COL_3 = '*' and COL_4 = '*' and COL_1 = '1'
If I use OR for the values, I get both rows.
Please help.
Depending on how complex your situation is, you can check the row's existence:
where col2='A' and col3='X' and col4='Y' and col1='1'
or
(
not exists (select 1 from tbl where col2='A' and col3='X' and col4='Y' and col1='1')
and col2='*' and col3='*' and col4='*' and col1='1'
)
If it's any more complex than this, this technique will get ugly fast.
Again, depending on how complex this is in real life, something like this may work:
select top 1 col2, col3, col4, col1 from
(
select 1 [priority], col2, col3, col4, col1
from tbl
where col2='A' and col3='X' and col4='Y' and col1='1'
union
select 2 [priority], col2, col3, col4, col1
from tbl
where col2='*' and col3='*' and col4='*' and col1='1'
) x
order by x.priority
This will retrieve both possible scenarios, but order them by a given priority (the best one is at the top), then pick the top 1 record.
This technique can be evolved so that you can do more complex things. For example, rather than a fixed priority value, you can calculate priority based on how many of the columns actually match vs. how many are stars - maybe start with something like:
select top 1 case when col2='A' then 100 when col2='*' then 1 else 0 end
           + case when col3='X' then 100 when col3='*' then 1 else 0 end
           ...etc... [priority]
from tbl
where col2 in ('A','*') and col3 in ('X','*') ...etc...
order by priority desc
This retrieves all records that match or have any combination of matches or asterisks, but prioritizes them based on how many real matches are found vs. asterisks (in this case a higher number is a better match).
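For this particular table, a filled-in version of that scoring idea might look like the sketch below (it assumes SQL Server-style TOP, and the weights are arbitrary; they only need to rank exact matches above asterisks):
select top 1 col1, col2, col3, col4
from tbl
where col1 = '1'
  and col2 in ('A', '*')
  and col3 in ('X', '*')
  and col4 in ('Y', '*')
order by case when col2 = 'A' then 100 when col2 = '*' then 1 else 0 end
       + case when col3 = 'X' then 100 when col3 = '*' then 1 else 0 end
       + case when col4 = 'Y' then 100 when col4 = '*' then 1 else 0 end desc;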

SQLite How to select multiple columns and return them in order of appearance

I have 42 columns of data which are consecutive, from column 3 onwards.
I need to select each column in the order it appears. Each row of results must include the first two columns. This is what I have in one row:
rowid, col_2, col_3, col_4, col_5, ......col_42
So the result would look like this:
rowid col_2 col_3
rowid col_2 col_4
rowid col_2 col_5
rowid col_2 col_6
rowid col_2 col_7
.......
rowid col_2 col_42
Then the next row would be listed after that in the same fashion and so on.
I have tried a few things, but multiple select isn't allowed with bracketed select statements. Any ideas on how I could do this?
Execute the following statement for each column:
for (int i = 3; i <= 42; i++)
{
    // build the name of the current column, e.g. "col_3"
    NSString *strVal = [NSString stringWithFormat:@"col_%d", i];
    // one query per column; rowid and col_2 are always included
    const char *sqlStatement = [[NSString stringWithFormat:@"select rowid, col_2, %@ from TableName", strVal] UTF8String];
    // run sqlStatement with the SQLite C API here
}
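For example, the first iteration (i = 3) builds a statement like the one below; the loop produces one such query per column, which you then prepare and step through with the usual SQLite C API. If you would rather get everything back from a single statement, the same per-column selects could also be combined with UNION ALL and ordered by rowid.
select rowid, col_2, col_3 from TableName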