I am performing data QA testing.
I have this query to identify any mismatches between the source table and the destination table.
select
  count(case when coalesce(x.col1, 1) = coalesce(y.col1, 1) then null else 1 end) as cnt_col1,
  count(case when coalesce(x.col2, "1") = coalesce(y.col2, "1") then null else 1 end) as cnt_col2
from
  `DatasetA.Table` x
FULL OUTER JOIN
  `DatasetB.Table` y
on x.col1 = y.col1
The row-level output of the CASE comparisons looks like this (the aggregated query itself returns the counts in a single row):
col1, col2
null, null
null, null
1, null
null, 1
I have 200 tables that I need to perform this test on, and the number of columns varies: the table above has only two, but some have 50.
I have the queries for the tables already, but I need to conform the output of all of the tests into a single result set. My plan is to conform each query to a unified output and stack them with UNION ALL (sketched just after the expected output below).
The output set should say:
COLUMN, COUNT_OF_ERRORS
cnt_col1, 1
cnt_col2, 1
...
cnt_col15, 0
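Roughly, I picture stacking the conformed per-table outputs like this (a sketch only; the table and view names here are placeholders):
SELECT 'Table1' AS table_name, column_name, count_of_errors FROM table1_errors_unpivoted
UNION ALL
SELECT 'Table2', column_name, count_of_errors FROM table2_errors_unpivoted
-- ... one SELECT per table, 200 in total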
My question is this: how do I reverse pivot this so I can achieve the output I'm looking for?
Thanks
Assuming you have table `data`
col1 col2 col3
---- ---- ----
null null null
null null 1
null 1 1
1 null 1
1 null 1
1 null 1
And you need to reverse pivot it to
column count_of_errors
-------- ---------------
cnt_col1 3
cnt_col2 1
cnt_col3 5
Below is for BigQuery Standard SQL and does exactly this
#standardSQL
WITH `data` AS (
  SELECT NULL AS col1, NULL AS col2, NULL AS col3 UNION ALL
  SELECT NULL, NULL, 1 UNION ALL
  SELECT 1, NULL, 1 UNION ALL
  SELECT NULL, 1, 1 UNION ALL
  SELECT 1, NULL, 1 UNION ALL
  SELECT 1, NULL, 1
)
SELECT r.* FROM (
  SELECT
    [
      STRUCT<column STRING, count_of_errors INT64>
      ('cnt_col1', SUM(col1)),
      ('cnt_col2', SUM(col2)),
      ('cnt_col3', SUM(col3))
    ] AS row
  FROM `data`
), UNNEST(row) AS r
It is simple enough and easy to adjust to any number of columns you might have in your initial `data` table - you just need to add the respective number of ('cnt_colN', SUM(colN)) entries, which can be done manually, or you can write a simple script to generate those lines (or the whole query).
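For instance, a minimal generator sketch (assuming your BigQuery project exposes INFORMATION_SCHEMA.COLUMNS; the dataset and table names are illustrative):
#standardSQL
SELECT STRING_AGG(
  FORMAT("('cnt_%s', SUM(%s))", column_name, column_name), ",\n") AS struct_elements
FROM `DatasetA.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name = 'Table';
-- paste the generated lines into the [ STRUCT<...> ... ] list above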
About "comparing 2 tables" in Big Data, I don't think that doing some Joins is the best approach, because Joins are quite slow in general and then you have to handle the case of "outer" joins rows.
I worked on this topic years ago (https://community.hortonworks.com/articles/1283/hive-script-to-validate-tables-compare-one-with-an.html) and I am now trying to backport this knowledge to compare Hive tables with BigQuery tables.
One of my main ideas is to use checksums to be sure that a table is fully identical to the other one.
Here is a "basic example":
with one_string as(
  select concat(
    sessionid, '|', referrercode, '|', purchaseid, '|', customerid, '|',
    cast(bouncerateind as string), '|', cast(productpagevisit as string), '|',
    cast(itemordervalue as string), '|', cast(purchaseinsession as string), '|',
    cast(hit_time_gmt as string), '|', datedir, '|', productcategory, '|', post_cookies
  ) as bigstring
  from bidwh2.omniture_2017_03_24_v2
),
shas as(
select TO_BASE64( sha1( bigstring)) as sha from one_string
),
shas_prefix as(
select substr( sha, 0 , 1) as prefix, sha from shas
),
shas_ordered as(
select prefix, sha from shas_prefix order by sha
),
results_prefix as(
select concat( prefix, ' ', TO_BASE64( sha1( STRING_AGG( sha, '|')))) as res from shas_ordered group by prefix
),
results_ordered as(
select 1 as myall, res from results_prefix order by res
)
select SHA1( STRING_AGG( res, '|')) as sha from results_ordered group by myall;
So you do that on each of the 2 tables, and compare the 2 checksum values.
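For example, a comparison sketch, where checksum_a and checksum_b are placeholders standing for the full query above run against each of the two tables:
WITH checksum_a AS (
  SELECT 'sha_of_table_a' AS sha -- replace with the checksum query for table A
), checksum_b AS (
  SELECT 'sha_of_table_b' AS sha -- replace with the checksum query for table B
)
SELECT a.sha = b.sha AS tables_identical
FROM checksum_a a CROSS JOIN checksum_b b;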
The final idea is to have a Python script (not finished yet; I hope my company allows me to open-source it when finished) that would do the following:
count the rows for some "buckets" (groups of rows whose well-distributed column has the same checksum modulo a big number) and compare the results, because there is no need to checksum the whole table if the number of rows does not match - a sketch of this bucketing follows the list
visually show the differences if the counts do not match
use the bucket/rows technique plus some other "buckets/columns" to compute checksums in a similar way as shown in the example above, and compare all those checksums together
visually show the differences if the checksums do not match
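As a sketch of that bucketing idea (assuming BigQuery, with FARM_FINGERPRINT as just one possible hash and sessionid as the well-distributed column):
SELECT MOD(ABS(FARM_FINGERPRINT(sessionid)), 100) AS bucket,
       COUNT(*) AS row_count
FROM bidwh2.omniture_2017_03_24_v2
GROUP BY bucket
ORDER BY bucket;
-- run this against both tables and diff the per-bucket counts;
-- a mismatch narrows the differing rows down to that bucket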
Edit on 03/11/2017: script is finished and can be found at: https://github.com/bolcom/hive_compared_bq
I'm trying to summarize data in a table:
counting total rows
counting values on specific fields
getting the distinct values on specific fields
and, more importantly, I'm struggling with:
getting the count for each field nested in an object
Given this data:
COL1  COL2
A     0
null  1
B     null
B     null
the expected result from this query would be:
with dummy as (
select 'A' as col1, 0 as col2
union all
select null, 1
union all
select 'B', null
union all
select 'B', null
)
select
count(1) as total
,count(col1) as col1
,array_agg(distinct col1) as dist_col1
--,object_construct(???) as col1_object_count
,count(col2) as col2
,array_agg(distinct col2) as dist_col2
--,object_construct(???) as col2_object_count
from
dummy
TOTAL  COL1  DIST_COL1   COL1_OBJECT_COUNT          COL2  DIST_COL2  COL2_OBJECT_COUNT
4      3     ["A", "B"]  {"A": 1, "B": 2, null: 1}  2     [0, 1]     {0: 1, 1: 1, null: 2}
I've tried several functions inside OBJECT_CONSTRUCT mixed with ARRAY_AGG, but all failed.
OBJECT_CONSTRUCT can work with several columns, but only when given all of them (*); if you try a select statement inside it, it fails.
Another issue is that analytical functions are not easily accepted by the object or array functions in Snowflake.
You could use Snowflake Scripting or Snowpark for this but here's a solution that is somewhat flexible so you can apply it to different tables and column sets.
Create test table/view:
Create or Replace View dummy as (
select 'A' as col1, 0 as col2
union all
select null, 1
union all
select 'B', null
union all
select 'B', null
);
Set session variables for table and colnames.
set tbname = 'DUMMY';
set colnames = '["COL1", "COL2"]';
Create view that generates the required table_column_summary data:
Create or replace View table_column_summary as
with
-- Create table of required column names
cn as (
select VALUE::VARCHAR CNAME
from table(flatten(input => parse_json($colnames)))
)
-- Convert rows into objects
,ro as (
select
object_construct_keep_null(*) row_object
-- using identifier on session variable to dynamically supply table/view name
from identifier($tbname) )
-- Flatten row objects into key/values
,rof as (
select
key col_name,
ifnull(value,'null')::VARCHAR col_value
from ro, lateral flatten(input => row_object), cn
-- You will only need this filter if you need a subset
-- of columns from the source table/query summarised
where col_name = cn.cname)
-- Get the column value distinct value counts
,cdv as (
select col_name,
col_value,
sum(1) col_value_count
from rof
group by 1,2
)
-- and derive required column level stats and combine with cdv
,cv as (
select
-- total row count, read via identifier($tbname) so the view stays table-agnostic
(select count(1) from identifier($tbname)) total,
col_name,
object_construct('COL_COUNT', count(col_value) ,
'COL_DIST', array_agg(distinct col_value),
'COL_OBJECT_COUNT', object_agg(col_value,col_value_count)) col_values
from cdv
group by 1,2)
-- Return result
Select * from cv;
Use this final query if you want a solution that works flexibly with any table/columns provided as input...
Select total, object_agg(col_name, col_values) col_values_obj
From table_column_summary
Group by 1;
Or use this final query if you want the fixed columns output as described in your question...
Select total,
COL1[0]:COL_COUNT COL1,
COL1[0]:COL_DIST DIST_COL1,
COL1[0]:COL_OBJECT_COUNT COL1_OBJECT_COUNT,
COL2[0]:COL_COUNT COL2,
COL2[0]:COL_DIST DIST_COL2,
COL2[0]:COL_OBJECT_COUNT COL2_OBJECT_COUNT
from table_column_summary
PIVOT ( ARRAY_AGG ( col_values )
FOR col_name IN ( 'COL1', 'COL2' ) ) as pt (total, col1, col2);
I have a SQL table with 6 columns: 1 int ID column and 5 datetime columns, Round1, Round2, ..., Round5.
The data looks something like this: either there is a date or the cell is empty.
I would like the query to show, for each ID, the number of filled datetime columns.
Can you please give some hint on how to build this query? Would this involve an aggregate function?
Thank you
Consider:
SELECT ID,
       IIf(Round1 Is Null, 0, 1) + IIf(Round2 Is Null, 0, 1) + IIf(Round3 Is Null, 0, 1)
       + IIf(Round4 Is Null, 0, 1) + IIf(Round5 Is Null, 0, 1) AS Cnt
FROM Table;
An aggregate function is not helpful unless you first normalize the data with a UNION query.
SELECT ID, Round1 AS Dte, "R1" AS Src FROM table
UNION SELECT ID, Round2, "R2" FROM table
UNION SELECT ID, Round3, "R3" FROM table
UNION SELECT ID, Round4, "R4" FROM table
UNION SELECT ID, Round5, "R5" FROM table;
Then use that query (saved as Q1) in an aggregate query:
SELECT ID, Count(Dte) AS CntD FROM Q1 GROUP BY ID;
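The two steps can also be folded into one statement - a sketch using UNION ALL (so no Src marker is needed just for counting; older Access versions may require saving the inner query separately, as above):
SELECT ID, Count(Dte) AS CntD
FROM (
    SELECT ID, Round1 AS Dte FROM table
    UNION ALL SELECT ID, Round2 FROM table
    UNION ALL SELECT ID, Round3 FROM table
    UNION ALL SELECT ID, Round4 FROM table
    UNION ALL SELECT ID, Round5 FROM table
) AS q
GROUP BY ID;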
You can use CASE expressions to return 1 when a value is not NULL or 0 otherwise. Then just add all that.
SELECT id,
       CASE WHEN round1 IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN round2 IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN round3 IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN round4 IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN round5 IS NOT NULL THEN 1 ELSE 0 END total
FROM elbat;
And next time, do not post images of tables. Post their CREATE statements with sample data as INSERT statements, and tag the specific DBMS you're using.
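For example, hypothetical DDL and sample rows matching this question (the names and dates are made up):
CREATE TABLE elbat (
    id INT,
    round1 DATETIME, round2 DATETIME, round3 DATETIME,
    round4 DATETIME, round5 DATETIME
);
INSERT INTO elbat (id, round1, round3) VALUES (1, '2019-01-05', '2019-02-10');
INSERT INTO elbat (id) VALUES (2);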
I am attempting to create an IF statement in BigQuery. I have built a concept that will work, but it does not select the data from a table; I can only get it to display 1 or 0.
Example:
SELECT --AS STRUCT
CASE
  WHEN (
    SELECT COUNT(1) FROM ( -- if the records are the same this returns 0, otherwise >= 1
      SELECT DISTINCT ESCO, SOURCE, LDCTEXT, STATUS, DDR_DATE, TempF, HeatingDegreeDays, DecaTherms
      FROM `gas-ddr.gas_ddr_outbound.LexingtonDDRsOutbound_onchange_Prior_Filtered`
      EXCEPT DISTINCT
      SELECT DISTINCT ESCO, SOURCE, LDCTEXT, STATUS, DDR_DATE, TempF, HeatingDegreeDays, DecaTherms
      FROM `gas-ddr.gas_ddr_outbound.LexingtonDDRsOutbound_onchange_Latest_Filtered`
    )
  ) = 0
  THEN
    (SELECT * FROM `gas-ddr.gas_ddr_outbound.LexingtonDDRsOutbound_onchange_Latest`)
    -- this does not work:
    -- "Scalar subquery cannot have more than one column unless using SELECT AS STRUCT to build STRUCT values at [16:4]"
END
SELECT --AS STRUCT
CASE
  WHEN (
    SELECT COUNT(1) FROM ( -- if the records are the same this returns 0, otherwise >= 1
      SELECT DISTINCT ESCO, SOURCE, LDCTEXT, STATUS, DDR_DATE, TempF, HeatingDegreeDays, DecaTherms
      FROM `gas-ddr.gas_ddr_outbound.LexingtonDDRsOutbound_onchange_Prior_Filtered`
      EXCEPT DISTINCT
      SELECT DISTINCT ESCO, SOURCE, LDCTEXT, STATUS, DDR_DATE, TempF, HeatingDegreeDays, DecaTherms
      FROM `gas-ddr.gas_ddr_outbound.LexingtonDDRsOutbound_onchange_Latest_Filtered`
    )
  ) = 0
  THEN 1 -- this does work
  ELSE 0
END
How can I get this query to return results from an existing table?
Your question is still a little generic, so my answer is as well - it just mimics your use case to the extent I can reverse engineer it from your comments.
So, in the code below, `project.dataset.yourtable` mimics your table, whereas
`project.dataset.yourtable_Prior_Filtered` and `project.dataset.yourtable_Latest_Filtered` mimic your respective views.
#standardSQL
WITH `project.dataset.yourtable` AS (
SELECT 'aaa' cols, 'prior' filter UNION ALL
SELECT 'bbb' cols, 'latest' filter
), `project.dataset.yourtable_Prior_Filtered` AS (
SELECT cols FROM `project.dataset.yourtable` WHERE filter = 'prior'
), `project.dataset.yourtable_Latest_Filtered` AS (
SELECT cols FROM `project.dataset.yourtable` WHERE filter = 'latest'
), check AS (
SELECT COUNT(1) > 0 changed FROM (
SELECT DISTINCT cols FROM `project.dataset.yourtable_Latest_Filtered`
EXCEPT DISTINCT
SELECT DISTINCT cols FROM `project.dataset.yourtable_Prior_Filtered`
)
)
SELECT t.* FROM `project.dataset.yourtable` t
CROSS JOIN check WHERE check.changed
the result is
Row cols filter
1 aaa prior
2 bbb latest
if you changed your table to
WITH `project.dataset.yourtable` AS (
SELECT 'aaa' cols, 'prior' filter UNION ALL
SELECT 'aaa' cols, 'latest' filter
) ......
the result will be
Row cols filter
Query returned zero records.
I hope this gives you the right direction.
Added more explanations:
I could be wrong - but per your question, it looks like you have one table, project.dataset.yourtable, and two views, project.dataset.yourtable_Prior_Filtered and project.dataset.yourtable_Latest_Filtered, which represent the state of your table before and after some event.
So, the first three CTEs in the answer above just mimic the table and views which you described in your question.
They are here so you can see the concept and can play with it without any extra work before adjusting it to your real use case.
For your real use case you should omit them and use your real table and view names, with whatever columns they have.
So the query for you to play with is:
#standardSQL
WITH check AS (
SELECT COUNT(1) > 0 changed FROM (
SELECT DISTINCT cols FROM `project.dataset.yourtable_Latest_Filtered`
EXCEPT DISTINCT
SELECT DISTINCT cols FROM `project.dataset.yourtable_Prior_Filtered`
)
)
SELECT t.* FROM `project.dataset.yourtable` t
CROSS JOIN check WHERE check.changed
It should be a very simple IF statement in any language.
Unfortunately no! It cannot be done with just a simple IF, and if you see fit you can submit a feature request to the BigQuery team for whatever you think makes sense.
I am trying my hand at Oracle 11g. I have a requirement where I want to fetch those ids from a list which do not exist in the table.
For example:
SELECT * FROM STOCK
WHERE item_id IN ('1','2'); -- return the ids from the list that have no matching records
I mean, if item_id '1' is not present in the db then the query should return me 1.
How can I achieve this?
You need to store the values in some sort of "table". Then you can use left join or not exists or something similar:
with ids as (
select 1 as id from dual union all
select 2 from dual
)
select ids.id
from ids
where not exists (select 1 from stock s where s.item_id = ids.id);
You can use a LEFT JOIN to an in-line table that contains the values to be searched:
SELECT t1.val
FROM (
SELECT '1' val UNION ALL SELECT '2'
) t1
LEFT JOIN STOCK t2 ON t1.val = t2.item_id
WHERE t2.item_id IS NULL
First create the list of possible IDs (e.g. 0 to 99 in the query below). You can use a recursive CTE for this. Then select these IDs and remove the IDs already present in the table from the result:
with possible_ids(id) as
(
select 0 as id from dual
union all
select id + 1 as id from possible_ids where id < 99
)
select id from possible_ids
minus
select item_id from stock;
A primary concern of the OP seems to be a terse notation of the query, notably of the set of values to test for. The straightforward recommendation would be to retrieve these values by another query, or to generate them as a union of queries from the dual table (see the other answers for this).
The following alternative solution allows for a verbatim specification of the test values under the following conditions:
There is a character that does not occur in any of the test values provided (in the example that will be -).
The number of values to test stays well below 2000 (to be precise, the list of values plus separators must be written as a varchar2 literal, which imposes the length limit). However, this should not be an actual concern - if the test involves lists of hundreds of ids, these lists should definitely be retrieved from a table/view.
Caveat
Whether this method is worth the hassle ( not to mention potential performance impacts ) is questionable, imho.
Solution
The test values will be provided as a single varchar2 literal with - separating the values, which is as terse as specifying them as a list argument to the IN operator. The string starts and ends with -.
'-1-2-3-156-489-4654648-'
The number of items is computed as follows:
select cond, regexp_count(cond, '[-]') - 1 cnt_items
from (select '-1-2-3-156-489-4654648-' cond from dual);
A list of integers up to the number of items starting with 1 can be generated using the LEVEL pseudocolumn from hierarchical queries:
select level from dual connect by level < 42;
The n-th integer from that list will serve to extract the n-th value from the string (exemplified for the 4th value) :
select substr(cond, instr(cond, '-', 1, 4) + 1,
              instr(cond, '-', 1, 4+1) - instr(cond, '-', 1, 4) - 1) si
from (
  select cond, regexp_count(cond, '[-]') - 1 cnt_items
  from (select '-1-2-3-156-489-4654648-' cond from dual)
);
The non-existent stock ids are generated by subtracting the set of stock ids from the set of values. Putting it all together:
select substr ( cond, instr(cond,'-',1,level )+1, instr(cond,'-',1,level+1 ) - instr(cond,'-',1,level ) - 1 ) si
from (
select cond
, regexp_count ( cond, '[-]' ) - 1 cnt_items
from (
select '-1-2-3-156-489-4654648-' cond from dual
)
)
connect by level <= cnt_items + 1
minus
select item_id from stock
;
I have a coworker looking for an aggregate PRODUCT function in SQL, and I don't recall ever running into anything like that.
Is there a reasonable technique that would let you simulate it?
SELECT PRODUCT(X)
FROM
(
SELECT 3 X FROM DUAL
UNION ALL
SELECT 5 X FROM DUAL
UNION ALL
SELECT 2 X FROM DUAL
)
would yield 30
select exp(sum(ln(col)))
from table;

Edit: this only works if col is always > 0 (LN is undefined for zero and negative values).
DECLARE @a int
SET @a = 1

-- re-assign @a for each row in the result
-- as what @a was before * the value in the row
SELECT @a = @a * amount
FROM theTable
There's a way to do string concat that is similar:

DECLARE @b varchar(max)
SET @b = ''

SELECT @b = @b + CustomerName
FROM Customers
Here's another way to do it. This is definitely the longer way to do it but it was part of a fun project.
You've got to reach back to school for this one, lol. The key to remember here is that LOG is the inverse of the exponent.
LOG10(X*Y) = LOG10(X) + LOG10(Y)
or
ln(X*Y) = ln(X) + ln(Y) (ln = natural log, i.e. log base e)
Example
If X = 5 and Y = 6:
X * Y = 30
ln(5) + ln(6) ≈ 3.4
ln(30) ≈ 3.4
e^3.4 ≈ 30, which is 5 x 6
EXP(3.4) ≈ 30
So above, if 5 and 6 each occupied a row in the table, we take the natural log of each value, sum up the rows, then take the exponent of the sum to get 30.
Below is the code in a SQL statement for SQL Server. Some editing is likely required to make it run on Oracle. Hopefully it's not a big difference but I suspect at least the CASE statement isn't the same on Oracle. You'll notice some extra stuff in there to test if the sign of the row is negative.
CREATE TABLE DUAL (VAL INT NOT NULL)

INSERT DUAL VALUES (3)
INSERT DUAL VALUES (5)
INSERT DUAL VALUES (2)

SELECT
    -- flip the sign when the count of negative values is odd
    CASE SUM(CASE WHEN SIGN(VAL) = -1 THEN 1 ELSE 0 END) % 2
        WHEN 1 THEN -1
        ELSE 1
    END
    * CASE
        -- any zero row forces the product to zero
        WHEN MIN(ABS(VAL)) = 0 THEN 0
        -- multiply the non-zero rows via the sum of the logs of their absolute values
        WHEN SUM(VAL) IS NOT NULL THEN EXP(SUM(LOG(ABS(CASE WHEN SIGN(VAL) <> 0 THEN VAL END))))
        ELSE NULL
    END
    AS PRODUCT
FROM DUAL
The accepted answer by tuinstoel is correct, of course:
select exp(sum(ln(col)))
from table;
But notice that if col is of type NUMBER, you will find a tremendous performance improvement when using BINARY_DOUBLE instead. Ideally, you would have a BINARY_DOUBLE column in your table, but if that's not possible, you can still cast col to BINARY_DOUBLE. I got a 100x improvement in a simple test that I documented here, for this cast:
select exp(sum(ln(cast(col as binary_double))))
from table;
Is there a reasonable technique that would let you simulate it?
One technique could be using LISTAGG to generate a product_expression string and XMLTABLE + dbms_xmlgen.getXMLType to evaluate it:
WITH cte AS (
SELECT grp, LISTAGG(l, '*') AS product_expression
FROM t
GROUP BY grp
)
SELECT c.*, s.val AS product_value
FROM cte c
CROSS APPLY(
SELECT *
FROM XMLTABLE('/ROWSET/ROW/*'
PASSING dbms_xmlgen.getXMLType('SELECT ' || c.product_expression || ' FROM dual')
COLUMNS val NUMBER PATH '.')
) s;
db<>fiddle demo
Output:
+------+---------------------+---------------+
| GRP | PRODUCT_EXPRESSION | PRODUCT_VALUE |
+------+---------------------+---------------+
| b | 2*6 | 12 |
| a | 3*5*7 | 105 |
+------+---------------------+---------------+
A more robust version, handling a single NULL value in the group:
WITH cte AS (
SELECT grp, LISTAGG(l, '*') AS product_expression
FROM t
GROUP BY grp
)
SELECT c.*, s.val AS product_value
FROM cte c
OUTER APPLY(
SELECT *
FROM XMLTABLE('/ROWSET/ROW/*'
PASSING dbms_xmlgen.getXMLType('SELECT ' || c.product_expression || ' FROM dual')
COLUMNS val NUMBER PATH '.')
WHERE c.product_expression IS NOT NULL
) s;
db<>fiddle demo
CROSS/OUTER APPLY (Oracle 12c) is used for convenience and could be replaced with nested subqueries.
This approach could be used for generating different aggregation functions.
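For instance, a sketch of the same pattern evaluating a generated sum-of-squares expression instead of a product (reusing the demo table t(grp, l) from above):
WITH cte AS (
  SELECT grp, LISTAGG(l * l, '+') AS sum_sq_expression
  FROM t
  GROUP BY grp
)
SELECT c.*, s.val AS sum_sq_value
FROM cte c
CROSS APPLY(
  SELECT *
  FROM XMLTABLE('/ROWSET/ROW/*'
       PASSING dbms_xmlgen.getXMLType('SELECT ' || c.sum_sq_expression || ' FROM dual')
       COLUMNS val NUMBER PATH '.')
) s;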
There are many different implementations of "SQL". When you say "does SQL have", are you referring to a specific ANSI version of SQL, or to a vendor-specific implementation? DavidB's answer is one that works in a few different environments I have tested, but depending on your environment you could write or find a function exactly like what you are asking for. Say you were using Microsoft SQL Server 2005; then a possible solution would be to write a custom aggregator in .NET code named PRODUCT, which would allow your original query to work exactly as you have written it.
Otherwise, in SQL Server you might have to do:
SELECT EXP(SUM(LOG([col])))
FROM table;