I want to write a Hive query so that I can see the count of null values in each column.
You can use this SQL; it will give you the total count, the null count and the not-null count.
SELECT
    count(*) AS total_cnt,
    sum(case when data_col is null then 1 else 0 end) AS null_cnt,
    sum(case when data_col is null then 0 else 1 end) AS nonnull_cnt
FROM mytable
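The same pattern extends to any number of columns in a single pass over the table; a hedged sketch, assuming hypothetical columns col1, col2 and col3 in mytable:

SELECT
    count(*) AS total_cnt,
    sum(case when col1 is null then 1 else 0 end) AS col1_null_cnt,
    sum(case when col2 is null then 1 else 0 end) AS col2_null_cnt,
    sum(case when col3 is null then 1 else 0 end) AS col3_null_cnt
FROM mytable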
I have a table containing records as per the image below. I want to get a count for each status, and I am able to do that by selecting each type of status separately, but that means I need to execute four queries to get the result. I would like to know how I can achieve this with a single query statement. Any advice or suggestions are welcome and highly appreciated. Thanks in advance.
WITH CTE (NO, STATUS) AS
(
    SELECT 1,  'OPEN'        UNION ALL
    SELECT 2,  'OPEN'        UNION ALL
    SELECT 3,  'BILLED'      UNION ALL
    SELECT 4,  'CANCELLED'   UNION ALL
    SELECT 5,  'BILLING'     UNION ALL
    SELECT 6,  'BILLED'      UNION ALL
    SELECT 7,  'CANCELLED'   UNION ALL
    SELECT 8,  'BILLING'     UNION ALL
    SELECT 9,  'CONFIRM'     UNION ALL
    SELECT 10, 'IN PROGRESS' UNION ALL
    SELECT 11, 'OPEN'        UNION ALL
    SELECT 12, 'CONFIRM'
)
SELECT
    SUM(CASE WHEN C.STATUS = 'BILLED'    THEN 1 ELSE 0 END) AS BILLED,
    SUM(CASE WHEN C.STATUS = 'BILLING'   THEN 1 ELSE 0 END) AS BILLING,
    SUM(CASE WHEN C.STATUS = 'CANCELLED' THEN 1 ELSE 0 END) AS CANCELLED,
    SUM(CASE WHEN C.STATUS NOT IN ('CANCELLED','BILLING','BILLED') THEN 1 ELSE 0 END) AS UNBILL
FROM CTE AS C
The CTE just reproduces the sample data you provided. Replace the reference to it with a reference to your own table.
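If one row per status is acceptable instead of one column per status, a plain GROUP BY is even simpler; a sketch assuming a hypothetical your_table with a STATUS column:

SELECT STATUS, COUNT(*) AS CNT
FROM your_table
GROUP BY STATUS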
I have two queries and I want to get the maximum of their two results.
MAX((SELECT COUNT(p.[ItemID]) FROM [dbo].[Table] p WHERE HasHuman=0),
(SELECT COUNT(p.[ItemID]) FROM [dbo].[Table] p WHERE HasHuman=1))
You can calculate both results in a single query and then apply TOP:
select top 1
HasHuman,
COUNT(p.[ItemID]) as cnt
from [dbo].[Table] p
group by HasHuman
order by cnt desc
You could even do this in a single query:
SELECT
CASE WHEN SUM(CASE WHEN HasHuman=0 THEN 1 ELSE 0 END) >
SUM(CASE WHEN HasHuman=1 THEN 1 ELSE 0 END)
THEN SUM(CASE WHEN HasHuman=0 THEN 1 ELSE 0 END)
ELSE SUM(CASE WHEN HasHuman=1 THEN 1 ELSE 0 END) END
FROM [dbo].[Table]
WHERE ItemID IS NOT NULL -- you were not counting NULLs
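As a side note, if you happen to be on SQL Server 2022 or later (an assumption, not something the question states), GREATEST() expresses the same comparison more directly; a sketch only:

SELECT GREATEST(
           SUM(CASE WHEN HasHuman = 0 THEN 1 ELSE 0 END),
           SUM(CASE WHEN HasHuman = 1 THEN 1 ELSE 0 END)
       ) AS max_cnt
FROM [dbo].[Table]
WHERE ItemID IS NOT NULL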
SELECT MAX(RC)
FROM (SELECT COUNT(p.ItemID) AS RC FROM dbo.[Table] p
      WHERE p.HasHuman = 0
      UNION ALL
      SELECT COUNT(p.ItemID) AS RC FROM dbo.[Table] p
      WHERE p.HasHuman = 1
     ) A
I want to do something like this:
select id,
count(*) as total,
FOR temp IN SELECT DISTINCT somerow FROM mytable ORDER BY somerow LOOP
sum(case when somerow = temp then 1 else 0 end) temp,
END LOOP;
from mytable
group by id
order by id
I created a working select:
select id,
count(*) as total,
sum(case when somerow = 'a' then 1 else 0 end) somerow_a,
sum(case when somerow = 'b' then 1 else 0 end) somerow_b,
sum(case when somerow = 'c' then 1 else 0 end) somerow_c,
sum(case when somerow = 'd' then 1 else 0 end) somerow_d,
sum(case when somerow = 'e' then 1 else 0 end) somerow_e,
sum(case when somerow = 'f' then 1 else 0 end) somerow_f,
sum(case when somerow = 'g' then 1 else 0 end) somerow_g,
sum(case when somerow = 'h' then 1 else 0 end) somerow_h,
sum(case when somerow = 'i' then 1 else 0 end) somerow_i,
sum(case when somerow = 'j' then 1 else 0 end) somerow_j,
sum(case when somerow = 'k' then 1 else 0 end) somerow_k
from mytable
group by id
order by id
This works, but it is 'static': if a new value is added to 'somerow', I will have to change the SQL manually to pick up all the values of the somerow column. That is why I'm wondering whether it is possible to do something with a for loop.
So what I want to get is this:
id  somerow_a  somerow_b  ....
0   3          2          ....
1   2          10         ....
2   19         3          ....
.   ...        ...
.   ...        ...
.   ...        ...
So what I'd like to do is count all the rows that contain each specific letter and group them by id (this id isn't a primary key, but it does repeat; there are about 80 different possible values for id).
http://sqlfiddle.com/#!15/18feb/2
Are arrays good for you? (SQL Fiddle)
select
id,
sum(totalcol) as total,
array_agg(somecol) as somecol,
array_agg(totalcol) as totalcol
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
;
id | total | somecol | totalcol
----+-------+---------+----------
1 | 6 | {b,a,c} | {2,1,3}
2 | 5 | {d,f} | {2,3}
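If you later need the arrays back as one row per value (useful when the aggregated form is what gets passed around), a hedged sketch is to unnest them in parallel, assuming the two arrays always have equal length per id, which this query guarantees:

select id, unnest(somecol) as somecol, unnest(totalcol) as totalcol
from (
  select
    id,
    array_agg(somecol) as somecol,
    array_agg(totalcol) as totalcol
  from (
    select id, somecol, count(*) as totalcol
    from mytable
    group by id, somecol
  ) s
  group by id
) t;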
In 9.2 it is possible to have a set of JSON objects (Fiddle)
select row_to_json(s)
from (
select
id,
sum(totalcol) as total,
array_agg(somecol) as somecol,
array_agg(totalcol) as totalcol
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
) s
;
row_to_json
---------------------------------------------------------------
{"id":1,"total":6,"somecol":["b","a","c"],"totalcol":[2,1,3]}
{"id":2,"total":5,"somecol":["d","f"],"totalcol":[2,3]}
In 9.3, with the addition of lateral, a single object (Fiddle)
select to_json(format('{%s}', (string_agg(j, ','))))
from (
select format('%s:%s', to_json(id), to_json(c)) as j
from
(
select
id,
sum(totalcol) as total_sum,
array_agg(somecol) as somecol_array,
array_agg(totalcol) as totalcol_array
from (
select id, somecol, count(*) as totalcol
from mytable
group by id, somecol
) s
group by id
) s
cross join lateral
(
select
total_sum as total,
somecol_array as somecol,
totalcol_array as totalcol
) c
) s
;
to_json
---------------------------------------------------------------------------------------------------------------------------------------
"{1:{\"total\":6,\"somecol\":[\"b\",\"a\",\"c\"],\"totalcol\":[2,1,3]},2:{\"total\":5,\"somecol\":[\"d\",\"f\"],\"totalcol\":[2,3]}}"
In 9.2 it is also possible to have a single object in a more convoluted way, using subqueries instead of lateral.
SQL is very rigid about the return type. It demands to know what to return beforehand.
For a completely dynamic number of resulting values, you can only use arrays like @Clodoaldo posted. That is effectively a static return type; you do not get individual columns for each value.
If you know the number of columns at call time ("semi-dynamic"), you can create a function taking (and returning) polymorphic parameters. Closely related answer with lots of details:
Dynamic alternative to pivot with CASE and GROUP BY
(You also find a related answer with arrays from @Clodoaldo there.)
Your remaining option is to use two round trips to the server: the first to determine the actual query with the actual return type, the second to execute the query built by the first call.
Else, you have to go with a static query. While doing that, I see two nicer options for what you have right now:
1. Simpler expression
select id
, count(*) AS total
, count(somecol = 'a' OR NULL) AS somerow_a
, count(somecol = 'b' OR NULL) AS somerow_b
, ...
from mytable
group by id
order by id;
How does it work?
Compute percents from SUM() in the same SELECT sql query
SQL Fiddle.
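The trick relies on count() ignoring NULL: for each row the boolean expression is either true (counted) or NULL (skipped), since false OR NULL evaluates to NULL. A minimal demo, nothing beyond stock PostgreSQL assumed:

select count(true OR NULL)          AS counted       -- 1: true is not null
     , count(false OR NULL)         AS skipped       -- 0: false OR NULL is NULL
     , count(NULL::boolean OR NULL) AS also_skipped; -- 0: NULL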
2. crosstab()
crosstab() is more complex at first, but written in C, optimized for the task and shorter for long lists. You need the additional module tablefunc installed. Read the basics here if you are not familiar:
PostgreSQL Crosstab Query
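If the module is not installed yet, on PostgreSQL 9.1 or later it can be added per database with (sufficient privileges assumed):

CREATE EXTENSION IF NOT EXISTS tablefunc;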
SELECT * FROM crosstab(
$$
SELECT id
, sum(count(*)) OVER (PARTITION BY id)::int AS total
, somecol
, count(*)::int AS ct -- casting to int, don't think you need bigint?
FROM mytable
GROUP BY 1,3
ORDER BY 1,3
$$
,
$$SELECT unnest('{a,b,c,d}'::text[])$$
) AS f (id int, total int, a int, b int, c int, d int);
My table structure is this
id  last_mod_dt  nr     is_u  is_rog  is_ror  is_unv
1   x            uuid1  1     1       1       0
2   y            uuid1  1     0       1       1
3   z            uuid2  1     1       1       1
I want the count of rows with:
is_ror=1 or is_rog =1
is_u=1
is_unv=1
All in a single query. Is it possible?
The problem I am facing is that there can be duplicate values of nr, as in the table above.
Case statements provide mondo flexibility...
SELECT
sum(case
when is_ror = 1 or is_rog = 1 then 1
else 0
end) FirstCount
,sum(case
when is_u = 1 then 1
else 0
end) SecondCount
,sum(case
when is_unv = 1 then 1
else 0
end) ThirdCount
from MyTable
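If duplicate nr values should only count once (which your note about nr suggests), a hedged variant is to count distinct nr inside each conditional; table and column names are the same as above, only the distinct-nr handling is an assumption about what you want:

SELECT
    count(distinct case when is_ror = 1 or is_rog = 1 then nr end) AS FirstCount
   ,count(distinct case when is_u = 1 then nr end)                 AS SecondCount
   ,count(distinct case when is_unv = 1 then nr end)               AS ThirdCount
from MyTable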
You can use UNION ALL to get multiple results, e.g.:
select count(*) from table1 where is_ror = 1 or is_rog = 1
union all
select count(*) from table1 where is_u = 1
union all
select count(*) from table1 where is_unv = 1
The result set will then contain three rows, each holding one of the counts (UNION ALL rather than UNION, so that equal counts are not collapsed into one row).
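If you also need to tell the three counts apart, a sketch is to add a label column (the label values here are just illustrative):

select 'ror_or_rog' as metric, count(*) as cnt from table1 where is_ror = 1 or is_rog = 1
union all
select 'u'          as metric, count(*) as cnt from table1 where is_u = 1
union all
select 'unv'        as metric, count(*) as cnt from table1 where is_unv = 1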
Sounds pretty simple if "all in a single query" does not disqualify subselects;
SELECT
(SELECT COUNT(DISTINCT nr) FROM table1 WHERE is_ror=1 OR is_rog=1) cnt_ror_reg,
(SELECT COUNT(DISTINCT nr) FROM table1 WHERE is_u=1) cnt_u,
(SELECT COUNT(DISTINCT nr) FROM table1 WHERE is_unv=1) cnt_unv;
How about something like:
SELECT
SUM(IF(is_u > 0 AND is_rog > 0, 1, 0)) AS count_something,
...
from table
group by nr
I think it will do the trick. I am of course not sure exactly what you want, but I believe you can adapt the logic to produce your desired result.