I have a table like this to save the results of a medical checkup, along with the date the report was sent and the result. The date sent is based on the clinic_visit date. A client can have one or more reports (the dates may vary).
---------------------------------------
| client_id | date_sent | result |
---------------------------------------
| 1 | 2001 | A |
| 1 | 2002 | B |
| 2 | 2002 | D |
| 3 | 2001 | A |
| 3 | 2003 | C |
| 3 | 2005 | E |
| 4 | 2002 | D |
| 4 | 2004 | E |
| 5 | 2004 | B |
---------------------------------------
I want to extract the following report from the above data.
---------------------------------------------------
| client_id | result1 | result2 | result3 |
---------------------------------------------------
| 1 | A | B | |
| 2 | D | | |
| 3 | A | C | E |
| 4 | D | E | |
| 5 | B | | |
---------------------------------------------------
I'm working on PostgreSQL. The "crosstab" function won't work here because "date_sent" is not consistent across clients.
Can anyone please give a rough idea of how it should be queried?
I suggest the following approach:
SELECT client_id, array_agg(result) AS results
FROM labresults
GROUP BY client_id;
It's not exactly the same output format, but it will give you the same information much more quickly and cleanly.
If you want the results in separate columns, you can always do this:
SELECT client_id,
results[1] AS result1,
results[2] AS result2,
results[3] AS result3
FROM
(
SELECT client_id, array_agg(result) AS results
FROM labresults
GROUP BY client_id
) AS r
ORDER BY client_id;
although that will obviously introduce a hardcoded number of possible results.
While I was reading about "simulating row_number", I tried to figure out another way to do this.
SELECT client_id,
MAX( CASE seq WHEN 1 THEN result ELSE '' END ) AS result1,
MAX( CASE seq WHEN 2 THEN result ELSE '' END ) AS result2,
MAX( CASE seq WHEN 3 THEN result ELSE '' END ) AS result3,
MAX( CASE seq WHEN 4 THEN result ELSE '' END ) AS result4,
MAX( CASE seq WHEN 5 THEN result ELSE '' END ) AS result5
FROM ( SELECT p1.client_id,
p1.result,
( SELECT COUNT(*)
FROM labresults p2
WHERE p2.client_id = p1.client_id
AND p2.result <= p1.result )
FROM labresults p1
) D ( client_id, result, seq )
GROUP BY client_id;
but the query took about 10 minutes (500,000+ ms) for 30,000 records. This is too long.
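If your PostgreSQL version supports window functions (8.4 or later), a sketch of the same idea that replaces the correlated COUNT(*) with row_number() should be far faster; it assumes the labresults table and columns shown above:

SELECT client_id,
       MAX(CASE seq WHEN 1 THEN result END) AS result1,
       MAX(CASE seq WHEN 2 THEN result END) AS result2,
       MAX(CASE seq WHEN 3 THEN result END) AS result3
FROM (
       -- number each client's reports in date_sent order instead of counting per row
       SELECT client_id,
              result,
              row_number() OVER (PARTITION BY client_id ORDER BY date_sent) AS seq
       FROM labresults
     ) d
GROUP BY client_id
ORDER BY client_id;

Each seq value is computed in a single pass over the table, so the quadratic per-row subquery disappears.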
I have to perform a query where I can count the number of distinct codes per Id.
|Id | Code
------------
| 1 | C
| 1 | I
| 2 | I
| 2 | C
| 2 | D
| 2 | D
| 3 | C
| 3 | I
| 3 | D
| 4 | I
| 4 | C
| 4 | C
The output should be something like:
|Id | Count | #Code C | #Code I | #Code D
-------------------------------------------
| 1 | 2 | 1 | 1 | 0
| 2 | 3 | 1 | 0 | 2
| 3 | 3 | 1 | 1 | 1
| 4 | 2 | 2 | 1 | 0
Can you give me some advice on this?
This answers the original version of the question.
You are looking for count(distinct):
select id, count(distinct code)
from t
group by id;
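If you also want the per-code columns from the expected output, a portable sketch using conditional aggregation works without PIVOT; it assumes the same table t and the three codes shown in the question, with simplified column aliases:

select id,
       count(distinct code) as distinct_codes,            -- the "Count" column
       sum(case when code = 'C' then 1 else 0 end) as code_c,
       sum(case when code = 'I' then 1 else 0 end) as code_i,
       sum(case when code = 'D' then 1 else 0 end) as code_d
from t
group by id;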
If the codes are limited to the ones provided, the following query can produce the desired result.
select
pvt.Id,
codes.total As [Count],
COALESCE(C, 0) AS [#Code C],
COALESCE(I, 0) AS [#Code I],
COALESCE(D, 0) AS [#Code D]
from
( select Id, Code, Count(code) cnt
from t
Group by Id, Code) s
PIVOT(MAX(cnt) FOR Code IN ([C], [I], [D])) pvt
join (select Id, count(distinct Code) total from t group by Id) codes on pvt.Id = codes.Id ;
Note: as I can see from the sample input data, code 'I' appears for every Id, yet its count is shown as zero for Id = 2 in the expected output in the question.
Here is the correct output:
DB Fiddle
I have a table with old values (some null) and new values for various attributes, all inserted at different add times throughout the months. I'm trying to update a second table that holds records with business month-end dates. Right now, these records only contain the most recent new values for all month-end dates. The goal is to create historical data by updating the previous month-end values with the old values from the first table.

I am a beginner and was able to come up with a query to update one object where there was one entry in the first table. Now I am trying to expand the query to include multiple objects, with possibly multiple old values within the same month. I tried to use "order by" (since I need to apply the updates for a month in ascending order so it gets the latest value), but read that it doesn't work in update statements without a subquery. So I tried my hand at a more complicated query, without success. I am getting the following error: single-row subquery returns more than one row. Thanks!
TableA:
| ID | TYPE | OLD_VALUE | NEW_VALUE | ADD_TIME|
-----------------------------------------------
| 1 | A | 2 | 3 | 1/11/2019 8:00:00am |
| 1 | B | 3 | 4 | 12/10/2018 8:00:00am|
| 1 | B | 4 | 5 | 12/11/2018 8:00:00am|
| 2 | A | 5 | 1 | 12/5/2018 08:00:00am|
| 2 | A | 1 | 2 | 12/5/2019 09:00:00am|
| 2 | A | 2 | 3 | 12/5/2019 10:00:00am|
| 2 | B | 1 | 2 | 12/5/2019 10:00:00am|
TableB
| ID | MONTH_END | TYPE_A | TYPE_B |
-----------------------------------
| 1 | 1/31/19 | 3 | 5 |
| 1 | 12/31/18 | 3 | 5 |
| 1 | 11/30/18 | 3 | 5 |
| 2 | 12/31/18 | 3 | 2 |
| 2 | 11/30/18 | 3 | 2 |
Desired Output for TableB
| ID | MONTH_END | TYPE_A | TYPE_B |
-----------------------------------
| 1 | 1/31/19 | 3 | 5 |
| 1 | 12/31/18 | 2 | 5 |
| 1 | 11/30/18 | 2 | 3 |
| 2 | 12/31/18 | 3 | 2 |
| 2 | 11/30/18 | 5 | 2 |
My query for Type A (which I plan to adapt for Type B and run as well to get the desired output):
update TableB B
set b.type_a =
(
with aa as
(
select id, nvl(old_value, new_value) typea, add_time
from TableA
where type = 'A'
order by id, add_time asc
)
select typea
from aa
where aa.id = b.id
and b.month_end <= aa.add_time
)
where exists
(
with aa as
(
select id, nvl(old_value, new_value) typea, add_time
from TableA
where type = 'A'
order by id, add_time asc
)
select typea
from aa
where aa.id = b.id
and b.month_end <= aa.add_time
)
Kudos for giving example input data and desired output. I found your question a bit confusing, so let me rephrase it as "Provide the last Type A value from TableA that is in the same month as the month end."
By matching on id, type, and date of entry, we can get your answer. The "ROWNUM = 1" is there to limit the result set to a single entry in case there is more than one row with the same add_time. This SQL is still a mess; maybe someone else can come up with a better version.
UPDATE tableb b
SET b.type_a =
   (SELECT a.old_value
      FROM tablea a
     WHERE a.id = b.id
       AND a.type = 'A'
       AND LAST_DAY( TRUNC( a.add_time ) ) = b.month_end
       AND a.add_time =
            (SELECT MAX( a2.add_time )
               FROM tablea a2
              WHERE a2.id = b.id
                AND a2.type = 'A'
                AND LAST_DAY( TRUNC( a2.add_time ) ) = b.month_end)
       AND ROWNUM = 1)
WHERE EXISTS
   (SELECT a.old_value
      FROM tablea a
     WHERE a.id = b.id
       AND a.type = 'A'
       AND LAST_DAY( TRUNC( a.add_time ) ) = b.month_end);
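If the goal is instead to reconstruct the value as it stood at each month end (that is, the old_value of the earliest change recorded after that month end, which is one reading of the question), a hedged Oracle sketch using the KEEP (DENSE_RANK FIRST) aggregate avoids the single-row-subquery error; the table and column names follow the question, and the interpretation is an assumption:

UPDATE tableb b
SET b.type_a =
   (SELECT MIN( NVL(a.old_value, a.new_value) )
             KEEP ( DENSE_RANK FIRST ORDER BY a.add_time )  -- old_value of the first change after this month end
      FROM tablea a
     WHERE a.id = b.id
       AND a.type = 'A'
       AND a.add_time > b.month_end)
WHERE EXISTS
   (SELECT 1
      FROM tablea a
     WHERE a.id = b.id
       AND a.type = 'A'
       AND a.add_time > b.month_end);

An analogous statement with a.type = 'B' and b.type_b would handle the Type B column.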
I have 2 tables, something like this. I'm running a Hive query, and window functions seem pretty limited in Hive.
Table dept
id | name |
1 | a |
2 | b |
3 | c |
4 | d |
Table time (built with a heavy query, so it would be a very slow process if I needed to join it to another newly created table).
id | date | first | last |
1 | 1992-01-01 | 1 | 1 |
2 | 1993-02-02 | 1 | 2 |
2 | 1993-03-03 | 2 | 1 |
3 | 1993-01-01 | 1 | 3 |
3 | 1994-01-01 | 2 | 2 |
3 | 1995-01-01 | 3 | 1 |
I need to retrieve something like this:
SELECT d.id,d.name,
t.date AS firstdate,
td.date AS lastdate
FROM dbo.dept d LEFT JOIN dbo.time t ON d.id=t.id AND t.first=1
LEFT JOIN time td ON d.id=td.id AND td.last=1
What is the most optimized way to do this?
A GROUP BY operation that will be done in a single map-reduce job:
select id
,max(name) as name
,max(case when first = 1 then `date` end) as firstdate
,max(case when last = 1 then `date` end) as lastdate
from (select id
,null as name
,`date`
,first
,last
from time
where first = 1
or last = 1
union all
select id
,name
,null as `date`
,null as first
,null as last
from dept
) t
group by id
;
+----+------+------------+------------+
| id | name | firstdate | lastdate |
+----+------+------------+------------+
| 1 | a | 1992-01-01 | 1992-01-01 |
| 2 | b | 1993-02-02 | 1993-03-03 |
| 3 | c | 1993-01-01 | 1995-01-01 |
| 4 | d | (null) | (null) |
+----+------+------------+------------+
select d.id
      ,max(d.name) as name
      ,max(case when t.first = 1 then t.`date` end) as firstdate
      ,max(case when t.last = 1 then t.`date` end) as lastdate
from dept d left join
     time t on d.id = t.id
where t.first = 1 or t.last = 1
group by d.id
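Note that the WHERE clause in the query above effectively turns the LEFT JOIN into an inner join, so dept rows with no matching time rows (id 4) disappear. A hedged variant, assuming the same tables, pre-aggregates the time table in a subquery so those rows are kept:

select d.id
      ,d.name
      ,t.firstdate
      ,t.lastdate
from dept d
left join (select id
                 ,max(case when first = 1 then `date` end) as firstdate
                 ,max(case when last = 1 then `date` end) as lastdate
           from time
           where first = 1 or last = 1
           group by id) t
  on d.id = t.id
;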
I have been really struggling with this one! Essentially, I have been trying to use COUNT and GROUP BY within a subquery, and I keep getting errors about returning more than one value, along with a whole host of other errors.
So, I have the following table:
start_date | ID_val | DIR | tsk | status|
-------------+------------+--------+-----+--------+
25-03-2015 | 001 | U | 28 | S |
27-03-2016 | 003 | D | 56 | S |
25-03-2015 | 004 | D | 56 | S |
25-03-2015 | 001 | U | 28 | S |
16-02-2016 | 002 | D | 56 | S |
25-03-2015 | 001 | U | 28 | S |
16-02-2016 | 002 | D | 56 | S |
16-02-2016 | 005 | NULL | 03 | S |
25-03-2015 | 001 | U | 17 | S |
16-02-2016 | 002 | D | 81 | S |
Ideally, I need to count the number of times each unique ID_val had, for example, U and 28 or D and 56, and only those combinations.
For example, I was hoping to return the results below, if it's possible:
start_date | ID_val | no of times | status |
-------------+------------+---------------+--------+
25-03-2015 | 001 | 3 | S |
27-03-2016 | 003 | 1 | S |
25-03-2015 | 004 | 1 | S |
25-03-2015 | 002 | 3 | S |
I've managed to get the number of times on its own, but not as part of a table with the other values (a subquery?).
Any advice is much appreciated!
This is a basic conditional aggregation:
select id_val,
sum(case when (dir = 'U' and tsk = 28) or (dir = 'D' and tsk = 56)
then 1 else 0
end) as NumTimes
from t
group by id_val;
I left out the other columns because your question focuses on id_val, dir, and tsk. The other columns seem unnecessary.
You want one result per ID_val, so you'd group by ID_val.
You want the minimum start date: min(start_date).
You want any status (as it is always the same): e.g. min(status) or max(status).
You want to count matches: count(case when <match> then 1 end).
select
min(start_date) as start_date,
id_val,
count(case when (dir = 'U' and tsk = 28) or (dir = 'D' and tsk = 56) then 1 end)
as no_of_times,
min(status) as status
from mytable
group by id_val;
Use COUNT with GROUP BY.
Query
select start_date, ID_val, count(ID_Val) as [no. of times], [status]
from your_table_name
where (tsk = 28 and DIR = 'U') or (tsk = 56 and DIR = 'D')
group by start_date, ID_val, [status]
So far, all the answers assume you are going to know the value pairs in advance and will require modification if these change or are added to. This solution makes no assumptions.
Table Creation
CREATE TABLE IDCounts
(
start_date date
, ID_val char(3)
, DIR nchar(1)
, tsk int
, status nchar(1)
)
INSERT IDCounts
VALUES
('2015-03-25','001','U' , 28,'S')
,('2016-03-27','003','D' , 56,'S')
,('2015-03-25','004','D' , 56,'S')
,('2015-03-25','001','U' , 28,'S')
,('2016-03-16','002','D' , 56,'S')
,('2015-03-25','001','U' , 28,'S')
,('2016-02-16','002','D' , 56,'S')
,('2016-02-16','005', NULL, 03,'S')
,('2015-03-25','001','U' , 17,'S')
,('2016-02-16','002','D' , 81,'S');
Code
SELECT Distinct i1.start_date, i1.ID_Val, i2.NumOfTimes, i1.status
from IDCounts i1
JOIN
(
select start_date, ID_val, isnull(DIR,N'')+cast(tsk as nvarchar) ValuePair, count(DIR+cast(tsk as nvarchar)) as NumOfTimes
from IDCounts
GROUP BY start_date, ID_val, isnull(DIR,N'')+cast(tsk as nvarchar)
) i2 on i2.start_date=i1.start_date
and i2.ID_val =i1.ID_val
and i2.ValuePair =isnull(i1.DIR,N'')+cast(i1.tsk as nvarchar)
order by i1.ID_val, i1.start_date;
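A hedged variant of the same idea, assuming SQL Server 2005 or later: a windowed COUNT(*) removes the self-join by counting each (DIR, tsk) pair per ID_val and start_date directly.

SELECT DISTINCT
       i.start_date,
       i.ID_val,
       COUNT(*) OVER (PARTITION BY i.start_date, i.ID_val, i.DIR, i.tsk) AS NumOfTimes,
       i.status
FROM IDCounts i
ORDER BY i.ID_val, i.start_date;

Like the join version, it returns one row per distinct (DIR, tsk) combination an ID_val has on a given date.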
I have this pivoted table
+---------+----------+----------+-----+----------+
| Date | Product1 | Product2 | ... | ProductN |
+---------+----------+----------+-----+----------+
| 7/1/15 | 5 | 2 | ... | 7 |
| 8/1/15 | 7 | 1 | ... | 9 |
| 9/1/15 | NULL | 7 | ... | NULL |
| 10/1/15 | 8 | NULL | ... | NULL |
| 11/1/15 | NULL | NULL | ... | NULL |
+---------+----------+----------+-----+----------+
I wanted to fill in the NULL cells with the values above them. So, the output should be something like this.
+---------+----------+----------+-----+----------+
| Date | Product1 | Product2 | ... | ProductN |
+---------+----------+----------+-----+----------+
| 7/1/15 | 5 | 2 | ... | 7 |
| 8/1/15 | 7 | 1 | ... | 9 |
| 9/1/15 | 7 | 7 | ... | 9 |
| 10/1/15 | 8 | 7 | ... | 9 |
| 11/1/15 | 8 | 7 | ... | 9 |
+---------+----------+----------+-----+----------+
I've found this article that might help me, but it only manipulates one column. How do I apply this to all my columns, or how can I achieve such a result given that my columns are dynamic?
Any help would be much appreciated. Thanks!
The ANSI standard has the IGNORE NULLS option on LAG(). This is exactly what you want. Alas, SQL Server has not (yet?) implemented this feature.
So, you can do this in several ways. One uses multiple OUTER APPLYs (a sketch of that variant appears after the query below). Another uses correlated subqueries:
select p.date,
       (case when p.product1 is not null then p.product1
             else (select top 1 p2.product1
                   from pivoted p2
                   where p2.date < p.date and p2.product1 is not null
                   order by p2.date desc)
        end) as product1,
       (case when p.product2 is not null then p.product2
             else (select top 1 p2.product2
                   from pivoted p2
                   where p2.date < p.date and p2.product2 is not null
                   order by p2.date desc)
        end) as product2,
       . . .
from pivoted p;
I would recommend an index on date for this query.
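For completeness, a hedged sketch of the OUTER APPLY variant mentioned above, under the same assumptions (a table named pivoted with a date column and one column per product):

select p.date,
       coalesce(p.product1, f1.product1) as product1,
       coalesce(p.product2, f2.product2) as product2
       -- repeat one OUTER APPLY block per product column
from pivoted p
outer apply (select top 1 p2.product1
             from pivoted p2
             where p2.date < p.date and p2.product1 is not null
             order by p2.date desc) f1
outer apply (select top 1 p2.product2
             from pivoted p2
             where p2.date < p.date and p2.product2 is not null
             order by p2.date desc) f2;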
I would like to suggest a solution. If you have a table that consists of merely two columns, my solution will work perfectly.
+---------+----------+
| Date | Product |
+---------+----------+
| 7/1/15 | 5 |
| 8/1/15 | 7 |
| 9/1/15 | NULL |
| 10/1/15 | 8 |
| 11/1/15 | NULL |
+---------+----------+
select x.[Date],
case
when x.[Product] is null
then min(c.[Product])
else
x.[Product]
end as Product
from
(
-- this subquery evaluates a minimum distance to the rows where Product column contains a value
select [Date],
[Product],
min(case when delta >= 0 then delta else null end) delta_min,
max(case when delta < 0 then delta else null end) delta_max
from
(
-- this subquery maps Product table to itself and evaluates the difference between the dates
select p.[Date],
p.[Product],
DATEDIFF(dd, p.[Date], pnn.[Date]) delta
from #products p
cross join (select * from #products where [Product] is not null) pnn
) x
group by [Date], [Product]
) x
left join #products c on x.[Date] =
case
when abs(delta_min) < abs(delta_max) then DATEADD(dd, -delta_min, c.[Date])
else DATEADD(dd, -delta_max, c.[Date])
end
group by x.[Date], x.[Product]
order by x.[Date]
In this query I mapped the table to its own rows that contain values via a CROSS JOIN. Then I calculated the differences between dates in order to pick the closest ones and thereafter fill the empty cells with values.
Result:
+---------+----------+
| Date | Product |
+---------+----------+
| 7/1/15 | 5 |
| 8/1/15 | 7 |
| 9/1/15 | 7 |
| 10/1/15 | 8 |
| 11/1/15 | 8 |
+---------+----------+
Actually, the suggested query doesn't choose the previous value; instead, it selects the closest value. In other words, my code can be used for a number of different purposes.
First you need to add an identity column in a temporary or permanent table; then it can be solved with the following method.
--- Solution ----
Create Table #Test (ID Int Identity (1,1),[Date] Date , Product_1 INT )
Insert Into #Test ([Date], Product_1)
Values
('7/1/15',5)
,('8/1/15',7)
,('9/1/15',Null)
,('10/1/15',8)
,('11/1/15',Null)
Select ID, [Date],
       IIF( Product_1 is null,
            (Select Product_1
             From #Test
             Where ID = (Select Top 1 a.ID
                         From #Test a
                         Where a.Product_1 is not null and a.ID < b.ID
                         Order By a.ID desc)),
            Product_1 ) as Product_1
from #Test b
-- Solution End ---