How to count the number of rows on the last day - SQL

I have data like this:
id  date_       type
1   05/03/2020  A
2   07/03/2020  A
3   15/03/2020  A
4   25/03/2020  B
5   24/03/2020  B
6   31/03/2020  C
7   31/03/2020  D
I used the LAST_DAY function, like this:
select last_day(date_) from table1
But I got this:
31/03/2020 : 7
And what I want is this:
31/03/2020 : 2
Thanks!

If you are looking for the count of records whose date_ falls on the last day of its month, then:
Schema and insert statements:
create table table1(id int, date_ date, type varchar(10));
insert into table1 values(1, '05-Mar-2020', 'A');
insert into table1 values(2, '07-Mar-2020', 'A');
insert into table1 values(3, '15-Mar-2020', 'A');
insert into table1 values(4, '25-Mar-2020', 'B');
insert into table1 values(5, '24-Mar-2020', 'B');
insert into table1 values(6, '31-Mar-2020', 'C');
insert into table1 values(7, '31-Mar-2020', 'D');
Query:
select date_, count(*)cnt
from table1
where date_ = last_day(date_)
group by date_;
Output:
DATE_      CNT
---------  ---
31-MAR-20    2
If you need the count for every date_, there is no need to use LAST_DAY:
Query:
select date_, count(*)cnt
from table1
group by date_
order by date_;
Output:
DATE_      CNT
---------  ---
05-MAR-20    1
07-MAR-20    1
15-MAR-20    1
24-MAR-20    1
25-MAR-20    1
31-MAR-20    2
db<>fiddle here

I think you want aggregation:
select date_, count(*)
from t
where date_ = last_day(date_)
group by date_;
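If you only need a single number (how many rows fall on the last day of their month) rather than a per-date breakdown, a minimal sketch against the same table1, assuming Oracle:
-- Count only the rows whose date_ is the last day of its month.
select count(*) as cnt
from table1
where date_ = last_day(date_);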

The way I understood it, "last day" isn't the result of the LAST_DAY function but the maximum date value in the table, and the result you're after is the count of rows whose date equals that maximum date.
If that's so, then this might be one option. Counting rows per date is easy; the ROW_NUMBER analytic function then assigns an ordinal number to each date, sorted in descending order, so the row you need is the one numbered 1.
Something like this:
SQL> select date_, cnt
     from (select date_,
                  count(*) cnt,
                  row_number() over (order by date_ desc) rn
           from table1
           group by date_
          )
     where rn = 1;

DATE_             CNT
---------- ----------
31/03/2020          2
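Under the same reading (count the rows that share the table's maximum date), a simpler sketch with a scalar subquery also works; this assumes the table1 sample from the first answer:
-- Keep only the rows whose date_ equals the maximum date_ in the table, then count them.
select date_, count(*) as cnt
from table1
where date_ = (select max(date_) from table1)
group by date_;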

Related

Oracle get row where column value changed

Say I have a table, something like
ID CCTR DATE
----- ------ ----------
1 2C 8/1/2018
2 2C 7/2/2018
3 2C 5/4/2017
4 2B 3/2/2017
5 2B 1/1/2017
6 UC 11/23/2016
There are other fields, but I kept it simple. I create a query with the dates in descending order, and I want to return the row where CCTR changed; in this case it would return ID 4. Basically, I want to find the previous value of CCTR before it changed, in this case from 2B to 2C.
How do I do this? I've tried to Google it, but I can't seem to find the right method.
You can use the LAG() window function to peek at the previous row and compare it. If your data is:
create table t2 (
id number(6),
cctr varchar2(10),
date1 date
);
insert into t2 (id, cctr, date1) values (1, '2C', date '2018-08-01');
insert into t2 (id, cctr, date1) values (2, '2C', date '2018-07-02');
insert into t2 (id, cctr, date1) values (3, '2C', date '2017-05-04');
insert into t2 (id, cctr, date1) values (4, '2B', date '2017-03-02');
insert into t2 (id, cctr, date1) values (5, '2B', date '2017-01-01');
insert into t2 (id, cctr, date1) values (6, 'UC', date '2016-11-23');
Then the query would be:
select *
from t2
where date1 = (
  select max(date1)
  from (
    select id, date1, cctr,
           lag(cctr) over (order by date1 desc) as prev
    from t2
  ) x
  where prev is not null and cctr <> prev
);
Result:
ID CCTR DATE1
------- ---------- -------------------
4 2B 2017-03-02 00:00:00
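On Oracle 12c or later you could also skip the MAX() subquery and take the newest changed row directly with FETCH FIRST; a sketch, assuming the same t2 sample:
-- Compare each row's cctr with the next-newer row's cctr (LAG over date1 desc),
-- keep only the rows where it differs, and return the newest of those.
select id, cctr, date1
from (
  select id, cctr, date1,
         lag(cctr) over (order by date1 desc) as prev
  from t2
)
where prev is not null and cctr <> prev
order by date1 desc
fetch first 1 row only;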
You may use the FIRST_VALUE analytic function to detect the changes in the CCTR column:
select fv as value, cctr
from
(
  with t(ID, CCTR) as
  (
    select 1, '2C' from dual union all
    select 2, '2C' from dual union all
    select 3, '2C' from dual union all
    select 4, '2B' from dual union all
    select 5, '2B' from dual union all
    select 6, 'UC' from dual
  )
  select id, cctr, first_value(id) over (partition by cctr order by id) fv
  from t
  order by id
)
where id = fv;
VALUE CCTR
----- ----
1 2C
4 2B
6 UC
Rextester Demo

How to get rows from two tables on maximum value of particular field

I have two tables, each with a date_updated column.
TableA is like below
con_id date_updated type
--------------------------------------------
123 19/06/2018 2
123 15/06/2018 1
123 01/05/2018 3
101 06/04/2018 1
101 05/03/2018 2
And I have TableB that also has the same structure
con_id date_updated type
--------------------------------------------
123 15/05/2018 2
123 01/05/2018 1
101 07/06/2018 1
The resulting table should contain, for each con_id, the row with the most recent date:
con_id date_updated type
--------------------------------------------
123 19/06/2018 2
101 07/06/2018 1
Here the date_updated column is of SQL Server's datetime datatype. I tried using GROUP BY and selecting the maximum date_updated, but then I was not able to include the type column in the SELECT statement. When I added type to the GROUP BY, the result was not correct because the rows were also grouped by type. How can I write this query? Please help.
SELECT *
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY con_id ORDER BY date_updated DESC) AS seq
  FROM (
    SELECT * FROM TableA
    UNION ALL
    SELECT * FROM TableB
  ) AS tblMain
) AS tbl2
WHERE seq = 1
One method:
WITH A AS(
SELECT TOP 1 con_id,
date_updated,
type
FROM TableA
ORDER BY date_updated DESC),
B AS(
SELECT TOP 1 con_id,
date_updated,
type
FROM TableB
ORDER BY date_updated DESC),
U AS(
SELECT *
FROM A
UNION ALL
SELECT *
FROM B)
SELECT *
FROM U;
The two CTEs at the top get the most recent row from each table, and the final statement unions them together.
For the benefit of the person who says this doesn't work:
USE Sandbox;
GO
CREATE TABLE tablea (con_id int, date_updated date, [type] tinyint);
CREATE TABLE tableb (con_id int, date_updated date, [type] tinyint);
GO
INSERT INTO tablea
VALUES
(123,'2018-06-19',2),
(123,'2018-06-15',1),
(123,'2018-05-01',3),
(101,'2018-04-06',1),
(101,'2018-03-05',2);
INSERT INTO tableb
VALUES
(123,'2018-05-15',2),
(123,'2018-05-01',1),
(101,'2018-06-07',1);
GO
WITH A AS(
SELECT TOP 1 con_id,
date_updated,
[type]
FROM TableA
ORDER BY date_updated DESC),
B AS(
SELECT TOP 1 con_id,
date_updated,
[type]
FROM TableB
ORDER BY date_updated DESC),
U AS(
SELECT *
FROM A
UNION ALL
SELECT *
FROM B)
SELECT *
FROM U;
GO
DROP TABLE tablea;
DROP TABLE tableb;
This returns the dataset:
con_id date_updated type
----------- ------------ ----
123 2018-06-19 2
101 2018-06-07 1
Which is identical to the OP's data:
con_id date_updated type
--------------------------------------------
123 19/06/2018 2
101 07/06/2018 1
Hope this helps:
WITH combined
AS(
select * FROM tableA
UNION
select * FROM tableB)
SELECT t1.con_id,
t1.date_updated,
t1.type
FROM (
SELECT con_id,
date_updated,
type,
row_number() OVER(partition BY con_id ORDER BY date_updated DESC) AS rownumber
FROM combined) t1
WHERE rownumber = 1;
Can be done using window functions:
declare #TableA table (con_id int, date_updated date, [type] int)
declare #TableB table (con_id int, date_updated date, [type] int)
insert into #TableA values
(123, '2018-06-19', 2)
, (123, '2018-06-15', 1)
, (123, '2018-05-01', 3)
, (101, '2018-04-06', 1)
, (101, '2018-03-05', 2)
insert into #TableB values
(123, '2018-05-15', 2)
, (123, '2018-05-01', 1)
, (101, '2018-06-07', 1)
select distinct con_id
, first_value(date_updated) over (partition by con_id order by con_id, date_updated desc) as date_updated
, first_value([type]) over (partition by con_id order by con_id, date_updated desc) as [type]
from
(Select * from #TableA UNION Select * from #TableB) x

I need a way to group data based on previous rows

Let me try to explain this again.
This table has a record for each person for each day of the month. There are approximately 20 fields in the table. If any of the fields change (other than the date fields), then I want to group those records. So, for example, if days 1, 2, and 3 are the same, then when I read in day 4 and notice that it has changed, I want to group days 1, 2, and 3 together with a BegDate of day 1 and an EndDate of day 3, etc.
Rownum ID BegDate EndDate Field1, Field2.... Field20
1 1 6/1/2017 6/1/2017 xxxx xxxx xxxxx
2 1 6/2/2017 6/2/2017 xxxx xxxx xxxxx
3 1 6/3/2017 6/3/2017 xxxx xxxx xxxxx
4 1 6/4/2017 6/4/2017 yyyy yyyy yyyy
5 1 6/5/2017 6/5/2017 yyyy yyyy yyyy
6 1 6/6/2017 6/6/2017 xxxx xxxx xxxxx
7 1 6/7/2017 6/7/2017 xxxx xxxx xxxxx
8 1 6/8/2017 6/8/2017 zzzz zzzz zzzz
....
So in the example data above, I would have a group with rows 1,2,3 then a group with rows 4,5 then a group with rows 6,7 then a group with 8...etc
ID BegDate EndDate Field1 Field2 ...... Field20 Sum
1 6/1/2017 6/3/2017 xxxx xxxx xxxxx 3
1 6/4/2017 6/5/2017 yyyy yyyy yyyy 2
1 6/6/2017 6/7/2017 xxxx xxxx xxxxx 2
1 6/8/2017 6/15/2017 zzzz zzzz zzzz 8
.....
As an example, create a table:
create table t
(date_ datetime,
status varchar(1));
And add data
insert into t values ('2017-11-01','A');
insert into t values ('2017-11-02','A');
insert into t values ('2017-11-03','A');
insert into t values ('2017-11-04','B');
insert into t values ('2017-11-05','B');
insert into t values ('2017-11-06','B');
insert into t values ('2017-11-07','C');
insert into t values ('2017-11-08','C');
insert into t values ('2017-11-09','C');
insert into t values ('2017-11-10','C');
insert into t values ('2017-11-11','B');
insert into t values ('2017-11-12','B');
insert into t values ('2017-11-13','B');
insert into t values ('2017-11-14','B');
insert into t values ('2017-11-15','B');
And use this query
select min(date_start),
       ifnull(date_end, now()),
       status
from
( select t1.date_ as date_start,
         (select min(date_) from t t2
          where t2.date_ > t1.date_ and t2.status <> t1.status) - interval 1 day as date_end,
         t1.status as status
  from t t1
) a
group by date_end, status
order by 1
http://sqlfiddle.com/#!9/96e27/11
You can do this with a difference of row numbers:
select ID, min(BegDate) as BegDate, max(EndDate) as EndDate,
       Field1, Field2, ...... Field20,
       datediff(day, min(BegDate), max(EndDate))
from (select t.*,
             row_number() over (partition by id order by begdate) as seqnum,
             row_number() over (partition by id, Field1, Field2, . . ., Field20 order by begdate) as seqnum_2
      from t
     ) t
group by id, (seqnum - seqnum_2), Field1, Field2, . . . Field20;
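To see why grouping by (seqnum - seqnum_2) works, you can inspect the intermediate row numbers; a sketch using just Field1 and Field2 for brevity:
-- Show both row numbers and their difference, which stays constant within each
-- run of identical field values and changes as soon as the fields change.
select t.*,
       row_number() over (partition by id order by begdate) as seqnum,
       row_number() over (partition by id, Field1, Field2 order by begdate) as seqnum_2,
       row_number() over (partition by id order by begdate)
         - row_number() over (partition by id, Field1, Field2 order by begdate) as grp
from t
order by id, begdate;
-- For ID 1 in the sample data this gives grp = 0 for 6/1-6/3 (xxxx), 3 for 6/4-6/5 (yyyy),
-- 2 for 6/6-6/7 (xxxx) and 7 for 6/8 (zzzz), which is exactly what the outer GROUP BY keys on.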
Try the query below (with two extra fields, field1 and field2).
To handle your 20 fields, add the remaining columns wherever you see field1, field2 (i.e. field1, field2, field3, ... field20).
create table #tmp (RowNum int, id int,begdate datetime,EndDate datetime, field1 varchar(10),field2 varchar(10))
insert into #tmp values(1,1,'2017-06-01','2017-06-01','xxxxx','xxxxx')
insert into #tmp values(2,1,'2017-06-02','2017-06-02','xxxxx','xxxxx')
insert into #tmp values(3,1,'2017-06-03','2017-06-03','xxxxx','xxxxx')
insert into #tmp values(4,1,'2017-06-04','2017-06-04','yyyyy','yyyyy')
insert into #tmp values(5,1,'2017-06-05','2017-06-05','yyyyy','yyyyy')
insert into #tmp values(6,1,'2017-06-06','2017-06-06','xxxxx','xxxxx')
insert into #tmp values(7,1,'2017-06-07','2017-06-07','xxxxx','xxxxx')
insert into #tmp values(8,1,'2017-06-08','2017-06-08','zzzzz','zzzzz')
insert into #tmp values(9,1,'2017-06-09','2017-06-09','zzzzz','zzzzz')
insert into #tmp values(10,1,'2017-06-10','2017-06-10','zzzzz','zzzzz')
insert into #tmp values(11,2,'2017-06-04','2017-06-04','yyyyy','yyyyy')
insert into #tmp values(12,2,'2017-06-05','2017-06-05','yyyyy','yyyyy')
insert into #tmp values(13,2,'2017-06-06','2017-06-06','xxxxx','xxxxx')
insert into #tmp values(14,2,'2017-06-07','2017-06-07','xxxxx','xxxxx')
insert into #tmp values(15,1,'2017-06-11','2017-06-11','xxxxx','xxxxx')
insert into #tmp values(16,1,'2017-06-12','2017-06-12','xxxxx','xxxxx')
insert into #tmp values(17,1,'2017-06-13','2017-06-13','zzzzz','xxxxx')
insert into #tmp values(18,1,'2017-06-14','2017-06-14','zzzzz','xxxxx')
insert into #tmp values(19,1,'2017-06-15','2017-06-15','yyyyy','xxxxx')
insert into #tmp values(20,1,'2017-06-16','2017-06-16','zzzzz','xxxxx')
select ID, min(BegDate) as BegDate, max(EndDate) as EndDate,
       Field1, Field2, /* Add all other fields here */
       datediff(day, min(BegDate), max(EndDate)) + 1 as [Sum]
from (
  select *,
         row_number() over (partition by id order by begdate) as seqnum,
         row_number() over (partition by id, Field1, Field2 /* Add all other fields here */ order by begdate) as seqnum_2
  from #tmp
) t
group by id, (seqnum - seqnum_2), Field1, Field2 /* Add all other fields here */
order by ID, BegDate
Drop table #tmp

Select ID, Count(ID) and Group by Date

I have an article table with id and date (month/year) columns. First of all I would like to count the ids and group them by date; then I would like to see which id belongs to which date group, all in a single query, like this:
id date count
-----------------
1 01/2015 2
2 01/2015 2
3 02/2015 1
4 03/2015 4
5 03/2015 4
6 03/2015 4
7 03/2015 4
I have two queries:
Select Count(id)
from article
group by date
and
Select id
from article
which give these results:
count  date        id  date
-----  -------     --  -------
2      01/2015     1   01/2015
1      02/2015     2   01/2015
4      03/2015     3   02/2015
I need a single query like
select count(id), id, date
from....
which returns the id, count, and date columns for use in my C# code.
Can someone help me with this?
SELECT id,
date,
COUNT(*) OVER (PARTITION BY date) AS Count
FROM article
Sql fiddle
Can't quite do that in one query, but you could use a CTE to produce a single result set:
create table #tt (id int null, dt varchar(8))
insert #tt values
(1,'01/2015'),
(2,'01/2015'),
(3,'02/2015'),
(4,'03/2015'),
(5,'03/2015'),
(6,'03/2015'),
(7,'03/2015')
;with cteCount(d, c) AS
(
select dt, count(id) from #tt group by dt
)
select id, dt, c
from #tt a
inner join cteCount cc
on a.dt = cc.d
drop table #tt
results:
id dt c
1 01/2015 2
2 01/2015 2
3 02/2015 1
4 03/2015 4
5 03/2015 4
6 03/2015 4
7 03/2015 4
if not exists(select * from TEST.sys.objects where type=N'U' and name=N'article')
begin
create table article(
[id] int,
[date] date)
end
with this data:
insert into article(id,date) values(1,convert(date,'15/01/2015',103));
insert into article(id,date) values(1,convert(date,'15/02/2015',103));
insert into article(id,date) values(2,convert(date,'15/03/2015',103));
insert into article(id,date) values(2,convert(date,'15/01/2015',103));
insert into article(id,date) values(3,convert(date,'15/02/2015',103));
insert into article(id,date) values(4,convert(date,'15/03/2015',103));
insert into article(id,date) values(5,convert(date,'15/01/2015',103));
insert into article(id,date) values(5,convert(date,'15/02/2015',103));
insert into article(id,date) values(1,convert(date,'15/03/2015',103));
insert into article(id,date) values(2,convert(date,'15/01/2015',103));
insert into article(id,date) values(3,convert(date,'15/02/2015',103));
insert into article(id,date) values(4,convert(date,'15/03/2015',103));
insert into article(id,date) values(5,convert(date,'15/01/2015',103));
insert into article(id,date) values(1,convert(date,'15/02/2015',103));
insert into article(id,date) values(2,convert(date,'15/03/2015',103));
insert into article(id,date) values(3,convert(date,'15/01/2015',103));
insert into article(id,date) values(4,convert(date,'15/03/2015',103));
select id,[date], count(id) [count] from article
group by [date],[id]
the result:
id date count
1 2015-01-15 1
1 2015-02-15 2
1 2015-03-15 1
2 2015-01-15 2
2 2015-03-15 2
3 2015-01-15 1
3 2015-02-15 2
4 2015-03-15 3
5 2015-01-15 2
5 2015-02-15 1
It's not clear how you want to generate the id field in the result. If you want to generate it manually, use RANK(); if you want to take it from the table's id value, you can use MAX() or MIN() (depending on your expected result).
Use RANK() (Fiddle demo here):
Try:
create table tt (id int null, dt varchar(8),count int)
insert tt values
(1,'01/2015',2),
(2,'01/2015',2),
(3,'02/2015',1),
(4,'03/2015',4),
(5,'03/2015',4),
(6,'03/2015',4),
(7,'03/2015',4)
Query:
select count(id) as count, dt,
       rank() over (order by count(id)) as id
from tt
group by dt
EDIT 2:
Or you can just use MAX() or MIN(), like:
select count(id) as count,dt,Min(id) as id from tt group by dt
or
select count(id) as count,dt,MAX(id) as id from tt group by dt

How to get all distinct rows with the date closest to today

I have one table in SQL in this form:
id Name Date
----------------------------
1 john 04/05/2014
2 andi 12/05/2014
3 mark 05/08/2014
4 sofie 05/11/2014
5 john 12/12/2014
5 mark 15/12/2014
and I want to select the data in this form ("distinct"):
id Name Date
---------------------------
1 john 12/12/2014
2 mark 15/12/2014
3 andi 12/05/2014
try something like:
SELECT t1.*
FROM <table_name> t1
WHERE t1.date = (SELECT MAX(t2.date)
                 FROM <table_name> t2
                 WHERE t2.name = t1.name)
Try this:
-- sample data
create table #tbl (id int, name nvarchar(20), [date] date);
insert into #tbl (id, name, [date]) values
(1, 'john', '2014-05-04'),
(2, 'andi', '2014-05-12'),
(3, 'mark', '2014-08-05'),
(4, 'sofie', '2014-11-05'),
(5, 'john', '2014-12-12'),
(5, 'mark', '2014-12-15');
-- solution
with ranked as
(
select id, name, [date]
, row_number() over(partition by name order by datediff(day, [date], getdate())) [rank]
from #tbl
)
select id, name, [date] from ranked where [rank] = 1;
-- cleanup
drop table #tbl;
Result
ID NAME DATE
----------------------
2 andi 2014-05-12
5 john 2014-12-12
5 mark 2014-12-15
4 sofie 2014-11-05
This solution partitions the original dataset by name and, within each name, ranks the rows by the number of days between today and [date]. The result therefore contains one row per unique name: the row whose [date] is closest to today.
Check SQLFiddle
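If all of the dates are in the past, as in the sample, ordering by [date] descending gives the same ranking without computing a DATEDIFF; a sketch against the same #tbl sample:
-- Same idea as above: the newest date per name gets rank 1.
with ranked as
(
  select id, name, [date],
         row_number() over (partition by name order by [date] desc) as [rank]
  from #tbl
)
select id, name, [date] from ranked where [rank] = 1;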
Looks like you may want
SELECT name, MAX(date)
FROM table
GROUP BY name
However, from your example it's impossible to infer the criterion by which you'd exclude sofie (and what a duplicate id means in your example table -- id is normally used to denote a unique identifier).
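If you do need an id column in that aggregated result, one possible sketch (assuming SQL Server, that a generated sequence number is acceptable, and that your_table is a placeholder for your actual table name):
-- Number the aggregated rows; the name with the most recent date gets id 1.
select row_number() over (order by max([date]) desc) as id,
       name,
       max([date]) as [date]
from your_table
group by name;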