Calling many tables in one query based on week - SQL

If there are multiple tables with the same contents, like this:
Table 1:
CustomerID  CustTrans   Weeks
C001        2022-09-03  36
C002        2022-09-02  36
C003        2022-09-03  36
C004        2022-09-02  36
C002        2022-09-08  37
C001        2022-09-05  37
C002        2022-09-11  38
C002        2022-09-23  39
C004        2022-09-19  39
C001        2022-09-18  39
C003        2022-09-26  40
C005        2022-09-17  38
C006        2022-09-25  40
C001        2022-09-25  40
Table 2:
CustomerID  CustTrans   Weeks
C001        2022-09-03  36
C002        2022-09-02  36
C003        2022-09-03  36
C004        2022-09-02  36
C002        2022-09-08  37
C001        2022-09-05  37
C002        2022-09-11  38
C002        2022-09-23  39
C004        2022-09-19  39
C001        2022-09-18  39
C003        2022-09-26  40
C005        2022-09-17  38
C006        2022-09-25  40
C001        2022-09-25  40
Table 3:
CustomerID  CustTrans   Weeks
C001        2022-09-03  36
C002        2022-09-02  36
C003        2022-09-03  36
C004        2022-09-02  36
C002        2022-09-08  37
C001        2022-09-05  37
C002        2022-09-11  38
C002        2022-09-23  39
C004        2022-09-19  39
C001        2022-09-18  39
C003        2022-09-26  40
C005        2022-09-17  38
C006        2022-09-25  40
C001        2022-09-25  40
Is it possible to combine these many tables into just a single table?
This is my query:
CREATE DATABASE manydata;

CREATE TABLE trydata1
(
    CustomerID CHAR(7) not null,
    CustTrans date,
    CustSales int
);

insert into trydata1(CustomerID,CustSales,CustTrans)
values('C001',34,'2022-09-03'),('C002',23,'2022-09-02'),('C003',132,'2022-09-03'),
('C004',95,'2022-09-02'),('C002',68,'2022-09-08'),('C001',54,'2022-09-05'),
('C002',34,'2022-09-11'),('C002',98,'2022-09-23'),('C004',34,'2022-09-19'),
('C001',30,'2022-09-18'),('C003',34,'2022-09-26'),('C005',34,'2022-09-17'),
('C006',34,'2022-09-25'),('C001',34,'2022-09-25');

CREATE TABLE trydata2
(
    CustomerID CHAR(7) not null,
    CustTrans date,
    CustSales int
);

insert into trydata2(CustomerID,CustSales,CustTrans)
values('C001',34,'2022-09-03'),('C002',23,'2022-09-02'),('C003',132,'2022-09-03'),
('C004',95,'2022-09-02'),('C002',68,'2022-09-08'),('C001',54,'2022-09-05'),
('C002',34,'2022-09-11'),('C002',98,'2022-09-23'),('C004',34,'2022-09-19'),
('C001',30,'2022-09-18'),('C003',34,'2022-09-26'),('C005',34,'2022-09-17'),
('C006',34,'2022-09-25'),('C001',34,'2022-09-25');

CREATE TABLE trydata3
(
    CustomerID CHAR(7) not null,
    CustTrans date,
    CustSales int
);

insert into trydata3(CustomerID,CustSales,CustTrans)
values('C001',34,'2022-09-03'),('C002',23,'2022-09-02'),('C003',132,'2022-09-03'),
('C004',95,'2022-09-02'),('C002',68,'2022-09-08'),('C001',54,'2022-09-05'),
('C002',34,'2022-09-11'),('C002',98,'2022-09-23'),('C004',34,'2022-09-19'),
('C001',30,'2022-09-18'),('C003',34,'2022-09-26'),('C005',34,'2022-09-17'),
('C006',34,'2022-09-25'),('C001',34,'2022-09-25');

You seem to be looking for UNION or UNION ALL.
select *
from
(
    select * from table1
    union all
    select * from table2
    union all
    select * from table3
) t
where ...;
According to your sample data, this gets you duplicates, as some entries exist in more than one table. If you want to remove the duplicates, use UNION instead of UNION ALL.
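Since the goal is to query "based on week", here is a sketch of what the where clause could look like on the combined result, reusing the trydata tables from the question. This assumes SQL Server, where DATEPART(ISO_WEEK, ...) returns the ISO week number; other engines have equivalents (e.g. WEEK(CustTrans, 3) in MySQL, EXTRACT(week FROM CustTrans) in PostgreSQL).
-- Sketch only: combine the three tables, derive the ISO week from CustTrans,
-- and keep a single week's rows (week 36 here). Assumes SQL Server syntax.
select CustomerID, CustTrans, datepart(iso_week, CustTrans) as Weeks, CustSales
from
(
    select * from trydata1
    union all
    select * from trydata2
    union all
    select * from trydata3
) t
where datepart(iso_week, CustTrans) = 36;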

Related

Create a new column containing the ISO week in PostgreSQL

I have a table like this:
CustomerID  Trans_date
C001        01-sep-22
C001        04-sep-22
C001        14-sep-22
C002        03-sep-22
C002        01-sep-22
C002        18-sep-22
C002        20-sep-22
C003        02-sep-22
C003        28-sep-22
C004        08-sep-22
C004        18-sep-22
I would like to make a new column containing the ISO week:
CustomerID  Trans_date  Week_ISO
C001        01-sep-22   35
C001        04-sep-22   35
C001        14-sep-22   35
C002        03-sep-22   35
C002        01-sep-22   35
C002        18-sep-22   35
C002        20-sep-22   35
C003        02-sep-22   35
C003        28-sep-22   35
C004        08-sep-22   36
C004        18-sep-22   36
But I can't do this, because PostgreSQL doesn't have DATEPART.
You can define a view instead of altering the original table. Use extract or date_part.
create or replace view the_view as
select customerid, trans_date,
extract('week' from trans_date) week_iso
from the_table;
DB-fiddle
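For example, to count transactions per customer and ISO week through the view (a small usage sketch, reusing the the_view and column names from the answer above):
-- Query the derived week_iso column just like a regular column.
select customerid, week_iso, count(*) as transactions
from the_view
group by customerid, week_iso
order by customerid, week_iso;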

Weekly cohorts of subscribers retention

My analysis subjects resemble Netflix subscribers. Users subscribe on a certain date (e.g. 2021-04-25) and unsubscribe on another date (e.g. 2022-01-15), or null if the user is still subscribed:
user_id subscription_start subscription_end
1231 2021-03-24 2021-04-07
1232 2021-05-06 2021-05-26
1234 2021-05-28 null
1235 2021-05-30 2021-06-19
1236 2021-06-01 2021-07-07
1237 2021-06-24 2021-07-09
1238 2021-07-06 null
1239 2021-08-14 null
1240 2021-09-12 null
How could I, using SQL, extract weekly cohort data on user retention? E.g. 2021-03-22 (Monday) - 2021-03-28 (Sunday) is the first cohort, which had a single subscriber on 2021-03-24. This user stayed with the service until 2021-04-07, that is for 3 weekly cohorts, and should be displayed as active in weeks 1, 2 and 3.
The end result should look like (dummy data):
Subscribed  Week 1  Week 2  Week 3  Week 4  Week 5  Week 6
2021-03-22 100 98 97 82 72 53 21
2021-03-29 100 97 88 88 76 44 22
2021-04-05 100 87 86 86 86 83 81
2021-04-12 100 100 100 99 98 97 96
2021-04-19 100 100 99 89 79 79 79
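A minimal sketch of the cohort counts in long form, assuming PostgreSQL and that the data sits in a table named subscriptions with the columns shown above; pivoting the week numbers into the Week 1..Week n columns can then be done with conditional aggregation or crosstab:
-- Sketch: bucket each user by the Monday of the week they subscribed,
-- then count how many of them are still subscribed in weeks 1 through 6.
with cohorts as (
    select user_id,
           date_trunc('week', subscription_start)::date as cohort_week,
           subscription_end
    from subscriptions
)
select cohort_week as subscribed,
       w as week_number,
       count(*) filter (
           -- active in week w if not unsubscribed before that week starts
           where subscription_end is null
              or subscription_end >= cohort_week + (w - 1) * interval '1 week'
       ) as active_users
from cohorts
cross join generate_series(1, 6) as w
group by cohort_week, w
order by cohort_week, w;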

pandas: filter rows having max value per category

Starting out with data like this:
import numpy as np
import pandas as pd

np.random.seed(314)
df = pd.DataFrame({
    'date': [pd.date_range('2016-04-01', '2016-04-05')[r] for r in np.random.randint(0, 5, 20)],
    'cat': ['ABCD'[r] for r in np.random.randint(0, 4, 20)],
    'count': np.random.randint(0, 100, 20)
})
cat count date
0 B 87 2016-04-04
1 A 95 2016-04-05
2 D 89 2016-04-02
3 D 39 2016-04-05
4 A 39 2016-04-01
5 C 61 2016-04-05
6 C 58 2016-04-04
7 B 49 2016-04-03
8 D 20 2016-04-02
9 B 54 2016-04-01
10 B 87 2016-04-01
11 D 36 2016-04-05
12 C 13 2016-04-05
13 A 79 2016-04-04
14 B 91 2016-04-03
15 C 83 2016-04-05
16 C 85 2016-04-05
17 D 93 2016-04-01
18 C 85 2016-04-02
19 B 91 2016-04-03
I'd like to end up with only the rows where count is the maximum value in the corresponding cat:
cat count date
1 A 95 2016-04-05
14 B 91 2016-04-03
16 C 85 2016-04-05
17 D 93 2016-04-01
18 C 85 2016-04-02
19 B 91 2016-04-03
Note that there can be multiple records with the max count per category.
Using transform:
df[df['count']==df.groupby('cat')['count'].transform('max')]
Out[163]:
cat count date
1 A 95 2016-04-05
14 B 91 2016-04-03
16 C 85 2016-04-05
17 D 93 2016-04-01
18 C 85 2016-04-02
19 B 91 2016-04-03

MSSQL MAX returns all results?

I have tried the following query to return the highest P.Maxvalue for each ME.Name from the last day between 06:00 and 18:00:
SELECT MAX(P.MaxValue) AS Value, P.DateTime, ME.Name AS ID
FROM vManagedEntity AS ME
INNER JOIN Perf.vPerfHourly AS P
    ON ME.ManagedEntityRowId = P.ManagedEntityRowId
INNER JOIN vPerformanceRuleInstance AS PRI
    ON P.PerformanceRuleInstanceRowId = PRI.PerformanceRuleInstanceRowId
INNER JOIN vPerformanceRule AS PR
    ON PRI.RuleRowId = PR.RuleRowId
WHERE ME.ManagedEntityTypeRowId = 2546
  AND PR.ObjectName = 'VMGuest-cpu'
  AND PR.CounterName LIKE 'cpuUsageMHz'
  AND CAST(P.DateTime AS time) >= '06:00:00'
  AND CAST(P.DateTime AS time) <= '18:00:00'
  AND P.DateTime > DATEADD(day, -1, GETUTCDATE())
GROUP BY ME.Name, P.DateTime
ORDER BY ID
but it seems to return every MaxValue for each ID instead of just the highest, like:
Value DateTime ID
55 2018-02-19 12:00:00.000 bob:vm-100736
51 2018-02-19 13:00:00.000 bob:vm-100736
53 2018-02-19 14:00:00.000 bob:vm-100736
52 2018-02-19 15:00:00.000 bob:vm-100736
52 2018-02-19 16:00:00.000 bob:vm-100736
51 2018-02-19 17:00:00.000 bob:vm-100736
54 2018-02-19 18:00:00.000 bob:vm-100736
51 2018-02-20 06:00:00.000 bob:vm-100736
51 2018-02-20 07:00:00.000 bob:vm-100736
53 2018-02-20 08:00:00.000 bob:vm-100736
52 2018-02-20 09:00:00.000 bob:vm-100736
78 2018-02-19 12:00:00.000 bob:vm-101
82 2018-02-19 13:00:00.000 bob:vm-101
79 2018-02-19 14:00:00.000 bob:vm-101
78 2018-02-19 15:00:00.000 bob:vm-101
79 2018-02-19 16:00:00.000 bob:vm-101
77 2018-02-19 17:00:00.000 bob:vm-101
82 2018-02-19 18:00:00.000 bob:vm-101
82 2018-02-20 06:00:00.000 bob:vm-101
79 2018-02-20 07:00:00.000 bob:vm-101
81 2018-02-20 08:00:00.000 bob:vm-101
82 2018-02-20 09:00:00.000 bob:vm-101
155 2018-02-19 12:00:00.000 bob:vm-104432
There is one value per hour for each ID, hence twelve results for each ID.
Does MAX not work the way I want it to?
Thanks
The expected output looks like this:
Value DateTime ID
55 2018-02-19 12:00:00.000 bob:vm-100736
82 2018-02-19 13:00:00.000 bob:vm-101
etc
If you're using GROUP BY on datetime and id, you'll get all datetimes and all ids; it's that simple.
If you don't need exact time, you can group by date only:
SELECT MAX(P.MaxValue) AS Value, cast(P.DateTime as date) as dat, ME.Name AS ID
...
group by ME.Name, cast(P.DateTime as date)
Or if you do, you can use a NOT EXISTS clause instead of GROUP BY.
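Here is a sketch of what that NOT EXISTS variant could look like (untested, reusing the views, joins and filters from the question); note that ties on MaxValue would still return multiple rows per ID:
-- Keep a row only if no other qualifying row for the same entity has a higher value.
SELECT P.MaxValue AS Value, P.DateTime, ME.Name AS ID
FROM vManagedEntity AS ME
INNER JOIN Perf.vPerfHourly AS P
    ON ME.ManagedEntityRowId = P.ManagedEntityRowId
INNER JOIN vPerformanceRuleInstance AS PRI
    ON P.PerformanceRuleInstanceRowId = PRI.PerformanceRuleInstanceRowId
INNER JOIN vPerformanceRule AS PR
    ON PRI.RuleRowId = PR.RuleRowId
WHERE ME.ManagedEntityTypeRowId = 2546
  AND PR.ObjectName = 'VMGuest-cpu'
  AND PR.CounterName LIKE 'cpuUsageMHz'
  AND CAST(P.DateTime AS time) >= '06:00:00'
  AND CAST(P.DateTime AS time) <= '18:00:00'
  AND P.DateTime > DATEADD(day, -1, GETUTCDATE())
  AND NOT EXISTS (
        SELECT 1
        FROM Perf.vPerfHourly AS P2
        INNER JOIN vManagedEntity AS ME2
            ON ME2.ManagedEntityRowId = P2.ManagedEntityRowId
        INNER JOIN vPerformanceRuleInstance AS PRI2
            ON P2.PerformanceRuleInstanceRowId = PRI2.PerformanceRuleInstanceRowId
        INNER JOIN vPerformanceRule AS PR2
            ON PRI2.RuleRowId = PR2.RuleRowId
        WHERE ME2.Name = ME.Name
          AND ME2.ManagedEntityTypeRowId = 2546
          AND PR2.ObjectName = 'VMGuest-cpu'
          AND PR2.CounterName LIKE 'cpuUsageMHz'
          AND CAST(P2.DateTime AS time) >= '06:00:00'
          AND CAST(P2.DateTime AS time) <= '18:00:00'
          AND P2.DateTime > DATEADD(day, -1, GETUTCDATE())
          AND P2.MaxValue > P.MaxValue
  )
ORDER BY ID;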

Trigger created with compilation error

I am trying to create a trigger that increases the discnt of a customer by .04 every time that customer places an order. Next I need to insert a new order into the orders table.
The following is the Customers table:
CID CNAME CITY DISCNT
c001 Tiptop Duluth 10
c002 Basics California 12
c003 7/11 California 8
c004 ACME Duluth 8
c006 ACME Kyoto 0
c007 Goldberg NYC 15
The following is the orders table:
ORDNO MON CID AID PID QTY DOLLARS
1011 jan c001 a01 p01 1000 450
1012 jan c001 a01 p01 1000 450
1019 feb c001 a02 p02 400 180
1017 feb c001 a06 p03 95959 540
1018 feb c001 a03 p04 600 540
1023 mar c001 a04 p05 500 450
1022 mar c001 a05 p06 400 720
1025 apr c001 a05 p07 800 720
1013 jan c002 a03 p03 1000 880
1026 may c002 a05 p03 800 704
1015 jan c003 a03 p05 1200 1104
1014 jan c003 a03 p05 1200 1104
1021 feb c004 a06 p01 1000 460
1016 jan c006 a01 p01 1000 500
1020 feb c006 a03 p07 600 600
1024 mar c006 a06 p01 800 400
The trigger I have created is:
create or replace trigger UpdateDiscnt
after insert or update on orders
for each row
begin
    update customers set discnt = 0.4 + :old.discnt
    where customers.cid = :new.cid;
end;
/
The error is an Oracle compilation error: there is no discnt column in the orders table, so any reference to :old.discnt is invalid.
Try:
create or replace trigger UpdateDiscnt
after insert or update on orders
for each row
begin
    update customers set discnt = 0.4 + discnt
    where customers.cid = :new.cid;
end;
/
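A quick way to exercise the trigger (a usage sketch, assuming the column names from the orders table above and an unused order number):
-- Insert a new order for c002; the trigger should then raise that customer's discnt.
insert into orders (ordno, mon, cid, aid, pid, qty, dollars)
values (1027, 'jun', 'c002', 'a05', 'p03', 500, 440);

select cid, discnt from customers where cid = 'c002';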