Please tell me how to fix this error: Expression not in GROUP BY key 'isin'.
I understand that I am doing the grouping incorrectly, but I don't know how to rework the query for this requirement. I need to find the maximum end_circ and the minimum begin_circ for each stocks_full_id key, and to display all of the columns from the SELECT together with that max and min.
SELECT a.isin as id,
a.state_number as number,
a.update_time as valid_from_date,
'2999-12-31 00:00:00' as valid_to_date,
a.operdate as oper,
a.inn as inn_num,
a.name_eng as name,
coalesce(ts.full_name_eng,a.name_eng) as full_nm,
max (stg.end_circ) as end_date,
min (stg.begin_circ) as start_date,
case when sk.name_eng IS NULL then sk.name_uk else sk.name_eng end as subtype_nm
FROM (SELECT s.*, rank() over (PARTITION BY isin,state_number ORDER BY operdate desc) as rn
FROM stocks s
WHERE isin IS NOT NULL and state_number IS NOT NULL) a
JOIN trading_stocks ts ON ts.emission_is=a.id
JOIN stocks_trading_grounds stg ON stg.stocks_full_id=a.id
JOIN stocks_kinds sk ON sk.id=a.kind_id
WHERE stg.end_circ >= "2021-01-01 00:00:00" and a.rn=1
GROUP BY stg.stocks_full_id
Do the GROUP BY in a derived table for that individual table inside the JOIN:
SELECT a.isin as id,
a.state_number as number,
a.update_time as valid_from_date,
'2999-12-31 00:00:00' as valid_to_date,
a.operdate as oper,
a.inn as inn_num,
a.name_eng as name,
coalesce(ts.full_name_eng,a.name_eng) as full_nm,
stg.end_date,
stg.start_date,
case when sk.name_eng IS NULL then sk.name_uk else sk.name_eng end as subtype_nm
FROM (SELECT s.*,
rank() over (PARTITION BY isin,state_number ORDER BY operdate desc) as rn
FROM stocks s
WHERE isin IS NOT NULL
and state_number IS NOT NULL
) a
JOIN trading_stocks ts ON ts.emission_is=a.id
JOIN stocks_kinds sk ON sk.id=a.kind_id
JOIN (
SELECT stocks_full_id,
max(end_circ) as end_date,
min(begin_circ) as start_date
FROM stocks_trading_grounds
GROUP BY stocks_full_id
) stg
ON stg.stocks_full_id=a.id
WHERE stg.end_date >= DATE '2021-01-01'
AND a.rn=1
Also, double quotes are not used for a string literal; they are used for a quoted identifier. Either use single quotes '2021-01-01 00:00:00' for a string literal or DATE '2021-01-01' for a date literal.
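For example, a quick sketch of the quoting rules, using the same table as the query above:
-- Single quotes make a string literal, which is compared to the timestamp column:
SELECT * FROM stocks_trading_grounds WHERE end_circ >= '2021-01-01 00:00:00';
-- A typed date literal avoids relying on an implicit cast:
SELECT * FROM stocks_trading_grounds WHERE end_circ >= DATE '2021-01-01';
-- Double quotes name an identifier (a column, table, or alias), not a value:
SELECT "end_circ" FROM stocks_trading_grounds;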
You have to include all of the non-aggregated columns in your GROUP BY clause. So your updated query would be:
SELECT a.isin as id,
a.state_number as number,
a.update_time as valid_from_date,
'2999-12-31 00:00:00' as valid_to_date,
a.operdate as oper,
a.inn as inn_num,
a.name_eng as name,
coalesce(ts.full_name_eng,a.name_eng) as full_nm,
max (stg.end_circ) as end_date,
min (stg.begin_circ) as start_date,
case when sk.name_eng IS NULL then sk.name_uk else sk.name_eng end as subtype_nm
FROM (SELECT s.*, rank() over (PARTITION BY isin,state_number ORDER BY operdate desc) as rn
FROM stocks s
WHERE isin IS NOT NULL and state_number IS NOT NULL) a
JOIN trading_stocks ts ON ts.emission_is=a.id
JOIN stocks_trading_grounds stg ON stg.stocks_full_id=a.id
JOIN stocks_kinds sk ON sk.id=a.kind_id
WHERE stg.end_circ >= '2021-01-01 00:00:00' and a.rn=1
GROUP BY a.isin,
a.state_number,
a.update_time,
a.operdate,
a.inn,
a.name_eng,
coalesce(ts.full_name_eng,a.name_eng),
case when sk.name_eng IS NULL then sk.name_uk else sk.name_eng end;
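In other words, every column in the SELECT list that is not inside an aggregate must also appear in the GROUP BY. A minimal sketch of the rule, using columns from the same stocks table:
-- Fails: isin is selected but neither aggregated nor grouped
SELECT isin, state_number, max(operdate) FROM stocks GROUP BY state_number;
-- Works: every non-aggregated column is listed in the GROUP BY
SELECT isin, state_number, max(operdate) FROM stocks GROUP BY isin, state_number;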
I am using PostgreSQL and need to write a query that sums values from separate columns of two different tables and then segregates the result into separate columns depending on whether it is positive or negative.
For Example,
Below is the source table
Below is the resultant table that needs to be created and that is also used while populating it.
I have written the query below to aggregate the sums and am able to populate the TOT_CREDIT and TOT_DEBIT columns. Is there a more optimized query to achieve that?
select t.account_id,
t.transaction_date,
SUM(t.transaction_amt) filter (where t.transaction_amt >= 0) as tot_debit,
SUM(t.transaction_amt) filter (where t.transaction_amt < 0) as tot_credit,
case
when
(
SUM(t.transaction_amt) +
SUM(COALESCE(b.credit_balance,0)) +
SUM(COALESCE(b.debit_balance,0))
) < 0
then
(
SUM(t.transaction_amt) +
SUM(COALESCE(b.credit_balance,0)) +
SUM(COALESCE(b.debit_balance,0))
)
end as credit_balance,
case
when
(
SUM(t.transaction_amt) +
SUM(COALESCE(b.credit_balance,0)) +
SUM(COALESCE(b.debit_balance,0))
) > 0
then
(
SUM(t.transaction_amt) +
SUM(COALESCE(b.credit_balance,0)) +
SUM(COALESCE(b.debit_balance,0))
)
end as debit_balance
from
transaction t
LEFT OUTER JOIN balance b ON (t.account_id = b.account_id
and t.transaction_date = b.transaction_date
and b.transaction_date=t.transaction_date- INTERVAL '1 DAYS')
group by
t.account_id,
t.transaction_date
Please provide some pointers.
EDIT 1: This query is not working in the expected manner.
One way is to break your logic into small queries and join them in the end!
select tw.account_id, tw.t_date, tw.t_c, th.t_d, fo.c_b, fi.d_b
from
(select account_id, Transaction_date as t_date, sum(Transaction_AMT) as t_c from TransactionTABLE
 where Transaction_AMT < 0 group by account_id, Transaction_date) as tw
inner join
(select account_id, Transaction_date as t_date, sum(Transaction_AMT) as t_d from TransactionTABLE
 where Transaction_AMT > 0 group by account_id, Transaction_date) as th
 on tw.account_id = th.account_id and tw.t_date = th.t_date
inner join
-- an aggregate cannot appear in WHERE, so the sum conditions move to HAVING
(select account_id, Transaction_date as t_date, sum(Transaction_AMT) as c_b from TransactionTABLE
 group by account_id, Transaction_date having sum(Transaction_AMT) < 0) as fo
 on th.account_id = fo.account_id and th.t_date = fo.t_date
inner join
(select account_id, Transaction_date as t_date, sum(Transaction_AMT) as d_b from TransactionTABLE
 group by account_id, Transaction_date having sum(Transaction_AMT) > 0) as fi
 on fi.account_id = fo.account_id and fi.t_date = fo.t_date;
Or else
You could try something like the following, which computes the total per account_id and Transaction_date and places it in credit_balance or debit_balance depending on its sign:
select account_id,
       transaction_date,
       sum(transaction_amt) filter (where transaction_amt >= 0) as tot_debit,
       sum(transaction_amt) filter (where transaction_amt < 0)  as tot_credit,
       case when sum(transaction_amt) < 0  then sum(transaction_amt) end as credit_balance,
       case when sum(transaction_amt) >= 0 then sum(transaction_amt) end as debit_balance
from TransactionTABLE
group by account_id, Transaction_date
order by 1, 2;
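For reference, a minimal sketch of how FILTER behaves as conditional aggregation, on made-up sample rows (the data is assumed, not from the question):
-- Assumed rows in transaction: (1, '2021-01-01', 100), (1, '2021-01-01', -40), (1, '2021-01-01', -80)
SELECT account_id,
       transaction_date,
       SUM(transaction_amt) FILTER (WHERE transaction_amt >= 0) AS tot_debit,   -- 100
       SUM(transaction_amt) FILTER (WHERE transaction_amt < 0)  AS tot_credit,  -- -120
       SUM(transaction_amt)                                     AS net          -- -20
FROM transaction
GROUP BY account_id, transaction_date;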
I have the following query
WITH time_series AS (
SELECT *
FROM generate_series(now() - interval '1days', now(), INTERVAL '1 hour') AS ts
), recent_instances AS (
SELECT instance_id,
(CASE WHEN last_update_granted_ts IS NOT NULL THEN last_update_granted_ts ELSE created_ts END),
version,
4 status
FROM instance_application
WHERE group_id=$1
AND last_check_for_updates >= now() - interval '1days'
ORDER BY last_update_granted_ts DESC
), instance_versions AS (
SELECT instance_id, created_ts, version, status
FROM instance_status_history
WHERE instance_id IN (SELECT instance_id
FROM recent_instances)
AND status = 4
UNION
(SELECT * FROM recent_instances)
ORDER BY created_ts DESC
)
SELECT ts,
(CASE WHEN version IS NULL THEN '' ELSE version END),
sum(CASE WHEN version IS NOT null THEN 1 ELSE 0 END) total
FROM (
SELECT *
FROM time_series
LEFT JOIN LATERAL (
SELECT distinct ON (instance_id) instance_Id, version, created_ts
FROM instance_versions
WHERE created_ts <= time_series.ts
ORDER BY instance_Id, created_ts DESC
) _ ON true
) AS _
GROUP BY 1,2
ORDER BY ts DESC;
So the instance_versions subquery is executed for every timestamp generated by the time_series query (see the last SELECT statement). But for some reason the lateral join is very slow. The rows returned by the lateral join's subquery range around 12k-15k for a single timestamp from time_series, which is not a big number, and the final number of rows returned after the lateral join ranges from 250k-350k. Is there a way I can optimize this?
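One thing worth trying first (just a sketch, assuming instance_status_history is the large base table behind the instance_versions CTE) is an index that matches the lateral subquery's filter and ordering, so each per-timestamp probe can stop early instead of scanning and re-sorting; whether the planner can use it through the CTE depends on the PostgreSQL version, since CTEs are optimization fences before version 12:
-- Hypothetical index; column names are taken from the query above.
CREATE INDEX IF NOT EXISTS idx_ish_instance_created
    ON instance_status_history (instance_id, created_ts DESC);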
I have some prices for the month of January.
Date,Price
1,100
2,100
3,115
4,120
5,120
6,100
7,100
8,120
9,120
10,120
Now, the output I need is a non-overlapping date range for each price.
price,from,to
100,1,2
115,3,3
120,4,5
100,6,7
120,8,10
I need to do this using SQL only.
For now, if I simply group by and take min and max dates, I get the below, which is an overlapping range:
price,from,to
100,1,7
115,3,3
120,4,10
This is a gaps-and-islands problem. The simplest solution is the difference of row numbers:
select price, min(date), max(date)
from (select t.*,
row_number() over (order by date) as seqnum,
row_number() over (partition by price order by date) as seqnum2
from t
) t
group by price, (seqnum - seqnum2)
order by min(date);
Why this works is a little hard to explain, but if you look at the results of the subquery, you will see how adjacent rows are identified by the difference between the two values.
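For example, running just the subquery against the sample data above produces intermediate values like these (worked out by hand, so treat them as illustrative):
select t.*,
       row_number() over (order by date) as seqnum,
       row_number() over (partition by price order by date) as seqnum2
from t;
-- date  price  seqnum  seqnum2  seqnum - seqnum2
-- 1     100    1       1        0
-- 2     100    2       2        0
-- 3     115    3       1        2
-- 4     120    4       1        3
-- 5     120    5       2        3
-- 6     100    6       3        3
-- 7     100    7       4        3
-- 8     120    8       3        5
-- 9     120    9       4        5
-- 10    120    10      5        5
Grouping by price together with (seqnum - seqnum2) then separates the two runs of 100 (difference 0 vs 3) and the two runs of 120 (difference 3 vs 5), even though the 100 run on dates 6-7 and the 120 run on dates 4-5 happen to share the same difference.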
SELECT Lag.price,Lag.[date] AS [From], MIN(Lead.[date]-Lag.[date])+Lag.[date] AS [to]
FROM
(
SELECT [date],[Price]
FROM
(
SELECT [date],[Price],LAG(Price) OVER (ORDER BY DATE,Price) AS LagID FROM #table1 A
)B
WHERE CASE WHEN Price <> ISNULL(LagID,1) THEN 1 ELSE 0 END = 1
)Lag
JOIN
(
SELECT [date],[Price]
FROM
(
SELECT [date],Price,LEAD(Price) OVER (ORDER BY DATE,Price) AS LeadID FROM [#table1] A
)B
WHERE CASE WHEN Price <> ISNULL(LeadID,1) THEN 1 ELSE 0 END = 1
)Lead
ON Lag.[Price] = Lead.[Price]
WHERE Lead.[date]-Lag.[date] >= 0
GROUP BY Lag.[date],Lag.[price]
ORDER BY Lag.[date]
Another method using ROWS UNBOUNDED PRECEDING
SELECT price, MIN([date]) AS [from], [end_date] AS [To]
FROM
(
SELECT *, MIN([abc]) OVER (ORDER BY DATE DESC ROWS UNBOUNDED PRECEDING ) end_date
FROM
(
SELECT *, CASE WHEN price = next_price THEN NULL ELSE DATE END AS abc
FROM
(
SELECT a.* , b.[date] AS next_date, b.price AS next_price
FROM #table1 a
LEFT JOIN #table1 b
ON a.[date] = b.[date]-1
)AA
)BB
)CC
GROUP BY price, end_date
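Worked against the sample data, the inner levels of that query produce values roughly like these (sketched by hand, illustrative only):
-- date  price  next_price  abc   end_date
-- 1     100    100         NULL  2
-- 2     100    115         2     2
-- 3     115    120         3     3
-- 4     120    120         NULL  5
-- 5     120    100         5     5
-- 6     100    100         NULL  7
-- 7     100    120         7     7
-- 8     120    120         NULL  10
-- 9     120    120         NULL  10
-- 10    120    NULL        10    10
abc marks the last date of each run of a price, and MIN(abc) OVER (ORDER BY DATE DESC ROWS UNBOUNDED PRECEDING) carries that end date back over every row of the run, so grouping by price and end_date identifies each island and MIN([date]) gives its start.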
The first punch should be treated as the in time and the second punch as the out time, and if possible duplicate punches at the same time within a minute should be ignored.
I need to get all in times and out times in a row with the total hours, like below, in any format.
I tried the query below but can't get my expected output:
WITH Level1
AS (
SELECT A.emp_reader_id,
DT
,A.EventCatId
,A.Belongs_to
,ROW_NUMBER() OVER ( PARTITION BY A.Belongs_to,A.emp_reader_id ORDER BY DT ) AS RowNum
FROM dbo.trnevents A
)
,
LEVEL2
AS (-- find the last and next event type for each row
SELECT A.emp_reader_id,A.DT , A.EventCatId ,COALESCE(LastVal.EventCatId, 10) AS LastEvent,
COALESCE(NextVal.EventCatId, 10) AS NextEvent ,A.Belongs_to
FROM Level1 A
LEFT JOIN Level1 LastVal
ON A.emp_reader_id = LastVal.emp_reader_id and A.Belongs_to=LastVal.Belongs_to
AND A.RowNum - 1 = LastVal.RowNum
LEFT JOIN Level1 NextVal
ON A.emp_reader_id = NextVal.emp_reader_id and A.Belongs_to=NextVal.Belongs_to
AND A.RowNum + 1 = NextVal.RowNum
)
select * from level2 where emp_reader_id=92 order by dt desc
Expected output:
Try the script below. I treated all DT values within the same minute as a single entry for the calculation.
WITH CTE AS
(
SELECT MAX(emp_reader_id) emp_reader_id,
CAST(DT AS DATE) Date_for_Group,
CONVERT(VARCHAR(16), DT, 120) Time_For_Group, -- yyyy-mm-dd hh:mi, truncated to the minute
ROW_NUMBER() OVER(PARTITION BY CAST(DT AS DATE) ORDER BY CONVERT(VARCHAR(16), DT, 120)) RN,
CASE
WHEN ROW_NUMBER() OVER(PARTITION BY CAST(DT AS DATE) ORDER BY CONVERT(VARCHAR(16), DT, 120))%2 = 0 THEN 'OUT'
ELSE 'IN'
END In_Out
FROM your_table
GROUP BY CAST(DT AS DATE),CONVERT(VARCHAR(16), DT, 120)
)
SELECT A.emp_reader_id,A.Date_for_Group,
SUM(DATEDIFF(Minute,CAST(A.Time_For_Group AS DATETIME),CAST(B.Time_For_Group AS DATETIME)))/60 Hr,
SUM(DATEDIFF(Minute,CAST(A.Time_For_Group AS DATETIME),CAST(B.Time_For_Group AS DATETIME)))%60 Min
FROM CTE A
INNER JOIN CTE B
ON A.emp_reader_id = B.emp_reader_id
AND A.RN = B.RN -1
AND A.Date_for_Group = B.Date_for_Group
WHERE A.In_Out = 'IN'
GROUP BY A.emp_reader_id,A.Date_for_Group
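As a quick sanity check of the hour/minute arithmetic (the timestamps are made up):
SELECT DATEDIFF(Minute, '2021-01-04 09:00', '2021-01-04 17:45')      AS Mins, -- 525
       DATEDIFF(Minute, '2021-01-04 09:00', '2021-01-04 17:45') / 60 AS Hr,   -- 8
       DATEDIFF(Minute, '2021-01-04 09:00', '2021-01-04 17:45') % 60 AS Min;  -- 45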
First assign a row number to the datetime column, then take the same result set offset by row number + 1 and inner join the two on the row numbers. After that, select the min and max from the time-in and time-out columns and group by date to get the total work hours for that day. Hope it helps.
select empid
,date
,min(timein) as timein,max (timeout) timeout,convert(nvarchar(20),datediff(hh,min (timein),max(timeout))%24)
+':'+
convert(nvarchar(20),datediff(mi,min (timein),max(timeout))%60) as totalhrs
from(
Select a.empid,cast(a.dt as date) date,b.dt as timein,a.dt as timeout from(
SELECT DT
,[empid]
, id
,row_number() over(order by dt) as inn
FROM [test1].[dbo].[Table_2]
)a
inner join(
SELECT distinct DT
,[empid]
, id
,rank() over(order by dt)+1 as out
FROM [test1].[dbo].[Table_2])b
on FORMAT(a.dt,'hh:mm') <> FORMAT(b.dt,'hh:mm')
and cast(a.dt as date)=cast(b.dt as date)
and a.inn=b.out)b
group by b.empid,b.date
My objective is to produce a dataset that shows a boatload of data from, in total, just shy of 50 tables, all in the same Oracle database schema. Each table except the first consists of, as far as the report I'm building cares, two elements:
A foreign-key identifier that matches a row on the first table
A date
There may be many rows on one of these tables corresponding to one case, and it will NOT be the same number of rows from table to table.
My objective is to have each row in the first table show up as many times as needed to display all the results from the other tables once. So, something like this (except on a lot more tables):
CASE_FILE_ID INITIATED_DATE INSPECTION_DATE PAYMENT_DATE ACTION_DATE
------------ -------------- --------------- ------------ -----------
1000         10-JUL-1986    14-JUL-1987     10-JUL-1986
1000                        14-JUL-1988     10-JUL-1987
1000                        14-JUL-1989     10-JUL-1988
1000                                        10-JUL-1989
My current SQL code (shrunk down to five tables, but the rest all follow the same format as T1-T4):
SELECT DISTINCT
A.CASE_FILE_ID,
T1.DATE AS INITIATED_DATE,
T2.DATE AS INSPECTION_DATE,
T3.DATE AS PAYMENT_DATE,
T4.DATE AS ACTION_DATE
FROM
RECORDS.CASE_FILE A
LEFT OUTER JOIN RECORDS.INITIATE T1 ON A.CASE_FILE_ID = T1.CASE_FILE_ID
LEFT OUTER JOIN RECORDS.INSPECTION T2 ON A.CASE_FILE_ID = T2.CASE_FILE_ID
LEFT OUTER JOIN RECORDS.PAYMENT T3 ON A.CASE_FILE_ID = T3.CASE_FILE_ID
LEFT OUTER JOIN RECORDS.ACTION T4 ON A.CASE_FILE_ID = T4.CASE_FILE_ID
ORDER BY
A.CASE_FILE_ID
The problem is, the output this produces results in distinct combinations; so in the above example (where I added a 'WHERE' clause of A.CASE_FILE_ID = '1000'), instead of four rows for case 1000, it'd show twelve (1 Initiated Date * 3 Inspection Dates * 4 Payment Dates = 12 rows). Suffice it to say, as the number of tables increases, this would get very prohibitive in both display and runtime, very quickly.
What is the best way to get an output loosely akin to the ideal above, where any one date is only shown once? Failing that, is there a way to get it to only show as many lines for one CASE_FILE as it needs to show all the dates, even if some dates repeat within that?
There isn't a good way, but there are two ways. One method involves subqueries for each table and complex outer joins. The second involves subqueries and union all. Let's go with that one:
SELECT CASE_FILE_ID,
MAX(INITIATED_DATE) as INITIATED_DATE,
MAX(INSPECTION_DATE) as INSPECTION_DATE,
MAX(PAYMENT_DATE) as PAYMENT_DATE,
MAX(ACTION_DATE) as ACTION_DATE
FROM ((SELECT A.CASE_FILE_ID, NULL as INITIATED_DATE, NULL as INSPECTION_DATE,
NULL as PAYMENT_DATE, NULL as ACTION_DATE,
1 as seqnum
FROM RECORDS.CASE_FILE A
) UNION ALL
(SELECT CASE_FILE_ID, DATE as INITIATED_DATE, NULL as INSPECTION_DATE,
NULL as PAYMENT_DATE, NULL as ACTION_DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.INITIATE
) UNION ALL
(SELECT CASE_FILE_ID, NULL as INITIATED_DATE, DATE as INSPECTION_DATE,
NULL as PAYMENT_DATE, NULL as ACTION_DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.INSPECTION
) UNION ALL
(SELECT CASE_FILE_ID, NULL as INITIATED_DATE, NULL as INSPECTION_DATE,
DATE as PAYMENT_DATE, NULL as ACTION_DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.PAYMENT
) UNION ALL
(SELECT CASE_FILE_ID, NULL as INITIATED_DATE, NULL as INSPECTION_DATE,
NULL as PAYMENT_DATE, DATE as ACTION_DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.ACTION
)
) a
GROUP BY CASE_FILE_ID, seqnum;
Hmmm, a closely related solution is easier to maintain:
SELECT CASE_FILE_ID,
MAX(CASE WHEN type = 'INITIATED' THEN DATE END) as INITIATED_DATE,
MAX(CASE WHEN type = 'INSPECTION' THEN DATE END) as INSPECTION_DATE,
MAX(CASE WHEN type = 'PAYMENT' THEN DATE END) as PAYMENT_DATE,
MAX(CASE WHEN type = 'ACTION' THEN DATE END) as ACTION
FROM ((SELECT A.CASE_FILE_ID, NULL as TYPE, NULL as DATE,
1 as seqnum
FROM RECORDS.CASE_FILE A
) UNION ALL
(SELECT CASE_FILE_ID, 'INITIATED', DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.INITIATE
) UNION ALL
(SELECT CASE_FILE_ID, 'INSPECTION', DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.INSPECTION
) UNION ALL
(SELECT CASE_FILE_ID, 'PAYMENT', DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.PAYMENT
) UNION ALL
(SELECT CASE_FILE_ID, 'ACTION', DATE,
ROW_NUMBER() OVER (PARTITION BY CASE_FILE_ID ORDER BY DATE) as seqnum
FROM RECORDS.ACTION
)
) a
GROUP BY CASE_FILE_ID, seqnum;
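To see why grouping on seqnum lines the dates up, these are roughly the rows the UNION ALL would produce for CASE_FILE_ID 1000 with the dates from the question (hand-built and illustrative only):
CASE_FILE_ID  TYPE        DATE         SEQNUM
1000          INITIATED   10-JUL-1986  1
1000          INSPECTION  14-JUL-1987  1
1000          INSPECTION  14-JUL-1988  2
1000          INSPECTION  14-JUL-1989  3
1000          PAYMENT     10-JUL-1986  1
1000          PAYMENT     10-JUL-1987  2
1000          PAYMENT     10-JUL-1988  3
1000          PAYMENT     10-JUL-1989  4
GROUP BY CASE_FILE_ID, seqnum with the conditional MAXes then collapses these into the four rows shown in the question, one per seqnum, instead of the twelve-row cross product.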