I need to sum working hours for employees, with some limitations. My current code, for just one employee:
SELECT rp.mnr,
trunc(sum(rp.OUT_DATE-rp.IN_DATE)*24, 2) AS "Hours"
FROM table1 rp
WHERE rp.CALC_DATE >= '01.04.2022' and rp.CALC_DATE <= '30.4.2022'
AND rp.MNR = 90590
GROUP BY rp.MNR
ORDER BY rp.mnr desc
MNR |Hours |
-----+------+
90590|181.98|
I need to split this sum with an hours limit of 50. The result should look like:
MNR |Hours |
-----+------+
90590|50.00 |
90590|50.00 |
90590|50.00 |
90590|31.98 |
Can anyone help me with that?
DB: Oracle.
Thank you!
MF
You have to generate the required rows to get your desired output. The CONNECT BY clause is one of the methods:
WITH DATA AS (<YOUR SELECT QUERY GIVING COLUMNS AS MNR AND HOURS>),
FIXED_AMOUNT AS (SELECT 50 AMT FROM DUAL),
CALC AS (SELECT D.*, AMT, SUM(AMT) OVER (ORDER BY ROWNUM) CUMM_SUM
FROM DATA D, FIXED_AMOUNT
CONNECT BY (LEVEL - 1) * 50 <= HOURS)
SELECT MNR, CASE WHEN HOURS - CUMM_SUM < 0
THEN HOURS - CUMM_SUM + AMT
ELSE AMT
END HOURS
FROM CALC;
Demo.
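For reference, a minimal self-contained sketch of the same approach that runs directly against DUAL; the single aggregated row from the question (MNR 90590, 181.98 hours) is hard-coded in the DATA CTE, so substitute your own aggregate query there:
WITH DATA AS (SELECT 90590 MNR, 181.98 HOURS FROM DUAL),  -- replace with your aggregate query
FIXED_AMOUNT AS (SELECT 50 AMT FROM DUAL),
CALC AS (SELECT D.*, AMT, SUM(AMT) OVER (ORDER BY ROWNUM) CUMM_SUM
FROM DATA D, FIXED_AMOUNT
CONNECT BY (LEVEL - 1) * 50 <= HOURS)   -- generates one row per started 50-hour slice
SELECT MNR, CASE WHEN HOURS - CUMM_SUM < 0
THEN HOURS - CUMM_SUM + AMT             -- last slice: the remainder
ELSE AMT                                -- full 50-hour slice
END HOURS
FROM CALC;
-- Returns 50, 50, 50, 31.98 for MNR 90590.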
Thanks for taking the time to look at this.
How do I get the MAX of a text value in column A? I would like to add another WHERE clause so that only one of the "FY20-Q4M#" rows is shown (the MAX value only).
Current Table
YY-QQ_STATUS    |Program
----------------|--------
FY20-Q2_ACTUALS |XYZ
FY20-Q3_ACTUALS |XYZ
FY20-Q3_BUDGET  |XYZ
FY20-Q4M0       |XYZ
FY20-Q4M1       |XYZ
FY20-Q4M2       |XYZ
FY20-Q4_BUDGET  |XYZ
FY20-Q4_OUTLOOK |XYZ
Goal:
YY-QQ_STATUS    |Program
----------------|--------
FY20-Q2_ACTUALS |XYZ
FY20-Q3_ACTUALS |XYZ
FY20-Q3_BUDGET  |XYZ
FY20-Q4M2       |XYZ
FY20-Q4_BUDGET  |XYZ
FY20-Q4_OUTLOOK |XYZ
SELECT
CASE WHEN UPPER(SCENARIO) LIKE '%ACTUALS%' THEN CONCAT(LEFT(FISCAL_YEAR,4) ,'-', QUARTER, '_ACTUALS')
WHEN UPPER(SCENARIO) LIKE 'Q%M%' THEN CONCAT(LEFT(FISCAL_YEAR,4) ,'-', QUARTER, SUBSTR(SCENARIO,3,2))
WHEN UPPER(SCENARIO) LIKE '%BUDGET%' THEN CONCAT(LEFT(FISCAL_YEAR,4) ,'-', QUARTER, '_BUDGET')
END AS SCENARIO,
CONCAT(LEFT(FISCAL_YEAR,4) ,'-', QUARTER) AS QUARTER,
PROGRAM_NAME
FROM [XXXXFinance.FINANCE_OPS_REPORT_V] FIN
WHERE PROGRAM_NAME = 'Kiev20'
AND (XYZ_PER_NAME_QUARTER >= (SELECT XYZ_PER_NAME_QUARTER FROM [XXXXMaster_Data.CALENDAR]
WHERE CALENDAR_DATE_STR = CURRENT_DATE())
OR XYZ_PER_NAME_QUARTER < (SELECT XYZ_PER_NAME_QUARTER FROM [XXXXMaster_Data.CALENDAR]
WHERE CALENDAR_DATE_STR = CURRENT_DATE()) AND (SCENARIO CONTAINS 'Actual' OR SCENARIO CONTAINS 'Budget')
OR XYZ_PER_NAME_QUARTER < (SELECT XYZ_PER_NAME_QUARTER FROM [XXXXMaster_Data.CALENDAR]
WHERE CALENDAR_DATE_STR = CURRENT_DATE()) AND (SCENARIO CONTAINS 'Q%M%'))
group by 1,2,3
order by SCENARIO
Try below
select
ifnull(format('FY%s-Q%sM%s', any_value(year), any_value(quartal), max(month)), any_value(YY_QQ_STATUS)) as YY_QQ_STATUS,
Program
from (
select *,
regexp_extract(YY_QQ_STATUS, r'FY(\d\d)-Q\dM\d') year,
regexp_extract(YY_QQ_STATUS, r'FY\d\d-Q(\d)M\d') quartal,
regexp_extract(YY_QQ_STATUS, r'FY\d\d-Q\dM(\d)') month
from `project.XXXXFinance.FINANCE_OPS_REPORT_V`
)
group by Program, ifnull(format('FY%s-Q%s', year, quartal), YY_QQ_STATUS)
If applied to the sample data in your question, the output is as shown in your goal.
If you want one of the values, you could use row_number() and choose one row:
select t.* except (seqnum)
from (select t.*,
row_number() over (partition by left(YY_QQ_STATUS, 7) order by YY_QQ_STATUS desc) as seqnum
from t
) t
where not (YY_QQ_STATUS like 'FY20-Q4%' and seqnum > 1);
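A possible variant of the same row_number() idea that is not hard-coded to FY20-Q4: partition on the quarter prefix of the "M" rows and keep only the highest one per quarter, while leaving every non-"M" row untouched (a sketch; the table name t and the regular expressions are assumptions):
select * except (seqnum)
from (select t.*,
             -- one partition per "FY..-Q.M" family; NULL for non-M rows
             row_number() over (partition by regexp_extract(YY_QQ_STATUS, r'^(FY\d\d-Q\d)M')
                                order by YY_QQ_STATUS desc) as seqnum
      from t
     )
where seqnum = 1
   or not regexp_contains(YY_QQ_STATUS, r'^FY\d\d-Q\dM\d$');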
I'm setting up a time series with each row = 1 hr.
The input data sometimes has multiple values per hour; this can vary.
Right now the specific code looks like this:
select
patientunitstayid
, generate_series(ceil(min(nursingchartoffset)/60.0),
ceil(max(nursingchartoffset)/60.0)) as hr
, avg(case when nibp_systolic >= 1 and nibp_systolic <= 250 then
nibp_systolic else null end) as nibp_systolic_avg
from nc
group by patientunitstayid
order by patientunitstayid asc;
and it generates data where the average is taken over the entire time series for each patient instead of for each hour. How can I fix this?
I'm expecting something like this:
select nc.patientunitstayid, gs.hr,
avg(case when nc.nibp_systolic >= 1 and nc.nibp_systolic <= 250
then nibp_systolic
end) as nibp_systolic_avg
from (select nc.*,
min(nursingchartoffset) over (partition by patientunitstayid) as min_nursingchartoffset,
max(nursingchartoffset) over (partition by patientunitstayid) as max_nursingchartoffset
from nc
) nc cross join lateral
generate_series(ceil(min_nursingchartoffset/60.0),
ceil(max_nursingchartoffset/60.0)
) as gs(hr)
group by nc.patientunitstayid, hr
order by nc.patientunitstayid asc, hr asc;
That is, you need to be aggregating by hr. I put this into the from clause, to highlight that this generates rows. If you are using an older version of Postgres, then you might not have lateral joins. If so, just use a subquery in the from clause.
EDIT:
You can also try:
from (select nc.*,
generate_series(ceil(min(nursingchartoffset) over (partition by patientunitstayid) / 60.0),
ceil(max(nursingchartoffset) over (partition by patientunitstayid)/ 60.0)
) hr
from nc
) nc
And adjust the references to hr in the outer query.
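Putting that together, the adjusted query might look like this (a sketch only, reusing the nc table and column names from the question):
select nc.patientunitstayid, nc.hr,
       avg(case when nc.nibp_systolic >= 1 and nc.nibp_systolic <= 250
                then nc.nibp_systolic
           end) as nibp_systolic_avg
from (select nc.*,
             -- the generated hour counter is now a column of the derived table
             generate_series(ceil(min(nursingchartoffset) over (partition by patientunitstayid) / 60.0),
                             ceil(max(nursingchartoffset) over (partition by patientunitstayid) / 60.0)
                            ) as hr
      from nc
     ) nc
group by nc.patientunitstayid, nc.hr
order by nc.patientunitstayid asc, nc.hr asc;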
I have a table that has data like following.
attr |time
----------------|--------------------------
abc |2018-08-06 10:17:25.282546
def |2018-08-06 10:17:25.325676
pqr |2018-08-05 10:17:25.366823
abc |2018-08-06 10:17:25.407941
def |2018-08-05 10:17:25.449249
I want to group them and count by the attr column row-wise, and also create additional columns to show their counts per day and percentages, as shown below.
attr |day1_count| day1_%| day2_count| day2_%
----------------|----------|-------|-----------|-------
abc |2 |66.6% | 0 | 0.0%
def |1 |33.3% | 1 | 50.0%
pqr |0 |0.0% | 1 | 50.0%
I'm able to display one count by using GROUP BY, but I can't figure out how to even separate them into multiple columns. I tried to generate the day1 percentage with
SELECT attr, count(attr), count(attr) / sum(sub.day1_count) * 100 as percentage from (
SELECT attr, count(*) as day1_count FROM my_table WHERE DATEPART(week, time) = DATEPART(day, GETDate()) GROUP BY attr) as sub
GROUP BY attr;
But this is also not giving me the correct answer; I'm getting all zeroes for the percentage and a count of 1. Any help is appreciated. I'm trying to do this in Redshift, which follows PostgreSQL syntax.
Let's nail the logic before presenting:
with CTE1 as
(
select attr, DATEPART(day, time) as theday, count(*) as thecount
from MyTable
group by attr, DATEPART(day, time)
)
, CTE2 as
(
select theday, sum(thecount) as daytotal
from CTE1
group by theday
)
select t1.attr, t1.theday, t1.thecount, t1.thecount/t2.daytotal as percentofday
from CTE1 t1
inner join CTE2 t2
on t1.theday = t2.theday
From here you can pivot to create a day-by-day layout if you feel the need, as in the sketch below.
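A sketch of one such pivot on top of CTE1/CTE2 (it hard-codes the two day-of-month values, 6 and 5, from the sample data; missing attr/day combinations come back as NULL and could be wrapped in COALESCE):
with CTE1 as
(
select attr, DATEPART(day, time) as theday, count(*) as thecount
from MyTable
group by attr, DATEPART(day, time)
)
, CTE2 as
(
select theday, sum(thecount) as daytotal
from CTE1
group by theday
)
select t1.attr,
max(case when t1.theday = 6 then t1.thecount end) as day1_count,
max(case when t1.theday = 6 then 100.0 * t1.thecount / t2.daytotal end) as day1_percent,
max(case when t1.theday = 5 then t1.thecount end) as day2_count,
max(case when t1.theday = 5 then 100.0 * t1.thecount / t2.daytotal end) as day2_percent
from CTE1 t1
inner join CTE2 t2
on t1.theday = t2.theday
group by t1.attr
The 100.0 factor keeps the division out of integer arithmetic and expresses the result as a percentage.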
I am trying to enhance @johnHC's query. By the way, if you need it for 7 days, then you have to add those days in the CASE WHEN:
with CTE1 as
(
select attr, time::date as theday, count(*) as thecount
from t group by attr,time::date
)
, CTE2 as
(
select theday, sum(thecount) as daytotal
from CTE1
group by theday
)
,
CTE3 as
(
select t1.attr, EXTRACT(DOW FROM t1.theday) as day_nmbr,t1.theday, t1.thecount, t1.thecount/t2.daytotal as percentofday
from CTE1 t1
inner join CTE2 t2
on t1.theday = t2.theday
)
select CTE3.attr,
max(case when day_nmbr=0 then CTE3.thecount end) as day1Cnt,
max(case when day_nmbr=0 then percentofday end) as day1,
max(case when day_nmbr=1 then CTE3.thecount end) as day2Cnt,
max( case when day_nmbr=1 then percentofday end) day2
from CTE3 group by CTE3.attr
http://sqlfiddle.com/#!17/54ace/20
In case that you have only 2 days:
http://sqlfiddle.com/#!17/3bdad/3 (days descending as in your example from left to right)
http://sqlfiddle.com/#!17/3bdad/5 (days ascending)
The main idea is already mentioned in the other answers. Instead of joining CTEs to calculate the values, I am using window functions, which is a bit shorter and more readable, I think. The pivot is done the same way.
SELECT
attr,
COALESCE(max(count) FILTER (WHERE day_number = 0), 0) as day1_count, -- D
COALESCE(max(percent) FILTER (WHERE day_number = 0), 0) as day1_percent,
COALESCE(max(count) FILTER (WHERE day_number = 1), 0) as day2_count,
COALESCE(max(percent) FILTER (WHERE day_number = 1), 0) as day2_percent
/*
Add more days here
*/
FROM(
SELECT *, (count::float/count_per_day)::decimal(5, 2) as percent -- C
FROM (
SELECT DISTINCT
attr,
MAX(time::date) OVER () - time::date as day_number, -- B
count(*) OVER (partition by time::date, attr) as count, -- A
count(*) OVER (partition by time::date) as count_per_day
FROM test_table
)s
)s
GROUP BY attr
ORDER BY attr
A: counting the rows per day and counting the rows per day AND attr
B: for more readability I convert the date into numbers. Here I take the difference between the current row's date and the maximum date available in the table, so I get a counter from 0 (most recent day, day1) up to n - 1 (oldest day)
C: calculating the percentage and rounding
D: pivot by filtering on the day numbers. The COALESCE avoids the NULL values and switches them to 0. To add more days you can duplicate these columns (see the sketch below).
Edit: Made the day counter more flexible for more days; new SQL Fiddle
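For example, extending the query above to a third day just repeats the D-step columns (a sketch; day_number = 2 would be the third-most-recent day in the data):
SELECT
   attr,
   COALESCE(max(count) FILTER (WHERE day_number = 0), 0) as day1_count,
   COALESCE(max(percent) FILTER (WHERE day_number = 0), 0) as day1_percent,
   COALESCE(max(count) FILTER (WHERE day_number = 1), 0) as day2_count,
   COALESCE(max(percent) FILTER (WHERE day_number = 1), 0) as day2_percent,
   COALESCE(max(count) FILTER (WHERE day_number = 2), 0) as day3_count,   -- the added day
   COALESCE(max(percent) FILTER (WHERE day_number = 2), 0) as day3_percent
FROM(
   SELECT *, (count::float/count_per_day)::decimal(5, 2) as percent
   FROM (
      SELECT DISTINCT
         attr,
         MAX(time::date) OVER () - time::date as day_number,
         count(*) OVER (partition by time::date, attr) as count,
         count(*) OVER (partition by time::date) as count_per_day
      FROM test_table
   )s
)s
GROUP BY attr
ORDER BY attr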
Basically, I see this as conditional aggregation. But you need to get an enumerator for the date for the pivoting. So:
SELECT attr,
COUNT(*) FILTER (WHERE day_number = 1) as day1_count,
COUNT(*) FILTER (WHERE day_number = 1) / cnt as day1_percent,
COUNT(*) FILTER (WHERE day_number = 2) as day2_count,
COUNT(*) FILTER (WHERE day_number = 2) / cnt as day2_percent
FROM (SELECT attr,
DENSE_RANK() OVER (ORDER BY time::date DESC) as day_number,
1.0 * COUNT(*) OVER (PARTITION BY attr) as cnt
FROM test_table
) s
GROUP BY attr, cnt
ORDER BY attr;
Here is a SQL Fiddle.
Hello, I have a SQL query and it does not count one of my columns, which is called Spend; you can see it in the fiddle. What is wrong with my code?
I just need a basic table:
Month ID GOT SPEND
1 1 100 50
2 1 500 200
1 2 200 50
I have created the fiddle http://sqlfiddle.com/#!9/3623b1/2
Could you please help me?
Here is the query:
select
keliones_lapas.Vairuot_Id,
MONTH(keliones_lapas.Data_darbo),
sum(keliones_lapas.uzdarbis) as Got,
coalesce(Suma, 0) AS Spend,
(sum(keliones_lapas.uzdarbis) - coalesce(Suma, 0)) Total
from keliones_lapas
left join (
select Vairuotas,
MONTH(Data_islaidu) as Data_islaidu,
sum(Suma) as Suma
from islaidos
group by Vairuotas, MONTH(Data_islaidu)) islaidos
on keliones_lapas.Vairuot_Id = islaidos.Vairuotas
and MONTH(keliones_lapas.Data_darbo) = MONTH(islaidos.Data_islaidu)
group by keliones_lapas.Vairuot_Id, MONTH(keliones_lapas.Data_darbo), Suma
order by keliones_lapas.Vairuot_Id, MONTH(keliones_lapas.Data_darbo);
TRY THIS: You are already extracting the month in your subquery, then applying MONTH() again to that value in the join, so it returns NULL and does not match any month of keliones_lapas.
SELECT
keliones_lapas.Vairuot_Id,
MONTH(keliones_lapas.Data_darbo),
SUM(keliones_lapas.uzdarbis) AS Got,
COALESCE(Suma, 0) AS Spend,
(SUM(keliones_lapas.uzdarbis) - COALESCE(Suma, 0)) Total
FROM keliones_lapas
LEFT JOIN (
SELECT Vairuotas,
MONTH(Data_islaidu) AS Data_islaidu, --It's already in MONTH
SUM(Suma) AS Suma
FROM islaidos
GROUP BY Vairuotas, MONTH(Data_islaidu)) islaidos
ON keliones_lapas.Vairuot_Id = islaidos.Vairuotas
AND MONTH(keliones_lapas.Data_darbo) = Data_islaidu --No need to use MONTH or `vice versa`
GROUP BY keliones_lapas.Vairuot_Id, MONTH(keliones_lapas.Data_darbo), Suma
ORDER BY keliones_lapas.Vairuot_Id, MONTH(keliones_lapas.Data_darbo)
I have a very small SQL table that lists courses attended and the date of attendance. I can use the code below to count the attendees for each month
select to_char(DATE_ATTENDED,'YYYY/MM'),
COUNT (*)
FROM TRAINING_COURSE_ATTENDED
WHERE COURSE_ATTENDED = 'Fire Safety'
GROUP BY to_char(DATE_ATTENDED,'YYYY/MM')
ORDER BY to_char(DATE_ATTENDED,'YYYY/MM')
This returns a list as expected for each month that has attendees. However, I would like to list it as:
January 2
February 0
March 5
How do I show the count results along with the months that have no attendees? My table is very basic:
1234 01-JAN-15 Fire Safety
108 01-JAN-15 Fire Safety
1443 02-DEC-15 Healthcare
1388 03-FEB-15 Emergency
1355 06-MAR-15 Fire Safety
1322 09-SEP-15 Fire Safety
1234 11-DEC-15 Fire Safety
I just need to display each month and the total attendees for Fire Safety only. I haven't used SQL Developer for a while, so any help is appreciated.
You would need a calendar table to select the period you want to display. Simplified code would look like this (note the course filter has to go into the join condition; in the WHERE clause it would turn the left join into an inner join and the zero months would disappear):
select to_char(c.Date_dt,'YYYY/MM')
, COUNT (tca.COURSE_ATTENDED)
FROM calendar c
left join TRAINING_COURSE_ATTENDED tca
on tca.DATE_ATTENDED = c.Date_dt
and tca.COURSE_ATTENDED = 'Fire Safety'
WHERE c.Date_dt between [period_start_dt] and [period_end_dt]
GROUP BY to_char(c.Date_dt,'YYYY/MM')
ORDER BY to_char(c.Date_dt,'YYYY/MM')
You can create your own set of required year-months on the fly with a 0 count, and use a query like the one below.
Select yrmth,sum(counter) from
(
select to_char(date_attended,'YYYYMM') yrmth,
COUNT (1) counter
From TRAINING_COURSE_ATTENDED Where COURSE_ATTENDED = 'Fire Safety'
Group By to_char(date_attended,'YYYYMM')
Union All
Select To_Char(2015||Lpad(Rownum,2,0)),0 from Dual Connect By Rownum <= 12
)
group by yrmth
order by 1
If you want to show multiple years, just change the second query to:
Select To_Char(Year||Lpad(Month,2,0)) , 0
From
(select Rownum Month from Dual Connect By Rownum <= 12),
(select 2015+Rownum-1 Year from Dual Connect By Rownum <= 3)
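Put together, the multi-year version might look like this (a sketch; same table and column names as above, covering 2015-2017):
Select yrmth, sum(counter) from
(
select to_char(date_attended,'YYYYMM') yrmth,   -- months that actually have Fire Safety rows
COUNT (1) counter
From TRAINING_COURSE_ATTENDED Where COURSE_ATTENDED = 'Fire Safety'
Group By to_char(date_attended,'YYYYMM')
Union All
Select To_Char(Year||Lpad(Month,2,0)), 0        -- plus a 0 row for every month of every year
From
(select Rownum Month from Dual Connect By Rownum <= 12),
(select 2015+Rownum-1 Year from Dual Connect By Rownum <= 3)
)
group by yrmth
order by 1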
Try this:
SELECT Trunc(date_attended, 'MM') Month,
Sum(CASE
WHEN course_attended = 'Fire Safety' THEN 1
ELSE 0
END) Fire_Safety
FROM training_course_attended
GROUP BY Trunc(date_attended, 'MM')
ORDER BY Trunc(date_attended, 'MM')
Another way to generate a calendar table inline:
with calendar (month_start, month_end) as
( select add_months(date '2014-12-01', rownum)
, add_months(date '2014-12-01', rownum +1) - interval '1' second
from dual
connect by rownum <= 12 )
select to_char(c.month_start,'YYYY/MM') as course_month
, count(tca.course_attended) as attended
from calendar c
left join training_course_attended tca
on tca.date_attended between c.month_start and c.month_end
and tca.course_attended = 'Fire Safety'
group by to_char(c.month_start,'YYYY/MM')
order by 1;
(You could also have only the month start in the calendar table, and join on trunc(tca.date_attended,'MONTH') = c.month_start, though if you had indexes or partitioning on tca.date_attended that might be less efficient.)
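A sketch of that month-start-only variant, under the same assumptions about the table and column names:
with calendar (month_start) as
( select add_months(date '2014-12-01', rownum)
from dual
connect by rownum <= 12 )
select to_char(c.month_start,'YYYY/MM') as course_month
, count(tca.course_attended) as attended
from calendar c
left join training_course_attended tca
on trunc(tca.date_attended, 'MONTH') = c.month_start   -- join on the truncated attendance date
and tca.course_attended = 'Fire Safety'
group by to_char(c.month_start,'YYYY/MM')
order by 1;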