MachineID Active_Inactive Time
A 0 10.10 am
A 0 10.11 am
A 1 10.12 am
A 0 10.13 am
A 0 10.14 am
A 0 10.15 am
A 1 10.16 am
A 1 10.17 am
A 1 10.18 am
Now, from the above table I want an output that tells me how many times machine A was active and how many times it was inactive in each 2-minute window. So the aggregation needs to be done for every two-minute stint. For example, A was inactive twice from 10.10-10.11 and active 0 times. What is the best way to represent the output table?
There are 5 slots
10.10-10.11(1), 10.12-10.13(2) and so on...
The output should look something like this..
Slots Active A Inactive A
1 0 2
2 1 1
3 0 2
4 1 1
5 2 0
Assuming that time is a DATE type, this is what I would do. Take note that this is on Oracle, but it should not differ much.
CREATE TABLE temp (
Machine nvarchar2 (10),
Active number,
dt date
);
INSERT INTO temp VALUES ('A', 0, to_date('10.10 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 0, to_date('10.11 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 1, to_date('10.12 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 0, to_date('10.13 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 0, to_date('10.14 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 0, to_date('10.15 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 1, to_date('10.16 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 1, to_date('10.17 am', 'hh.mi am'));
INSERT INTO temp VALUES ('A', 1, to_date('10.18 am', 'hh.mi am'));
Select
    Machine,
    Active,
    to_char(dt, 'hh') || '.' || to_char(floor(to_char(dt, 'mi') / 2) * 2)
        || '-' || to_char(dt, 'hh') || '.' || to_char(floor(to_char(dt, 'mi') / 2) * 2 + 1) timeGroup,
    count(*) cnt
from temp
group by Machine, Active,
    to_char(dt, 'hh') || '.' || to_char(floor(to_char(dt, 'mi') / 2) * 2)
        || '-' || to_char(dt, 'hh') || '.' || to_char(floor(to_char(dt, 'mi') / 2) * 2 + 1)
;
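Not part of the original answer, but if you want the result pivoted into the Slots / Active / Inactive shape shown in the question, a sketch using conditional aggregation over the same 2-minute bucket (Oracle, against the temp table above; slots are numbered per machine in chronological bucket order) could look like this:
select
    Machine,
    dense_rank() over (partition by Machine
                       order by to_char(dt, 'hh24') || '.' || lpad(floor(to_number(to_char(dt, 'mi')) / 2) * 2, 2, '0')) as slot,
    sum(case when Active = 1 then 1 else 0 end) as active_cnt,   -- readings flagged active in the bucket
    sum(case when Active = 0 then 1 else 0 end) as inactive_cnt  -- readings flagged inactive in the bucket
from temp
group by Machine,
         to_char(dt, 'hh24') || '.' || lpad(floor(to_number(to_char(dt, 'mi')) / 2) * 2, 2, '0')
order by Machine, slot;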
You can use conversion, string and date functions to create the grouping:
SELECT machine_id, active_inactive,
       CONVERT(VARCHAR(13), time, 21) + ':' + RIGHT('00' + CAST(FLOOR(DATEPART(minute, time) / 2) * 2 AS VARCHAR(2)), 2) AS time_group,
       COUNT(*) AS cnt
FROM yourtable
GROUP BY machine_id, active_inactive,
         CONVERT(VARCHAR(13), time, 21) + ':' + RIGHT('00' + CAST(FLOOR(DATEPART(minute, time) / 2) * 2 AS VARCHAR(2)), 2)
I have a table named ar. For the column operation in it, I can allow only specific values ('C', 'R', 'RE', 'M', 'P'), so I have added a check constraint for it.
Requirement:
I need to insert 1 million records into the table, but the operation column has a constraint that allows only specific values. I am using generate_series() to generate the rows, and the random values I produce for operation violate the constraint and throw an error. How can I avoid the error and insert 1 million records with only the required values ('C', 'R', 'RE', 'M', 'P') in the column named operation?
CREATE TABLE ar (
mappingId TEXT,
actionRequestId integer,
operation text,
CONSTRAINT chk_operation CHECK (operation IN ('C', 'R', 'RE', 'M', 'P'))
);
INSERT INTO ar (mappingId, actionRequestId, operation)
SELECT substr(md5(random()::text), 1, 10),
(random() * 70 + 10)::integer,
substr(md5(random()::text), 1, 10)
FROM generate_series(1, 1000000);
ERROR: new row for relation "ar" violates check constraint "chk_operation"
You can do a cross join with the allowed values:
INSERT INTO ar (mappingid, actionrequestid, operation)
SELECT substr(md5(random()::text), 1, 10),
(random() * 70 + 10)::integer,
o.operation
FROM generate_series(1, 1000000 / 5)
cross join (
values ('C'), ('R'), ('RE'), ('M'), ('P')
) as o(operation);
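Not part of the original answer, but if you would rather assign the allowed values at random (so the five values do not come out in exactly equal proportions), a PostgreSQL sketch is to index into an array of the allowed values with random():
INSERT INTO ar (mappingid, actionrequestid, operation)
SELECT substr(md5(random()::text), 1, 10),
       (random() * 70 + 10)::integer,
       -- pick one of the five allowed values per row; the subscript is 1..5
       (ARRAY['C', 'R', 'RE', 'M', 'P'])[(floor(random() * 5) + 1)::int]
FROM generate_series(1, 1000000);
Another option, shown below, is to insert a fixed number of rows per value with UNION ALL: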
INSERT INTO ar (mappingId, actionRequestId, operation)
SELECT substr(md5(random()::text), 1, 10),
(random() * 70 + 10)::integer,
'C'
FROM generate_series(1, 200000)
UNION ALL
SELECT substr(md5(random()::text), 1, 10),
(random() * 70 + 10)::integer,
'R'
FROM generate_series(1, 200000)
UNION ALL
SELECT substr(md5(random()::text), 1, 10),
(random() * 70 + 10)::integer,
'RE'
FROM generate_series(1, 200000)
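-- ...repeat the pattern above for 'M' and 'P' (200000 rows each) to reach 1,000,000 rows in total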
Imagine an employee who works in a company and has a contract to work on a specific task; he comes in and leaves according to start and end dates. I want to get the intervals during which the employee comes to the office without any absence.
Example Data:
DECLARE @TimeClock TABLE (PunchID INT IDENTITY, EmployeeID INT, PunchinDate DATE)
INSERT INTO @TimeClock (EmployeeID, PunchInDate) VALUES
(1, '2020-01-01'), (1, '2020-01-02'), (1, '2020-01-03'), (1, '2020-01-04'),
(1, '2020-01-05'), (1, '2020-01-06'), (1, '2020-01-07'), (1, '2020-01-08'),
(1, '2020-01-09'), (1, '2020-01-10'), (1, '2020-01-11'), (1, '2020-01-12'),
(1, '2020-01-13'), (1, '2020-01-14'), (1, '2020-01-16'),
(1, '2020-01-17'), (1, '2020-01-18'), (1, '2020-01-19'), (1, '2020-01-20'),
(1, '2020-01-21'), (1, '2020-01-22'), (1, '2020-01-23'), (1, '2020-01-24'),
(1, '2020-01-25'), (1, '2020-01-26'), (1, '2020-01-27'), (1, '2020-01-28'),
(1, '2020-01-29'), (1, '2020-01-30'), (1, '2020-01-31'),
(1, '2020-02-01'), (1, '2020-02-02'), (1, '2020-02-03'), (1, '2020-02-04'),
(1, '2020-02-05'), (1, '2020-02-06'), (1, '2020-02-07'), (1, '2020-02-08'),
(1, '2020-02-09'), (1, '2020-02-10'), (1, '2020-02-12'),
(1, '2020-02-13'), (1, '2020-02-14'), (1, '2020-02-15'), (1, '2020-02-16');
--the output shall look like this '2020-01-01 to 2020-02-10' as this is the interval at which the employee comes without any leave
SELECT 1 AS ID, '2020-01-01' as START_DATE, '2020-01-10' as END_DATE union all
SELECT 1 AS ID, '2020-01-11' as START_DATE, '2020-01-15' as END_DATE union all
SELECT 1 AS ID, '2020-01-21' as START_DATE, '2020-01-31' as END_DATE union all
SELECT 1 AS ID, '2020-02-01' as START_DATE, '2020-02-10' as END_DATE
--the output shall look like this: '2020-01-01 to 2020-01-15' and '2020-01-21 to 2020-02-10', as these are the intervals at which the employee comes without any leave
Using the example data provided we can query the table like this:
;WITH iterate AS (
SELECT *, DATEADD(DAY,1,PunchinDate) AS NextDate
FROM @TimeClock
), base AS (
SELECT *
FROM (
SELECT *, CASE WHEN DATEADD(DAY,-1,PunchInDate) = LAG(PunchinDate,1) OVER (PARTITION BY EmployeeID ORDER BY PunchinDate) THEN PunchInDate END AS s
FROM iterate
) a
WHERE s IS NULL
), rCTE AS (
SELECT EmployeeID, PunchInDate AS StartDate, PunchInDate AS EndDate, NextDate
FROM base
UNION ALL
SELECT a.EmployeeID, a.StartDate, r.PunchInDate, r.NextDate
FROM rCTE a
INNER JOIN iterate r
ON a.NextDate = r.PunchinDate
AND a.EmployeeID = r.EmployeeID
)
SELECT EmployeeID, StartDate, MAX(EndDate) AS EndDate, DATEDIFF(DAY,StartDate,MAX(EndDate)) AS Streak
FROM rCTE
GROUP BY rCTE.EmployeeID, rCTE.StartDate
This is known as a recursive common table expression, and it allows us to compare values between related rows. In this case we're looking for rows that continue a streak, and we want to re-start that streak any time we encounter a break. We're using a windowed function called LAG to look back one row to the previous value and compare it to the one we have now. If it's not yesterday, then we start a new streak.
EmployeeID StartDate EndDate Streak
------------------------------------------
1 2020-01-01 2020-01-15 14
1 2020-01-17 2020-02-10 24
1 2020-02-12 2020-02-16 4
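Not part of the original answer, but for comparison, the same islands can usually be found without recursion by grouping consecutive dates with the classic date-minus-row-number trick (a sketch, assuming SQL Server 2012+ and the @TimeClock table variable above):
;WITH g AS (
    SELECT EmployeeID, PunchinDate,
           -- consecutive dates share the same value of (day number - row number)
           DATEDIFF(DAY, 0, PunchinDate)
               - ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY PunchinDate) AS grp
    FROM @TimeClock
)
SELECT EmployeeID,
       MIN(PunchinDate) AS StartDate,
       MAX(PunchinDate) AS EndDate,
       DATEDIFF(DAY, MIN(PunchinDate), MAX(PunchinDate)) AS Streak
FROM g
GROUP BY EmployeeID, grp
ORDER BY StartDate;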
I have multiple monotonic counters that can be reset ad-hoc. These counters exhibit sawtooth behavior when graphed (however they are not strictly increasing). I want a monthly report showing daily sums of the maxima for each counter.
My strategy so far is to put a '1' on the rows where the counter is less than the previous sampling of the counter (also less than or equal to the next). Then calculate a running total on that column to identify series without resets.
Then I group over the daily intervals to calculate max-min for each series in the day, then sum those portions to get grand totals for the day.
What I have works, but it takes ~10s to run. The execution plan shows two big sorts: one in cteData and I think the other is in cteSeries. I feel like I should be able to eliminate one of them but I'm at a loss how to do it.
The result of this code is as follows (and I can now see it is actually skipping a sample across the interval boundary):
interval tagname total
2020-01-01 alpha 3
2020-01-01 bravo 4
2020-01-02 alpha 3
2020-01-02 bravo 4
IF OBJECT_ID('tempdb..#counter_data') IS NOT NULL
DROP TABLE #counter_data;
CREATE TABLE #counter_data(
t_stamp DATETIME NOT NULL
,tagname VARCHAR(32) NOT NULL
,val REAL NULL
,PRIMARY KEY(t_stamp, tagname)
);
INSERT INTO #counter_data(t_stamp, tagname, val)
VALUES
('2020-01-01 04:00', 'alpha', 0)
,('2020-01-01 04:00', 'bravo', 0)
,('2020-01-01 08:00', 'alpha', 1)
,('2020-01-01 08:00', 'bravo', 1)
,('2020-01-01 12:00', 'alpha', 2)
,('2020-01-01 12:00', 'bravo', 2)
,('2020-01-01 16:00', 'alpha', 0)
,('2020-01-01 16:00', 'bravo', 3)
,('2020-01-01 20:00', 'alpha', 1)
,('2020-01-01 20:00', 'bravo', 4)
,('2020-01-02 04:00', 'alpha', 2)
,('2020-01-02 04:00', 'bravo', 5)
,('2020-01-02 08:00', 'alpha', 3)
,('2020-01-02 08:00', 'bravo', 6)
,('2020-01-02 12:00', 'alpha', 0)
,('2020-01-02 12:00', 'bravo', 7)
,('2020-01-02 16:00', 'alpha', 1)
,('2020-01-02 16:00', 'bravo', 8)
,('2020-01-02 20:00', 'alpha', 2)
,('2020-01-02 20:00', 'bravo', 9)
;
DECLARE @dateStart AS DATETIME = '2020-01-01';
DECLARE @dateEnd AS DATETIME = DATEADD(month, 2, @dateStart);
WITH cteData AS(
SELECT
t_stamp
,tagname
,val
,CASE
WHEN val < LAG(val) OVER(PARTITION BY tagname ORDER BY t_stamp)
AND val <= LEAD(val) OVER(PARTITION BY tagname ORDER BY t_stamp)
THEN 1
ELSE 0
END AS rn
FROM #counter_data
WHERE
t_stamp >= @dateStart AND t_stamp < @dateEnd
AND tagname IN(
'alpha'
,'bravo'
)
)
,cteSeries AS(
SELECT
CAST(t_stamp AS DATE) AS interval
,tagname
,val
,SUM(rn) OVER(PARTITION BY tagname ORDER BY t_stamp) AS series
FROM cteData
)
,cteSubtotal AS(
SELECT
interval
,tagname
,MAX(val) - MIN(val) AS subtotal
FROM cteSeries
GROUP BY interval, tagname, series
)
,cteGrandTotal AS(
SELECT
interval
,tagname
,SUM(subtotal) AS total
FROM cteSubtotal
GROUP BY interval, tagname
)
SELECT *
FROM cteGrandTotal
ORDER BY interval, tagname
I would just calculate the increase of the counter in each row by comparing it to the previous row:
with cte
as
(
SELECT *,isnull(lag(val) over (partition by tagname order by t_stamp),0) as previousVal
FROM #counter_data
)
SELECT cast(t_stamp as date),tagname, sum(case when val>previousVal then val-previousval else val end )
FROM cte
GROUP BY cast(t_stamp as date),tagname;
This looks like a gaps-and-islands problem. I think that you want lag() to get the "previous" value and a conditional sum to compute the daily count.
select
tagname,
cast(t_stamp as date) t_date,
sum(case when val = lag_val + 1 then 1 else 0 end) total
from (
select
c.*,
lag(val) over(
partition by tagname, cast(t_stamp as date)
order by t_stamp
) lag_val
from #counter_data c
) c
group by tagname, cast(t_stamp as date)
order by t_date, tagname
I have a table where users perform an order action. I want to get the difference in dates between two or more of a user's orders, do the same for all users, and then calculate the average or median.
Another issue is that the order rows are duplicated because of another column in the table called order_received time: the times are 5 seconds apart, so two rows are created for the same user with the same order time.
Based on your comment on my initial answer, here is another worksheet.
Table DDL
create table tbl_order(
order_id integer,
account_number integer,
ordered_at date
);
Data as in the other thread you pointed out:
insert into tbl_order values (1, 1001, to_date('10-Sep-2019 00:00:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (2, 2001, to_date('01-Sep-2019 00:00:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (3, 2001, to_date('03-Sep-2019 00:00:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (4, 1001, to_date('12-Sep-2019 00:00:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (5, 3001, to_date('18-Sep-2019 00:00:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (6, 1001, to_date('20-Sep-2019 00:00:00', 'DD-MON-YYYY HH24:MI:SS'));
Query
WITH VW AS (
SELECT ACCOUNT_NUMBER,
MIN(ORDERED_AT) EARLIEST_ORDER_AT,
MAX(ORDERED_AT) LATEST_ORDER_AT,
ROUND(MAX(ORDERED_AT) - MIN(ORDERED_AT), 5) DIFF_IN_DAYS,
COUNT(*) TOTAL_ORDER_COUNT
FROM TBL_ORDER
GROUP BY ACCOUNT_NUMBER
)
SELECT ACCOUNT_NUMBER, EARLIEST_ORDER_AT, LATEST_ORDER_AT,
DIFF_IN_DAYS, ROUND( DIFF_IN_DAYS/TOTAL_ORDER_COUNT, 4) AVERAGE
FROM VW;
===========Initial answer hereafter===========
Your question is not entirely clear. For example:
Do you want the difference in dates per day (a user can make multiple orders per day), or just between their earliest and latest orders?
What do you mean by average? Is it just (latest order date - earliest order date) / total purchases? That would be hours per purchase. Is it even useful?
Anyway, here is a working sheet; it should give you enough to set you in the right direction (hopefully). This is for an Oracle database but should mostly work for other databases, except for the time conversion functions used here. You will have to find and use equivalent functions for the database of your choice if it is not Oracle.
Create table
create table tbl_order(
order_id integer,
user_id integer,
item varchar2(100),
ordered_at date
);
Insert some data
insert into tbl_order values (8, 1, 'A2Z', to_date('21-Mar-2019 16:30:20', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (1, 1, 'ABC', to_date('22-Mar-2019 07:30:20', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (2, 1, 'ABC', to_date('22-Mar-2019 07:30:20', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (3, 1, 'EFGT', to_date('22-Mar-2019 09:30:30', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (4, 1, 'XYZ', to_date('22-Mar-2019 12:38:50', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (5, 1, 'ABC', to_date('22-Mar-2019 16:30:20', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (6, 2, 'ABC', to_date('22-Mar-2019 14:20:20', 'DD-MON-YYYY HH24:MI:SS'));
insert into tbl_order values (7, 2, 'A2C', to_date('22-Mar-2019 14:20:50', 'DD-MON-YYYY HH24:MI:SS'));
Get latest, earliest and total_purchase per user and an average
WITH VW AS (
SELECT USER_ID,
TO_CHAR(MIN(ORDERED_AT), 'DD-MON-YYYY HH24:MI:SS') EARLIEST_ORDER_AT,
TO_CHAR(MAX(ORDERED_AT), 'DD-MON-YYYY HH24:MI:SS') LATEST_ORDER_AT,
ROUND(MAX(ORDERED_AT) - MIN(ORDERED_AT), 5) * 24 DIFF_IN_HOURS,
COUNT(*) TOTAL_ORDER_COUNT
FROM TBL_ORDER
GROUP BY USER_ID
)
SELECT USER_ID, EARLIEST_ORDER_AT, LATEST_ORDER_AT,
DIFF_IN_HOURS, DIFF_IN_HOURS/TOTAL_ORDER_COUNT AVERAGE
FROM VW;
Get latest, earliest and total_purchase per user per day and an average
WITH VW AS (
SELECT USER_ID, TO_CHAR(ORDERED_AT, 'DD-MON-YYYY') ORDER_DATE_PART,
TO_CHAR(MIN(ORDERED_AT), 'DD-MON-YYYY HH24:MI:SS') EARLIEST_ORDER_AT,
TO_CHAR(MAX(ORDERED_AT), 'DD-MON-YYYY HH24:MI:SS') LATEST_ORDER_AT,
ROUND(MAX(ORDERED_AT) - MIN(ORDERED_AT), 5) * 24 DIFF_IN_HOURS,
COUNT(*) TOTAL_ORDER_COUNT
FROM TBL_ORDER
GROUP BY USER_ID, TO_CHAR(ORDERED_AT, 'DD-MON-YYYY')
)
SELECT USER_ID, ORDER_DATE_PART, EARLIEST_ORDER_AT, LATEST_ORDER_AT,
DIFF_IN_HOURS, DIFF_IN_HOURS/TOTAL_ORDER_COUNT AVERAGE
FROM VW;
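Not part of the original answers, but if you specifically want the gap between each order and the previous one (rather than latest minus earliest divided by the order count), a sketch using LAG together with AVG and MEDIAN (Oracle syntax, against the tbl_order table from the initial worksheet above) could look like this. It does not deal with the duplicate rows that differ only in order_received time; those would need to be de-duplicated first.
SELECT user_id,
       ROUND(AVG(gap_days), 4) AS avg_gap_days,
       MEDIAN(gap_days)        AS median_gap_days
FROM (
    SELECT user_id,
           -- days since this user's previous order (NULL for the first order)
           ordered_at - LAG(ordered_at) OVER (PARTITION BY user_id ORDER BY ordered_at) AS gap_days
    FROM tbl_order
)
WHERE gap_days IS NOT NULL
GROUP BY user_id;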
I need to resolve the following situation:
I have the following rows in a table A_5MIN_TST1 (the data to be compared are hexadecimal, but the examples work with decimal values):
UTCTIME|TLQ_INST
01/08/2013 01:05:00 a.m.|32
01/08/2013 01:10:00 a.m.|128
01/08/2013 01:15:00 a.m.|8
01/08/2013 01:20:00 a.m.|32
01/08/2013 01:25:00 a.m.|1
01/08/2013 01:30:00 a.m.|10
01/08/2013 01:35:00 a.m.|100
01/08/2013 01:40:00 a.m.|1000
01/08/2013 01:45:00 a.m.|2000
01/08/2013 01:50:00 a.m.|3000
01/08/2013 01:55:00 a.m.|4000
Doing a SELECT, I must analyze each bit of the tlq_inst column (hexadecimal data) and decide:
If some value of tlq_inst is = 8, = 32, or = 128, then write = 8.
When tlq_inst is not 8, 32, or 128, then write the first value of tlq_inst over the range.
I have tried with this query:
SELECT DECODE(POWER(2,BITAND(tlq_inst, 168)), 1, 'OK','Q') salida
FROM A_5MIN_TST1
WHERE utctime >= TO_DATE ('01/08/2013 01:00:01','dd/mm/yyyy hh24:mi:ss')
AND utctime < TO_DATE ('01/08/2013 02:00:00','dd/mm/yyyy hh24:mi:ss')
AND POINTNUMBER = 330062;
And I received these results:
SALIDA
Q
Q
Q
Q
OK
Q
Q
Q
Q
Q
Q
Q
Summarizing, on these 12 values, I need to:
Get 'Q' if the comparison condition with the mask is met.
Get the first value of tlq_inst when the comparison with the mask is NOT true.
If possible, do the same but inside the WHERE clause.
With this query I managed to get 12 values, but I need to get only one.
Could you help me to resolve this problem?
CREATE TABLE A_5MIN_TST1
(
UTCTIME DATE NOT NULL,
POINTNUMBER INTEGER NOT NULL,
SITEID INTEGER,
VALOR_INST FLOAT(126),
TLQ_INST INTEGER,
VALOR_PROM FLOAT(126),
TLQ_PROM INTEGER,
VALOR_MAX FLOAT(126),
TLQ_MAX INTEGER,
UTCTIME_MAX DATE,
VALOR_MIN FLOAT(126),
TLQ_MIN INTEGER,
UTCTIME_MIN DATE
)
TABLESPACE USERS
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
ALTER TABLE A_5MIN_TST1 ADD (
PRIMARY KEY
(UTCTIME, POINTNUMBER)
USING INDEX
TABLESPACE USERS
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
));
SET DEFINE OFF;
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:05:00', 'MM/DD/YYYY HH24:MI:SS'), 32);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:10:00', 'MM/DD/YYYY HH24:MI:SS'), 128);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:15:00', 'MM/DD/YYYY HH24:MI:SS'), 8);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:20:00', 'MM/DD/YYYY HH24:MI:SS'), 32);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:25:00', 'MM/DD/YYYY HH24:MI:SS'), 1);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:30:00', 'MM/DD/YYYY HH24:MI:SS'), 10);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:35:00', 'MM/DD/YYYY HH24:MI:SS'), 100);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:40:00', 'MM/DD/YYYY HH24:MI:SS'), 1000);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:45:00', 'MM/DD/YYYY HH24:MI:SS'), 2000);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:50:00', 'MM/DD/YYYY HH24:MI:SS'), 3000);
Insert into A_5MIN_TST1
(UTCTIME, TLQ_INST)
Values
(TO_DATE('08/01/2013 01:55:00', 'MM/DD/YYYY HH24:MI:SS'), 4000);
COMMIT;
Here is a statement giving you 'Q' when at least one record matches the bitmask, and the earliest TLQ_INST otherwise. It uses KEEP DENSE_RANK: it orders the records by utctime, gets the earliest record, and returns the tlq_inst of that record. In case there are multiple records with the same earliest time, it returns the maximum tlq_inst of those records.
select
  case when max(bitand(tlq_inst, 168)) = 0 then
    to_char(max(tlq_inst) keep (dense_rank first order by utctime))
  else
    'Q'
  end as result
from a_5min_tst1
where utctime >= to_date ('01/08/2013 01:00:01','dd/mm/yyyy hh24:mi:ss')
and utctime < to_date ('01/08/2013 02:00:00','dd/mm/yyyy hh24:mi:ss')
and pointnumber = 330062;
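As a side note (not from the original answer): 168 = 8 + 32 + 128, so BITAND(tlq_inst, 168) is non-zero exactly when one of those bits is set. A quick sketch to see the mask against the sample rows:
-- the masked value is non-zero only when bit 8, 32 or 128 is set in tlq_inst
select utctime, tlq_inst, bitand(tlq_inst, 168) as masked
from a_5min_tst1
order by utctime;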