I have a table in Redshift collecting verification call CDRs, like this:
name | callid | timestamp | Anumber | BNumber | Duration | Status
customerA |1631c40d-e397 |2017/01/01 03:01:00| +390123456789 | +440123456700 | 0 | aborted
customerA |8a12ca23-8728 |2017/01/01 03:02:00| +390123456789 | +440123456701 | 0 | aborted
customerB |54739440-c297 |2017/01/01 03:03:00| +440123456755 | +440123456780 | 0 | aborted
customerA |87e01f74-ce9e |2017/01/01 03:03:01| +390123456789 | +440123456700 | 1 | success
customerB |54739440-c297 |2017/01/01 03:03:02| +440123456755 | +440123456123 | 0 | aborted
customerB |1d192eb7-01b0 |2017/01/01 03:03:03| +440123456755 | +440123456123 | 1 | success
Each attempt is a call with a unique callid.
Re-attempt means: I call you, the call fails, and I call you again until a call is successful. So if ANumber calls BNumber 3 times and only the third one is successful, then I have 3 attempts, of which 2 failed and were re-attempted.
I need to get:
Number of aborted calls globally which have been re-attempted in less than 5 sec
Number of aborted calls globally which have been re-attempted in less than 20 sec
e.g.
customerA made 1000 calls globally in April 01-30, 2017. 600 successful, 300 failed, 100 aborted.
Among these (300 + 100) calls, how many were re-attempted in less than 5 sec (too quick), and how many in less than 20 sec?
Number/percentage of re-attempts when a number fails
e.g.
Say those (300 + 100) calls were made to 200 individual numbers.
I would like to know the percentage of callers who actually retried the same number.
If 100 users just tried once and failed, the remaining 100 users (so 50% here) tried on average 3 times when their number failed to be verified.
The big issue here for me is that customerA's call1, call2 and call3 are not linked in any way. So if customerA makes a call from numberA to numberB and it fails (aborted or failed), he can try again and again until one call is 'successful'.
I know it's not easy to explain; I hope it is a bit clearer now.
Results should be like this
customer | aborted_calls | percentage_aborted_5s | percentage_aborted_20s
customerA | 300 | 20 | 40
I was trying to calculate this by summing the attempts like this, but it's not counting all the attempts for the same 'call' (ANumber calling BNumber many times without success):
select a.anumber, a.bnumber, sum(stat)
from
(
    select anumber, bnumber, timestamp
          ,case when status = 3 then 0 else 1 end as stat  -- 3 is aborted or failed
    from mytable
) a
inner join (select * from mytable) v
    on a.anumber = v.anumber and a.bnumber = v.bnumber
where datediff(s, v.timestamp, a.timestamp) <= 5
group by a.anumber, a.bnumber
If I have a success, then the counting should start again from 0.
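One possible way to link the attempts is with window functions rather than a self-join. A rough sketch, assuming the table is called mytable and that status holds the text values from the sample ('aborted', 'failed', 'success') rather than the numeric code used above: LEAD finds the next call for the same customer/ANumber/BNumber pair, so a failed or aborted call followed by another call within the window counts as re-attempted.

with attempts as (
    select name, status,
           "timestamp" as call_ts,
           lead("timestamp") over (partition by name, anumber, bnumber order by "timestamp") as next_ts
    from mytable
)
select name,
       count(*) as failed_or_aborted_calls,
       sum(case when datediff(s, call_ts, next_ts) < 5  then 1 else 0 end) as reattempted_lt_5s,
       sum(case when datediff(s, call_ts, next_ts) < 20 then 1 else 0 end) as reattempted_lt_20s
from attempts
where status <> 'success'   -- only failed/aborted calls can be re-attempted
group by name;

The two percentage columns from the desired output would then just be the 5s and 20s counts divided by failed_or_aborted_calls. A success naturally resets the counting, because successful calls are excluded from the outer query and the next failure starts a new sequence.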
I'm currently running TimescaleDB. I have a table that looks similar to the following:
        one_day         | name | metric_value
------------------------+------+--------------
 2022-05-30 00:00:00+00 | foo  |          400
 2022-05-30 00:00:00+00 | bar  |          200
 2022-06-01 00:00:00+00 | foo  |          800
 2022-06-01 00:00:00+00 | bar  |         1000
I'd like a query that returns the % growth and raw growth of metric, so something like the following.
name | % growth | growth
-----+----------+-------
foo  |     200% |    400
bar  |     500% |    800
I'm fairly new to TimescaleDB and not sure what the most efficient way to do this is. I've tried using LAG, but the main problem I'm facing is that OVER (GROUP BY time, url) doesn't respect that I ONLY want to consider the same name in the grouping, and I can't seem to get around it. The query works fine for a single name.
Thanks!
Use LAG to get the previous value for the same name using the PARTITION option:
lag(metric_value,1,0) over (partition by name order by one_day)
This says: within each name, when ordered by one_day, give me the value of metric_value from the previous row (the second parameter to LAG says 1 row back); if there is no previous row, give me 0 (the third parameter).
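Putting it together, a sketch of the full query, assuming the table is named metrics and that '% growth' means the new value expressed as a percentage of the previous one, as in the example output:

select name,
       metric_value - prev_value                             as growth,
       round(100.0 * metric_value / nullif(prev_value, 0))   as pct_growth
from (
    select name, one_day, metric_value,
           lag(metric_value, 1, 0) over (partition by name order by one_day) as prev_value
    from metrics
) t
where prev_value <> 0   -- skip each name's first bucket, which has no previous value
order by name;

With only the two buckets shown this returns one row per name (foo: growth 400, 200%; bar: growth 800, 500%); with more history it returns the change from each bucket to the next.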
I want to create a report based on the following data:
Lead Name | Verified By | Verified on
Lead 1    | ABC         | 11-02-2021
Lead 2    | KMJ         | 9-02-2021
Lead 3    | ABC         | 11-02-2021
The report will look like,
Consider today's date as 12-02-2021; we need to create a report of each employee's work count for the last 30 days.
user | 12-02-2021 | 11-02-2021 | 10-02-2021 | 9-02-2021 | 8-02-2021 | 7-02-2021 | ... so on till last 30 days
ABC  | 0          | 2          | 0          | 0         | 0         | 0
XYZ  | 0          | 0          | 0          | 0         | 0         | 0
KMJ  | 0          | 0          | 0          | 1         | 0         | 0
I have written an MSSQL query with the date filter below,
CAST(lead.CREATED_ON as date) between cast(DATEADD(day, -30, getdate()) as date) and CAST(getdate() as date)
but I am not able to get the data in the format below, and if there is no entry for a date, that date should still show 0 for every user:
user | 12-02-2021 | 11-02-2021 | 10-02-2021 | 9-02-2021 | 8-02-2021 | 7-02-2021 | ... so on
Kindly help me complete this query, and if possible please share a link to a relevant article; it would be a great help, thank you.
First of all, are the dates really stored as strings like that? If so, that's a big problem that will make this already-difficult situation much worse. It's important enough you should consider the current schema as actively broken.
Moving on, the most correct way to handle this situation is to pivot the data in the client code or reporting tool. That is, return a result set from the SQL database looking more like this:
User | Date | Value
ABC | 2021-02-12 | 0
ABC | 2021-02-11 | 2
ABC | 2021-02-10 | 0
ABC | 2021-02-09 | 0
ABC | 2021-02-08 | 0
ABC | 2021-02-07 | 0
XYZ | 2021-02-12 | 0
XYZ | 2021-02-11 | 0
XYZ | 2021-02-10 | 0
XYZ | 2021-02-09 | 0
XYZ | 2021-02-08 | 0
XYZ | 2021-02-07 | 0
... and so on
And then let the client do the work to reformat this data however you want, rather than asking a server (often licensed at thousands of dollars per CPU core, and not well suited to the task) to do that work.
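A minimal sketch of a query that returns that long-format result set, assuming the source table is named LEADS with a VERIFIED_BY column (both names assumed from the sample; CREATED_ON comes from the query fragment above and is assumed to be a real datetime column). A derived calendar of the last 30 days is cross-joined with the list of users so that dates with no rows still come back as 0:

WITH d AS (
    -- one row per day for the last 30 days
    SELECT CAST(DATEADD(day, -n.n, GETDATE()) AS date) AS report_date
    FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),
                 (10),(11),(12),(13),(14),(15),(16),(17),(18),(19),
                 (20),(21),(22),(23),(24),(25),(26),(27),(28),(29)) AS n(n)
),
u AS (
    SELECT DISTINCT VERIFIED_BY AS usr FROM LEADS
)
SELECT u.usr AS [user],
       d.report_date AS [date],
       COUNT(l.VERIFIED_BY) AS [value]
FROM u
CROSS JOIN d
LEFT JOIN LEADS AS l
       ON l.VERIFIED_BY = u.usr
      AND CAST(l.CREATED_ON AS date) = d.report_date
GROUP BY u.usr, d.report_date;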
But if you really must do this on the database server, you should know the SQL language has a very strict requirement to know the number, name, and type of columns in the output at query evaluation time, before looking at any data. Even SELECT * respects this, because the * is based on a table definition known ahead of time.
If the output won't know how many columns there are until it looks at the data, you must do this in 3 steps:
Run a query to get data about columns.
Use the result from step 1 to build a new query dynamically.
Run the new query.
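A rough sketch of those three steps, assuming the long-format query above has been saved as a view with the hypothetical name LeadCountsByDay([user], report_date, [value]) and a SQL Server version with STRING_AGG (2017+; older versions would need FOR XML PATH):

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- step 1: query the data to find out which date columns are needed
SELECT @cols = STRING_AGG(QUOTENAME(CONVERT(char(10), report_date, 120)), ',')
FROM (SELECT DISTINCT report_date FROM LeadCountsByDay) AS d;

-- step 2: use that result to build a new query dynamically
SET @sql = N'SELECT [user], ' + @cols + N'
FROM LeadCountsByDay
PIVOT (SUM([value]) FOR report_date IN (' + @cols + N')) AS p;';

-- step 3: run the new query
EXEC sp_executesql @sql;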
However, since it looks like you always want exactly 30 days worth of data, you may be able to do this in a single query using the PIVOT keyword if you're willing to name the columns something more like OneDayBack, TwoDaysBack, ThreeDaysBack, ... ThirtyDaysBack, such that you can reference them in the pivot code regardless of the current date.
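With fixed, relative column names the whole thing fits in one static query; a sketch using plain conditional aggregation rather than the PIVOT keyword (same LEADS / VERIFIED_BY / CREATED_ON assumptions as above):

SELECT VERIFIED_BY AS [user],
       SUM(CASE WHEN CAST(CREATED_ON AS date) = CAST(GETDATE() AS date)                   THEN 1 ELSE 0 END) AS Today,
       SUM(CASE WHEN CAST(CREATED_ON AS date) = CAST(DATEADD(day, -1, GETDATE()) AS date) THEN 1 ELSE 0 END) AS OneDayBack,
       SUM(CASE WHEN CAST(CREATED_ON AS date) = CAST(DATEADD(day, -2, GETDATE()) AS date) THEN 1 ELSE 0 END) AS TwoDaysBack
       -- ... continue the same pattern up to ThirtyDaysBack
FROM LEADS
WHERE CAST(CREATED_ON AS date) >= CAST(DATEADD(day, -30, GETDATE()) AS date)
GROUP BY VERIFIED_BY;

Users with no rows in the last 30 days will be missing entirely here; if they must appear with all zeros, left-join from the distinct user list as in the long-format sketch above.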
I have the same field in 2 tables, i.e. status (status VARCHAR(80) CHARACTER SET LATIN CASESPECIFIC).
The first table has about 1000 rows where the status value is 'success'.
The 2nd table has one row with status = 'success' and other rows with values like 'failure'. I want to join the 2 tables and get the value from the 2nd table (dw_status_id).
1ST TABLE scratch.COGIPF_RUNREPORT_test
STATUS | any_number
success | 67
success | 1
success | 2
success | 3
success | 42
success | 52
failure | 45
2nd table scratch.dw_job_status_dim_test
status |dw_status_id
failure |34
success |12
running |45
Result :-
Status | dw_status_id
success | 12
success | 12
success | 12
success | 12
success | 12
success | 12
failure | 34
Query I am using:
sel b.dw_status_id
from scratch.COGIPF_RUNREPORT_test a
join scratch.dw_job_status_dim_test b
  on trim(a.status) = trim(b.status)
Actual result: 0 rows.
It would be great if anyone could help me achieve this.
Thanks
You only have to select the status from scratch.COGIPF_RUNREPORT_test and the dw_status_id from scratch.dw_job_status_dim_test, and check that the status values of the two tables are equal.
So I've tried this on my own, maybe it helps:
select distinct scratch.COGIPF_RUNREPORT_test.status, dw_status_id
from scratch.COGIPF_RUNREPORT_test, scratch.dw_job_status_dim_test
where scratch.COGIPF_RUNREPORT_test.status = scratch.dw_job_status_dim_test.status
These are 3 records from the first table:
ship by date on time metrics-superrush 0 0 0 0 2018-03-07 01:40:08 2018-03-07 00:00:00 2018-03-07 01:41:46.917000 1.64101666667 success
lab - late shipment details 0 0 0 0 2018-03-07 01:40:08 2018-03-07 00:00:00 2018-03-07 01:40:44.950000 0.6078 success
shipping upgrade-tp/wpd/mypub 0 0 0 0 2018-03-07 01:40:09 2018-03-07 00:00:00 2018-03-07 01:40:25.028000 0.2674 success
These are some records from the 2nd table:
dw_status_id status description success_indicator
10 running RUNNING 0
11 stopped STOPPED 0
12 success SUCCESS 1
I have tried your 2 queries; both are giving no results.
Ideally it should give the desired result, but somehow we are making a mistake in the join or in the VARCHAR case-specific comparison. Please let me know if you have any more thoughts on where I am going wrong.
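Since the columns are declared CASESPECIFIC, a difference in case, padding, or hidden characters between the two status values would make an equality join return nothing. A hedged sketch for checking that, assuming Teradata SQL and the table/column names above:

-- normalise case and padding on both sides before comparing
select a.status, b.dw_status_id
from scratch.COGIPF_RUNREPORT_test a
join scratch.dw_job_status_dim_test b
  on trim(lower(a.status)) = trim(lower(b.status));

-- if that still returns nothing, inspect the raw values and their lengths;
-- a longer-than-expected length usually points to trailing or hidden characters
select distinct status, character_length(status) as len
from scratch.COGIPF_RUNREPORT_test;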
I'm going to do my best to explain this so I apologize in advance if my explanation is a little awkward. If I am foggy somewhere, please tell me what would help you out.
I have a table filled with circuits and dates. Each circuit gets trimmed on a time cycle of about 36 months or 48 months. I have a column that gives me this info. I have one record for every time a circuit's trim cycle has been completed. I am attempting to link a known circuit outage list, to a table with their outage data, to a table with the circuit's trim history. The twist is the following:
I only want to get back circuits that have exceeded their trim cycle by 6 months. So I would need to take all records for a circuit, look at each record individually, find the most recent previous record relative to the one currently being examined (every record needs to be examined individually), calculate the difference between the two records in months, and then return only the records where the gap between two entries for a given feeder exceeded the cycle by 6 months.
Here is an example of the data:
+----+--------+----------+-------+
| ID | feeder | comp | cycle |
| 1 | 123456 | 1/1/2001 | 36 |
| 2 | 123456 | 1/1/2004 | 36 |
| 3 | 123456 | 7/1/2007 | 36 |
| 4 | 123456 | 3/1/2011 | 36 |
| 5 | 123456 | 1/1/2014 | 36 |
+----+--------+----------+-------+
Here is an example of the result set I would want (please note: cycle can vary by circuit, so the value in the cycle column needs to be in the calculation to determine if I exceeded the cycle by 6 months between trimmings):
+----+--------+----------+-------+
| ID | feeder | comp | cycle |
| 3 | 123456 | 7/1/2007 | 36 |
| 4 | 123456 | 3/1/2011 | 36 |
+----+--------+----------+-------+
This is the query I started, but I'm really struggling to figure out how to do the date calculations correctly:
SELECT temp_feederList.Feeder, Temp_outagesInfo.causeType, Temp_outagesInfo.StormNameThunder,
       Temp_outagesInfo.deviceGroup, Temp_outagesInfo.beginTime,
       tbl_Trim_History.COMP, tbl_Trim_History.CYCLE
FROM (temp_feederList
LEFT JOIN Temp_outagesInfo ON temp_feederList.Feeder = Temp_outagesInfo.Feeder)
LEFT JOIN tbl_Trim_History ON Temp_outagesInfo.Feeder = tbl_Trim_History.CIRCUIT_ID;
I wasn't really able to figure out where I need to go from here to get that most recent entry and perform the mathematical comparison. I've never been asked to do SQL this complex before, so I want to thank all of you for your patience and any assistance you're willing to lend.
I'm making some assumptions, but this uses a subquery to give you rows in the feeder list where the previous completed date was greater than the number of months ago indicated by the cycle:
SELECT tbl_Trim_History.ID, tbl_Trim_History.feeder,
tbl_Trim_History.comp, tbl_Trim_History.cycle
FROM tbl_Trim_History
WHERE tbl_Trim_History.comp>
(SELECT Max(DateAdd("m", tbl_Trim_History.cycle, comp))
FROM tbl_Trim_History T2
WHERE T2.feeder = tbl_Trim_History.feeder AND
T2.comp < tbl_Trim_History.comp)
If you needed to check for longer than the cycle, for example the cycle plus 6 months, you could add that extra value to the months calculated by the DateAdd function.
Also, I don't know if the value of cycle specifies the number of months from the prior cycle or the number of months to the next one. If the latter, I would change tbl_Trim_History.cycle in the DateAdd function to just cycle.
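For example, to require the gap to exceed the prior record's cycle by 6 months, the subquery could become something like this (a sketch, with the rest of the query unchanged and using the prior record's cycle as discussed above):

(SELECT Max(DateAdd("m", T2.cycle + 6, T2.comp))
 FROM tbl_Trim_History T2
 WHERE T2.feeder = tbl_Trim_History.feeder AND
       T2.comp < tbl_Trim_History.comp)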
SELECT tbl_trim_history.ID, tbl_trim_history.Feeder,
tbl_trim_history.Comp, tbl_trim_history.Cycle,
(select max(comp) from tbl_trim_history T
where T.feeder=tbl_trim_history.feeder and
t.comp<tbl_trim_history.comp) AS PriorComp,
IIf(DateDiff("m",[priorcomp],[comp])>36,"x",Null) AS [Select]
FROM tbl_trim_history;
This query identifies (with an X in the last column) the records from tbl_trim_history that exceed the cycle time - but as noted in the comments I'm not entirely sure if this is what you need or not, or how to incorporate the other 2 tables. Once you see what it is doing you can modify it to only keep the records you need.
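Combining the two answers, a sketch that keeps only the offending records, using the prior record's cycle plus a 6-month grace period (with >= so that a gap of exactly cycle + 6 months, like record 3 in the example, is still included):

SELECT ID, Feeder, Comp, Cycle
FROM tbl_trim_history
WHERE Comp >= (SELECT Max(DateAdd("m", T.Cycle + 6, T.Comp))
               FROM tbl_trim_history AS T
               WHERE T.Feeder = tbl_trim_history.Feeder
                 AND T.Comp < tbl_trim_history.Comp);

Against the sample data this returns IDs 3 and 4; records with no earlier trim (ID 1) drop out because the subquery returns Null.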
I have a table "AuctionResults" like below
Auction Action Shares ProfitperShare
-------------------------------------------
Round1 BUY 6 200
Round2 BUY 5 100
Round2 SELL -2 50
Round3 SELL -5 80
Now I need to aggregate results by every auction with BUYS after netting out SELLS in subsequent rounds on a "First Come First Net basis"
so in Round1 I bought 6 shares, then sold 2 in Round2 and the remaining 4 in Round3, with a total net profit of 6*200 - 2*50 - 4*80 = 780,
and in Round2 I bought 5 shares and sold 1 in Round3 (because the earlier 4 belonged to Round1), with a net profit of 5*100 - 1*80 = 420
...so the Resulting Output should look like:
Auction NetProfit
------------------
Round1 780
Round2 420
Can we do this using just Oracle SQL (10g), and not PL/SQL?
Thanks in advance
I know this is an old question and won't be of use to the original poster, but I wanted to take a stab at it because it was an interesting question. I haven't tested it enough, so I would expect it still needs to be corrected and tuned. But I believe the approach is legitimate. I would not recommend using a query like this in production because it would be difficult to maintain and understand (and I don't believe it really scales). You would be much better off creating some alternate data structures. Having said that, this is what I ran in PostgreSQL 9.1:
WITH x AS (
SELECT round, action
,ABS(shares) AS shares
,profitpershare
,COALESCE( SUM(shares) OVER(ORDER BY round, action
ROWS BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING)
, 0) AS previous_net_shares
,COALESCE( ABS( SUM(CASE WHEN action = 'SELL' THEN shares ELSE 0 END)
OVER(ORDER BY round, action
ROWS BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING) ), 0 ) AS previous_sells
FROM AuctionResults
ORDER BY 1,2
)
SELECT round, shares * profitpershare - deduction AS net
FROM (
SELECT buy.round, buy.shares, buy.profitpershare
,SUM( LEAST( LEAST( sell.shares, GREATEST(buy.shares - (sell.previous_sells - buy.previous_sells), 0)
,GREATEST(sell.shares + (sell.previous_sells - buy.previous_sells) - buy.previous_net_shares, 0)
)
) * sell.profitpershare ) AS deduction
FROM x buy
,x sell
WHERE sell.round > buy.round
AND buy.action = 'BUY'
AND sell.action = 'SELL'
GROUP BY buy.round, buy.shares, buy.profitpershare
) AS y
And the result:
round | net
-------+-----
1 | 780
2 | 420
(2 rows)
To break it down into pieces, I started with this data set:
CREATE TABLE AuctionResults( round int, action varchar(4), shares int, profitpershare int);
INSERT INTO AuctionResults VALUES(1, 'BUY', 6, 200);
INSERT INTO AuctionResults VALUES(2, 'BUY', 5, 100);
INSERT INTO AuctionResults VALUES(2, 'SELL',-2, 50);
INSERT INTO AuctionResults VALUES(3, 'SELL',-5, 80);
INSERT INTO AuctionResults VALUES(4, 'SELL', -4, 150);
select * from auctionresults;
round | action | shares | profitpershare
-------+--------+--------+----------------
1 | BUY | 6 | 200
2 | BUY | 5 | 100
2 | SELL | -2 | 50
3 | SELL | -5 | 80
4 | SELL | -4 | 150
(5 rows)
The query in the "WITH" clause adds some running totals to the table.
"previous_net_shares" indicates how many shares are available to sell before the current record. This also tells me how many 'SELL' shares I need to skip before I can start allocating it to this 'BUY'.
"previous_sells" is a running count of the number of "SELL" shares encountered, so the difference between two "previous_sells" indicates the number of 'SELL' shares used in that time.
round | action | shares | profitpershare | previous_net_shares | previous_sells
-------+--------+--------+----------------+---------------------+----------------
1 | BUY | 6 | 200 | 0 | 0
2 | BUY | 5 | 100 | 6 | 0
2 | SELL | 2 | 50 | 11 | 0
3 | SELL | 5 | 80 | 9 | 2
4 | SELL | 4 | 150 | 4 | 7
(5 rows)
With this table, we can do a self-join where each "BUY" record is associated with each future "SELL" record. The result would look like this:
SELECT buy.round, buy.shares, buy.profitpershare
,sell.round AS sellRound, sell.shares AS sellShares, sell.profitpershare AS sellProfitpershare
FROM x buy
,x sell
WHERE sell.round > buy.round
AND buy.action = 'BUY'
AND sell.action = 'SELL'
round | shares | profitpershare | sellround | sellshares | sellprofitpershare
-------+--------+----------------+-----------+------------+--------------------
1 | 6 | 200 | 2 | 2 | 50
1 | 6 | 200 | 3 | 5 | 80
1 | 6 | 200 | 4 | 4 | 150
2 | 5 | 100 | 3 | 5 | 80
2 | 5 | 100 | 4 | 4 | 150
(5 rows)
And then comes the crazy part that tries to calculate the number of shares available to sell in that order versus the number of shares not yet sold for a given buy. Here are some notes to help follow it. The GREATEST calls with 0 are just saying that we can't allocate any shares if we are in the negative.
-- allocated sells
sell.previous_sells - buy.previous_sells
-- shares yet to sell for this buy, if < 0 then 0
GREATEST(buy.shares - (sell.previous_sells - buy.previous_sells), 0)
-- number of sell shares that need to be skipped
buy.previous_net_shares
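As a worked check of that expression, take the Round 1 BUY (shares 6, previous_net_shares 0, previous_sells 0) against the Round 3 SELL (shares 5, previous_sells 2) from the table above:

LEAST(5, GREATEST(6 - (2 - 0), 0), GREATEST(5 + (2 - 0) - 0, 0)) = LEAST(5, 4, 7) = 4

so 4 of the 5 shares sold in Round 3 are charged to the Round 1 buy at 80 per share. The Round 2 SELL contributes LEAST(2, 6, 2) = 2 shares at 50, and the Round 4 SELL contributes nothing because GREATEST(6 - 7, 0) = 0, giving a deduction of 2*50 + 4*80 = 420 and a net of 6*200 - 420 = 780, matching the result above.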
Thanks to David for his assistance