Extract JSON in column and place in column - SQL

I have a table:
|id|outgoing|
|1939|{"a945248027_14454878":"processing","old.a945248027_14454878":"cancelled","old.a945248027_454878":"cancelled"}|
|1000|{"a945248027_154878":"processing","new.a945248027_878":"cancelled"}|
I want to extract the content of the outgoing column like this:
|id|outgoing|status|amount|
|1939|{"a945248027_14454878":"processing","old.a945248027_14454878":"cancelled","old.a945248027_454878":"cancelled"}|processing|14454878|
|1939|{"a945248027_14454878":"processing","old.a945248027_14454878":"cancelled","old.a945248027_454878":"cancelled"}|cancelled|14454878|
|1939|{"a945248027_14454878":"processing","old.a945248027_14454878":"cancelled","old.a945248027_454878":"cancelled"}|cancelled|454878|
|1000|{"a945248027_154878":"processing","new.a945248027_878":"cancelled"}|processing|154878|
|1000|{"a945248027_154878":"processing","new.a945248027_878":"cancelled"}|cancelled|878|
I have written the query
select CAST(substring(key from '_([^_]+)$') AS INTEGER) as amount,
       substring(outgoing::varchar from ':"([a-z]*)"') as status
from table1
cross join lateral json_object_keys(outgoing) as j(key);
The query repeats the first status for every row in the status column.
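The repeated status happens because the regex runs over the whole outgoing string for every key, so it always matches the first value. json_each() returns the matching value alongside each key, so the status can be read directly instead of re-scanned. A runnable sketch of the idea, transplanted to SQLite's json_each via Python (PostgreSQL's json_each(outgoing) works the same way; the substr/instr split assumes exactly one underscore per key, where the regex version takes everything after the last underscore):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table table1 (id integer, outgoing text)")
conn.executemany(
    "insert into table1 values (?, ?)",
    [
        (1939, '{"a945248027_14454878":"processing",'
               '"old.a945248027_14454878":"cancelled",'
               '"old.a945248027_454878":"cancelled"}'),
        (1000, '{"a945248027_154878":"processing",'
               '"new.a945248027_878":"cancelled"}'),
    ],
)

# json_each() yields one row per key with BOTH the key and its value,
# so the status is j.value rather than a regex over the whole string.
rows = conn.execute("""
    select t.id,
           j.value as status,
           cast(substr(j.key, instr(j.key, '_') + 1) as integer) as amount
    from table1 t, json_each(t.outgoing) j
""").fetchall()
```

Each input row fans out into one result row per JSON key, with the status taken from that key's own value.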

Related

Fetch latest message from SQL DB based on timestamp, ignoring failed ones

Here in my DB, for shipment 12456 there are many records with status code 1000 (success) or 1001 (failed), and I want to pull a report for a given list of shipment IDs where:
If the latest record for a shipment has status code 1000, ignore it; otherwise display its data in the select query. I should also be able to add an additional filter based on the message, so that if a record contains a specific text it is ignored in the report.
How do I modify the query? I am new to this area.
select createdDate, status_code, message
from SHIPMENT_DATA
where shipmentid in (
    '12456'
)
sample data:
TimeStamp                        ShipmentId StatusCode Message
03-NOV-20 07.15.28.951000000 AM  12456      1000       error message
03-NOV-20 06.15.28.951000000 AM  222        1001       error message
03-NOV-20 05.15.28.951000000 AM  12456      1001       Success
03-NOV-20 04.15.28.951000000 AM  333        1000       Success
Here, shipment 12456 has a latest message with status code 1000, so don't pull it into the report; display the remaining 2 records.
If you want the detail for unsuccessful shipments, you can use first_value():
select shipmentid, createdDate, status_code, message
from (select sd.*,
             first_value(status_code) over (partition by shipmentid order by createdDate desc) as last_status_code
      from SHIPMENT_DATA sd
      where shipmentid in ('12456')
     ) sd
where last_status_code <> 1000;
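A quick sanity check of the first_value() approach, transplanted to SQLite via Python with simplified ISO timestamps; the IN-list filter is dropped here so every shipment in the sample gets checked (only shipment 222, whose latest record is 1001, should survive):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""create table SHIPMENT_DATA
                (createdDate text, shipmentid text,
                 status_code integer, message text)""")
conn.executemany("insert into SHIPMENT_DATA values (?, ?, ?, ?)", [
    ("2020-11-03 07:15:28", "12456", 1000, "error message"),
    ("2020-11-03 06:15:28", "222",   1001, "error message"),
    ("2020-11-03 05:15:28", "12456", 1001, "Success"),
    ("2020-11-03 04:15:28", "333",   1000, "Success"),
])

# Shipments whose LATEST record is 1000 (success) drop out entirely.
rows = conn.execute("""
    select shipmentid, createdDate, status_code, message
    from (select sd.*,
                 first_value(status_code) over
                     (partition by shipmentid order by createdDate desc)
                     as last_status_code
          from SHIPMENT_DATA sd
         ) sd
    where last_status_code <> 1000
""").fetchall()
```

Shipments 12456 and 333 both have a latest status of 1000, so all of their rows are filtered out, not just the latest one.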

SQL query to count rows per id by selecting the range between 2 min dates in different columns

temp
|id|received |changed |ur|context|
|33|2019-02-18|2019-11-18|
|33|2019-08-02|2019-09-18|
|33|2019-12-27|2019-12-18|
|18|2019-07-14|2019-10-18|
|50|2019-03-20|2019-05-26|
|50|2019-01-19|2019-06-26|
temp2
|id|min_received |min_changed |
|33|2019-02-18 |2019-09-18 |
|18|2019-04-14 |2019-09-18 |
|50|2019-01-11 |2019-05-25 |
The 'temp' table shows users who received a request for an activity. A user can make multiple requests, hence the received column has multiple dates showing when the requests were received. The 'changed' column shows when the status was changed; there are also multiple values for it.
There is another table, temp2, which shows the min dates for received and changed. I need to count total requests per user between the range of values in temp2.
The expected result should look like this (the third row of id 33 should not be selected because its received date is after the min changed date):
|id|total_requests_sent|
|33|2 |
|18|1 |
|50|2 |
Tried Creating 2 CTE's for both MIN date values and joined with the original one
I may be really over-simplifying your task, but wouldn't something like this work?
select t.id, count(*) as total_requests_sent
from temp t
join temp2 t2 on t.id = t2.id
where t.received between t2.min_received and t2.min_changed
group by t.id
I believe the output will match your example on the use case you listed, but with a limited dataset it's hard to be sure.
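With the sample data above it can be checked directly; a minimal run in Python/SQLite (ISO date strings compare correctly as text, so BETWEEN works without date conversion):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table temp (id integer, received text, changed text)")
conn.execute("create table temp2 (id integer, min_received text, min_changed text)")
conn.executemany("insert into temp values (?, ?, ?)", [
    (33, "2019-02-18", "2019-11-18"), (33, "2019-08-02", "2019-09-18"),
    (33, "2019-12-27", "2019-12-18"), (18, "2019-07-14", "2019-10-18"),
    (50, "2019-03-20", "2019-05-26"), (50, "2019-01-19", "2019-06-26"),
])
conn.executemany("insert into temp2 values (?, ?, ?)", [
    (33, "2019-02-18", "2019-09-18"),
    (18, "2019-04-14", "2019-09-18"),
    (50, "2019-01-11", "2019-05-25"),
])

# Count only requests received inside each user's [min_received, min_changed] window.
rows = conn.execute("""
    select t.id, count(*) as total_requests_sent
    from temp t
    join temp2 t2 on t.id = t2.id
    where t.received between t2.min_received and t2.min_changed
    group by t.id
    order by t.id
""").fetchall()
```

The 2019-12-27 row for id 33 falls outside its window and is excluded, matching the expected counts.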

2 max statements in one Oracle SQL query

Is there a way of having Oracle bring back the max value for 2+ fields in one SQL query?
I have code to search for all instances of "ID38" in the notes field, then bring back the highest job_log_number with "ID38" in it.
This however might not be the final status on the job. So I would like to create another column to bring back the highest job_log_number value on the job, regardless of whether it contains "ID38" or not.
Existing Code:
select
j1.job_number,
job_status_log.log_text,
job_status_log.job_log_number
from
job j1
inner join job_status_log on j1.job_number = job_status_log.job_number
where
job_status_log.job_log_number =
(select max (job_status_log.job_log_number)
from
job j2
inner join job_status_log on j2.job_number = job_status_log.job_number
where j1.job_number = j2.job_number and
job_status_log.log_text LIKE '%to Tree Team 1 (ID38)%')
order by
j1.job_number, job_status_log.job_log_number
Current Results
Job Number Log Text Log Number
123 ID38 3
193 ID38 4
392 ID38 1
What I would like to end up with is
Job Number Log Text Log Number Highest Log Number On Job
123 ID38 3 3
193 ID38 4 5
392 ID38 1 4
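One way to get both maxima in a single pass is to compute window maxima per job before applying the ID38 filter; Oracle supports the same MAX(...) OVER (PARTITION BY ...) syntax. A sketch in Python/SQLite with made-up log rows shaped like the desired output (table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""create table job_status_log
                (job_number integer, log_text text, job_log_number integer)""")
conn.executemany("insert into job_status_log values (?, ?, ?)", [
    (123, "created", 1),               (123, "to Tree Team 1 (ID38)", 3),
    (193, "to Tree Team 1 (ID38)", 4), (193, "closed", 5),
    (392, "to Tree Team 1 (ID38)", 1), (392, "rescheduled", 4),
])

# Compute both per-job maxima BEFORE filtering, then keep the highest ID38 row:
# highest_log_number covers all rows, highest_id38 only the ID38 ones.
rows = conn.execute("""
    select job_number, log_text, job_log_number, highest_log_number
    from (select job_number, log_text, job_log_number,
                 max(job_log_number) over (partition by job_number)
                     as highest_log_number,
                 max(case when log_text like '%(ID38)%' then job_log_number end)
                     over (partition by job_number) as highest_id38
          from job_status_log) x
    where log_text like '%(ID38)%' and job_log_number = highest_id38
    order by job_number
""").fetchall()
```

Because the window maxima are computed over the unfiltered rows, the final column still reflects logs without ID38 (e.g. job 193 shows 5 even though its ID38 row is 4).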

I need to extract duplicate information from the same table, however it is not the entire information in the row that is the same

I picked up that the system is creating duplicate billing on a policy level when a user applies a cancellation on an accidental claim.
I need to extract all the duplicated billing transactions from the transactions table; however, the entire row is not duplicated, just a few fields, as the billing increases the balance and also creates a new GID and a new contract movement.
The billing movementid is 101
TransactionType id is 100
Matching information will be ContractGid, AccountingPeriodID, Amount
Fields that will be different Billinggid, Balance.
I was hoping I could just write a where statement, for example
select *
from LIF_TMS_T_FinancialTransaction
where ContractGID = 'DF31A6BD-FC48-4722-A820-A66500C1E136'
and accountingperiodid = accountingperiodid
or any of the other matches and then the extract should only pull
select *
from LIF_TMS_T_FinancialTransaction
where ContractGID = 'DF31A6BD-FC48-4722-A820-A66500C1E136'
and accountingperiodid = accountingperiodid
GID ContractGID ContractMovementCount MovementDate ContractMovementID AccountingPeriodID Amount Balance
31E7720D-FE34-47AD-92B3-AA0300B13FA5 DF31A6BD-FC48-4722-A820-A66500C1E136 2 2019-03-01 00:00:00.000 101 201649 -61 -61
AB46BC52-9CD3-4C9D-BEB2-AA1500F5A830 DF31A6BD-FC48-4722-A820-A66500C1E136 5 2019-03-01 00:00:00.000 101 201649 -61 -122
AE4C06E1-B1E8-41EE-88A3-AA070113C8B1 DF31A6BD-FC48-4722-A820-A66500C1E136 2 2019-03-02 00:00:00.000 810 201649 61 -61
Based on the above table I would only want the first two records to be extracted.
This is just a sample
Since you didn't post your table definition, I can only offer general guidance. If you know the columns that are duplicated, something like this will do it:
with t as (
    select *, rc = count(*) over (partition by <duplicate columns>)
    from mytable
)
select * from t where rc > 1 -- to show duplicate rows
To delete the duplicates while keeping one row of each group, number the rows instead:
with t as (
    select *, rn = row_number() over (partition by <duplicate columns> order by GID)
    from mytable
)
delete from t where rn > 1
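The count-over-partition pattern can be checked against the three sample rows in Python/SQLite; here the GIDs and ContractGID are shortened placeholders for readability, and the partition columns are the matching fields from the question (ContractGID, AccountingPeriodID, Amount):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""create table LIF_TMS_T_FinancialTransaction
                (GID text, ContractGID text, ContractMovementID integer,
                 AccountingPeriodID integer, Amount integer, Balance integer)""")
conn.executemany(
    "insert into LIF_TMS_T_FinancialTransaction values (?, ?, ?, ?, ?, ?)", [
    ("G1", "DF31A6BD", 101, 201649, -61,  -61),
    ("G2", "DF31A6BD", 101, 201649, -61, -122),
    ("G3", "DF31A6BD", 810, 201649,  61,  -61),
])

# Rows sharing ContractGID + AccountingPeriodID + Amount count as duplicates,
# even though GID and Balance differ.
rows = conn.execute("""
    with t as (
        select *, count(*) over
            (partition by ContractGID, AccountingPeriodID, Amount) as rc
        from LIF_TMS_T_FinancialTransaction
    )
    select GID from t where rc > 1 order by GID
""").fetchall()
```

Only the first two rows come back; the third has a different Amount (and movement id), so its partition count is 1.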

Outer table reference in sub-select

I have two tables, one that represents stock trades:
Blotter
TradeDate Symbol Shares Price
2014-09-02 ABC 100 157.79
2014-09-10 ABC 200 72.50
2014-09-16 ABC 100 36.82
and one that stores a history of stock splits for all symbols:
Splits
SplitDate Symbol Factor
2014-09-08 ABC 2
2014-09-15 ABC 2
2014-09-20 DEF 2
I am trying to write a report that reflects trades and includes what their current split adjustment factor should be. For these table values, I would expect the report to look like:
TradeDate Symbol Shares Price Factor
2014-09-02 ABC 100 157.79 4
2014-09-10 ABC 200 72.50 2
2014-09-16 ABC 100 36.82 1
The first columns are taken straight from Blotter - the Factor should represent the split adjustments that have taken place since the trade occurred (the Price is not split-adjusted).
Complicating matters is that each symbol could have multiple splits, which means I can't just OUTER JOIN the Splits table or I will start duplicating rows.
I have a subquery that I adapted from https://stackoverflow.com/a/3912258/3063706 to allow me to calculate the product of rows, grouped by symbol, but how do I only return the product of all Splits records with SplitDates occurring after the TradeDate?
A query like the following
SELECT tb.TradeDate, tb.Symbol, tb.Shares, tb.Price, ISNULL(s.Factor, 1) AS Factor
FROM Blotter tb
LEFT OUTER JOIN (
SELECT Symbol, EXP(Factor) AS Factor
FROM
(SELECT Symbol, SUM(LOG(ABS(NULLIF(Factor, 0)))) AS Factor
FROM Splits s
WHERE s.SplitDate > tb.TradeDate -- tb is unknown here
GROUP BY Symbol
) splits) s
ON s.Symbol = tb.Symbol
returns the error "Msg 4104, Level 16, State 1, Line 1 The multi-part identifier "tb.TradeDate" could not be bound."
Without the inner WHERE clause I get results like:
TradeDate Symbol Shares Price Factor
2014-09-02 ABC 100 157.79 4
2014-09-10 ABC 200 72.50 4
2014-09-16 ABC 100 36.82 4
Update The trade rows in Blotter are not guaranteed to be unique, so I think that rules out one suggested solution using a GROUP BY.
One way without changing the logic too much is to put the factor calculation into a table valued function:
create function dbo.FactorForDate(
    @Symbol char(4), @TradeDate datetime
) returns table as
return (
    select
        exp(Factor) as Factor
    from (
        select
            sum(log(abs(nullif(Factor, 0)))) as Factor
        from
            Splits s
        where
            s.SplitDate > @TradeDate and
            s.Symbol = @Symbol
    ) splits
);
select
tb.TradeDate,
tb.Symbol,
tb.Shares,
tb.Price,
isnull(s.Factor, 1) as Factor
from
Blotter tb
outer apply
dbo.FactorForDate(tb.Symbol, tb.TradeDate) s;
To do it in a single statement is going to be something like:
select
tb.TradeDate,
tb.Symbol,
tb.Shares,
tb.Price,
isnull(exp(sum(log(abs(nullif(factor, 0))))), 1) as Factor
from
Blotter tb
left outer join
Splits s
on s.Symbol = tb.Symbol and s.SplitDate > tb.TradeDate
group by
tb.TradeDate,
tb.Symbol,
tb.Shares,
tb.Price;
This will probably perform better if you can get it to work.
Apologies for any syntax errors, don't have access to SQL at the moment.
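The exp(sum(log(...))) construction is only needed because SQL has no built-in product aggregate; from a host language you can register one. A sketch of the single-statement shape in Python/SQLite, with a custom product aggregate standing in for the log/exp workaround and the sample Blotter/Splits data from the question:

```python
import sqlite3

class Product:
    """Aggregate that multiplies its non-NULL inputs; empty input yields 1."""
    def __init__(self):
        self.value = 1
    def step(self, v):
        if v is not None:
            self.value *= v
    def finalize(self):
        return self.value

conn = sqlite3.connect(":memory:")
conn.create_aggregate("product", 1, Product)
conn.execute("create table Blotter (TradeDate text, Symbol text, Shares integer, Price real)")
conn.execute("create table Splits (SplitDate text, Symbol text, Factor integer)")
conn.executemany("insert into Blotter values (?, ?, ?, ?)", [
    ("2014-09-02", "ABC", 100, 157.79),
    ("2014-09-10", "ABC", 200, 72.50),
    ("2014-09-16", "ABC", 100, 36.82),
])
conn.executemany("insert into Splits values (?, ?, ?)", [
    ("2014-09-08", "ABC", 2), ("2014-09-15", "ABC", 2), ("2014-09-20", "DEF", 2),
])

# Multiply every split factor dated after the trade; trades with no later
# split join only a NULL row, so the aggregate falls through to 1.
rows = conn.execute("""
    select tb.TradeDate, tb.Symbol, tb.Shares, tb.Price,
           product(s.Factor) as Factor
    from Blotter tb
    left outer join Splits s
      on s.Symbol = tb.Symbol and s.SplitDate > tb.TradeDate
    group by tb.TradeDate, tb.Symbol, tb.Shares, tb.Price
    order by tb.TradeDate
""").fetchall()
```

Note that this inherits the GROUP BY caveat from the question: if two trade rows are completely identical they collapse into one group, which is why the table-valued-function/OUTER APPLY version may still be preferable.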