How do I convert a user-generated Oracle EBS spreadsheet into a SQL-driven script?

I need to convert a manually created spreadsheet into a SQL report. The report is generated via Oracle Forms (one of the BI Suite apps).
There's an inbox that many tickets come into, each identified by a "Request Number". Each Request Number is a help desk ticket in the system.
A full listing of the spreadsheet's columns (with descriptions):
Request No.
Modules
Submitted (timestamp when the request was submitted)
Supervisor (timestamp when the request was approved by the supervisor)
Average Supervisor Approval Duration
Responsibility
Average Resp. Approval Duration
Training (timestamp when the request was approved for completing training)
Average Training Approval Duration
Configuration (depends on whether it's PRISM or iProcurement)
Average Config. Approval Duration
Approved
Total Overall Duration
Security
Average Security Approval Duration
SOD
Average SOD Approval Duration
What I'm trying to do is generate some or all of this using a SQL script.
One of my issues is that, for a specific "Request Number", I'll often get redundant and conflicting data in the database.
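For instance, a count like the one below shows which requests have more than one activity row, which is where I suspect the duplication comes from (just a diagnostic sketch; the duplication may of course live in another of the tables listed further down):

SELECT reg_request_id, COUNT(*) AS rows_per_request
FROM hhs_umx_resp_activity
GROUP BY reg_request_id
HAVING COUNT(*) > 1
ORDER BY rows_per_request DESC;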
Below is my query so far.
SELECT hur.reg_request_id AS "Request No",
       frt.responsibility_name AS "Module",
       TO_CHAR(hur.creation_date, 'DD-Mon-YYYY HH24:MI:SS') AS "Submitted",
       TO_CHAR(hur.last_update_date, 'DD-Mon-YYYY HH24:MI:SS') AS "Supervisor",
       ROUND((hur.last_update_date - hur.creation_date), 3) || ' days' AS "Avg Supervisor Appr Duration",
       NULL AS "Responsibility", /* was: hura.creation_date AS "Responsibility" */
       NULL AS "Avg Resp Appr Duration",
       NULL AS "Training",
       NULL AS "Avg Training Appr Duration",
       hur.last_update_date AS "Security",
       NULL AS "Avg Security Appr Duration",
       NULL AS "SOD",
       NULL AS "Average SOD Approval Duration",
       NULL AS "Configuration",
       NULL AS "Avg Config Appr Duration",
       hurr.last_update_date AS "Approved",
       ROUND(hurr.last_update_date - hur.creation_date, 2) AS "Total Overall Duration",
       NULL AS "Notes"
FROM hhs_umx_reg_services hur
JOIN fnd_responsibility_vl frt
  ON (hur.responsibility_id = frt.responsibility_id
      AND hur.responsibility_application_id = frt.application_id)
JOIN hhs_umx_reg_requests hurr
  ON (hurr.reg_request_id = hur.reg_request_id)
/* JOIN hhs_umx_resp_activity hura */
LEFT OUTER JOIN hhs_umx_resp_activity hura
  ON (hur.reg_request_id = hura.reg_request_id) -- added ON
WHERE EXISTS (SELECT NULL
              FROM hhs_umx_resp_activity act -- renamed so it does not shadow hura above
              WHERE act.created_by = hur.created_by
                AND act.creation_date > hur.creation_date)
  AND hur.reg_request_id IN ('263428', '263458', '263473',
                             '264717', '263402', '263404',
                             '262671', '263229', '263268')
ORDER BY hur.reg_request_id ASC
Here are the schemas:
HHS_UMX_RESP_ACTIVITY
HHS_UMX_REG_SERVICES
FND_RESPONSIBILITY_VL
HHS_UMX_REG_REQUESTS
thanks

As a simple solution, you can build the INSERT statements in Excel itself by concatenating cell values into a SQL command, with a formula something like ="INSERT INTO my_table (req_no, module) VALUES ('" & A2 & "', '" & B2 & "');" (my_table standing in for your target table). Then just fill the formula down so it repeats for each row.
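A staging table to receive those rows might be as simple as this (a sketch; the name and column sizes are hypothetical):

CREATE TABLE request_report (
  request_no VARCHAR2(20),
  module     VARCHAR2(80)
);

Each pasted formula result then becomes one INSERT INTO request_report ... VALUES ... statement you can run as a batch.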
Another simple solution is to save the spreadsheet as CSV (play around with the exact CSV format to see what suits you); then you may be able to import it straight into your SQL table and work with it there.
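If the CSV can be placed on the database server, one Oracle-side option is an external table, which lets you query the file directly and then INSERT ... SELECT into a real table. This is only a sketch: the directory object, file name, and column list are assumptions.

CREATE TABLE request_report_ext (
  request_no VARCHAR2(20),
  module     VARCHAR2(80),
  submitted  VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir   -- hypothetical DIRECTORY object pointing at the CSV's folder
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1                     -- skip the header row
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('requests.csv')    -- hypothetical file name
)
REJECT LIMIT UNLIMITED;

After that you can SELECT from request_report_ext like any other table.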
There are, of course, more advanced solutions, like the one TommCatt pointed out.

I have done this many times using Perl. There are packages for reading spreadsheets, so just write a script to generate the INSERT statements needed. Then write them into a nice .sql text file which can be executed by SQL Developer or whatever IDE you're using, or execute it directly from the Perl script.
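The generated file might look something like this (a sketch; request_report is a hypothetical staging table, as in the Excel approach above, and the values are placeholders):

-- requests.sql, emitted by the script
INSERT INTO request_report (request_no, module) VALUES ('263428', 'PRISM');
INSERT INTO request_report (request_no, module) VALUES ('263458', 'iProcurement');
COMMIT;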

It is hard to diagnose without sample tables (with only a few rows) reproducing the issue.
I think your duplicate data comes from a wrong join.
You could try to completely remove hhs_umx_resp_activity from the query (I see that you commented places where it is used).
Then, if it continues, it has to be related to the join on hhs_umx_reg_requests (you might be able to join on an additional column, or replace this join with something else, an outer apply for example), as sketched below.
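For instance, if the duplicates come from several hhs_umx_resp_activity rows per request, one sketch (assuming the latest activity row is the one you want) is to restrict the outer join to that row:

LEFT OUTER JOIN hhs_umx_resp_activity hura
  ON hura.reg_request_id = hur.reg_request_id
 AND hura.creation_date = (SELECT MAX(a2.creation_date)
                           FROM hhs_umx_resp_activity a2
                           WHERE a2.reg_request_id = hura.reg_request_id)

Note that if two activity rows share the same MAX(creation_date) you would still get duplicates, so a tie-breaking column might be needed.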

Related

DBSQL_SQL_INTERNAL_DB_ERROR SQL error 2048

I have to join two tables, ACDOCA and BKPF. I have written the following code for it.
SELECT a~rbukrs,
       a~racct,
       a~bldat,
       a~blart,
       a~kunnr,
       a~belnr,
       a~sgtxt,
       b~xblnr,
       a~budat,
       a~hsl,
       a~prctr
  INTO TABLE @it_final
  FROM acdoca AS a
  LEFT OUTER JOIN bkpf AS b
    ON a~rbukrs = b~bukrs
   AND a~gjahr = b~gjahr
  WHERE a~rbukrs IN @s_bukrs
    AND a~kunnr IN @s_kunnr
    AND a~budat IN @s_budat
    AND a~belnr IN @s_belnr
    AND a~rldnr IN @s_rldnr
    AND a~blart = 'DR' OR a~blart = 'ZK' OR a~blart = 'UE'.
Facing the following errors:
Runtime error: DBSQL_SQL_INTERNAL_DB_ERROR
SQL error "SQL code: 2048" occurred while accessing table "ACDOCA".
Short Text: An exception has occurred in class "CX_SY_OPEN_SQL_DB"
How do I resolve this? Please help.
A few things:
Selecting directly from the database tables is error prone (e.g. you'll forget keys while joining) and you have to deal with those terrible German abbreviations (e.g. Belegnummer -> belnr). For quite some time now there have been CDS views on top, such as I_JournalEntryItem, with associations and proper English names for those fields; if you can use them, I would (they're also C1 released).
As already pointed out by xQBert, the query probably does not work as intended: AND has precedence over OR, so your query basically returns everything from ACDOCA, multiplied by everything from BKPF, which likely leads to the database error you've posted.
With range queries you might still get a lot of results (like billions of entries, depending on your company's size); you should either limit the query with UP TO, implement some pagination, or COUNT(*) first and show an error to the user if the result set is too large.
I would write that like this:
TYPES:
  BEGIN OF t_filters,
    company_codes        TYPE RANGE OF bukrs,
    customers            TYPE RANGE OF kunnr,
    document_dates       TYPE RANGE OF budat,
    accounting_documents TYPE RANGE OF fis_belnr,
    ledgers              TYPE RANGE OF rldnr,
  END OF t_filters.

DATA(filters) = VALUE t_filters(
  " filter here
).

SELECT FROM I_JournalEntryItem
  FIELDS
    CompanyCode,
    GLAccount,
    DocumentDate,
    AccountingDocumentType,
    Customer,
    AccountingDocument,
    DocumentItemText,
    \_JournalEntry-DocumentReferenceID,
    PostingDate,
    AmountInCompanyCodeCurrency,
    ProfitCenter
  WHERE CompanyCode IN @filters-company_codes
    AND Customer IN @filters-customers
    AND DocumentDate IN @filters-document_dates
    AND AccountingDocument IN @filters-accounting_documents
    AND Ledger IN @filters-ledgers
    AND AccountingDocumentType IN ( 'DR', 'ZK', 'UE' )
  INTO TABLE @DATA(sales_orders)
  UP TO 100 ROWS.
(As a bonus you'll get proper DCL authorization checks)
2048 is (or can be) a memory allocation error: too much data being returned. Given that, this line is highly suspect:
AND a~blart = 'DR' OR a~blart = 'ZK' OR a~blart = 'UE'.
I'd consider this instead; otherwise ALL blart ZK and UE records are returned regardless of customer, year, company, etc.:
SELECT a~rbukrs,
       a~racct,
       a~bldat,
       a~blart,
       a~kunnr,
       a~belnr,
       a~sgtxt,
       b~xblnr,
       a~budat,
       a~hsl,
       a~prctr
  INTO TABLE @it_final
  FROM acdoca AS a
  LEFT OUTER JOIN bkpf AS b
    ON a~rbukrs = b~bukrs
   AND a~gjahr = b~gjahr
  WHERE a~rbukrs IN @s_bukrs
    AND a~kunnr IN @s_kunnr
    AND a~budat IN @s_budat
    AND a~belnr IN @s_belnr
    AND a~rldnr IN @s_rldnr
    AND a~blart IN ( 'DR', 'ZK', 'UE' ).
However, if you really did mean to return all blart ZK and UE records, plus only those DR records that match the defined parameters, then you're simply asking for too much data from the system and need to limit your result set, and somehow let the user know only a limited set is being returned due to data volume.
I'd also ensure your join on keys is sufficient. Fiscal year and company code are an incomplete key for BKPF, so this may be a semi-cartesian join contributing to the data bloat; joining the document number as well, e.g. AND a~belnr = b~belnr, would likely help. In a multi-tenant database you may also need to account for mandt, though Open SQL normally handles client selection automatically. In short, this looks to be an incomplete join on the key, so perhaps more is needed there as well.

SQL Looking up prior record information

Here is my situation. I have a series of claims for a person. We occasionally get a duplicate claim, which is given a DUP error code and denied with a zero dollar amount. What I am trying to do is look up the original claim's units and billed amount. If the duplicate and the original claim have the same units and billed amount, I intend to ignore it (or at least label it as NOT a potential re-bill). If the units and/or the billed amount are different, that claim will be labeled as a potential re-bill.
I've got a function that correctly finds the original claim's primary key value, and the base query runs in less than a second. However, when I try to link that dataset back to the tables, it crushes my run time to the point of uselessness. What I don't understand is why the function alone runs so quickly while linking it back bogs things down so much; we are talking about a dataset of 140-ish claims over a year of activity.
If anyone could offer some insight or has a better way to accomplish this I would be obliged.
SELECT pre.*
       --, st.unitsofservice AS OrigUnits
       --I only include the line above if the link to the servicetransaction table on the last line of the query is active
FROM
(
    SELECT Sub.*,
           dbo.cf_DuplicateServiceSTID(sub.servicetransactionid) AS PaidSvc
           --the line above is the function returning the primary key value for the original claim
    FROM
    (
        SELECT Pre.servicetransactionID,
               st.servicedate, st.individualid, st.agencyid, st.servicecode,
               st.placeofserviceid, st.unitsofservice,
               dbo.sortmodifiers(st.modifiercodeid, st.modifiercodeid2,
                                 st.modifiercodeid3, st.modifiercodeid4) AS Modifiers,
               bd.billedamount,
               a.upi,
               b.name
        FROM (SELECT pmt.servicetransactionid
              FROM pmtadjdetail pmt
              WHERE SUBSTRING(pmt.reasoncodes, 1, 5) = 'DUP') Pre
        JOIN servicetransaction st ON pre.servicetransactionid = st.servicetransactionid
        JOIN billdetail bd ON st.servicetransactionid = bd.servicetransactionid
        JOIN agency a ON st.agencyid = a.agencyid
        JOIN business b ON a.businessid = b.businessid
        WHERE st.servicedate BETWEEN @StartDate AND @EndDate
          AND st.agencyid = IIF(@AgencyID IS NULL, st.agencyid, @AgencyID)
    ) Sub
    JOIN individual i ON sub.individualid = i.individualid
    JOIN enrollment e ON sub.individualid = e.individualid
    WHERE e.enrollmenttype <> 'p'
) Pre
--join servicetransaction st on pre.paidsvc = st.servicetransactionid
--If I remove the comment from the line above my run time increases to the point of uselessness
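One set-based alternative that often avoids the scalar-UDF penalty is to inline the original-claim lookup with OUTER APPLY. The sketch below assumes the original claim is an earlier claim with the same individual, service date, and service code, which is presumably roughly what cf_DuplicateServiceSTID encapsulates; adjust the matching criteria to whatever the function really does:

SELECT dup.servicetransactionid,
       dup.unitsofservice,
       dup.billedamount,
       orig.unitsofservice AS OrigUnits,
       orig.billedamount AS OrigBilled,
       CASE WHEN orig.unitsofservice = dup.unitsofservice
             AND orig.billedamount = dup.billedamount
            THEN 'NOT a potential re-bill'
            ELSE 'Potential re-bill'
       END AS rebill_flag
FROM (SELECT st.servicetransactionid, st.individualid, st.servicedate,
             st.servicecode, st.unitsofservice, bd.billedamount
      FROM pmtadjdetail pmt
      JOIN servicetransaction st ON st.servicetransactionid = pmt.servicetransactionid
      JOIN billdetail bd ON bd.servicetransactionid = st.servicetransactionid
      WHERE SUBSTRING(pmt.reasoncodes, 1, 5) = 'DUP') AS dup
OUTER APPLY (SELECT TOP (1) st.unitsofservice, bd.billedamount
             FROM servicetransaction st
             JOIN billdetail bd ON bd.servicetransactionid = st.servicetransactionid
             WHERE st.individualid = dup.individualid
               AND st.servicedate = dup.servicedate
               AND st.servicecode = dup.servicecode
               AND st.servicetransactionid <> dup.servicetransactionid
             ORDER BY st.servicetransactionid) AS orig;

Because the APPLY is a plain correlated subquery rather than an opaque function call, the optimizer can usually turn it into a join instead of executing the lookup row by row.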

Access 2013 Query, DateDiff from Consecutive Rows TimeStamps

I'm having trouble successfully running a query on a single table in Access 2013, using SQL to compute a DateDiff between consecutive/sequential rows of timestamps, which track status changes in tickets going through our ticketing system.
The table, titled dbo_Master3_FieldHistory, has a field which records a timestamp each time a ticket's status changes. Unfortunately, it only includes one timestamp per change, meaning it doesn't inherently have a second timestamp for when the status changes again, which I need in order to run a DateDiff and calculate AGE for tickets, based on status.
I found a plausible solution for this on Stack Overflow, linked below. When I tried to implement this solution as-is, with minor adjustments, including adjustments for filtering out old data and particular fields, it just froze my Access program and never timed out (I had to force close Access).
Date Difference between consecutive rows
This is the basic code, translated from the linked Stack Overflow solution to fit this table's fields (I believe):
SELECT T.mrID, T.mrSEQUENCE, T.mrUSERID, T.mrFIELDNAME, T.mrNEWFIELDVALUE, T.mrOLDFIELDVALUE, T.mrTIMESTAMP, T.mrNextTIMESTAMP, DateDiff("s",T.mrTIMESTAMP, T.mrNextTIMESTAMP) AS STATUSTIME
FROM (
SELECT T1.mrID, T1.mrSEQUENCE, T1.mrUSERID, T1.mrFIELDNAME, T1.mrNEWFIELDVALUE, T1.mrOLDFIELDVALUE, T1.mrTIMESTAMP,
(SELECT MIN(mrTIMESTAMP)
FROM dbo_MASTER3_FIELDHISTORY AS T2
WHERE T2.mrID = T1.mrID
AND T2.mrTIMESTAMP > T1.mrTIMESTAMP
) As mrNextTIMESTAMP
FROM dbo_MASTER3_FIELDHISTORY AS T1
) AS T
This is the code that I wanted to use, to account for filtering out two particular fields, limiting the data to tickets (mrID) newer than 1/1/2018, and keeping only rows where mrFIELDNAME is mrSTATUS:
SELECT T.mrID, T.mrSEQUENCE, T.mrUSERID, T.mrFIELDNAME, T.mrNEWFIELDVALUE, T.mrOLDFIELDVALUE, T.mrTIMESTAMP, T.mrNextTIMESTAMP, DateDiff("s",T.mrTIMESTAMP, T.mrNextTIMESTAMP) AS STATUSTIME
FROM (
SELECT T1.mrID, T1.mrSEQUENCE, T1.mrUSERID, T1.mrFIELDNAME, T1.mrNEWFIELDVALUE, T1.mrOLDFIELDVALUE, T1.mrTIMESTAMP,
(SELECT MIN(mrTIMESTAMP)
FROM dbo_MASTER3_FIELDHISTORY AS T2
WHERE mrFIELDNAME = "mrSTATUS"
AND T2.mrID = T1.mrID
AND T2.mrTIMESTAMP > T1.mrTIMESTAMP
) As T1.mrNextTIMESTAMP
FROM dbo_MASTER3_FIELDHISTORY AS T1
WHERE mrFIELDNAME = "mrSTATUS"
AND mrTIMESTAMP >= #1/1/2018#
) AS T;
Access freezes when I try to run these queries. I've tried several ways but can't get it to work.
I was able to figure it out; thank you to those who took the time to read through this interesting challenge. Instead of using the second code set in the link provided, I utilized the first, and it worked beautifully. With some additions to the code to account for other filters/criteria, I have the results I need.
SELECT T1.mrID, T1.mrSEQUENCE, T1.mrUSERID, T1.mrFIELDNAME, T1.mrNEWFIELDVALUE, T1.mrOLDFIELDVALUE, T1.mrTIMESTAMP,
       MIN(T2.mrTIMESTAMP) AS mrNextTIMESTAMP,
       DATEDIFF("s", T1.mrTIMESTAMP, MIN(T2.mrTIMESTAMP)) AS TimeInStatus
FROM ((dbo_MASTER3_FIELDHISTORY AS T1
       LEFT JOIN dbo_MASTER3_FIELDHISTORY AS T2
         ON (T2.mrTIMESTAMP > T1.mrTIMESTAMP) AND (T1.mrID = T2.mrID))
      INNER JOIN dbo_MASTER3 AS T4 ON (T4.mrID = T1.mrID))
WHERE T4.mrSUBMITDATE >= #1/1/2018#
  AND T1.mrFIELDNAME = "mrSTATUS"
  AND NOT T4.mrSTATUS = "_Deleted_"
  AND NOT T4.mrSTATUS = "_SOLVED_"
  AND NOT T4.mrSTATUS = "_PENDING_SOLUTION_"
GROUP BY T1.mrID, T1.mrSEQUENCE, T1.mrUSERID, T1.mrFIELDNAME, T1.mrNEWFIELDVALUE, T1.mrOLDFIELDVALUE, T1.mrTIMESTAMP
ORDER BY T1.mrID, T1.mrTIMESTAMP;
Sincerely,
Kristopher

Most recent transaction date against a Works Order?

Apologies in advance for what will probably be a very stupid question, but I've been using Google to teach myself SQL after making the move from years of using Crystal Reports.
We have Works Orders which can have numerous transactions against them. I want to find the most recent one and have it returned against the Works Order number (which is a unique ID). I attempted to use MAX, but that just returns whatever the Transaction Date for that record is.
I think my struggles may be caused by a lack of understanding of grouping in SQL. In Crystal it was just 'choose what to group by', but in SQL I seem to be forced to group by all selected fields.
My ultimate goal is to be able to compare the planned end date of the Works Order ("we need to finish this job by then") vs when the last transaction was booked against the Works Order, so that I can create an OTIF KPI.
I've attached an image of what I'm currently seeing in SQL Server 2014 Management Studio and below is my attempt at the query.
SELECT wip.WO.WO_No
, wip.WO.WO_Type
, stock.Stock_Trans_Log.Part_No
, stock.Stock_Trans_Types.Description
, stock.Stock_Trans_Log.Qty_Change
, stock.Stock_Trans_Log.Trans_Date
, wip.WO.End_Date
, wip.WO.Qty - wip.WO.Qty_Stored AS 'Qty remaining'
, MAX(stock.Stock_Trans_Log.Trans_Date) AS 'Last Production Receipt'
FROM stock.Stock_Trans_Log
INNER JOIN production.Part
ON stock.Stock_Trans_Log.Part_No = production.Part.Part_No
INNER JOIN wip.WO
ON stock.Stock_Trans_Log.WO_No = wip.WO.WO_No
INNER JOIN stock.Stock_Trans_Types
ON stock.Stock_Trans_Log.Tran_Type = stock.Stock_Trans_Types.Type
WHERE (stock.Stock_Trans_Types.Type = 10)
AND (stock.Stock_Trans_Log.Store_Code <> 'BI')
GROUP BY wip.WO.WO_No
, wip.WO.WO_Type
, stock.Stock_Trans_Log.Part_No
, stock.Stock_Trans_Types.Description
, stock.Stock_Trans_Log.Qty_Change
, stock.Stock_Trans_Log.Trans_Date
, wip.WO.End_Date
, wip.WO.Qty - wip.WO.Qty_Stored
HAVING (stock.Stock_Trans_Log.Part_No BETWEEN N'2Z' AND N'9A')
[Screenshot: query + results]
If my paraphrase is correct, you could use something along the following lines...
WITH
sequenced_filtered_stock_trans_log AS
(
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY WO_No
ORDER BY Trans_Date DESC) AS reversed_sequence_id
FROM
stock.Stock_Trans_Log
WHERE
Tran_Type = 10
AND Store_Code <> 'BI'
AND Part_No BETWEEN N'2Z' AND N'9A'
)
SELECT
<stuff>
FROM
sequenced_filtered_stock_trans_log AS stock_trans_log
INNER JOIN
<your joins>
WHERE
stock_trans_log.reversed_sequence_id = 1
First, this will apply the WHERE clause to filter the log table.
After the WHERE clause is applied, a sequence id is calculated, restarting from one for each partition (each WO_No) and counting from the highest Trans_Date down.
Finally, that can be used in your outer query with a WHERE clause specifying that you only want the records with sequence id one, i.e. the most recent row per WO_No. The rest of the joins onto that table proceed as normal.
If there is any other filtering that should be done (through joins or any other means), it should all be done before the application of the ROW_NUMBER(), as in the finished sketch below.
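Putting that together with the joins from the question, the finished query might look like this (a sketch; the column list is illustrative, and the filters are the ones from the original query, moved into the CTE):

WITH sequenced_filtered_stock_trans_log AS
(
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY WO_No
                           ORDER BY Trans_Date DESC) AS reversed_sequence_id
    FROM stock.Stock_Trans_Log
    WHERE Tran_Type = 10
      AND Store_Code <> 'BI'
      AND Part_No BETWEEN N'2Z' AND N'9A'
)
SELECT wo.WO_No,
       wo.WO_Type,
       stl.Part_No,
       stl.Trans_Date AS [Last Production Receipt],
       wo.End_Date,
       wo.Qty - wo.Qty_Stored AS [Qty remaining]
FROM sequenced_filtered_stock_trans_log AS stl
INNER JOIN wip.WO AS wo ON stl.WO_No = wo.WO_No
WHERE stl.reversed_sequence_id = 1;

No GROUP BY is needed any more, because each WO_No is already reduced to a single row before the joins.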

if clause in bigquery

I have a (join) query that works as expected, but as soon as I add the following column it does not show any results, nor does it complete (the query-running counter keeps growing).
IF((d.network_type contains '_user' AND d.is_network=1),s.impressions,0) AS effimp
Is there any other way to optimize this?
The full query is as follows; it was working when I tried it last month.
SELECT s.date_time AS date_time
, s.requests AS requests, s.impressions AS impressions
, s.clicks AS clicks, s.conversions AS conversions
, IF((d.network_type contains '_user'
AND d.is_network=1),s.impressions,0) AS effimp
, s.total_revenue AS total_revenue
, s.total_basket_value AS total_basket_value
, s.total_num_items AS total_num_items
, s.zone_id as zone_id
FROM company.ox_data_summary s
INNER JOIN company.ox_banners1 AS d ON d.bannerid=s.ad_id
limit 100
Query Failed
Error: Unexpected. Please try again.
If I remove the IF clause, it does work.
Looks like you're hitting a query processing bug. We're investigating.
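In the meantime, a possible workaround (a sketch, assuming the same schema and that your project can use BigQuery standard SQL) is to rewrite the query in standard SQL, where the legacy contains operator becomes STRPOS:

#standardSQL
SELECT s.date_time,
       s.requests,
       s.impressions,
       s.clicks,
       s.conversions,
       IF(STRPOS(d.network_type, '_user') > 0 AND d.is_network = 1,
          s.impressions, 0) AS effimp,
       s.total_revenue,
       s.total_basket_value,
       s.total_num_items,
       s.zone_id
FROM `company.ox_data_summary` AS s
INNER JOIN `company.ox_banners1` AS d ON d.bannerid = s.ad_id
LIMIT 100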