I am running one SQL query to fetch records from a table using multiple threads. Initially all the threads go into INACTIVE status and only slowly start coming to ACTIVE status.
I have created a HASH partition on the table based on the loc column.
I have also created an index on the (item, loc) columns.
select item || ',' || loc || ',' || stock_on_hand || ',' || on_order_qty || ',' || 0 || ',' || in_transit_qty AS csv
from makro_m_oplcom_stg
where loc = ${loc} and ITEM_INV_IND = 0;
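For reference, the setup described above would look roughly like this (a sketch only: the column datatypes and the partition count are assumptions, the names are taken from the query):

-- Sketch of the table/index setup described above (datatypes and partition count are assumed)
CREATE TABLE makro_m_oplcom_stg (
    item            VARCHAR2(25),
    loc             NUMBER,
    stock_on_hand   NUMBER,
    on_order_qty    NUMBER,
    in_transit_qty  NUMBER,
    item_inv_ind    NUMBER
)
PARTITION BY HASH (loc) PARTITIONS 16;

CREATE INDEX makro_m_oplcom_stg_idx ON makro_m_oplcom_stg (item, loc);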
When I run this query in the database for a single loc it takes only one second to fetch the records, but when I run it from a shell script in multi-threaded mode the sessions go into INACTIVE status.
This is my wrapper script. Inside it I call another shell script that runs the query above.
while [ $thread -le $SLOTS ]
do
    sleep 1
    LOG_MESSAGE "$pgm_name Started by batch_makro_oplcom_process for thread $thread out of $SLOTS by ${USER}"
    (
        ksh $pgm_name.ksh $UP $SLOTS $thread
        if [ $? -ne 0 ]; then
            LOG_MESSAGE "$pgm_name failed for thread: $thread, check the $pgm_name error file."
            exit 1
        else
            LOG_MESSAGE "$pgm_name executed successfully for thread: $thread."
        fi
    ) &
    let thread=thread+1
done
I have a script that runs a SAS pass-through query that connects to an Oracle database. This is part of a cron job that runs on a Unix server and has had no issues for years. In the past few weeks, however, the job has started hanging on this one particular step: according to the logs it used to take about 15 seconds to run, but now it runs indefinitely until we have to kill the job. There are no associated errors or warnings in the log; the job creates a lockfile and just runs indefinitely until we kill it.
The step where the job hangs up is pasted in below. There are two macro variables &start_dt and &end_dt, which represent the date range the job is pulling sales data for.
While investigating, we tried a few different approaches, and were able to get this step to run successfully and in its usual time by changing three things:
- running the script through an Enterprise Guide client that connects to the same server, as opposed to running the script via the CLI / shell script
- changing the library the step writes to, so the dataset goes to work instead of the salesdata library (as seen in the code below)
- changing the dates to hardcoded values instead of macro variables
As for the date variables themselves, they are strings in date9 format, e.g.
&start_dt = '08-May-22', &end_dt = '14-May-22'. Initially I suspected the issue was related to the way the dates are structured, since this is an older project I have inherited, but I am confused as to why the job ran without issue for so long up until a few weeks ago, even with these oddly formatted date macro vars.
The other possibility I considered was that some resource on the Unix server was getting locked up when the job got to this step, potentially from a hanging job or a conflict with an older file such as a log or a previous SAS dataset.
Problematic version of the step in the script pasted below:
PROC SQL;
connect to oracle(user=&uid pass=&pwd path='#dw');
create table salesdata.shipped as
Select
SKN_NBR,
COLOR_NBR,
SIZE_NBR,
SALESDIV_KEY,
ORDER_LINE_QTY as QUANTITY label="SUM(ORDER_LINE_QTY)",
EX1 as DOLLARS label="SUM(EX1)"
from connection to oracle(
select
A1."SKN_NBR",
A1."COLOR_NBR",
A1."SIZE_NBR",
decode(A1."SALESDIV_KEY", 'ILB', 'IQ',
'IQ ', 'IQ',
'IQC', 'IQ',
'ISQ', 'IQ',
'IWC', 'IQ',
'QVC'),
SUM(A1."ORDER_LINE_QTY"),
SUM(A1."ORDER_LINE_QTY" * A1."ORDER_LINE_PRICE_AMT")
from DW.ORDERLINE A1, DISTINCT_SKN A2, DW.ORDERSTATUSTYPE A3
where
A2."SKN_NBR" = A1."SKN_NBR" AND
A1."CURRENT_STATUS_DATE" Between &start_dt and &end_dt AND
A1."ORDERLINESTATUS_KEY" = A3."ORDERLINESTATUS_KEY" AND
A3."ORDERSTATUS_SHIPPED" = 'Y' AND
A1."ORDER_LINE_PRICE_AMT" > 0
group by A1."SKN_NBR",
A1."COLOR_NBR",
A1."SIZE_NBR",
decode(A1."SALESDIV_KEY", 'ILB', 'IQ',
'IQ ', 'IQ',
'IQC', 'IQ',
'ISQ', 'IQ',
'IWC', 'IQ',
'QVC')
order by A1."SKN_NBR",
A1."COLOR_NBR",
A1."SIZE_NBR",
decode(A1."SALESDIV_KEY", 'ILB', 'IQ',
'IQ ', 'IQ',
'IQC', 'IQ',
'ISQ', 'IQ',
'IWC', 'IQ',
'QVC')
) as t1(SKN_NBR, COLOR_NBR, SIZE_NBR, SALESDIV_KEY, ORDER_LINE_QTY, EX1)
;
disconnect from oracle;
quit;
Which style you need to use for date constants in Oracle depends on your Oracle settings. But normally you can use expressions like one of these:
date '2022-05-14'
'2022-05-14'
You seem to claim that on your system you can use values like
'14-May-22'
(how does Oracle know what century you mean by that?).
Note that in Oracle it is important to use single quotes around constants as it interprets strings in double quotes as object names.
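For example, a comparison that does not depend on NLS_DATE_FORMAT at all could use an ANSI date literal or an explicit format mask. A minimal sketch (the table and column names here are just placeholders):

-- Illustration only: explicit date handling that ignores NLS_DATE_FORMAT
select *
  from some_table
 where some_date_col between date '2022-05-08' and date '2022-05-14';

-- or, when the value arrives as a string, give Oracle the format mask explicitly
select to_date('14-May-2022', 'DD-Mon-YYYY') as d from dual;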
So if you have a date value in SAS just make sure to make the macro variable value look like what Oracle wants.
For example to set ENDDT to today's date you could use:
data _null_;
call symputx('enddt',quote(put(today(),date11.),"'"));
run;
Which would be the same as
%let enddt='17-MAY-2022';
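With a macro variable built that way, the comparison in the pass-through query above would resolve to something like:

A1."CURRENT_STATUS_DATE" Between '08-MAY-2022' and '14-MAY-2022'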
So @Tom's answer was helpful: it appears that our DBAs updated some settings a few weeks back that affected how strict Oracle is about which date formats are accepted.
For what it's worth, the date macro vars were being constructed on the fly using a clunky data step that read off of a date key dataset:
You'll notice the last piece of the date string being put together for both variables uses the year2. format, so just the last two digits of the year.
To @Tom's point, this apparently confuses Oracle as to which century the date is in, so the job gets hung up.
data dateparm;
set salesdata.week_end_date;
start = "'" || put(day(week_end_date - 6), z2.) || '-' || put(week_end_date - 6, monname3.) || '-' ||
put(week_end_date - 6, year2.) || "'";
end = "'" || put(day(week_end_date), z2.) || '-' || put(week_end_date, monname3.) || '-' ||
put(week_end_date, year2.) || "'";
call symput('start_dt', start);
call symput('end_dt', end);
run;
Once I changed this step to use the year4. format for the last piece, the job ran without incident on both Unix and Enterprise Guide. Example below:
data dateparm;
set npdd.week_end_date;
start = "'" || put(day(week_end_date - 6), z2.) || '-' || put(week_end_date - 6, monname3.) || '-' ||
put(week_end_date - 6, year4.) || "'";
end = "'" || put(day(week_end_date), z2.) || '-' || put(week_end_date, monname3.) || '-' ||
put(week_end_date, year4.) || "'";
call symput('start_dt', start);
call symput('end_dt', end);
run;
I have created a package with two procedures and two cursors in it. The procedure executes successfully, but the same record is processed multiple times and a buffer overflow occurs.
I also tried removing the loop from the cursor; that is fine for one record, but for multiple records it won't work as expected.
EXPECTED
I need to stop the procedure from processing the same record multiple times.
With a single procedure and a single cursor it works properly, but with multiple cursors and multiple procedures I get the same record repeatedly, which also causes the buffer overflow, where I need different records.
Is there any alternative way to fix the problem?
CREATE OR REPLACE PACKAGE test.report AS
PROCEDURE distribution (
code_in IN user.test.code%TYPE,
fromdate date,
todate date
);
PROCEDURE tdvalue (
id IN user.test.custid%TYPE
);
END report;
/
Package Body
CREATE OR REPLACE PACKAGE BODY test.report as
----------VARIABLE DECLARATION----------------
code_in user.test.code%TYPE;
custidin user.test.custid%TYPE;
fromdate DATE;
todate DATE;
diff number(17,2);
---------------CURSOR DECLARATION--------------
CURSOR td_data(code_in user.test.code%TYPE,
fromdate date,
todate date
) IS
( SELECT
test.code,
COUNT(test.code) AS count,
SUM(test2.Deposit_amount) AS total,
test.currency
FROM
user.test2
JOIN user.test ON test2.acid = test.acid
WHERE
user.test2.open_effective_date BETWEEN TO_DATE(fromdate, 'dd-mm-yyyy') AND TO_DATE(todate, 'dd-mm-yyyy')
and
user.test.code = code_in
GROUP BY
test.code,test.currency
);
td__data td_data%rowtype;
CURSOR C_DATA(custidin user.test.custid%TYPE) IS SELECT
test.custid,
test2.id,
TO_DATE(test2.initial_date, 'dd-mm-yyyy') - TO_DATE(test2.end_date, 'dd-mm-yyyy') AS noofdays,
round(((test2.deposit_amount *((TO_DATE(test2.initial_date, 'dd-mm-yyyy') - TO_DATE(test2.end_date, 'dd-mm-yyyy'
)) / 365) * test4.interest_rate) / 100), 2) + test2.deposit_amount AS calculated_amount,
SUM(test.flow_amt) + test2.deposit_amount AS system_amount
FROM
user.test
JOIN user.test2 ON test3.entity_id = test2.id
WHERE
test.custid = custidin
GROUP BY
test.custid,
test2.id;
c__data c_data%ROWTYPE;
PROCEDURE distribution
(
code_in IN user.test.code%TYPE,
fromdate in date,
todate in date
)
AS
BEGIN
OPEN td_data(code_in,fromdate,todate);
loop
FETCH td_data INTO td__data;
dbms_output.put_line(td__data.code
|| ' '
|| td__data.count
|| ' '
||td__data.currency
||' '
||td__data.total
);
end loop;
CLOSE td_data;
END distribution;
PROCEDURE tdvalue (
custidin IN user.test.custid%TYPE
)
AS
BEGIN
open c_data(custidin);
fetch c_data into c__data;
loop
diff:= c__data.calculated_amount- c__data.system_amount;
dbms_output.put_line(c__data.custid
|| ' '
|| c__data.noofdays
|| ' '
|| c__data.end_date
|| ' '
|| c__data.initial_date
|| ' '
|| c__data.calculated_amount
||' '
||diff
);
end loop;
close c_data;
END tdvalue;
END report;
/
To run
ALTER SESSION set nls_date_format='dd-mm-yyyy';
SET SERVEROUTPUT ON;
EXEC REPORT.DISTRIBUTION('872328','01-02-2016','08-02-2019');
/
EXEC REPORT.tdvalue('S9292879383SS53');
Buffer overflow - ORU-10027 - happens when the total number of bytes written through DBMS_OUTPUT exceeds the size of the serveroutput buffer. The default is only 20000 bytes (who knows why?). Your session is using that default because of how you enable serveroutput. Clearly one record produces less than 20000 bytes, and you only hit that limit when you run for multiple records.
To fix this try this
SET SERVEROUTPUT ON size unlimited
It's not actually unlimited, but the upper bound is the PGA limit (session memory) and you really shouldn't hit that limit with DBMS_OUTPUT. Apart from anything else who would read all that?
So the other problem with your code - as @piezol points out - is that your loops have no exit point. You should test whether the FETCH actually fetched anything and exit if it didn't:
loop
FETCH td_data INTO td__data;
exit when td_data%notfound;
dbms_output.put_line(td__data.code
|| ' '
|| td__data.count
|| ' '
||td__data.currency
||' '
||td__data.total
);
end loop;
Remembering to do this is just one reason why implicit cursors and cursor for loops are preferred over explicit cursors.
The second cursor loop is even worse because not only does it have no exit point, the fetch is outside the loop. That's why you have repeated output for the same record.
So let's rewrite this ...
open c_data(custidin);
fetch c_data into c__data; -- should be inside the loop
loop
diff:= c__data.calculated_amount- c__data.system_amount;
… as a cursor for loop:
PROCEDURE tdvalue (
custidin IN user.test.custid%TYPE
)
AS
BEGIN
for c__data in c_data(custidin)
loop
diff:= c__data.calculated_amount- c__data.system_amount;
dbms_output.put_line(c__data.custid
|| ' '
|| c__data.noofdays
|| ' '
|| c__data.end_date
|| ' '
|| c__data.initial_date
|| ' '
|| c__data.calculated_amount
||' '
||diff
);
end loop;
END tdvalue;
No need for OPEN, CLOSE or FETCH, and no need to check when the cursor is exhausted.
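For completeness, the first procedure could be rewritten the same way, reusing the existing td_data cursor. A sketch:

PROCEDURE distribution (
    code_in  IN user.test.code%TYPE,
    fromdate IN DATE,
    todate   IN DATE
)
AS
BEGIN
    FOR td__data IN td_data(code_in, fromdate, todate)
    LOOP
        dbms_output.put_line(td__data.code
            || ' ' || td__data.count
            || ' ' || td__data.currency
            || ' ' || td__data.total);
    END LOOP;
END distribution;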
In PL/SQL, the preferred mechanism for setting the DBMS_OUTPUT buffer size would be within your procedure. This has the benefit of working in any client tool, such as Java or Toad (though it is still up to the client tool to retrieve the output from DBMS_OUTPUT).
DBMS_Output.ENABLE
Pass in a parameter of NULL for unlimited buffer size.
It would go like this:
BEGIN
DBMS_OUTPUT.ENABLE(NULL);
FOR I IN 1..1000 LOOP
DBMS_OUTPUT.PUT_LINE('The quick red fox jumps over the lazy brown dog.');
END LOOP;
END;
/
Bonus fact:
You can use the other functions and procedures in DBMS_OUTPUT to roll your own if you aren't using SQL*Plus or a DBMS_OUTPUT-savvy tool like Toad.
You can use the GET_LINE or GET_LINES procedures from your client code to get whatever may have been written to DBMS_OUTPUT.
GET_LINE
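For example, a caller could drain the buffer itself with something along these lines (a sketch only; what you do with each line - log table, file, queue - is up to you):

DECLARE
    v_lines    DBMS_OUTPUT.CHARARR;
    v_numlines INTEGER := 100;  -- ask for up to 100 lines per call
BEGIN
    DBMS_OUTPUT.GET_LINES(v_lines, v_numlines);
    -- v_numlines now holds the number of lines actually returned
    FOR i IN 1 .. v_numlines LOOP
        NULL;  -- hand v_lines(i) to your own logging
    END LOOP;
END;
/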
I'm trying to export a query to a csv file using the COPY command in pgAdminIII.
I'm using this command:
COPY (SELECT CASE WHEN cast(click as int) = 1 THEN 1
ELSE -1
END
|| ' '
|| id
|| '|'
|| 'hr_' || substring(hour, 7,8)
--|| ' dw_x' + substring(datename(dw, substring(hour,1,6) ),1,2)
|| ' |dw_' || substring(to_char(to_date(substring(hour, 1,6),'YYMMDD'), 'dy'),1,2)
|| ' |C1_' || c1
|| ' |C21_' || c21
|| ' |C22_' || substring(to_char(to_date(substring(hour, 1,6),'YYMMDD'), 'dy'),1,2) || '_' || substring(hour, 7,8)
AS COL1
FROM clickthru.train limit 10)
TO 'C:\me\train.csv' with csv;
When I run it I get:
ERROR: could not open file "C:\me\train.csv" for writing: Permission denied
SQL state: 42501
I then tried using the following in psql:
grant all privileges on train to public
and then looked at the access privileges using \z, which returns:
but I am still getting the same error. I'm using PostgreSQL 9.4 on a Windows 7 box. Any other suggestions?
COPY writes the file using the user account of the Postgres server process (it is a server-side operation).
From the manual:
The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server
Under Windows this is the system's "Network Service" account by default.
Apparently that account does not have write privileges on that directory.
You have two possible ways of solving that:
- change the privileges on the directory to allow everybody full access
- use the client-side \copy command (in psql) instead, as shown below
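For example, the same statement run client-side from psql (so the file is written with your own Windows user's permissions) would look roughly like this; note that \copy must be written on a single line:

\copy (SELECT ...same select list as above... FROM clickthru.train LIMIT 10) TO 'C:\me\train.csv' WITH CSV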
I have a simple shell script with SQL code which does:
- generates a batch file with SQL*Plus (a SQL statement)
- checks whether the SQL*Plus output is more than 400 lines (if it is, the script exits and mails the Operations team)
- if the SQL*Plus output is fewer than 400 lines, executes the batch file automatically
This script works very well. I wish to write the same logic in PL/SQL (without shell code). Is this possible? Can you provide me with the code? (I am in the process of learning PL/SQL.)
Database is Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 on Solaris.
#!/bin/ksh
. /opt/db/scripts/setpath.sh
generate_batch ()
{
sqlplus -S $DBUSER/$DBPASSWD@$ORACLE_SID <<EOF > /opt/db/scripts/tools/delete_connection/batchrun/batchrun.$(/bin/date '+%d%m%Y.%Hh')
set echo Off
set term On
set pages 0
set head off
set ver off
set feed off
set trims on
set linesize 20000
WITH data
AS (SELECT user_id,
jc_name,
upd_time,
RANK () OVER (PARTITION BY user_id ORDER BY upd_time ASC)
rk
FROM user_jc
WHERE user_id IN ( SELECT user_id
FROM user_jc
WHERE JC_NAME LIKE 'CFF\_S\_%' ESCAPE '\'
GROUP BY user_id
HAVING COUNT (user_id) > 1)
AND JC_NAME LIKE 'CFF\_S\_%' ESCAPE '\')
SELECT 'DISCONNECT ent_user FROM job_code WITH user_id = "'
|| user_id
|| '", jc_name = "'
|| jc_name
|| '";'
FROM data
WHERE rk = 1;
exit
EOF
}
sanity_check ()
{
line_nr=$(wc -l /opt/db/scripts/tools/delete_connection/batchrun/batchrun.$(/bin/date '+%d%m%Y.%Hh') | awk ' { print $1 } ')
if [ $line_nr -gt 400 ]; then
(cat /opt/db/scripts/tools/delete_connection/mail_body.txt) | mailx -s "Alert: please manually execute /opt/db/scripts/tools/delete_connection/batchrun/batchrun.$DATE" -r test@example.com test2@example.com
exit 1
fi
}
run_batch ()
{
/opt/bmchome/bin/ess batchrun -A -i /opt/db/scripts/tools/delete_connection/batchrun/batchrun.$(/bin/date '+%d%m%Y.%Hh')
}
generate_batch && sanity_check && run_batch
In PL/SQL, I'd do it the other way round:
- count the number of connections that match your query
- if the result is > 400, send an email
- else generate the disconnection statements, probably with ALTER SYSTEM DISCONNECT SESSION... (a rough sketch of this outline follows below)
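A rough sketch of that outline, assuming UTL_MAIL is installed and configured for the database (the mail addresses are placeholders, and the generated statements are only printed with DBMS_OUTPUT here rather than executed):

DECLARE
    v_cnt PLS_INTEGER;
BEGIN
    -- 1) count how many disconnect statements the batch would contain
    SELECT COUNT(*)
      INTO v_cnt
      FROM (SELECT user_id
              FROM user_jc
             WHERE jc_name LIKE 'CFF\_S\_%' ESCAPE '\'
             GROUP BY user_id
            HAVING COUNT(user_id) > 1);

    IF v_cnt > 400 THEN
        -- 2) too many: notify the Operations team (UTL_MAIL must be set up by the DBA)
        UTL_MAIL.SEND(sender     => 'db_alerts@example.com',
                      recipients => 'ops_team@example.com',
                      subject    => 'Alert: please run the disconnect batch manually',
                      message    => 'The batch would contain ' || v_cnt || ' statements.');
    ELSE
        -- 3) otherwise generate the statements (same query as the shell script)
        FOR r IN (WITH data AS (SELECT user_id,
                                       jc_name,
                                       RANK() OVER (PARTITION BY user_id ORDER BY upd_time ASC) rk
                                  FROM user_jc
                                 WHERE user_id IN (SELECT user_id
                                                     FROM user_jc
                                                    WHERE jc_name LIKE 'CFF\_S\_%' ESCAPE '\'
                                                    GROUP BY user_id
                                                   HAVING COUNT(user_id) > 1)
                                   AND jc_name LIKE 'CFF\_S\_%' ESCAPE '\')
                  SELECT user_id, jc_name FROM data WHERE rk = 1)
        LOOP
            DBMS_OUTPUT.PUT_LINE('DISCONNECT ent_user FROM job_code WITH user_id = "'
                                 || r.user_id || '", jc_name = "' || r.jc_name || '";');
        END LOOP;
    END IF;
END;
/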
I don't know your requirement of course, but could it be solved with resource profiles to limit user connections?
CREATE PROFILE myprofile LIMIT SESSIONS_PER_USER = 1;
ALTER USER myuser PROFILE myprofile;
I am testing an application using Selenium IDE. There is a table with unstable row ordering - which means the elements I need to verify are in different rows on each test run.
I'd like to use text known to be in the first column to find the row; and then verify the other columns in the same row.
The test looks like this:
store || //form/table || tableXpath
store || 3 || initialsRow
verifyTable || ${tableXpath}.${initialsRow}.0 || Initials
verifyTable || ${tableXpath}.${initialsRow}.1 || MJ
verifyTable || ${tableXpath}.${initialsRow}.2 || MH
Instead of hard-coding the "initialsRow" value; is it not possible to find the row index dynamically?
The solution I found was to use Selenium's storeElementIndex command. It gets the index of an HTML element relative to its parent.
See http://release.seleniumhq.org/selenium-core/1.0.1/reference.html
I changed the test as follows:
store || //form/table || tableXpath
storeElementIndex || ${tableXpath}//tr/td[text() = "Initials"]/.. || initialsRow
verifyTable || ${tableXpath}.${initialsRow}.1 || MJ
verifyTable || ${tableXpath}.${initialsRow}.2 || MH
The XPath query //form/table//tr/td[text() = "Initials"]/.. finds the 'tr' element above the 'td' element containing the text "Initials". Selenium stores the index of this 'tr' element relative to whatever its parent element is.
Well, now I found that Selenium CAN calculate. Unfortunately not implicitly, like ${tableXpath}.${initialsRow + 1}.1.
So I added an additional command:
storeEval || ${ORBInitialPort} + 1 || ORBInitialPortRow
and used ORBInitialPortRow instead of ORBInitialPort as index.