I'm pretty new to Oracle, so I'm not entirely sure this is possible, or if perhaps I'm going about it the wrong way, but here goes...
Part of an old feeder script I'm fixing loops through ~20 tables (the list could change at any time) to populate the relevant staging tables. This part is currently very basic:
...
INSERT INTO staging_tbl_1
SELECT *
FROM source_tbl_1;

INSERT INTO staging_tbl_2
SELECT *
FROM source_tbl_2;
...
Some of the fields in the source database have different constraints, etc., which means that every now and then an insert throws an exception and the feeder stops. What I'm hoping to do is create a procedure within the existing feeder package that loops through each row of each source table before it is inserted and simply wraps the insert in an exception block. That way the error can be logged without causing the feeder to stop.
Essentially I'm chasing something like this:
PROCEDURE procedure_x(source_table, staging_table)
BEGIN
  -- Perform validation to ensure the tables exist
  -- Loop through all rows of the source table
  FOR row IN (SELECT * FROM source_table) LOOP
    -- Wrap in an exception block
    -- Insert into the staging table
    -- Log the exception if one occurs
  END LOOP;
END;
I've attempted ref cursors; however, from my limited understanding, in order to get them to work I would also need to know the rowtype in advance. I've also tried EXECUTE IMMEDIATE, but I cannot find a way to loop over the rows in an appropriate way. Are there any other ways to tackle this?
Additional:
I realise that we really should be fixing the source of the problem rather than going about it like this; unfortunately that is far outside my area of influence.
It is possible to do this without making a separate procedure and just wrap all of the table references in a loop; however, I'd like to leave that as a last resort.
Oracle has built-in functionality for logging DML errors. Use it with single SQL statements; don't go row-by-row and make your processes crawl.
http://docs.oracle.com/cd/B19306_01/server.102/b14231/tables.htm#ADMIN10261
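A minimal sketch of that approach, using DBMS_ERRLOG and the LOG ERRORS clause (the staging/source table names come from the question; ERR$_STAGING_TBL_1 is the default error-log table name that DBMS_ERRLOG generates):

-- One-off setup: create the error log table for the staging table
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'STAGING_TBL_1');
END;
/

-- The insert now logs bad rows and keeps going instead of stopping the feeder
INSERT INTO staging_tbl_1
SELECT *
FROM source_tbl_1
LOG ERRORS INTO err$_staging_tbl_1 ('feeder run ' || TO_CHAR(SYSDATE))
REJECT LIMIT UNLIMITED;

Rejected rows end up in ERR$_STAGING_TBL_1 along with the Oracle error code and message, so they can be reviewed later without halting the load.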
The situation: I'm writing an AFTER INSERT trigger on a table, so I can access the INSERTED pseudo-table, if I remember correctly. The trigger is a bit long, so I can't copy/paste it here, but basically I'd like to compare the data of the row I'm inserting (representing a good) with the rows of another table (very similar, representing the wishes), in order to determine whether the inserted good corresponds to someone's wishes.
So I almost finished my trigger, but an error occurred. At a given point, I wrote:
-- Create and open a cursor
IF (@variable1 = INSERTED.MyField)
BEGIN
-- some code
END
-- Deallocate and close my cursor
But I get the following error:
The multi-part identifier "INSERTED.MyField" could not be bound
I thought I could do it, as there is only one row in INSERTED at this moment (I'm right, aren't I?), but it seems I can't.
Can someone explain to me why I'm wrong?
PS: Yes, I've seen this link, or this one, or this one, but they all have a problem with a JOIN, and I don't have any JOIN here.
That error indicates SQL Server is trying to resolve 'INSERTED' as an alias for another table:
IF (@variable1 = INSERTED.MyField)
Try the following to reference the inserted table:
IF (@variable1 = (SELECT MyField FROM inserted))
Using the inserted and deleted Tables:
http://technet.microsoft.com/en-us/library/ms191300%28v=sql.105%29.aspx
This fixes the syntax and answers the question of why the error is occurring, but comparing inserted to a scalar variable is not recommended. As HLGEM stated, what if you have multiple rows in the insert where some match and some don't?
Additionally, cursors should be a last resort in SQL. In general, cursors are slower and hold up resources. SQL is optimized for set-based operations, and cursors don't leverage that. Without knowing exactly what you are trying to do in the cursor and how much data you are manipulating, I can't say definitively in this case.
One of the many discussions on StackOverflow about Cursors: stackoverflow.com/questions/743183/what-is-wrong-with-cursors
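As an illustration of the set-based idea (purely a sketch: the matched_wishes/wishes table names and the join column are assumptions, since the original trigger isn't shown), the whole comparison can usually be done in one statement inside the trigger, with no cursor:

-- Match every inserted good against the wishes table in a single statement
INSERT INTO matched_wishes (wish_id, good_id)
SELECT w.wish_id, i.good_id
FROM inserted AS i
INNER JOIN wishes AS w
    ON w.MyField = i.MyField;   -- whatever columns define a "match"

This also handles multi-row inserts correctly, which the scalar comparison above does not.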
I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal table SELECT statements.
Is there any way to view the data selected by the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporary table where the insert made the changes?
It doesn't matter if it's all the rows at once rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy require that any modification be done through the test process, and it's probable this will be managed by another team. Is there any way to avoid making any change to the script?
The main idea is that the AT user checks the outputs on their desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows that the INSERT statement inserted into the temp table, i.e. the rows that were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change the current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
    SELECT * FROM #temp
END
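Putting it together, a minimal sketch of the pattern (the procedure, table, and column names here are purely illustrative):

CREATE PROCEDURE dbo.usp_load_example
    @cutoff date,
    @debug  bit = 0    -- last parameter, defaults to 0 so existing callers need no change
AS
BEGIN
    CREATE TABLE #temp (id int, created date);

    INSERT INTO #temp (id, created)
    SELECT id, created
    FROM dbo.source_table
    WHERE created >= @cutoff;

    IF @debug = 1
    BEGIN
        -- intermediate results, only returned in debug runs
        SELECT * FROM #temp;
    END;

    -- ... rest of the processing ...
END;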
I have a collection of sysids that I am iterating over in several FORALL loops. This script is meant to be run on a regular basis, so I would like to know whether the collection persists in the database and needs to be cleared out, or if the script is alright as-is.
Also, I am new to PL/SQL, so if you see anything wrong with the script, please do let me know.
This is going to run on Oracle 10g and 11g.
Thanks
DECLARE
  TYPE sSysid IS TABLE OF person.sysid%TYPE
    INDEX BY PLS_INTEGER;
  l_sSysid sSysid;
BEGIN
  SELECT sysid
  BULK COLLECT INTO l_sSysid
  FROM person
  WHERE purge_in_process = 1;

  FORALL i IN l_sSysid.FIRST .. l_sSysid.LAST
    DELETE FROM person_attribute WHERE property_pk LIKE CONCAT(l_sSysid(i), '%');

  FORALL i IN l_sSysid.FIRST .. l_sSysid.LAST
    DELETE FROM person_property WHERE person_sysid = l_sSysid(i);

  FORALL i IN l_sSysid.FIRST .. l_sSysid.LAST
    DELETE FROM person WHERE sysid = l_sSysid(i);
END;
/
COMMIT;
The collection is a local variable so it will no longer exist after the block finishes executing. There will be no need to clear it out. Depending on the number of rows in the PERSON table where PURGE_IN_PROCESS will be 1, you may want to use the LIMIT clause in order to avoid consuming too much PGA memory though.
The idea of an anonymous PL/SQL block that is run regularly, however, is a bit foreign to me. If you intend the code to be run regularly, I'd strongly suggest that you create a stored procedure rather than an anonymous block and then schedule the procedure to be run regularly. That opens up the ability to use the database's scheduling facilities (DBMS_JOB and DBMS_SCHEDULER) to run the process and allows other applications to call it as well if the need ever arises. Plus, you'll get the benefits of things like dependency tracking in the database.
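For illustration, a rough sketch combining both suggestions, i.e. a stored procedure that fetches with BULK COLLECT ... LIMIT (the procedure name and the batch size of 500 are arbitrary choices):

CREATE OR REPLACE PROCEDURE purge_persons IS
  TYPE t_sysid IS TABLE OF person.sysid%TYPE INDEX BY PLS_INTEGER;
  l_sysids t_sysid;
  CURSOR c_purge IS
    SELECT sysid FROM person WHERE purge_in_process = 1;
BEGIN
  OPEN c_purge;
  LOOP
    -- fetch in batches to keep PGA usage bounded
    FETCH c_purge BULK COLLECT INTO l_sysids LIMIT 500;
    EXIT WHEN l_sysids.COUNT = 0;

    FORALL i IN 1 .. l_sysids.COUNT
      DELETE FROM person_attribute WHERE property_pk LIKE CONCAT(l_sysids(i), '%');
    FORALL i IN 1 .. l_sysids.COUNT
      DELETE FROM person_property WHERE person_sysid = l_sysids(i);
    FORALL i IN 1 .. l_sysids.COUNT
      DELETE FROM person WHERE sysid = l_sysids(i);
  END LOOP;
  CLOSE c_purge;
  COMMIT;
END purge_persons;
/

The procedure can then be scheduled with DBMS_SCHEDULER or DBMS_JOB as mentioned above.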
Justin is correct; but for the sake of completeness, I'll just add that if you ever decide to convert this into a stored PACKAGE, you need to take some extra care, because anything you declare in a PACKAGE specification DOES retain its value through the entire session.
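For example (a hypothetical package), a variable declared in the specification keeps its value between calls for the rest of the session:

CREATE OR REPLACE PACKAGE purge_state IS
  g_runs_this_session PLS_INTEGER := 0;   -- spec-level state: persists across calls within a session
  PROCEDURE run;
END purge_state;
/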
Query:
BEGIN TRY
SELECT #AccountNumber,
#AccountSuffix,
#Sedat,
#Dedo,
#Payalo,
#Artisto
FROM SWORDBROS
WHERE AMAZING ='HAPPENS'
END TRY
BEGIN CATCH
Print #Sedat
END CATCH
How can I get the value of #Sedat? Is it possible?
SQL Server 2005; it will be in an SP.
Like this, no?
BEGIN TRY
SELECT #AccountNumber,
#AccountSuffix,
#Sedat,
#Dedo,
#Payalo,
#Artisto
FROM SWORDBROS
WHERE AMAZING ='HAPPENS'
END TRY
BEGIN CATCH
--error handling only
END CATCH
-- There is no FINALLY block like in .NET
Print #Sedat
In a proc, when I want to trap the exact values that caused an error, this is what I do. I declare a table variable (very important: it must be a table variable, not a temp table) that has the fields I want information on. I populate the table variable with records as I go. In a multi-step proc, I would add one record for each step if I wanted to see the whole process, or only a record if I hit an error (which I would typically populate in the catch block). Then, in the catch block, I would roll back the transaction and insert the contents of the table variable into a permanent exception-logging table.
You could also just select from that table variable if you wanted, but if I'm going to this much trouble it is usually for an automated process where I need to be able to research the problem at a later time, not see it when it hits, because I'm not running it on my machine or anywhere I could see a select or print statement. Because the table variable stays in scope even after the rollback, my information is still available to write to the exception-logging table. But it is important that you do the logging to any permanent table after the rollback, or the logging will be rolled back with everything else.
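A rough sketch of that pattern (the table-variable columns, the AccountNumber column on SWORDBROS, and the exception_log table are illustrative assumptions):

DECLARE @error_log TABLE (step varchar(50), account_number varchar(30), error_message varchar(4000));

BEGIN TRY
    BEGIN TRANSACTION;

    -- record the values being processed as you go
    INSERT INTO @error_log (step, account_number)
    SELECT 'load swordbros', AccountNumber
    FROM SWORDBROS
    WHERE AMAZING = 'HAPPENS';

    -- ... the real work goes here ...

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- the table variable keeps its rows through the rollback

    UPDATE @error_log SET error_message = ERROR_MESSAGE();

    -- log AFTER the rollback so the logging itself is not undone
    INSERT INTO exception_log (step, account_number, error_message, logged_at)
    SELECT step, account_number, error_message, GETDATE()
    FROM @error_log;
END CATCH;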
Which database are you using?
Also, which programming language is this?
Usually there would be an INTO clause and some local variables declared.
Your query should also have a FROM clause at a minimum.
It is not clear whether you are expecting the returned values to be placed into the # variables or whether you are trying to dynamically specify which columns you want selected. In a SQL Server stored procedure you usually return a result set, not a bunch of individual variables. The syntax you have will not work if you want column values returned, since what you have would dynamically specify which columns are wanted based on the column names passed into the stored procedure, and that will not work because the stored procedure must know which columns you are going after when it is analyzed as it is stored. The CATCH block will be triggered if there is a problem reading from the database (communication down, disk error, etc.), in which case none of the column values will be known.
Use the Sql Query Analyzer tool (under the "Tools" menu in SqlManager after you have selected a database) to define your stored procedure and test it. If you installed the documentation when you installed SqlManager go to Start>Programs>Microsoft Sql Server>Books Online and open the "Transact-SQL Reference" node for documentation on what can be done.
I am new to Oracle. I need to process a large amount of data in a stored proc. I am considering using temporary tables. I am using connection pooling and the application is multi-threaded.
Is there a way to create temporary tables such that a different table instance is created for every call to the stored procedure, so that data from multiple stored procedure calls does not get mixed up?
You say you are new to Oracle. I'm guessing you are used to SQL Server, where it is quite common to use temporary tables. Oracle works differently so it is less common, because it is less necessary.
Bear in mind that using a temporary table imposes the following overheads:
- read data to populate the temporary table
- write the temporary table data to file
- read the data back from the temporary table as your process starts
Most of that activity is useless in terms of helping you get stuff done. A better idea is to see if you can do everything in a single action, preferably pure SQL.
Incidentally, your mention of connection pooling raises another issue. A process munging large amounts of data is not a good candidate for running in an OLTP mode. You really should consider initiating a background (i.e. asynchronous) process, probably a database job, to run your stored procedure. This is especially true if you want to run this job on a regular basis, because we can use DBMS_SCHEDULER to automate the management of such things.
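For example, a minimal DBMS_SCHEDULER sketch (the job name, schedule, and procedure name are placeholders):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_DATA_CRUNCH',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MY_SCHEMA.PROCESS_LARGE_DATA',   -- your stored procedure
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/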
If you're using transaction-level (rather than session-level) temporary tables, then this may already do what you want... so long as each call only contains a single transaction. (You don't quite provide enough detail to make it clear whether this is the case or not.)
So, to be clear, so long as each call only contains a single transaction, then it won't matter that you're using a connection pool since the data will be cleared out of the temporary table after each COMMIT or ROLLBACK anyway.
(Another option would be to create a uniquely named temporary table in each call using EXECUTE IMMEDIATE. Not sure how performant this would be though.)
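For reference, a global temporary table is declared once as ordinary DDL, not per call; a minimal sketch (table and column names are just an illustration):

-- The definition is permanent; the rows are private to each session/transaction
CREATE GLOBAL TEMPORARY TABLE work_stage (
  sysid   NUMBER,
  payload VARCHAR2(100)
) ON COMMIT DELETE ROWS;   -- transaction-level: rows vanish at COMMIT/ROLLBACK
-- use ON COMMIT PRESERVE ROWS for session-level behaviour instead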
In Oracle, it's almost never necessary to create objects at runtime.
Global Temporary Tables are quite possibly the best solution for your problem, however since you haven't said exactly why you need a temp table, I'd suggest you first check whether a temp table is necessary; half the time you can do with one SQL what you might have thought would require multiple queries.
That said, I have used global temp tables in the past quite successfully in applications that needed to maintain a separate "space" in the table for multiple contexts within the same session. This is done by adding an additional ID column (e.g. "CALL_ID") that is initially set to 1; subsequent calls to the procedure increment this ID. The ID needs to be remembered in a global variable somewhere, e.g. a package-level variable declared in the package body. For example:
PACKAGE BODY gtt_ex IS
last_call_id integer;
PROCEDURE myproc IS
l_call_id integer;
BEGIN
last_call_id := NVL(last_call_id, 0) + 1;
l_call_id := last_call_id;
INSERT INTO my_gtt VALUES (l_call_id, ...);
...
SELECT ... FROM my_gtt WHERE call_id = l_call_id;
END;
END;
You'll find GTTs perform very well even with high concurrency, certainly better than using ordinary tables. Best practice is to design your application so that it never needs to delete the rows from the temp table - since the GTT is automatically cleared when the session ends.
I used a global temporary table recently and it behaved in a very unwanted manner.
I was using the temp table to format some complex data in a procedure call and, once the data was formatted, pass it to the front end (ASP.NET).
On the first call to the procedure I would get the proper data, but any subsequent call would give me the data from the last procedure call in addition to the current call.
I investigated on the net and found the option to delete rows on commit.
I thought that would fix the problem... guess what? When I used the ON COMMIT DELETE ROWS option, I always got 0 rows back from the database. So I had to go back to the original approach of ON COMMIT PRESERVE ROWS, which preserves the rows even after committing the transaction; that option clears rows from the temp table only after the session is terminated.
Then I found this post and learned about the column used to track the call_id of a session.
I implemented that solution and it still didn't fix the problem.
Then I wrote the following statement in my procedure before starting any processing:
Delete From Temp_table;
The above statement did the trick. My front end was using connection pooling; after each procedure call it committed the transaction but kept the connection in the pool, and the subsequent request used the same connection, so the database session was not terminated after every call.
Deleting the rows from the temp table before starting any processing made it work.
It drove me nuts till I found this solution.