I am trying to set up an automated data workflow for my company, inserting data into a database (Microsoft SQL Server) every Monday.
The BULK INSERT statement inserts data row by row. However, if it encounters bad data partway through, it stops the process without removing the rows that were already inserted.
Is there any way to validate the data first, so that the insert doesn't start until the data are confirmed clean?
Thank you!
... if it encounters bad data partway through, it stops the process without removing the rows that were already inserted.
Use a transaction; that is exactly what transactions were made for (to roll back or commit multiple operations as a single unit).
General Remarks
The BULK INSERT statement can be executed within a user-defined transaction to import data into a table or view. Optionally, to use multiple batches for bulk importing data, a transaction can specify the BATCHSIZE clause in the BULK INSERT statement. If a multiple-batch transaction is rolled back, every batch that the transaction has sent to SQL Server is rolled back.
Example:
BEGIN TRANSACTION TrnBlkInsert
BEGIN TRY
    -- your bulk insert here
    BULK INSERT .....
    COMMIT TRANSACTION TrnBlkInsert
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION TrnBlkInsert;
    THROW;
END CATCH
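For illustration, here is a fuller sketch of the same pattern; the table name, file path, and WITH options are made-up placeholders, not taken from the question:

    BEGIN TRANSACTION TrnBlkInsert
    BEGIN TRY
        -- dbo.WeeklyData and the file path are hypothetical examples
        BULK INSERT dbo.WeeklyData
        FROM 'C:\imports\monday.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000);
        COMMIT TRANSACTION TrnBlkInsert;
    END TRY
    BEGIN CATCH
        -- any bad row aborts the load and everything is rolled back
        ROLLBACK TRANSACTION TrnBlkInsert;
        THROW;
    END CATCH

BATCHSIZE is optional; as the quoted documentation notes, if a multi-batch transaction is rolled back, every batch rolls back with it.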
Validating every row before the insert is one option, but something can still go wrong during the insert itself.
So consider a transaction instead: do the whole insert in one transaction.
If any row fails to insert, the database rolls back everything.
With a transaction, you don't need to validate beforehand.
"We can do transaction management in a procedure, but we can't in a function" - I have seen this statement in multiple places when asking about the difference between a function and a procedure. But I ran the test below in Oracle and **I can see it works fine for a function too**. Can anybody please tell me what I am missing about that statement? It looks completely wrong to me.
select * from test;   (the test table has a single column, "name varchar(2)")
create or replace function FUNTest
return number as
    result NUMBER(6,2);
BEGIN
    SAVEPOINT fn_fntest;
    insert into test(NAME) values('Dinesh');
    ROLLBACK TO fn_fntest;
    return 1;
END;
/

Begin
    DBMS_OUTPUT.PUT_LINE(FUNTest());
end;
/
The purpose of a function is different from that of a procedure.
Function: supposed to do some calculation and return a value (in most cases).
Procedure: performs some operation based on data/columns. It manages transactions as well, because you will usually be storing new data somewhere.
Now, talking about transaction management in a function: it depends on how the function is called.
If your function contains a transactional statement such as COMMIT/ROLLBACK, it must be called from some other block capable of handling a transaction, such as a procedure or an anonymous block (your case).
If you call that same function from a SELECT statement, like "select funtest() from dual;", you will get an error, because a SELECT statement cannot open a transaction.
If you still want to call a function containing transactional statements from a non-transactional body (a SELECT statement), the function must be able to open its own independent transaction (PRAGMA AUTONOMOUS_TRANSACTION), as sketched below.
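A minimal sketch of that last case, reusing the single-column test table from the question; the function name FUNTest_auto is made up for illustration:

    create or replace function FUNTest_auto
    return number as
        PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own independent transaction
    begin
        insert into test(NAME) values('Dinesh');
        commit;  -- commits only the autonomous transaction, not the caller's
        return 1;
    end;
    /

    -- now callable even from a non-transactional context:
    select FUNTest_auto() from dual;

Note that the autonomous transaction commits independently of the caller, which is usually a surprising side effect in a function called from SQL, so use it deliberately.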
Please refer to http://www.datacoons.com/content/transaction.php for more information on transaction management.
Is there a way to lock a table such that only inserts with a specific value are blocked?
For example, I have the tables "task" and "subtask". I need an operation such that when a task is closed, the subtasks belonging to it that are still open get closed too. So I would like to:
Start a transaction
Lock the subtask table, but only to prevent inserts with a given task id
Close all the subtasks
Close the task using an optimistic lock (if the task version changed, roll back everything)
Commit the transaction
Is it possible to do what I described in step 2? If not (or if there is a better way), how can I obtain safe concurrency in this kind of scenario?
The best way is to create the row yourself! You can then delete it at the end of the transaction. Here's how it goes:
CREATE OR REPLACE FUNCTION ....
AS $$
DECLARE
    inserted integer;
BEGIN
    INSERT INTO subtasks VALUES (pk, ...) ON CONFLICT (id) DO NOTHING;
    GET DIAGNOSTICS inserted = ROW_COUNT;
    IF inserted > 0 THEN
        DELETE FROM subtasks WHERE pk = id;
    END IF;
    -- the commit happens in the calling transaction; COMMIT is not
    -- allowed inside a plpgsql function
END;
$$ LANGUAGE plpgsql;
Note that in the above code, INSERT ... ON CONFLICT does not actually make any changes to the existing data. GET DIAGNOSTICS returns the number of rows changed; if no rows were changed, the inserted variable will hold zero.
What's happening here is that, having inserted a record within your transaction, any other connection will not be able to save the same record. You can confirm this for yourself by opening two connections to the database with psql. Then do
begin;
insert ...
-- just wait here
commit;
and in the other one, try the same insert; you will see that it hangs. Whether the insert succeeds or fails depends on whether you commit or roll back in the first psql connection.
You can't lock a table for a specific value, but in your case this solution would apply: create a trigger on insert on subtasks, and in that trigger check whether the corresponding task is closed. If the task is closed, do not allow the insert (a sketch follows below).
Another trigger on tasks will close all corresponding subtasks as soon as the task is closed.
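A minimal sketch of the first (insert-blocking) trigger, assuming hypothetical column names (tasks.id, tasks.closed, subtasks.task_id) that may differ from the real schema:

    CREATE OR REPLACE FUNCTION reject_insert_on_closed_task() RETURNS trigger AS $$
    BEGIN
        IF EXISTS (SELECT 1 FROM tasks WHERE id = NEW.task_id AND closed) THEN
            RAISE EXCEPTION 'task % is closed', NEW.task_id;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- EXECUTE FUNCTION requires PostgreSQL 11+; use EXECUTE PROCEDURE on older versions
    CREATE TRIGGER trg_subtasks_insert
        BEFORE INSERT ON subtasks
        FOR EACH ROW EXECUTE FUNCTION reject_insert_on_closed_task();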
I deleted a record from my table, and after executing the ROLLBACK command it shows "command completed successfully", but when I check with a SELECT statement the table still looks empty. Why?
Some database connection methods such as ODBC default to what's known as "autocommit" mode - that is, the database connection automatically issues a COMMIT after each statement is executed. JDBC does the same thing. I cannot say if this is what's happening in your case, but if so there's no way to do a ROLLBACK. Best of luck.
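If the database happens to be SQL Server, one quick way to rule autocommit out is to open an explicit transaction before the delete; my_table and the WHERE clause below are placeholder names:

    SET IMPLICIT_TRANSACTIONS ON;        -- suppress commit-per-statement behavior
    DELETE FROM my_table WHERE id = 1;   -- this now starts a transaction
    ROLLBACK;                            -- so the delete can be undone
    SELECT * FROM my_table;              -- the row is back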
The ROLLBACK command takes you back to the latest committed state of the table. I guess your delete query might have contained some statement that committed the change (the deletion of the record).
Jason Clark,
I did a test using MySQL with "begin", "delete" and "rollback", using the following SQL (an example):
begin; delete from aluno where id = 1; rollback;
In PostgreSQL the SQL syntax is the same, and it also worked.
Are you sure you used the correct SQL? Might there have been some mistake? Is it really necessary to use "begin transaction" instead of just "begin"?
I hope this helps!
I am trying to write a trigger that selects values from some tables and then inserts them into another table.
So for now I have this. There are a lot of columns, so I don't copy them; they are only varchar2 values, and this part works, so I don't think it is useful:
create or replace TRIGGER TRIGGER_FICHE
AFTER INSERT ON T_AG
BEGIN
    declare
    begin
        INSERT INTO t_ag_hab@DBLINK_DEV
        ()
        values
        ();
        /*commit;*/
    end;
END;
The stored procedure from which the trigger will be fired (again a lot of parameters, not relevant to copy them):
INSERT INTO T_AG()
VALUES
();
commit work;
The thing is, we cannot commit inside a trigger; I have read that and understand it.
But how can I see the update of my table with the new value?
When the process runs there is no error, but I don't see the new line in t_ag_hab.
I know it's not very clear, but I don't know how to explain it any other way.
How can I fix this?
Because you're inserting into a remote table via a database link, you have a distributed transaction:
... distributed transaction processing is more complicated because the database must coordinate the committing or rolling back of the changes in a transaction as an atomic unit. The entire transaction must commit or roll back.
When you commit, you're committing both the local insert and the remote insert performed by your trigger, as an atomic unit. You cannot commit one without the other, and you don't have to do anything extra to commit the remote change:
The two-phase commit mechanism is transparent to users who issue distributed transactions. In fact, users need not even know the transaction is distributed. A COMMIT statement denoting the end of a transaction automatically triggers the two-phase commit mechanism. No coding or complex statement syntax is required to include distributed transactions within the body of a database application.
If you can't see the inserted data from the remote database afterwards then something else has deleted it after the commit, or more likely you're looking at the wrong database.
One slight downside (though also a feature) of a database link is that it hides the details of where the work is being done. You can drop and recreate a link to make your code update a different target database without having to modify the code itself. But that means your code doesn't know where the insert is actually going - you'd need to check the data dictionary to see where the link is pointing. And even then you might not know as the link can be using a TNS alias to identify the database, and changes to the tnsnames.ora aren't visible from within the database.
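For instance, that data-dictionary check can be as simple as querying the standard Oracle USER_DB_LINKS view:

    SELECT db_link, username, host
    FROM   user_db_links;   -- HOST holds the TNS alias or connect string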
If you can see the data after committing by querying t_ag_hab@dblink_dev from the same database where you ran your procedure, but you can't see it when querying locally from the database you expect the link to be pointing to, then the link isn't pointing where you think it is. The insert is going to one database, and you are performing your query against a different one. Only you can decide which is the 'correct' database, though; either redefine the link (or the TNS entry, if appropriate) or change where you're doing the query.
I am not able to understand your requirement clearly. For updating records in the main table and inserting the old records into an audit table, we can use the query below as a trigger (MS-SQL):
CREATE TRIGGER trg_update ON T_AGENT
AFTER UPDATE AS
BEGIN
    UPDATE Tab1
    SET COL1 = I.COL1, COL2 = I.COL2
    FROM INSERTED I INNER JOIN Tab1 ON I.COL3 = Tab1.COL3;

    INSERT Tab1_Audit (COL1, COL2, COL3)
    SELECT COL1, COL2, COL3 FROM DELETED;

    RETURN;
END;
So far what you presented only handles the insert. If you want to see the update action handled as well, try adding UPDATE handling, as in this example:
CREATE OR REPLACE TRIGGER validate_update
AFTER INSERT OR UPDATE ON T_AGENT
FOR EACH ROW
BEGIN
    IF UPDATING('ACCOUNT_ID') THEN           -- do something like this when updating
        DBMS_OUTPUT.put_line('ERROR');       -- add your action here
    ELSIF INSERTING THEN
        INSERT INTO t_ag_hab@DBLINK_DEV() values();
    END IF;
END;
/

Trigger created.
If Table A is updated, a trigger gets fired. That trigger calls another SP to do some processing.
Is there any chance that, if the SP fails, the update that happened on Table A will be reverted?
I have code just after the update ("If Sqlca.SqlCode") and this always returns 0 for the update.
Please help!!
Yes, if the trigger encounters an error (internally or through calling some external procedure) and rolls back the transaction, it will roll back the whole transaction, including whatever UPDATE caused the trigger to fire in the first place. There are multiple ways to get around this, if it is not the behavior you want:
use TRY / CATCH to absorb any errors from the external procedure (see the sketch below), or move the procedure logic into the trigger, or add proper error handling to the stored procedure so that, if you don't care that an error happened there, it doesn't bubble up and roll back everything.
use an INSTEAD OF trigger - combined with TRY / CATCH (or possibly committing your own UPDATE first), you should be able to update the table without caring whether the external stored procedure fails.
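A minimal sketch of the first option, with hypothetical names (dbo.TableA and dbo.some_proc are placeholders):

    CREATE TRIGGER dbo.trTableA
    ON dbo.TableA
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRY
            EXEC dbo.some_proc;        -- the external processing
        END TRY
        BEGIN CATCH
            -- swallow (or log) the error so it can't roll back the UPDATE
            PRINT ERROR_MESSAGE();
        END CATCH
    END
    GO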
Example of the INSTEAD OF trigger:
USE tempdb;
GO
CREATE TABLE dbo.flooblat(id INT PRIMARY KEY, name VARCHAR(32));
INSERT dbo.flooblat(id,name) VALUES(1, 'Bob');
GO
CREATE PROCEDURE dbo.oh_my
AS
SELECT 1/0;
GO
CREATE TRIGGER dbo.trFlooblat
ON dbo.flooblat
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE f SET f.name = i.name
        FROM dbo.flooblat AS f
        INNER JOIN inserted AS i
        ON f.id = i.id;

    COMMIT TRANSACTION;

    EXEC dbo.oh_my;
END
GO
UPDATE dbo.flooblat SET name = 'Frank';
GO
SELECT id, name FROM dbo.flooblat;
GO
Results:
Msg 8134, Level 16, State 1, Procedure oh_my
Divide by zero error encountered.
The statement has been terminated.
However, the SELECT reveals that, even though an error occurred in the trigger, it happened after the UPDATE was committed - so unlike an exception that occurs in an AFTER trigger (without proper error handling), we were able to prevent the error from rolling back all of the work we've done.
id   name
---- -----
1    Frank
Triggers can do things AFTER the DML is executed, INSTEAD OF executing the DML, etc. So yes, there is a chance that the update (DML) won't happen if the SP fails - it just depends on how you write it and what features you use.
Read up on triggers a bit here: http://technet.microsoft.com/en-us/library/ms189799%28v=sql.105%29.aspx
If you want a more specific answer for the trigger in question, then you'll need to post the code.