My client has a database. Once I complete the incremental changes to the database, I have to prepare the list of SQL object changes in one SQL file.
The script is like this:
If SQL object 1 is present in the database
DROP SQL object 1
GO
CREATE SQL object 1
If SQL object 2 is present in the database
DROP SQL object 2
CREATE SQL object 2
Every time, I drop the existing object and re-create it.
Now, this batch may contain errors.
My requirement is that if there is any error in the file, none of the SQL objects should be re-created; it should roll back to the old SQL objects.
If there is no error, then it should create all the SQL objects.
Because of the GO statements in the middle, I am not able to use a TRANSACTION in SQL.
How can this be solved?
Don't use GO, then. Simply remove it from your script, and add your BEGIN and COMMIT TRANSACTION commands where you need them.
BEGIN TRAN
IF EXISTS Object1
BEGIN
DROP Object1;
END
CREATE Object1;
IF EXISTS Object2
BEGIN
DROP Object2;
END
CREATE Object2;
COMMIT TRAN
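For a concrete illustration, here is a rough sketch of that pattern, assuming the objects are stored procedures (the names dbo.usp_Object1 / dbo.usp_Object2 and the trivial bodies are placeholders). Since CREATE PROCEDURE must be the only statement in its batch, the CREATE is issued through EXEC so that everything stays in one batch and one transaction:
BEGIN TRAN;
IF OBJECT_ID(N'dbo.usp_Object1', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_Object1;
EXEC (N'CREATE PROCEDURE dbo.usp_Object1 AS SELECT 1;');   -- placeholder body
IF OBJECT_ID(N'dbo.usp_Object2', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_Object2;
EXEC (N'CREATE PROCEDURE dbo.usp_Object2 AS SELECT 1;');   -- placeholder body
COMMIT TRAN;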
Modifying database schema via DROP/CREATE has many problems:
it may lose data
it loses permissions and extended properties added to the dropped objects
cross-object dependencies (e.g. foreign keys) require a certain order of drop/create
Usually it is better to ALTER the object from one schema version to the next. This requires you to know which schema version is currently deployed, but that problem is easily solved (use a database extended property; see Version Control and your Database).
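A minimal sketch of that idea, using a database-level extended property (the property name SchemaVersion and the version values are just examples):
-- record the schema version after a successful deployment
EXEC sys.sp_addextendedproperty @name = N'SchemaVersion', @value = N'1.0';
-- bump it on the next upgrade
EXEC sys.sp_updateextendedproperty @name = N'SchemaVersion', @value = N'1.1';
-- read it back to decide which upgrade script needs to run
SELECT value FROM sys.extended_properties WHERE class = 0 AND name = N'SchemaVersion';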
Back to your question: a naive approach is to wrap your entire script in a big BEGIN TRAN/COMMIT, but that seldom works:
it creates a potentially large transaction that requires a lot of log space.
the result is impossible to validate until after the commit, when it is too late to do anything about it
the behavior when mingling exceptions and transactions is messy at best. XACT_ABORT ON helps somewhat, but only so much.
Not all DDL statements can be run from inside a transaction
For these reasons I would recommend a much simpler and safer approach: take a backup, WITH COPY_ONLY, of the database before modifying the schema. If anything goes wrong, restore the copy. Alternatively, a database snapshot can be used as a backup. See How to: Revert a Database to a Database Snapshot.
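For example (database name, file paths and the logical data file name are hypothetical):
-- safety copy before running the change script
BACKUP DATABASE MyDb TO DISK = N'C:\Backup\MyDb_pre_deploy.bak' WITH COPY_ONLY, INIT;
-- if the deployment goes wrong:
RESTORE DATABASE MyDb FROM DISK = N'C:\Backup\MyDb_pre_deploy.bak' WITH REPLACE;
-- or, using a database snapshot instead of a backup:
CREATE DATABASE MyDb_PreDeploy ON
    (NAME = MyDb_Data, FILENAME = N'C:\Backup\MyDb_PreDeploy.ss')
AS SNAPSHOT OF MyDb;
-- revert if needed:
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_PreDeploy';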
Note that BEGIN TRAN/COMMIT can span batches (i.e. they can be separated by multiple GOs), so your concern about GO is not an issue.
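In other words, a script of this shape keeps GO and still runs as one unit (a sketch; the batch contents are placeholders, and SET XACT_ABORT ON makes most run-time errors abort and roll back the whole transaction):
SET XACT_ABORT ON;
BEGIN TRAN;
GO
-- batch 1: drop/create object 1
GO
-- batch 2: drop/create object 2
GO
COMMIT TRAN;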
I am running a test that updates my database each time it runs,
and I cannot run the test again with the updated values.
I am recreating the WHOLE database with:
postgres=# drop database mydb;
DROP DATABASE
postgres=# CREATE DATABASE mydb WITH TEMPLATE mycleandb;
CREATE DATABASE
This takes a while
Is there any way I can update just the tables that I changed with tables from mycleandb?
Transactions
You haven't mentioned what your programming language or framework is. Many of them have built-in test mechanisms that take care of this sort of thing. If you are not using one of them, what you can do is start a transaction in each test setup and then roll it back when you tear down the test.
BEGIN;
...
INSERT ...
SELECT ...
DELETE ...
ROLLBACK;
ROLLBACK, as the name suggests, reverses everything that has been done to the database, so that it remains in its original condition.
There is one small problem with this approach, though: you can't do integration tests where you intentionally enter incorrect values to make a query fail an integrity check. If you do that, the transaction is aborted and no new statements can be executed until it is rolled back.
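One possible workaround for that limitation is to wrap the statement that is expected to fail in a savepoint, so only that part is rolled back. A sketch (my_table and its NOT NULL id column are made up):
BEGIN;
-- normal test setup here
SAVEPOINT expect_failure;
INSERT INTO my_table (id) VALUES (NULL);   -- intentionally violates a constraint
ROLLBACK TO SAVEPOINT expect_failure;      -- clears the error, transaction stays usable
-- more test statements ...
ROLLBACK;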
pg_dump/pg_restore
It's possible to use the -t option of pg_dump to dump and then restore one or a few tables. This may be the next best option when transactions are not practical.
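For example, to refresh a single table from the clean database (the table name is hypothetical; --clean makes pg_dump emit the DROP before the CREATE):
pg_dump --clean -t my_table mycleandb | psql mydb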
Non Durable Settings / Ramdisk
If both above options are inapplicable please see this answer: https://stackoverflow.com/a/37221418/267540
It's for a question about Django testing, but there is very little Django-specific content in it. Coincidentally, Django's rather excellent test framework relies by default on the begin/update/rollback mechanism described above.
Test inside a transaction:
begin;
update t
set a = 1;
Check the results and then:
rollback;
It will be back to a clean state.
I have a problem to solve which requires undoing every executed SQL file in an Oracle database.
I execute them from an XML file with MSBuild, using an exec command that runs sqlplus with a login and #*.sql.
Obviously ROLLBACK won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I have learned about is Oracle Flashback and Point-in-Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes changes at the same time, my solution should undo only the changes of user 'X', not 'Y'.
I found the start_scn and commit_scn in flashback_transaction_query. But do they identify only one user? What if I flash back to a given SCN? Will that undo only my changes, or other users' changes as well? I have run
select start_scn from flashback_transaction_query WHERE logon_user='MY_USER_NAME'
and
WHERE table_name = 'MY_TABLE_NAME'
and performed
FLASHBACK TO SCN <scn_number>
on a chosen operation's SCN. Will that work for me?
I also found out about Point-in-Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible, but maybe not with the tools that you use. sqlplus can roll back your transaction; you just have to make sure autocommit isn't enabled and that your scripts contain only a single COMMIT, right before you end the sqlplus session. (Note that by default a normal EXIT commits; use SET EXITCOMMIT OFF, or exit via WHENEVER SQLERROR EXIT ROLLBACK on errors, if you want uncommitted changes rolled back.)
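A rough sketch of such a wrapper script (the connection string and the script file names are placeholders; WHENEVER SQLERROR makes sqlplus stop and roll back at the first error):
-- deploy.sql, run with: sqlplus user/password@db @deploy.sql
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
SET AUTOCOMMIT OFF
SET EXITCOMMIT OFF
@changes_1.sql
@changes_2.sql
COMMIT;
EXIT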
The problems start when you have several scripts and you want, for example, to roll back a script that you ran yesterday. This is a whole new can of worms, and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (i.e. created, updated or deleted) by the script, plus the script name and the time when it was executed.
With this information, you can generate SQL which can undo the changes created by a script. To fill such a table, use triggers or generate your scripts in such a way that they write this information as well (note: This is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this).
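A rough sketch of what such tracking could look like, assuming a tracked table my_table with a numeric primary key column id, and a deployment script that identifies itself via DBMS_APPLICATION_INFO.SET_CLIENT_INFO (all names are made up):
CREATE TABLE script_audit (
    script_name  VARCHAR2(200),
    executed_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
    table_name   VARCHAR2(128),
    pk_value     NUMBER,
    operation    VARCHAR2(10)
);

CREATE OR REPLACE TRIGGER my_table_audit
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW
DECLARE
    v_op VARCHAR2(10);
BEGIN
    IF INSERTING THEN v_op := 'INSERT';
    ELSIF UPDATING THEN v_op := 'UPDATE';
    ELSE v_op := 'DELETE';
    END IF;
    INSERT INTO script_audit (script_name, table_name, pk_value, operation)
    VALUES (SYS_CONTEXT('USERENV', 'CLIENT_INFO'),  -- set by the script
            'MY_TABLE',
            NVL(:NEW.id, :OLD.id),
            v_op);
END;
/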
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which is the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second one gets "undo_sql" from flashback_transaction_query, which is the opposite of your DML action: if it was an INSERT, then undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE and INSERT (DML) on a specific table, and on ALTER, DROP and CREATE (DDL) on a specific SCHEMA or VIEW.
I know that in MySQL, DDL statements such as ALTER TABLE / CREATE TABLE / etc. cause an implicit transaction commit.
As we are moving to PostgreSQL, is it possible to wrap multiple DDL statements in a transaction?
This would make migration scripts a lot more robust: a failed DDL change would cause everything to roll back.
DDL statements are covered by transactions. I can't find the relevant section in the official documentation, but have provided a link to the wiki which covers it.
Just remember that transactions aren't automatically opened in PostgreSQL; you must start them with BEGIN or START TRANSACTION.
Postgresql Wiki about Transactional DDL
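For example, a migration wrapped in a transaction might look like this (table, column and index names are made up); if any statement fails, everything before it rolls back:
BEGIN;
ALTER TABLE customers ADD COLUMN loyalty_points integer NOT NULL DEFAULT 0;
CREATE INDEX customers_loyalty_idx ON customers (loyalty_points);
-- any error up to this point rolls back both statements
COMMIT;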
Not every Postgres DDL statement can be wrapped in a transaction. Statements like DROP DATABASE / DROP TABLESPACE and some other file-system-related statements can't be rolled back.
Also:
ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum
type) cannot be executed inside a transaction block.
Also, some statements like TRUNCATE are 'not MVCC-safe': changes made by that kind of statement can affect other queries, even if they are rolled back.
So read the official manual for your version of Postgres to find out whether your DDL statements are transaction-safe.
I have an application in which I have to insert rows into a SQL Server 2008 database in groups of N tuples, and all of the tuples have to be inserted for the insert to be a success. My question is: how do I insert these tuples so that, if any of them fails, I can do a rollback to remove all the tuples that were inserted correctly?
Thanks
On SQL Server you might consider doing a bulk insert.
From .NET, you can use SqlBulkCopy.
Table-valued parameters (TVPs) are a second route. In your insert statement, use WITH (TABLOCK) on the target table for minimal logging, e.g.:
INSERT Table1 WITH (TABLOCK) (Col1, Col2....)
SELECT Col1, Col2, .... FROM #tvp
Wrap it in a stored procedure that exposes #tvp as parameter, add some transaction handling, and call this procedure from your app.
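A sketch of such a wrapper, assuming a hypothetical table type dbo.Table1Type with just two columns (SQL Server 2008 syntax, hence RAISERROR rather than THROW):
CREATE TYPE dbo.Table1Type AS TABLE (Col1 int, Col2 nvarchar(100));
GO
CREATE PROCEDURE dbo.Table1_BulkInsert
    @tvp dbo.Table1Type READONLY
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRAN;
        INSERT Table1 WITH (TABLOCK) (Col1, Col2)
        SELECT Col1, Col2 FROM @tvp;
        COMMIT TRAN;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;   -- nothing is kept if any row fails
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);             -- re-raise so the caller sees the failure
    END CATCH
END;
GO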
You might even try passing the data as XML if it has a nested structure, and shredding it to tables on the database side.
You should look into transactions. This is a good intro article that discusses rolling back and such.
If you are inserting the data directly from the program, it seems like what you need are transactions. You can start a transaction directly in a stored procedure or from a data adapter written in whatever language you are using (for instance, in C# you might be using ADO.NET).
Once all the data has been inserted, you can commit the transaction or do a rollback if there was an error.
See Scott Mitchell's "Managing Transactions in SQL Server Stored Procedures" for some details on creating, committing, and rolling back transactions.
For MySQL, look into LOAD DATA INFILE which lets you insert from a disk file.
Also see the general MySQL discussion on Speed of INSERT Statements.
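A minimal example of that statement (file path, table and column names are made up; on an InnoDB table it can also be wrapped in a transaction):
LOAD DATA INFILE '/tmp/rows.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(col1, col2);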
For a more detailed answer please provide some hints as to the software stack you are using, and perhaps some source code.
You have two competing interests: doing one large transaction (which will have poor performance and a high risk of failure), or doing a rapid import (which is best not done all in one transaction).
If you are adding rows to a table, then don't run in a transaction. You should be able to identify which rows are new and delete them should you not like how they look on the first round.
If the transaction is complicated (each row affects dozens of tables, etc) then run them in transactions in small batches.
If you absolutely have to run a huge data import in one transaction, consider doing it when the database is in single user mode and consider using the checkpoint keyword.
I am new to Oracle. I need to process a large amount of data in a stored procedure. I am considering using temporary tables. I am using connection pooling and the application is multi-threaded.
Is there a way to create temporary tables such that a different table instance is created for every call to the stored procedure, so that data from multiple stored procedure calls does not get mixed up?
You say you are new to Oracle. I'm guessing you are used to SQL Server, where it is quite common to use temporary tables. Oracle works differently so it is less common, because it is less necessary.
Bear in mind that using a temporary table imposes the following overheads:
read data to populate the temporary table
write the temporary table data to file
read data from the temporary table as your process starts
Most of that activity is useless in terms of helping you get stuff done. A better idea is to see if you can do everything in a single action, preferably pure SQL.
Incidentally, your mention of connection pooling raises another issue. A process munging large amounts of data is not a good candidate for running in an OLTP mode. You really should consider initiating a background (i.e. asynchronous) process, probably a database job, to run your stored procedure. This is especially true if you want to run this job on a regular basis, because we can use DBMS_SCHEDULER to automate the management of such things.
If you're using transaction-level (rather than session-level) temporary tables, then this may already do what you want, so long as each call contains only a single transaction (you don't quite provide enough detail to make it clear whether this is the case or not).
So, to be clear: as long as each call contains only a single transaction, it won't matter that you're using a connection pool, since the data will be cleared out of the temporary table after each COMMIT or ROLLBACK anyway.
(Another option would be to create a uniquely named temporary table in each call using EXECUTE IMMEDIATE. Not sure how performant this would be though.)
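The transaction-level variant mentioned above would be declared something like this, once, at install time (columns are made up); each session sees only its own rows, and they vanish at COMMIT or ROLLBACK:
CREATE GLOBAL TEMPORARY TABLE my_gtt (
    call_id  NUMBER,
    payload  VARCHAR2(4000)
) ON COMMIT DELETE ROWS;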
In Oracle, it's almost never necessary to create objects at runtime.
Global temporary tables are quite possibly the best solution for your problem; however, since you haven't said exactly why you need a temp table, I'd suggest you first check whether one is necessary at all: half the time you can do with one SQL statement what you might have thought would require multiple queries.
That said, I have used global temp tables in the past quite successfully in applications that needed to maintain a separate "space" in the table for multiple contexts within the same session. This is done by adding an additional ID column (e.g. "CALL_ID") that is initially set to 1; subsequent calls to the procedure increment this ID. The ID needs to be remembered somewhere, e.g. in a package-level global variable declared in the package body. For example:
PACKAGE BODY gtt_ex IS
last_call_id integer;
PROCEDURE myproc IS
l_call_id integer;
BEGIN
last_call_id := NVL(last_call_id, 0) + 1;
l_call_id := last_call_id;
INSERT INTO my_gtt VALUES (l_call_id, ...);
...
SELECT ... FROM my_gtt WHERE call_id = l_call_id;
END;
END;
You'll find GTTs perform very well even with high concurrency, certainly better than using ordinary tables. Best practice is to design your application so that it never needs to delete the rows from the temp table - since the GTT is automatically cleared when the session ends.
I used a global temporary table recently and it behaved in a very unwanted manner.
I was using the temp table to format some complex data in a procedure call and, once the data was formatted, pass it to the front end (ASP.NET).
On the first call to the procedure I would get the correct data, but any subsequent call would give me the data from the previous procedure call in addition to the current call.
I investigated on the net and found the ON COMMIT DELETE ROWS option.
I thought that would fix the problem. Guess what? When I used the ON COMMIT DELETE ROWS option, I always got 0 rows back from the database, so I had to go back to the original approach of ON COMMIT PRESERVE ROWS, which preserves the rows even after committing the transaction. This option clears rows from the temp table only after the session is terminated.
Then I found this post and learned about the column used to track the call_id of a session.
I implemented that solution and it still didn't fix the problem.
Then I put the following statement in my procedure before starting any processing:
Delete From Temp_table;
The above statement did the trick. My front end was using connection pooling; after each procedure call it committed the transaction but kept the connection in the pool, and the subsequent request used the same connection, so the database session was not terminated after every call.
Deleting the rows from the temp table before starting any processing made it work.
It drove me nuts until I found this solution.