I am running a test that updates my database each time it runs, and I cannot run the test again with the updated values.
I am recreating the WHOLE database with:
postgres=# drop database mydb;
DROP DATABASE
postgres=# CREATE DATABASE mydb WITH TEMPLATE mycleandb;
CREATE DATABASE
This takes a while.
Is there any way I can update just the tables that I changed with their versions from mycleandb?
Transactions
You haven't mentioned what programming language or framework you are using. Many of them have built-in test mechanisms that take care of this sort of thing. If you are not using one of them, what you can do is start a transaction in each test setup, then roll it back when you tear down the test.
BEGIN;
...
INSERT ...
SELECT ...
DELETE ...
ROLLBACK;
Rollback, as the name suggests, reverses all that has been done to the database, so that it remains in its original condition.
There is one small problem with this approach, though: you can't do integration tests where you intentionally enter incorrect values and cause a query to fail an integrity check. If you do that, the transaction is aborted and no new statements can be executed until it is rolled back.
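For illustration, here is roughly what that looks like in psql against a hypothetical table t whose column a is NOT NULL (the exact error text varies with the constraint and server version):
testdb=# BEGIN;
BEGIN
testdb=# INSERT INTO t (a) VALUES (NULL);
ERROR:  null value in column "a" violates not-null constraint
testdb=# SELECT count(*) FROM t;
ERROR:  current transaction is aborted, commands ignored until end of transaction block
testdb=# ROLLBACK;
ROLLBACK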
pg_dump/pg_restore
It's possible to use the -t option of pg_dump to dump and then restore one or a few tables. This may be the next best option when transactions are not practical.
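As a rough sketch, assuming the changed table is called mytable, something along these lines should work (a custom-format dump lets pg_restore drop and recreate the table; foreign keys referencing it can complicate the drop):
pg_dump -Fc -t mytable mycleandb > mytable.dump
pg_restore --clean --if-exists -d mydb mytable.dump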
Non Durable Settings / Ramdisk
If both above options are inapplicable please see this answer: https://stackoverflow.com/a/37221418/267540
It's on a question about Django testing, but there's very little Django-specific stuff in that answer. Coincidentally, though, Django's rather excellent test framework relies on the begin/update/rollback mechanism described above by default.
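As a sketch of the non-durable settings, these are the usual postgresql.conf entries for a throwaway test cluster; assume the data is disposable, because these settings trade crash safety for speed:
fsync = off
synchronous_commit = off
full_page_writes = off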
Test inside a transaction:
begin;
update t
set a = 1;
Check the results and then:
rollback;
It will be back to a clean state.
Related
I once made a silly mistake at work on one of our in-house test databases. I was updating a record I had just added because I had made a typo, but it ended up updating many records, because in the WHERE clause I used the foreign key instead of the unique id of the particular record I had just added.
One of our senior developers told me to do a SELECT first to see which rows the update will affect before actually running it. Besides this, is there a way to execute a query and see the results, but not have it commit to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then, after seeing the results, you just run it again with ROLLBACK changed to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and be aware that you may be blocking other connections until you commit, roll back or close the connection.
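The same option can also be set per session in T-SQL; a minimal sketch with a hypothetical table:
SET IMPLICIT_TRANSACTIONS ON;

UPDATE dbo.MyTable          -- hypothetical table
SET Col1 = 'something'
WHERE Id = 1;               -- the first DML statement opens a transaction

SELECT Col1 FROM dbo.MyTable WHERE Id = 1;   -- inspect the result

COMMIT;                     -- or ROLLBACK; nothing is permanent until one of these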
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't, don't run the code until you do.
The most important idea is that it is a dev database, so expect it to get messed up - make sure you have a quick way to reload it.
Doing a select first is always a good idea, to see which rows will be affected.
However, for a quicker way back to a good state of the database, which I would do anyway:
For a simple update etc.
Use transactions.
Do a BEGIN TRANSACTION, then do all the updates etc., and then select to check the data.
The database will not be affected, as far as others can see, until you do a final COMMIT, which you only do when you are sure all is correct, or a ROLLBACK to get back to the state at the beginning.
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables whose names are similar to the production tables and whose schema, other than index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
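A minimal sketch of that idea, assuming a hypothetical production table called customer (the syntax for temporary tables varies by database; this is the generic CREATE TEMP TABLE ... AS form rather than Informix's own):
-- scratch copy with the same columns and data to experiment on
CREATE TEMP TABLE tmp_customer AS
SELECT * FROM customer;

-- run the queries you want to test against tmp_customer and inspect the results
UPDATE tmp_customer SET status = 'X' WHERE customer_id = 42;
SELECT * FROM tmp_customer WHERE customer_id = 42;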
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful, even if you were allowed -- MS SQLSERVER lingo here -- a second instance on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use a BEGIN TRAN and a ROLLBACK TRAN, and that all statements have an OUTPUT clause. This way we can run the script first (usually against a copy of the PRODUCTION db) and see what is affected before changing the ROLLBACK TRAN to COMMIT TRAN.
Have you considered explain?
If there is a mistake in the command, it will report it just as the usual commands do.
But if there are no mistakes, it will not run the command; it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with insert, update and delete (i.e. the "dangerous" ones).
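For example, against the same table (somecolumn and the id value are hypothetical):
testdb=# explain update sometable set somecolumn = 1 where id = 42;
The output is a query plan like the one above and the table is left untouched. Note that explain analyze, by contrast, does execute the statement, so avoid it for this purpose.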
In my multi-threaded program, one thread drops indexes on a table (this happens first), and other threads insert records into the same table. It so happens that when the index drop is attempted, the table gets locked and the insert transactions end up waiting.
After wasting a lot of time on non-solutions to the problem, I found that the real solution is to commit immediately after dropping the index. When the commit is issued, the table is unlocked and the insert transactions complete successfully.
My question is: why? I was under the impression that DROP INDEX is a DDL statement and therefore does not need to be committed. Postgres seems to prove me wrong.
In PostgreSQL, all DDL commands are transactional. So if you start a transaction block, or your driver starts a transaction block for you, or your driver is not in autocommit mode, you need to commit all DDL commands, just like other SQL commands.
Other SQL databases do this differently.
(Nitpicking: Some DDL commands in PostgreSQL cannot be run in a transaction block, only in a transaction by themselves. So you may consider those to be exceptions to the above "all DDL commands" claim. But that's not quite the same thing as your question: Those commands still need to be committed, they just can't be run in a transaction together with other commands.)
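A minimal sketch of the scenario from the question, with hypothetical names (DROP INDEX takes an exclusive lock on the table that is only released when the transaction ends):
BEGIN;
DROP INDEX my_index;   -- hypothetical index on my_table; my_table is now locked
-- other sessions inserting into my_table block at this point
COMMIT;                -- the lock is released and the waiting inserts proceed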
I don't know about Postgres, but DDL statements are not always auto-committed.
In Oracle, for example, they are, but in DB2 they are not (you can do a CREATE TABLE plus indexes and then roll back the whole lot). I think SQL Server also needs the commit (unless auto-commit is on).
Basically (depending on the DB flavour) a DDL statement is not always auto-committed.
I have a problem to solve which requires undoing each executed SQL file in an Oracle database.
I execute them from an XML file with MSBuild - an exec command runs sqlplus with the login and #*.sql.
Obviously rollback won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I have learned about is Oracle Flashback and Point in Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes some changes at the same time, then my solution should perform the undo only for user 'X', not 'Y'.
I found the start_scn and commit_scn in flashback_transaction_query. But does it identify only one user? What if I flashback to a given SCN? Will that undo only my changes, or those of other users as well? I have picked out
select start_scn from flashback_transaction_query WHERE logon_user='MY_USER_NAME'
and
WHERE table_name = 'MY_TABLE NAME'
and performed
FLASHBACK TO SCN <number here>
on a chosen operation's SCN. Will that work for me?
I also found out about Point in Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible, but maybe not with the tools that you use. sqlplus can roll back your transaction; you just have to make sure autocommit isn't enabled and that your scripts contain only a single COMMIT right before you end the sqlplus session (if you don't commit at all, sqlplus will always roll back all changes when it exits).
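A sketch of how the session might be driven, with a hypothetical script name (SET AUTOCOMMIT OFF and WHENEVER SQLERROR are standard SQL*Plus commands; adjust to your setup):
SET AUTOCOMMIT OFF
WHENEVER SQLERROR EXIT ROLLBACK

-- my_changes.sql is the hypothetical change script; it must not COMMIT itself
@my_changes.sql

-- inspect the results, then finish with exactly one of COMMIT or ROLLBACK
COMMIT;
EXIT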
The problems start when you have several scripts and you want, for example, to rollback a script that you ran yesterday. This is a whole new can of worms and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (= created, updated, deleted) by the script, plus the script name and the time when it was executed.
With this information, you can generate SQL which can undo the changes made by a script. To fill such a table, use triggers or generate your scripts in such a way that they write this information as well (note: this is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this).
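A rough sketch of what such a table could look like in Oracle (all names are hypothetical; the details depend on how you identify rows and scripts):
CREATE TABLE script_change_log (
    script_name  VARCHAR2(200) NOT NULL,     -- which *.sql file made the change
    table_name   VARCHAR2(128) NOT NULL,
    row_id       VARCHAR2(100) NOT NULL,     -- primary key or ROWID of the affected row
    action       VARCHAR2(10)  NOT NULL,     -- 'INSERT', 'UPDATE' or 'DELETE'
    undo_sql     CLOB,                       -- statement that reverses the change
    executed_at  TIMESTAMP DEFAULT SYSTIMESTAMP
);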
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which is the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second gets "undo_sql" from flashback_transaction_query, which is the opposite of your DML action - if INSERT, then undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE and INSERT (DML) on a specific table, and before ALTER, DROP and CREATE (DDL) on a specific SCHEMA or VIEW.
At my client's site they have a database. Once I complete the incremental changes to the database, I have to prepare the list of SQL object changes in one SQL file.
The script is like this:
If SQL object 1 is present in the database
DROP SQL object 1
GO
CREATE SQL object 1
If SQL object 2 is present in the database
DROP SQL object 2
CREATE SQL object 2
Each time, I drop the existing object and re-create it.
Now this batch may contain some errors.
My requirement is that if there is any error in the file, none of the SQL objects should be re-created; it should roll back to the old SQL objects.
If there is no error, then it should create all the SQL objects.
Due to the GO statement in the middle, I am not able to use a TRANSACTION in SQL.
How can this be solved?
Don't use GO, then. Simply remove it from your script, and add your BEGIN and COMMIT TRANSACTION commands where you need them.
BEGIN TRAN
IF EXISTS Object1
BEGIN
DROP Object1;
END
CREATE Object1;
IF EXISTS Object2
BEGIN
DROP Object2;
END
CREATE Object2;
COMMIT TRAN
Modifying database schema via DROP/CREATE has many problems:
it may lose data
it loses permissions and extended properties added to the dropped objects
cross-object dependencies (e.g. foreign keys) require a certain order of drop/create
Usually it is better to try to ALTER the object from schema version to schema version. This requires you to know which schema version is currently deployed, but that problem is easily solvable (use a database extended property, see Version Control and your Database).
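A hedged sketch of the extended-property idea in T-SQL (the property name SchemaVersion and the version value are just examples):
-- record the deployed schema version on the database itself
EXEC sp_addextendedproperty @name = N'SchemaVersion', @value = N'1.4';

-- later, read it back before deciding which upgrade script to run
SELECT value
FROM sys.extended_properties
WHERE class = 0 AND name = N'SchemaVersion';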
Back to your question, a naive approach is to wrap your entire script in a big BEGIN TRAN/COMMIT, but that seldom works:
it creates a potentially large transaction that requires much log space.
the result is impossible to validate until after the commit, when it is too late to do anything about it
the behavior when mingling exceptions and transactions is messy at best. XACT_ABORT ON helps somewhat, but only so much.
not all DDL statements can be run from inside a transaction
For these reasons I would recommend a much simpler and safer approach: take a backup, WITH COPY_ONLY, of the database before modifying the schema. If anything goes wrong, restore the copy. Alternatively, a database snapshot can be used as a backup. See How to: Revert a Database to a Database Snapshot.
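A minimal sketch of the backup-before-deploy approach, with hypothetical database and file names:
-- before running the schema change script
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb_pre_deploy.bak'
WITH COPY_ONLY, INIT;

-- if the deployment goes wrong, restore the copy
RESTORE DATABASE MyDb
FROM DISK = N'D:\Backups\MyDb_pre_deploy.bak'
WITH REPLACE;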
Note that BEGIN TRAN/COMMIT can span batches (i.e. they can be separated by multiple GO statements), so your concern is not an issue.
When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) randomly get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using:
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and to recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
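A sketch of that change on a hypothetical table, in Sybase ASE syntax (follow up with fresh statistics and recompiled objects as described above):
alter table mytable lock datarows
go
update statistics mytable
go
sp_recompile mytable
go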
I have a set of long-running apps which occasionally overlap in their table access, and Sybase will throw this error. If you check the Sybase server log it will give you the complete info on why it happened, like the SQL that was involved and the two processes trying to get a lock. Usually one is trying to read and the other is doing something like a delete. In my case the apps run in separate JVMs, so I can't synchronize them; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan), you could try breaking the component parts of the SP down and wrapping them in separate transactions, so that each unit of work is completed before the next one starts.
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go