I have a problem that requires an undo operation for each executed SQL file in an Oracle database.
I execute them from an XML file with MSBuild, using an Exec command that calls sqlplus with the login and @*.sql.
Obviously ROLLBACK won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I have found is Oracle Flashback and Point-in-Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes some changes at the same time, my solution should undo only the changes made by user 'X', not by user 'Y'.
I found start_scn and commit_scn in flashback_transaction_query. But do they identify only one user? What happens if I flash back to a given SCN? Will that undo changes only for me, or for other users as well? I took
select start_scn from flashback_transaction_query WHERE logon_user='MY_USER_NAME'
and
WHERE table_name = 'MY_TABLE_NAME'
and performed
FLASHBACK TO SCN <number here>
on a chosen operation's SCN. Will that work for me?
I also found out about Point-in-Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible, but maybe not with the tools you use. sqlplus can roll back your transaction; you just have to make sure autocommit isn't enabled and that your scripts contain only a single COMMIT right before you end the sqlplus session (if you don't commit at all, sqlplus will roll back all changes when it exits).
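For illustration, a minimal SQL*Plus wrapper along those lines might look like this (the script name is hypothetical; WHENEVER SQLERROR makes an error abort the session with a rollback):
SET AUTOCOMMIT OFF
WHENEVER SQLERROR EXIT ROLLBACK
-- all the DML, with no COMMIT inside the script itself
@my_changes.sql
-- the single commit, right before leaving the session
COMMIT;
EXIT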
The problems start when you have several scripts and you want, for example, to roll back a script that you ran yesterday. This is a whole new can of worms and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (created, updated, or deleted) by the script, plus the script name and the time when it was executed.
With this information, you can generate SQL which undoes the changes made by a script. To fill such a table, use triggers or generate your scripts in such a way that they write this information as well (note: this is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this).
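As a rough sketch of what such an audit table and trigger could look like (all names here are invented, a real solution would need one trigger per audited table, and the script name would have to be set by the running script, e.g. via DBMS_APPLICATION_INFO.SET_CLIENT_INFO):
CREATE TABLE script_audit (
  script_name  VARCHAR2(200),
  executed_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
  table_name   VARCHAR2(128),
  action       VARCHAR2(10),    -- INSERT / UPDATE / DELETE
  row_key      VARCHAR2(100)    -- primary key of the affected row
);

CREATE OR REPLACE TRIGGER my_table_audit
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW
BEGIN
  INSERT INTO script_audit (script_name, table_name, action, row_key)
  VALUES (SYS_CONTEXT('USERENV', 'CLIENT_INFO'),   -- set by the running script
          'MY_TABLE',
          CASE WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               ELSE 'DELETE' END,
          TO_CHAR(COALESCE(:NEW.id, :OLD.id)));
END;
/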
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which holds the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second gets "undo_sql" from flashback_transaction_query, which is the opposite of your DML action: if it was an INSERT, then undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE and INSERT (DML) on a specific table, and on ALTER, DROP and CREATE (DDL) on a specific schema or view.
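For reference, the skeleton of such a schema-level DDL trigger might look roughly like this (the log table is invented, and this simplified version records the event metadata via the ora_* attribute functions rather than reading the statement text from v$open_cursor as described above):
CREATE OR REPLACE TRIGGER capture_ddl
BEFORE CREATE OR ALTER OR DROP ON SCHEMA
BEGIN
  INSERT INTO ddl_log (event, object_name, performed_at)
  VALUES (ora_sysevent, ora_dict_obj_name, SYSTIMESTAMP);
END;
/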
So, I can successfully run any SELECT statement, but any UPDATE statement just hangs until it eventually times out. The same happens when trying to execute any stored procedure. Other users that connect to the database can run anything without running into this problem.
Is there a cache per user that I can dump or something along those lines? I usually get sick of waiting and cancel the operation, so I don't know if that has contributed to the problem or not.
Just for reference, it's things as simple as these:
UPDATE SOME_TABLE
SET SOME_COLUMN = 'TEST';
EXECUTE SOME_PROCEDURE(1234);
But this works:
SELECT * FROM SOME_TABLE; -- various WHERE clauses don't cause any problems.
UPDATE:
Probably a little disappointing for anyone who came here looking for an answer to a similar problem, but the issue ended up being twofold. First, the DBA didn't think it was important to give me many details, but there were limitations intentionally set on the Oracle server for procedures in general (temp space issues and things of that ilk). And second, there was an update to the procedure that I wasn't aware of, which ran a sub-query for every record pulled by the query (thousands of records). That was removed and now it's running as expected.
In my experience this happens most often because there is another uncommitted operation on the table. For example: user 1 successfully issues an update but does not commit it or roll it back. User 2 (or even another session of user 1) issues another update, which just hangs until the other pending update is committed or rolled back. You say that "other users" don't have the same problem, which makes me wonder whether they are committing their changes, and if so, whether they are updating the same table or a different one.
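If you want to check for that, a query along these lines (against v$session, which has a blocking_session column in recent Oracle versions) shows who is waiting on whom:
SELECT sid, serial#, username, blocking_session, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;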
Good day,
Two questions:
A) If I have something like this:
COMPLEX QUERY
WAIT FOR LOG TO FREE UP (DELAY)
COMPLEX QUERY
Would this actually work? Or would the log segment of tempdb remain just as full, because it is still holding on to the log of the first query?
B) In the situation above, is it possible to have the middle query perform a dump tran with truncate_only ?
(It's a very long chain of various queries that are run together. They don't change anything in the databases and I don't care to even keep the logs if I don't have to.)
The reason for the chain is that I need the same two temp tables, and a whole bunch of variables, for various queries in the chain (some of them for all of the queries). To simplify the usage of the query chain for a user with VERY limited SQL knowledge, I collect very simple information at the beginning of the long script, retrieve the rest automatically, and then use it throughout the script.
I doubt either of these would work, but I thought I may as well ask.
Sybase versions 15.7 and 12 (12.? I don't remember)
Thanks,
Ziv.
Per my understanding of @michael-gardner's answer, this is what I plan:
FIRST TEMP TABLES CREATION
MODIFYING OPERATIONS ON FIRST TABLES
COMMIT
QUERY1: CREATE TEMP TABLE OF THIS QUERY
QUERY1: MODIFYING OPERATIONS ON TABLE
QUERY1: SELECT
COMMIT
(REPEAT)
DROP FIRST TABLES (end of script)
I read that 'select into' is not written to the log, so I'm creating the table with a CREATE TABLE (I have to do it this way for other reasons) and using a select into the existing table for the initial population (these are temp tables).
Once done with the table, I drop it, then 'commit'.
At various points in the chain I check the log segment of tempdb; if it's <70% (normally at >98%), I use a goto to jump to the end of the script, where I drop the last temp tables and the script ends (so there is no need for a manual 'commit' there).
I misunderstood the whole "on commit preserve rows" thing; that's solely an IQ feature, and I'm on ASE.
Dumping the log mid-transaction won't have any effect on the amount of log space used. The Sybase log marker will only move if there is a commit (or rollback), AND if there isn't an older open transaction (which can be found in syslogshold).
There are a couple of different ways you can approach solving the issue:
Add log space to tempdb.
This would require no changes to your code and is not very difficult. It's even possible that tempdb is not properly sized for the system, and the extra log space would be useful to other applications that use tempdb.
Rework your script to add a commit at the beginning, and query only for the later transactions.
This would accomplish a couple of things. The commit at the beginning would move the log marker forward, which would allow the log dump to reclaim space. And since the rest of your queries are only reads, there shouldn't be any transaction log space associated with them. Remember the transaction log only stores information on inserts/updates/deletes, not reads.
In the example you listed above, the user's details could be stored and committed to the database, then the rest of the queries would just be SELECT statements using those details for the variables, and a final transaction would clean up the table. In this scenario the log is only held for the first transaction and the last transaction, but the queries in the middle would not fill the log.
Without knowing more about the DB configuration or query details it's hard to get much more detailed.
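Still, as a rough sketch of that structure (the table, column, and literal values here are invented):
-- transaction 1: store the details collected at the start of the chain
create table #details (cust_id int, from_date datetime, to_date datetime)

begin tran
    insert into #details values (1234, '20160101', '20160630')
commit tran   -- the tempdb log marker can now move past this work

-- middle of the chain: read-only queries, which do not consume log space
select o.order_id, o.amount
from   orders o, #details d
where  o.cust_id = d.cust_id
  and  o.order_date between d.from_date and d.to_date

-- end of the chain: final cleanup
drop table #details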
I am doing a test that updates my database each time I run it, and I cannot run the test again with the updated values.
I am recreating the WHOLE database with:
postgres=# drop database mydb;
DROP DATABASE
postgres=# CREATE DATABASE mydb WITH TEMPLATE mycleandb;
CREATE DATABASE
This takes a while
Is there any way I can update just the tables that I changed with tables from mycleandb?
Transactions
You haven't mentioned what your programming language or framework is. Many of them have built-in test mechanisms that take care of this sort of thing. If you are not using one of them, what you can do is start a transaction in each test setup and then roll it back when you tear down the test.
BEGIN;
...
INSERT ...
SELECT ...
DELETE ...
ROLLBACK;
ROLLBACK, as the name suggests, reverses all that has been done to the database so that it remains in its original condition.
There is one small problem with this approach, though: you can't do integration tests where you intentionally enter incorrect values and cause a query to fail an integrity check. If you do that, the transaction is aborted and no new statements can be executed until it is rolled back.
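For example (assuming a table named accounts with a primary key on id):
BEGIN;
INSERT INTO accounts (id) VALUES (1);
INSERT INTO accounts (id) VALUES (1);  -- fails: duplicate key violates the primary key
SELECT * FROM accounts;                -- ERROR: current transaction is aborted
ROLLBACK;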
pg_dump/pg_restore
It's possible to use the -t option of pg_dump to dump and then restore one or a few tables. This may be the next best option when transactions are not practical.
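For example, something along these lines should refresh a single table in the test database from the clean copy (the table name is a placeholder; --clean makes the dump drop and recreate the table before loading it):
pg_dump --clean -t some_table mycleandb | psql mydb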
Non Durable Settings / Ramdisk
If both of the above options are inapplicable, please see this answer: https://stackoverflow.com/a/37221418/267540
It's on a question about Django testing, but there is very little Django-specific material in it. Coincidentally, Django's rather excellent test framework relies on the begin/update/rollback mechanism described above by default.
Test inside a transaction:
begin;
update t
set a = 1;
Check the results and then:
rollback;
It will be back to a clean state.
We have many SQL Server scripts. But there are a few critical scripts that should only be run at certain times under certain conditions. Is there a way to protect us from ourselves with some kind of popup warning?
i.e. When these critical scripts are run, is there a command to ask the user if they want to continue?
(We've already made some rollback scripts to handle these, but it would be better if they were not accidentally run at all.)
No, there is no such thing.
You can write an application (windows service?) that will only run the scripts as and when they should be.
The fact that you are even asking the question shows that this is something that should be automated, the sooner the better.
You can mitigate the problem in the meantime by using IF to test for these conditions and only executing if they are met. If this is a series of scripts, you should wrap them in transactions to boot.
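A minimal sketch of that kind of guard (the condition table and all names are placeholders):
IF EXISTS (SELECT 1 FROM dbo.DeploymentWindow WHERE is_open = 1)
BEGIN
    BEGIN TRANSACTION;
    -- ... the critical statements go here ...
    COMMIT TRANSACTION;
END
ELSE
    RAISERROR('Conditions not met - the critical script was not executed.', 16, 1);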
One work-around you can use is the following, which would require you to update a value in another table:
CREATE PROC dbo.MyProc
AS
    -- wait until someone sets the GoBit in the control table
    WHILE (SELECT GoBit FROM dbo.OKToRun) = 0
    BEGIN
        RAISERROR('Waiting for GoBit to be set!', 0, 1)
        WAITFOR DELAY '00:00:10'
    END

    -- reset the flag so the next run has to be approved again
    UPDATE dbo.OKtoRun
    SET GoBit = 0

    ... DO STUFF ...
This will require you, in another spid or session, to update that table manually before the procedure will proceed.
This gets a lot more complicated with multiple procedures, so it will only work as a very short-term workaround.
SQL is a query language; it does not have the ability to accept user input.
The only thing I can think of would be to make it @variable driven: the first part sets @shouldRunSecond = 1, and the second part is wrapped in
if @shouldRunSecond = 1
begin
...
end
The second portion will be skipped if it is not desired.
The question is: where are these scripts located?
If you have them as .sql files that you open every time before you run them, then you can simply add some "magic numbers" at the beginning of the script that you have to recalculate every time before you run it. In the example below, each time before you run your script you have to put the correct day and minute into the IF condition, otherwise the script will not run.
IF DATEPART(dd,GETDATE())!=5 or DATEPART(mi,(GETDATE()))!=43
BEGIN
RAISERROR ('You have accidentally tried to run your dangerous script !!!',16,1);
RETURN
END
--Some dangerous actions
drop database MostImportantCustomer
update Personal set Bonus=0 where UserName=SUSER_SNAME()
If your scripts reside in stored procedures, you can add some kind of "I am sure, I know what I am doing" parameter into which you always have to pass, for example, the minute multiplied by the day.
Hope it helps.
I have seen batch scripts containing SQLCMD ..., so instead of running the .sql script from code or Management Studio, you could add a prompt in the batch script.
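As a hypothetical sketch of such a wrapper (the server, database, and file names are made up):
@echo off
set /p answer=This script modifies production data. Type YES to continue: 
if /i not "%answer%"=="YES" (
    echo Aborted.
    exit /b 1
)
sqlcmd -S MyServer -d MyDatabase -E -i critical_script.sql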
I have (on limited occasions) created an @AreYouSure parameter that must be passed into a stored procedure, and then put comments next to the declaration in the stored procedure explaining the danger of running said procedure.
At least that way, no RANDOs will wander into your environment and kick off stored procedures when they don't understand the consequences. The parameter could be worked into an IF statement that checks its value, or it doesn't really have to be used at all, but if it must be passed, then they at least have to figure out what to pass.
If you use this too much, though, others may just start passing a 'Y' or a 1 into every stored procedure without reading the comments. You could switch up the data types, but at some point it becomes more work to maintain this scheme than it is worth. That is why I use it only on limited occasions.
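A minimal sketch of what that can look like (the procedure name and body are illustrative):
CREATE PROC dbo.PurgeOldOrders
    @AreYouSure CHAR(1)   -- read the comments below before passing 'Y'!
AS
BEGIN
    -- this procedure permanently deletes order history; run only during maintenance windows
    IF (@AreYouSure <> 'Y')
    BEGIN
        RAISERROR('Pass @AreYouSure = ''Y'' only if you understand the consequences.', 16, 1);
        RETURN;
    END;

    -- ... the dangerous work goes here ...
END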
I have a T-SQL script that is used to set up a database as part of my product's installation. It performs a number of steps which all together take five minutes or so. Sometimes this script fails on the last step because the user running the script does not have sufficient rights to the database. In that case I would like the script to fail straight away. To do this, I want the script to test up front what rights it has. Can anyone point me at a general-purpose way of testing whether the script is running with a particular security permission?
Edit: In the particular case I am looking at it is trying to do a backup, but I have had other things go wrong and was hoping for a general purpose solution.
select * from fn_my_permissions(NULL, 'SERVER')
This gives you a list of permissions the current session has on the server
select * from fn_my_permissions(NULL, 'DATABASE')
This gives you a list of permissions for the current session on the current database.
See here for more information.
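Since the step that fails in your case is a backup, you could, for example, test for that specific permission up front (permission_name is one of the columns returned by fn_my_permissions):
IF NOT EXISTS (SELECT 1 FROM fn_my_permissions(NULL, 'DATABASE')
               WHERE permission_name = 'BACKUP DATABASE')
BEGIN
    RAISERROR('The current user cannot back up this database - aborting.', 16, 1);
    RETURN;
END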
I assume it is failing on an update or insert after a long series of selects.
Just try a simple update or insert inside a transaction. Hard-code the row id, or whatever to make it simple and fast.
Don't commit the transaction; instead, roll it back.
If you don't have rights to do the insert or update, this should fail. If you DO, it will roll back and not cause a permanent change.
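For example, something like this (the table and column names are placeholders):
BEGIN TRANSACTION;
    UPDATE dbo.SomeTable
    SET    SomeColumn = SomeColumn
    WHERE  SomeKey = 1;    -- fails here if you lack UPDATE permission
ROLLBACK TRANSACTION;      -- nothing is permanently changed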
Try the last insert/update up front with a WHERE condition that matches no rows, for example:
UPDATE SomeTable
SET SomeColumn = SomeColumn
WHERE 1 = 2

IF (@@ERROR <> 0)
    RAISERROR('no permissions', 16, 1)
This would not cause any harm, but it would raise a flag up front about the lack of rights.