I'm trying to combine dynamic update and select on DB2.
Everything works fine until I apply transaction control (i.e. COMMIT, ROLLBACK, or SAVEPOINT).
For instance, executing a very basic command
SAVEPOINT SAVEPOINT1 ON ROLLBACK RETAIN CURSORS
gives "COMMIT, ROLLBACK, or SAVEPOINT not valid."
The same happens for a plain COMMIT and the others.
Can anybody explain why I'm unable to execute these commands, and how to fix it?
Google shows only IBM docs with examples that are not applicable.
The library (schema) containing your tables must be journaled. See this link for how to set up journaling.
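On DB2 for i, that error usually means the table isn't journaled, since commitment control requires journaling. A minimal sketch of setting it up, with hypothetical names MYLIB/MYTABLE, running the CL commands from SQL via QSYS2.QCMDEXC:

CALL QSYS2.QCMDEXC('CRTJRNRCV JRNRCV(MYLIB/MYJRNRCV)');               -- create a journal receiver
CALL QSYS2.QCMDEXC('CRTJRN JRN(MYLIB/MYJRN) JRNRCV(MYLIB/MYJRNRCV)'); -- create the journal
CALL QSYS2.QCMDEXC('STRJRNPF FILE(MYLIB/MYTABLE) JRN(MYLIB/MYJRN)');  -- start journaling the table

Note that schemas created with the SQL CREATE SCHEMA statement get a journal (QSQJRN) automatically; the error typically shows up when the tables live in a library created with CRTLIB.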
I accidentally included the word "data" in a SQL script I wrote and am not sure what it does. Can someone tell me what the following script would actually do if the ROLLBACK TRANSACTION were not there?
Begin Transaction
data
Rollback Transaction
I think "Data" is in the future reserved keyword list in our SQl Server. But I'm not sure if this is why the script runs without error.
Not sure what happened; the script ran correctly.
You are naming the transaction data: BEGIN TRANSACTION accepts an optional transaction name, and the line break before data doesn't matter.
Without the rollback you would just have an open transaction called data.
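A quick way to see this in plain T-SQL:

BEGIN TRANSACTION data;      -- "data" is parsed as the transaction name
SELECT @@TRANCOUNT;          -- returns 1: a transaction named "data" is open
ROLLBACK TRANSACTION data;   -- rolls it back, referring to it by name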
According to my knowledge and the reserved keywords for SQL Server, available here:
https://learn.microsoft.com/pt-br/sql/t-sql/language-elements/reserved-keywords-transact-sql?view=sql-server-2017
there is no meaning for the word data; it could be an abbreviation for database, or it could just be a bug.
I can't get my Teradata SQL transaction to work through a logstash file.
I am running a somewhat complex transaction with multiple statements (some of them DDL), each relying upon previous statements, in Teradata. I'm using the jdbc input plugin in logstash. The transaction creates multiple volatile tables that provide columns of information which I use in later statements to complete the transaction. This transaction works perfectly fine when run in Teradata Studio, but has yet to work when I've tried to run it through a jdbc.conf file.
When I run the transaction through my config file from the command line, I receive error message 3932, which essentially tells me that I need to enter COMMIT statements after my volatile tables. I have looked into the error and have tried, without success:
entering COMMIT statements after each volatile table
placing BT and ET at the beginning and end of the transaction
changing modes within the Teradata jdbc_connection_string parameters in hopes of enabling autocommit (not sure whether it is currently disabled or not)
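For context, error 3932 is, as I understand it, Teradata complaining that in BTET (TERA) session mode only an ET or null statement is legal after a DDL statement, so each volatile-table creation must end its own transaction. A minimal sketch of that pattern, with hypothetical table names (in ANSI session mode each ET would be a COMMIT instead):

BT;
CREATE VOLATILE TABLE vt_step1 AS (
    SELECT user_id, event_ts FROM src_events  -- hypothetical source table
) WITH DATA
ON COMMIT PRESERVE ROWS;
ET;  -- each DDL statement must be the last statement in its transaction

The session mode itself is chosen with the TMODE parameter of the JDBC connection string, e.g. jdbc:teradata://host/TMODE=ANSI or TMODE=TERA.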
I know the only issue is running my transaction through the JDBC plugin, as (as mentioned before) the transaction works in Teradata Studio, and my jdbc.conf file has run successfully with a simpler query.
Any help would be much appreciated.
We have a couple of migration scripts, which alter the schema from version to version.
Sometimes it happens, that a migration step (e.g. adding a column to a table) was already done manually or by a patch installation, and thus the migration script fails.
How do I prevent the script from stopping on error (ideally at specific expected errors) and instead log a message and continue with the script?
We use PostgreSQL 9.1; both a PostgreSQL-specific solution and a general SQL solution would be fine.
Although @LucM's answer seems to be a good recommendation, @TomasGreif pointed me to an answer that went more in the direction of my original request.
For the given example of adding a column, this can be done by using a DO block to catch the exception:
DO $$
BEGIN
    BEGIN
        ALTER TABLE mytable ADD COLUMN counter integer DEFAULT 0;
    EXCEPTION
        WHEN duplicate_column THEN
            RAISE NOTICE 'counter column already exists';
    END;
END;
$$;
The hint led me to the right PostgreSQL page describing the error codes one can trap, so that was the right solution for me.
I don't think that you have another solution than running the entire script outside of a transaction.
What I would do if I were in that situation:
Do the modifications to the metadata (drop/create table/column...) outside of a transaction.
Do all modifications to the data (update/insert/delete) inside a transaction.
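A sketch of that split, with hypothetical table and column names:

-- Schema change outside any explicit transaction,
-- so a failure here doesn't roll back unrelated work:
ALTER TABLE accounts ADD COLUMN status text;

-- Data changes grouped in one transaction, so they apply atomically:
BEGIN;
UPDATE accounts SET status = 'active' WHERE status IS NULL;
DELETE FROM accounts WHERE created_at < now() - interval '10 years';
COMMIT;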
We have a developer connecting to SQL Server using pymssql which uses freetds. His script dynamically generates sql insert queries based on values in a MySQL DB.
The statements are parsed properly and have proper BEGIN TRANSACTION/COMMITs in them when you view them in SQL Profiler. The only 'user error message' that comes up is a 'changed database context to...', which appears whenever you issue a USE in SQL. After the batch completes, there is a transaction log event 'rollback' and all the records that were inserted are removed.
We are not using SET XACT_ABORT OFF, because I haven't seen 'changed database context to' be affected by it.
Does anyone have any ideas or experience with this? Thanks!
[edit]:
The code copied out of profiler works fine in SSMS using the same user and there are no triggers.
[2nd edit]:
Inside SQL Profiler I see a 'TransactionLog' entry with 'rollback' under the event subclass; however, there isn't a TM: Rollback Tran event.
Perhaps the connection is not being committed or closed correctly. Check the FreeTDS documentation to ensure that you are using the correct usage patterns. You might also want to check whether it's possible to enable autocommit mode on the connection.
So after much searching and triple-checking the autocommit setting, we caught that two variables were very closely named and the script was committing on the wrong one. There is a MySQL module (pymysql) as well as pymssql; in this case we meant the pymssql connection, but pymysql had been typed instead. Thanks everyone who commented.
Can the dbms_errlog function be used for SELECT queries?
I earlier encountered an error where Oracle was throwing an ORA-0722; I was trying to identify which column, and possibly which row, of a PL/SQL statement was causing that error. However, I found out that dbms_errlog is native only to Oracle 10g and above.
In that case, what alternatives do I have if I am using Oracle 9i?
DBMS_ERRLOG is not a function; it is a PL/SQL package. It contains one procedure, which creates an error table. To log errors to this error table, you need to add the LOG ERRORS clause to your DML statements. From this description it should be obvious that this is tightly integrated with the transaction layer.
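For reference, the 10g+ mechanism looks roughly like this (hypothetical table names; ERR$_ORDERS is the default error-table name CREATE_ERROR_LOG derives from the DML table):

BEGIN
    DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'ORDERS');  -- creates ERR$_ORDERS
END;
/

INSERT INTO orders
SELECT * FROM staging_orders
LOG ERRORS INTO err$_orders ('batch 1') REJECT LIMIT UNLIMITED;

Rows that would raise an error are written to ERR$_ORDERS instead of aborting the whole statement.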
One way to reproduce similar behavior in earlier releases (sketched below) is to:
Create your own error table.
Create a PL/SQL procedure that inserts into that error table. To make sure that the log is written in case of errors, this procedure has to use autonomous transactions.
The calls to log errors have to be explicitly added to the corresponding exception handlers.
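A minimal sketch of that approach, with hypothetical names (my_err_log, log_error):

CREATE TABLE my_err_log (
    logged_at  DATE,
    err_code   NUMBER,
    err_msg    VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_error(p_code NUMBER, p_msg VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the caller's transaction
BEGIN
    INSERT INTO my_err_log VALUES (SYSDATE, p_code, p_msg);
    COMMIT;  -- only ends the autonomous transaction; the caller's work is untouched
END;
/

-- Explicit call from an exception handler:
BEGIN
    INSERT INTO orders SELECT * FROM staging_orders;  -- hypothetical DML
EXCEPTION
    WHEN OTHERS THEN
        log_error(SQLCODE, SQLERRM);
        RAISE;  -- re-raise so the caller still sees the failure
END;
/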