Teradata JDBC Warning 3932 Issue

I can't get my Teradata SQL transaction to work through a Logstash file.
I am running a somewhat complex transaction with multiple statements (some of them DDL), each relying on previous statements in Teradata, using the jdbc input plugin in Logstash. The transaction creates multiple volatile tables that provide columns of information which I reference in later statements to complete the transaction. It works perfectly fine when run in Teradata Studio, but has yet to work when I run it through a jdbc.conf file.
When I run the transaction through my config file from the command line, I receive error message 3932, which essentially tells me that I need to add COMMIT statements after my volatile tables. I have looked into the error and have tried, without success:
entering COMMIT statements after each volatile table
placing BT and ET at the beginning and end of the transaction
changing the transaction mode in the Teradata jdbc_connection_string parameters in hopes of enabling auto-commit (I'm not sure whether it is currently disabled or not)
I know the only issue is how the transaction is submitted through the JDBC input, as (mentioned before) the transaction works in Teradata Studio, and I have successfully run my jdbc.conf file with a simpler query.
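For reference, the COMMIT and BT/ET attempts were structured roughly like this (the table and column names here are placeholders, not my real statement):

-- Attempt: COMMIT after each volatile table
CREATE VOLATILE TABLE vt_step1 AS (
    SELECT key_col, val_col
    FROM source_table
) WITH DATA ON COMMIT PRESERVE ROWS;
COMMIT;
CREATE VOLATILE TABLE vt_step2 AS (
    SELECT s.key_col, o.other_col
    FROM vt_step1 s
    JOIN other_table o ON o.key_col = s.key_col
) WITH DATA ON COMMIT PRESERVE ROWS;
COMMIT;
SELECT * FROM vt_step2;
-- Attempt: wrapping the same statements (without the COMMITs) in BT; ... ET;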
Any help would be much appreciated.

Related

Mule 3.9 Bulk update in batch commit failing all records even if one update fails

My process reads a CSV file and updates the DB with the data from the CSV. I want to do a bulk update; when I use batch commit in a batch process with the commit size set to 50, it works fine when all records succeed. But if the DB update statement fails for even one record, the whole commit block (all 50 records) fails to update in the DB. I read in the Mule documentation that some connectors have the ability to handle record-level errors without failing the whole batch (i.e. upsert), and the Database connector is one of them. I'm not sure whether this scenario falls under that or not. Did anyone face this kind of issue? Is there a workaround without doing record-by-record updates? I would appreciate any thoughts on this issue.
The documentation for database bulk operations says it is up to the JDBC driver:
It may happen that while some statements in the bulk operation can be
successfully executed, some may result in an error. When this occurs,
it will be up to the driver to either:
Stop execution immediately and ignore all remaining operations, or
Continue to execute the remaining statements.

Unable to use transactions on db2 with ibm400

I'm trying to combine dynamic update and select on DB2.
Everything works fine until I apply transaction control (i.e. COMMIT, ROLLBACK, or SAVEPOINT).
For instance, executing the very basic command
SAVEPOINT SAVEPOINT1 ON ROLLBACK RETAIN CURSORS
gives "COMMIT, ROLLBACK, or SAVEPOINT not valid."
The same happens for a simple COMMIT and others.
Can anybody explain why I'm unable to execute these commands and how to fix it?
Google shows only IBM docs with examples that are not applicable.
The library (schema) that contains your tables must be journaled. See this link for how to set up journaling.
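A minimal sketch, assuming the tables can be recreated through SQL (the library and table names are placeholders): on DB2 for i, a schema created with CREATE SCHEMA gets a journal (QSQJRN) automatically, and tables created in it are journaled, so transaction control is then accepted:

-- Hypothetical library; CREATE SCHEMA also creates the QSQJRN journal inside it
CREATE SCHEMA MYLIB;
CREATE TABLE MYLIB.ORDERS (ID INT NOT NULL PRIMARY KEY, AMOUNT DECIMAL(9,2));
-- With the table journaled (and the connection running with commitment
-- control rather than autocommit), these statements no longer fail
INSERT INTO MYLIB.ORDERS VALUES (1, 10.00);
SAVEPOINT SAVEPOINT1 ON ROLLBACK RETAIN CURSORS;
UPDATE MYLIB.ORDERS SET AMOUNT = 20.00 WHERE ID = 1;
ROLLBACK TO SAVEPOINT SAVEPOINT1;
COMMIT;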

Undoing sql scripts

I have a problem to solve which requires undoing each executed SQL file in an Oracle database.
I execute them from an XML file with MSBuild, using an Exec command that runs sqlplus with the login and @*.sql.
Obviously ROLLBACK won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I have found is Oracle Flashback and Point in Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes changes at the same time, then my solution should perform the undo only for user 'X', not 'Y'.
I found start_scn and commit_scn in flashback_transaction_query. But do they identify only one user? What if I flash back to a given SCN? Will that undo changes only for me or for other users as well? I have taken out
select start_scn from flashback_transaction_query WHERE logon_user='MY_USER_NAME'
and
WHERE table_name = 'MY_TABLE_NAME'
and performed
FLASHBACK TABLE MY_TABLE_NAME TO SCN <scn_number>
on a chosen operation's SCN. Will that work for me?
I also found out about Point in Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible but maybe not with the tools that you use. sqlplus can roll back your transaction; you just have to make sure autocommit isn't enabled and that your scripts contain only a single COMMIT right before you end the sqlplus session (if you don't commit at all and EXITCOMMIT is set to OFF, sqlplus will roll back all changes when it exits).
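A minimal sketch of such a wrapper script (the file names are placeholders; SET EXITCOMMIT is available in newer SQL*Plus versions, and any DDL inside the scripts will still commit implicitly):

-- deploy.sql: a single COMMIT at the very end, roll back on any error
SET AUTOCOMMIT OFF
SET EXITCOMMIT OFF
WHENEVER SQLERROR EXIT ROLLBACK
WHENEVER OSERROR EXIT ROLLBACK

@changes_1.sql
@changes_2.sql

COMMIT;
EXIT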
The problems start when you have several scripts and you want, for example, to roll back a script that you ran yesterday. This is a whole new can of worms and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (created, updated, or deleted) by the script, plus the script name and the time when it was executed.
With this information, you can generate SQL which undoes the changes made by a script. To fill such a table, use triggers, or generate your scripts in such a way that they write this information as well (note: this is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this).
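A rough Oracle sketch of that idea (the audit table, the trigger, the audit_pkg helper, and the id column are made-up placeholders, not a tested solution):

-- Audit table recording which rows each script touched
CREATE TABLE script_audit (
    script_name VARCHAR2(200),
    executed_at TIMESTAMP DEFAULT SYSTIMESTAMP,
    table_name  VARCHAR2(128),
    operation   VARCHAR2(10),
    row_id      VARCHAR2(64)
);

CREATE OR REPLACE TRIGGER trg_audit_my_table
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW
BEGIN
    -- audit_pkg.current_script is a hypothetical package function/variable that
    -- each deployment script sets at the start of its run
    INSERT INTO script_audit (script_name, table_name, operation, row_id)
    VALUES (audit_pkg.current_script,
            'MY_TABLE',
            CASE WHEN INSERTING THEN 'INSERT'
                 WHEN UPDATING  THEN 'UPDATE'
                 ELSE 'DELETE' END,
            TO_CHAR(NVL(:NEW.id, :OLD.id)));
END;
/

-- The undo generator then reads script_audit for a given script_name and emits
-- compensating DELETE/UPDATE/INSERT statements; to undo UPDATEs and DELETEs you
-- would also need to store the old column values in the audit table.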
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which is the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second gets "undo_sql" from flashback_transaction_query, which is the opposite of your DML action: if you ran an INSERT, undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE and INSERT (DML) on a specific table, and before ALTER, DROP, and CREATE (DDL) on a specific schema or view.

Transactions in SQL coming from pymssql rollback on their own

We have a developer connecting to SQL Server using pymssql, which uses FreeTDS. His script dynamically generates SQL insert queries based on values in a MySQL DB.
The statements are parsed properly and have proper BEGIN TRANSACTION/COMMITs in them when you view them in SQL Profiler. The only 'user error message' that comes up is 'changed database context to...', which appears whenever you issue a USE in SQL. After the batch completes, there is a transaction log event 'rollback', and all the records that were inserted are removed.
We are not using SET XACT_ABORT OFF, because I haven't seen 'changed database context to' be affected by it.
Does anyone have any ideas or experience with this? Thanks!
[edit]:
The code copied out of profiler works fine in SSMS using the same user and there are no triggers.
[2nd edit]:
Inside SQL Profiler I see a 'TransactionLog' entry with 'rollback' under the event subclass; however, there isn't a TM:Rollback Tran event.
Perhaps the connection is not being committed or closed correctly. Check the FreeTDS documentation to ensure that you are using the correct usage patterns. Also, you might want to check whether it's possible to enable autocommit mode on the connection.
So after much searching and triple-checking the autocommit setting, we caught that two variables were very similarly named and the script was committing on the wrong one. There is a mysql and a pymysql module, but in this case we were using pymssql and it had been typed as pymysql instead. Thanks to everyone who commented.

How can I force a Snapshot Isolation failure of 3960

Story
I have a SPROC using snapshot isolation to perform several inserts via MERGE. This SPROC is called under very high load, often in parallel, so it occasionally throws error 3960, which indicates the snapshot transaction was rolled back because of an update conflict. This is expected because of the high concurrency.
Problem
I've implemented a "retry" queue to perform this work again later, but I am having difficulty reproducing the error to verify that my checks are accurate.
Question
How can I reproduce a snapshot failure (3960, specifically) to verify my retry logic is working?
Already Tried
RAISERROR doesn't work because it doesn't allow me to raise existing system errors, only user-defined ones
I've tried re-inserting the same record, but this doesn't throw the same failure since it's not two different transactions "racing" one another
Open two connections, start a snapshot transaction on both, on connection 1 update a record, on connection 2 update the same record (in the background, because it will block), then on connection 1 commit, as sketched below.
Or treat a user error as a 3960 ...
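A minimal T-SQL sketch of that two-connection sequence (the database, table, and column names are placeholders; run the two halves in separate sessions):

-- Once, on the database: ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Session 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
UPDATE dbo.TestTable SET Val = Val + 1 WHERE Id = 1;
-- leave this transaction open

-- Session 2 (this UPDATE blocks behind session 1's lock)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
UPDATE dbo.TestTable SET Val = Val + 10 WHERE Id = 1;

-- Back on session 1
COMMIT;
-- Session 2 now fails with error 3960 (snapshot update conflict) and its
-- transaction is rolled back; that is the condition the retry logic should catch.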
Why not just do this:
RAISERROR(3960, {sev}, {state})
Replacing {sev} and {state} with the actual values that you see when the error occurs in production?
(Nope, as Martin pointed out, that doesn't work.)
If not that, then I would suggest trying to run your test query multiple times simultaneously. I have done this myself to simulate other concurrency errors. It should be doable as long as the test query is not too fast (it should take at least a couple of seconds).