Using an MS Access DB is nearly always a pain. What really hurts is that you can't even use a distributed transaction in SSIS when utilizing an OLE DB Connection that accesses an MS Access file. EVEN IF YOU ONLY INTEND TO READ IT! WTF? What kind of transaction support is that? So can anybody tell me how to make an SSIS package transactional in this case? Using the normal transaction mode results in an "AcquireConnection failed with error code 0xC0202009" error.
I've run into scenarios like this (an ancient MySQL instance, and a DB2 where they denied us rights to create transactions), and what I found to be an effective solution is to cache the source that doesn't support transactions in a raw file, a cache connection manager, or a recordset destination. A raw file would be my go-to for sheer simplicity's sake, but the others may be needed depending on component requirements.
The general shape of the package looks like this: a package (or enclosing container) with TransactionOption set to Required. This creates a transaction that contained tasks can enlist in. I then create a "starter" container that explicitly opts out of the transaction (TransactionOption: NotSupported). That is where we access the resource that doesn't support transactions. The default TransactionOption, Supported, means a task will enlist in an open transaction if one is available, so I put all the tasks that need the transaction in a Supported container.
In the data flow that is outside the transaction, I dump to a raw file, using the same query that I'd use against a "normal" transaction-supporting source.

In the data flow that consumes the cached output, I use the file generated in the first step, and voilà: the transaction works as intended for the destination, and no transaction is attempted against the source that didn't support one.
You can set the RetainSameConnection property on the OLE DB Connection Manager to True, e.g. when writing to a SQL Server. This in turn enables you to issue a BEGIN TRANSACTION in one Execute SQL Task and then choose to COMMIT or ROLLBACK in another one.

But beware: avoid loose ends, since tasks running in parallel will otherwise be handed multiple connections. You have to chain the precedence constraints from BEGIN TRAN through to COMMIT, otherwise it will fail.
See http://www.morrenth.com/transaction-and-retainsameconnection-in-ssis-ole-db-connection-manager.aspx
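The same idea translated out of SSIS, as a minimal JDBC sketch (the connection string, credentials, and dbo.Target table are placeholders, and it assumes the Microsoft SQL Server JDBC driver is on the classpath): manual transaction control only works because every statement runs on one retained connection, which is exactly what RetainSameConnection guarantees inside the package.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ManualTransaction {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=Demo"; // placeholder
            // One physical connection for every step -- the analog of
            // RetainSameConnection=True. The transaction spans the work only
            // because all statements run on this same connection.
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                conn.setAutoCommit(false);                     // like the BEGIN TRAN task
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("INSERT INTO dbo.Target (Id) VALUES (1)"); // the work
                    conn.commit();                             // like the COMMIT task
                } catch (Exception e) {
                    conn.rollback();                           // like a ROLLBACK task
                    throw e;
                }
            }
        }
    }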
I am currently implementing a server system which has both an SQL database and a Redis datastore. The data written to Redis heavily depends on the SQL data (cache, objects representing logic entities defined by a number of SQL models and their relationships).
While looking for an error handling methodology to wrap client requests, something similar to SQL's transaction commit & rollback (Redis doesn't support rollbacks), I thought of a mechanism which can serve this purpose and I'd appreciate input regarding it.
Basically, I intend to wrap (with before/after middleware) every client request in an SQL transaction and a Redis MULTI command (which queues commands until EXEC or DISCARD is invoked), and to let both commit only if the request was processed successfully.
The problem is that once you start a Redis MULTI, you cannot perform any reads/writes and actually use their values while processing the request. I reduced the problem to reads only, since depending on just-written values can usually be optimized out.
My (simple) solution: split the Redis connection in two - a writer and a reader. The writer connection is the one initialized with the MULTI command and executed/discarded at the end. All writing is performed through it, while reading is done through the reader (and executes instantly).
The downside: as opposed to SQL, you can't rely on values written within the same request (transaction). Then again, that's usually quite easy to overcome.
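A minimal sketch of the split-connection idea, in Java with Jedis for Redis and plain JDBC for the SQL side (the connection details, key names, and the users table with its visits column are all hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Transaction;

    public class RequestWrapper {
        public static void handleRequest(long userId) throws Exception {
            try (Jedis writer = new Jedis("localhost", 6379);   // queues writes via MULTI
                 Jedis reader = new Jedis("localhost", 6379);   // executes reads immediately
                 Connection sql = DriverManager.getConnection(
                         "jdbc:postgresql://localhost/app", "app", "secret")) { // placeholder
                sql.setAutoCommit(false);
                Transaction multi = writer.multi();             // start queuing Redis writes
                try {
                    // Reads go through the separate reader connection -- they are
                    // not blocked by the open MULTI on the writer connection.
                    String cached = reader.get("user:" + userId + ":name");
                    // (cached may be null on a cold cache; use it to build the response)

                    // SQL writes run inside the SQL transaction; Redis writes are
                    // queued on the writer connection until EXEC.
                    try (PreparedStatement ps = sql.prepareStatement(
                            "UPDATE users SET visits = visits + 1 WHERE id = ?")) {
                        ps.setLong(1, userId);
                        ps.executeUpdate();
                    }
                    multi.set("user:" + userId + ":lastSeen",
                            String.valueOf(System.currentTimeMillis()));

                    sql.commit();   // commit SQL first...
                    multi.exec();   // ...then flush the queued Redis writes
                } catch (Exception e) {
                    multi.discard(); // drop the queued Redis writes
                    sql.rollback();  // and undo the SQL changes
                    throw e;
                }
            }
        }
    }

Note the ordering still leaves a small window where the SQL commit succeeds but the Redis EXEC fails; since Redis has no rollback, that residual risk can only be narrowed, not closed.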
I have been reading the SQLite documentation and referencing code I have written previously, but I can't seem to find a definitive answer to what I imagine is a rather simple question.
I would like to execute many (separate) compiled statements within a transaction, but child threads may also be creating transactions or just executing statements at the same time and I would not want them included in this particular transaction. Currently, I have a single database handle that I share between all threads.
So, my question is,
1) .. is it generally better to have some kind of semaphore around transactions, to ensure they will not clash/collide with other statements being executed against the same database handle? I already marshal writes to avoid SQLite's multithreading issues (although with WAL it's now very hard to unsettle it at all).
2) .. or are you expected to open multiple database connections and start/commit the transactions one per connection if they will run concurrently?
Changes made in one database connection are invisible to all other database connections prior to commit.
So it seems a hybrid approach of having several connections open to the database provides adequate concurrency guarantees, trading off the expense of opening a new connection with the benefit of allowing multi-threaded write transactions.
A query sees all changes that are completed on the same database connection prior to the start of the query, regardless of whether or not those changes have been committed.
If changes occur on the same database connection after a query starts running but before the query completes, then it is undefined whether or not the query will see those changes.
If changes occur on the same database connection after a query starts running but before the query completes, then the query might return a changed row more than once, or it might return a row that was previously deleted.
For the purposes of the previous four items, two database connections that use the same shared cache and which enable PRAGMA read_uncommitted are considered to be the same database connection, not separate database connections.
Here is the SQLite documentation on isolation, which is exceptionally useful to read and understand for this problem.
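For illustration, here is what option 2 looks like in Java, assuming the xerial sqlite-jdbc driver and a hypothetical log table: each thread opens its own connection, so each runs its own private transaction with no semaphore needed around a shared handle.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PerThreadTransactions {
        static final String URL = "jdbc:sqlite:app.db"; // placeholder path

        public static void main(String[] args) throws Exception {
            Runnable writer = () -> {
                // Each thread gets its own connection, hence its own transaction.
                try (Connection conn = DriverManager.getConnection(URL);
                     Statement st = conn.createStatement()) {
                    st.execute("PRAGMA busy_timeout = 5000"); // wait on locks, don't fail
                    conn.setAutoCommit(false);                // begin a private transaction
                    st.executeUpdate("INSERT INTO log(msg) VALUES ('from "
                            + Thread.currentThread().getName() + "')");
                    conn.commit();                            // commits only this thread's work
                } catch (Exception e) {
                    e.printStackTrace();
                }
            };
            Thread a = new Thread(writer, "thread-a");
            Thread b = new Thread(writer, "thread-b");
            a.start(); b.start();
            a.join(); b.join();
        }
    }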
I was not able to find a solution for this question online, so I hope I can find help here.
I've inherited a Java web application that makes changes to an Oracle database and displays data from it. The application uses a 'pamsdb' user ID. The application inserted a new row into one of the tables (TTECHNOLOGY). When the application later queries the DB, the result set includes the new row (I can see it in printouts and in the application screens).

However, when I query the database directly using SQL Developer (with the same user ID, 'pamsdb'), I do not see the new row in the modified table.
A couple of notes:
1) I read here and in other places that all INSERT operations should be followed by a COMMIT, otherwise other users cannot see the changes. The Java application does not COMMIT, which I thought could be the source of the problem, but since I'm using the same user ID in SQL Developer, I'm surprised I can't see the changes there.

2) I tried doing a COMMIT WORK from SQL Developer, but it didn't change my situation.
Can anyone suggest what's causing the discrepancy and how can it be resolved?
Thanks in advance!
You're using the same user, but in a different session. One session can't see uncommitted changes made in another session, for any user - they are independent.
You have to commit from the session that did the insert - i.e. your Java code has to commit for its changes to be visible anywhere else. You can't make the Java session's changes commit from elsewhere, and committing from SQL Developer - even as the same user - only commits any changes made in that session.
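For illustration, the fix belongs in the Java code itself. A minimal JDBC sketch, assuming the Oracle JDBC (ojdbc) driver (the connection details and the NAME column are placeholders, since the real application's code isn't shown):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertTechnology {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "pamsdb", "password")) { // placeholders
                conn.setAutoCommit(false); // many pools hand out connections like this
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO TTECHNOLOGY (NAME) VALUES (?)")) { // column is hypothetical
                    ps.setString(1, "new tech");
                    ps.executeUpdate();
                }
                conn.commit(); // without this, only this session ever sees the row
            }
        }
    }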
You can read more about connections and sessions, and about transactions; the COMMIT documentation summarises it as:
Use the COMMIT statement to end your current transaction and make permanent all changes performed in the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all savepoints in the transaction and releases transaction locks.
Until you commit a transaction:
You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
You can roll back (undo) any changes made during the transaction with the ROLLBACK statement (see ROLLBACK).
The "other users cannot see the changes" really means other user sessions.
If the changes are being committed and are visible from a new session via your Java code (after the web application and/or its connection pool have been restarted), but are still not visible from SQL Developer; or changes made directly in SQL Developer (and committed there) are not visible to the Java session - then the changes are being made either in different databases, or in different schemas for the same database, or (less likely) are being hidden by VPD. That should be obvious from the connection settings being used by the two sessions.
From comments it seems that was the issue here, with the Java web application and SQL Developer accessing different schemas which both had the same tables.
I am working on a VB.NET application.
As per the nature of the application, one module has to monitor the database (SQLite DB) every second. This monitoring is done by a simple SELECT statement that runs to check data against some condition.

Other modules perform SELECT, INSERT, and UPDATE statements on the same SQLite DB.

Concurrent SELECT statements work fine on SQLite, but I'm having a hard time figuring out why it is not allowing INSERT and UPDATE.

I understand it's a file-based lock, but is there any way to get this done?

Each module - in fact, each statement - opens and closes its own connection to the DB.

I've restricted the user to running a single statement at a time through the GUI design.

Any help will be appreciated.
If your database file is not on a network, you could allow a certain amount of read/write concurrency by enabling WAL mode.
But perhaps you should use only a single connection and do your own synchronization for all DB accesses.
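For reference, enabling WAL is a one-time PRAGMA against the database file. Here is a sketch in Java using the xerial sqlite-jdbc driver (the database path is a placeholder); a VB.NET application would issue the same PRAGMA through its own SQLite provider.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EnableWal {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:monitor.db"); // placeholder
                 Statement st = conn.createStatement()) {
                // WAL is persistent: once set, it sticks to the database file,
                // letting readers proceed while a single writer writes.
                try (ResultSet rs = st.executeQuery("PRAGMA journal_mode=WAL")) {
                    if (rs.next()) {
                        System.out.println("journal mode: " + rs.getString(1)); // expect "wal"
                    }
                }
                st.execute("PRAGMA busy_timeout = 5000"); // per-connection: wait on locks
            }
        }
    }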
You can use a locking mechanism to make sure the database behaves correctly in a multithreaded situation. Since your application is read-intensive, according to what you said, you can consider using a ReaderWriterLock or ReaderWriterLockSlim.

If you have only one database, creating just one instance of the lock is enough; if you have more than one database, each can be assigned its own lock. Every time you read or write, enter the lock first (EnterReadLock() for ReaderWriterLockSlim, or AcquireReaderLock() for ReaderWriterLock), and exit the lock after you're done. Place the exit in a finally clause lest you forget to release it.
The strategy above is used in our production applications. Serializing everything onto a single thread is not as good an option in your case, because you have to take performance into account.
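The question is VB.NET, but the pattern is language-agnostic; here is a sketch using Java's ReentrantReadWriteLock, the closest analog of .NET's ReaderWriterLockSlim (the jobs table and database path are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class GuardedDb {
        // One lock per database file, shared by all modules/threads.
        private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();
        private static final String URL = "jdbc:sqlite:monitor.db"; // placeholder

        // Many readers may hold the read lock at once.
        public static int countPending() throws Exception {
            LOCK.readLock().lock();
            try (Connection c = DriverManager.getConnection(URL);
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM jobs WHERE done = 0")) {
                return rs.next() ? rs.getInt(1) : 0;
            } finally {
                LOCK.readLock().unlock(); // release in finally so it can't leak
            }
        }

        // Only one writer at a time, and no readers while it runs.
        public static void markDone(long id) throws Exception {
            LOCK.writeLock().lock();
            try (Connection c = DriverManager.getConnection(URL);
                 Statement st = c.createStatement()) {
                st.executeUpdate("UPDATE jobs SET done = 1 WHERE id = " + id);
            } finally {
                LOCK.writeLock().unlock();
            }
        }
    }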
It's a test environment. I needed some data to test an UPDATE query, but I accidentally updated a column in all rows with wrong data. Must I use a backup to restore the data back to its previous state, or is there some secret of the transaction log that I can take advantage of?
Thanks in advance.
There is nothing secret about it: the transaction log lets you recover to a point in time. That annoying little file with the .ldf extension is the transaction log, as opposed to the .mdf file that holds your normal DB data.

Unless you have truncated the transaction log (.ldf) or otherwise mucked with it, you should be able to do exactly the kind of restore (undo) that you're looking for.
If your database was in the FULL recovery model, you can try reading the transaction log with a third-party tool, or you can try doing it yourself with the DBCC LOG command.

If the DB is in full recovery, a lot of data is stored in the transaction log, but it's not easily readable, because Microsoft never published polished official documentation for its format, and because its purpose is not ad-hoc recovery but making sure transactions are committed correctly.

However, there are workarounds to reading it, like using such a third-party tool (a paid tool, unfortunately, but it has a trial) or decoding the results of DBCC LOG yourself.
Not unless you wrapped your SQL in a transaction block - BEGIN TRANSACTION, then COMMIT or ROLLBACK. That's one of the dangerous things about SQL Server: by default, every statement is auto-committed as soon as it runs. With Oracle you have to explicitly commit each transaction, which is much safer IMHO.
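For next time, the protective pattern described above might look like the following JDBC sketch (the driver, accounts table, and sanity threshold are assumptions): run the UPDATE inside an explicit transaction, check the affected row count, and roll back if it isn't what you expected.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class SafeUpdate {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=TestDb", "sa", "password")) { // placeholders
                conn.setAutoCommit(false); // explicit transaction instead of auto-commit
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE accounts SET status = ? WHERE region = ?")) { // hypothetical table
                    ps.setString(1, "inactive");
                    ps.setString(2, "EU");
                    int rows = ps.executeUpdate();
                    if (rows <= 100) {       // sanity check before making it permanent
                        conn.commit();
                    } else {
                        conn.rollback();     // the forgiving path the asker wished for
                        System.out.println("Rolled back: " + rows + " rows was too many");
                    }
                } catch (Exception e) {
                    conn.rollback();
                    throw e;
                }
            }
        }
    }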