How to hide data of uncommitted transactions from other connections? - sql

I am inserting data into multiple tables and expected all of it to be invisible to other connections until I committed. But in fact another application starts picking up the data before I am done. I verified this by adding a delay between the inserts and seeing the data from another connection immediately.
I read about isolation levels, but it looks like, e.g., SET TEMPORARY OPTION isolation_level = 3; has no effect when it is set only on my side.
Is this a difference between Sybase and other databases, or are there just wrong settings somewhere?
I'm using Sybase SQL Anywhere 11+16.

Here is the proper page for isolation levels in SQL Anywhere 11.0.
I think you should use SET OPTION isolation_level=1; on the user accessing your table (or the group PUBLIC).
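For example, a minimal sketch in SQL Anywhere syntax (the user name other_app_user is just a placeholder for whatever login the other application connects with):
-- Default isolation level for one specific user
SET OPTION "other_app_user".isolation_level = 1;
-- Or change the default for everyone via the PUBLIC group
SET OPTION PUBLIC.isolation_level = 1;
-- SET TEMPORARY OPTION only affects the current connection, which is why
-- setting it on the inserting side alone does not change what readers see
SET TEMPORARY OPTION isolation_level = 1;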

Related

SQL database locking difference between SELECT and UPDATE

I am re-writing an old stored procedure which is called by BizTalk. Now this has the potential to have 50-60 messages pushed through at once.
I occasionally have an issue with database locking when they are all trying to update at once.
I can only make changes in SQL (not BizTalk) and I am trying to find the best way to run my SP.
With this in mind, what I have done is to make the majority of the statement determine whether an UPDATE is needed at all by using a SELECT statement.
My question is: what is the difference, in terms of locking, between an UPDATE statement and a SELECT with a NOLOCK hint against it?
I hope this makes sense - Thank you.
You use NOLOCK when you want to read uncommitted data and want to avoid taking any shared lock on the data, so that other transactions can take exclusive locks for updating/deleting.
You should not use NOLOCK on the target of an UPDATE statement; it is a really bad idea, and Microsoft says support for it in that context will be removed:
Support for use of the READUNCOMMITTED and NOLOCK hints in the FROM
clause that apply to the target table of an UPDATE or DELETE statement
will be removed in a future version of SQL Server. Avoid using these
hints in this context in new development work, and plan to modify
applications that currently use them.
Source
Regarding your locking problem when multiple updates happen at the same time: this normally occurs when you read data with the intention of updating it later while holding only a shared lock. The following UPDATE statement can't acquire the necessary update locks because they are blocked by the shared locks acquired in the other session, causing the deadlock.
To resolve this, you can select the records using the UPDLOCK hint, like the following:
DECLARE @IdToUpdate INT
-- Take the update lock on the qualifying row up front
SELECT @IdToUpdate = ID FROM [Your_Table] WITH (UPDLOCK) WHERE A = B
UPDATE [Your_Table]
SET X = Y
WHERE ID = @IdToUpdate
This takes the necessary update lock on the record in advance, stops other sessions from acquiring an update or exclusive lock on it, and prevents the deadlock.
NOLOCK: Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. NOLOCK is equivalent to READUNCOMMITTED.
Thus, while using NOLOCK you get all rows back, but you may read uncommitted (dirty) data. While using READPAST you get only committed data, so you may miss records that are currently being processed and not yet committed.
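To make the difference concrete, a small sketch against the same hypothetical [Your_Table]:
-- NOLOCK: returns every row, including changes from transactions that have not committed yet
SELECT * FROM [Your_Table] WITH (NOLOCK) WHERE A = B;
-- READPAST: silently skips rows that are currently locked, so only committed,
-- unlocked rows come back
SELECT * FROM [Your_Table] WITH (READPAST) WHERE A = B;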
For more background, please go through the links below:
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
https://www.mssqltips.com/sqlservertip/4468/compare-sql-server-nolock-and-readpast-table-hints/
https://social.technet.microsoft.com/wiki/contents/articles/19112.understanding-nolock-query-hint.aspx

Can I perform a dump tran mid-transaction? Also, effects of using delay on log?

Good day,
Two questions:
A) If I have something like this:
COMPLEX QUERY
WAIT FOR LOG TO FREE UP (DELAY)
COMPLEX QUERY
Would this actually work? Or would the log segment of tempdb remain just as full, because it is still holding on to the log records of the first query?
B) In the situation above, is it possible to have the middle query perform a dump tran with truncate_only ?
(It's a very long chain of various queries that are run together. They don't change anything in the databases and I don't care to even keep the logs if I don't have to.)
The reason for the chain is that I need the same two temp tables, and a whole bunch of variables, for various queries in the chain (some of them for all of the queries). To simplify the usage of the query chain by a user with VERY limited SQL knowledge, I collect very simple information at the beginning of the long script, retrieve the rest automatically, and then use it throughout the script.
I doubt either of these would work, but I thought I may as well ask.
Sybase versions 15.7 and 12 (12.? I don't remember)
Thanks,
Ziv.
Per my understanding of @michael-gardner's answer, this is what I plan:
FIRST TEMP TABLES CREATION
MODIFYING OPERATIONS ON FIRST TABLES
COMMIT
QUERY1: CREATE TEMP TABLE OF THIS QUERY
QUERY1: MODIFYING OPERATIONS ON TABLE
QUERY1: SELECT
COMMIT
(REPEAT)
DROP FIRST TABLES (end of script)
I read that 'select into' is not written to the log, so I'm creating the table with an explicit CREATE (I have to do it this way due to other reasons), and using 'select into existing table' for the initial population (these are temp tables).
Once done with the table, I drop it, then 'commit'.
At various points in the chain I check the log segment of tempdb (one way of doing this is sketched below); if it's <70% (normally at >98%), I use a goto to reach the end of the script, where I drop the last temp tables and the script ends. (So no need for a manual 'commit' here.)
I misunderstood the whole "on commit preserve rows" thing, that's solely on IQ, and I'm on ASE.
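One possible way to do that log-space check in ASE (assuming the lct_admin built-in is available on your version; the result is in pages):
-- Free pages left in tempdb's log segment
SELECT lct_admin("logsegment_freepages", db_id("tempdb"))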
Dumping the log mid-transaction won't have any effect on the amount of log space. The Sybase log marker will only move if there is a commit (or rollback), AND if there isn't an older open transaction (which can be found in syslogshold).
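If you want to see what is holding the log, syslogshold can be queried directly; a rough sketch, assuming the standard ASE system table layout:
-- Oldest open transaction(s) pinning the tempdb log (no rows = nothing is pinning it)
SELECT h.spid, h.name, h.starttime
FROM master..syslogshold h
WHERE h.dbid = db_id('tempdb')
ORDER BY h.starttime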
There are a couple of different ways you can approach solving the issue:
Add log space to tempdb.
This would require no changes to your code, and is not very difficult. It's even possible that tempdb is not properly sized for the system, and the extra log space would be useful to other applications utilizing tempdb.
Rework your script to add a commit at the beginning, and keep the later transactions query-only.
This would accomplish a couple of things. The commit at the beginning would move the log marker forward, which would allow the log dump to reclaim space. Then, since the rest of your queries are only reads, there shouldn't be any transaction log space associated with them. Remember the transaction log only stores information on Insert/Update/Delete, not Reads.
In the example you listed above, the user's details could be stored and committed to the database, then the rest of the queries would just be select statements using those details for the variables, and a final transaction would clean up the table. In this scenario the log is only held for the first transaction and the last transaction, but the queries in the middle would not fill the log.
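A rough ASE-style sketch of that shape (the table, column, and example values here are invented for illustration):
-- Create the work table up front (outside any transaction)
CREATE TABLE #run_params (cust_id int, run_date date)
-- First transaction: store the details collected from the user, then commit
-- straight away so the log marker can move forward
BEGIN TRANSACTION
INSERT INTO #run_params VALUES (42, '20230101')   -- example values
COMMIT TRANSACTION
-- Middle of the chain: read-only queries driven by #run_params;
-- plain selects add nothing to the transaction log
SELECT o.*
FROM orders o, #run_params p
WHERE o.cust_id = p.cust_id
-- End of the script: final cleanup
DROP TABLE #run_params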
Without knowing more about the DB configuration or query details it's hard to get much more detailed.

What happens to Isolation Level in Stored Proc that fails

I have an SP that takes about 30 seconds to run. 10 seconds of this is spent creating an initial temp table on which I operate, e.g.
SELECT * INTO #tmp FROM view_name ....(lots of joins and where conditions)
My understanding is that by default this long select (insert) can lock the table, and therefore block the application attached to the database?
My solution to this was to add
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
to the top of the Stored Proc and then reset it to COMMITTED at the end.
I guess this is a two-part question: firstly, is my understanding of this correct, and secondly, if somewhere in my SP it fails with an error, will the DB remain in READ UNCOMMITTED mode?
For the specific case of code executed from a stored procedure, the isolation level is reset to what it was before you called the stored procedure, regardless of what happens. From the docs:
If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or
trigger, when the object returns control the isolation level is reset
to the level in effect when the object was invoked.
If you're not using a stored procedure, or you're dealing with code inside the stored procedure, the (much more complicated) story follows below.
Isolation level is a property of the session, not of the database. Whether or not a statement like SET TRANSACTION ISOLATION LEVEL executes after an error depends on whether the error aborts the entire batch or only the statement. The rules for this are complicated and unintuitive. To make things even more interesting, the isolation level of a session is not reset if it's associated with a pooled connection. If you want foolproof code that maintains an isolation level, preferably set the isolation level consistently from client code. If you must do it in T-SQL, it's a good idea to use SET XACT_ABORT ON and TRY .. CATCH, or the WITH (NOLOCK) table hint to restrict the locking options to a particular table or tables.
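For the stored-procedure case, a minimal sketch of that defensive pattern (the procedure name is hypothetical; the #tmp load is taken from the question; THROW needs SQL Server 2012 or later, use RAISERROR on older versions):
CREATE PROCEDURE dbo.usp_LoadReport   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;   -- most errors abort the batch instead of continuing
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRY
        SELECT * INTO #tmp FROM view_name;   -- the long-running initial load
        -- ... remaining work ...
    END TRY
    BEGIN CATCH
        -- No need to reset the isolation level here: it reverts automatically
        -- when the procedure returns. Just surface the error to the caller.
        THROW;
    END CATCH
END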
But why complicate your life? Consider if you really need to use READ UNCOMMITTED (or WITH (NOLOCK)), because you have no guarantee of data consistency anymore when you do, and even if you think you're OK with that, it can lead to results that aren't just a little bit wrong but completely useless. Unfortunately, this is true especially when the load on the database increases (which is why you might consider reducing locking in the first place).
If your query takes a long time while locking data, and that is a problem to applications (neither of which should be assumed, because SQL Server is good at locking data for only as long as necessary), your first instinct should be to see if the query itself can be optimized to reduce lock time (by adding indexes, most likely). If you've still got unacceptable locking left, consider using snapshot isolation. Only after those things have been evaluated should you consider dirty reads, and even then only for cases where consistency of the data is not your primary concern (for example, the query gets some monitoring statistics that are refreshed every so often anyway).
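If snapshot isolation turns out to be the better fit, a minimal sketch of enabling and using it (YourDb is a placeholder database name):
-- One-time database setting
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- In the session that runs the long query
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * INTO #tmp FROM view_name;   -- reads a consistent, committed snapshot; writers are not blocked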

How to use transaction and select together, with sql2008

We have an ASP.NET site with a SQL 2008 DB.
I'm trying to implement transactions in our current system.
We have a method that inserts a new row into a table (Table_A), and it works fine with a transaction. If an exception is thrown, it will do the rollback.
My problem is that another user can't do anything that involves Table_A at the same time - probably because it's locked.
I can understand why the transaction has to lock the new row, but I don't want it to lock the whole table - forcing user_B to wait for user_A to finish.
I have tried to set the isolation level to ReadUncommitted and Serializable on the transaction, but it didn't work.
So I'm wondering if I also need to rewrite all my select methods, so they are able to select all the rows except the new row that is in the making?
Example:
Table T has 10 rows (records?)
User A is inserting a new row into table T. This takes some time.
At the same time, User B wants to select the 10 rows, and doesn't care about the new row.
So I guess I somehow need to rewrite user B's select query from a "select all" query to a "select all rows that aren't bound to a transaction" query. :)
Any tips are much appreciated, thanks.
You'll need to make changes to your selects, not to the transaction that is doing the insert.
You can
set isolation level read uncommitted for the connection doing the select, or
specify the WITH (NOLOCK) hint against the table that is being locked, or
specify the WITH (READPAST) hint against the table being locked
As I say, all 3 of these options are things that you'd apply to the SELECT, not the INSERT.
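A quick sketch of those three options applied to user B's query (Table_A as in the question):
-- Option 1: read uncommitted for the whole connection (dirty reads possible)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM Table_A;
-- Option 2: dirty read for this one table only
SELECT * FROM Table_A WITH (NOLOCK);
-- Option 3: skip the locked (in-flight) row and return just the existing rows
SELECT * FROM Table_A WITH (READPAST);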
A final option may be to enable SNAPSHOT isolation and change the database default to use that instead (there are probably many warnings I should include here if the application hasn't been built/tested with snapshot isolation turned on).
Probably what you need is to set your isolation level to Read Committed Snapshot.
Although it needs more space in tempdb, it is great when you don't want your selects to lock the tables.
Beware of possible problems when using this isolation level.
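For reference, switching the database to that mode is a one-time setting; a sketch, assuming YourDb as the database name (it needs exclusive access while it switches, hence the rollback clause):
-- Make READ COMMITTED use row versioning by default (stores row versions in tempdb)
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
After that, existing SELECTs read the last committed version of each row instead of blocking behind the open INSERT.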

Where's the best place to SET NOCOUNT?

For a large database (thousands of stored procedures) running on a dedicated SQL Server, is it better to include SET NOCOUNT ON at the top of every stored procedure, or to set that option at the server level (Properties -> Connections -> "no count" checkbox)? It sounds like the DRY Principle ("Don't Repeat Yourself") applies, and the option should be set in just one place. If the SQL Server also hosted other databases, that would argue against setting it at the server level because other applications might depend on it. Where's the best place to SET NOCOUNT?
Make it the default for the server (which it would be except for historical reasons). I do this for all servers from the start. Ever wonder why it's SET NOCOUNT ON instead of SET COUNT OFF? It's because way way back in Sybase days the only UI was the CLI; and it was natural to show the count when a query might show no results, and therefore no indication it was complete.
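In T-SQL terms, that server default corresponds to bit 512 of the 'user options' configuration; a sketch (this is a bitmask, so in practice OR the bit into the existing value rather than overwriting other bits you rely on):
-- Enable NOCOUNT as the server-wide default for new connections
EXEC sp_configure 'user options', 512;   -- 512 = the NOCOUNT bit
RECONFIGURE;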
Since it is a dedicated server I would set it at the server level to avoid having to add it to every stored procedure.
The only issue that would come up is if you wanted a stored procedure that did not have NOCOUNT set.