I'm running SQL Server 2000.
I have a situation where users are timing out.
In enterprise manager I look at the locks/ProcessIDs.
I see
spid 79 (Blocked By 79)
How is it possible for a session to block itself?
This was introduced with Service Pack 4 for SQL Server 2000. See this KB article for an explanation: http://support.microsoft.com/kb/906344
When an SPID is waiting for an I/O page latch, you may notice that the blocked column briefly reports that the SPID is blocking itself. This behavior is a side effect of the way that latches are used for I/O operations on data pages.
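You can watch for this yourself on SQL Server 2000 by querying master..sysprocesses directly; this is just a sketch, and the blocked column is the one the KB article is talking about:

    -- Show sessions that are currently blocked, including the transient "blocked by itself" case
    SELECT spid,
           blocked,        -- SPID doing the blocking; equal to spid when a session briefly "blocks itself"
           lastwaittype,   -- e.g. PAGEIOLATCH_SH for the I/O page latch waits described above
           waittime,
           cmd,
           hostname
    FROM master..sysprocesses
    WHERE blocked <> 0;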
I have an application that is connecting to a SQL server to run some inserts/updates. We've been getting deadlocks, but I've noticed that in addition to reporting the deadlock, the connection to the database is dropped and the client has to reconnect.
Is this normal? Even if it's not normal behaviour, is it possible that SQL server is not only deciding that my client will be the deadlock victim, but it is also terminating the connection?
Is there a way to stop the connection being dropped?
By definition a deadlock means two connections are stuck where SPID 1 has something locked that SPID 2 needs and SPID 2 has something locked that SPID 1 needs. Neither can complete their transaction because they need something the other has locked. In cases like this the server will choose a victim SPID and kill it so the other can complete its transaction.
The only way to stop it is to figure out why the deadlocks are occurring in the first place. You can run a trace to monitor deadlocks, capture the related information as a deadlock graph, and then view the graph in SSMS.
More information available here: http://www.simple-talk.com/sql/learn-sql-server/how-to-track-down-deadlocks-using-sql-server-2005-profiler/
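If you can't leave Profiler running, a common alternative is to enable the deadlock trace flags so the details land in the SQL Server error log (1222 is 2005 and later only; 1204 also works on 2000):

    -- Write deadlock details to the error log, server-wide, until the next restart
    DBCC TRACEON (1222, -1);   -- XML-style deadlock report (SQL Server 2005+)
    DBCC TRACEON (1204, -1);   -- older text-based report, also available on SQL Server 2000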
This sounds pretty strange, but it is occurring in one database out of a bunch we have on one server (we can tell from the output in the log), and it seems to be affecting other databases, since the system hangs when the deadlock occurs.
We've identified the objects involved in the deadlock event, but none of them lives in the databases used by the system we are running.
I still need to look at the procedure bodies, but is this possible? Can processes from other databases enter a deadlock and hang the entire server, or other databases?
A deadlock is not a fatal event in MS SQL Server (unlike, e.g., in code). This is because SQL Server periodically scans for deadlocks and then picks one of the processes to kill. That's when you get the log messages.
Absent a Sql Server bug (which I've never encountered), I'd think it's more likely that the order is reversed - the hung server/database prevents normal execution of queries, resulting in deadlocks as procedures take longer to execute.
I have seen this happen when two processes that are in a deadlock also have objects locked in TempDB.
The locked objects in tempdb then stop other processes from being able to create objects and thus hang.
This was an issue on older versions of SQL Server (2000), but I can't recall seeing it on more recent versions.
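If you suspect the same thing, one way to check is to look for locks held in tempdb. This is a sketch using sys.dm_tran_locks (2005 and later); on SQL Server 2000 you would fall back to sp_lock and look for entries with dbid = 2:

    -- List locks held in tempdb and the sessions holding them (SQL Server 2005+)
    SELECT request_session_id,
           resource_type,
           resource_description,
           request_mode,
           request_status
    FROM sys.dm_tran_locks
    WHERE resource_database_id = DB_ID('tempdb');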
It is possible. If your server can't react to any interrupt, it can't execute other requests (correctly).
I have a SQL Server 2005 database that keeps locking and won't release. My application cannot commit updates to the database because there are tasks waiting to be processed, and my app keeps crashing. If I look at Activity Monitor, the number of waiting tasks just keeps going up and up until I kill the process. The problem is that I can see in Activity Monitor what is causing the lock, but I don't have enough information; it just says blocked by session ID.
Is there any way, using T-SQL, to find out exactly what that process is and what it is doing? I.e., how do I query for locks on the database with long wait times, and how do I force them to release, or prevent them?
Try sp_lock with the blocking process ID, then pick out the exclusive locks (mode = X or IX). You can then find the offending objects using SELECT OBJECT_NAME(id), where the id comes from the ObjId column.
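For example, assuming the blocker reported by Activity Monitor is session 54 (a placeholder; substitute your own session ID):

    -- Locks held by the blocking session
    EXEC sp_lock 54;

    -- Translate an ObjId value from the sp_lock output into a table name
    -- (run this in the database indicated by the dbid column)
    SELECT OBJECT_NAME(1977058079) AS object_name;   -- replace with your ObjId value

    -- On SQL Server 2005 you can also see what the session is currently doing
    SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
    FROM sys.dm_exec_requests
    WHERE session_id = 54;

    -- As a last resort, force the lock to release by killing the session
    -- KILL 54;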
Use the Deadlock Graph event in Profiler: See Track Down Deadlocks Using SQL Server 2005 Profiler
I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1 TB of data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and abort.
The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features:
WAIT TYPE = ASYNC_NETWORK_IO
The SQL being run is "(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid <> claimid". This is relatively innocuous SQL that should only return one or two records, not a large dataset.
NO OTHER SQL statements have been implicated in the blocking, only this SQL statement.
This is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans.
This SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked.
HOST ID varies. Different web servers are responsible for the blocking sessions. E.g., sometimes we trace back to web server 1, sometimes web server 2.
When we trace back to the web server implicated in the blocking, we see the following:
There is always some sort of application-related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL session.
The error messages vary, usually some sort of SystemOutOfMemory. (These error messages seem to be similar to error messages that we have seen in the past without such dramatic consequences. We think this was happening before, but it didn't lead to blocking. Why now?)
No known problems with the network adapters on either the web servers or the SQL server.
(In any event, the record set returned by the offending query would be small.)
Things ruled out:
Indexes are regularly defragmented.
Statistics are regularly updated.
Increased the sample size of statistics on claim.primaryclaimid.
Forced recompilation of the cached execution plan.
Created a compound index with primaryclaimid, claimid.
No networking problems.
No known issues on the web server.
No changes to application software on the web servers.
We hypothesize that the chain of events goes something like this:
The web server process submits the SQL above.
SQL Server executes the SQL, during which it acquires a lock on the claim table.
The web server process gets an error and dies.
The SQL Server session is left hanging, waiting for the web server process to read the data set.
SQL Server sessions that need to get X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout.
Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome.
Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only?
Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
ASYNC_NETWORK_IO is caused by clients not being able to receive data quickly enough, so the network buffers fill up (simply put). There is no magic SQL Server setting to fix it.
reboot the client (even if it's a web server)
ensure the NICs are set up correctly (firmware, full duplex, etc.)
ensure the physical cables are OK (any packet losses?)
etc
It is not a SQL Server issue, as such...
Blog article 1
BOL:
ASYNC_NETWORK_IO occurs on network writes when the task is blocked behind the network. Verify that the client is processing data from the server.
And another with link to MS whitepaper
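There is no built-in threshold you can set on ASYNC_NETWORK_IO specifically, but you can watch for it. A sketch for SQL Server 2005, joining the request to its session so you know which web server and process to go look at:

    -- Sessions currently waiting on the client to consume the result set
    SELECT r.session_id,
           r.wait_type,
           r.wait_time,          -- ms spent in the current wait
           s.host_name,          -- which web server to go look at
           s.host_process_id,    -- the worker process on that web server
           s.program_name
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
    WHERE r.wait_type = 'ASYNC_NETWORK_IO';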
I had the same problem and it got solved when I disabled the Kaspersky antivirus on the client.
What is the optimal number of connections that can be open on a SQL Server 2000 DB? I know at the previous company I worked for, on a Tru64 box with Oracle 8i on an 8-processor machine, we'd figured out that 8*12 = 96 connections seemed to be a good number. Is there any such calculation for SQL Server 2000? The DB runs on a 2-processor (hyper-threaded to 4) machine. There are a lot of transactions that run against the DB. The reason I ask is because we have an app that typically tends to leave around 100 connections open even if it is not doing anything, and I am having difficulty explaining that this might be a cause of our performance issues. Maybe SQL Server does not have such a limitation... Can any of you pour forth some wisdom on this? Much appreciated. Thanks.
I should add that it is the Standard Edition.
If you don't know if this is your performance bottleneck then you should be trying to determine that, not trying to limit the connections or something.
If you haven't, you should:
Use SQL Profiler to find long-running queries.
Monitor your DB server's CPU load, memory/page file usage, and network usage.
Find one of your longest-running queries (see #1 above) and write a very lean test app that can throw this query at your DB server during peak load and record some response times (a rough T-SQL-only sketch follows below).
If #1 and #2 don't uncover anything, and #3 shows your db server has slow response times during load then you know you have a problem like "too many connections". But if you haven't done #3 then it seems advisable to do that, as mucking with connection limits and such seems like it will just create artificial bottlenecks, and not really get you to the root of your problem, IMO.
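For the third point, if you'd rather not build a separate client, a rough T-SQL-only harness can give you ballpark numbers; the query inside the loop is a stand-in, so substitute one of your own long-running queries and compare peak-load timings against off-peak ones:

    -- Crude response-time sampler: run the suspect query N times and record elapsed ms
    DECLARE @i INT, @start DATETIME, @dummy INT;
    SET @i = 0;

    CREATE TABLE #timings (run_no INT, elapsed_ms INT);

    WHILE @i < 20
    BEGIN
        SET @start = GETDATE();

        -- replace this with one of your long-running queries
        SELECT @dummy = COUNT(*) FROM master..sysobjects;

        INSERT INTO #timings (run_no, elapsed_ms)
        VALUES (@i, DATEDIFF(ms, @start, GETDATE()));

        SET @i = @i + 1;
    END

    SELECT MIN(elapsed_ms) AS min_ms, AVG(elapsed_ms) AS avg_ms, MAX(elapsed_ms) AS max_ms
    FROM #timings;

    DROP TABLE #timings;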
Your performance issue will not be caused by the number of connections.
As well as sliderhouserules' answer, as a quick fix I'd suggest switching off hyperthreading rather than limiting your connections.
link1, link2 (note: this guy worked on the MS SQL 2005 code)
Each connection takes a trivial amount of memory. A shared db lock is for stability only.
This blog post on MSDN indicates there is no limit - at least in the Express editions: http://blogs.msdn.com/euanga/archive/2006/03/09/545576.aspx
And this indicates that it might be 256, for lite editions - http://blogs.msdn.com/stevelasker/archive/2006/04/10/SqlEverywhereInfo.aspx
This also shows no limit: http://channel9.msdn.com/forums/TechOff/169030-The-difference-between-SQL-Server-2005-Express-and-Developer-Edition/?CommentID=299642
Addition, from a comment: http://msdn.microsoft.com/en-us/library/aa196730(SQL.80).aspx indicates the max is 32,767, while there is no "ideal" number.
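As a quick check, you can also ask the instance directly for its hard limit:

    -- Hard upper bound on simultaneous user connections for this instance (32,767 by default)
    SELECT @@MAX_CONNECTIONS AS max_connections;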
If the app is long-running and it's on the same server, and it leaves open DB handles that have created a lock, this is truly bad for performance. You can check with something like SELECT * FROM sys.dm_tran_locks, or sp_lock, to get an idea.
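A sketch of that check, grouping held locks by session so idle connections that are still holding locks stand out (sys.dm_tran_locks is 2005 and later; on SQL Server 2000 use sp_lock instead):

    -- Locks currently held, grouped by session, with the login/host that owns them (SQL Server 2005+)
    SELECT l.request_session_id,
           s.login_name,
           s.host_name,
           s.program_name,
           COUNT(*) AS lock_count
    FROM sys.dm_tran_locks AS l
    JOIN sys.dm_exec_sessions AS s ON s.session_id = l.request_session_id
    GROUP BY l.request_session_id, s.login_name, s.host_name, s.program_name
    ORDER BY lock_count DESC;

    -- SQL Server 2000 equivalent: EXEC sp_lock;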