I am starting at a new company as a web applications developer. I am not a DB guru, but as a developer I have obviously worked with databases my entire career. The company keeps randomly getting locking issues that hold up the internal application and lock everyone out. They then have to kill a service to correct the issue and free up SQL Server. Now, I am wondering: if a query runs long and the user thinks the system froze, so they close out the app, will that freeze up SQL Server? Basically, if the application that called the query does a hard exit while the query is still in the middle of running, does SQL Server dispose of that thread, or does it freeze it, in turn hanging up the next threads that are supposed to run after that lock completes?
At prior companies, if a table was currently locked we would alert the end user so they were aware of it and did not think it was an error. I'm wondering if we should do the same here?
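For context, the kind of check I mean is roughly this (a minimal sketch, assuming SQL Server 2005+ and permission to read the DMVs):

-- list requests that are currently blocked and which session is blocking them
SELECT r.session_id, r.blocking_session_id, r.wait_time, t.text AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;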
Thanks
I have an application that connects to a SQL Server database and cycles through batches of records to perform various tasks, then updates the database accordingly (e.g. "success", "error", etc.).
The potential problem I'm running into is that, since it takes roughly a minute or so to get through each record (long story), if I have more than one user running the application there's a high chance of "data collisions", i.e. users trying to process the same records at the same time, which cannot happen if the process is to execute properly.
Initially, I thought about adding a LOCKED column to help the application determine whether a record was already opened by another user; however, if the app were to crash or be exited without completing the record it was currently on, that record would show as opened by another user indefinitely... right? Or am I missing an easy solution here?
Anyway, what would be ideal is if the application could SELECT 100 records at a time and "lock them out" on the database while it processes them, so that other users running the application SELECT a different set of 100 and do not overlap. Is that possible? I've tried to do some research on the matter, but to be honest my experience with SQL Server is very limited. Thanks for any and all help!
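From my research, I gather something like the following might be what I'm after, but I'm not sure (this is only a sketch; the table and column names are invented). Each run would claim up to 100 unprocessed rows, READPAST would make other sessions skip rows that are already claimed, and UPDLOCK would keep them locked until the transaction ends:

BEGIN TRANSACTION;

-- claim up to 100 rows that nobody else has locked
SELECT TOP (100) RecordId
FROM dbo.Records WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE Status = 'pending';

-- ... process each record and set Status = 'success' / 'error' ...

COMMIT TRANSACTION;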
I have a split MS Access database. Most of the data is populated through SQL queries run through VBA. When I first connect to the back end data, it takes a long time and the back end file (.accdc file) locks and unlocks 3 or 4 times. It's not the same number of locks every time, but this locking and unlocking corresponds to taking a while to open. When I first open the front end, it does not connect to the back end. This step is done very quickly. The first time that I connect to the back end, it can take a while, though.
Any suggestions on things to look into to speed this up and make it happen more reliably on the first try? This is a multi-user file, and I don't want to make any changes to the registry since that would require making that update for everyone in my department. I'm mostly concerned about it taking a while to open, but the locking and unlocking seemed peculiar and might be contributing to, or a symptom of, something else going on.
In most cases if you use a persistent connection, then the slow process you note only occurs once at startup.
This and some other performance tips can be found here:
http://www.fmsinc.com/MicrosoftAccess/Performance/LinkedDatabase.html
Nine times out of ten, the above will thus fix the "delays" when running the application. For testing, you can simply open any linked table, minimize it, and then try running your code or startup form; note how the delays are gone.
Here's the sequence of events my hypothetical program makes...
Open a connection to server.
Run an UPDATE command.
Go off and do something that might take a significant amount of time.
Run another UPDATE that reverses the change in step 2.
Close connection.
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do is, just as the connection is opened but before any changes have been made, tell the server that, should this connection close for whatever reason, it should run some SQL. That way, I can be sure that if something goes wrong, the closing update will run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
Take the SPID of the connection you opened and store it in some temp table, along with the text of the reversal update.
Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
If the connection dies, the background process can execute the stored revert command.
If the connection completes properly, the SPID can be removed from the temp table so that the background process no longer reverts it when the connection closes.
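Very roughly, the pieces might look like this in T-SQL (all object names are invented, and note that SPIDs do get reused, so a real version would probably also store the session's login time):

-- table that holds the reversal statement for each connection being watched
CREATE TABLE dbo.PendingReverts (
    spid       INT           NOT NULL,
    revert_sql NVARCHAR(MAX) NOT NULL
);

-- 1. just after opening the connection, register it
INSERT INTO dbo.PendingReverts (spid, revert_sql)
VALUES (@@SPID, N'UPDATE SomeTable SET InUse = 0 WHERE Id = 42');

-- 2. background job, run every minute or so: revert work for connections that have died
DECLARE @spid INT, @sql NVARCHAR(MAX);
WHILE EXISTS (SELECT 1 FROM dbo.PendingReverts p
              WHERE NOT EXISTS (SELECT 1 FROM sys.dm_exec_sessions s
                                WHERE s.session_id = p.spid))
BEGIN
    SELECT TOP (1) @spid = p.spid, @sql = p.revert_sql
    FROM dbo.PendingReverts p
    WHERE NOT EXISTS (SELECT 1 FROM sys.dm_exec_sessions s
                      WHERE s.session_id = p.spid);

    EXEC sp_executesql @sql;
    DELETE FROM dbo.PendingReverts WHERE spid = @spid;
END

-- 3. on a clean finish, the application removes its own row
DELETE FROM dbo.PendingReverts WHERE spid = @@SPID;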
Comments or improvements welcome!
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query, and then close the connection, relying on connection pooling to mitigate the expense of opening lots of database connections.
If we are talking about a single SQL command where the rows it operates on should not change, that is a problem best handled by the transaction isolation level. For that, you might investigate the Snapshot isolation level in SQL Server 2005+.
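As a minimal sketch (the database and table names are placeholders), that would look something like:

-- enable snapshot isolation for the database (run once)
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- then, in the session that needs a stable view of the rows
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Id, Status FROM dbo.Orders WHERE Status = 'open';  -- reads a consistent snapshot
COMMIT TRANSACTION;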
If we are talking about a series of queries that are part of a long running transaction, that is more complicated and can be handled via storage of a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools where they can cancel a long running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. Also, there's a risk of false positives - the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.
We have a huge Oracle database and I frequently fetch data using SQL Navigator (v5.5). From time to time, I need to stop code execution by clicking on the Stop button because I realize that there are missing parts in my code. The problem is, after clicking on the Stop button, it takes a very long time to complete the stopping process (sometimes it takes hours!). The program says Stopping... at the bottom bar and I lose a lot of time till it finishes.
What is the rationale behind this? How can I speed up the stopping process? Just in case, I'm not an admin; I'm a limited user who uses some views to access the database.
Two things need to happen to stop a query:
The actual Oracle process has to be notified that you want to cancel the query
If the query has made any modification to the DB (DDL, DML), the work needs to be rolled back.
For the first point, the Oracle process that is executing the query should check from time to time whether it should cancel the query or not. Even when it is doing a long task (a big HASH JOIN, for example), I think it checks every 3 seconds or so (I'm looking for the source of this info; I'll update the answer if I find it). Now, is your software able to communicate correctly with Oracle? I'm not familiar with SQL Navigator, but I suppose the cancel mechanism should work like any other tool's, so I'm guessing you're waiting on the second point:
Once the process has been notified to stop working, it has to undo everything it has already accomplished in this query (all statements are atomic in Oracle, they can't be stopped in the middle without rolling back). Most of the time in a DML statement the rollback will take longer than the work already accomplished (I see it like this: Oracle is optimized to work forward, not backward). If you are in this case (big DML), you will have to be patient during rollback, there is not much you can do to speed up the process.
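If you can get access to the v$ views (a limited user may not have it), one rough way to watch rollback progress is something like the query below; used_urec is the number of undo records still to be applied, so re-running it and watching the value fall gives a feel for how far along the rollback is:

SELECT s.sid, s.serial#, t.used_urec
FROM v$transaction t
JOIN v$session s ON s.taddr = t.addr;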
If your query is a simple SELECT and your tool won't let you cancel, you could kill your session (needs admin rights from another session) -- this should be instantaneous.
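For reference, killing a session looks roughly like this (the username, SID, and SERIAL# values are placeholders, and the ALTER SYSTEM step needs a privileged account):

-- find the session to kill
SELECT sid, serial# FROM v$session WHERE username = 'YOUR_USER';

-- then, from a privileged session
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;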
When you cancel a query, the Oracle client should send OCIBreak(), but this isn't implemented on a Windows server; that could be the cause.
Also, have your DBA check the value of SQLNET.EXPIRE_TIME.
I have a problem that seems to be the result of a deadlock situation.
We are now searching for the root of the problem, but in the meantime we wanted to restart the server and get the customer going again.
And now, every time we start the program, it just says "SqlConnection does not support parallel transactions". We have not changed anything in the program; it's compiled and on the customer's server, but after the "possible deadlock" situation it won't go online again.
We have 7 clients (computers) running the program. Each client talks to a web service on a local server, and the web service talks to the SQL Server (same machine as the web server).
We have restarted both SQL Server and IIS, but not rebooted the machine, because of other important services running on it, so that is the last thing we will do.
We can see no locks or anything in the management tab.
So my question is: why does the "SqlConnection does not support parallel transactions" error appear from one moment to the next without anything changing in the program, and why does it survive a SQL Server restart?
It seems to happen at the first DB request the program makes when it starts.
If you need more information, just ask. I'm puzzled...
More information:
I don't think I have "long" running transactions. The scenario is often that I have a dataset with 20-100 rows (ContractRows) on which I'll do an .Update on the table adapter. I also loop through those 20-100 rows, and for some of them I'll create ad hoc SQL queries (for example, if a rented product is marked as returned, I create a SQL query to mark the product as returned in the database).
So I do this very simplified:
Create objTransactionObject
Create objTableAdapter(objTransactionObject)   ' adapter enlisted in the same transaction

Dim strSQL As String = ""
For Each row In contractDS.ContractRows
    If row.IsReturned Then
        ' note: building SQL by concatenating values is open to SQL injection; parameters are safer
        strSQL &= "update product set instock=1 where prodid=" & row.ProductId & vbCrLf
    End If
Next

objTableAdapter.Update(contractDS)
If strSQL <> "" Then
    objData.ExecuteQuery(strSQL, objTransactionObject)   ' runs on the same transaction
End If

If successful Then
    objTransactionObject.Commit()
Else
    objTransactionObject.Rollback()
End If
objTransactionObject.Dispose()
And then I do the commit or rollback depending on whether it went well or not.
Edit: None of the answers have solved the problem, but thank you for the good troubleshooting pointers.
The "SqlConnection does not support parallel transactions" error suddenly disappeared, and now the SQL Server just "goes down" 4-5 times a day. I guess it's a deadlock that does that, but I don't have the right knowledge to find out and am short on SQL experts who can monitor this for me at the moment. I just restart SQL Server and everything works again. One time in ten I also have to restart the computer. It's really bugging me (and my customers, of course).
Anyone who knows a person with good knowledge of analyzing deadlocks or other SQL problems in Sweden (or anywhere in the world, English-speaking) is free to contact me. I know this isn't a contact site, but I'll take my chance to ask because I have run out of options. I have spent 3 days and nights optimizing the clients to be sure we close connections and don't do too many stupid things there. Without luck.
It seems that you are sharing connections and creating new transactions on the same open connection (this is the "parallel" part of the exception you are seeing).
Your example seems to support this, as it doesn't mention how you acquire the connection.
You should review your code and make sure that you only open a connection and then dispose of it when you are done (and by all means, use the Using statement to make sure that you close the connection), as it seems like you are leaving one open somewhere.
Yours doesn't appear to be an unusual problem. Google found a lot of hits when I pasted your error string into the query box.
Reading past answers, it sounds like it has something to do with improperly interleaving transactions or with the isolation level.
How long are connections held open? Do you have long-running transactions?
Do you have implicit transactions turned on somewhere, so that there are some transactions where you wouldn't have expected them? Have you opened Activity Monitor to see if there are any unexpected transactions?
Have you tried doing a backup of your transaction log? That might clear it out as well if I remember a previous, similar experience correctly.
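A rough sketch of both checks (the database name and backup path are placeholders):

-- see whether an old transaction is still sitting open in the database
DBCC OPENTRAN ('MyDatabase');

-- back up the transaction log so the space in it can be reused
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';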