Can Oracle time out when another connection is already using the same table? - sql

When I need to run a DML statement (insert, update, delete) on a table, Oracle first checks whether there is an active DML operation using that table. If there is another operation in progress, my connection waits until it has finished.
Is there a way to get a timeout in these cases? Not globally, only for specific cases.
-- Edit with more details about the problem
I'm not sure whether any kind of lock is actually used. In my case, there is an old application written in Oracle Forms and a new application written by me.
The problem is that when a user opens a specific record to update some field in the old application, and I try to edit the same record in my app, the row is locked.
So my app sits waiting for the unlock. The problem is that the user thinks the application is frozen and kills it, losing the changes.
This does not happen when another Oracle Forms application attempts the edit. In that case, Oracle Forms displays the message "Could not reserve record (2). Keep trying?". Maybe that's because the old app uses some kind of lock, but I need to handle this in my own code.
Note: the number 2 is the number of attempts to update.

If you do a LOCK TABLE ... WAIT, it will wait until any in-flight DML on this table commits, then give you the lock. This will make anyone coming after you wait until you release the lock. Look at the documentation to see how to use this.
There is also the option of locking a single row (SELECT ... FOR UPDATE), which is more granular; a sketch follows below.
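For the asker's case, the row-level variant with NOWAIT (or WAIT n) is what avoids the indefinite freeze. A minimal PL/SQL sketch, assuming a hypothetical my_table with primary key id:

    -- Try to lock one row; fail immediately with ORA-00054 if another
    -- session (e.g. the Oracle Forms app) already holds the row lock.
    -- Use FOR UPDATE WAIT 5 to wait at most 5 seconds instead (ORA-30006).
    DECLARE
      row_locked EXCEPTION;
      PRAGMA EXCEPTION_INIT(row_locked, -54);
      v_id my_table.id%TYPE;
    BEGIN
      SELECT id INTO v_id
        FROM my_table
       WHERE id = 42
         FOR UPDATE NOWAIT;
      -- lock acquired: safe to run the UPDATE here
    EXCEPTION
      WHEN row_locked THEN
        NULL; -- tell the user the record is in use instead of freezing
    END;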
That said, can you please explain what exactly you are trying to do? You may not need to do this at all.

Related

Firebird lock table / lock record

Suppose you have one table used by a desktop application and several users.
When a user opens a record, I want to lock it. I have tried the WITH LOCK statement, and it works fine.
But when a second user wants to update the same record, I want to show the message "Sorry, you cannot work on this order because it is locked. Somebody else opened this record before you." Firebird waits for the first user to commit or roll back. I don't want to wait; I want to show an error message instead. Is there a simple way to ask Firebird for a record's lock status?
Is there a way to lock a whole table? Or to use a semaphore/mutex (like GET_LOCK in MySQL)?
I have tried RESERVING in the SET TRANSACTION statement, but it does not work.
What I want is to display a message to the user, not to wait.
Thanks
If you don't want to wait, then configure your transaction to use NO WAIT, or a wait timeout (see the sketch below). However, controlling business rules like this through database transactions is not advisable, as it requires long-running transactions, which inhibit garbage collection, lengthen the chain of interesting transactions, and increase the chance of update conflicts.
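A minimal sketch of both options at the SQL level (LOCK TIMEOUT requires Firebird 2.0+; the orders table is a placeholder):

    -- Fail immediately instead of waiting when a row is locked:
    SET TRANSACTION READ WRITE NO WAIT;
    -- ...or wait at most 5 seconds before giving up:
    SET TRANSACTION READ WRITE WAIT LOCK TIMEOUT 5;
    -- A blocked update then raises a lock conflict error, which the
    -- application can turn into the "record is locked" message:
    UPDATE orders SET editor = 'user2' WHERE order_id = 1;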
I'd advise using different options, like:
First to update wins
Change detection (e.g. by a timestamp or record-version counter that is also used as a condition in the update statement), allowing the user to overwrite or abandon his update (or maybe merge)
Explicit reservation by updating the record (setting the username) in a separate transaction. This might require cleanup, or the ability for a user to break the reservation (e.g. if someone has had it open for too long).
Note that Firebird uses multi-version concurrency control (MVCC), so explicit locking is not really natural to it. See also this answer to Locking tables firebird, delphi.
Locking tables using RESERVING should be possible, but I have never used it, so I am not entirely sure how to use it, although you probably also need to specify FOR PROTECTED READ (see the InterBase 6.0 Embedded SQL Guide, pages 70-71).
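Going by that guide, the syntax would look roughly like this (untested sketch; mytable is a placeholder):

    -- Reserve the whole table when the transaction starts; PROTECTED READ
    -- prevents other transactions from writing to it until this one ends.
    SET TRANSACTION READ WRITE WAIT RESERVING mytable FOR PROTECTED READ;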

How to avoid SQL Server hangs due to uncommitted transactions caused by poor software design

The problem: a .NET application saves many records to SQL Server. BeginTrans is used, and right before the commit a warning message asks the end user to confirm whether to proceed with saving the data or not. The user simply left the computer and walked away!
Now all other users are unable to access the locked records. Sometimes almost the entire system is affected, since almost all transactions update the same records. The confirmation message must be shown after the data is updated and before the commit, so the user can still roll back. What could be the best solution?
If no solution is found, the last thing I might do is roll back, show the confirmation message, and if the user accepts, save the data again without any confirmation message (which I don't think is the right way).
My question is: what is the best I can do? Any ideas?
This sounds like a WinForms app? It also sounds like you want to confirm the intent of the user's action. Are you in a position to only start the transaction once they confirm they intend to save the data?
Ideally, you should
Prompt the user via [OK | Cancel]
Perform the database transaction
If the result of the transaction is a deadlock (or any other failure), inform the user that the save operation failed
In other words, the update of records should be a synchronous call.
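A rough T-SQL sketch of that synchronous save, run only after the user has already pressed OK, so no user interaction ever happens inside the transaction (table and parameter names are invented):

    SET XACT_ABORT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE Inventory
           SET Quantity = Quantity - @Amount
         WHERE ItemId = @ItemId;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- Report the failure (deadlock, timeout, ...) back to the caller
        -- so the UI can tell the user the save failed.
        DECLARE @msg NVARCHAR(2048);
        SELECT @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH;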
EDIT: after understanding the specifics mentioned in the comment below, I would recommend some form of server-side task queue that all these requests flow through. Your client would submit a request to the server, and the server application would then be the software responsible for updating records in the database. The clients would make their requests to this application, and the requests would be processed in the order they were received. I don't have much experience with inventory tracking software, but I understand its need to be absolutely correct, so this is just a rough idea; I'm sure someone with more experience in inventory tracking will have a better pattern. The proposed pattern creates a large bottleneck on the server that is responsible for updating the records. For example, this pattern would be terrible for someone like Amazon.

SQLite concurrent connections issue

I am working on a VB.NET application.
Due to the nature of the application, one module has to monitor the database (a SQLite DB) every second. This monitoring is done by a simple SELECT statement that checks data against some condition.
Other modules perform SELECT, INSERT, and UPDATE statements on the same SQLite DB.
Concurrent SELECT statements work fine on SQLite, but I'm having a hard time figuring out why it is not allowing INSERT and UPDATE.
I understand it uses a file-based lock, but is there any way to get this done?
Each module, in fact each statement, opens and closes its own connection to the DB.
I've restricted the user to running a single statement at a time through the GUI design.
Any help will be appreciated.
If your database file is not on a network, you could allow a certain amount of read/write concurrency by enabling WAL mode.
But perhaps you should use only a single connection and do your own synchronization for all DB accesses.
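A short sketch of the first option at the SQL level (WAL needs SQLite 3.7.0+, and the busy_timeout pragma 3.7.15+):

    -- Let readers keep reading while one writer writes. This setting is
    -- persistent: it is stored in the database file itself.
    PRAGMA journal_mode=WAL;
    -- Instead of failing immediately with SQLITE_BUSY, let a writer retry
    -- for up to 5000 ms while another connection holds the lock
    -- (per-connection setting):
    PRAGMA busy_timeout=5000;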
You can use a locking mechanism to make sure the database works correctly in a multithreaded situation. Since your application is read-intensive according to what you said, you could consider using a ReaderWriterLock or ReaderWriterLockSlim (refer to here and here for more details).
If you have only one database, creating just one instance of the lock is OK; if you have more than one database, each of them can be assigned its own lock. Every time you do a read or write, enter the lock (with EnterReadLock() for ReaderWriterLockSlim, or AcquireReaderLock() for ReaderWriterLock) before you start, and exit the lock when you're done. Note that you can place the exit of the lock in a finally clause lest you forget to release it.
The strategy above is used in our production applications. Funneling everything through a single thread is not as good an option in your case, because you have to take performance into account.

How to handle errors in a trigger?

I'm writing some SQL code that needs to be executed when rows are inserted in a database table, so I'm using an AFTER INSERT trigger; the code is quite complex, thus there could still be some bugs around.
I've discovered that, if an error happens when executing a trigger, SQL Server aborts the batch and/or the whole transaction. This is not acceptable for me, because it causes problems for the main application that uses the database; I also don't have the source code for that application, so I can't properly debug it. I absolutely need all database actions to succeed, even if my trigger fails.
How can I code my trigger so that, should an error happen, SQL Server will not abort the INSERT action?
Additionally, how can I perform proper error handling so that I can actually know the trigger has failed? Sending an email with the error data would be ok for me (the trigger's main purpose is actually sending emails), but how do I detect an error condition in a trigger and react to it?
Edit:
Thanks for the tips about optimizing performance by using something other than a trigger, but this code is not "complex" in the sense of being long-running or performance-intensive; it simply builds and sends a mail message. In order to do so, though, it must retrieve data from various linked tables, and since I am reverse-engineering this application, I don't have the database schema available and am still trying to find my way around it; this is why conversion errors or unexpected/null values can still creep up and crash the trigger execution.
Also, as stated above, I absolutely can't debug the application itself, nor modify it to do what I need in the application layer; the only way to react to an application event is to fire a database trigger when the application writes to the DB that something has just happened.
If the operations in the trigger are complex and/or potentially long running, and you don't want the activity to affect the original transaction, then you need to find a way to decouple the activity.
One way might be to use Service Broker. In the trigger, just create message(s) (one per row) and send them on their way, then do the rest of the processing in the service.
If that seems too complex, the older way to do it is to insert the rows needing processing into a work/queue table, and then have a job continuously pulling rows from there and doing the work.
Either way, you're now not preventing the original transaction from committing.
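A sketch of the work/queue-table variant (all names invented; the fragile mail-building code moves into the job that polls MailQueue, outside the application's transaction):

    CREATE TABLE MailQueue (
        QueueId    INT IDENTITY PRIMARY KEY,
        OrderId    INT NOT NULL,                        -- key of the inserted row
        EnqueuedAt DATETIME NOT NULL DEFAULT GETDATE(),
        Processed  BIT NOT NULL DEFAULT 0
    );
    GO
    CREATE TRIGGER trg_Orders_Insert ON Orders AFTER INSERT AS
    BEGIN
        SET NOCOUNT ON;
        -- The trigger only records that something happened; nothing here
        -- is likely to fail and abort the application's INSERT.
        INSERT INTO MailQueue (OrderId)
        SELECT OrderId FROM inserted;
    END;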
Triggers are part of the transaction. You could wrap the trigger code in a try/catch/swallow, or the somewhat more professional try/catch/log/swallow, but really you should let it go bang and then fix the real problem, which can only be in your trigger.
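The try/catch/log/swallow version would look roughly like this (sketch; ErrorLog and dbo.SendOrderMail are placeholders, and note that errors severe enough to doom the transaction, i.e. XACT_STATE() = -1, cannot be swallowed this way):

    CREATE TRIGGER trg_Orders_Mail ON Orders AFTER INSERT AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRY
            -- Fragile code: read the linked tables, build and send the mail.
            EXEC dbo.SendOrderMail;
        END TRY
        BEGIN CATCH
            -- Swallow the error so the INSERT still succeeds, but log it.
            INSERT INTO ErrorLog (LoggedAt, Message)
            VALUES (GETDATE(), ERROR_MESSAGE());
        END CATCH;
    END;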
If none of the above are acceptable, then you can't use a trigger.

What locks are enforced by SQL Server 2005 Express?

Consider a web page with a GridView connected to a SqlDataSource that has all permissions to insert, update, and delete.
Publish the web page.
This is all local, on one computer.
Now
opening the website in browser A - pressing edit in the grid-view
opening the website in browser B - pressing edit in the grid-view
Now I edit in both browsers and press update one by one: fine, no problem.
The last update is the one retained.
But here is a hypothetical situation:
what if there were two computers, or
what if I had two mouse pointers controlled by two independent mice
(a computer is capable of running two apps at the same time),
and both users got ready and pressed update in their browsers at the same time?
Even with two different computers this is not really possible, but for this question, consider it possible:
an update from two different sources to the same database, same table, same row,
at the same time, same second, same microsecond, no delay: both hit the database server at exactly the same instant.
What will happen?
In theory, I have studied that database management software implements locks when writing (no reading, no other writing, and so on), but does SQL Server 2005 Express implement locks in practice, or is it assumed that a situation like the above will never occur?
If there are locks, please provide an explanation or a resource that explains them, covering the different access scenarios.
Thank you
edit: I am not using a control like SqlDataSource, so please keep that in mind when providing statements to avoid a blind update.
It's like this (in outline):
    SqlConnection conn = new SqlConnection(.....);
    SqlCommand cmd = new SqlCommand("sql statement for updating values of a particular row", conn);
    conn.Open();
    cmd.ExecuteNonQuery();
    conn.Close();
So, as you can see, how can I check, before ExecuteNonQuery, whether the data was recently changed, and ask "are you sure you want to proceed?" or something?
I am kind of confused here, I think...
Most applications solve this with optimistic concurrency control. The application simply adds more conditions to the UPDATE's WHERE clause in order to detect changes that occurred between the time the data was read and the moment the update is applied. It is called optimistic concurrency because the application assumes no concurrent changes will occur, and if they do occur, they are detected and the application has to restart the operation. The alternative to optimistic concurrency is pessimistic concurrency, where the application explicitly locks the data it plans to update. In practice, operations involving user interaction are never done under the pessimistic concurrency model.
Another concurrency model, especially in distributed applications, is the one implied by the Fiefdoms and Emissaries model.
So while database locks and transaction concurrency models are omnipresent in any database operation, no application relies on database locks when user interaction is involved. User interactions are simply way too long in terms of database transactions. Holding locks while forgetful Fred is out to lunch with a data screen open on his desktop simply doesn't work.
SQL Server 2005 will enforce locks. Before a row can be updated, the transaction must acquire an exclusive lock on it. Only one transaction at a time can be granted this lock, so the other one will have to wait for that transaction to commit (two-phase locking) before being granted the lock it needs for the update.
The second write will "win", in that it will overwrite the first one. You can implement optimistic concurrency control in the SqlDataSource to detect that the row has changed and abort the second update rather than blindly overwriting the first edit.
Edit
Following the clarification to the question: if you want to roll your own, you could add a timestamp column to the table (in SQL Server 2005 this is updated automatically whenever the row is updated), bring it back as a hidden data item in the GridView, and then add a WHERE clause to your UPDATE statement: UPDATE ... WHERE PrimaryKeyColumn=@PKValue AND TimeStampCol=@OriginalTimestampValue. If no rows were affected (generally retrievable from ExecuteNonQuery), then another transaction modified the row. This is a bit more lightweight than the alternative used by the data source control, which passes back the original values of all columns and adds them into the WHERE clause with similar logic.
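A hedged sketch of that check in T-SQL (table and column names invented; @OriginalTs is the timestamp/rowversion value that was read along with the row):

    UPDATE Orders
       SET Status = @NewStatus
     WHERE OrderId = @OrderId
       AND RowTs = @OriginalTs;   -- RowTs is a timestamp/rowversion column

    IF @@ROWCOUNT = 0
    BEGIN
        -- Someone else changed (or deleted) the row since it was read:
        -- surface "are you sure you want to proceed?" to the user instead
        -- of blindly overwriting. From C#, this row count is exactly the
        -- return value of ExecuteNonQuery.
        RAISERROR('Row was modified by another user.', 16, 1);
    END;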