Firebird lock table / lock record - locking

Suppose you have one table for a desktop application and several users.
When a user opens a record, I want to lock this record. I have tried the "WITH LOCK" statement, and it works fine.
But when a second user wants to update the same record, I want to show a message: "Sorry, you cannot work on this order because it is locked. Somebody else opened this record before you." Firebird waits for the first user to commit/rollback. I don't want to wait; I want to show an error message instead. Is there a simple way to ask Firebird for a record's lock status?
Is there a way to lock a full table? Or to set a semaphore/mutex (like GET_LOCK on MySQL)?
I have tried RESERVING on the SET TRANSACTION statement, but it does not work.
My wish is to display a message to the user, not to wait.
Thanks

If you don't want to wait, then configure your transaction to use NO WAIT, or a wait timeout. However, controlling business rules like this through database transactions is not advisable, as it requires long-running transactions, which inhibit garbage collection, increase the chain of interesting transactions, and increase the chance of update conflicts.
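A minimal sketch of those transaction options (the orders table and order_id column are placeholders, not from the question):
set transaction no wait; -- a lock conflict error is raised immediately
-- or, since Firebird 2.0, wait at most 5 seconds before erroring:
set transaction wait lock timeout 5;
select * from orders where order_id = 1 with lock; -- errors if the record is locked
Catch the lock-conflict error in the application and show your "somebody else has opened this record" message there.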
I'd advise using different options, such as:
First to update wins
Change detection (e.g. by a timestamp or record-version counter that is also used as a condition in the update statement), allowing the user to overwrite, abandon, or maybe merge their update (see the sketch after this list)
Explicit reservation by updating the record (setting the username) in a separate transaction. This might require cleanup, or the ability for a user to break the reservation (e.g. if someone has had it open for too long).
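A hedged sketch of the change-detection option, assuming a hypothetical orders table with a record_version counter:
update orders
set status = 'APPROVED', record_version = record_version + 1
where order_id = 1
and record_version = 7; -- the version the client originally read
If zero rows were updated, somebody else changed the record first, and the application can let the user overwrite, abandon, or merge.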
Note that Firebird uses multi-version concurrency control (MVCC), so explicit locking is not really natural. See also this answer to Locking tables firebird, delphi.
Locking tables using RESERVING should be possible, but I have never used it, so I am not entirely sure how to use it; you probably also need to specify FOR PROTECTED READ (see the InterBase 6.0 Embedded SQL Guide, pages 70/71).
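An untested sketch of that syntax, based on the guide (mytable is a placeholder):
set transaction reserving mytable for protected read;
If I read the guide correctly, PROTECTED READ lets other transactions read the table but blocks their updates while the reserving transaction runs.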

Related

Can Oracle return a timeout when another connection is already using the same table?

If I need to run a DML statement (insert, update, delete) on one table of the database, Oracle first verifies whether there is an active DML operation using that table. If there is another operation in progress at that moment, my connection waits until it has finished.
Is there a way to get a timeout in these cases? Not globally, only for specific cases.
Edit: more details about the problem
I am not sure whether any kind of lock is actually used. But in my case there is an old application written in Oracle Forms and a new application written by me.
The problem is that when a user opens a specific record to update some field in the old application, and I try to edit the same record in my app, the row is blocked.
So my app waits for the unlock. The problem is that the user thinks the application is frozen and kills it, losing the changes.
This is not the case when another Oracle Forms application attempts the edit: Oracle Forms displays the message "Could not reserve record (2). Keep trying?". Maybe that is because the old app uses some kind of lock, but I need to handle this in my own code.
Note: the number 2 is the number of attempts to update.
If you do a LOCK TABLE ... WAIT, it will wait until any in-flight DML on this table commits, and then give you the lock. This will make anyone coming after you wait until you release the lock. Look at the documentation to see how to use it.
Then there is the possibility of locking a single row (SELECT ... FOR UPDATE), which is more granular.
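A hedged sketch of the fail-fast variants (table and column names are placeholders):
select * from orders where id = 42 for update nowait; -- ORA-00054 if the row is locked
select * from orders where id = 42 for update wait 5; -- ORA-30006 after 5 seconds
lock table orders in exclusive mode nowait; -- whole-table equivalent
Catching ORA-00054 or ORA-30006 in the application lets you show a "record is in use" message instead of blocking.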
That being said, can you please explain what exactly you are trying to do? You may not need to do this at all.

Fire SQL trigger only when a particular user updates the row

There is a trigger in Postgres that gets called whenever a particular table is updated.
It is used to send updates to another API.
Is there a way to control the firing of this trigger?
Sometimes when I update the table I don't want the trigger to be fired. How do I do this?
Is there SQL syntax to silence a trigger?
If not:
Can I fire the trigger only when PG user X updates a row, so that no trigger is fired when PG user Y updates the table?
In recent Postgres versions, there is a WHEN clause that you can use to conditionally fire the trigger. You could use it like:
... when (old.* is distinct from new.*) ...
I'm not 100% sure this one will work (I can't test at the moment):
... when (current_user = 'foo') ...
(If not, try placing the condition in an if block in your plpgsql function.)
http://www.postgresql.org/docs/current/static/sql-createtrigger.html
(There also is the [before|after] update of [col_name] syntax, but I tend to find it less useful because it fires even if the column's value remains the same.)
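An untested sketch combining both WHEN conditions (the trigger, table, user, and function names are placeholders):
create trigger notify_api
after update on mytable
for each row
when (old.* is distinct from new.* and current_user <> 'batch_user')
execute procedure notify_api_fn();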
Adding this extra note, seeing that @CraigRinger's answer highlights what you're up to...
Trying to set up master-master replication between Salesforce and Postgres using conditional triggers is, I think, a pipe dream. Just forget it... There's going to be a lot more to it than that: you'll need to lock data as appropriate on both ends (which won't necessarily be feasible in a reasonable way), manage the resulting deadlocks (which might not automatically get detected), and deal with conflicting data.
Your odds of successfully pulling this off with a tiny team are about zero, especially if your Postgres skills are at the level where investing time in reading the manual would answer your own questions. You can safely bet that someone much more competent at Salesforce or at some major SQL shop (e.g. the one Craig works for) considered the same, and either failed miserably or ruled it out.
Moreover, I'd stress that implementing efficient, synchronous, multi-master replication is not a solved problem. You read that right: not solved. Just a few years ago, it wasn't solved well enough to make it into the Postgres core, so you have no prior art that works well to base your work on and iterate upon.
This seems to be the same problem as this post a few minutes ago, approaching it from a different direction.
If so, while you can indeed do as Denis suggests, don't attempt to reinvent this wheel. Use an established tool like Slony-I or Bucardo if you are attempting two-way (multi-master) replication. You also need to understand the major limitations of multi-master replication when dealing with conflicting updates.
In general, there are a few ways to control trigger firing:
Let the trigger fire, then put logic in the PL/PgSQL trigger body to cause it to take no action if a certain condition is met. This is often the only option when the rules are complex.
As Denis points out, use a trigger WHEN clause to conditionally fire the trigger
Use session_replication_role to control the firing of all triggers
Directly enable/disable triggers (a sketch of these last two options follows this list).
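A hedged sketch of those last two options (table, column, and trigger names are placeholders):
-- suppress ordinary (non-ALWAYS) triggers for the current session only:
set session_replication_role = replica;
update mytable set col = 'x' where id = 1; -- the API trigger does not fire
set session_replication_role = default;
-- disable/enable a specific trigger explicitly (affects all sessions):
alter table mytable disable trigger notify_api;
alter table mytable enable trigger notify_api;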
In particular, if your application shares a single SQL-level user ID for all database access and does its own user management above the SQL level, and you want to control trigger firing on a per-user basis, the only way to do it will be with in-trigger logic. You might find this prior answer about getting user IDs within triggers useful:
Passing user id to PostgreSQL triggers

SQLite concurrent connections issue

I am working on a VB.NET application.
As per the nature of the application, one module has to monitor the database (a SQLite DB) each second. This monitoring is done by a simple SELECT statement that checks data against some condition.
Other modules perform SELECT, INSERT, and UPDATE statements on the same SQLite DB.
Concurrent SELECT statements work fine on SQLite, but I'm having a hard time finding out why it is not allowing INSERT and UPDATE.
I understand it's a file-based lock, but is there any way to get this done?
Each module, in fact each statement, opens and closes its own connection to the DB.
I've restricted the user to running a single statement at a time through the GUI design.
Any help will be appreciated.
If your database file is not on a network, you could allow a certain amount of read/write concurrency by enabling WAL mode.
But perhaps you should use only a single connection and do your own synchronization for all DB accesses.
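A minimal sketch of enabling that (WAL mode persists in the database file, while the busy timeout is per connection; 5000 ms is just an example value):
PRAGMA journal_mode = WAL; -- readers no longer block the writer, and vice versa
PRAGMA busy_timeout = 5000; -- wait up to 5 s for a lock instead of failing immediately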
You can use a locking mechanism to make sure the database works in a multithreading situation. Since your application is a read-intensive one, according to what you said, you can consider using a ReaderWriterLock or ReaderWriterLockSlim (refer to here and here for more details).
If you have only one database, creating just one instance of the lock is OK; if you have more than one database, each of them can be assigned its own lock. Every time you read or write, enter the lock (EnterReadLock() for ReaderWriterLockSlim, or AcquireReaderLock() for ReaderWriterLock) before you do something, and exit the lock after you're done. Note that you can place the exit of the lock in a finally clause so you don't forget to release it.
The strategy above is used in our production applications. Using a single thread is not a good fit in your case, because you have to take performance into account.

What locks are enforced by SQL Server 2005 Express?

Consider a web page with a GridView connected to a SqlDataSource that has all permissions to insert, update, and delete.
Publish the web page.
This is all on one local computer.
Now:
Open the website in browser A and press Edit on the GridView.
Open the website in browser B and press Edit on the GridView.
Now I edit in both browsers and press Update one by one; fine, no problem.
The last update is the one retained.
But consider a hypothetical situation:
what if there were two computers, or
what if I had two mouse pointers controlled by two independent mice?
A computer is capable of running two apps at the same time.
Both users get ready and press Update in their browsers at the same time.
Even with two different computers this is not really possible, but for this question, consider it possible:
an update from two different sources to the same database, same table, same row,
at the same time, same second, same microsecond, no delay; both hit the database server at the same time.
What will happen?
In theory I have studied that database management systems implement locks (when writing: no reading, no other writing, etc.), but does SQL Server 2005 Express implement locks in practice, or is it assumed that a situation like the above will never occur?
If there are locks, please provide an explanation or a resource that explains them, covering the different access scenarios.
Thank you
Edit: I am not using a control like SqlDataSource, so please keep that in mind when providing statements to avoid a blind update.
It's like this, in outline:
SqlConnection conn = new SqlConnection(...);
SqlCommand cmd = new SqlCommand("sql statement for updating values of a particular row", conn);
conn.Open();
cmd.ExecuteNonQuery();
conn.Close();
So, given the above, how can I check before ExecuteNonQuery whether the data has recently been changed, and ask "are you sure you want to proceed?" or something similar?
I am a bit confused here, I think.
This is solved by most applications using optimistic concurrency control. Applications simply add more conditions to the update's WHERE clause in order to detect changes that occurred between the time the data was read and the moment the update is applied. It is called optimistic concurrency because the application assumes no concurrent changes will occur, and if they do occur, they are detected and the application has to restart the operation. The alternative to optimistic concurrency is pessimistic concurrency, where the application explicitly locks the data it plans to update. In practice, operations involving user interaction are never done under the pessimistic concurrency model.
Another concurrency model, used especially in distributed applications, is the one implied by the Fiefdoms and Emissaries model.
So while database locks and transaction concurrency models are omnipresent in any database operation, no application will ever rely on database locks when user interaction is involved. User interactions are simply way too long in terms of database transactions. Acquiring locks for the duration while forgetful Fred is out to lunch with a data screen open on his desktop simply doesn't work.
SQL Server 2005 will enforce locks. Before a row can be updated, the transaction must acquire an exclusive lock on it. Only one transaction can be granted this at a time, so the other one will have to wait for that transaction to commit (two-phase locking) before being granted the lock that it needs for the update.
The second write will "win", in that it will overwrite the first one. You can implement optimistic concurrency controls in the SqlDataSource to detect that the row has changed and abort the second update rather than blindly overwriting the first edit.
Edit
Following clarification of the question: if you want to roll your own, you could add a timestamp column to the table (in SQL Server 2005 this is updated automatically when a row is updated) and bring that back as a hidden data item in the GridView. Then, in your UPDATE statement, add a WHERE clause: UPDATE ... WHERE PrimaryKeyColumn = @PKValue AND TimeStampCol = @OriginalTimestampValue. If no rows were affected (retrievable from ExecuteNonQuery, generally), then another transaction modified the row. This might be a bit more lightweight than the alternative used by the data source control, which passes back the original values of all columns and adds them into the WHERE clause with similar logic.
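A hedged sketch of that approach (table and column names are placeholders; the timestamp type is SQL Server's auto-updating row version, not a wall-clock time):
alter table orders add rv timestamp;
update orders
set status = @NewStatus
where order_id = @PKValue and rv = @OriginalRowVersion;
-- if @@ROWCOUNT is 0 (i.e. ExecuteNonQuery returns 0), another
-- transaction changed the row since it was read, so warn the user
-- instead of overwriting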

Undo in a transactional database

I do not know how to implement the undo feature of user-friendly interfaces on top of a transactional database.
On the one hand, it is advisable for the user to have a multilevel (infinite) undo capability, as stated in the answer here. Patterns that may help with this problem are Memento and Command.
However, with a complex database including triggers, ever-growing sequence numbers, and uninvertible procedures, it is hard to imagine how an undo action could work at points other than transaction boundaries.
In other words, undoing to the point when the transaction last committed is simply a rollback, but how is it possible to go back to other moments?
UPDATE (based on the answers so far): I do not necessarily need undo to work once the modification has been committed; I would focus on a running application with an open transaction. Whenever the user clicks Save, that means a commit, but before saving, during the same transaction, undo should work. I know that using the database as a persistence layer is just an implementation detail and the user should not have to bother with it. But if we accept that "the idea of undo in a database and in a GUI are fundamentally different things" and we do not use undo with a database, then infinite undo is just a buzzword.
I know that "rollback is ... not a user-undo".
So how can one implement a client-level undo, given the "cascading effects as the result of any change" inside the same transaction?
The idea of undo in a database and in a GUI are fundamentally different things; the GUI is going to be a single user application, with low levels of interaction with other components; a database is a multi-user application where changes can have cascading effects as the result of any change.
The thing to do is allow the user to try and apply the previous state as a new transaction, which may or may not work; or alternatively just don't have undos after a commit (similar to no undos after a save, which is an option in many applications).
Some (all?) DBMSs support savepoints, which allow partial rollbacks:
savepoint s1;
insert into mytable (id) values (1);
savepoint s2;
insert into mytable (id) values (2);
savepoint s3;
insert into mytable (id) values (3);
rollback to s2;
commit;
In the above example, only the first insert would remain, the other two would have been undone.
I don't think it is practical in general to attempt undo after commit, for the reasons you gave and probably others. If it is essential in some scenario, then you will have to build a lot of code to do it and take into account the effects of triggers etc.
I don't see any problem with ever-increasing sequences, though.
We developed such a capability in our database by keeping track of all transactions applied to the data (not really all, just those less than 3 months old). The basic idea was to be able to see who did what and when. Each database record, uniquely identified by its GUID, can then be considered the result of one INSERT statement, multiple UPDATE statements, and finally one DELETE statement. As we keep track of all these SQL statements, and as INSERTs are global INSERTs (the values of all fields are kept in the INSERT statement), it is possible to:
Know who modified which field and when (Paul inserted a new line in the proforma invoice, Bill renegotiated the unit price of the item, Pat modified the final ordered quantity, etc.)
'Undo' all previous transactions with the following rules (see the sketch after this list):
an 'INSERT' undo is a 'DELETE' based on the unique identifier
an 'UPDATE' undo is equivalent to replaying the previous 'UPDATE'
a 'DELETE' undo is equivalent to replaying the first INSERT followed by all the updates
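A hedged sketch of such an audit trail (all names and types are placeholders):
create table audit_trail (
    audit_id     integer primary key,
    record_guid  char(36) not null,      -- identifies the record across its whole life
    performed_by varchar(64) not null,
    performed_at timestamp not null,
    operation    char(6) not null,       -- 'INSERT' | 'UPDATE' | 'DELETE'
    sql_text     varchar(4000) not null  -- the full statement, so it can be replayed
);
-- undo of an INSERT on record :g => delete from <table> where guid = :g
-- undo of an UPDATE on record :g => replay the previous UPDATE stored for :g
-- undo of a DELETE on record :g => replay the original INSERT, then all UPDATEs in order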
As we do not keep track of transactions older than 3 months, undos are not always available.
Access to this functionality is strictly limited to database managers, as other users are not allowed to do any data update outside of the business rules (example: what would be the meaning of an 'undo' on a purchase order line once the purchase order has been agreed by the supplier?). To tell you the truth, we use this option very rarely (a few times a year?).
It's nearly the same as William's post (which I actually voted up), but I'll try to point out in a little more detail why it is necessary to implement a user undo (vs. using a database rollback).
It would be helpful to know more about your application, but I think for a user-friendly undo/redo the database is not the adequate layer to implement the feature.
The user wants to undo the actions he did, independent of whether these led to no, one, or more database transactions.
The user wants to undo the actions HE did (not anyone else's).
The database, from my point of view, is an implementation detail, a tool you use as a programmer for storing data. The rollback is a kind of undo that helps YOU in doing so; it's not a user undo. Using the rollback would mean involving the user in things he doesn't want to know about and does not understand (and doesn't have to), which is never a good idea.
As William posted, you need an implementation inside the client, or on the server side as part of a session, which tracks the steps of what you define as a user transaction and is able to undo them. If database transactions were committed during those user transactions, you need other DB transactions to revoke them (if possible). Make sure to give useful feedback if the undo is not possible, which again means explaining it business-wise, not database-wise.
To maintain arbitrary rollback-to-previous semantics you will need to implement logical deletion in your database. This works as follows:
Each record has a 'Deleted' flag, a version number, and/or a 'Current Indicator' flag, depending on how clever you need your reconstruction to be. In addition, you need a per-entity key shared across all versions of that entity, so you know which records actually refer to which specific entity. If you need to know the times a version was applicable to, you can also have 'From' and 'To' columns.
When you delete a record, you mark it as 'deleted'. When you change it, you create a new row and update the old row to reflect its obsolescence. With the version number, you can simply find the previous version to roll back to.
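A hedged sketch of that layout (all names and types are placeholders):
create table customer_version (
    entity_id   integer not null,  -- stable key shared by all versions of the entity
    version_no  integer not null,  -- incremented on every change
    valid_from  timestamp not null,
    valid_to    timestamp,         -- null for the current version
    is_current  char(1) not null,  -- 'Y' only on the latest row
    is_deleted  char(1) not null,  -- logical delete flag
    name        varchar(100),
    primary key (entity_id, version_no)
);
-- an 'update' closes the current row and inserts the next version:
update customer_version
set is_current = 'N', valid_to = current_timestamp
where entity_id = 42 and is_current = 'Y';
insert into customer_version (entity_id, version_no, valid_from, is_current, is_deleted, name)
values (42, 8, current_timestamp, 'Y', 'N', 'New name');
Rolling back to version 7 is then the same move in reverse: mark version 8 as not current and reinstate version 7 (or insert a copy of it as version 9).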
If you need referential integrity (which you probably do, even if you think you don't) and can cope with the extra I/O, you should also have a parent table with the key as a placeholder for the record across all versions.
On Oracle, clustered tables are useful for this; both the parent and version tables can be co-located, minimising the overhead for the I/O. On SQL Server, a covering index (possibly clustered on the entity key) including the key will reduce the extra I/O.