Foreign table insertion is failing in concurrent transactions - Redis

The Redis foreign data wrapper in Postgres is not behaving correctly when concurrent transactions happen in the production environment. I am inserting into foreign tables via RabbitMQ and keeping a log table for each transaction. But every time I get "key already exists" (SQLSTATE 23505). In my db function, each transaction checks whether the key already exists before inserting.
Multiple concurrent transactions create this issue; I do not face it with a single transaction or in the development environment.
Is any OS-level change required?
Can anyone explain a solution?
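One way to close this race, sketched below in Java/JDBC with hypothetical table and column names (redis_log, key, value), is to serialize writers of the same key with a transaction-scoped advisory lock, so that two concurrent transactions cannot both pass the existence check before either of them inserts. This is only a sketch; it assumes every writer (for example, every RabbitMQ consumer) goes through the same code path. A simpler alternative is to catch the unique_violation (23505) and treat it as "already inserted".

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch only: redis_log and its key/value columns are placeholders for the real
// foreign table. The advisory lock is released automatically at commit/rollback.
void insertIfAbsent(Connection con, String key, String value) throws SQLException {
    con.setAutoCommit(false);
    try {
        try (PreparedStatement lock = con.prepareStatement(
                "SELECT pg_advisory_xact_lock(hashtext(?))")) {
            lock.setString(1, key);
            lock.execute();
        }
        boolean exists;
        try (PreparedStatement check = con.prepareStatement(
                "SELECT 1 FROM redis_log WHERE key = ?")) {
            check.setString(1, key);
            try (ResultSet rs = check.executeQuery()) {
                exists = rs.next();
            }
        }
        if (!exists) {
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO redis_log (key, value) VALUES (?, ?)")) {
                ins.setString(1, key);
                ins.setString(2, value);
                ins.executeUpdate();
            }
        }
        con.commit();
    } catch (SQLException e) {
        con.rollback();
        throw e;
    }
}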

Related

Linking Users table to other entities/tables

I have a data-driven ASP application, and like most data-driven applications, I have a Users table and a CreatedBy field in most of the tables.
I am trying to create a DeleteUserFunction in my application. Before deleting any user I must check each and every table to see if that user has created any records.
Building relationships between the Users table and the rest of the database tables can make the DeleteUserFunction easier to validate.
Therefore, I am trying to figure out whether a Users table must be explicitly linked to the other tables (via foreign key constraints) or whether this must be handled in application business logic.
First, your functional requirements need to be clarified. What should happen if a user gets deleted?
1 User may not get deleted if records linked to her are present.
2 User may be deleted, and
2.1 all records linked to her stay present, without link to that user,
2.2 all records linked to her must be deleted as well.
A foreign key constraint can support 1 (with the default NO ACTION/RESTRICT) and 2.2 (with ON DELETE CASCADE). Case 2.1 needs ON DELETE SET NULL and a nullable user column; a plain constraint on its own won't change the user foreign key in the referring record.
However, using a foreign key constraint as the only way to enforce this might lead to a strange software structure and user experience. In all cases, the violation of the constraint will be detected by the database and will, by default, lead to some exception in the data access module. But you probably don't want your application to crash with an exception; you'd rather tell your user that this function is not available (in case 1) or trigger some application function (in cases 2.1 and 2.2). This would mean passing the exception around in the software until the layer that should handle the case is reached.
Therefore, I'd recommend performing the necessary checks to find out whether the deletion is legal, and triggering the logical consequences, as part of the application logic. The foreign key constraint may still be useful as a way to detect application errors during tests.
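As a rough sketch of that recommendation, in Java/JDBC and with invented table and column names (Users, Orders, CreatedBy), the application checks for referencing records itself and reports the result to the user, instead of letting the foreign key violation surface as an exception:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Returns false (case 1: deletion refused) if the user still owns records,
// true if the user row was deleted. Table and column names are placeholders.
boolean tryDeleteUser(Connection con, long userId) throws SQLException {
    try (PreparedStatement check = con.prepareStatement(
            "SELECT COUNT(*) FROM Orders WHERE CreatedBy = ?")) {
        check.setLong(1, userId);
        try (ResultSet rs = check.executeQuery()) {
            rs.next();
            if (rs.getLong(1) > 0) {
                return false;   // tell the user the delete is not available
            }
        }
    }
    try (PreparedStatement del = con.prepareStatement(
            "DELETE FROM Users WHERE Id = ?")) {
        del.setLong(1, userId);
        return del.executeUpdate() == 1;
    }
}

The foreign key constraint stays in place as a safety net: if a record is created between the check and the DELETE, the database still rejects the deletion (SQLSTATE 23503) and the application can map that to the same user-facing message.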

VirtoCommerce - Deleting a Large Catalog

I have a catalog with 1,500+ items in it hosted on Azure in VC version 2.4.644. When I attempt to delete it, the admin UI spins for 3-4 minutes, and then the deletion eventually fails with no error message. This is on a database that has been scaled up to S2. Deleting a catalog with a smaller number of items succeeds.
Is there a log somewhere I can review that might tell me why this is failing?
If I wanted to do this manually via SQL, is it "safe" to just delete the items from the dbo.Item table, or are there records in foreign tables that would be orphaned by this operation?
Unfortunately, the catalog delete operation is not optimal in the current version, and a possible reason for your issue is a request timeout. We are trying to improve deletion performance in the next release (approximately next week).
Deleting through native SQL is not simple, because Catalog contains hierarchical structures and many related tables without cascade deletion (some tables contain self references, an MS SQL limitation).

Is the communication between the primary and an active secondary secured, and how does it work?

The Premium service tier of Azure SQL Database provides active geo-replication, with which up to 4 readable secondaries can be created. I want to know whether the communication between the primary and secondary databases is secure, and whether there is any chance of the data being hacked in transit.
For more information: Azure SQL Database Inside # High Availability
First, a transaction is not considered to be committed unless the primary replica and at least one secondary replica can confirm that the transaction log records were successfully written to disk. Second, if both a primary replica and a secondary replica must report success, small failures that might not prevent a transaction from committing but that might point to a growing problem can be detected.

NHibernate and Auto Incremented ID Deadlocks

I've been using SharpArchitecture with NHibernate to build my site, which can have many users. My tables are set up with primary keys defined in the database as IDENTITY(1,1). The last couple of days I've been noticing a bunch of deadlock problems showing up in the log file, such as:
System.Data.SqlClient.SqlException: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
And I get this error sometimes which may be related:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In my web.config I've included this line to set the db isolation property:
<property name="connection.isolation">ReadUncommitted</property>
Based on what I've found through my searches, the auto-incremented ID is locking the table, even though I have ReadUncommitted. That being said:
Am I correct with this conclusion?
If so, I assume if I go with generating the ID with something like HiLo or GuidComb it would solve the issue?
Thank you!
Probably yes.
Yes.
With many users, HiLo also reduces traffic and load on the database, so switching to it would be good anyway.

Deadlock error in INSERT statement

We've got a web-based application. There are time-bound database operations (INSERTs and UPDATEs) in the application which take a long time to complete, so this particular flow has been moved into a Java Thread so that the request does not wait (block) for the database operation to complete.
My problem is, if more than one user goes through this particular flow, I get the following error thrown by PostgreSQL:
org.postgresql.util.PSQLException: ERROR: deadlock detected
Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566.
Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560.
The above error is consistently thrown in INSERT statements.
Additional Information:
1) I have a PRIMARY KEY defined on this table.
2) There are FOREIGN KEY references in this table.
3) A separate database connection is passed to each Java Thread.
Technologies
Web Server: Tomcat v6.0.10
Java v1.6.0
Servlet
Database: PostgreSQL v8.2.3
Connection Management: pgpool II
One way to cope with deadlocks is to have a retry mechanism that waits for a random interval and tries to run the transaction again. The random interval is necessary so that the colliding transactions don't continuously keep bumping into each other, causing what is called a live lock - something even nastier to debug. Actually most complex applications will need such a retry mechanism sooner or later when they need to handle transaction serialization failures.
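A minimal sketch of such a retry wrapper in Java/JDBC (doInsertsAndUpdates is a placeholder for the real transactional work); PostgreSQL reports deadlocks as SQLSTATE 40P01 and serialization failures as 40001:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Random;

private static final Random BACKOFF = new Random();

// Runs the transactional work, retrying a limited number of times when the
// transaction is chosen as a deadlock victim or hits a serialization failure.
void runWithRetry(Connection con, int maxAttempts) throws SQLException, InterruptedException {
    for (int attempt = 1; ; attempt++) {
        try {
            con.setAutoCommit(false);
            doInsertsAndUpdates(con);               // placeholder for the real work
            con.commit();
            return;
        } catch (SQLException e) {
            con.rollback();
            boolean retryable = "40P01".equals(e.getSQLState())   // deadlock_detected
                    || "40001".equals(e.getSQLState());           // serialization_failure
            if (!retryable || attempt >= maxAttempts) {
                throw e;
            }
            Thread.sleep(50 + BACKOFF.nextInt(200));   // random backoff against live locks
        }
    }
}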
Of course if you are able to determine the cause of the deadlock it's usually much better to eliminate it or it will come back to bite you. For almost all cases, even when the deadlock condition is rare, the little bit of throughput and coding overhead to get the locks in deterministic order or get more coarse-grained locks is worth it to avoid the occasional large latency hit and the sudden performance cliff when scaling concurrency.
When you are consistently getting two INSERT statements deadlocking, it's most likely a unique index insert order issue. Try for example the following in two psql command windows:
Thread A                        | Thread B
BEGIN;                          | BEGIN;
                                | INSERT uniq=1;
INSERT uniq=2;                  |
                                | INSERT uniq=2;
                                | (blocks, waiting for thread A to commit or
                                |  roll back, to see if this is a unique key error)
INSERT uniq=1;                  |
(blocks waiting for thread B)   |
DEADLOCK                        |
Usually the best course of action to resolve this is to figure out the parent objects that guard all such transactions. Most applications have one or two primary entities, such as users or accounts, that are good candidates for this. Then all you need is for every transaction to get the locks on the primary entity it touches via SELECT ... FOR UPDATE. Or if it touches several, get locks on all of them, but in the same order every time (ordering by primary key is a good choice).
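For example (the accounts table and the two-account scenario below are invented for illustration), a transaction could lock the parent rows it is about to touch, in primary key order, before doing any inserts or updates:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Locks the parent rows in a deterministic (primary key) order; the row locks
// are held until the surrounding transaction commits or rolls back.
void lockParentAccounts(Connection con, long accountId1, long accountId2) throws SQLException {
    try (PreparedStatement ps = con.prepareStatement(
            "SELECT id FROM accounts WHERE id IN (?, ?) ORDER BY id FOR UPDATE")) {
        ps.setLong(1, accountId1);
        ps.setLong(2, accountId2);
        ps.executeQuery();
        // ... now run the INSERTs/UPDATEs that belong to these accounts ...
    }
}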
What PostgreSQL does here is covered in the documentation on Explicit Locking. The example in the "Deadlocks" section shows what you're probably doing. The part you may not have expected is that when you UPDATE something, that acquires a lock on that row that continues until the transaction involved ends. If you have multiple clients all doing updates of more than one thing at once, you'll inevitably end up with deadlocks unless you go out of your way to prevent them.
If you have multiple things that take out implicit locks like UPDATE, you should wrap the whole sequence in BEGIN/COMMIT transaction blocks, and make sure you're consistent about the order they acquire locks (even the implicit ones like what UPDATE grabs) everywhere. If you need to update something in table A then table B, and one part of the app does A then B while the other does B then A, you're going to deadlock one day. Two UPDATEs against the same table are similarly destined to fail unless you can enforce some ordering of the two that's repeatable among clients. Sorting by primary key once you have the set of records to update and always grabbing the "lower" one first is a common strategy.
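A short sketch of that strategy (accounts and balance are placeholder names): collect the keys first, sort them, and apply the UPDATEs lowest-first so every client acquires the row locks in the same sequence.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Applies a set of balance changes in ascending primary-key order.
void applyDeltas(Connection con, Map<Long, BigDecimal> deltas) throws SQLException {
    List<Long> ids = new ArrayList<Long>(deltas.keySet());
    Collections.sort(ids);   // deterministic lock order across all clients
    try (PreparedStatement ps = con.prepareStatement(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
        for (Long id : ids) {
            ps.setBigDecimal(1, deltas.get(id));
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}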
It's less likely your INSERTs are to blame here; those are much harder to get into a deadlocked situation, unless you violate a primary key as Ants already described.
What you don't want to do is try and duplicate locking in your app, which is going to turn into a giant scalability and reliability mess (and will likely still result in database deadlocks). If you can't work around this within the confines of the standard database locking methods, consider using either the advisory lock facility or explicit LOCK TABLE to enforce what you need instead. That will save you a world of painful coding over trying to push all the locks onto the client side. If you have multiple updates against a table and can't enforce the order they happen in, you have no choice but to lock the whole table while you execute them; that's the only route that doesn't introduce a potential for deadlock.
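As a sketch of both of those options in Java/JDBC (the lock id 42 and the table name my_table are arbitrary placeholders):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Option A: application-defined advisory lock around the critical section.
void withAdvisoryLock(Connection con) throws SQLException {
    try (Statement st = con.createStatement()) {
        st.execute("SELECT pg_advisory_lock(42)");
        try {
            // ... run the deadlock-prone statements here ...
        } finally {
            st.execute("SELECT pg_advisory_unlock(42)");
        }
    }
}

// Option B: explicit table lock, held until the current transaction ends
// (assumes autocommit is off). SHARE ROW EXCLUSIVE blocks other writers of
// my_table but still allows plain reads.
void withTableLock(Connection con) throws SQLException {
    try (Statement st = con.createStatement()) {
        st.execute("LOCK TABLE my_table IN SHARE ROW EXCLUSIVE MODE");
        // ... run the deadlock-prone statements, then commit ...
    }
}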
Deadlock explained:
In a nutshell, what is happening is that a particular SQL statement (INSERT or other) is waiting on another statement to release a lock on a particular part of the database before it can proceed. Until this lock is released, the first SQL statement, call it "statement A", cannot access that part of the database to do its job (= a regular lock situation). But statement A has also put a lock on another part of the database, to ensure that no other users of the database access it (for reading, or modifying/deleting, depending on the type of lock). Now the second SQL statement is itself in need of accessing the data section marked by the lock of statement A. That is a DEADLOCK: both statements will wait, ad infinitum, on one another.
The remedy...
This would require knowing the specific SQL statements these various threads are running, and looking at whether there is a way to either:
a) remove some of the locks, or change their types. For example, maybe the whole table is locked where only a given row, or a page thereof, would be necessary.
b) prevent several of these queries from being submitted at the same time. This would be done by way of semaphores/locks (aka a mutex) at the level of the multi-threading logic.
Beware that the "b)" approach, if not correctly implemented, may just move the deadlock situation from within SQL to within the program/thread logic. The key would be to create only one mutex, to be obtained first by any thread which is about to run one of these deadlock-prone queries.
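A minimal sketch of approach b): a single process-wide mutex so that only one thread at a time runs the deadlock-prone statements. Note that this only helps if all writers run inside the same JVM.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.locks.ReentrantLock;

// One shared lock guarding every deadlock-prone code path, as suggested above.
private static final ReentrantLock DB_WRITE_LOCK = new ReentrantLock();

void runDeadlockProneWork(Connection con) throws SQLException {
    DB_WRITE_LOCK.lock();
    try {
        // ... the INSERTs/UPDATEs that used to deadlock ...
    } finally {
        DB_WRITE_LOCK.unlock();
    }
}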
Your problem, probably, is that the INSERT command is trying to lock one or both indexes, and the indexes are locked by the other thread.
One common mistake is locking resources in a different order in each thread. Check the order and try to lock the resources in the same order in all threads.