I have the two tables below, on which I perform a delete followed by an insert, but intermittently deadlocks are encountered.
Schedule.Assignments (Parent table)
Schedule.Schedules (Child table)
Intermittently, two types of deadlock occur on the schedule.Schedules table (child), even though the operation is being performed on the schedule.Assignments table (parent). Both show the same deadlock graph, as shown below.
Deadlock between an Insert and a Delete statement on the schedule.Assignments table.
Deadlock between two executions of the same Delete statement on the schedule.Assignments table.
[Deadlock Graph]
Deadlock Graph1 : https://pastebin.com/raw/ZpQUrjBV
Deadlock Graph2 : https://pastebin.com/raw/DhnuyZ7a
StoredProc containing the insert and delete statements: https://pastebin.com/raw/6DNh2RxH
Query Execution Plan: PasteThePlan
[Edit]
Assignments Schema: Assignments Schema
Assignments Indexes: Assignments Indexes
Schedules Schema: Schedules Schema
Schedules Indexes: Schedules Indexes
What I am not able to understand is why the deadlock object shows up as the child table, when the processes involved in the deadlock are running the insert/delete on the parent table.
Please share your thoughts on how to resolve these deadlocks.
It looks like your deadlocking is caused by a big table scan on Schedules. The scan happens in three different places in your procedure. What should happen instead is a simple Nested Loops/Index Seek on ParentId.
The reason you have a scan is that the join condition on ParentId compares an nvarchar(50) column to a bigint. I suggest you fix this by making ParentId a bigint.
ALTER TABLE schedule.Schedules
ALTER COLUMN ParentId bigint NULL;
You may need to drop and re-create indexes or constraints when you do this.
As a side point, although it appears that you have an index on schedule.Assignments (OldResourceRequestId), it is not unique. This is causing an Assert on the various subqueries to ensure only one row is returned, and may also be affecting query statistics/estimates.
I suggest you change it (if possible) to a unique index. If there are duplicates then you need to rethink these joins anyway, as you would get duplicate results or fail the Assert.
CREATE UNIQUE NONCLUSTERED INDEX [IX_Assignments_OldResourceRequestId] ON schedule.Assignments
(
    OldResourceRequestId ASC
)
WITH (DROP_EXISTING = ON, ONLINE = ON) ON [PRIMARY];
You should also take note of your IF statements. They are not indented, and because there is no BEGIN...END it is easy to miss that only the first statement afterwards is conditional. As mentioned in the other answer, the IF may not be necessary anyway.
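For illustration only (the variable and the statements below are made up, not taken from your procedure), this is the kind of trap the missing BEGIN...END creates:
DECLARE @doCleanup bit = 0;

-- Without BEGIN/END, only the first statement is conditional:
IF @doCleanup = 1
    PRINT 'Only this statement is governed by the IF.';
    PRINT 'This statement always runs, despite the indentation.';

-- With BEGIN/END, the intent is explicit:
IF @doCleanup = 1
BEGIN
    PRINT 'Both statements are conditional now.';
    PRINT 'Neither runs when @doCleanup = 0.';
END;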
There's not enough information to be sure, but I'll tell you where I'd start.
Eliminate the IF EXISTS test. Just go straight to the DELETE. If the value isn't there, the DELETE will be quick anyway. You're also not in a transaction, which leaves you open to the table changing between SELECT and DELETE.
Re-write the proprietary DELETE...JOIN as an ANSI DELETE using a WHERE EXISTS subquery. The proprietary form has problems whose details elude me right now; better to write it in a standard way and not invite them.
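I don't have your exact statement in front of me, so treat the join columns below as placeholders (I'm guessing at Schedules.ParentId referencing Assignments.Id based on the other answer), but the shape of the rewrite would be:
-- Proprietary form (placeholder predicate):
--   DELETE a
--   FROM schedule.Assignments AS a
--   JOIN schedule.Schedules AS s ON s.ParentId = a.Id;

-- ANSI form:
DELETE FROM schedule.Assignments
WHERE EXISTS (SELECT 1
              FROM schedule.Schedules AS s
              WHERE s.ParentId = Assignments.Id);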
You say "child" and "parent" tables. Does Schedules have a defined foreign key to Assignments? It should.
I'm not sure those changes will eliminate the problem, but IMO they'll make it less likely. You're increasing the atomicity by reducing the number of statements, and by eliminating branches you force each invocation of the procedure to execute the exact same logical sequence.
Related
I have a DB with about 100 tables. One table Client is referenced by almost all other tables (directly or indirectly).
I am trying to delete all data of one Client by executing this query:
DELETE FROM Client
WHERE Id = SomeNumber
This query should CASCADE-delete all rows in every table that is connected, directly or indirectly, to this Client Id.
The problem is that the query is getting stuck and I don't understand why.
This is the query plan:
I checked for locks with this script:
select * from sysprocesses where blocked > 0
but got no results. I didn't get any errors either, and I don't have any triggers in my DB.
I do see that a couple of hundred rows from some tables are being deleted, but after a few seconds the query gets stuck.
You can quite clearly see in the plan that some of the dependent tables do not have indexes on the foreign key.
When a cascade happens, the plan starts by dumping all rows into an in-memory table. You can see this with a Table Spool at the top left of the plan, feeding off the Clustered Delete.
Then it reads these rows back, and joins them to the dependent tables. These tables must have an index with the foreign key being the leading key, otherwise you will get a full table scan.
This has happened in your case with a large number of tables, in some cases double-cascaded with two scans and a hash join.
When you create indexes, don't just make a one-column index. Create sensible indexes with other columns and INCLUDE columns, just make sure the foreign key is the leading column.
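For each dependent table, that means something along these lines (the table and column names here are invented for illustration):
-- Index leading on the referencing column, with a couple of INCLUDE columns
-- so the index stays useful for other queries too (names are hypothetical).
CREATE NONCLUSTERED INDEX IX_Orders_ClientId
    ON dbo.Orders (ClientId)
    INCLUDE (OrderDate, Status);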
I must say, with this many foreign keys you are always going to have some issues, and you may want to turn off CASCADE because of that.
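If you go that way, dropping the cascade just means re-creating the foreign key without it, for example (again with invented names for the dependent table):
ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Client;

ALTER TABLE dbo.Orders
    ADD CONSTRAINT FK_Orders_Client
    FOREIGN KEY (ClientId) REFERENCES dbo.Client (Id);
-- With no ON DELETE CASCADE, child rows must now be deleted explicitly
-- before deleting the parent row.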
Currently I'm working in an environment where nothing, absolutely nothing, in our database is relational besides what we define as relational through stored procedures and other functions.
I have no idea currently what would happen to our system if we decided to actually connect everything through proper constraining, but what I'm asking is this:
Would there be any performance increases, in implementing proper constraining?
Primary keys and foreign keys are not performance boosters; they are not designed for performance, and the reason to use them has nothing to do with performance.
They exist for a much more important reason - to make sure your data is valid.
This is called data integrity, and it's one of the main reasons relational databases are used in the first place.
Primary keys are implemented as a unique index on a non-nullable column (or a group of non-nullable columns), which means they might help with query performance (as with all indexes, they can boost select performance and somewhat degrade insert/update/delete performance), but that is not what they are for; it's merely a side effect of how the database implements them.
So, to answer your question: you might see some performance gain in select statements when adding primary keys, but, much more importantly, you gain data integrity by adding keys (both primary and foreign) to your database.
Would there be any performance increases, in implementing proper constraining?
Yes, there likely will be.
Imagine you want to insert a "child" row which cannot exist without the corresponding "parent" row. How would you check for it in a stored procedure?
Well, naively, you would:
IF EXISTS (SELECT * FROM PARENT WHERE ...)
    INSERT INTO CHILD VALUES (...);
ELSE
    THROW ...;
But of course, a concurrent transaction might delete the parent row between IF and INSERT, so you'd need to lock the parent row:
...SELECT * FROM PARENT WITH(UPDLOCK)...
But now that lock is held to the end of the transaction, blocking anyone wishing to modify parent. Or SQL Server might decide to escalate to table lock, at which point your concurrency goes down the drain...
Letting the database enforce FKs will likely allow for more concurrency and better performance.
A SELECT query can also benefit from declarative foreign keys.
If you have an INNER JOIN, but select columns from only the child table, the query optimizer may skip accessing the parent table completely - it already knows that if the child row exists the parent row must exist too, so it doesn't need to check for parent existence explicitly.
Simply omitting the parent from the query is easy enough if you have just one query, but may not be very practical if you have layers of views and inline table-valued functions. In that case you'd like to reuse the existing code without having to modify it just to cull the "extra" processing that you don't need, so you can let the query optimizer cull it for you.
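As a sketch (hypothetical tables; the foreign key must be trusted, i.e. not created or re-enabled WITH NOCHECK, for this to kick in):
CREATE TABLE dbo.Parent (ParentId int NOT NULL PRIMARY KEY);

CREATE TABLE dbo.Child
(
    ChildId  int NOT NULL PRIMARY KEY,
    ParentId int NOT NULL
        CONSTRAINT FK_Child_Parent REFERENCES dbo.Parent (ParentId)
);

-- Only Child columns are selected, so the optimizer can remove the join
-- to dbo.Parent from the plan entirely: the trusted FK guarantees a match.
SELECT c.ChildId, c.ParentId
FROM dbo.Child AS c
INNER JOIN dbo.Parent AS p ON p.ParentId = c.ParentId;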
I have a SQL Server table with a nvarchar(50) column. This column must be unique and can't be the PK, since another column is the PK. So I have set a non-clustered unique index on this column.
While a large number of insert statements run inside a serializable transaction, I want to perform select queries based on this column only, in a different transaction. But these inserts seem to lock the table. If I change the datatype of the unique column to bigint, for example, no locking occurs.
Why isn't nvarchar working, whereas bigint does? How can I achieve the same, using nvarchar(50) as the datatype?
Mystery solved after all! A rather stupid situation, I guess.
The problem was in the select statement. The where clause was missing the quotes, but by a devilish coincidence the existing data were all numbers, so the select wasn't failing; it just wasn't executing until the inserts committed. When the first alphanumeric data were inserted, the select statement began failing with 'Error converting data type nvarchar to numeric'.
For example, instead of
SELECT [my_nvarchar_column]
FROM [dbo].[my_table]
WHERE [my_nvarchar_column] = '12345'
the select statement was
SELECT [my_nvarchar_column]
FROM [dbo].[my_table]
WHERE [my_nvarchar_column] = 12345
I guess a silent cast was performed, so the unique index was not being used, which resulted in the blocking.
Fixed the statement and everything works as expected now.
Thanks everyone for their help, and sorry for the rather stupid issue!
First, you could change the PK to be a non-clustered index and then create a clustered index on this field. Of course, that may be a bad idea based on your usage, or may simply not help.
You might have a use case for a covering index; see the previous question re: covering indexes.
You might be able to change your "other queries" to non-blocking by changing the isolation level of those queries.
It is relatively uncommon to genuinely need to insert a large number of rows in a single transaction. You may be able to simply not use a transaction, or to split the work into a smaller set of transactions, to avoid locking large sections of the table. E.g., you can insert the records into a pending table (one that is not otherwise used in normal activity) in a transaction, then migrate these records in smaller transactions to the main table, if real-time posting to the main table is not required.
ADDED
Perhaps the most obvious question: are you sure you have to use a serializable transaction to insert a large number of records? Serializable transactions are rarely necessary outside of financial processing, and they impose a high concurrency cost compared to the other isolation levels.
ADDED
Based on your comment about "all or none", you are describing atomicity, not serializability. I.e., you might be able to use a different isolation level for your large insert transaction, and still get atomicity.
Second, I notice you mention a large number of insert statements. It sounds like you should be able to push these inserts into a pending/staging table and then perform a single insert, or batches of inserts, from the staging table into the production table. Yes, it is more work, but you may simply have a problem that requires the extra effort.
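A very rough sketch of the staging idea (all object names are invented; it assumes dbo.my_table's primary key is an identity, and "payload" stands in for whatever other columns you have):
-- Staging table that normal read traffic never touches.
CREATE TABLE dbo.my_table_pending
(
    my_nvarchar_column nvarchar(50)  NOT NULL,
    payload            nvarchar(200) NULL
);

-- The bulk load writes only to dbo.my_table_pending, in whatever transaction
-- it likes.  A follow-up job then moves rows across in small batches, so each
-- transaction (and the locks it holds) stays short:
DECLARE @moved TABLE
(
    my_nvarchar_column nvarchar(50)  NOT NULL,
    payload            nvarchar(200) NULL
);

WHILE 1 = 1
BEGIN
    DELETE FROM @moved;

    DELETE TOP (1000) FROM dbo.my_table_pending
    OUTPUT deleted.my_nvarchar_column, deleted.payload INTO @moved;

    IF @@ROWCOUNT = 0 BREAK;

    INSERT INTO dbo.my_table (my_nvarchar_column, payload)
    SELECT my_nvarchar_column, payload
    FROM @moved;
END;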
You may want to add the NOLOCK hint (a.k.a. READUNCOMMITTED) to your query. It will allow you to perform a "dirty read" of data that has already been inserted.
e.g.
SELECT [my_nvarchar_column]
FROM [dbo].[locked_table] WITH (NOLOCK)
Take a look at a better explanation here:
http://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
And the READUNCOMMITTED section here:
http://technet.microsoft.com/en-us/library/ms187373.aspx
I have a string vector data containing items that I want to insert into a table named foos. It's possible that some of the elements in data already exist in the table, so I must watch out for those.
The solution I'm using starts by transforming the data vector into virtual table old_and_new; it then builds virtual table old, which contains the elements already present in foos; then it constructs virtual table new with the elements that are really new. Finally, it inserts the new elements into table foos.
WITH old_and_new AS (SELECT unnest ($data :: text[]) AS foo),
old AS (SELECT foo FROM foos INNER JOIN old_and_new USING (foo)),
new AS (SELECT * FROM old_and_new EXCEPT SELECT * FROM old)
INSERT INTO foos (foo) SELECT foo FROM new
This works fine in a non-concurrent setting, but fails if concurrent threads try to insert the same new element at the same time. I know I can solve this by setting the isolation level to serializable, but that's very heavy-handed.
Is there some other way I can solve this problem? If only there was a way to tell PostgreSQL that it was safe to ignore INSERT errors...
Is there some other way I can solve this problem?
There are plenty, but none are a panacea...
You can't lock for inserts like you can do a select for update, since the rows don't exist yet.
You can lock the entire table, but that's even heavier-handed than serializing your transactions.
You can use advisory locks, but be super wary about deadlocks. Sort new keys so as to obtain the locks in a consistent, predictable order. (Someone more knowledgeable with PG's source code will hopefully chime in, but I'm guessing that the predicate locks used in the serializable isolation level amount to doing precisely that.)
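A minimal sketch of that route (the literal array stands in for $data; hashtext() maps each text key to the bigint that pg_advisory_xact_lock() expects, and that function needs PostgreSQL 9.1 or later):
BEGIN;

-- Take transaction-scoped advisory locks on the incoming keys in sorted
-- order, so two sessions inserting overlapping sets cannot deadlock.
-- hashtext() collisions only cause the occasional unnecessary wait.
SELECT pg_advisory_xact_lock(hashtext(foo))
FROM (
    SELECT DISTINCT unnest('{bar,baz,foo}'::text[]) AS foo
    ORDER  BY 1
) AS k;

-- Then run the usual "insert what's missing" statement.
INSERT INTO foos (foo)
SELECT k.foo
FROM  (SELECT DISTINCT unnest('{bar,baz,foo}'::text[]) AS foo) AS k
LEFT  JOIN foos o USING (foo)
WHERE o.foo IS NULL;

COMMIT;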
In pure SQL, you could also use a DO statement to loop through the rows one by one and trap the errors as they occur:
http://www.postgresql.org/docs/9.2/static/sql-do.html
http://www.postgresql.org/docs/9.2/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
Similarly, you could create a convoluted upsert function and call it once per piece of data...
If you're building $data at the app level, you could run the inserts one by one and ignore errors.
And I'm sure I forgot some additional options...
Whatever your course of action is (@Denis gave you quite a few options), this rewritten INSERT command will be much faster:
INSERT INTO foos (foo)
SELECT n.foo
FROM unnest ($data::text[]) AS n(foo)
LEFT JOIN foos o USING (foo)
WHERE o.foo IS NULL
It also leaves a much smaller time frame for a possible race condition.
In fact, the time frame should be so small, that unique violations should only be popping up under heavy concurrent load or with huge arrays.
Dupes in the array?
Except if your problem is built in: do you have duplicates in the input array itself? In that case, transaction isolation is not going to help you. The enemy is within!
Consider this example / solution:
INSERT INTO foos (foo)
SELECT n.foo
FROM (SELECT DISTINCT foo FROM unnest('{foo,bar,foo,baz}'::text[]) AS foo) n
LEFT JOIN foos o USING (foo)
WHERE o.foo IS NULL
I use DISTINCT in the subquery to eliminate the "sleeper agents", a.k.a. duplicates.
People tend to forget that the dupes may come within the import data.
Full automation
This function is one way to deal with concurrency for good. If a UNIQUE_VIOLATION occurs, the INSERT is just retried. The newly present rows are excluded from the new attempt automatically.
It does not take care of the opposite problem, that a row might have been deleted concurrently - this would not get re-inserted. One might argue, that this outcome is ok, since such a DELETE happened concurrently. If you want to prevent this, make use of SELECT ... FOR SHARE to protect rows from concurrent DELETE.
CREATE OR REPLACE FUNCTION f_insert_array(_data text[], OUT ins_ct int) AS
$func$
BEGIN
   LOOP
      BEGIN
         INSERT INTO foos (foo)
         SELECT n.foo
         FROM  (SELECT DISTINCT foo FROM unnest(_data) AS foo) n
         LEFT  JOIN foos o USING (foo)
         WHERE o.foo IS NULL;

         GET DIAGNOSTICS ins_ct = ROW_COUNT;
         RETURN;

      EXCEPTION WHEN UNIQUE_VIOLATION THEN     -- foos.foo has a UNIQUE constraint
         RAISE NOTICE 'It actually happened!'; -- hardly ever happens
      END;
   END LOOP;
END
$func$
LANGUAGE plpgsql;
I made the function return the count of inserted rows, which is completely optional.
-> SQLfiddle demo
I like both Erwin's and Denis' answers, but another approach might be this: have concurrent sessions perform the unnesting and load into a separate temporary table, optionally eliminating what duplicates they can against the target table; then have a single session select from this temporary table, resolve the temp table's internal duplicates in an appropriate manner, insert into the target table while checking again for existing values, and delete the selected temporary-table records, all in the same query using a common table expression.
This would be more batch oriented, in the style of a data warehouse extraction-load-transform paradigm, but would guarantee that no unique constraint issues would need to be dealt with.
Other advantages/disadvantages apply, such as decoupling the final insert from the data gathering (possible advantage), and needing to vacuum the temp table frequently (possible disadvantage), which may not be relevant to Jon's case, but might be useful info to others in the same situation.
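Sketched very loosely (names invented; note that a staging table shared between sessions has to be a regular or UNLOGGED table rather than a TEMP one, since PostgreSQL temp tables are private to their session, and the data-modifying CTE needs PostgreSQL 9.1+):
CREATE TABLE foo_staging (foo text NOT NULL);

-- Loader sessions: each unnests its own array into the staging table,
-- optionally pre-filtering against the target.
INSERT INTO foo_staging (foo)
SELECT DISTINCT s.foo
FROM  unnest('{foo,bar,baz}'::text[]) AS s(foo)
LEFT  JOIN foos o USING (foo)
WHERE o.foo IS NULL;

-- Single consolidating session: drain the staging table, de-duplicate,
-- re-check against the target and insert, all in one statement.
WITH consumed AS (
    DELETE FROM foo_staging
    RETURNING foo
)
INSERT INTO foos (foo)
SELECT DISTINCT c.foo
FROM  consumed AS c
LEFT  JOIN foos o USING (foo)
WHERE o.foo IS NULL;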
I've got the user entering a list of values that I need to query for in a table. The list could be potentially very large, and the length isn't known at compile time. Rather than using WHERE ... IN (...) I was thinking it would be more efficient to use a temporary table and execute a join against it. I read this suggestion in another SO question (can't find it at the moment, but will edit when I do).
The gist is something like:
CREATE TEMP TABLE my_temp_table (name varchar(160) NOT NULL PRIMARY KEY);
INSERT INTO my_temp_table VALUES ('hello');
INSERT INTO my_temp_table VALUES ('world');
-- ... etc
SELECT f.* FROM foo f INNER JOIN my_temp_table t ON f.name = t.name;
DROP TABLE my_temp_table;
If I have two of these going at the same time, would I not get an error if Thread 2 tries to create the TEMP table after Thread 1?
Should I randomly generate a name for the TEMP table instead?
Or, if I wrap the whole thing in a transaction, will the naming conflict go away?
This is Postgresql 8.2.
Thanks!
There is no need to worry about the conflict.
The pg_temp schema is session specific. If you've a concurrent statement in a separate session, it'll use a different schema (even if you see it as having the same name).
Two notes, however:
Every time you create temporary objects, rows describing the temporary schema and the objects themselves are written to the system catalogs. This can lead to catalog clutter if done frequently.
Thus, for small sets/frequent uses, it's usually better to stick to an IN list or a WITH query (both of which Postgres copes with quite well). It's also occasionally useful to "trick" the planner into using whichever plan you're after by using an immutable set-returning function.
In the event you do decide to use temporary tables, it's usually better to index and analyze them once you've filled them up; otherwise you're doing little more than writing a WITH query.
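Applied to the gist from the question, that amounts to little more than an ANALYZE after the fill (the PRIMARY KEY already gives you the index):
CREATE TEMP TABLE my_temp_table (name varchar(160) NOT NULL PRIMARY KEY);
INSERT INTO my_temp_table VALUES ('hello');
INSERT INTO my_temp_table VALUES ('world');

ANALYZE my_temp_table;  -- give the planner real row counts and statistics

SELECT f.* FROM foo f INNER JOIN my_temp_table t ON f.name = t.name;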
Consider using a WITH query instead: http://www.postgresql.org/docs/9.0/interactive/queries-with.html
It also creates a temporary table, which is destroyed when the query / transaction finishes, so I believe there should be no concurrency conflicts.
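For the example in the question, that could look like the sketch below; note that WITH queries need PostgreSQL 8.4 or later, so this assumes a newer server than the 8.2 mentioned above:
WITH my_values (name) AS (
    VALUES ('hello'), ('world')
)
SELECT f.*
FROM foo f
INNER JOIN my_values t ON f.name = t.name;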