I have come across an interesting problem in Entity Framework 6 and SQL Server.
We have a table with a composite key. Here is an example:
ID Col1 Col2
-- ---- ----
1  1    1
2  1    2
3  2    1
4  2    2
5  2    3
So, Col2 is unique for each Col1. I have a requirement to swap two values to produce this desired result:
ID Col1 Col2
-- ---- ----
1  1    2
2  1    1
Using Entity Framework, I load the objects from the database, make my changes, and call SaveChanges.
I receive the exception: "Violation of UNIQUE KEY constraint 'UQ_TableA_Constraint1'. Cannot insert duplicate key in object 'dbo.TableA'."
SaveChanges is supposedly called in a transaction. The EF source seems to indicate it is, and the fact that a failed update is atomic suggests this is working. However, it also appears that updates are performed row by row, even inside the transaction, so EF first updates record 1, which temporarily produces a duplicate unique key.
Is there a way to mitigate this? I would rather not update to a temporary value, call SaveChanges, and then update to the correct value, as that too could fail and leave the data in an incorrect state.
Are there any options?
I hope I understand your question.
Constraints must be satisfied inside a transaction too; this is SQL Server (and other DBMS) behaviour, not EF behaviour.
You can use a temporary value inside a transaction to be sure that everything went well.
If you need to run update queries on multiple records, you can use an external library, https://github.com/loresoft/EntityFramework.Extended (but if I understand your question correctly, that won't solve constraint issues).
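That said, SQL Server validates a unique constraint at the end of each statement rather than after each row, so one possible workaround is to perform the swap as a single set-based UPDATE instead of letting EF issue two row-by-row updates. A minimal sketch, assuming the table and key values from the question:

-- Swap Col2 between ID 1 and ID 2 in one statement; the unique
-- constraint is checked once the whole statement has been applied.
UPDATE dbo.TableA
SET Col2 = CASE ID WHEN 1 THEN 2 WHEN 2 THEN 1 END
WHERE ID IN (1, 2);

From EF 6 this could be issued with context.Database.ExecuteSqlCommand, at the cost of bypassing change tracking for those two rows.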
I'm trying to create a new table in my DB. The table has 2 important columns:
id_brands (This is an FK from the table brands)
id_veiculo
What I would like to have is something like this:
id_brands | id_veiculo
----------+-----------
1         | 1
1         | 2
2         | 1
2         | 2
3         | 1
1         | 3
3         | 2
I created the table, but I am trying without success to find a way to enforce this condition with a trigger. I don't know if it's possible, or if a trigger is even the best way to do that.
What you are probably trying to do, judging by the pattern of the example table, is set up an auxiliary N-to-N relationship table.
In this case, by having another table, for id_veiculo and its properties, you will be able to have both ids as FKs. As for the primary key in this auxiliary table, it would be both id_brands and id_veiculo:
PRIMARY KEY (id_veiculo, id_brands);
Here's another Stack Overflow question about N:M/N:N relationships.
Also, it isn't very clear what you're trying to do with the table, but if it's the population/seeding of data, then yes, a trigger is a viable solution.
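A minimal sketch of such an auxiliary table, assuming parent tables named brands and veiculos with int keys (the names are illustrative):

CREATE TABLE brands_veiculos (
    id_brands  INT NOT NULL,
    id_veiculo INT NOT NULL,
    -- the composite PK rejects duplicate (brand, veiculo) pairs
    PRIMARY KEY (id_veiculo, id_brands),
    FOREIGN KEY (id_brands)  REFERENCES brands (id_brands),
    FOREIGN KEY (id_veiculo) REFERENCES veiculos (id_veiculo)
);

With this in place the database itself enforces the pairing, so no trigger is needed for that part.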
Is there a way to tell NHibernate to remove the duplicate value from a row's uniquely constrained column when updating another row with that same value?
For example (OtherId and Animal have a composite unique constraint):
Id | OtherId | Animal
---+---------+-------
1  | 1       | Dog
2  | 1       | Cat
3  | 1       | Bear
4  | 2       | Dog
Updating Id 3 to Dog should result in this:
Id | OtherId | Animal
---+---------+-------
1  | 1       | NULL
2  | 1       | Cat
3  | 1       | Dog
4  | 2       | Dog
EDIT:
I was able to solve my problem by creating a unique filtered index on my table:
CREATE UNIQUE INDEX [Id_OtherId_Animal_Index]
ON [dbo].[Animals] (OtherId, Animal)
WHERE Animal IS NOT NULL;
This way, I prevent insertion of a duplicate (1, Dog) and still allow (2, Dog). It also allows multiple (1, NULL) rows to be inserted, for example:
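A quick illustration, assuming the Animals table from the example:

INSERT INTO Animals (OtherId, Animal) VALUES (1, 'Dog');  -- ok
INSERT INTO Animals (OtherId, Animal) VALUES (2, 'Dog');  -- ok, different OtherId
INSERT INTO Animals (OtherId, Animal) VALUES (1, NULL);   -- ok
INSERT INTO Animals (OtherId, Animal) VALUES (1, NULL);   -- ok, NULLs are excluded from the index
INSERT INTO Animals (OtherId, Animal) VALUES (1, 'Dog');  -- fails: duplicate (1, Dog)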
Next, based on Frédéric's suggestion below, I changed my service layer to check BEFORE insertion whether it would create a duplicate. If it would, the Animal column of the row that would violate the unique constraint is set to NULL.
This answer has been outdated by substantial changes to the OP's question.
I am quite sure there is no such feature in NHibernate, or any other ORM.
By the way, what should updating Id 3 to Cat yield after it has already been updated to Dog?
Id | Animal
1 |
2 |
3 | Cat
If that means that Ids 1 and 2 now have the empty string value, that will be a unique constraint violation too.
If they have the null value, it then depends on whether the db engine is ANSI-null compliant (null not considered equal to null). SQL Server is not, in any version I know of, at least for unique indexes. (I have not tested the unique constraint case.)
Anyway, this kind of logic, updating a row resulting in an additional update on some other rows, has to be handled explicitly.
You have many options for that:
Before each assignment to the Animal property, query the db to find whether another row has that name, and take appropriate action on that other row. Take care to flush right after handling this other row, to ensure it gets handled prior to the actual update of the first one.
Or inject an event or an interceptor into NHibernate to catch any update on any entity, and add your duplicate check there. Stack Overflow has examples of NHibernate events and interceptors, like this one.
But your case will probably be a bit tough, since flushing some other changes while already flushing a session will probably cause trouble. You may have to tinker directly with the SQL statement, with IInterceptor.OnPrepareStatement for example, to inject your other update ahead of the first one.
Or handle that with some trigger in DB.
Or detect a failed flush due to an unique constraint, analyze it and take appropriate action.
The third option is very likely easier and more robust than the others.
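For the trigger route, a rough sketch (assuming the Animals table from the question; an INSTEAD OF trigger can NULL the colliding row before applying the requested update, and SQL Server does not fire an INSTEAD OF trigger recursively for statements it issues against its own table):

CREATE TRIGGER trg_Animals_NullDuplicates
ON dbo.Animals
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- NULL out any other row that would collide with the incoming values.
    UPDATE a
    SET a.Animal = NULL
    FROM dbo.Animals a
    JOIN inserted i ON a.OtherId = i.OtherId AND a.Animal = i.Animal
    WHERE a.Id <> i.Id;
    -- Then apply the requested update (this sketch only carries these two columns).
    UPDATE a
    SET a.OtherId = i.OtherId, a.Animal = i.Animal
    FROM dbo.Animals a
    JOIN inserted i ON a.Id = i.Id;
END;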
I have a small table of 2 columns on MS SQL Server 2005 which contains a lot of information, let's say about 1 billion records, and it is constantly being written to.
Definition of the table is :
CREATE TABLE Test (
    id INT IDENTITY(1,1) PRIMARY KEY,
    name VARCHAR(30)
);
The PK is an int, which I chose over uniqueidentifier for a number of reasons. The problem comes with the auto-increment: I want to reorganize the 'id' every time a row is deleted, the objective being to leave no gaps. The table is active and a lot of rows are written to it, so dropping a column is not an option, and neither is locking the table for a long time.
Quick example of what I want to accomplish:
I have this :
id | name
----+-------
1 | Roy
2 | Boss
5 | Jane
7 | Janet
I want to reorganize it so it will look like this :
id | name
----+-------
1 | Roy
2 | Boss
3 | Jane
4 | Janet
I am aware of DBCC CHECKIDENT (TableName, RESEED, position), but I am not sure it will benefit my case: the table is big, repositioning will take a lot of time, and, if I am not mistaken, it will lock the table for a very long time. This table is not used by any other table, but if you like you can suggest a solution to the same problem assuming the table is used by other tables.
EDIT 1 :
The objective is to prove that the rows follow each other, so that if a row is deleted I can see that it was deleted and reinstate it. I was thinking of adding a third column containing a hash of the row above; if the row above is deleted, I would know that I have a gap and need to restore it. In that case the order would not matter, because I could compare the hash codes and see whether they match, and therefore which row follows which. But still I wonder: is there a more clever and safer way of doing this? Maybe something other than hash codes, some other way of proving that the rows follow each other, or that each new row contains part of the previous row?
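For what it is worth, a rough sketch of that hash-chain idea (the prev_hash column is an assumption; SHA-1 is used because that is what HASHBYTES offers on SQL Server 2005):

-- Each row stores a hash of the previous row, so a deleted and
-- reinstated row can be verified against its neighbours.
ALTER TABLE Test ADD prev_hash VARBINARY(20) NULL;

-- On insert, hash the current last row and store it with the new row
-- (assumes the table is non-empty; not safe under concurrent inserts
-- without additional locking).
INSERT INTO Test (name, prev_hash)
SELECT 'NewName',
       HASHBYTES('SHA1', CAST(t.id AS VARCHAR(10)) + '|' + t.name)
FROM Test AS t
WHERE t.id = (SELECT MAX(id) FROM Test);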
EDIT 2 :
I'll try to explain it one more time; if I can't, well, then I don't want to waste anyone's time.
In the perfect scenario nothing will be missing from this table, but due to server errors some data may be deleted, or some of my associates might be careless and delete it by mistake.
I have logs and can recover that data, but I want to prove that the records are sequenced, that they follow each other, even if there is a server error and some of them are deleted but later reinstated.
Is there a way to do this ?
Example:
Well, let's say that 7 is deleted and afterwards reinstated as 23. How would you prove that 23 is 7, meaning that 23 came after 6 and before 8?
I would suggest not worrying about trying to reseed your identity column; let SQL Server maintain its uniqueness for each row.
Generally this is wanted for presentation logic instead, in which case you could use the ROW_NUMBER() analytic function:
SELECT ROW_NUMBER() OVER (ORDER BY Id) AS NewId,
       Id, Name
FROM YourTable
I agree with others that this shouldn't typically be done, but if you absolutely want to do it, you can utilize the "quirky update" to get it done quickly. Note that an IDENTITY column cannot be updated directly, so this assumes the identity property has been removed (e.g. by rebuilding the table). It should be something like this:
DECLARE @prev_id INT
SET @prev_id = 0

UPDATE Test
-- chained "quirky update" assignment: the variable and the column
-- both receive the CASE result for each row, in clustered index order
SET @prev_id = id = CASE WHEN id - @prev_id = 1 THEN id
                         ELSE @prev_id + 1
                    END
FROM Test
You should read about the limitations of the quirky update, primarily the conditions that must be met to ensure consistent output. This is a good article, though they annoyingly make you sign in; you can also find other resources: http://www.sqlservercentral.com/articles/T-SQL/68467/
Edit: Actually, in this case I think you could just use:
DECLARE @prev_id INT
SET @prev_id = 0

UPDATE Test
SET @prev_id = id = @prev_id + 1
FROM Test
The way to do it is to not implement your proposed fix.
Leave the identity alone.
If identity 7 is deleted, you know it was just after 6 and just before 8.
If you need them to stay in the same order, it's simple:
Place a unique constraint on name.
Don't delete the record.
Just add a boolean (bit) column for active, as sketched below.
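A minimal sketch of that soft-delete approach (the column name active is illustrative):

ALTER TABLE Test ADD active BIT NOT NULL DEFAULT 1;

-- "Delete" by deactivating, so the id sequence keeps no gaps:
UPDATE Test SET active = 0 WHERE id = 7;

-- Normal reads only see active rows:
SELECT id, name FROM Test WHERE active = 1;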
I'm thinking of adding a relationship table to a database, and I'd like to include a sort of reverse-relation functionality by using an FK pointing to a PK within the same table. For example, say I have a table RELATIONSHIP with the following:
ID (PK)  Relation     ReverseID (FK)
1        Parent       2
2        Child        1
3        Grandparent  4
4        Grandchild   3
5        Sibling      5
First, is this even possible? Second, is this a good way to go about this? If not, what are your suggestions?
1) It is possible.
2) It may not be as desirable in your case as you might want. You have cycles, as opposed to an acyclic structure, so with the FK in place you cannot insert any of those rows as they are. One possibility is to allow NULLs in the ReverseID column in your table DDL, INSERT all the rows with a NULL ReverseID, and then run an UPDATE to set the ReverseID columns, which will now have valid rows to reference (see the sketch below). Another possibility is to disable the foreign key, or not create it until the data is in a completely valid state, and then apply it.
3) You would have to do an operation like this almost every time, and if EVERY relationship has an inverse you either wouldn't be able to enforce NOT NULL in the schema or you would regularly be disabling and re-enabling constraints.
4) The sibling situation is the same.
I would be fine using the design if this is controlled in some way and you understand the implications.
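A small sketch of the NULL-then-UPDATE approach from point 2, based on the table in the question (the DDL is illustrative):

CREATE TABLE RELATIONSHIP (
    ID        INT PRIMARY KEY,
    Relation  VARCHAR(30) NOT NULL,
    ReverseID INT NULL REFERENCES RELATIONSHIP (ID)  -- self-referencing FK
);

-- Insert the cycle members with NULL ReverseID first...
INSERT INTO RELATIONSHIP (ID, Relation) VALUES (1, 'Parent');
INSERT INTO RELATIONSHIP (ID, Relation) VALUES (2, 'Child');

-- ...then wire up the references once both rows exist.
UPDATE RELATIONSHIP SET ReverseID = 2 WHERE ID = 1;
UPDATE RELATIONSHIP SET ReverseID = 1 WHERE ID = 2;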
I am a bit lost trying to insert my data, in a specific scenario, from an Excel sheet into 4 tables using SSIS.
Each row of my Excel sheet needs to be split across 3 tables. The identity column values then need to be inserted into a 4th mapping table to hold the relationship. How do I achieve this efficiently using SSIS 2008?
Note that in the example below, it is fixed that both col4 and col5 go into the 3rd table.
Here is data example
Excel
col1 col2 col3 col4 col5
a b c d 3
a x c y 5
Table1
PK col
1 a
2 a
Table2
PK col1 col2
1 b c
2 x c
Table3
PK Col
1 d
2 3
3 y
4 5
Map_table
PK Table1_ID Table2_ID Table3_ID
1  1         1         1
2  1         1         2
3  2         2         3
4  2         2         4
I am fine even if just a SQL-based approach is suggested, as I do not have any mandate to use SSIS only. An additional challenge is that in Table2, if the same data row already exists, I want to use that ID in the map table instead of inserting duplicate rows!
Multicast is the component you are looking for. This component takes an input source and duplicates it into as many outputs as you need. In that scenario, you can have an Excel source and duplicate the flow to insert the data into your Table1, Table2 and Table3.
Now, the tricky part is getting those identities back into your Map_table. Either you don't use IDENTITY and use some other means (like a GUID, or an incremental counter of your own that you would set up as a derived column before the multicast), or you use @@IDENTITY to retrieve the last inserted identity. Using @@IDENTITY sounds like a pain to me for your current scenario, but that's up to you. If the data is not that huge, I would go for a GUID.
@@IDENTITY doesn't work well with BULK operations; it will retrieve only the last identity created. Also, keep in mind that I talked about @@IDENTITY, but you may want to use IDENT_CURRENT('TableName') instead to retrieve the last identity for a specific table. @@IDENTITY retrieves the last identity created within your session, whatever the scope; you can use SCOPE_IDENTITY() to retrieve the last identity within your scope.
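Since a plain SQL approach is acceptable, here is a rough row-at-a-time sketch using SCOPE_IDENTITY(), with table and column names taken from the example; the lookup on Table2 covers the de-duplication requirement:

-- Values for one Excel row (a, b, c, d, 3) are shown inline for brevity.
DECLARE @t1 INT, @t2 INT, @t3a INT, @t3b INT;

INSERT INTO Table1 (col) VALUES ('a');
SET @t1 = SCOPE_IDENTITY();

-- Reuse an existing Table2 row when the same data already exists.
SELECT @t2 = PK FROM Table2 WHERE col1 = 'b' AND col2 = 'c';
IF @t2 IS NULL
BEGIN
    INSERT INTO Table2 (col1, col2) VALUES ('b', 'c');
    SET @t2 = SCOPE_IDENTITY();
END;

INSERT INTO Table3 (Col) VALUES ('d');
SET @t3a = SCOPE_IDENTITY();
INSERT INTO Table3 (Col) VALUES ('3');
SET @t3b = SCOPE_IDENTITY();

-- One mapping row per Table3 value.
INSERT INTO Map_table (Table1_ID, Table2_ID, Table3_ID) VALUES (@t1, @t2, @t3a);
INSERT INTO Map_table (Table1_ID, Table2_ID, Table3_ID) VALUES (@t1, @t2, @t3b);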