Check if a relation does not exist and update another entity - SQL

I have a Product table, a Location table and a ProductLocationRel table, which is a relation table mapping locationId to productId.
I need to update a Location entity (mark it deactivated) if no relation with the given location exists.
I thought about having a single SQL query for that, but I'd like to keep such a business rule at the code level rather than delegating it to the database.
Therefore, the idea is to programmatically check whether any relation exists, within a single transaction at the SERIALIZABLE isolation level, using find-relation, check-condition and update steps, like so:
(pseudocode)
t = transaction.start()
exist = t.find(relation with locationId)
if (exist) throw Error("can't do this");
location.isActive = false;
t.update(location);
t.commit();
But I'm not sure how the transaction would behave in this case.
The questions I have are:
If new relation records appear in the DB during the transaction, would the transaction fail? I think yes, but I'm not sure.
Would that approach block the whole relation table for this operation? That might become a bottleneck.
If it were just a simple location delete, I wouldn't need to care, since DB-level referential integrity would catch it at the delete step, but that is not the case here.
I don't think it's relevant, as this is purely about transaction execution and SQL, but the database is Postgres and the runtime is Node.js.
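For reference, here is roughly what that flow would look like at the SQL level (just a sketch; the exact table and column spellings are assumptions based on the entity names above):
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- step 1: look for any relation rows for the given location
SELECT 1 FROM "ProductLocationRel" WHERE "locationId" = $1;
-- step 2: the application checks the row count; if it is zero, proceed
UPDATE "Location" SET "isActive" = false WHERE "id" = $1;
COMMIT;  -- under SERIALIZABLE, Postgres may abort the transaction with
         -- SQLSTATE 40001 (serialization_failure) if concurrent activity
         -- conflicts, in which case the whole transaction must be retried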

Related

Oracle Audit Trail to get the list of columns which got updated in the last transaction

Consider a table (Student) under a schema, say Candidates (NOT DBA):
Student { RollNumber : VARCHAR2(10), Name : VARCHAR2(100), Class : VARCHAR2(5), ......... }
Let us assume that the table already contains some valid data.
I executed an update query to modify the name and class of the Student table
UPDATE STUDENT SET Name = 'ASHWIN' , CLASS = 'XYZ'
WHERE ROLLNUMBER = 'AQ1212'
This was followed by another update query updating some other fields:
UPDATE STUDENT SET Math_marks = 100, Phy_marks = 100, CLASS = 'XYZ'
WHERE ROLLNUMBER = 'AQ1212'
Since I modified different columns in two different queries, I need to fetch the particular list of columns that were updated in the last transaction. I am pretty sure that Oracle must be maintaining this in some table logs which could be accessed by a DBA, but I don't have DBA access.
All I need is the list of columns that were updated in the last transaction under the schema Candidates. I DO NOT have DBA rights.
Please suggest some ways.
NOTE: Above I mentioned a simple table, but in reality I have 8-10 tables for which I need to do this auditing, where a key column, let's say ROLLNUMBER, acts as a foreign key for all the other tables. Writing triggers for all the tables would be complex, so please help me out if there is some other way to fetch the same.
"I am pretty sure that oracle must be maintaining this in some table logs which could be accessed by DBA."
Actually, no, not by default. An audit trail is a pretty expensive thing to maintain, so Oracle does nothing out of the box. It leaves us to decide what we want to audit (actions, objects, granularity) and then to switch on auditing for those things.
Oracle requires DBA access to enable the built-in functionality, so that may rule it out for you anyway.
Auditing is a very broad topic, with lots of things to consider and configure. The Oracle documentation devotes a big chunk of the Security manual to auditing. Find the Introduction to Auditing here. For monitoring updates to specific columns, what you're talking about is Fine-Grained Auditing. Find out more.
"I have got 8-10 tables ... Writing triggers would be a complex for all tables."
Not necessarily. The triggers will all resemble each other, so you could build a code generator using the data dictionary view USER_TAB_COLUMNS to customise some generic boilerplate text.
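For instance, a minimal sketch of that generator idea (the audit_log table and its columns are placeholders I've made up, not anything Oracle provides):
SELECT 'IF UPDATING(''' || column_name || ''') THEN '
       || 'INSERT INTO audit_log (table_name, column_name, changed_at) '
       || 'VALUES (''' || table_name || ''', ''' || column_name || ''', SYSDATE); '
       || 'END IF;' AS trigger_fragment
FROM   user_tab_columns
WHERE  table_name = 'STUDENT'   -- repeat or extend for the other 8-10 tables
ORDER  BY column_id;
Each generated fragment then gets pasted into the body of a generic row-level UPDATE trigger for that table.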

How to effectively refresh many to many relationship

Let's say I have entity A, which has a many-to-many relationship with other entities of type A. So on entity A, I have a collection of A. And let's say I have to "update" these relationships according to some external service - from time to time I receive a notification that the relations for a certain entity have changed, along with an array of IDs of the currently related entities - some relations may be new, some existing, and some of the existing ones may no longer be there. How can I effectively update my database with EF?
Some ideas:
eager-load the entity with its related entities, loop over the collection of IDs from the external service, and remove/add as needed. But this is not very efficient - it may mean loading hundreds of related entities
clear the current relations and insert new ones. But how? Maybe perform the delete with a stored procedure, and then insert "fake" stub objects:
a.Related.Add(new A { Id = idFromArray })
but can this be done in a transaction? (a call to a stored procedure and then inserts done by SaveChanges)
or is there a third way?
Thanx.
Well, "from time to time" does not sound like a situation to think much about performance improvement (unless you mean "from millisecond to millisecond") :)
Anyway, the first approach is the correct idea to do this update without a stored procedure. And yes, you must load all old related entities because updating a many-to-many relationship goes only though EFs change detection. There is no exposed foreign key you could leverage to update the relations without having loaded the navigation properties.
An example how this might look in detail is here (fresh question from yesterday):
Selecting & Updating Many-To-Many in Entity Framework 4
(Only the last code snippet before the "Edit" section, plus the Edit section itself, is relevant to your question.)
For your second solution you can wrap the whole operation into a manually created transaction:
using (var scope = new TransactionScope())
{
    using (var context = new MyContext())
    {
        // ... Call stored procedure to delete relationships in the link table
        // ... Insert fake objects for the new relationships
        context.SaveChanges();
    }
    scope.Complete();
}
OK, solution found. Of course, the pure EF solution is the first one proposed in the original question.
But if performance matters, there IS a third way, the best one, although it is SQL Server specific (AFAIK): a single stored procedure with a table-valued parameter. All the new related IDs go in, and the stored procedure performs the delete and the inserts in one transaction (see the sketch after the link).
Look at the examples and performance comparison here (great article, I based my solution on it):
http://www.sommarskog.se/arrays-in-sql-2008.html
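A bare-bones sketch of that idea (requires SQL Server 2008 or later; the type, table and column names are made up for illustration - the article covers the real details):
CREATE TYPE dbo.IdList AS TABLE (RelatedId INT PRIMARY KEY);
GO
CREATE PROCEDURE dbo.RefreshRelations
    @EntityId INT,
    @NewRelatedIds dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        -- replace the old link rows with the new set in one round trip
        DELETE FROM dbo.EntityRelation WHERE EntityId = @EntityId;
        INSERT INTO dbo.EntityRelation (EntityId, RelatedId)
        SELECT @EntityId, RelatedId FROM @NewRelatedIds;
    COMMIT TRANSACTION;
END;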

Can I use nHibernate with a legacy-database with no referential-integrity?

If I have a legacy database with no referential integrity or keys, and it uses stored procedures for all external access, is there any point in using NHibernate to persist entities (object graphs)?
Plus, the SPs contain not only CRUD operations but business logic as well...
I'm starting to think sticking with a custom ADO.NET DAL would be easier :(
Cheers
Ollie
You most likely CAN. But you probably shouldn't :-)
Hibernate does not care about referential integrity per se; while it obviously needs some sort of link between associated tables, it does not matter whether an actual FK constraint exists. For example, if Product is mapped as many-to-one to Vendor, the PRODUCTS table should have some sort of VENDOR_ID in it, but it doesn't have to be an FK.
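To make that concrete, a schema sketch like this (names from the Product/Vendor example above, types assumed) is still mappable even though no constraints are declared:
-- no PK or FK constraints; NHibernate only needs the link column to exist
CREATE TABLE VENDORS (
    VENDOR_ID INT NOT NULL,
    NAME      VARCHAR(100)
);
CREATE TABLE PRODUCTS (
    PRODUCT_ID INT NOT NULL,
    VENDOR_ID  INT NULL,      -- link to VENDORS, but no FK constraint declared
    NAME       VARCHAR(100)
);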
Depending on your SP signatures, you may or may not be able to use them as custom CRUD in your mappings; if the SPs indeed have business logic in them that is applied during all CRUD operations, that may be your first potential problem.
Finally, if your SPs are indeed used for ALL CRUD operations (including all possible queries), it's probably just not worth it to try to introduce Hibernate into the mix - you'll gain pretty much nothing and you'll have yet another layer to deal with.
OK, an example of the problem is this:
An SP uses an SQL statement similar to the following to select the next Id to be inserted into the 'Id' column of a table (this column is just an int column, NOT an identity column):
select @cus_id = max(id) + 1 from customers
So once the next id is calculated, it's inserted into table A along with other data; then a row is inserted into table B, which has a reference to table A (no foreign key constraint) on another column from table A; finally a row is inserted into table C using the same reference to table A.
When I mapped this in NH using Fluent NHibernate, the mapping generated a correct 'insert' SQL statement for the first table, but when the second table was mapped as a 'Reference', an 'update' SQL statement was generated; I was expecting to see an 'insert' statement...
Now, the fact that there are no identity columns, no keys and no referential integrity means to me that I can't guarantee relationships are one-to-one, one-to-many, etc...
If this is true, how can NH (Fluent) be configured either way...
Cheers
Ollie

What is the best way to handle this constraint in SQL Server 2005?

I have an SMS-based survey application which takes in a survey domain and an answer.
I've gotten requests for detailed DDL, so... the database looks like this.
SurveyAnswer.Answer must be unique within all active Surveys for that SurveyDomain. In SQL terms, this should always return 0..1 rows:
select * from survey s, surveyanswer sa
where s.surveyid = sa.surveyid and
      s.active = 1 and
      s.surveydomainid = #surveydomainid and
      sa.answer = #answer
I plan on handling this constraint at the application level, but would also like some integrity to be enforced at the database level. What is the best way to do this? A trigger? Is it possible in a constraint?
As you are covering 2 tables, there are AFAIK only 2 ways to enforce this:
A trigger, as you suggested.
An indexed view with a unique constraint across the 3 columns.
As far as reliability is concerned I would go for the indexed view, but the downside is that it will be harder for third parties to understand.
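For example, something along these lines (table and column names are inferred from the query in the question; the SET options that indexed views require are omitted):
CREATE VIEW dbo.ActiveSurveyAnswers
WITH SCHEMABINDING
AS
SELECT s.SurveyDomainId, sa.Answer
FROM   dbo.Survey AS s
       JOIN dbo.SurveyAnswer AS sa ON sa.SurveyId = s.SurveyId
WHERE  s.Active = 1;
GO
-- the unique clustered index is what actually enforces the rule:
-- at most one Answer per SurveyDomain across active surveys
CREATE UNIQUE CLUSTERED INDEX UX_ActiveSurveyAnswers
    ON dbo.ActiveSurveyAnswers (SurveyDomainId, Answer);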
It is possible to add a constraint that is implemented in a UDF like this:
alter table MyTable add constraint complexConstraint
check (dbo.complexConstraintFct()=0)
Where complexConstraintFct would be a function containing a query over other tables. However, this approach has some issues, as check constraints were designed to be evaluated one row at a time, but updates can affect more than one row at a time.
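To illustrate, for this case the function could look roughly like this (names inferred from the question; the caveat above about multi-row updates still applies):
CREATE FUNCTION dbo.complexConstraintFct()
RETURNS INT
AS
BEGIN
    -- number of (SurveyDomainId, Answer) pairs that occur more than once
    -- among active surveys; the check passes only when this is 0
    RETURN (
        SELECT COUNT(*)
        FROM (
            SELECT s.SurveyDomainId, sa.Answer
            FROM   dbo.Survey s
                   JOIN dbo.SurveyAnswer sa ON sa.SurveyId = s.SurveyId
            WHERE  s.Active = 1
            GROUP BY s.SurveyDomainId, sa.Answer
            HAVING COUNT(*) > 1
        ) AS dups
    );
END;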
So, the bottom line is: stick with triggers.
Assuming you are using stored procedures to perform DML operations, you could add a guard clause to the SP that adds answers to surveys to check for the existence of an equivalent answer. You could then either throw an exception or return a status code to indicate that the answer could not be added.
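A sketch of such a guard clause (the procedure name, parameters and status codes are assumptions):
CREATE PROCEDURE dbo.AddSurveyAnswer
    @SurveyId INT,
    @Answer   VARCHAR(160)
AS
BEGIN
    -- guard: reject the answer if an equivalent one already exists
    -- for an active survey in the same domain
    IF EXISTS (
        SELECT 1
        FROM   dbo.Survey s
               JOIN dbo.SurveyAnswer sa ON sa.SurveyId = s.SurveyId
        WHERE  s.Active = 1
          AND  s.SurveyDomainId = (SELECT SurveyDomainId
                                   FROM dbo.Survey
                                   WHERE SurveyId = @SurveyId)
          AND  sa.Answer = @Answer
    )
    BEGIN
        RETURN 1;   -- status code: duplicate answer, nothing added
    END

    INSERT INTO dbo.SurveyAnswer (SurveyId, Answer)
    VALUES (@SurveyId, @Answer);

    RETURN 0;       -- success
END;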
You can't do it at the row level (e.g. a CHECK constraint), so you have to have something that can see all rows:
A trigger can send "nice" messages, but it runs after the DML statement. You have fine control over the processing.
An indexed view prevents the DML statement, but it gives a technical error message. It's an extra object, and indexes, to maintain.
I think what you're saying is that for any active question, the tuple (surveyDomain, surveyQuestion, surveyAnswer) must be unique?
Or in other words, survey:surveyanswer is 1:1 if the survey is active, even though survey:surveyanswer is set up to be 1:many.
If so, the answer is to change your table structure. Adding a nullable activeAnswerId column to Survey will effectively make the relation 1:1; your existing constraint unique SurveyId (or unique SurveyId, SurveyDomainId) will suffice to enforce uniqueness.
Indeed, unless I'm misunderstanding, I'm surprised that Survey has a Question column; I'd expect Survey:Question to be 1:many (a survey has many questions) or even many:many, if a question can show up on more than one survey.
More generally, I suspect that the difficulty of figuring out how to enforce the constraint, and the need for "heroics" like triggers or user-defined functions, is a symptom of a schema that doesn't accurately model your problem domain.
OP comments:
no, you're missing it. Survey:Answer is 1:n. "Question" is the survey question – Tuple would be (SurveyDomain.SurveyDomainId, Survey.Answer)
You mean that for every domain, there's at most one answer? Again, looking at your schema, it's misleading at best. A SurveyDomain has many Surveys (each of which has a Question column) and a Survey has many Answers? (Schema)
But if the Survey's active bit is set, there should be only one Answer?
Is Survey a misnomer for Question?
It's really not clear what you're trying to model.
Again, if it's hard to add a constraint, that suggests that your model doesn't work.

NHibernate transaction and race condition

I've got an ASP.NET app using NHibernate to transactionally update a few tables upon a user action. There is a date range involved, whereby only one entry can be made in a 'Booking' table for a given range, so the dates specified are exclusive.
My problem is how to prevent a race condition whereby two user actions occur almost simultaneously and cause multiple 'Booking' entries for overlapping dates. I can't just check prior to calling .Commit(), because I think that will still leave me with a race condition?
All I can see is to do a check AFTER the commit and roll the change back manually, but that leaves me with a very bad taste in my mouth! :)
booking_ref (INT) PRIMARY_KEY AUTOINCREMENT
booking_start (DATETIME)
booking_end (DATETIME)
Make the isolation level of your transaction SERIALIZABLE (session.BeginTransaction(IsolationLevel.Serializable)) and do the check and the insert in the same transaction. You should not set the isolation level to Serializable in general, just in situations like this.
or
Lock the table before you check and (possibly) insert. You can do this by firing a SQL query through NHibernate:
session.CreateSQLQuery("SELECT null as dummy FROM Booking WITH (tablockx, holdlock)").AddScalar("dummy", NHibernateUtil.Int32);
This will lock only that table for selects / inserts for the duration of that transaction.
Hope it helped
The above solutions can be used as an option. When I sent 100 requests using "Parallel.For" with the transaction level set to Serializable, there were indeed no duplicated request ids, but 25 transactions failed. That was not acceptable for my client. So we fixed the problem by only storing the request id and adding a unique index on another table as a temporary measure.
Your database should manage your data integrity.
You could make your 'date' column unique. Then, if two threads try to book the same date, one will hit a unique key violation and the other will succeed.
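For instance, using the booking_start column from the DDL above (whether a single-column unique constraint is enough depends on how the date ranges are actually used):
-- a second booking for the same start date now fails with a unique key
-- violation instead of racing the first one
ALTER TABLE Booking
    ADD CONSTRAINT UQ_Booking_Start UNIQUE (booking_start);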