Creating a Trigger so that a member is not allowed to rent more than 5 movies at a given time [duplicate] - sql

How do I write a trigger so that nobody is allowed to rent a movie if their unpaid balance exceeds 50 dollars?

What you have here is a cross-row table constraint - i.e. you can't just put a single Oracle CONSTRAINT on a column, as these can only look at data within a single row at a time.
Oracle has support for only two cross-row constraint types - uniqueness (e.g. primary keys and unique constraints) and referential integrity (foreign keys).
In your case, you'll have to hand-code the constraint yourself - and with that comes the responsibility to ensure that the constraint is not violated in the presence of multiple sessions, each of which cannot see data inserted/updated by other concurrent sessions (at least, until they commit).
A simplistic approach is to add a trigger that issues a query to count how many records conflict with the new record; but this won't work because the trigger cannot see rows that have been inserted/updated by other sessions but not committed yet; so the trigger will sometimes allow members to rent 6 videos, as long as (for example) they get two cashiers to enter the data in separate terminals.
One way to get around this problem is to put some element of serialization in - e.g. the trigger would first request a lock on the member record (e.g. with a SELECT FOR UPDATE) before it's allowed to check the rentals; that way, if a 2nd session tries to insert rentals, it will wait until the first session does a commit or rollback.
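A minimal sketch of that locking approach as an Oracle compound trigger (11g or later), assuming hypothetical tables members(member_id) and rentals(member_id, return_date) and a limit of 5 open rentals; the count runs in the AFTER STATEMENT section to avoid the mutating-table error:
CREATE OR REPLACE TRIGGER trg_limit_open_rentals
FOR INSERT ON rentals
COMPOUND TRIGGER
  TYPE t_member_tab IS TABLE OF rentals.member_id%TYPE INDEX BY PLS_INTEGER;
  g_members t_member_tab;

  BEFORE EACH ROW IS
    l_lock members.member_id%TYPE;
  BEGIN
    -- Serialize on the parent row: concurrent sessions inserting rentals
    -- for the same member queue up here until the first one commits.
    SELECT member_id INTO l_lock
      FROM members
     WHERE member_id = :NEW.member_id
       FOR UPDATE;
    g_members(g_members.COUNT + 1) := :NEW.member_id;
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
    l_cnt PLS_INTEGER;
  BEGIN
    -- The rentals table is no longer mutating here, so it can be queried.
    FOR i IN 1 .. g_members.COUNT LOOP
      SELECT COUNT(*) INTO l_cnt
        FROM rentals
       WHERE member_id = g_members(i)
         AND return_date IS NULL;
      IF l_cnt > 5 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Member already has 5 open rentals');
      END IF;
    END LOOP;
  END AFTER STATEMENT;
END trg_limit_open_rentals;
/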
Another way around this problem is to use an aggregating Materialized View, which would be based on a query that is designed to find any rows that fail the test; the expectation is that the MV will be empty, and you put a table constraint on the MV such that if a row was ever to appear in the MV, the constraint would be violated. The effect of this is that any statement that tries to insert rows that violate the constraint will cause a constraint violation when the MV is refreshed.
Writing the query for this based on your design is left as an exercise for the reader :)
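(That said, for readers who want a starting point, here is a rough sketch of the materialized view trick under the same assumed rentals(member_id, return_date) table and a limit of 5 open rentals; the exact fast-refresh requirements vary by Oracle version, so treat it as illustrative only.)
CREATE MATERIALIZED VIEW LOG ON rentals
  WITH SEQUENCE, ROWID (member_id, return_date) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_open_rentals
  REFRESH FAST ON COMMIT
AS
  SELECT member_id, COUNT(*) AS open_cnt
    FROM rentals
   WHERE return_date IS NULL
   GROUP BY member_id;

-- The MV refreshes at COMMIT, so a violating transaction fails on commit.
ALTER TABLE mv_open_rentals
  ADD CONSTRAINT chk_max_open_rentals CHECK (open_cnt <= 5);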

If you want to restrict something about your table data then you should have a look at Constraints and not Triggers.
Constraints ensure that certain conditions hold for your table data, like in your example.
Triggers are fired when some action (i.e. INSERT, UPDATE, DELETE) takes place, and they let you do some work as a reaction to that action.

Related

constraints different moments

I have a table of schedules
So my question is this: how can I make a constraint so that a vessel can be scheduled no more than once a day?
Thanks ahead.
Simply add a unique constraint/index on the vessel and date:
create unique index unq_tourschedule_vesselid_tourdate on tourschedule(vesselid, tourdate);
(A unique constraint is implemented using a unique index.)
You should do this in the database, so even manual changes to the data enforce this constraint.
It depends on what level you need to "prevent" the scheduling. Do you want to prevent it from the UI, the middle-tier, or at the database level?
UI - Do an AJAX check against DB or middle-tier check and prevent insertion of the record there (not a secure solution, but worth mentioning because it informs your users of an existing record).
Middle Tier - best place. Query your DB to see if a record exists with that given vesselID and TourDate. If any records are returned, do not allow insertion. You could then redirect to the page with a helpful message to the user. Business logic goes here typically, and it is best to decouple your business logic from your database.
Database level - most robust, but least maintainable and bad practice for business logic visibility. Many options, all of them cumbersome:
Stored procedure - upon insert, check the records, same procedure as middle tier, but you have to funnel your "error" message up through all the tiers.
Compound key using vesselID and TourDate ensures automatically that only unique entries can be inserted.
Constraint on the table data upon insertion - not just an index, which is for searching optimization, but an actual constraint. This constraint may be added to an existing table or be part of the table creation statement itself.
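For example, reusing the tourschedule(vesselid, tourdate) names from the index answer above, an explicit constraint on an existing table would look something like this (the syntax is the same in most databases):
ALTER TABLE tourschedule
  ADD CONSTRAINT unq_tourschedule_vesselid_tourdate UNIQUE (vesselid, tourdate);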
Yes, I created a unique index and everything worked out all right. Thank you for helping me out.

Postgres: access to a single column by two different programs

My question is probably very specific to Postgres, or probably not.
A program which I cannot modify has access to Postgres via npgsql and a simple SELECT command; that's all I know.
I also have access via npgsql. The table is defined as:
-- Table: public.n_data
-- DROP TABLE public.n_data;
CREATE TABLE public.n_data
(
  u_id integer,
  p_id integer NOT NULL,
  data text,
  CONSTRAINT nc PRIMARY KEY (p_id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.n_data
  OWNER TO postgres;
(If that info is useful anyway)
I access one single big column, read from it and write back to it.
This all works fine so far.
The question is: how does Postgres handle it if we both write at the same time?
Are there any problems there?
And if Postgres does not handle that automatically: what if I read the data, the data changes while I am processing it, and I write back my processed version afterwards ---> lost data.
It's a bit tricky to test for data integrity, since this data block is huge and corruptions are hard to find.
I do it with C#, if that means anything.
Locking in most1 relational databases (including Postgres) is always on the row level, never on the column level (it's columns and rows in a relational database, not "cells", "fields" or "records").
If two transactions modify the same row, the second one will have to wait until the first one commits or rolls back.
If two transactions modify different rows then they can do that without any problems as long as they don't modify columns that are part of a unique constraint or primary key to the same value.
Read access to data is never blocked in Postgres by regular DML statements. So yes, while one transaction modifies data, another one will see the old data until the first transaction commits the changes ("read consistency").
To handle lost updates you can either use the serializable isolation level or make all transactions follow the pattern that they first need to obtain a lock on the row (select ... for update) and hold that until they are finished. Search for "pessimistic locking" to get more details about this pattern.
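A minimal sketch of the pessimistic pattern against the n_data table from the question (the p_id value is only an example):
BEGIN;
-- blocks here until any other transaction holding a lock on this row finishes
SELECT data FROM public.n_data WHERE p_id = 1 FOR UPDATE;
-- ... process the data in the application ...
UPDATE public.n_data SET data = 'processed value' WHERE p_id = 1;
COMMIT;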
Another option is to include a "modified" timestamp in your table. When a process reads the data it also reads the modification timestamp. When it sends back the new changes it includes a where modified_at = <value obtained when reading> - if the data has changed the condition will not hold true and nothing will be updated and you need to restart your transaction. Search for "optimistic locking" to find more details about this pattern.
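A sketch of the optimistic pattern, assuming a modified_at timestamptz column has been added to n_data and @modified_at_read_earlier is an Npgsql parameter holding the value read at the start:
UPDATE public.n_data
   SET data = 'processed value',
       modified_at = now()
 WHERE p_id = 1
   AND modified_at = @modified_at_read_earlier;
-- if this reports 0 rows updated, someone else changed the row in the
-- meantime: re-read, re-process and try again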
1 Some DBMSs do page locking and some escalate many row-level locks to a table lock. Neither is the case in Postgres.

Ordered DELETE of records in self-referencing table

I need to delete a subset of records from a self referencing table. The subset will always be self contained (that is, records will only have references to other records in the subset being deleted, not to any records that will still exist when the statement is complete).
My understanding is that this might cause an error if one of the records is deleted before the record referencing it is deleted.
First question: does postgres do this operation one-record-at-a-time, or as a whole transaction? Maybe I don't have to worry about this problem?
Second question: is the order of deletion of records consistent or predictable?
I am obviously able to write specific SQL to delete these records without any errors, but my ultimate goal is to write a regression test to show the next person after me why I wrote it that way. I want to set up the test data in such a way that a simplistic delete statement will consistently fail because of the records referencing the same table. That way if someone else messes with the SQL later, they'll get notified by the test suite that I wrote it that way for a reason.
Anyone have any insight?
EDIT: just to clarify, I'm not trying to work out how to delete the records safely (that's simple enough). I'm trying to figure out what set of circumstances will cause such a DELETE statement to consistently fail.
EDIT 2: Abbreviated answer for future readers: this is not a problem. By default, Postgres checks the constraints at the end of each statement (not per record, not per transaction). Confirmed in the docs here: http://www.postgresql.org/docs/current/static/sql-set-constraints.html And by the SQLFiddle here: http://sqlfiddle.com/#!15/11b8d/1
In standard SQL, and I believe PostgreSQL follows this, each statement should be processed "as if" all changes occur at the same time, in parallel.
So the following code works:
CREATE TABLE T (ID1 int not null primary key, ID2 int not null references T(ID1));
INSERT INTO T (ID1, ID2) VALUES (1, 2), (2, 1), (3, 3);
DELETE FROM T WHERE ID2 IN (1, 2);
Where we've got circular references involved in both the INSERT and the DELETE, and yet it works just fine.
A single DELETE with a WHERE clause matching a set of records will delete those records in an implementation-defined order. This order may change based on query planner decisions, statistics, etc. No ordering guarantees are made. Just like SELECT without ORDER BY. The DELETE executes in its own transaction if not wrapped in an explicit transaction, so it'll succeed or fail as a unit.
To force order of deletion in PostgreSQL you must do one DELETE per record. You can wrap them in an explicit transaction to reduce the overhead of doing this and to make sure they all happen or none happen.
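A sketch of that, with made-up ids, deleting referencing rows before the row they point to inside one transaction:
BEGIN;
DELETE FROM t WHERE id = 3;  -- rows that reference other rows go first
DELETE FROM t WHERE id = 2;
DELETE FROM t WHERE id = 1;  -- the row the others pointed to goes last
COMMIT;  -- all three deletes succeed or none of them do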
PostgreSQL can check foreign keys at three different points:
The default, NOT DEFERRABLE: checks for each row as the row is inserted/updated/deleted
DEFERRABLE INITIALLY IMMEDIATE: same, but SET CONSTRAINTS ... DEFERRED switches the check to the end of the transaction, and SET CONSTRAINTS ... IMMEDIATE switches it back
DEFERRABLE INITIALLY DEFERRED: checks all rows at the end of the transaction
In your case, I'd define your FOREIGN KEY constraint as DEFERRABLE INITIALLY IMMEDIATE, and do a SET CONSTRAINTS DEFERRED before deleting.
(Actually if I vaguely recall correctly, despite the name IMMEDIATE, DEFERRABLE INITIALLY IMMEDIATE actually runs the check at the end of the statement instead of the default of after each row change. So if you delete the whole set in a single DELETE the checks will then succeed. I'll need to double check).
(The mildly insane meaning of DEFERRABLE is IIRC defined by the SQL standard, along with gems like a TIMESTAMP WITH TIME ZONE that doesn't have a time zone).
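A sketch of that setup, assuming a self-referencing table t(id, parent_id); the names are illustrative:
ALTER TABLE t
  ADD CONSTRAINT t_parent_fk FOREIGN KEY (parent_id)
  REFERENCES t (id) DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS t_parent_fk DEFERRED;
DELETE FROM t WHERE id IN (1, 2, 3);  -- order within the set no longer matters
COMMIT;  -- the foreign key is checked here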
If you issue a single DELETE that affects multiple records (like delete from x where id>100), that will be handled as a single transaction and either all will succeed or fail. If multiple DELETEs, you have to put them in a transaction yourself.
There will be problems. If you have a constraint with DELETE CASCADE, you might delete more than you want with a single DELETE. If you don't, the integrity check might stop you from deleting. Constraints other than NO ACTION are not deferrable, so you'd have to disable the constraint before delete and enable it afterwards (basically drop/create, which might be slow).
If you have multiple DELETEs, then the order is as the DELETE statements are sent. If a single DELETE, the database will delete in the order it happens to find them (index, oids, something else...).
So I would also suggest thinking about the logic and maybe handling the deletes differently. Can you elaborate more on the actual logic? A tree in database?
1) It will be done as one transaction if enclosed within "BEGIN/COMMIT". Otherwise, in general, no.
For more see http://www.postgresql.org/docs/current/static/tutorial-transactions.html
The answer to your question depends, in general, on how the self-referencing is implemented.
If it is within application logic, it is solely your responsibility to check these things yourself.
Otherwise, it is in general possible to restrict or cascade deletes for rows with foreign keys and ON DELETE CASCADE. However, as far as the PG docs go, I understand they are talking about referencing columns in other tables; I am not sure whether same-table foreign keys are supported:
http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
2) In general, the order of deletion will be the order in which you issue the delete statements. If you want them all to be "uninterruptible", with no other statements modifying the table in between, you enclose them in a transaction.
As a warning, I may be wrong, but what you seem to be trying to do should not be done. You should not have to rely on some esoteric "order of deletion" or other undocumented and/or implicit features of the database. The underlying logic does not seem sound; there should be another way.

Unique constraint - these columns currently doesn't have unique values..?

I have created a unique constraint with 3 columns;
my code works perfectly, but once two users entered the same data at the same time and it somehow got saved in the DB. Since that incident,
this UniqueConstraint gives me the error "These columns currently doesn't have unique values".
How can I check if a user enters this kind of entry, or how can I restrict the entry?
Can you give more details about the constraints and how you are enforcing them?
(I thought) Any modern DBMS should be able to handle concurrency/multiple users at the same time without constraint problems. My guess is that you are using ADO.NET DataSet/DataTable and adding constraints yourself.
If that's the case, I think the easiest/best thing to do is to add the constraint in the database as well. If two users update/save at the same time, the database will handle it correctly; one will successfully save data, the other will receive an error. You can handle that error in the application gracefully.
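For instance, if the database is SQL Server, a database-side constraint over the three columns would look something like this (table and column names are placeholders):
ALTER TABLE dbo.MyTable
  ADD CONSTRAINT UQ_MyTable_Col1_Col2_Col3 UNIQUE (Col1, Col2, Col3);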
I guess you're using SQL Server, in which case, as long as the constraint has been defined in the database, it is checked upon insert. According to the SQL Server documentation, what you describe cannot happen:
The Database Engine automatically creates a UNIQUE index to enforce the uniqueness requirement of the UNIQUE constraint. Therefore, if an attempt to insert a duplicate row is made, the Database Engine returns an error message that states the UNIQUE constraint has been violated and does not add the row to the table.
Even if the inserts happen (almost) simultaneously, the requests will be queued in the database, so that one of the requests will fail if it detects the constraint would be violated.
As Rob P says, it looks as though you are creating the constraints outside of the DB layer.

Adding a constraint or otherwise preventing the removal of a record in a SQL DB table

I have a table in a SQL Server database that contains a row I never want deleted. This is a row required by the application to work as designed.
Is there a way to add a constraint to that row that prevents it from being deleted? Or another way to handle this scenario?
Here is an example of using a FOR DELETE trigger to prevent the deletion of a row when a certain condition is satisfied:
CREATE TRIGGER KeepImportantRow ON MyTable
FOR DELETE
AS BEGIN
    -- This next check assumes that your important table has a
    -- column called id, and your important row has an id of 0.
    -- Adjust accordingly for your situation.
    -- DELETED is a pseudo-table that can hold several rows, so it must be
    -- queried rather than compared directly.
    IF EXISTS (SELECT 1 FROM DELETED WHERE id = 0) BEGIN
        RAISERROR('Cannot delete important row!', 16, 1)
        ROLLBACK TRAN
    END
END
If you want to prevent accidental deletes then you could have a dummy table that declares a foreign key into your table with ON DELETE NO ACTION, and add one row in it with the foreign key matching your 'precious' row's primary key. This way, if someone tries to delete the 'parent' row, the engine will refuse and raise an error.
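A sketch of that dummy-table idea, assuming the protected row in MyTable has id = 0:
CREATE TABLE ProtectedRows
(
    protected_id int NOT NULL,
    CONSTRAINT FK_ProtectedRows_MyTable
        FOREIGN KEY (protected_id)
        REFERENCES MyTable (id) ON DELETE NO ACTION
);
INSERT INTO ProtectedRows (protected_id) VALUES (0);
-- deleting the id = 0 row from MyTable now fails with a foreign key violation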
If you want to prevent intentional deletes then you should rely on security (deny DELETE permission on the table). Of course, privileged users that have the required permission can delete the row; there is no way to prevent that, nor should you try. Since SQL Server does not support row-level security, if you need to deny only certain rows then you have to go back to the drawing board and change your table layout so that all rows that must not be deleted are stored in one table, and rows that are allowed to be deleted are stored in a different table.
Other solutions (like triggers) will ultimately be a variation on these themes. What you really must settle is whether you want to prevent accidental deletes (solvable) or intentional deletes (unsolvable: it is their database, not yours).
You could do it in a number of ways, although it depends on the situation.
If the table only contains that row, do not grant DELETE / TRUNCATE privileges.
If the table contains other rows as well, you could use a delete trigger.
One issue you will have is that someone with DBA / SA access to the database can get around anything you put in, if they desire, so decide what you are trying to protect against: a casual user, or anyone.