I have made a library management system using Postgresql and I would like to limit the number of books a student/employee is able to borrow. If someone wants to add a new tuple where a student/employee has borrowed a book, and that particular user has already borrowed for example 7 books, the table won't accept another addition.
As I see it, you either need to handle this from a business-logic perspective, i.e. before the insert, retrieve the data for the specific student and then take action,
or
from a rule-based perspective:
don't wait for additional rows to be inserted by the application; instead, have the database watch the table's count and notify the app once the limit is reached.
You can have a trigger call a stored procedure based on the number of books taken by a specific user; if count_num_books > 7, the app would handle it.
Please take a look at ON CONFLICT, as described in this tutorial:
http://www.postgresqltutorial.com/postgresql-upsert/
You can create a stored procedure with insert on conflict and take action accordingly.
INSERT INTO table_name(column_list) VALUES(value_list)
ON CONFLICT target action;
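For instance, a minimal sketch (the table and columns here are hypothetical, and the conflict target must be covered by a unique constraint):

INSERT INTO borrowed_books (student_id, book_id)
VALUES (42, 1001)
ON CONFLICT (student_id, book_id) DO NOTHING;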
In general, SQL does not make this easy. The typical solution is something like this:
Keep a table with one row per book borrowed and student.
Keep a count of outstanding books in the students table.
Maintain this count using triggers.
Add a check constraint on the count.
Postgres does have more convenient methods. One method is to store the list of borrowed books as an array or in a JSON structure. Alas, this is not a relational format, and it doesn't allow the declaration of foreign key constraints.
That said, it does allow a simple check constraint on the books_borrowed column (using cardinality(), for instance). But it doesn't make it easy to validate that there are no duplicates in the array. Also, INSERTs, UPDATEs, and DELETEs are more complicated.
For your particular problem, I would recommend the first approach.
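A minimal sketch of that first approach in Postgres (the table and column names are hypothetical; execute function requires Postgres 11+, older releases use execute procedure):

-- the limit is enforced declaratively by a check constraint on the counter
ALTER TABLE students
    ADD COLUMN books_out integer NOT NULL DEFAULT 0
    CHECK (books_out BETWEEN 0 AND 7);

-- keep the counter in sync with the borrowing table
CREATE OR REPLACE FUNCTION maintain_books_out()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    IF tg_op = 'INSERT' THEN
        UPDATE students SET books_out = books_out + 1
         WHERE student_id = new.student_id;
    ELSE  -- DELETE
        UPDATE students SET books_out = books_out - 1
         WHERE student_id = old.student_id;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$;

CREATE TRIGGER trg_books_out
    AFTER INSERT OR DELETE ON borrowed_books
    FOR EACH ROW EXECUTE FUNCTION maintain_books_out();

An attempted insert of an eighth book then fails on the check constraint.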
As mentioned, the best place for this is APPLICATION-level checking. But otherwise, perhaps this is a case where the easiest method is doing nothing extra: i.e. don't try to keep a running total of active checkouts. Since Postgres has no issue with a trigger selecting from the table that fired it, just derive the number of books currently checked out. The following assumes the existence of a checkouts table:
create table checkouts
( checkout_id serial
, student_employee_id integer not null
, book_id integer not null
, out_date date not null
, return_date date default null
) ;
Then create a BEFORE INSERT row trigger on this table that calls the following function:
create or replace function limit_checkouts()
    returns trigger
    language plpgsql
as $$
declare
    checkout_count integer;
begin
    select count(*)
      into checkout_count
      from checkouts c
     where c.student_employee_id = new.student_employee_id
       and c.return_date is null;

    -- the new row is not yet visible here, so 7 open checkouts
    -- means this insert would be the 8th
    if checkout_count >= 7 then
        raise exception 'Checkout limit exceeded';
    end if;

    return new;
end;
$$;
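A trigger definition to attach the function might look like this (the trigger name is illustrative; execute function requires Postgres 11+, older releases use execute procedure):

create trigger checkouts_limit_check
    before insert on checkouts
    for each row
    execute function limit_checkouts();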
I have one table "inventory" with a column "stock" and another table "bike_with_inventory" with column "input". I'm working with Oracle APEX but that doesn't matter.
I just want to add a constraint check (stock >= input) so that I can't book a part that is not there.
Any suggestions? I can't find anything to do that, so help would be appreciated.
I have one table "inventory" with a column "stock" and one table "bike_with_inventory" with column "input".
I just want to do a constraint check(stock>=input) so that I cant book a part which is not there.
You cannot - a CHECK constraint can only reference columns in the same table.
Instead you can wrap the logic in a stored procedure and use that to validate the data before the INSERT.
CREATE PROCEDURE book_part(
    i_id      IN  BIKE_WITH_INVENTORY.ID%TYPE,
    i_input   IN  BIKE_WITH_INVENTORY.INPUT%TYPE,
    o_success OUT NUMBER
)
IS
    p_stock INVENTORY.STOCK%TYPE;
BEGIN
    -- look up the current stock for this part
    SELECT stock
      INTO p_stock
      FROM inventory
     WHERE id = i_id;

    -- not enough stock: flag failure and change nothing
    IF p_stock < i_input THEN
        o_success := 0;
        RETURN;
    END IF;

    -- book the part and reduce the stock to match
    INSERT INTO bike_with_inventory ( id, input )
    VALUES ( i_id, i_input );

    UPDATE inventory
       SET stock = stock - i_input
     WHERE id = i_id;

    o_success := 1;
EXCEPTION
    WHEN NO_DATA_FOUND THEN  -- no inventory row for this id
        o_success := 0;
END;
/
Or you could use a trigger.
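For example, a rough sketch of such a trigger (untested; the trigger name is illustrative, and it only validates the insert, leaving the stock update to other code):

CREATE OR REPLACE TRIGGER bike_with_inventory_bir
BEFORE INSERT ON bike_with_inventory
FOR EACH ROW
DECLARE
    p_stock INVENTORY.STOCK%TYPE;
BEGIN
    SELECT stock INTO p_stock FROM inventory WHERE id = :new.id;
    IF p_stock < :new.input THEN
        RAISE_APPLICATION_ERROR(-20001, 'Not enough stock for part ' || :new.id);
    END IF;
END;
/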
I believe you are not thinking about this the right way. You really want a CHECK CONSTRAINT, not a verification at the time of insertion (or update) only.
The constraint should be valid (return TRUE) at all times, and it should prevent "invalid" changes to BOTH tables. One shouldn't be allowed to reduce the quantity in the INVENTORY table without a sufficient reduction in the corresponding quantity in BIKE_WITH_INVENTORY. Doesn't the inequality stock >= input have to be true AT ALL TIMES, and not just at initial insertion into BIKE_WITH_INVENTORY?
One method to implement such check constraints is to create a materialized view with fast refresh on commit. It should have three columns: ID, STOCK and INPUT (selected from the join of the two tables on ID). On the materialized view, you can have check constraints - in this case it would be STOCK >= INPUT.
The MV will cause transactions to fail at COMMIT time - which is bad in one sense (you don't get immediate feedback) and good in another (you can make changes to both tables, and if the end result after the FULL transaction or transactions is valid, then you can COMMIT and the transactions will be successful).
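A rough, untested sketch of the idea (names are illustrative; fast refresh on a join MV needs ROWID-based materialized view logs on both tables and the ROWIDs in the select list):

CREATE MATERIALIZED VIEW LOG ON inventory WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON bike_with_inventory WITH ROWID;

CREATE MATERIALIZED VIEW stock_check_mv
    REFRESH FAST ON COMMIT
AS
SELECT i.ROWID AS i_rid,
       b.ROWID AS b_rid,
       i.id,
       i.stock,
       b.input
  FROM inventory i, bike_with_inventory b
 WHERE b.id = i.id;

-- the check constraint lives on the MV's container table
ALTER TABLE stock_check_mv
    ADD CONSTRAINT stock_covers_input CHECK (stock >= input);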
For a fuller illustration of how this should work, do a Google search for "materialized view to implement multi-table constraint" and see what comes back.
In an Oracle database table, how are auto-incremented sequence values generated with PL/SQL, such as for key or id columns?
Included is a discussion on sizing table resources based on what you know about the projected growth of all the related tables in a given schema. How many piano tuners are there in the city of Chicago? Probably fewer than the population of the city altogether... :) and so on.
Know your data.
How do I do that? Read on.
Using Database Triggers to Auto Increment Column Values
One possible approach is to use database triggers. The values are guaranteed unique through the use of sequence database objects.
In the example below, the table FUND_PEOPLE has an associated sequence called FUND_PEOPLE_SEQ for its primary key value.
The Parts List: What you need for the example.
CREATE TABLE "FUND_PEOPLE"
( "PERSON_ID" NUMBER NOT NULL ENABLE,
"ANONYMOUS_IND" VARCHAR2(1) NOT NULL ENABLE,
"ALIAS_IND" VARCHAR2(1) NOT NULL ENABLE,
"ALIAS_NAME" VARCHAR2(50),
"FIRST_NAME" VARCHAR2(50),
"LAST_NAME" VARCHAR2(50),
"PHONE_NUMBER" VARCHAR2(20),
"EMAIL_ADDRESS" VARCHAR2(100),
CONSTRAINT "FUND_PEOPLE_PK" PRIMARY KEY ("PERSON_ID") ENABLE
) ;
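The trigger below also needs the sequence; a minimal declaration might be (the starting value of 20 is discussed later in this answer):

CREATE SEQUENCE "FUND_PEOPLE_SEQ"
    START WITH 20
    INCREMENT BY 1;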
This is an older sample of code, but the
SELECT seq.NEXTVAL INTO var FROM DUAL
construct was what we did with older releases of the Oracle Database. Release 11g changes that, so that sequence increments can be assigned directly to a PL/SQL variable or used in a SQL statement call. Both will work.
Challenge yourself to figure out what the revised PL/SQL might look like...
CREATE OR REPLACE TRIGGER "BI_FUND_PEOPLE"
before insert on "FUND_PEOPLE"
for each row
begin
if :NEW."PERSON_ID" is null then
:NEW."PERSON_ID" := "FUND_PEOPLE_SEQ".nextval;
end if;
end;
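For comparison, a sketch of the older pre-11g construct mentioned above, as it would appear inside the same trigger body:

    select "FUND_PEOPLE_SEQ".nextval
      into :NEW."PERSON_ID"
      from dual;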
The notation is:
"BI" for Before INSERT
:NEW for the value of the column after the triggering event.
PERSON_ID is the column that can be omitted from the insert statement. The starting value of the sequence is 20. A call to SEQ.NEXTVAL increments the sequence by the amount identified in the sequence declaration.
A Brief Discussion on Sequence Sizing
If you specify a sequence without a MAXVALUE, the system assigns the largest possible integer handled by the database, which is 20+ digits long.
Now what are we going to do with a table that contains records numbering twenty or more orders of magnitude: "9999999999999999999999999999"?
Hmmmm... I've heard of "defensiveness in coding", but for most applications, this is... indefensible?
Consider looking at the original project design I have worked with on Google Code:
Google Code Project: Fundraiser Organizer
This is the rough relation in record counts between each entity in the schema I set up. The two circles are metrics which are proportional to the values they calculate or aggregate. Considering the data cases, what are the relative magnitudes for each count of records?
A look at the first screenshot shows my estimates. This project started out as a sheet of data to keep track of a fund-raising drive. Hope you find it informative and thought-inspiring!
The UI and my testing efforts were conducted on a public demo site hosted by Oracle at the Oracle Apex Demo System.
Let's say I have the following Categories table:
Category MinValue MaxValue
A 1 2
B 3 9
C 10 0
Above I'm using 0 to indicate no maximum. These values will be configurable by end users. They will be able to add and remove categories, and modify the max and min values. Is there any sort of a constraint I can place on the table to ensure that no two ranges overlap?
This table will be modified using a web application so I could pre-validate changes to the table using Javascript so even an algorithm to prevent duplicates might suffice.
Maybe I'm missing the obvious here, but I don't think this is easy in Oracle.
I've seen solutions using a materialized view that:
contains the overlaps from the Categories table,
is refreshed on commit, and
has a check constraint that it not contain any rows. This can be achieved by having a "rownum" column in the materialized view and a check constraint that this "rownum" column's value is always 0.
The check constraint on the materialized view will then be violated on commit if a user enters any overlapping data.
You'll need to write your front end to allow for exceptions to be raised by Oracle on commit and to present an appropriate message to the user.
In recent versions of PostgreSQL, for example, this is very easy with exclusion constraints.
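A minimal sketch of that approach (names are illustrative; here NULL rather than 0 marks "no maximum", and both bounds are treated as inclusive):

CREATE TABLE categories (
    category  text PRIMARY KEY,
    min_value integer NOT NULL,
    max_value integer,  -- NULL = no maximum
    -- reject any two rows whose ranges overlap
    EXCLUDE USING gist ((int4range(min_value, max_value, '[]')) WITH &&)
);

PostgreSQL then rejects any INSERT or UPDATE that would create overlapping ranges, with no trigger needed.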
I don't think that you can do it with a constraint, but you should be able to create a before insert/update trigger and use raise_application_error to abort the insert if it violates the conditions.
Something like...
declare
    l_overlaps integer;
begin
    select count(*) into l_overlaps from yourtable
     where :new.minvalue < maxvalue and :new.maxvalue > minvalue;
    if l_overlaps > 0 then
        raise_application_error(-20001, 'Range overlaps an existing category');
    end if;
end;
I would like to insert a row into a history table when any column is updated in a table.
I'm just looking to capture the column name, old value and new value.
I'd like this trigger to be as reusable as possible as I'm going to use the same concept on other tables.
I'm familiar with triggers and with how to capture updates on one column. I'm specifically looking for how to write one trigger that inserts a record into a history table for any column that gets updated in the history table's corresponding table.
EDIT 1
I have stated NOWHERE in my post that I'm looking for source code so shame on anyone that downvotes me and thinks that I'm looking for that. You can check my previous questions/answers to see I'm not one looking for "free source code".
As I stated in my original question, I'm looking for how to write this. I've examined http://plsql-tutorial.com/plsql-triggers.htm and there's a code block which shows how to write a trigger for when ONE column is updated. I figured that maybe someone would have the know-how to give direction on having a more generic trigger for the scenario I've presented.
Assuming a regular table rather than an object table, you don't have a whole lot of options. Your trigger would have to be something of the form
CREATE OR REPLACE TRIGGER trigger_name
    AFTER UPDATE ON table_name
    FOR EACH ROW
BEGIN
    IF( UPDATING( 'COLUMN1' ) )
    THEN
        INSERT INTO log_table( column_name, column_value )
        VALUES( 'COLUMN1', :new.column1 );
    END IF;
    IF( UPDATING( 'COLUMN2' ) )
    THEN
        INSERT INTO log_table( column_name, column_value )
        VALUES( 'COLUMN2', :new.column2 );
    END IF;
    <<repeat for all columns>>
END;
You could fetch the COLUMN1, COLUMN2, ... COLUMN<<n>> strings from the data dictionary (USER_TAB_COLS) rather than hard-coding them but you'd still have to hard-code the references to the columns in the :new pseudo-record.
You could potentially write a piece of code that generated the trigger above by querying the data dictionary (USER_TAB_COLS or ALL_TAB_COLS most likely), building a string with the DDL statement, and then doing an EXECUTE IMMEDIATE to execute the DDL statement. You'd then have to call this script any time a new column is added to any table to re-create the trigger for that column. It's tedious but not particularly technically challenging to write and debug this sort of DDL generation code. But it rarely is worthwhile because someone inevitably adds a new column and forgets to re-run the script or someone needs to modify a trigger to do some additional work and it's easier to just manually update the trigger than to modify and test the script that generates the triggers.
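A rough, untested sketch of that generation approach (the table, trigger, and log-table names are illustrative, and the log table is assumed to accept each column's value):

DECLARE
    l_sql VARCHAR2(32767);
BEGIN
    l_sql := 'CREATE OR REPLACE TRIGGER table_name_log_trg '
          || 'AFTER UPDATE ON table_name FOR EACH ROW BEGIN ';

    -- one IF UPDATING(...) block per column, as in the trigger above
    FOR c IN (SELECT column_name
                FROM user_tab_cols
               WHERE table_name = 'TABLE_NAME'
               ORDER BY column_id)
    LOOP
        l_sql := l_sql
              || 'IF( UPDATING( ''' || c.column_name || ''' ) ) THEN '
              || 'INSERT INTO log_table( column_name, column_value ) '
              || 'VALUES( ''' || c.column_name || ''', :new.' || c.column_name || ' ); '
              || 'END IF; ';
    END LOOP;

    l_sql := l_sql || 'END;';
    EXECUTE IMMEDIATE l_sql;
END;
/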
More generally, though, I would question the wisdom of storing data this way. Storing one row in the history table for every column of every row that is modified makes using the history data very challenging. If someone wants to know what state a particular row was in at a particular point in time, you would have to join the history table to itself N times, where N is the number of columns in the table at that point in time. That's going to be terribly inefficient, which will quickly make people avoid the history data because they can't do useful stuff with it in a reasonable period of time without tearing their hair out. It's generally much more effective to have a history table with the same set of columns that the live table has (with a few more added for tracking dates and the like) and to insert one row in the history table each time the row is updated. That will consume more space but it is generally much easier to use.
And Oracle has a number of ways to audit data changes-- you can AUDIT DML, you can use fine-grained auditing (FGA), you can use Workspace Manager, or you can use Oracle Total Recall. If you are looking for more flexibility than writing your own trigger code, I'd strongly suggest that you investigate these other technologies which are inherently much more automatic rather than trying to develop your own architecture.
You might set up the history table to be the SAME as the main table, plus a date and type field. You only need to capture the old values, as the new values are in the main table.
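One quick way to stamp out such a history table (a sketch; the names are illustrative):

create table MY_TABLE_HIST as
select t.*,
       cast(null as varchar2(10)) as dml_type,
       cast(null as date) as dml_date
  from MY_TABLE t
 where 1 = 0;  -- copy the structure only, no rows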
try this (untested):
create or replace trigger "MY_TRIGGER"
before update or delete
on MY_TABLE referencing new as new old as old
for each row
declare
l_dml_type varchar2(10);
begin
if (updating) then
l_dml_type := 'UPD';
else
l_dml_type := 'DEL';
end if;
insert into MY_TABLE_HIST
(
col1,
col2,
col3,
dml_type,
dml_date
)
values
(
:old.col1,
:old.col2,
:old.col3,
l_dml_type,
sysdate
);
end;
/
As a note, depending on your design, if space is a limit, you can create a view over the history table that tracks the changes in the way you were going for and just shows what the record was at a given time.
I need to update multiple rows in a Parts table when a field in another table changes and I want to use a trigger. The reason for the trigger is many existing application use and modify the data and I don't have access to all of them. I know some databases support a For Each Row in the trigger statement but I don't think Microsoft does.
Specifically, I have two tables, Parts and Categories.
Parts has Part#, Category_ID, Part_Name and Original and lots of other stuff
Category has Category_ID and Category_name.
Original is a concatenation of Category_Name and Part_Name separated by a ':'
For example Bracelets:BB129090
If someone changes the Category_Name (for example, correcting Braclets to Bracelets), the Original field must be updated in every row of the Parts table. While this is an infrequent event, it happens enough to cause trouble.
No web or desktop applications use Original.
All accounting applications use only Original.
It is my task to keep Accounting and the other application in sync.
I did not design the database and the company that wrote the accounting program will not change it.
Or another option: why don't you just create a view over those two tables, for your Accounting department, which contains this concatenated column:
CREATE VIEW dbo.AccountingView
AS
    SELECT
        p.PartNo, p.Part_Name, p.Category_ID,
        c.Category_Name + ':' + p.Part_Name AS Original
    FROM
        Parts p
    INNER JOIN
        Category c ON p.Category_ID = c.Category_ID
Now your accounting people can use this view for their reporting; it's always fresh and up to date, and you don't have to worry about update and insert triggers and all those tricky things.....
Marc
The Original column violates 1NF, which is a very bad idea. You can either
Skip the column completely and concatenate it in each query (probably not the best solution, but I argue that it's probably better than the trigger).
Create a view over the table and have the Original column in the view (probably what I would do), or
Make Original a computed column, which is the best way if you want to create an index on it.
I guess in your case there is no need for a row-level trigger.
You can do something like
IF UPDATE(Category_Name)
    UPDATE p
       SET p.Original = i.Category_Name + ':' + p.Part_Name
      FROM Parts p
     INNER JOIN inserted i ON p.Category_ID = i.Category_ID
as an UPDATE trigger on the Category table.
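Wrapped up as a complete trigger, that might look like this (the trigger name is illustrative):

CREATE TRIGGER trg_Category_SyncOriginal
ON Category
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(Category_Name)
    BEGIN
        -- rebuild Original for every part in the changed categories
        UPDATE p
           SET p.Original = i.Category_Name + ':' + p.Part_Name
          FROM Parts p
         INNER JOIN inserted i ON p.Category_ID = i.Category_ID;
    END
END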
If you really need per-row processing (say, calling a stored procedure for each row), you need a CURSOR or a WHILE loop over inserted.
If you can alter the table schemas, an option that would ensure the Original column is always up to date, no matter what, is to make Original a computed column - a column that's computed from the Category_Name plus the Part_Name as needed.
For this, you need to create a stored function that will do that computation for you - something like this:
CREATE FUNCTION dbo.CreateOriginal(@Category_ID INT, @Part_Name VARCHAR(50))
RETURNS VARCHAR(50)
WITH SCHEMABINDING
AS BEGIN
    DECLARE @Category_Name VARCHAR(50)

    SELECT @Category_Name = Category_Name
      FROM dbo.Category
     WHERE Category_ID = @Category_ID

    RETURN @Category_Name + ':' + @Part_Name
END
and then you need to add a column to your Parts table which will show the result of this function for each row in the table:
ALTER TABLE Parts
ADD Original AS dbo.CreateOriginal(Category_ID, Part_Name)
The main drawback is the fact that to display the column value, the function has to be called each time, for each row.
On the other hand, your data is always up to date and always guaranteed to be correct, no matter what. No triggers needed, either.
See if that works for you - depending on your needs and the amount of data you have, it might well perform just fine for you.
Marc