SQL: Compare data from two different tables in a CHECK constraint - sql

I have one table "inventory" with a column "stock" and another table "bike_with_inventory" with a column "input". I'm working with Oracle APEX, but that shouldn't matter.
I just want a constraint check (stock >= input) so that I can't book a part which is not there.
Any suggestions? I can't find anything on how to do this, so help would be appreciated.

You cannot - a CHECK constraint can only reference columns in the same table.
Instead, you can wrap the logic in a stored procedure and use it to validate the data before the INSERT.
CREATE OR REPLACE PROCEDURE book_part (
  i_id      IN  BIKE_WITH_INVENTORY.ID%TYPE,
  i_input   IN  BIKE_WITH_INVENTORY.INPUT%TYPE,
  o_success OUT NUMBER
)
IS
  p_stock INVENTORY.STOCK%TYPE;
BEGIN
  -- Lock the inventory row so concurrent bookings cannot oversell the part.
  SELECT stock
  INTO   p_stock
  FROM   inventory
  WHERE  id = i_id
  FOR UPDATE;

  -- Not enough stock: signal failure and change nothing.
  IF p_stock < i_input THEN
    o_success := 0;
    RETURN;
  END IF;

  INSERT INTO bike_with_inventory (id, input)
  VALUES (i_id, i_input);

  UPDATE inventory
  SET    stock = stock - i_input
  WHERE  id = i_id;

  o_success := 1;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    -- No inventory row exists for this id.
    o_success := 0;
END;
/
Or you could use a trigger.
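For illustration, a minimal sketch of such a trigger, reusing the table and column names from the question:
create or replace trigger trg_check_stock
before insert on bike_with_inventory
for each row
declare
  l_stock inventory.stock%type;
begin
  -- look up the current stock for the part being booked
  select stock
  into   l_stock
  from   inventory
  where  id = :new.id;

  if l_stock < :new.input then
    raise_application_error(-20001, 'Not enough stock for part ' || :new.id);
  end if;
end;
/
Note that this validates only at insert time; later updates to either table can still break the rule.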

I believe you are not thinking about this the right way. You really want a CHECK constraint, not a verification only at the time of insertion (or update).
The constraint should be valid (return TRUE) at all times, and it should prevent "invalid" changes to BOTH tables. One shouldn't be allowed to reduce the quantity in the INVENTORY table without a sufficient reduction in the corresponding quantity in BIKE_WITH_INVENTORY. Doesn't the inequality stock >= input have to be true AT ALL TIMES, and not just at initial insertion into BIKE_WITH_INVENTORY?
One method to implement such check constraints is to create a materialized view with fast refresh on commit. It should have three columns: ID, STOCK and INPUT (selected from the join of the two tables on ID). On the materialized view, you can have check constraints - in this case it would be STOCK >= INPUT.
The MV will cause transactions to fail at COMMIT time - which is bad in one sense (you don't get immediate feedback) and good in another (you can make changes to both tables, and if the end result after the FULL transaction or transactions is valid, then you can COMMIT and the transactions will be successful).
I won't walk through a complete worked example here; do a Google search for "materialized view to implement multi-table constraint" and see what comes back.
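That said, a bare-bones sketch of the shape of the solution, assuming both tables join on an ID column (fast refresh on commit on a join requires materialized view logs with rowids on both tables):
create materialized view log on inventory with rowid;
create materialized view log on bike_with_inventory with rowid;

create materialized view mv_stock_check
  refresh fast on commit
as
select i.rowid  i_rid,
       b.rowid  b_rid,
       i.stock,
       b.input
from   inventory i, bike_with_inventory b
where  b.id = i.id;

alter table mv_stock_check
  add constraint chk_stock_ge_input check (stock >= input);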

How to create a SQL trigger to set a table column equal to a value after insert?

I'm new to Oracle and SQL, but trying to learn triggers. I think I'm having some syntax errors here, but let me explain what I am trying to do.
I have two tables: 1. group_membership with the columns
user_internal_id | group_internal_id (FK) | joined_time
and 2. group_details with the columns
group_internal_id (PK) | group_name | group_owner | created_time | movie_cnt | member_cnt|
(PK and FK stand for Primary Key and Foreign Key that relates to that Primary Key respectively.)
What I want to do:
After a new row is inserted into the group_membership table, I want to
update the value of member_cnt in the group_details table with the number of times that particular group_internal_id appears in the group_membership table.
--
Now, the DBA for the app we are working on has created a trigger that simply updates the member_cnt of a particular group by reading the group_internal_id of the row inserted into group_membership, then adding 1 to member_cnt. That probably works better, but I want to figure out why my trigger is having errors. Here is the code below:
CREATE OR REPLACE TRIGGER set_group_size
AFTER INSERT ON group_membership
FOR EACH ROW
DECLARE g_count NUMBER;
BEGIN
  SELECT COUNT(group_internal_id)
  INTO g_count
  FROM group_membership
  GROUP BY group_internal_id;
  UPDATE group_details
  SET member_cnt = g_count
  WHERE group_details.group_internal_id = group_membership.group_internal_id;
END;
The errors I'm receiving are:
Error(7,5): PL/SQL: SQL Statement ignored
Error(9,45): PL/SQL: ORA-00904: "GROUP_MEMBERSHIP"."GROUP_INTERNAL_ID": invalid identifier
I came here because my efforts at troubleshooting have been futile. Hope to hear some feedback. Thanks!
The immediate issue with your code is the update query of your trigger:
UPDATE group_details
SET member_cnt = g_count
WHERE group_details.group_internal_id = group_membership.group_internal_id;
group_membership is not defined in that scope. To refer to the values of the row being inserted, use the :new pseudo-record instead:
WHERE group_details.group_internal_id = :new.group_internal_id;
Another problem is the select query, which might return multiple rows (one per group). It needs a where clause that filters on the newly inserted group_internal_id:
SELECT COUNT(*)
INTO g_count
FROM group_membership
WHERE group_internal_id = :new.group_internal_id;
But these obvious fixes are not sufficient. Oracle won't let a row-level trigger select from the table it fired upon. On execution, you would get the error:
ORA-04091: table GROUP_MEMBERSHIP is mutating, trigger/function may not see it
There is no easy way around this. Let me suggest that this whole design is broken: the count of members per group is derived information that can easily be computed on the fly whenever needed. Instead of trying to store it, you could, for example, use a view:
create view view_group_details as
select gd.group_internal_id,
       gd.group_name,
       ( select count(*)
         from   group_membership gm
         where  gm.group_internal_id = gd.group_internal_id
       ) member_cnt
from   group_details gd
Agree with GMB that your design is fundamentally flawed, but if you insist on keeping a running count there is an easy solution to the mutating-table error they point out. The entire process is predicated on maintaining the count in the group_details.member_cnt column. Since that column already holds the previous count, you do not need to count the rows again - so eliminate the select. Your trigger becomes:
create or replace trigger set_group_size
after insert on group_membership
for each row
begin
  update group_details
  set    member_cnt = member_cnt + 1
  where  group_details.group_internal_id = :new.group_internal_id;
end;
Of course, then you need to handle deletes from group_membership and updates of group_internal_id; a sketch of the delete side is below. Also, what happens when 2 users process the same group_membership row simultaneously? Maintaining a running total for a derivable column is just not worth the effort. The best option is to just create the view as GMB suggested.
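For completeness, the delete side would be a mirror image of the insert trigger (a sketch only; updates of group_internal_id would need both a decrement and an increment):
create or replace trigger set_group_size_del
after delete on group_membership
for each row
begin
  update group_details
  set    member_cnt = member_cnt - 1
  where  group_details.group_internal_id = :old.group_internal_id;
end;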

SQL, limit the amount of times something can be added

I have made a library management system using PostgreSQL and I would like to limit the number of books a student/employee can borrow. If someone tries to add a new tuple recording that a student/employee has borrowed a book, and that particular user has already borrowed, for example, 7 books, the table should not accept another addition.
As I see it, you either need to handle this from a business-logic perspective, i.e. before the insert, retrieve the data for that specific student and then take action,
or
from a rule-based perspective:
do not wait for additional rows to be inserted by the application, but constantly watch the count in the table, and upon reaching the limit have the db notify the app instead.
You can call/trigger a stored procedure based on the number of books taken by a specific user; if count_num_books > 7, then the app would handle it.
Please take a look at ON CONFLICT, as described here:
http://www.postgresqltutorial.com/postgresql-upsert/
You can create a stored procedure with INSERT ... ON CONFLICT and take action accordingly.
INSERT INTO table_name(column_list) VALUES(value_list)
ON CONFLICT target action;
In general, SQL does not make this easy. The typical solution is something like this (a sketch follows the list):
Keep a table with one row per book borrowed and student.
Keep a count of outstanding books in the students table.
Maintain this count using triggers.
Add a check constraint on the count.
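A minimal Postgres sketch of that trigger-plus-check approach; the students and loans tables and every other name here are hypothetical, and execute function requires Postgres 11 or later:
alter table students
  add column books_out integer not null default 0,
  add constraint chk_books_out check (books_out between 0 and 7);

create or replace function bump_books_out()
returns trigger
language plpgsql
as $$
begin
  if tg_op = 'INSERT' then
    -- the check constraint rejects this update once books_out would exceed 7,
    -- which also rolls back the triggering insert on loans
    update students set books_out = books_out + 1 where student_id = new.student_id;
    return new;
  else
    update students set books_out = books_out - 1 where student_id = old.student_id;
    return old;
  end if;
end;
$$;

create trigger trg_bump_books_out
after insert or delete on loans
for each row execute function bump_books_out();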
Postgres does have more convenient methods. One method is to store the list of borrowed books as an array or in a JSON structure. Alas, this is not a relational format, and it doesn't allow the declaration of foreign key constraints.
That said, it does allow a simple check constraint on the books_borrowed column, for instance by using cardinality(). It doesn't make it easy to validate that there are no duplicates in the array, though, and INSERTs, UPDATEs, and DELETEs become more complicated.
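For instance, a hypothetical books_borrowed array column could be capped like this:
alter table students
  add column books_borrowed integer[] not null default '{}',
  add constraint chk_max_borrowed check (cardinality(books_borrowed) <= 7);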
For your particular problem, I would recommend the first approach.
As mentioned, the best place for this is APPLICATION-level checking. But otherwise, perhaps this is a case where the easiest method is doing nothing - i.e. don't try to keep a running total of active checkouts. Since Postgres has no issue with a trigger selecting from the table that fired it, just derive the number of outstanding checkouts on the fly. The following assumes the existence of a checkouts table:
create table checkouts
( checkout_id         serial
, student_employee_id integer not null
, book_id             integer not null
, out_date            date not null
, return_date         date default null
);
Then create a BEFORE INSERT row trigger on this table that calls the following function:
create or replace function limit_checkouts()
returns trigger
language plpgsql
as $$
declare
  checkout_count integer;
begin
  -- count this student's/employee's books that are still out
  select count(*)
  into   checkout_count
  from   checkouts c
  where  c.student_employee_id = new.student_employee_id
  and    c.return_date is null;

  -- the new row is not yet visible to the count, so 7 existing
  -- checkouts means this would be the 8th
  if checkout_count >= 7
  then
    raise exception 'Checkout limit exceeded';
  end if;
  return new;
end;
$$;
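The trigger wiring is not shown above; it would be something like the following (execute function is Postgres 11+ syntax, older versions use execute procedure):
create trigger trg_limit_checkouts
before insert on checkouts
for each row
execute function limit_checkouts();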

How can I make certain Oracle table rows marked as 'historical' invisible/unavailable?

I have a huge existing Order Management Application.
Now, in the main ORDER table, I am adding a new column: IS_HISTORICAL. If its value is TRUE, the order is historical and should not show up in the application.
This means I have to modify many SQL queries in my existing application so that they select only those orders whose IS_HISTORICAL is 'FALSE', i.e. add the following to the WHERE clause:
AND IS_HISTORICAL='FALSE'
Question: Is there an easier way, so that I do not have to modify so many application queries (to hide away historical orders)?
Essentially, all orders marked IS_HISTORICAL='TRUE' should become invisible/unavailable for reads and updates!
Note: Right now the table sizes are not very huge, but ultimately I intend to partition the table by IS_HISTORICAL true/false.
If you're only going to use the historical data for analysis then I prefer Florin's solution (moving the rows to a history table), as the amount of data each query has to look at stays smaller. It makes the analysis queries more difficult, as you need to UNION ALL, but everything else will run "quicker" (it may not be noticeable).
If some applications/users require access to the historical data, the better solution would be to rename your table and create a view on top of it with the query that you need.
The problem with re-writing all your queries is that you're going to forget one or get one wrong, either now or in the future. A view removes that problem: the query is static, and every time you query the view the additional conditions you require are added automatically.
Something like:
rename orders to order_history;

create or replace view orders as
select *
from   order_history
where  is_historical = 'FALSE';
Two further points.
I wouldn't bother with TRUE / FALSE; if the table gets large, that's a lot of additional data to scan. Create your column as a VARCHAR2(1) and use T / F or Y / N, which are just as immediately obvious but smaller. Alternatively, use a NUMBER(1,0) and 1 / 0.
Don't forget to put a constraint on your table so that the IS_HISTORICAL column can only have the values you've chosen.
If you're only ever going to have the two values then you may want to consider a CHECK CONSTRAINT:
alter table order_history
add constraint chk_order_history_historical
check ( is_historical in ('T','F') );
Otherwise (and maybe you should do this anyway) use a FOREIGN KEY constraint. Define an extra table, ORDER_HISTORY_TYPES:
create table order_history_types (
  id          varchar2(1)
, description varchar2(4000)
, constraint pk_order_history_types primary key (id)
);
Fill it with your values and then add the foreign key:
alter table order_history
  add constraint fk_order_history_historical
  foreign key (is_historical)
  references order_history_types (id);
You could look into using Virtual Private Database/row-level security. This can be used to automatically add the is_historical = 'FALSE' predicate when certain conditions are met (e.g. you're connected as the application user).
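As a rough sketch of how that could look (all names are illustrative, and the call requires EXECUTE privilege on DBMS_RLS):
create or replace function hide_historical (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2
is
begin
  -- predicate that VPD silently appends to statements against the table
  return 'is_historical = ''FALSE''';
end;
/
begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'hide_historical_orders',
    function_schema => 'APP',
    policy_function => 'hide_historical',
    statement_types => 'select,update'
  );
end;
/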
If users only need the non-historical records, an option is to create an ORDER_HIST table and move the historical records there (delete and insert).
If some users/applications need both types of records, then the partition approach is the best.

Ensure max/min columns don't overlap

Let's say I have the following Categories table:
Category  MinValue  MaxValue
A         1         2
B         3         9
C         10        0
Above, I'm using 0 to indicate no maximum. These values will be configurable by end users: they will be able to add and remove categories and modify the max and min values. Is there any sort of constraint I can place on the table to ensure that no two ranges overlap?
This table will be modified through a web application, so I could pre-validate changes to the table using Javascript, so even an algorithm to prevent overlaps might suffice.
Maybe I'm missing the obvious here, but I don't think this is easy in Oracle.
I've seen solutions using a materialized view that contains the overlaps from the Categories table, is refreshed on commit, and has a check constraint that it must not contain any rows. The empty-table check can be achieved by having a "rownum" column in the materialized view and a check constraint that this "rownum" column's value is always 0.
The check constraint on the materialized view will then be violated on commit if a user enters any overlapping data.
You'll need to write your front end to allow for exceptions to be raised by Oracle on commit and to present an appropriate message to the user.
In the latest versions of PostgreSQL, by contrast, this is very easy with exclusion constraints.
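For comparison, a Postgres sketch (this adapts the table to a range type, which is what exclusion constraints operate on):
create table categories
( category    text primary key
, value_range int4range not null
, exclude using gist (value_range with &&)
);
Any insert or update whose range overlaps (&&) an existing row's range is rejected, and a null upper bound (e.g. int4range(10, null)) replaces the "0 means no maximum" convention.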
I don't think you can do it with a constraint, but you should be able to create a before insert/update trigger and use raise_application_error to abort the statement if it violates the conditions.
Something like...
declare
  l_overlaps integer;
begin
  select count(*) into l_overlaps from categories c
  where :new.minvalue < c.maxvalue and :new.maxvalue > c.minvalue;
  if l_overlaps > 0 then
    raise_application_error(-20001, 'Category ranges must not overlap');
  end if;
end;
(Note that in a row-level trigger on Categories itself, that select will hit the ORA-04091 mutating-table error, so in practice this needs a statement-level or compound trigger.)

How to get last inserted records(row) in all tables in Firebird database?

I have a problem: I need to get the last inserted rows in all tables in a Firebird db. One more thing: these rows must contain a specified column name. I read some articles about the rdb$ system tables but have little experience with them.
There is no reliable way to get the "last row inserted" unless the table has a timestamp field which stores that information (an insertion timestamp).
If the table uses an integer PK generated by a sequence (a generator, in Firebird lingo) then you could query for the highest PK value, but this isn't reliable either.
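For illustration, assuming a hypothetical table some_table whose id column is fed by a generator gen_some_table_id:
SELECT MAX(id) FROM some_table;
SELECT GEN_ID(gen_some_table_id, 0) FROM RDB$DATABASE;
Note that generator values are not transactional, so the two can disagree with each other and with what is actually committed.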
There is no concept of a 'last row inserted'. Visibility and availability to other transactions depend on the time of commit, the transaction isolation specified, etc. Even use of a generator or timestamp, as suggested by ain, does not really help because of this visibility issue.
Maybe you are better off specifying the actual problem you are trying to solve.
SELECT GEN_ID(ID_HEADER, 0) + 1 FROM RDB$DATABASE INTO :ID;
INSERT INTO INVOICE_HEADER (No, Date_of, Etc) VALUES ('122', '2013-10-20', 'Any text');
/* The ID column of the INVOICE_HEADER table gets its number from the generator
   above, so now we have to check whether ID = GEN_ID(ID_HEADER, 0) */
IF (ID = GEN_ID(ID_HEADER, 0)) THEN
BEGIN
  INSERT INTO INVOICE_FOOTER (RELACION_ID, TEXT, Etc) VALUES (:ID, 'Text', Etc);
END
ELSE
  ROLLBACK;
That is all