I am trying to query a table (wishlist_table) to see how many times a member appears in it.
The business rule I am trying to implement is that a member can have at most five items on their wishlist at any one time.
I have been told to do this as a domain constraint, so I have created a function to check how many times a MemberId appears in the wishlist table, but I get an error when calling the function from my check constraint:
CREATE TABLE WishlistTest
(
  WishlistId NUMERIC(6) NOT NULL PRIMARY KEY,
  CONSTRAINT chk_Wishlist CHECK (sw3.wishListUpToFiveItems() >= 0 AND sw3.wishListUpToFiveItems() < 5)
);
CREATE OR REPLACE FUNCTION functionWishListUpToFiveItems
RETURN number IS
  total number(1) := 0;
BEGIN
  SELECT count(*) INTO total
  FROM Member
  WHERE MemberId = 1;
  IF total < 5 THEN
    RETURN total;
  ELSE
    RETURN -1;
  END IF;
END;
If someone could tell me a better way of going about this, or point out what I am doing wrong, it would be great.
I would guess that your instructor wants you to
Add an integer column to the table to store the wish list position
Add a constraint that ensures that the combination of member and wish list position is unique
Add a constraint that limits the wish list position to a value between 1 and 5
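Taken together, those three steps might look something like this (a sketch only; the table and column names beyond Member/MemberId are assumptions, not taken from your schema):

```sql
-- Hypothetical wishlist table: each row records which member owns it
-- and which of that member's five slots it occupies.
CREATE TABLE Wishlist
(
  WishlistId NUMERIC(6) NOT NULL PRIMARY KEY,
  MemberId   NUMERIC(6) NOT NULL REFERENCES Member,
  Position   NUMERIC(1) NOT NULL,
  -- A member cannot use the same slot twice...
  CONSTRAINT wishlist_member_pos_uq UNIQUE (MemberId, Position),
  -- ...and there are only five slots, so at most five rows per member.
  CONSTRAINT wishlist_pos_chk CHECK (Position BETWEEN 1 AND 5)
);
```

Because the unique constraint and the check constraint work together, no member can ever hold more than five wishlist rows, and no function call is needed in the constraint.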
For alternate approaches using triggers or materialized views (with a constraint on the materialized view) and probably more discussion about how Oracle should allow some sort of assertion syntax to enforce this sort of constraint, you can look through this askTom thread.
I have a geospatial db with (among others) a table with locations, and a table with features. The primary key for the locations table is location_id. Location_id is also a foreign key in the features table. The features table also includes the fields "type" (in which a two-letter code is entered to denote particular types of features), and N (which differentiates the different features that may be linked to one location). I figured a combination of location_id, type, and N would make a decent primary key for the features table. Previously, I entered these ids manually. However, I would like for this to be done automatically when a "user" enters a location ID, N, and type. (Ideally I want to find a way to automatically generate the correct N, so that "users" need only enter location_id and type, but I think this should be posted as a separate question?)
I have been trying to achieve this via triggers (see code below), but when I test it by trying to add a new data row to my features table, I get the error message "duplicate key value violates unique constraint features_pkey". Could someone point me in the direction of help for this issue?
CREATE OR REPLACE FUNCTION set_features_id()
  RETURNS TRIGGER
  LANGUAGE PLPGSQL
AS
$$
DECLARE
  compos_id text;
BEGIN
  SELECT loc_id || type || N FROM features INTO compos_id;
  NEW.id := compos_id;
  RETURN NEW;
END;
$$;
DROP TRIGGER IF EXISTS set_lf_id_trigger on public.landscape_features_point;
CREATE TRIGGER set_features_id_trigger
BEFORE INSERT
ON "features"
FOR EACH ROW
EXECUTE PROCEDURE set_features_id();
I'm working on a DB and would like to implement a system where a table's unique ID is generated by combining several other IDs/factors. Basically, I'd want an ID that looks like this:
1234 (A reference to a standard incrementing serial ID from another table)
10 (A reference to a standard incrementing serial ID from another table)
1234 (A number that increments from 1000-9999)
So the ID would look like:
1234101234
Additionally, each of those "entries" will have multiple time sensitive instances that are stored in another table. For these IDs I want to take the above ID and append a time stamp, so it'll look like:
12341012341234567890123
I've looked a little bit at PSQL sequences, but they seem like they're mostly used for simply incrementing up or down at certain levels, I'm not sure how to do this sort of concatenation in creating an ID string or whether it's even possible.
Don't do it! Just use a serial primary key id and then have three different columns:
otherTableID
otherTable2ID
timestamp
You can uniquely identify each row using your serial id. You can look up the other information. And -- even better -- you can create foreign key constraints to represent the relationships among the tables.
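A minimal sketch of that design (the table and column names here are made up for illustration):

```sql
-- Each row gets its own surrogate key; the "components" of the old
-- composite ID live in ordinary columns with foreign key constraints.
CREATE TABLE entries (
    id         serial PRIMARY KEY,
    other_id   integer   NOT NULL REFERENCES other_table (id),
    other2_id  integer   NOT NULL REFERENCES other_table2 (id),
    created_at timestamp NOT NULL DEFAULT now()
);
```

If you ever need the concatenated form for display, you can build it in a SELECT at read time instead of storing it as the key.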
I'm not sure what you want to achieve, but
SELECT col_1::text || col_2::text || col_3::text || now()::text
should work. You should also add a UNIQUE constraint on the column, i.e.
ALTER TABLE this_table ADD CONSTRAINT this_new_column_uq UNIQUE (this_new_column);
But the real question is: why do you want to do this? If you just want a unique, meaningless ID, you only need to create a column of type serial.
create procedure f_return_unq_id(
  CONDITIONAL_PARAMS IN INTEGER,
  v_seq IN OUT INTEGER
)
is
  QUERY_1 VARCHAR2(200);
  RESP INTEGER;
BEGIN
  QUERY_1 := 'SELECT TAB1.SL_ID||TAB2.SL_ID||:v_seq||SYSTIMESTAMP FROM TABLE1 TAB1, TABLE2 TAB2 WHERE TAB1.CONDITION=:V_PARAMS';
  BEGIN
    EXECUTE IMMEDIATE QUERY_1 INTO RESP USING v_seq, CONDITIONAL_PARAMS;
  EXCEPTION
    when others then
      DBMS_OUTPUT.PUT_LINE(SQLCODE);
  END;
  v_seq := RESP;
EXCEPTION
  when others then
    DBMS_OUTPUT.PUT_LINE(SQLCODE);
END;
Pass v_seq to this procedure as your sequence number (1000-9999), plus conditional parameters if there are any.
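A call to the procedure above might look like this (hypothetical: it assumes the TABLE1/TABLE2 schema referenced in the dynamic SQL actually exists):

```sql
DECLARE
  l_seq INTEGER := 1000;  -- your 1000-9999 counter
BEGIN
  f_return_unq_id(42, l_seq);  -- 42 is a made-up conditional parameter
  DBMS_OUTPUT.PUT_LINE(l_seq);
END;
```

Note that RESP is declared INTEGER, while the concatenation includes SYSTIMESTAMP text, so the SELECT INTO may fail on conversion; a VARCHAR2 variable may be safer if you keep this approach.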
My Tables are:
CREATE TABLE member
(
  svn INTEGER,
  campid INTEGER,
  tentname VARCHAR(4),
  CONSTRAINT member_fk_svn FOREIGN KEY (svn) REFERENCES people,
  CONSTRAINT member_fk_campid FOREIGN KEY (campid) REFERENCES camp ON DELETE CASCADE,
  CONSTRAINT member_pk PRIMARY KEY (svn, campid),
  CONSTRAINT member_fk_tentname FOREIGN KEY (tentname) REFERENCES tent,
  CONSTRAINT check_teilnehmer_zelt CHECK (Count(zeltname) over (PARTITION BY (zeltname AND lagerid))) <= zelt.schlafplaetze
);
With the last constraint, I want to check that there are not more members assigned to a tent than the capacity of it.
Thank you in advance for your help.
This would require a SQL assertion, which is not currently supported by Oracle (or indeed any DBMS). However, Oracle are considering adding support for these in the future (please upvote that idea!)
Solution using a Materialized View
Currently you may be able to implement this constraint using a materialized view (MV) with a check constraint - something I blogged about many years ago. In your case the materialized view query would be something like:
select t.tent_id
from tents t, members m
where m.tent_id = t.tent_id
group by t.tent_id, t.capacity
having sum(m.num_members) > t.capacity;
The check constraint could be:
check (t.tent_id is null)
The check constraint would be violated for any row returned by the materialized view, so ensures that the MV is always empty i.e. no tents exist that are over capacity.
Notes:
I deliberately did not use ANSI join syntax, since MVs don't tend to like it (the same join may be permitted in old syntax but not permitted in ANSI syntax). Of course feel free to try ANSI first.
I haven't confirmed that this particular query is permitted in an MV with REFRESH COMPLETE ON COMMIT. The rules on what can and cannot be used vary from version to version of Oracle.
Watch out for the performance impact of maintaining the MV.
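Putting the pieces together, the materialized view might be declared something like this (a sketch only; whether this exact query is permitted with ON COMMIT refresh depends on your Oracle version, as noted above, and the table/column names follow the example query):

```sql
CREATE MATERIALIZED VIEW tents_over_capacity
  REFRESH COMPLETE ON COMMIT
AS
SELECT t.tent_id
FROM   tents t, members m
WHERE  m.tent_id = t.tent_id
GROUP  BY t.tent_id, t.capacity
HAVING SUM(m.num_members) > t.capacity;

-- Any row appearing in the MV violates this, so the MV must stay empty.
ALTER TABLE tents_over_capacity
  ADD CONSTRAINT tents_over_capacity_chk CHECK (tent_id IS NULL);
```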
Alternative solution using a trigger
Another way would be to add a column total_members to the tents table, and use a trigger on members to maintain that e.g.
create trigger members_trg
after insert or delete or update of num_members on members
for each row
declare
  l_total_members tents.total_members%type;
begin
  select total_members
    into l_total_members
    from tents
   where tent_id = nvl(:new.tent_id, :old.tent_id)
     for update of total_members;
  if inserting then
    l_total_members := l_total_members + :new.num_members;
  elsif deleting then
    l_total_members := l_total_members - :old.num_members;
  elsif updating then
    l_total_members := l_total_members - :old.num_members + :new.num_members;
  end if;
  update tents
     set total_members = l_total_members
   where tent_id = nvl(:new.tent_id, :old.tent_id);
end;
Then just add the check constraint:
alter table tents add constraint tents_chk
check (total_members <= capacity);
By maintaining the total in the tents table, this solution serializes transactions and thus avoids the data corruption you will get with other trigger-based solutions in multi-user environments.
No, it is not. From the documentation:
The search condition must always return the same value if applied to
the same values. Thus, it cannot contain any of the following:
* Dynamic parameters (?)
* Date/Time Functions (CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP)
* Subqueries
* User Functions (such as USER, SESSION_USER, CURRENT_USER)
I'm trying to formulate some check constraints in SQL Anywhere 9.0.
Basically I have schema like this:
CREATE TABLE limits (
id INT IDENTITY PRIMARY KEY,
count INT NOT NULL
);
CREATE TABLE sum (
user INT,
limit INT,
my_number INT NOT NULL CHECK(my_number > 0),
PRIMARY KEY (user, limit)
);
I'm trying to enforce a constraint that, for each limit, the sum of my_number is at most count in the limits table.
I've tried
CHECK ((SELECT sum(my_number) FROM sum WHERE limit = limit) <= (SELECT count FROM limits WHERE id = limit))
and
CHECK (((SELECT sum(my_number) FROM sum WHERE limit = limit) + my_number) <= (SELECT count FROM limits WHERE id = limit))
and they both seem not to do the correct thing. They are both off by one (meaning the insertion only fails once the total has already gone over the limit, not before that).
So my question is, with what version of the table are these subqueries being executed against? Is it the table before the insertion happens, or does the subquery check for consistency after the insert happens, and rolls back if it finds it invalid?
I do not really understand what you are trying to enforce here, but based on this help topic:
Using CHECK constraints on columns
Once a CHECK condition is in place, future values are evaluated
against the condition before a row is modified.
I would go for a BEFORE INSERT trigger. You have more options and can raise a better error message.
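Such a trigger might look roughly like this (a sketch only: it uses your sum/limits schema, but I have not verified this against SQL Anywhere 9.0's exact trigger and RAISERROR syntax):

```sql
-- Reject an insert if it would push the total over the limit.
CREATE TRIGGER check_limit BEFORE INSERT ON "sum"
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
  DECLARE total INT;
  SELECT COALESCE(SUM(my_number), 0) INTO total
    FROM "sum" WHERE "limit" = new_row."limit";
  IF total + new_row.my_number >
     (SELECT "count" FROM limits WHERE id = new_row."limit") THEN
    RAISERROR 99999 'limit exceeded for this row';
  END IF;
END;
```

Because the trigger reads the table before the row is inserted, you avoid the off-by-one behavior you saw with the CHECK subqueries.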
I have a table in which I have a numeric field A, which is set to be UNIQUE. This field is used to indicate an order in which some action has to be performed. I want to make an UPDATE of all the values that are greater, for example, than 3. For example,
I have
A
1
2
3
4
5
Now, I want to add 1 to all values of A greater than 3. So, the result would be
A
1
2
3
5
6
The question is, whether it is possible to be done using only one query? Remember that I have a UNIQUE constraint on the column A.
Obviously, I tried
UPDATE my_table SET A = A + 1 WHERE A > 3;
but it did not work as I have the constraint on this field.
PostgreSQL 9.0 and later
PostgreSQL 9.0 added deferrable unique constraints, which is exactly the feature you seem to need. This way, uniqueness is checked at commit-time rather than update-time.
Create the UNIQUE constraint with the DEFERRABLE keyword:
ALTER TABLE foo ADD CONSTRAINT foo_uniq UNIQUE (foo_id) DEFERRABLE;
Later, before running the UPDATE statement, you run in the same transaction:
SET CONSTRAINTS foo_uniq DEFERRED;
Alternatively you can create the constraint with the INITIALLY DEFERRED keyword on the unique constraint itself -- so you don't have to run SET CONSTRAINTS -- but this might affect the performance of your other queries which don't need to defer the constraint.
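The full sequence would look something like this (using the foo/foo_id names from the example above):

```sql
BEGIN;
SET CONSTRAINTS foo_uniq DEFERRED;
UPDATE foo SET foo_id = foo_id + 1 WHERE foo_id > 3;
COMMIT;  -- uniqueness is checked here, after all rows have been shifted
```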
PostgreSQL 8.4 and older
If you only want to use the unique constraint for guaranteeing uniqueness -- not as a target for a foreign key -- then this workaround might help:
First, add a boolean column such as is_temporary to the table that temporarily distinguishes updated and non-updated rows:
CREATE TABLE foo (value int not null, is_temporary bool not null default false);
Next create a partial unique index that only affects rows where is_temporary=false:
CREATE UNIQUE INDEX foo_value_uniq ON foo (value) WHERE is_temporary=false;
Now, every time you make the updates you described, you run them in two steps:
UPDATE foo SET is_temporary=true, value=value+1 WHERE value>3;
UPDATE foo SET is_temporary=false WHERE is_temporary=true;
As long as these statements occur in a single transaction, this will be totally safe -- other sessions will never see the temporary rows. The downside is that you'll be writing the rows twice.
Do note that this is merely a unique index, not a constraint, but in practice it shouldn't matter.
You can do it in 2 queries with a simple trick:
First, update your column with +1, but multiply by a factor of -1:
update my_table set A = (A + 1) * -1 where A > 3;
You will switch from 4, 5, 6 to -5, -6, -7.
Second, multiply by -1 again to restore positive values:
update my_table set A = A * -1 where A < 0;
You will have: 5, 6, 7
(This relies on A never legitimately containing negative values of its own.)
You can do this with a loop. I don't like this solution, but it works:
CREATE TABLE update_unique (id INT NOT NULL);
CREATE UNIQUE INDEX ux_id ON update_unique (id);
INSERT INTO update_unique(id) SELECT a FROM generate_series(1,100) AS foo(a);
DO $$
DECLARE v INT;
BEGIN
FOR v IN SELECT id FROM update_unique WHERE id > 3 ORDER BY id DESC
LOOP
UPDATE update_unique SET id = id + 1 WHERE id = v;
END LOOP;
END;
$$ LANGUAGE 'plpgsql';
In PostgreSQL 8.4, you might have to create a function to do this since you can't arbitrarily run PL/PGSQL from a prompt using DO (at least not to the best of my recollection).
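On 8.4, the same loop wrapped in a function might look like this (a sketch derived directly from the DO block above; the function name is made up):

```sql
CREATE OR REPLACE FUNCTION shift_ids() RETURNS void AS $$
DECLARE v INT;
BEGIN
  -- Walk the ids from largest to smallest so each +1 lands on a free value.
  FOR v IN SELECT id FROM update_unique WHERE id > 3 ORDER BY id DESC
  LOOP
    UPDATE update_unique SET id = id + 1 WHERE id = v;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT shift_ids();
```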