Inserting values into the table - SQL

I'm trying to insert values into the tables that I created.
These are the values that I'm trying to insert:
INSERT INTO DDR_Rental (customer_ID, rental_date, rent_fee, film_title, start_date, expiry_date, rating)
VALUES (12345, '12-Mar-19', '4.99', 'Peppermint', '12-Mar-19', '22-Mar-19', 4);
These are the datatypes and constraints:
CREATE TABLE DDR_Rental
(customer_ID NUMBER(5),
rental_date DATE,
rent_fee NUMBER(3,2) CONSTRAINT SYS_RENTAL_FEE_NN NOT NULL,
film_title VARCHAR2(20),
start_date DATE,
expiry_date DATE,
rating NUMBER(5),
CONSTRAINT SYS_RENTAL_PK PRIMARY KEY ((customer_ID), (rental_date), (film_title)),
CONSTRAINT SYS_RENTAL_CUS_ID_FK1 FOREIGN KEY (customer_ID) REFERENCES
DDR_CUSTOMER(CUSTOMER_ID),
CONSTRAINT SYS_RENTAL_FILM_TITLE_FK2 FOREIGN KEY (film_title) REFERENCES
DDR_MOVIE_TITLE(FILM_TITLE),
CONSTRAINT SYS_RENTAL_EXP_DATE_CK CHECK (expiry_date >= start_date),
CONSTRAINT SYS_RENTAL_START_DATE_CK CHECK (start_date >= rental_date),
CONSTRAINT SYS_RENTAL_RATING_CK CHECK (REGEXP_LIKE(rating,('[12345]'))));
The error says unique constraint (CPRG250.SYS_RENTAL_PK) violated

It seems you are trying to add a duplicate rental event: the same film rented by the same customer on the same day. That can legitimately happen if your business logic allows a customer to rent a movie, return it the same day, and rent it again.
Knowing your business, you have two ways to deal with this situation:
1. Your business model doesn't allow it. In that case this is a duplicate record and you shouldn't add an already existing one, so raising that error is perfectly fine: it prevents duplicates of an event that happened only once.
2. Your business model allows it. In that case you should make your rental_date column store the time of day along with the date, so you know when the rental actually happened. In Oracle a DATE column already holds a time component down to the second, so it is enough to insert a full date-time value (for example SYSDATE) instead of a date-only literal; for sub-second precision use TIMESTAMP. If the table is already created and you change the column type, you will need to drop the primary key, modify the column, and re-create the primary key (a sketch of that sequence follows). Check the values stored in your table afterwards, since 12-Mar-19 will now be represented as 12-MAR-19 00:00:00, with a time portion that wasn't specified before.
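A sketch of that column change, using the table and constraint names from the question (the in-place DATE to TIMESTAMP conversion is assumed to be allowed here; if it is not, add a new column and copy the data over instead):
ALTER TABLE DDR_Rental DROP CONSTRAINT SYS_RENTAL_PK;
ALTER TABLE DDR_Rental MODIFY (rental_date TIMESTAMP);
ALTER TABLE DDR_Rental ADD CONSTRAINT SYS_RENTAL_PK
  PRIMARY KEY (customer_ID, rental_date, film_title);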
In addition, for case (1), you could wrap your code and handle this exception so that a clear message is returned when it happens, indicating that the movie has already been rented.
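For example, a minimal PL/SQL sketch that catches the duplicate-key error for the insert from the question (the message text is illustrative):
BEGIN
  INSERT INTO DDR_Rental (customer_ID, rental_date, rent_fee, film_title, start_date, expiry_date, rating)
  VALUES (12345, '12-Mar-19', '4.99', 'Peppermint', '12-Mar-19', '22-Mar-19', 4);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- SYS_RENTAL_PK was violated: this customer already has a rental of this film on this date
    DBMS_OUTPUT.PUT_LINE('This movie has already been rented by this customer on that date.');
END;
/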
Moreover, since you don't have a table for the physical copies of movies in your inventory, this can lead to mistakes, because you may own more than one copy of a movie. In that case I suggest you create separate film and film_copy tables to properly identify which copy of a film has been rented, so that another copy can still be rented out.
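A rough sketch of a copy-level table, reusing the existing DDR_MOVIE_TITLE as the film-level table (the table and column names below are illustrative, not part of the original schema):
CREATE TABLE DDR_Film_Copy
(copy_id NUMBER(5) CONSTRAINT SYS_FILM_COPY_PK PRIMARY KEY,
film_title VARCHAR2(20) CONSTRAINT SYS_FILM_COPY_TITLE_NN NOT NULL,
CONSTRAINT SYS_FILM_COPY_TITLE_FK FOREIGN KEY (film_title) REFERENCES DDR_MOVIE_TITLE(FILM_TITLE));
DDR_Rental would then reference copy_id instead of film_title, so that two copies of the same film can be rented out independently.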

You have a unique constraint on your table, and the table already contains a record with the customer_id, rental_date and film_title combination that you are trying to insert.
Try this query and you will see that the record is already there:
select * from DDR_Rental
where customer_id=12345 and rental_date='12-Mar-19' and film_title='Peppermint'


Creating tables - integrity constraints

Create the following tables:
Customer
KNr (primary key)
Name (at most 15 characters)
City (at most 10 characters)
Country (at most 10 characters)
Balance (Type FLOAT)
Discount (Type FLOAT)
Products
PNr (greater than 1 and primary key)
Descr (not NULL, at most 10 characters and unique)
Weight (Type FLOAT)
Think about the integrity constraints for the columns Price, StorageLocation and Stock.
Orders
OrdNr (Type INTEGER, greater than 0 and primary key)
Mon (Type INTEGER, not NULL and between 1 and 12)
Day (Type INTEGER, not NULL and between 1 and 31)
PNr (Foreign Key)
KNr (Foreign Key)
The attributes Month, Day, Pnr and Knr must together be unique. Think about the integrity constraints for the columns Quantity, Sum and Status.
I have done the following:
For 1:
CREATE TABLE Customer
(
KNr PRIMARY KEY,
Name CHAR(15),
City CHAR(10)
Country CHAR(10)
Balance FLOAT
Discount FLOAT
);
Is that correct?
For 2:
CREATE TABLE Products
(
PNr PRIMARY KEY CHECK (PNr > 1) ,
Descr NOT NULL CHAR(10) UNIQUE.
Weight FLOAT
Price FLOAT CHECK (Price > 0) // Is checking if it is positive an integrity constraint?
StorageLocation CHAR(15) // What integrity constraint do we use here? If it is not Null for example?
Stock INTEGER // What integrity constraint do we use here? If it is not negative for example?
);
Is that correct?
For 3:
CREATE TABLE Orders
(
BestNr INTEGER PRIMARY KEY CHECK (BestNr > 0) ,
Mon INTEGER NOT NULL CHECK(Mon >= 1 and Mon <=12)
Day INTEGER NOT NULL CHECK(Day >= 1 and Day <=31)
FOREIGN KEY (PNr) REFERENCES Customer (PNr),
FOREIGN KEY (KNr) REFERENCES Products (KNr)
Quantity INTEGER CHECK(Quantity >0) // It is the ordered quantity, or not? What integrity constraints can we consider?
Sum FLOAT // Is this the sum of invoices? Or what is this meant? What integrity constraints can we consider?
Status CHAR(20) // It is meant if is paid, delivered, etc? So this contains words, right? What integrity constraints can we consider?
UNIQUE (Mon, Day, Pnr, Knr)
);
Is the last line the right way to express that the attributes Mon, Day, PNr and KNr must together be unique?
You are actually pretty close if this is viewed as a logical model defining requirements. As a physical model, however, the syntax is considerably off.
I will not go through each table, just Orders, and I will slice and dice along the way, leaving some things for you to correct and adding some suggestions for your consideration.
First off, if you want comments in your DDL you can add them, but they begin with -- instead of //. A better approach is to use COMMENT ON, so the comments become part of the permanent data dictionary.
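For example, a minimal sketch of a permanent comment (the wording is illustrative):
comment on column orders.status is 'Order processing state, e.g. pending, shipped or paid';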
BestNr:
As a column name there is nothing wrong with it, but is it clear what BestNr refers to, and what makes it better than any other number? Perhaps a better name would be Ord_Nr (but that is of course just an opinion). Declaring it as the primary key comes with two automatic constraints: NOT NULL and UNIQUE. There is nothing wrong with the check constraint either. However, a better approach would be to tell the DBMS to generate an identity column (see CREATE TABLE ... GENERATED ...).
Mon and Day:
Technically there is nothing wrong here. However, there is a data integrity hole, as these constraints still permit invalid dates: Feb 30 would pass both of your checks but is not a valid date, and day = 31 passes for months that have only 30 days. To ensure only valid dates, define a single date column instead. This also eliminates the need for the check constraints; the month and day can be extracted when needed.
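For example, with a single date column (ord_dt in the sketch further down), the month and day can be derived when needed; the query below is only illustrative:
select ord_nr,
       extract(month from ord_dt) as ord_month,
       extract(day from ord_dt) as ord_day
from orders;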
FOREIGN KEY (PNr) REFERENCES Customer (PNr), FOREIGN KEY (KNr) REFERENCES Products (KNr):
Your references are backwards: PNr refers to Products, KNr to Customer. Also, you must define them as columns first and then declare the foreign keys. While nothing is wrong with these as column names, are they descriptive of what they refer to? PNr perhaps, but not so much KNr (unless a customer is always referred to as K...). Perhaps prod_nr and cust_nr would be better (but perhaps no product reference at all - more on that later).
Sum:
This column can easily be derived when needed, and it will be difficult to keep current (what happens when another item is added to the order, or updated, or deleted?). Further, it is a very poor choice for a column name because SUM is a SQL standard reserved word (though not reserved by every RDBMS; Postgres, for one, allows it). Drop the column and derive the value when needed.
Status:
You would want to constrain this to a set of predefined values: either a CHECK constraint, an ENUM, or a lookup (reference) table.
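For illustration, the CHECK and lookup-table options might look roughly like this (the names and value list are made up; the sketch at the end of this answer uses the ENUM option instead):
-- check constraint option, as a column definition inside orders:
--   status varchar(20) check (status in ('pending', 'shipped', 'delivered', 'paid'))
-- lookup (reference) table option:
create table order_statuses ( status varchar(20) primary key );
-- orders.status would then be declared as:
--   status varchar(20) references order_statuses (status)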
Normalization:
Consider normalizing a bit further. An order typically will contain multiple items (lines). These can/should be extracted into another table; call it Order_Lines and move PNr and Quantity into it.
Taking all of the above into consideration, we arrive at:
-- method to constrain status
create type order_status as enum ('pending', 'picked', 'shipped', 'delivered', 'billed', 'paid', 'back ordered', 'on hold', 'canceled' ); -- or others
create table orders ( ord_nr integer generated always as identity primary key
, ord_dt date
, cust_nr integer references customers (cust_nr)
, status order_status -- questionable: Can it be derived?
, constraint one_per_cust_per_day unique (cust_nr, ord_dt) -- combine multiple orders for customer into 1 per day. ??
);
create table order_lines ( ord_ln_nr integer generated always as identity primary key -- optional
, ord_nr integer not null references orders(ord_nr)
, prod_nr integer not null references products(prod_nr)
, quantity integer not null check (quantity>0)
, price float -- Note1
, status order_status
, constraint one_ln_per_ord_prod unique ( ord_nr, prod_nr)
);
Note 1: Normally you do not copy columns from referenced tables; doing so duplicates data, and you can always get the value through the reference. However, price tends to be a volatile column, and a price change should not automatically apply to existing orders. For this reason the price from Products is copied at the time the order is placed.
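As an example of deriving the dropped Sum column when needed, a sketch against the order_lines table above:
select ord_nr, sum(quantity * price) as order_total
from order_lines
group by ord_nr;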

Date time format in Oracle

I am new to SQL and am working on an example. Among the tables I have created, I have the comments table:
CREATE TABLE comments (
club VARCHAR2(60) NOT NULL,
nick VARCHAR2(35),
msg_date DATE,
title VARCHAR2(100) NOT NULL,
director VARCHAR2(50) NOT NULL,
subject VARCHAR2(100),
message VARCHAR2(1500),
valoration NUMBER(2),
CONSTRAINT PK_COMMENTS PRIMARY KEY (nick,msg_date),
CONSTRAINT FK_COMMENTS_MEMBER FOREIGN KEY (nick,club) REFERENCES membership ON DELETE CASCADE,
CONSTRAINT FK_COMMENTS_MOVIES FOREIGN KEY (title,director) REFERENCES movies,
CONSTRAINT CK_COMMENTS_VAL CHECK (valoration<11)
);
I am asked to create a trigger that does the following:
if a comment arrives on the same date as another one already stored, register it with the date
'one second later'.
The problem I have is that I do not know how to add the 'one second' to a DATE value. Any idea how to solve this problem?
msg_date + interval '1' second
or alternatively msg_date + (1/(24*60*60))
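A minimal sketch of a trigger built on that expression, assuming single-row INSERT ... VALUES statements and checking per nick to match the (nick, msg_date) primary key; names are illustrative and the caveats below still apply:
CREATE OR REPLACE TRIGGER trg_comments_bump_date
BEFORE INSERT ON comments
FOR EACH ROW
DECLARE
  v_dups INTEGER;
BEGIN
  -- keep adding one second until no stored comment by this nick has the same date
  LOOP
    SELECT COUNT(*) INTO v_dups
    FROM comments
    WHERE nick = :NEW.nick
    AND msg_date = :NEW.msg_date;
    EXIT WHEN v_dups = 0;
    :NEW.msg_date := :NEW.msg_date + INTERVAL '1' SECOND;
  END LOOP;
END;
/
Querying the triggering table inside a row-level trigger works here only because single-row INSERT ... VALUES statements do not raise the mutating-table error; multi-row inserts would fail with ORA-04091.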
However, this whole scenario is fraught with danger. While the trigger is checking existing messages in the table, the table may be changing in other transactions, so there is a real risk of race conditions here: two messages that both add 1 second to the same existing message will end up with the same date. This would be the case whether the check is done in a trigger or in application code.
If this is a real-world scenario I would avoid the trigger, use a TIMESTAMP rather than a DATE (its precision ranges from milliseconds down to nanoseconds), and treat the much lower risk of two messages having the same timestamp as a business problem: what is the implication if it does occur?

Restrict the number of entries in a relation based on conditions across several relations

I am using PostgreSQL and am trying to restrict the number of concurrent loans that a student can have. To do this, I have created a CTE that selects all unreturned loans grouped by StudentID, and counts the number of unreturned loans for each StudentID. Then, I am attempting to create a check constraint that uses that CTE to restrict the number of concurrent loans that a student can have to 7 at most.
The below code does not work because it is syntactically invalid, but hopefully it can communicate what I am trying to achieve. Does anyone know how I could implement my desired restriction on loans?
CREATE TABLE loan (
id SERIAL PRIMARY KEY,
copy_id INTEGER REFERENCES media_copies (copy_id),
account_id INT REFERENCES account (id),
loan_date DATE NOT NULL,
expiry_date DATE NOT NULL,
return_date DATE,
WITH currentStudentLoans (student_id, current_loans) AS
(
SELECT account_id, COUNT(*)
FROM loan
WHERE account_id IN (SELECT id FROM student)
AND return_date IS NULL
GROUP BY account_id
)
CONSTRAINT max_student_concurrent_loans CHECK(
(SELECT current_loans FROM currentStudentLoans) BETWEEN 0 AND 7
)
);
For additional (and optional) context, I include an ER diagram of my database schema.
You cannot do this using an in-line CTE like this. You have several choices.
The first is a UDF and check constraint. Essentially, the logic in the CTE is put in a UDF and then a check constraint validates the data.
The second is a trigger to do the check on this table. However, that is tricky because the counts are on the same table.
The third is storing the total number in another table -- probably accounts -- and keeping it up-to-date for inserts, updates, and deletes on this table. Keeping that value up-to-date requires triggers on loans. You can then put the check constraint on accounts.
I'm not sure which solution fits best in your overall schema. The first is closest to what you are doing now. The third "publishes" the count, so it is a bit clearer what is going on.
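A minimal sketch of the first option, assuming the loan and student tables from the question; the function and constraint names are illustrative:
CREATE OR REPLACE FUNCTION student_open_loans(p_account_id integer)
RETURNS bigint AS $$
  SELECT COUNT(*)
  FROM loan
  WHERE account_id = p_account_id
    AND account_id IN (SELECT id FROM student)
    AND return_date IS NULL;
$$ LANGUAGE sql;
-- The row being inserted is typically not yet visible to the count, so compare
-- with < 7 to cap the total (existing unreturned loans plus the new one) at 7.
ALTER TABLE loan
  ADD CONSTRAINT max_student_concurrent_loans
  CHECK (student_open_loans(account_id) < 7);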

Creating unique primary key to ignore duplicates

I have a main large table which I have had to put into 3rd normal form and into smaller tables (with primary and foreign keys linking them). The table is about renting books.
I have a customer table which I need to create a primary key for. In the main large table there are duplicates of the customer_id, because the table as a whole records book rentals, so one customer may have more than one rental.
The table I am currently trying to add a primary key to will not have any nulls or duplicates; however, I am unsure how to create the primary key for it without getting the error - I am unsure how to make it unique.
CREATE TABLE customer AS
SELECT cust_id, country_id, name, address, postcode
FROM BOOKS
WHERE cust_id != 0;
ALTER TABLE customer
ADD PRIMARY KEY (cust_id);
Is anyone able to help me create the primary key on my customer table, taking just each unique cust_id from the main table?
In SQL Server the straightforward way to add unique keys is to use IDENTITY. Identity fields are integer fields that auto-populate with successive values based on a specified start value and interval. If you don't specify them, it will start at 1 and increase the value by 1 each time a value is assigned.
While it's usually done when creating a table, you can also do it in your ALTER TABLE step, and it will assign values when the column is added to an existing table. I've explicitly specified the start value and interval that match the defaults to show the syntax:
ALTER TABLE customer
ADD cust_id int not null PRIMARY KEY IDENTITY(1,1)

Oracle SQL Check Constraint

What I want to do is simple and below are details. I have two tables.
Create Table Event(
IDEvent number (8) primary key,
StartDate date not null,
EndDate date not null
);
This is fine.
Here is second table.
Create Table Game(
IDGame number (8) primary key,
GameDate date not null,
constraint checkDate
check (GameDate >= to_date(StartDate references from Event(StartDate)))
);
The constraint checkDate is supposed to check that GameDate is not earlier than the StartDate of the referenced event. When I run this I get the error: missing right parenthesis.
My question is: if this is possible to do, why is it giving me an error?
A check constraint in a table can only verify conditions on the columns of that particular table. You cannot refer to columns from other tables.
If you need to verify conditions that involve columns from a different table, you can do it from a before insert/update trigger on that table.
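A minimal sketch of that trigger approach, assuming Game gains a mandatory eventid column linking it to Event (as in the materialized-view answer further down); it only guards changes to Game, not later changes to Event, which the next answer goes into:
CREATE OR REPLACE TRIGGER trg_game_check_start
BEFORE INSERT OR UPDATE OF GameDate, eventid ON Game
FOR EACH ROW
DECLARE
  v_start Event.StartDate%TYPE;
BEGIN
  -- lock the parent Event row so its StartDate cannot change under us before commit
  SELECT StartDate INTO v_start
  FROM Event
  WHERE IDEvent = :NEW.eventid
  FOR UPDATE;
  IF :NEW.GameDate < v_start THEN
    RAISE_APPLICATION_ERROR(-20001, 'GameDate is earlier than the StartDate of the event');
  END IF;
END;
/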
What you want to do is far from simple.
The syntax you propose doesn't work on any RDBMS. It would be nice to have, but none of the RDBMS vendors have implemented it, because enforcing such a cross-table integrity rule would mean locking the referenced table while updating the Game table. If you try to build it yourself, you'll have to do the locking yourself. You'll have to take into account every action that could possibly violate your rule, such as:
inserting a game
updating the gamedate to a less recent date
updating the event startdate to a more recent date
deleting an event
And for each of these actions you'll have to write code that is multi-user proof, by locking the right records in the other table.
If you want to reduce this complexity, you might want to look at a product called RuleGen (www.rulegen.com)
Or you may want to build a specific API and include the checks in just the right places. You'll still have to manually lock yourself in this scenario.
There is one hack that you can make, but I doubt that performance of inserting games or events will be acceptable, once the tables grow to a certain size:
CREATE TABLE Event
(
IDEvent NUMBER(8) PRIMARY KEY,
StartDate DATE NOT NULL,
EndDate DATE NOT NULL
);
CREATE TABLE Game
(
IDGame NUMBER(8) PRIMARY KEY,
GameDate DATE NOT NULL,
eventid NUMBER(8), -- this is different to your table definition
CONSTRAINT fk_game_event FOREIGN KEY (eventid) REFERENCES event (idevent)
);
CREATE INDEX game_eventid ON game (eventid);
CREATE MATERIALIZED VIEW LOG ON event
WITH ROWID, SEQUENCE (idevent, startdate) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON game
WITH ROWID, SEQUENCE (idgame, eventid, gamedate) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_event_game
REFRESH FAST ON COMMIT WITH ROWID
AS
SELECT ev.idevent,
ev.startdate,
g.gamedate
FROM event ev, game g
WHERE g.eventid = ev.idevent;
ALTER TABLE mv_event_game
ADD CONSTRAINT check_game_start check (gamedate >= startdate);
Now any transaction that inserts a game that starts before the referenced event will throw an error when trying to commit the transaction:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning and OLAP options
SQL> INSERT INTO event
2 (idevent, startdate, enddate)
3 values
4 (1, date '2012-01-22', date '2012-01-24');
1 row created.
SQL>
SQL> INSERT INTO game
2 (idgame, eventid, gamedate)
3 VALUES
4 (1, 1, date '2012-01-01');
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (FOOBAR.CHECK_GAME_START) violated
But again: This will make inserts in both tables slower as the query inside the mview needs to be run each time a commit is performed.
I wasn't able to change the refresh type to FAST which probably would improve commit performance.