I am creating a database for booking hotel rooms. I am stuck on a constraint that checks whether a 'dateFrom' value for a guest falls between any existing 'dateFrom' and 'dateTo' values for that specific guest, i.e., a guest cannot book more than one room at a time.
I am getting an error: "cannot use subquery in check constraint":
CREATE TABLE tomsBooking
(
hotelNo HotelNo NOT NULL,
guestNo INT NOT NULL,
dateFrom DATE NOT NULL,
dateTo DATE NOT NULL,
roomNo RoomNumber,
CONSTRAINT GuestOverlap
CHECK ( NOT EXISTS
(SELECT * FROM tomsBooking b
WHERE b.guestNo = b.guestNo
AND b.dateTo >= dateFrom
AND b.dateFrom <= dateTo
)
)
);
Unfortunately, Postgres does not support sub-queries in check constraints.
But this case is exactly what exclusion constraints were created for:
CREATE TABLE tomsBooking
(
hotelNo HotelNo NOT NULL,
guestNo INT NOT NULL,
dateFrom DATE NOT NULL,
dateTo DATE NOT NULL,
roomNo RoomNumber
);
alter table tomsbooking
add constraint guestoverlap
exclude using gist (guestno with =, daterange(datefrom, dateto) with &&);
For more details and examples, see the manual: https://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-CONSTRAINT
In order for a GiST index to be able to use the = operator, you need to install the btree_gist module:
create extension btree_gist;
(That only needs to be done once per database)
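One detail worth knowing: daterange(datefrom, dateto) uses half-open '[)' bounds by default, so a booking that ends on the day another one starts does not count as a conflict. The overlap test itself can be sanity-checked outside Postgres; a minimal Python sketch (the overlaps helper is made up for the demo, not part of the schema):

```python
from datetime import date

def overlaps(a_from, a_to, b_from, b_to):
    # Half-open intervals [from, to), matching the default '[)' bounds
    # of a Postgres daterange built with daterange(datefrom, dateto).
    return a_from < b_to and b_from < a_to

# Back-to-back bookings do not overlap under '[)' semantics...
assert not overlaps(date(2017, 1, 1), date(2017, 1, 5),
                    date(2017, 1, 5), date(2017, 1, 9))
# ...but a one-day collision does.
assert overlaps(date(2017, 1, 1), date(2017, 1, 6),
                date(2017, 1, 5), date(2017, 1, 9))
```

If you want bookings sharing an endpoint to conflict as well, build the range with inclusive bounds, e.g. daterange(datefrom, dateto, '[]').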
Bearing in mind that I don't know Postgres... it looks to me as though you need the term VALUE near the BETWEEN-style comparison, so that the check knows which value it is comparing.
As an alternative though, and based on Postgresql query between date ranges, I would structure it as this:
CREATE DOMAIN DateFrom AS DATE
CHECK (VALUE > '2016-10-16' AND NOT EXISTS (SELECT * FROM tomsBooking b
WHERE b.guestNo = g.guestNo
AND VALUE >= dateFrom
AND VALUE <= dateTo
)
);
As I say, I don't know Postgres, so you may have to tweak my suggestion.
Related
I'm working on a project to monitor downtimes of production lines with an embedded device. I want to automate the acknowledging of these downtimes with generic rules the user can configure.
I want to use a TRIGGER, but I get a syntax error near UPDATE, even though the documentation says it should be fine to use the WITH statement.
CREATE TRIGGER autoAcknowledge
AFTER UPDATE OF dtEnd ON ackGroups
FOR EACH ROW
WHEN old.dtEnd IS NULL AND new.dtEnd IS NOT NULL
BEGIN
WITH sub1(id, stationId, groupDur) AS (
SELECT MIN(d.id), d.station,
strftime('%s', ag.dtEnd) - strftime('%s', ag.dtStart)
FROM ackGroups AS ag
LEFT JOIN downtimes AS d on d.acknowledge = ag.id
WHERE ag.id = old.id
GROUP BY ag.id ),
sub2( originId, groupDur, reasonId, above, ruleDur) AS (
SELECT sub1.stationId, sub1.groupDur, aar.reasonId, aar.above, aar.duration
FROM sub1
LEFT JOIN autoAckStations AS aas ON aas.stationId = sub1.stationId
LEFT JOIN autoAckRules AS aar ON aas.autoAckRuleId = aar.id
ORDER BY duration DESC )
UPDATE ackGroups SET (reason, dtAck, origin)=(
SELECT reasonId, datetime('now'), originId
FROM sub2 as s
WHERE ( s.ruleDur < s.groupDur AND above = 1 ) OR (s.ruleDur > s.groupDur AND above = 0)
LIMIT 1
)
WHERE id = old.id;
END
Background: First we have the downtimes table. Each production line consists of multiple parts called stations. Each station can start a line downtime, and these downtimes can overlap with other stations' downtimes.
CREATE TABLE "downtimes" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"station" integer NOT NULL,
"acknowledge" integer,
"dtStart" datetime NOT NULL,
"dtEnd" datetime,
"dtLastModified" datetime)
Overlapping downtimes are grouped into acknowledge groups using a TRIGGER AFTER INSERT on downtimes, which sets the right acknowledge id or creates a new group.
CREATE TABLE "ackGroups" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"reason" integer,
"dtAck" datetime,
"dtStart" datetime NOT NULL,
"dtEnd" datetime,
"line" integer NOT NULL,
"origin" integer)
The autoAckRules table represents the configuration. The user decides whether the rule should apply to durations above or below a certain value, and which reasonId should be used to acknowledge.
CREATE TABLE "autoAckRules" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"description" text NOT NULL,
"reasonId" integer NOT NULL,
"above" bool NOT NULL,
"duration" real NOT NULL)
The autoAckStations table is used to manage the M:N relationship. Each rule allows multiple stations which may have started the ackGroup.
CREATE TABLE autoAckStations (
autoAckRuleId INTEGER NOT NULL,
stationId INTEGER NOT NULL,
PRIMARY KEY ( autoAckRuleId, stationId )
)
When the last downtime ends, dtEnd of ackGroups is set to datetime('now') and the trigger fires to check whether there is an autoAckRule that fits.
If I substitute the sub-selects with a SELECT .. FROM (SELECT .. FROM (SELECT .. FROM )) cascade, is there a nice way to avoid the need to write and evaluate it twice?
Or am I missing something stupid?
Common table expressions are not supported for statements inside of triggers. You need to convert the CTE into sub-queries, such as:
CREATE TRIGGER autoAcknowledge
AFTER UPDATE OF dtEnd ON ackGroups
FOR EACH ROW
WHEN old.dtEnd IS NULL AND new.dtEnd IS NOT NULL
BEGIN
UPDATE ackGroups
SET (reason, dtAck, origin)= (
SELECT reasonId, datetime('now'), originId
FROM (SELECT sub1.stationId AS originId,
sub1.groupDur AS groupDur,
aar.reasonId AS reasonId,
aar.above AS above,
aar.duration AS ruleDur
FROM (SELECT MIN(d.id) AS id,
d.station AS stationId,
strftime('%s', ag.dtEnd) - strftime('%s', ag.dtStart) AS groupDur
FROM ackGroups AS ag
LEFT
JOIN downtimes AS d
ON d.acknowledge = ag.id
WHERE ag.id = old.id
GROUP BY ag.id ) AS sub1
LEFT
JOIN autoAckStations AS aas
ON aas.stationId = sub1.stationId
LEFT
JOIN autoAckRules AS aar
ON aas.autoAckRuleId = aar.id
ORDER BY duration DESC) as s
WHERE ( s.ruleDur < s.groupDur AND above = 1 ) OR (s.ruleDur > s.groupDur AND above = 0)
LIMIT 1
);
END;
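The restriction is easy to reproduce from a driver. A minimal sqlite3 sketch (table t and the triggers are made up for the demo), which also confirms that the nested sub-query form compiles and runs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER)")

# A CTE inside a trigger body is rejected by SQLite's trigger parser
# (at least on the versions current when this was written).
try:
    con.execute("""
        CREATE TRIGGER tr_cte AFTER INSERT ON t BEGIN
            WITH s(x) AS (SELECT 1)
            UPDATE t SET a = (SELECT x FROM s);
        END""")
    print("CTE in trigger: accepted by this SQLite version")
except sqlite3.OperationalError as e:
    print("CTE in trigger rejected:", e)

# The same update expressed with a nested sub-query compiles fine.
con.execute("""
    CREATE TRIGGER tr_sub AFTER INSERT ON t BEGIN
        UPDATE t SET a = (SELECT x FROM (SELECT 1 AS x));
    END""")
con.execute("INSERT INTO t (a) VALUES (0)")
print(con.execute("SELECT a FROM t").fetchone())  # (1,)
```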
I have this table:
CREATE TABLE Publications (
publicationId INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (publicationId),
title VARCHAR(60) NOT NULL UNIQUE,
professorId INT NOT NULL,
autors INT NOT NULL,
magazine VARCHAR(60) NOT NULL,
post_date DATE NOT NULL,
FOREIGN KEY (professorId) REFERENCES Professors (professorId),
CONSTRAINT invalidPublication UNIQUE (professorId, magazine, post_date),
CONSTRAINT invalidAutors CHECK (autors >= 1 AND autors <= 10)
);
And I want to create a view that returns the professors sorted by the amount of publications they have done, so I have created this view:
CREATE OR REPLACE VIEW ViewTopAutors AS
SELECT professorId
FROM publications
WHERE autors < 5
ORDER by COUNT(professorId)
LIMIT 3;
I've populated the main table, but when I run the view it only returns one author (the one with the highest id).
How can I do it?
I think an aggregation is missing from your query:
CREATE OR REPLACE VIEW ViewTopAutors AS
SELECT professorId
FROM publications
WHERE autors < 5
GROUP BY professorId
ORDER BY COUNT(*)
LIMIT 3;
This would return the 3 professors with the fewest publications. To return the professors with the 3 greatest counts, use a DESC sort in the ORDER BY step.
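The fix is easy to verify on any engine. A sqlite3 sketch with made-up sample data (three professors with 5, 3, and 1 publications), using the DESC variant to get the most-published professors first:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE publications (professorId INTEGER, autors INTEGER)")
# Professor 1 has 5 publications, professor 2 has 3, professor 3 has 1.
con.executemany("INSERT INTO publications VALUES (?, ?)",
                [(1, 2)] * 5 + [(2, 2)] * 3 + [(3, 2)] * 1)

top = con.execute("""
    SELECT professorId
    FROM publications
    WHERE autors < 5
    GROUP BY professorId
    ORDER BY COUNT(*) DESC
    LIMIT 3""").fetchall()
print(top)  # [(1,), (2,), (3,)] -- ordered by publication count, descending
```

Without the GROUP BY, the engine collapses everything into a single aggregate row, which is why the original view returned only one professor.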
I have a problem with making a proper SELECT for my exercise:
There are two tables that I have created:
1. Customer
2. Order
ad. 1
CREATE TABLE public."Customer"
(
id integer NOT NULL DEFAULT nextval('"Customer_id_seq"'::regclass),
name text NOT NULL,
surname text NOT NULL,
address text NOT NULL,
email text NOT NULL,
password text NOT NULL,
CONSTRAINT "Customer_pkey" PRIMARY KEY (id),
CONSTRAINT "Customer_email_key" UNIQUE (email)
)
ad.2
CREATE TABLE public."Order"
(
id integer NOT NULL DEFAULT nextval('"Order_id_seq"'::regclass),
customer_id integer NOT NULL,
item_list text,
order_date date,
execution_date date,
done boolean DEFAULT false,
confirm boolean DEFAULT false,
paid boolean DEFAULT false,
CONSTRAINT "Order_pkey" PRIMARY KEY (id),
CONSTRAINT "Order_customer_id_fkey" FOREIGN KEY (customer_id)
REFERENCES public."Customer" (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
Please do not mind how the column properties were set.
The problem I have is the following:
How do I write a SELECT query that returns the ids and emails of customers who have ordered something after '2017-09-15'?
I suppose this should use a JOIN, but none of the queries I tried have worked :/
Thanks!
You should post the queries that you tried, but in the meantime try this. It's a simple join (note that id has to be qualified, since both tables have an id column):
SELECT DISTINCT c.id
, c.email
FROM public."Customer" c
JOIN public."Order" o
ON c.id = o.customer_id
WHERE o.order_date > '2017-09-15'
In table "Order" you just need the foreign-key constraint on customer_id:
customer_id integer REFERENCES Customer (id)
for more information check this page:
https://www.postgresql.org/docs/9.2/static/ddl-constraints.html
So, the query should be like this (the table names must be double-quoted, since they were created with quoted mixed-case names and Order is a reserved word):
SELECT c.id, c.email
FROM "Customer" c
INNER JOIN "Order" o
ON (o.customer_id = c.id)
WHERE o.order_date > '2017-09-15'
Also, the useful docs you can check: https://www.postgresql.org/docs/current/static/tutorial-join.html
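Both answers boil down to the same join. A runnable sqlite3 sketch with made-up sample data (SQLite doesn't need the schema qualifier, but "Order" still has to be quoted), showing why DISTINCT matters when a customer has several qualifying orders:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customer (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE "Order" (id INTEGER PRIMARY KEY, customer_id INTEGER,
                          order_date TEXT);
    INSERT INTO Customer VALUES (1, 'a@x.com'), (2, 'b@x.com');
    INSERT INTO "Order" VALUES (1, 1, '2017-09-20'), (2, 1, '2017-09-21'),
                               (3, 2, '2017-09-01');
""")
rows = con.execute("""
    SELECT DISTINCT c.id, c.email
    FROM Customer c
    JOIN "Order" o ON o.customer_id = c.id
    WHERE o.order_date > '2017-09-15'""").fetchall()
print(rows)  # [(1, 'a@x.com')] -- one row despite two qualifying orders
```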
I get the following error when I want to execute a SQL query:
"Msg 209, Level 16, State 1, Line 9
Ambiguous column name 'i_id'."
This is the SQL query I want to execute:
SELECT DISTINCT x.*
FROM items x LEFT JOIN items y
ON y.i_id = x.i_id
AND x.last_seen < y.last_seen
WHERE x.last_seen > '4-4-2017 10:54:11'
AND x.spot = 'spot773'
AND (x.technology = 'Bluetooth LE' OR x.technology = 'EPC Gen2')
AND y.id IS NULL
GROUP BY i_id
This is how my table looks like:
CREATE TABLE [dbo].[items] (
[id] INT IDENTITY (1, 1) NOT NULL,
[i_id] VARCHAR (100) NOT NULL,
[last_seen] DATETIME2 (0) NOT NULL,
[location] VARCHAR (200) NOT NULL,
[code_hex] VARCHAR (100) NOT NULL,
[technology] VARCHAR (100) NOT NULL,
[url] VARCHAR (100) NOT NULL,
[spot] VARCHAR (200) NOT NULL,
PRIMARY KEY CLUSTERED ([id] ASC));
I've tried a couple of things, but I'm not an SQL expert. :)
Any help would be appreciated.
EDIT:
I do get duplicate rows when I remove the GROUP BY line.
I'm adding another answer in order to show how you'd typically select the latest record per group without getting duplicates. You'd use ROW_NUMBER for this, marking the last record per i_id with row number 1.
SELECT *
FROM
(
SELECT
i.*,
ROW_NUMBER() over (PARTITION BY i_id ORDER BY last_seen DESC) as rn
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
AND spot = 'spot773'
AND technology IN ('Bluetooth LE', 'EPC Gen2')
) ranked
WHERE rn = 1;
(You'd use RANK or DENSE_RANK instead of ROW_NUMBER if you wanted duplicates.)
You forgot the table alias in GROUP BY i_id.
Anyway, why are you writing an anti-join query in which you try to get rid of duplicates with both DISTINCT and GROUP BY? Did you have issues with a straightforward NOT EXISTS query? You are making things way more complicated than they actually are.
SELECT *
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
AND spot = 'spot773'
AND technology IN ('Bluetooth LE', 'EPC Gen2')
AND NOT EXISTS
(
SELECT *
FROM items other
WHERE i.i_id = other.i_id
AND i.last_seen < other.last_seen
);
(There are other techniques of course to get the last seen record per i_id. This is one; another is to compare with MAX(last_seen); another is to use ROW_NUMBER.)
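Both approaches should agree on the result. A sqlite3 sketch with made-up sample data comparing the NOT EXISTS anti-join against the ROW_NUMBER form (ROW_NUMBER requires SQLite 3.25+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, i_id TEXT, last_seen TEXT)")
con.executemany("INSERT INTO items (i_id, last_seen) VALUES (?, ?)",
                [("tag1", "2017-04-04 11:00:00"),
                 ("tag1", "2017-04-04 12:00:00"),
                 ("tag2", "2017-04-04 11:30:00")])

# Anti-join: keep a row only if no later sighting of the same i_id exists.
anti = con.execute("""
    SELECT i_id, last_seen FROM items i
    WHERE NOT EXISTS (SELECT 1 FROM items o
                      WHERE o.i_id = i.i_id AND o.last_seen > i.last_seen)
    ORDER BY i_id""").fetchall()

# ROW_NUMBER: rank sightings per i_id, newest first, and keep rank 1.
ranked = con.execute("""
    SELECT i_id, last_seen FROM (
        SELECT i_id, last_seen,
               ROW_NUMBER() OVER (PARTITION BY i_id ORDER BY last_seen DESC) AS rn
        FROM items)
    WHERE rn = 1 ORDER BY i_id""").fetchall()

print(anti)   # [('tag1', '2017-04-04 12:00:00'), ('tag2', '2017-04-04 11:30:00')]
print(ranked == anti)  # True
```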
I've got a table that looks like this (I wasn't sure what all might be relevant, so I had Toad dump the whole structure)
CREATE TABLE [dbo].[TScore] (
[CustomerID] int NOT NULL,
[ApplNo] numeric(18, 0) NOT NULL,
[BScore] int NULL,
[OrigAmt] money NULL,
[MaxAmt] money NULL,
[DateCreated] datetime NULL,
[UserCreated] char(8) NULL,
[DateModified] datetime NULL,
[UserModified] char(8) NULL,
CONSTRAINT [PK_TScore]
PRIMARY KEY CLUSTERED ([CustomerID] ASC, [ApplNo] ASC)
);
And when I run the following query (on a database with 3 million records in the TScore table), it takes about a second to run. Yet if I just do Select BScore from CustomerDB..TScore WHERE CustomerID = 12345, it is instant (and only returns 10 records). It seems like there should be an efficient way to get the Max(ApplNo) effect in a single query, but I'm a relative noob to SQL Server. I'm thinking I may need a separate key for ApplNo, but I'm not sure how clustered keys work.
SELECT BScore
FROM CustomerDB..TScore (NOLOCK)
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2 (NOLOCK)
WHERE sc2.CustomerID = 12345)
Thanks much for any tips (pointers on where to look for SQL Server optimization appreciated as well).
When you filter by ApplNo, you are using only part of the key, and not the leading (left-hand) column. This means the index has to be scanned (look at all rows), not seeked (drill down to a row), to find the values.
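This leading-column rule isn't SQL Server specific; you can watch the same seek-vs-scan decision in any engine's plan output. A sqlite3 sketch (the index name ix is made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TScore (CustomerID INT, ApplNo INT, BScore INT)")
con.execute("CREATE INDEX ix ON TScore (CustomerID, ApplNo)")

def plan(query):
    # EXPLAIN QUERY PLAN rows carry the plan description in column 3.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + query))

# Leading column present -> the composite index can be used to seek.
p1 = plan("SELECT BScore FROM TScore WHERE CustomerID = 12345 AND ApplNo = 7")
# Only the second column -> the planner falls back to a full scan.
p2 = plan("SELECT BScore FROM TScore WHERE ApplNo = 7")
print(p1)  # mentions SEARCH ... USING INDEX ix
print(p2)  # mentions SCAN
```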
If you are looking for ApplNo values for the same CustomerID:
Quick way. Use the full clustered index:
SELECT BScore
FROM CustomerDB..TScore
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345)
AND CustomerID = 12345
This can be changed into a JOIN
SELECT BScore
FROM
CustomerDB..TScore T1
JOIN
(SELECT Max(ApplNo) AS MaxApplNo, CustomerID
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345
) T2 ON T1.CustomerID = T2.CustomerID AND T1.ApplNo= T2.MaxApplNo
If you are looking for ApplNo values independent of CustomerID, then I'd look at a separate index. This matches the intent of your current code:
CREATE INDEX IX_ApplNo ON TScore (ApplNo) INCLUDE (BScore);
Reversing the key order won't help because then your WHERE sc2.CustomerID = 12345 will scan, not seek
Note: using NOLOCK everywhere is a bad practice; it allows dirty reads.