SQLite: Workaround for SQLite TRIGGER with WITH

I'm working on a project to monitor downtimes of production lines with an embedded device. I want to automate the acknowledging of these downtimes with generic rules the user can configure.
I want to use a TRIGGER, but I get a syntax error near UPDATE, even though the documentation says it should be fine to use a WITH clause there.
CREATE TRIGGER autoAcknowledge
AFTER UPDATE OF dtEnd ON ackGroups
FOR EACH ROW
WHEN old.dtEnd IS NULL AND new.dtEnd IS NOT NULL
BEGIN
WITH sub1(id, stationId, groupDur) AS (
SELECT MIN(d.id), d.station,
strftime('%s', ag.dtEnd) - strftime('%s', ag.dtStart)
FROM ackGroups AS ag
LEFT JOIN downtimes AS d on d.acknowledge = ag.id
WHERE ag.id = old.id
GROUP BY ag.id ),
sub2( originId, groupDur, reasonId, above, ruleDur) AS (
SELECT sub1.stationId, sub1.groupDur, aar.reasonId, aar.above, aar.duration
FROM sub1
LEFT JOIN autoAckStations AS aas ON aas.stationId = sub1.stationId
LEFT JOIN autoAckRules AS aar ON aas.autoAckRuleId = aar.id
ORDER BY duration DESC )
UPDATE ackGroups SET (reason, dtAck, origin)=(
SELECT reasonId, datetime('now'), originId
FROM sub2 as s
WHERE ( s.ruleDur < s.groupDur AND above = 1 ) OR (s.ruleDur > s.groupDur AND above = 0)
LIMIT 1
)
WHERE id = old.id;
END
Background: First we have the downtimes table. Each production line consists of multiple parts called stations. Each station can start the line downtime and they can overlap with other stations downtimes.
CREATE TABLE "downtimes" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"station" integer NOT NULL,
"acknowledge" integer,
"dtStart" datetime NOT NULL,
"dtEnd" datetime,
"dtLastModified" datetime)
Overlapping downtimes are grouped into acknowledge groups using a TRIGGER AFTER INSERT on downtimes that either sets the acknowledge id or creates a new group.
CREATE TABLE "ackGroups" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"reason" integer,
"dtAck" datetime,
"dtStart" datetime NOT NULL,
"dtEnd" datetime,
"line" integer NOT NULL,
"origin" integer)
The autoAckRules table represents the configuration. The user decides whether the rule should apply to durations above or below a certain value and which reasonId should be used to acknowledge.
CREATE TABLE "autoAckRules" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"description" text NOT NULL,
"reasonId" integer NOT NULL,
"above" bool NOT NULL,
"duration" real NOT NULL)
The autoAckStations table is used to manage the M:N relationship. Each rule allows multiple stations that may have started the ackGroup; a sample configuration is shown after the table below.
CREATE TABLE autoAckStations (
autoAckRuleId INTEGER NOT NULL,
stationId INTEGER NOT NULL,
PRIMARY KEY ( autoAckRuleId, stationId )
)
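To make the rule configuration concrete, here is a hypothetical example (all values are made up) of one rule that applies to two stations:
-- hypothetical rule: acknowledge groups lasting longer than 300 seconds
-- with reason 5, when the group was started by station 7 or 8
INSERT INTO autoAckRules (description, reasonId, above, duration)
VALUES ('long stops at packaging', 5, 1, 300);

-- assumes the rule above received id 1 from AUTOINCREMENT
INSERT INTO autoAckStations (autoAckRuleId, stationId) VALUES (1, 7);
INSERT INTO autoAckStations (autoAckRuleId, stationId) VALUES (1, 8);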
When the last downtime ends, dtEnd of ackGroups is set to datetime('now') and the trigger is fired to check whether there is an autoAckRule that fits.
If I substitute the sub-selects with a SELECT .. FROM (SELECT .. FROM (SELECT .. FROM)) cascade,
is there a nice way to avoid the need to write and evaluate it twice?
Or am I missing something stupid?

Common table expressions are not supported for statements inside of triggers. You need to convert the CTE into sub-queries, such as:
CREATE TRIGGER autoAcknowledge
AFTER UPDATE OF dtEnd ON ackGroups
FOR EACH ROW
WHEN old.dtEnd IS NULL AND new.dtEnd IS NOT NULL
BEGIN
UPDATE ackGroups
SET (reason, dtAck, origin)= (
SELECT reasonId, datetime('now'), originId
FROM (SELECT sub1.stationId AS originId,
sub1.groupDur AS groupDur,
aar.reasonId AS reasonId,
aar.above AS above,
aar.duration AS ruleDur
FROM (SELECT MIN(d.id) AS id,
d.station AS stationId,
strftime('%s', ag.dtEnd) - strftime('%s', ag.dtStart) AS groupDur
FROM ackGroups AS ag
LEFT JOIN downtimes AS d
ON d.acknowledge = ag.id
WHERE ag.id = old.id
GROUP BY ag.id ) AS sub1
LEFT JOIN autoAckStations AS aas
ON aas.stationId = sub1.stationId
LEFT JOIN autoAckRules AS aar
ON aas.autoAckRuleId = aar.id
ORDER BY duration DESC) as s
WHERE ( s.ruleDur < s.groupDur AND above = 1 ) OR (s.ruleDur > s.groupDur AND above = 0)
LIMIT 1
);
END;
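For reference, a quick way to exercise the trigger is a plain UPDATE on an open group; the id value below is only a placeholder for an existing row whose dtEnd is still NULL:
-- hypothetical test: close an open acknowledge group (42 is a placeholder id);
-- the AFTER UPDATE OF dtEnd trigger then fills reason, dtAck and origin
-- if a matching autoAckRule exists
UPDATE ackGroups
SET dtEnd = datetime('now')
WHERE id = 42 AND dtEnd IS NULL;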

Related

Returning values using a query that have been awarded a bonus

I'm trying to find the transactionIds that were randomly chosen to be awarded a bonus, which essentially just means having their rewardValue doubled.
Following is my schema:
CREATE TABLE rewards (
rewardId INTEGER PRIMARY KEY,
rewardType VARCHAR(20),
rewardValue NUMERIC(6,2)
);
CREATE TABLE deposit (
depositId INTEGER PRIMARY KEY,
depositDate DATE,
customerId INTEGER NOT NULL REFERENCES Customers
);
CREATE TABLE transactions (
transactionId SERIAL PRIMARY KEY,
depositId INTEGER NOT NULL UNIQUE REFERENCES Orders,
transactionAmount NUMERIC(18,2)
);
Here's my query:
select distinct t.transactionId
from transactions t join deposit d on t.depositId = d.depositId join
rewards r on 2 * r.rewardValue <= t.transactionAmount;
I get some output which is just a few values repeating over and over. Does anyone know how to fix this?
Before answering your question: it seems like your join has an issue.
If you want to find transactions eligible for rewards,
try this ->
select distinct transactionId
from (select t.transactionId, r.rewardValue, t.transactionAmount
      from transactions t
      join deposit d on t.depositId = d.depositId, rewards r) tbl
where 2 * rewardValue <= transactionAmount
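An alternative sketch, assuming the intent is simply "transactions for which at least one reward is no more than half the transaction amount": an EXISTS test avoids the cross join against rewards and the need for DISTINCT.
select t.transactionId
from transactions t
join deposit d on t.depositId = d.depositId
where exists (select 1
              from rewards r
              where 2 * r.rewardValue <= t.transactionAmount);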

Optimizing a query in PostgreSQL

CREATE TABLE master.estado_movimiento_inventario
(
id integer NOT NULL,
eliminado boolean NOT NULL DEFAULT false,
fundamentacion text NOT NULL,
fecha timestamp without time zone NOT NULL,
id_empresa integer NOT NULL,
id_usuario integer NOT NULL,
id_estado integer NOT NULL,
id_movimiento integer NOT NULL,
se_debio_tramitar_hace bigint DEFAULT 0,
CONSTRAINT "PK15estadomovtec" PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE master.estado_movimiento_inventario
OWNER TO postgres;
This table tracks the state of every inventory movement in my business logic. Every movement that has not ended yet (there is no id_estado = 3 or id_estado = 4 for its id_movimiento in master.estado_movimiento_inventario) must store, in its last state's se_debio_tramitar_hace field, the difference between now() and the fecha field every time a scheduled task runs (on Windows).
The query I built in order to do so is this:
with update_time as(
SELECT distinct on(id_movimiento) id
from master.estado_movimiento_inventario
where id_movimiento not in (
select id_movimiento
from master.estado_movimiento_inventario
where id_estado = 2 or id_estado=3
) order by id_movimiento, id desc
)
update master.estado_movimiento_inventario mi
set se_debio_tramitar_hace= EXTRACT(epoch FROM now()::timestamp - mi.fecha )/3600
where mi.id in (select id from update_time);
This works as expected, but I suspect it is not optimal, especially the update operation, and here is my biggest doubt: which is the optimal way to perform this update:
Perform the update as it currently does
or
The PostgreSQL equivalent of this pseudocode:
foreach(update_time.id as row_id){
update master.estado_movimiento_inventario mi set diferencia = now() - mi.fecha where mi.id=row_id;
}
Sorry if I am not explicit enough; I do not have much experience working with databases. I understand the theory behind them but have not worked with them much in practice.
Edit
Please notice the id_estado is not unique per id_movimiento, just like the picture shows:
I think this improves the CTE:
with update_time as (
select id_movimiento, max(id) as max_id
from master.estado_movimiento_inventario
group by id_movimiento
having sum( (id_estado in (2, 3))::int ) = 0
)
update master.estado_movimiento_inventario mi
set diferencia = now() - mi.fecha
where mi.id in (select max_id from update_time);
If the last id were the one in the "2" or "3" state, I would simply do:
update master.estado_movimiento_inventario mi
set diferencia = now() - mi.fecha
where mi.id = (select max(mi2.id)
from master.estado_movimiento_inventario mi2
where mi2.id_movimiento = mi.id_movimiento
) and
mi.id_estado not in (2, 3);
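For comparison, the per-row loop from the pseudocode would look roughly like the DO block below (a sketch only; the single set-based UPDATE above is usually preferable because it scans and writes the table once instead of issuing one UPDATE per id):
DO $$
DECLARE
    row_id integer;
BEGIN
    -- iterate over the last state of every movement that has not ended yet
    FOR row_id IN
        SELECT max(id)
        FROM master.estado_movimiento_inventario
        GROUP BY id_movimiento
        HAVING sum((id_estado IN (2, 3))::int) = 0
    LOOP
        UPDATE master.estado_movimiento_inventario mi
        SET se_debio_tramitar_hace =
                EXTRACT(epoch FROM now()::timestamp - mi.fecha) / 3600
        WHERE mi.id = row_id;
    END LOOP;
END $$;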

Ambiguous column name SQL

I get the following error when I want to execute a SQL query:
"Msg 209, Level 16, State 1, Line 9
Ambiguous column name 'i_id'."
This is the SQL query I want to execute:
SELECT DISTINCT x.*
FROM items x LEFT JOIN items y
ON y.i_id = x.i_id
AND x.last_seen < y.last_seen
WHERE x.last_seen > '4-4-2017 10:54:11'
AND x.spot = 'spot773'
AND (x.technology = 'Bluetooth LE' OR x.technology = 'EPC Gen2')
AND y.id IS NULL
GROUP BY i_id
This is what my table looks like:
CREATE TABLE [dbo].[items] (
[id] INT IDENTITY (1, 1) NOT NULL,
[i_id] VARCHAR (100) NOT NULL,
[last_seen] DATETIME2 (0) NOT NULL,
[location] VARCHAR (200) NOT NULL,
[code_hex] VARCHAR (100) NOT NULL,
[technology] VARCHAR (100) NOT NULL,
[url] VARCHAR (100) NOT NULL,
[spot] VARCHAR (200) NOT NULL,
PRIMARY KEY CLUSTERED ([id] ASC));
I've tried a couple of things but I'm not an SQL expert:)
Any help would be appreciated
EDIT:
I do get duplicate rows when I remove the GROUP BY line as you can see:
I'm adding another answer in order to show how you'd typically select the latest record per group without getting duplicates. You'd use ROW_NUMBER for this, marking every last record per i_id with row number 1.
SELECT *
FROM
(
SELECT
i.*,
ROW_NUMBER() over (PARTITION BY i_id ORDER BY last_seen DESC) as rn
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
AND spot = 'spot773'
AND technology IN ('Bluetooth LE', 'EPC Gen2')
) ranked
WHERE rn = 1;
(You'd use RANK or DENSE_RANK instead of ROW_NUMBER if you wanted duplicates.)
You forgot the table alias in GROUP BY i_id.
Anyway, why are you writing an anti-join query where you are trying to get rid of duplicates with both DISTINCT and GROUP BY? Did you have issues with a straightforward NOT EXISTS query? You are making things way more complicated than they actually are.
SELECT *
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
AND spot = 'spot773'
AND technology IN ('Bluetooth LE', 'EPC Gen2')
AND NOT EXISTS
(
SELECT *
FROM items other
WHERE i.i_id = other.i_id
AND i.last_seen < other.last_seen
);
(There are other techniques of course to get the last seen record per i_id. This is one; another is to compare with MAX(last_seen); another is to use ROW_NUMBER.)
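For completeness, here is a sketch of the MAX(last_seen) variant mentioned above, with the same filters:
SELECT *
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
  AND spot = 'spot773'
  AND technology IN ('Bluetooth LE', 'EPC Gen2')
  AND last_seen = (SELECT MAX(other.last_seen)
                   FROM items other
                   WHERE other.i_id = i.i_id);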

SQL NOT EXISTS ( BETWEEN (...) AND (...))

I am creating a database for booking hotel rooms. I am stuck on a constraint that checks whether a 'dateFrom' value for a guest falls between any 'dateFrom' and 'dateTo' values for that specific guest, i.e. a guest cannot book more than one room at a time.
I am getting an error: "cannot use subquery in check constraint":
CREATE TABLE tomsBooking
(
hotelNo HotelNo NOT NULL,
guestNo INT NOT NULL,
dateFrom DATE NOT NULL,
dateTo DATE NOT NULL,
roomNo RoomNumber
CONSTRAINT GuestOverlap
CHECK ( NOT EXISTS
(SELECT * FROM tomsBooking b
WHERE b.guestNo = b.guestNo
AND b.dateTo >= dateFrom
AND b.dateFrom <= dateTo
)
)
);
Unfortunately, Postgres does not support sub-queries in check constraints.
But this case is exactly what exclusion constraints were created for:
CREATE TABLE tomsBooking
(
hotelNo HotelNo NOT NULL,
guestNo INT NOT NULL,
dateFrom DATE NOT NULL,
dateTo DATE NOT NULL,
roomNo RoomNumber
);
alter table tomsbooking
add constraint guestoverlap
exclude using gist (guestno with =, daterange(datefrom, dateto) with &&);
For more details and examples, see the manual: https://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-CONSTRAINT
In order for a GIST index to be able to use the = operator you need to install the btree_gist module using:
create extension btree_gist;
(That only needs to be done once per database)
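As a quick sanity check (all values are made up, and this assumes the HotelNo and RoomNumber domains accept plain integers), a second overlapping booking for the same guest is then rejected:
-- first booking succeeds
INSERT INTO tomsBooking (hotelNo, guestNo, dateFrom, dateTo, roomNo)
VALUES (1, 100, '2017-05-01', '2017-05-05', 12);

-- overlapping dates for the same guest: fails with a
-- "conflicting key value violates exclusion constraint" error
INSERT INTO tomsBooking (hotelNo, guestNo, dateFrom, dateTo, roomNo)
VALUES (1, 100, '2017-05-04', '2017-05-08', 15);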
Bearing in mind that I don't know Postgres SQL... it looks to me that you need the term VALUE near the BETWEEN statement so the comparison knows what value it is checking between.
As an alternative though, and based on Postgresql query between date ranges, I would structure it like this:
CREATE DOMAIN DateFrom AS DATE
CHECK (VALUE > '2016-10-16' AND NOT EXISTS (SELECT * FROM tomsBooking b
WHERE b.guestNo = g.guestNo
AND VALUE >= dateFrom
AND VALUE <= dateTo
)
);
As I say, I don't know Postgres, so you may have to tweak my suggestion.

sql join on range giving double row for single record

I needed to join three tables: Result, ResultItems and GradeScale. When I do, I get two of the same row for each record. I tried creating the records in SQL Fiddle but there I get a different, correct result. The schema I used to create the tables in my local SQLite db is exactly the same, as shown here.
The result table
CREATE TABLE Result (
ID INTEGER PRIMARY KEY AUTOINCREMENT,
SubjectID INTEGER REFERENCES Subjects ( ID ) ON DELETE CASCADE,
SessionID INT REFERENCES Sessions ( ID ),
TermID INT REFERENCES terms ( ID ),
ClassID INTEGER REFERENCES Classes ( ID )
);
The resultItems table
CREATE TABLE ResultItems (
StudentID INTEGER,
ResultID INTEGER REFERENCES Result ( ID ) ON DELETE CASCADE,
Total DECIMAL( 10, 2 )
);
And the gradescale table
CREATE TABLE gradeScale
(ID INTEGER PRIMARY KEY AUTOINCREMENT,
minscore tinyint NOT NULL,
maxscore tinyint NOT NULL,
grade char(1) NOT NULL,
ClassCatID INTEGER
);
Now when I execute the query below, I get a double row for each record in the ResultItems table:
Select ri.studentid, ri.Total,g.grade
From ResultItems ri
left join GradeScale g
ON ( ri.total >= g.minscore AND ri.total <= g.maxscore )
left join Result r on r.id=ri.resultid
WHERE r.sessionid = 4
AND
r.termid = 1
AND
r.classid = 9
ORDER BY grade ASC;
Please see the picture below to see what I mean.
Here is the SQL Fiddle I created: http://sqlfiddle.com/#!7/ffb42/1
Why am I getting double rows in the output when I execute it in my local db?
With #JotaBet's help, I was able to trace the error to the GradeScale table, which had duplicate entries for each grade letter, one per class group.
So I rewrote the SQL to take that into account:
left join GradeScale g
ON ( c.classcatid = g.classcatid AND ri.total >= g.minscore AND ri.total <= g.maxscore )
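Putting it together, the corrected query might look like the sketch below; the Classes table and its classcatid column are assumptions here, since that part of the schema is not shown in the question:
Select ri.studentid, ri.Total, g.grade
From ResultItems ri
left join Result r on r.id = ri.resultid
left join Classes c on c.id = r.classid          -- assumed class table
left join GradeScale g
       ON ( c.classcatid = g.classcatid
            AND ri.total >= g.minscore
            AND ri.total <= g.maxscore )
WHERE r.sessionid = 4
  AND r.termid = 1
  AND r.classid = 9
ORDER BY grade ASC;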