UPDATE one table upon INSERT INTO another table - sql

I have a database with two tables:
devices
temperature
The schema follows:
CREATE TABLE IF NOT EXISTS devices(
device_id serial PRIMARY KEY,
device_name varchar(255) UNIQUE NOT NULL,
last_record_time timestamp without time zone DEFAULT '1995-10-30 10:30:00'
);
CREATE TABLE IF NOT EXISTS temperature(
device_id integer NOT NULL,
temperature decimal NOT NULL,
record_time timestamp without time zone NOT NULL,
CONSTRAINT temperature_device_id_fkey FOREIGN KEY (device_id)
REFERENCES devices (device_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
);
The devices table keeps a list of all the devices. So there is a unique id for each device. The temperature table aggregates data from all of the devices. You can select by device_id to see all the entries that pertain to a specific device.
I have the constraint that I cannot delete from the devices table because the temperature table depends on it. I would also like the devices table to be updated when a new record is inserted into the temperature table.
That is, the record_time from a new record in the temperature table should become the last_record_time for that device's entry in the devices table. That way I always know the last time a device inserted data.
I am currently doing this programmatically. I insert records, then immediately select them right back out and write them into the other table. This is introducing some bugs, so I would prefer to automate this at the database level. How can I go about resolving this?

An alternative to using a trigger would be a data-modifying CTE:
WITH ins AS (
INSERT INTO temperature (device_id, temperature, record_time)
VALUES (1, 35.21, '2018-01-30 09:55:23')
RETURNING device_id, record_time
)
UPDATE devices AS d SET last_record_time = ins.record_time
FROM ins
WHERE d.device_id = ins.device_id;

Use a trigger to do this implicitly.
Create a trigger on the temperature table for events such as insert/delete/update and update the devices table inside that trigger.
CREATE OR REPLACE FUNCTION update_last_record_time()
RETURNS trigger AS
$$
BEGIN
UPDATE devices
SET last_record_time = NEW.record_time
WHERE device_id = NEW.device_id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_trigger
AFTER INSERT
ON temperature
FOR EACH ROW
EXECUTE PROCEDURE update_last_record_time();
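To check that the trigger fires as expected, you can insert a sample row and look at the parent table. A quick sketch, assuming a device with device_id = 1 already exists (the values are only illustrative):
INSERT INTO temperature (device_id, temperature, record_time)
VALUES (1, 35.21, '2018-01-30 09:55:23');
-- last_record_time for device 1 should now be '2018-01-30 09:55:23'
SELECT device_id, last_record_time FROM devices WHERE device_id = 1;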

For deleting rows from the child table, use a cascading delete when creating the foreign key; it will automatically delete records from the child table when the parent table record is deleted.
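For example, the foreign key from the question could be recreated with ON DELETE CASCADE like this (a sketch; note that it deliberately replaces the ON DELETE NO ACTION behaviour in the original schema):
ALTER TABLE temperature
DROP CONSTRAINT temperature_device_id_fkey,
ADD CONSTRAINT temperature_device_id_fkey FOREIGN KEY (device_id)
REFERENCES devices (device_id)
ON UPDATE NO ACTION ON DELETE CASCADE;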

Related

postgresql option to both insert new row despite "conflict" or update existing row

If I have a postgresql table like:
CREATE TABLE IF NOT EXISTS public.time_series
(id serial PRIMARY KEY,
variable_id integer NOT NULL,
datetime timestamp with time zone NOT NULL,
value real NOT NULL,
edit_datetime timestamp with time zone NOT NULL DEFAULT now())
I don't have any constraints here except the primary key, for now.
I want to have the option to either UPDATE or INSERT new data where it is essentially the same variable for the same datetime, so that I can choose to either overwrite or just have a new version. How do I proceed to do that?
Without constraints, if I want to INSERT some data, I use WHERE NOT EXISTS to make sure that the new data does not match a row that has the same variable_id, datetime, and value. And it works just fine.
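A sketch of that WHERE NOT EXISTS insert (the values are only placeholders):
INSERT INTO public.time_series (variable_id, datetime, value)
SELECT 42, '2021-06-01 12:00:00+00', 1.5
WHERE NOT EXISTS (
    SELECT 1 FROM public.time_series
    WHERE variable_id = 42
      AND datetime = '2021-06-01 12:00:00+00'
      AND value = 1.5
);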
Now, for the case where I want to UPDATE if it exists and INSERT otherwise, I add a CONSTRAINT like:
ALTER TABLE public.time_series
ADD CONSTRAINT unique_comp UNIQUE (variable_id, datetime)
and then update the row if there is a conflict and INSERT if there is none.
And it works just fine.
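That conflict-driven upsert is presumably PostgreSQL's INSERT ... ON CONFLICT; a minimal sketch against the unique_comp constraint (placeholder values again):
INSERT INTO public.time_series (variable_id, datetime, value)
VALUES (42, '2021-06-01 12:00:00+00', 1.5)
ON CONFLICT ON CONSTRAINT unique_comp
DO UPDATE SET value = EXCLUDED.value,
              edit_datetime = now();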
But, again, what if for some variables I want previous "versions" that I can see based on edit_datetime, and want some other variables to be overwritten with the new data? Currently one way rules out the other.
BR

Trigger after insert to check and compare records between table

In an Oracle Database, I need to create some trigger or procedure to handle this case in the most performant way possible (there is an extremely large amount of data).
I have a table called ORDER_A that every day receives a full load (it's truncated, and all records are inserted again).
I have a table called ORDER_B which is a copy of ORDER_A, containing the same data and some additional control dates.
Each insertion on ORDER_A must trigger a process that looks for a record with the same identifier (primary key: order_id) in table B.
If a record exists with the same order_id, and any of the other columns have changed, an update must be performed on table B
If a record exists with the same order_id, and no values in the other columns have been modified, nothing should be performed, the record must remain the same in table B.
If there is no record with the same order_id, it must be inserted in table B.
My tables are like this
CREATE TABLE ORDER_A
(
ORDER_ID NUMBER NOT NULL,
ORDER_CODE VARCHAR2(50),
ORDER_STATUS VARCHAR2(20),
ORDER_USER_ID NUMBER,
ORDER_DATE TIMESTAMP(6),
PRIMARY KEY (ORDER_ID)
);
CREATE TABLE ORDER_B
(
ORDER_ID NUMBER NOT NULL,
ORDER_CODE VARCHAR2(50),
ORDER_STATUS VARCHAR2(20),
ORDER_USER_ID NUMBER,
ORDER_DATE TIMESTAMP(6),
INSERT_AT TIMESTAMP(6),
UPDATED_AT TIMESTAMP(6),
PRIMARY KEY (ORDER_ID)
);
I have no idea how to do this or what the best approach is (a trigger, a procedure, using MERGE, etc.).
Can someone give me a direction, please?
Here is some pseudo-code to show you a potential trigger-based solution that does not fall back into slow row-by-row processing.
create or replace trigger mytrg
for insert or update or delete on ordera
compound trigger
pklist sys.odcinumberlist;
before statement is
begin
pklist := sys.odcinumberlist();
end before statement ;
after each row is
begin
pklist.extend;
pklist(pklist.count) := :new.order_id;
end after each row;
after statement is
begin
merge into orderb b
using (
select a.*
from ordera a,
table(pklist) t
where a.order_id = t.column_value
) m
when matched then
update
set b.order_code = m.order_code,
b.order_status = m.order_status,
...
where decode(b.order_code,m.order_code,0,1)=1
or decode(b.order_status,m.order_status,0,1)=1
....
when not matched then
insert (b.order_id,b.order_code,....)
values (m.order_id,m.order_code,....);
end after statement ;
end;
We hold the impacted primary keys, and then build a single MERGE later, with an embedded WHERE clause to minimise update activity.
If your application allows the update of primary keys, you'd need some additions, but this should get you started.

UPDATE one row in a table from a SELECT statement from another table

I am using PostgreSQL. My tables were created as follows. For a given device_id, I'd like to select the record from the temperature table with the max record_time, and UPDATE the last_record_time column in the devices table for that device_id.
CREATE TABLE IF NOT EXISTS devices(
device_id serial PRIMARY KEY,
device_name varchar(255) UNIQUE NOT NULL,
last_record_time timestamp without time zone NOT NULL DEFAULT '1995-10-30 10:30:00'
);
CREATE TABLE IF NOT EXISTS temperature(
device_id integer NOT NULL,
temperature decimal NOT NULL,
record_time timestamp without time zone,
CONSTRAINT temperature_device_id_fkey FOREIGN KEY (device_id)
REFERENCES devices (device_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
);
I get the max record_time from the temperature table as follows:
SELECT MAX(record_time) FROM temperature WHERE device_id=4;
This works. How do I pipe the output of the second query into an UPDATE statement so that the last_record_time in the record in the devices table that corresponds to the provided device_id is updated? I have read about INNER JOIN, but that seems to put together several records. The device_id is unique in the devices table. So, I need this to UPDATE exactly one record every time.
You could try something like this:
UPDATE devices
SET last_record_time = (SELECT MAX(record_time)
                        FROM temperature
                        WHERE device_id = devices.device_id)
WHERE device_id = 4;
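Since the question mentions INNER JOIN: PostgreSQL also supports an UPDATE ... FROM form that joins against a derived table instead of using a correlated subquery. A sketch of that variant:
UPDATE devices AS d
SET last_record_time = t.max_time
FROM (SELECT device_id, MAX(record_time) AS max_time
      FROM temperature
      GROUP BY device_id) AS t
WHERE d.device_id = t.device_id
  AND d.device_id = 4;
Because device_id is the primary key of devices, either form updates exactly one row.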

Storing date & time and updater/user of row data in Oracle Apex

Looking for an ideal way to store (1) the date & time of an update and (2) the updater/user per row of table data using Oracle Apex. I am thinking of adding 2 extra columns to store the info, and am trying to come up with a good approach as to how changes per row can be tracked.
If you want to create logs of insert, update, and delete on your table, adding 2 columns is not enough. Each new update would erase the previous one, and a delete couldn't be logged at all. So you need to store the log table separately from the data table, and fill it from triggers created on your data table. If you want a sample, I can provide one.
Here is a simplified example. Of course, in real life the data will be more complex and the trigger will need to be smarter, but this is a simple starting point for creating your own. After executing the code below, try to insert, update, and delete records in table TEST_DATA and see what happens in TEST_LOG.
Create data table
create table TEST_DATA (
UNID number,
COL_B varchar2(50)
);
-- Create/Recreate primary, unique and foreign key constraints
alter table TEST_DATA
add constraint PK_TEST_DATA_UNID primary key (UNID);
Create log table for it
create table TEST_LOG (
UNID number,
OPERATION varchar2(1),
COL_OLD varchar2(50),
COL_NEW varchar2(50),
CHNGUSER varchar2(50),
CHNGDATE date
);
and finally create trigger which tracks changes
create or replace trigger TR_LOG_TEST_DATA
after update or insert or delete on TEST_DATA
referencing new as new old as old
for each row
begin
if Inserting then
insert into TEST_LOG
(UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
values
(:new.unid, 'I', null, :new.col_b, user, sysdate);
end if;
if Updating then
insert into TEST_LOG
(UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
values
(:new.unid, 'U', :old.col_b, :new.col_b, user, sysdate);
end if;
if Deleting then
insert into TEST_LOG
(UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
values
(:old.unid, 'D', :old.col_b, null, user, sysdate);
end if;
end;
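For instance, a quick smoke test might look like this (the values are arbitrary):
insert into TEST_DATA (UNID, COL_B) values (1, 'first value');
update TEST_DATA set COL_B = 'second value' where UNID = 1;
delete from TEST_DATA where UNID = 1;
-- TEST_LOG should now hold three rows: one 'I', one 'U' and one 'D'
select * from TEST_LOG order by CHNGDATE;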

Restoring a Truncated Table from a Backup

I am restoring the data of a truncated table in an Oracle Database from an exported csv file. However, I find that the primary key auto-increments and does not insert the actual values of the primary key constrained column from the backed up file.
I intend to do the following:
1. drop the primary key
2. import the table data
3. add primary key constraints on the required column
Is this a good approach? If not, what is recommended? Thanks.
EDIT: After more investigation, I observed there's a trigger to generate nextval on a sequence to be inserted into the primary key column. This is the source of the predicament. Hence, following the procedure above would not solve the problem. It lies in the trigger (and/or sequence) on the table. This is solved!
It is easier to use your .csv as an external table and then:
create table your_table_temp as select * from external_table;
Examine the data in the new temp table to ensure you know what range of primary keys is present, then do a MERGE into the new table. Samples of both follow.
CREATE TABLE countries_ext (
country_code VARCHAR2(5),
country_name VARCHAR2(50),
country_language VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_tab_data
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
country_code CHAR(5),
country_name CHAR(50),
country_language CHAR(50)
)
)
LOCATION ('Countries1.txt','Countries2.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
and the merge
MERGE INTO employees e
USING hr_records h
ON (e.id = h.emp_id)
WHEN MATCHED THEN
UPDATE SET e.address = h.address
WHEN NOT MATCHED THEN
INSERT (id, address)
VALUES (h.emp_id, h.address);
Edit: after you have merged the data you can drop the temp table, and the result is your previous table with the old data and the new data together.
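With the naming used above, that last step is simply:
drop table your_table_temp;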
Edit: you mention "During imports, the primary key column does not insert from the file, but auto-increments". This can only happen when there is a trigger on the table, likely a BEFORE INSERT ... FOR EACH ROW trigger. Disable the trigger, then do your import, and re-enable the trigger after committing your inserts.
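In Oracle, disabling and re-enabling the trigger looks like this (trigger_name is a placeholder):
alter trigger trigger_name disable;
-- run the import and commit here
alter trigger trigger_name enable;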
I used the following procedure to solve it:
1. drop trigger trigger_name
2. import the table data into the target table
3. drop sequence sequence_name
4. CREATE SEQUENCE SEQ_NAME INCREMENT BY 1 START WITH start_index_for_next_val MAXVALUE max_val MINVALUE 1 NOCYCLE CACHE 20 NOORDER
5. recreate the trigger:
CREATE OR REPLACE TRIGGER "schema_name"."trigger_name"
before insert on target_table
for each row
begin
select seq_name.nextval
into :new.unique_column_name
from dual;
end;