UPDATE one row in a table from a SELECT statement from another table

I am using PostgreSQL. My tables were created as follows. For a given device_id, I'd like to select the record from the temperature table with the max record_time, and UPDATE the last_record_time column in the devices table for that device_id.
CREATE TABLE IF NOT EXISTS devices(
device_id serial PRIMARY KEY,
device_name varchar(255) UNIQUE NOT NULL,
last_record_time timestamp without time zone NOT NULL DEFAULT '1995-10-30 10:30:00'
);
CREATE TABLE IF NOT EXISTS temperature(
device_id integer NOT NULL,
temperature decimal NOT NULL,
record_time timestamp without time zone,
CONSTRAINT temperature_device_id_fkey FOREIGN KEY (device_id)
REFERENCES devices (device_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
);
I get the max record_time from the temperature table as follows:
SELECT MAX(record_time) FROM temperature WHERE device_id=4;
This works. How do I pipe the output of that query into an UPDATE statement so that the last_record_time of the devices record that corresponds to the provided device_id is updated? I have read about INNER JOIN, but that seems to put together several records. The device_id is unique in the devices table, so I need this to UPDATE exactly one record every time.

You could try something like this:
update devices
set last_record_time = (
    select max(record_time)
    from temperature
    where device_id = devices.device_id
)
where device_id = 4;
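One caveat, given the schema above: if the device has no rows in temperature yet, the subquery returns NULL, and the NOT NULL constraint on last_record_time makes the whole statement fail. A sketch of a variant that only touches devices that actually have readings:
update devices d
set last_record_time = t.max_time
from (
    -- one row per device: its latest reading
    select device_id, max(record_time) as max_time
    from temperature
    group by device_id
) t
where t.device_id = d.device_id
  and d.device_id = 4;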

Related

How to select from table A and then insert selected id inside table B with one query?

I'm trying to implement a very basic banking system. The goal is to have the different types of transactions (deposit, withdraw, transfer) inside one table and refer to them by ID inside the transactions table.
CREATE TABLE transaction_types (
id INTEGER AUTO_INCREMENT PRIMARY KEY,
name VARCHAR UNIQUE NOT NULL
)
CREATE TABLE transactions (
id INTEGER AUTO_INCREMENT PRIMARY KEY,
type_id INTEGER NOT NULL,
amount FLOAT NOT NULL
)
What I'm trying to accomplish is:
1. When inserting into the transactions table, no record can have an invalid type_id (type_id should exist in the transaction_types table).
2. First get the type_id from the transaction_types table and then insert into the transactions table, with one query (if that's possible; I'm fairly new).
I'm using Node.js/Typescript and PostgreSQL, any help is appreciated a lot.
For (1): modify the transactions table definition by adding REFERENCES transaction_types(id) to the end of the type_id column definition, just before the comma.
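In PostgreSQL terms that might look like the sketch below, using SERIAL in place of the MySQL-style INTEGER AUTO_INCREMENT (see the note at the end of this answer):
CREATE TABLE transactions (
    id SERIAL PRIMARY KEY,
    -- the REFERENCES clause rejects any type_id not present in transaction_types
    type_id INTEGER NOT NULL REFERENCES transaction_types(id),
    amount FLOAT NOT NULL
);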
For (2), assuming you know the name of the transaction_type, you can accomplish this by:
INSERT INTO transactions(type_id, amount)
VALUES ((SELECT id FROM transaction_types WHERE name = 'Withdrawal'), 999.99);
By the way, my PostgreSQL requires SERIAL instead of INTEGER AUTO_INCREMENT.

How to find out how many records have been loaded into table per load timestamp

I need to create a log table (group data from COR and put information into the log table about how many records have been loaded into the COR table per load timestamp). I've already tried adding a timestamp column to the table, but after adding records it shows a NULL value.
Even if I use this statement:
CREATE TABLE ExampleTable (PriKey int PRIMARY KEY, created_at timestamp);
and add test value:
INSERT INTO ExampleTable (PriKey, created_at) VALUES (1, DEFAULT);
My result in the created_at column is the rowversion data type...
Can anyone help?
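For what it's worth, a timestamp column without a DEFAULT stores NULL when you insert DEFAULT, which matches the behavior described above. A minimal PostgreSQL sketch that records the load time automatically:
CREATE TABLE ExampleTable (
    PriKey int PRIMARY KEY,
    -- now() is evaluated at insert time, so DEFAULT no longer yields NULL
    created_at timestamp NOT NULL DEFAULT now()
);
INSERT INTO ExampleTable (PriKey) VALUES (1);
-- records loaded per load timestamp:
SELECT created_at, COUNT(*) AS loaded FROM ExampleTable GROUP BY created_at;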

How to update a column in table A using the value from another table B, where the relationship between tables A & B is 1:N, using the max() function

I have two tables, loan_details and loan_his_mapping, with a 1:N relationship. I need to set the hhf_request_id of the loan_details table to the value present in the loan_his_mapping table for each loan.
Since the relationship is 1:N, I want to pick the record for each loan from the loan_his_mapping table using the two conditions mentioned below. The table definitions are as follows:
CREATE TABLE public.loan_details
(
loan_number bigint NOT NULL,
hhf_lob integer,
hhf_request_id integer,
status character varying(100),
CONSTRAINT loan_details_pkey PRIMARY KEY (loan_number)
);
CREATE TABLE public.loan_his_mapping
(
loan_number bigint NOT NULL,
spoc_id integer NOT NULL,
assigned_datetime timestamp without time zone,
loan_spoc_map_id bigint NOT NULL,
line_of_business_id integer,
request_id bigint,
CONSTRAINT loan_spoc_his_map_id PRIMARY KEY (loan_spoc_map_id),
CONSTRAINT fk_loan_spoc_loan_number_his FOREIGN KEY (loan_number)
REFERENCES public.loan_details (loan_number) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION );
The joining conditions while updating are:
1. Only the records of loan_details with hhf_lob = 4 and status = 'Release' should be updated.
2. Among the N records in loan_his_mapping for each loan, the one with max(loan_spoc_map_id) should supply the value.
The query I have right now:
update loan_details ldet
set hhf_request_id = history.request_id
from loan_his_mapping history
where ldet.loan_number = history.loan_number
  and ldet.status = 'Release'
  and ldet.hhf_lob = 4
  and history.line_of_business_id = 4;
I want to know how to use the record with max(loan_spoc_map_id) for each loan from loan_his_mapping to update the column in the loan_details table. Please assist!
You need a sub-query to fetch the row corresponding to the highest loan_spoc_map_id per loan. Something along these lines:
update loan_details ldet
set hhf_request_id = history.request_id
from (
select distinct on (loan_number) loan_number, request_id
from loan_his_mapping lhm
where lhm.line_of_business_id = 4
-- distinct on (loan_number) keeps the first row per loan; sorting by
-- loan_spoc_map_id desc makes that first row the one with the highest id
order by loan_number, loan_spoc_map_id desc
) as history
where ldet.loan_number = history.loan_number
and ldet.status = 'Release'
and ldet.hhf_lob = 4;
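If DISTINCT ON feels opaque, an equivalent sketch with a plain max() aggregate joins the winning loan_spoc_map_id back to the mapping table:
update loan_details ldet
set hhf_request_id = history.request_id
from loan_his_mapping history
join (
    -- the highest loan_spoc_map_id per loan
    select loan_number, max(loan_spoc_map_id) as max_map_id
    from loan_his_mapping
    where line_of_business_id = 4
    group by loan_number
) latest on latest.loan_number = history.loan_number
        and latest.max_map_id = history.loan_spoc_map_id
where ldet.loan_number = history.loan_number
  and ldet.status = 'Release'
  and ldet.hhf_lob = 4;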

UPDATE one table upon INSERT INTO another table

I have a database with two tables:
devices
temperature
The schema follows:
CREATE TABLE IF NOT EXISTS devices(
device_id serial PRIMARY KEY,
device_name varchar(255) UNIQUE NOT NULL,
last_record_time timestamp without time zone DEFAULT '1995-10-30 10:30:00'
);
CREATE TABLE IF NOT EXISTS temperature(
device_id integer NOT NULL,
temperature decimal NOT NULL,
record_time timestamp without time zone NOT NULL,
CONSTRAINT temperature_device_id_fkey FOREIGN KEY (device_id)
REFERENCES devices (device_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
);
The devices table keeps a list of all the devices. So there is a unique id for each device. The temperature table aggregates data from all of the devices. You can select by device_id to see all the entries that pertain to a specific device.
I have the constraint that I cannot delete from the devices table because the temperature table depends on it. I would also like the devices table to be updated when a new record is inserted into the temperature table.
That is, the record_time from a new record in the temperature should become the last_record_time for that device's entry in the devices table. That way I always know when was the last time a device inserted data.
I am currently doing this programmatically: I insert records, and immediately select them right back out and write into the other table. This is introducing some bugs, so I would prefer to automate this at the database level. How can I go about resolving this?
An alternative to using a trigger would be a data-modifying CTE:
WITH ins AS (
INSERT INTO temperature (device_id, temperature, record_time)
VALUES (1, 35.21, '2018-01-30 09:55:23')
RETURNING device_id, record_time
)
UPDATE devices AS d SET last_record_time = ins.record_time
FROM ins
WHERE d.device_id = ins.device_id;
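Note that with the CTE the INSERT and the UPDATE execute as a single statement, so the two tables cannot drift apart the way they can when the application issues separate statements.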
Use a trigger to do this implicitly: create a trigger on the temperature table for INSERT events and update the devices table inside that trigger.
CREATE OR REPLACE FUNCTION update_last_record_time()
RETURNS trigger AS
$$
BEGIN
UPDATE devices
SET last_record_time = NEW.record_time
WHERE device_id = NEW.device_id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_trigger
AFTER INSERT
ON temperature
FOR EACH ROW
EXECUTE PROCEDURE update_last_record_time();
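A quick sanity check, assuming a device with device_id = 1 already exists:
INSERT INTO temperature (device_id, temperature, record_time)
VALUES (1, 22.5, '2018-01-30 10:00:00');
-- the trigger should have propagated record_time to the parent row:
SELECT device_id, last_record_time FROM devices WHERE device_id = 1;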
For deleting rows from the child table, use a cascading delete when creating the foreign key; records in the child table will then be deleted automatically when the parent table record is deleted.
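A sketch against the schema above, reusing the constraint name from the question's DDL:
ALTER TABLE temperature
    DROP CONSTRAINT temperature_device_id_fkey,
    ADD CONSTRAINT temperature_device_id_fkey
        FOREIGN KEY (device_id) REFERENCES devices (device_id)
        ON DELETE CASCADE;
-- deleting a device now also removes its temperature rows:
DELETE FROM devices WHERE device_id = 1;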

Why doesn't PostgreSQL allow grouping sets in INSERT SELECT queries?

The issue here is as simple as this: PostgreSQL doesn't allow the following query structure:
-- TABLE OF FACTS
CREATE TABLE facts_table (
id integer NOT NULL,
description CHARACTER VARYING(50),
amount NUMERIC(12,2) DEFAULT 0,
quantity INTEGER,
detail_1 CHARACTER VARYING(50),
detail_2 CHARACTER VARYING(50),
detail_3 CHARACTER VARYING(50),
time TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT LOCALTIMESTAMP(0)
);
ALTER TABLE facts_table ADD PRIMARY KEY(id);
-- SUMMARIZED TABLE
CREATE TABLE table_cube (
id INTEGER,
description CHARACTER VARYING(50),
amount NUMERIC(12,2) DEFAULT 0,
quantity INTEGER,
time TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT LOCALTIMESTAMP(0)
);
ALTER TABLE table_cube ADD PRIMARY KEY(id);
INSERT INTO table_cube(id, description, amount, quantity, time)
SELECT
id,
description,
SUM(amount) AS amount,
SUM(quantity) AS quantity,
time
FROM facts_table
GROUP BY CUBE(id, description, time);
----------------------------------------------------------------
ERROR: grouping sets are not allowed in INSERT SELECT queries.
I think it's pretty obvious that CUBE produces NULL results in every field indicated as a grouping set (as it computes every possible combination), so I cannot insert those rows into my table_cube table. Does Postgres just assume that I'm trying to insert a row into a table with a PK field? Even if the table_cube table doesn't have a PK, this cannot be accomplished.
Version: PostgreSQL 9.6
You have defined table_cube(id) as the primary key, so if the CUBE produces null values, those rows can't be inserted. I have checked: without id as a primary key it works fine, and when id is defined as the primary key I get this error:
ERROR: id contains null values
SQL state: 23502
As suggested by Haleemur Ali, "if a constraint is required, use a unique index with all the grouping set columns: CREATE UNIQUE INDEX unq_table_cube_id_description_time ON table_cube(id, description, time);" is a good option. But you have to remove the primary key on id and keep only the unique index as suggested above; with both the primary key and the unique index you again get this error:
ERROR: null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, null, 1300, 1522, null).
SQL state: 23502
So the conclusion is: with the unique index there is no need for a primary key, and with CUBE you can do without the unique index and the primary key entirely.
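Putting that together, a sketch of the working sequence (assuming the default primary key constraint name table_cube_pkey):
ALTER TABLE table_cube DROP CONSTRAINT table_cube_pkey;
-- dropping the PK does not clear the implicit NOT NULL in PostgreSQL:
ALTER TABLE table_cube ALTER COLUMN id DROP NOT NULL;
CREATE UNIQUE INDEX unq_table_cube_id_description_time
    ON table_cube (id, description, time);
-- rows where CUBE yields NULL no longer collide: NULLs are treated as
-- distinct in unique indexes
INSERT INTO table_cube (id, description, amount, quantity, time)
SELECT id, description, SUM(amount), SUM(quantity), time
FROM facts_table
GROUP BY CUBE (id, description, time);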