postgresql - key/value lookup to json object - sql

Running Postgres 9.6.
So I have this key/value lookup table which establishes the deepest child value of a huge JSON object. Given a table of this structure:
CREATE TABLE myschema.file_items
(
    id integer NOT NULL DEFAULT nextval('file_items_id_seq'::regclass),
    file_id integer NOT NULL,
    key character varying[] COLLATE pg_catalog."default" NOT NULL,
    value character varying COLLATE pg_catalog."default",
    status character varying COLLATE pg_catalog."default",
    CONSTRAINT file_items_pkey PRIMARY KEY (id)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE myschema.file_items
    OWNER to postgres;
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog1","description"}', 'val1', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog1","cost"}', '100', null);
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog1","window"}', '[-200,500]', 'not verified');
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog2","description"}', 'val2', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog2","cost"}', '200', null);
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog2","window"}', '[-300,500]', null);
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget1","description"}', 'wid1', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget1","cost"}', '100', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget1","window"}', '[-200,500]', 'not verified');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget2","description"}', 'wid2', null);
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget2","cost"}', '300', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget2","window"}', '[-1000,700]', null);
I can query all my cogs like so:
select *
from file_items
where 'cogs' = any(key)
How would I reverse-engineer this object? Rather, I'd like to somehow generate a json object with the following format:
"cogs": {
"cog1": {
"description": "val1",
"cost":100,
"window":[-200,500]
},
"cog2": {
"description": "val2",
"cost":200,
"window":[-300,500]
}
}
Note that I'm deliberately not wanting an array of cogs objects; they are actual properties of the cogs object. It's done this way because we receive incoming JSON objects whose properties we don't know in advance, so we use a key/value mapping table to identify those property values dynamically (i.e., we don't know beforehand that there will be a "cog67" object, or what kind of properties will be attached to it...).
Since this query would ultimately be fired from a Node.js package (the 'pg' module), if I can't re-create the JSON object via the query, I may need to do it in the JavaScript itself. I'm just wondering whether it's possible to correctly build the JSON object at the database level and return that, rather than querying a bunch of rows and reconstructing the object in the server-side code.
Any help would be greatly appreciated! Thank you!

Use jsonb_object_agg() twice, once per level of aggregation (jsonb_pretty() is not necessary; it is only used for readable output):
select jsonb_pretty(jsonb_build_object(key, jsonb_object_agg(subkey, value)))
from (
    select key[1], key[2] as subkey, jsonb_object_agg(key[3], value) as value
    from file_items
    where 'cogs' = any(key)
    group by key[1], key[2]
) s
group by key;
jsonb_pretty
-------------------------------------
{ +
"cogs": { +
"cog1": { +
"cost": "100", +
"window": "[-200,500]",+
"description": "val1" +
}, +
"cog2": { +
"cost": "200", +
"window": "[-300,500]",+
"description": "val2" +
} +
} +
}
(1 row)
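If you also want widgets (and any other top-level key) folded into one object, you can add a third level of aggregation. A sketch, assuming every key array is exactly three elements deep:
select jsonb_pretty(jsonb_object_agg(topkey, sub)) as result
from (
    select key as topkey, jsonb_object_agg(subkey, value) as sub
    from (
        -- innermost level: leaf properties per cog/widget
        select key[1] as key, key[2] as subkey, jsonb_object_agg(key[3], value) as value
        from file_items
        group by key[1], key[2]
    ) leaf
    group by key
) grouped;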

Related

Unique constraint on inserting new row

I wrote a SQL statement within PostgreSQL 12, and I first created a unique constraint like:
CONSTRAINT post_comment_response_approval__tm_response__uidx UNIQUE (post_comment_response_id, team_member_id)
On a SQL query:
INSERT INTO post_comment_response_approval (post_comment_response_id, team_member_id, approved, note)
VALUES (:postCommentResponseId, :workspaceMemberId, :approved, :note)
ON CONFLICT ON CONSTRAINT post_comment_response_approval__tm_response__uidx DO
UPDATE SET approved = :approved, note = :note
At first, I wanted to use it for the same row whenever some action is made, but now I just want to make sure the API shows them when multiple actions have been submitted by the same member.
An example is that someone might suggest a change, then that change is made, then that person who suggested it later approves it. That would generate multiple post_comment_response_approval rows for that approver.
Is there a way to make this happen without removing the unique constraint, or should it be deleted? I am new to PostgreSQL.
I didn't understand your question in detail, but I think I understand what you need: PostgreSQL partial indexes.
Examples for you:
CREATE TABLE table6 (
    id int4 NOT NULL,
    id2 int4 NOT NULL,
    approve bool NULL
);
-- creating a partial unique index
CREATE UNIQUE INDEX table6_id_idx ON table6 (id, id2) where approve is true;
insert into table6 (id, id2, approve) values (1, 1, false);
-- success insert
insert into table6 (id, id2, approve) values (1, 1, false);
-- success insert
insert into table6 (id, id2, approve) values (1, 1, false);
-- success insert
insert into table6 (id, id2, approve) values (1, 1, true);
-- success insert
insert into table6 (id, id2, approve) values (1, 1, true);
-- error: duplicate key value violates unique constraint "table6_id_idx"
This way, rows only need to be unique when they satisfy the index condition.
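Applied to your table, a partial unique index along these lines could enforce uniqueness only for approved rows (a sketch built from the column names in your constraint; adjust the predicate to whatever rule you actually want):
-- hypothetical: only approved actions must be unique per response and member,
-- so the same member can still have several non-approved rows
CREATE UNIQUE INDEX post_comment_response_approval__approved_uidx
    ON post_comment_response_approval (post_comment_response_id, team_member_id)
    WHERE approved IS TRUE;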

writing trigger containing multiple conditions

So I have a table trans which has two columns, tx_type and ref_nbr, and I want to create a trigger that enforces the following conditions on the trans table.
The following two conditions should be ensured:
if tx_type = D or W, then ref_nbr should match branch_nbr in the branch table
if tx_type = B, P or R, then ref_nbr should match mer_nbr in the mer table
Triggers are not intended for keeping relations in a database consistent; use foreign keys for that. So instead of giving the trans table a single ref_nbr column, use two columns, one for each relation (foreign key). Additionally, you can create a check constraint to make sure the correct column is filled for a given tx_type.
If you try to use triggers, you will have problems with concurrent transactions changing the related tables, e.g. deleting the row your ref_nbr points to.
Example definitions for mer, branch and trans tables with some sample inserts:
create table branch(
    branch_nbr number generated by default on null as identity start with 3 primary key,
    branch_name varchar2(100) not null
);
create table mer (
    mer_nbr number generated by default on null as identity start with 2 primary key,
    mer_name varchar2(100) not null
);
create table trans (
    id number generated by default on null as identity primary key,
    tx_type varchar2(1) not null,
    ref_branch_nbr number,
    ref_mer_nbr number,
    constraint ck_tx_type check (tx_type in ('D', 'W', 'B', 'P', 'R')),
    constraint ck_correct_ref_for_tx_type
        check (
            (tx_type in ('D', 'W') and ref_branch_nbr is not NULL and ref_mer_nbr is NULL)
            or (tx_type in ('B', 'P', 'R') and ref_branch_nbr is NULL and ref_mer_nbr is not NULL)
        ),
    constraint fk_trans_ref_branch_nbr
        foreign key (ref_branch_nbr)
        references branch(branch_nbr),
    constraint fk_trans_ref_mer_nbr
        foreign key (ref_mer_nbr)
        references mer(mer_nbr)
);
insert into branch(branch_nbr, branch_name) values(1, 'Master');
insert into branch(branch_nbr, branch_name) values(2, 'Test');
insert into mer(mer_nbr, mer_name) values(1, 'Test to Master');
commit;
-- working:
insert into trans(tx_type, ref_mer_nbr) values('P', 1);
insert into trans(tx_type, ref_branch_nbr) values('D', 1);
-- not working - non existing parent:
insert into trans(tx_type, ref_mer_nbr) values('P', 999);
insert into trans(tx_type, ref_branch_nbr) values('D', 999);
-- not working - wrong tx_type or wrong ref column:
insert into trans(tx_type, ref_mer_nbr) values('D', 1);
insert into trans(tx_type, ref_branch_nbr) values('P', 1);
insert into trans(tx_type, ref_branch_nbr, ref_mer_nbr ) values('P', 1, 1);
-- not working - can't insert without tx_type
insert into trans(ref_mer_nbr, ref_branch_nbr) values(1, 1);

Migrating data from old table to new table Postgres with extra column

(Old and new table structures were shown as images in the original post; the new hotel table adds required created_by and created_date columns.)
Query:
INSERT INTO hotel (id, name, hotel_type, active, parent_hotel_id)
SELECT id, name, hotel_type, active, parent_hotel_id
FROM dblink('demopostgres', 'SELECT id, name, hotel_type, active, parent_hotel_id FROM hotel')
AS data(id bigint, name character varying, hotel_type character varying, active boolean, parent_hotel_id bigint);
Following error occurs:
ERROR: null value in column "created_by" violates not-null constraint
DETAIL: Failing row contains (1, Test Hotel, THREE_STAR, t, null,
null, null, null, null, null). SQL state: 23502
I tried to insert values for the other required columns as well. Note: created_by is jsonb:
created_by = '{
"id": 1,
"email": "tes#localhost",
"login": "test",
"lastName": "Test",
"firstName": "Test",
"displayName": "test"
}'
created_date = '2020-02-22 16:09:08.346'
How can I pass default values for created_by and created_date column while moving data from the old table?
There are several choices.
First, the INSERT is failing because the field is NOT NULL. You could use ALTER TABLE (https://www.postgresql.org/docs/12/sql-altertable.html) to drop that constraint for the import, update the fields with values, and then reset NOT NULL:
ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
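For example, a sketch assuming the target table is hotel and that created_by and created_date are the columns blocking the import (the backfill values are placeholders):
-- temporarily drop the NOT NULL constraints, import, backfill, then restore them
ALTER TABLE hotel ALTER COLUMN created_by DROP NOT NULL;
ALTER TABLE hotel ALTER COLUMN created_date DROP NOT NULL;
-- ... run the INSERT ... SELECT ... FROM dblink(...) shown above ...
UPDATE hotel
SET created_by = '{"login": "migration"}'::jsonb,
    created_date = '2020-02-22 16:09:08.346'
WHERE created_by IS NULL;
ALTER TABLE hotel ALTER COLUMN created_by SET NOT NULL;
ALTER TABLE hotel ALTER COLUMN created_date SET NOT NULL;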
Second, as #XraySensei said, you could add DEFAULT values to the table using ALTER TABLE:
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
action [, ... ]
ALTER [ COLUMN ] column_name SET DEFAULT expression
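Concretely, that might look like the following (again a sketch against the hotel table from the question; the default values are placeholders):
-- hypothetical defaults; a DEFAULT only fills columns that the INSERT omits
ALTER TABLE hotel ALTER COLUMN created_by SET DEFAULT '{"login": "migration"}'::jsonb;
ALTER TABLE hotel ALTER COLUMN created_date SET DEFAULT now();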
The third option is to embed the defaults into the query:
create table orig_test(id integer NOT NULL, fld_1 varchar, fld_2 integer NOT NULL);
-- target table assumed for this example: same columns plus a NOT NULL date column fld_3
create table default_test(id integer NOT NULL, fld_1 varchar, fld_2 integer NOT NULL, fld_3 date NOT NULL);
insert into orig_test(id, fld_1, fld_2) values (1, 'test', 4);
insert into orig_test(id, fld_1, fld_2) values (2, 'test', 7);
insert into default_test (id, fld_1, fld_2) select id, fld_1, fld_2 from orig_test;
ERROR: null value in column "fld_3" violates not-null constraint
DETAIL: Failing row contains (1, test, 4, null).
insert into default_test (id, fld_1, fld_2, fld_3) select id, fld_1, fld_2, '06/14/2020' AS fld_3 from orig_test ;
INSERT 0 2

Unique constraint on condition in psql

I need to implement unique constraints on a psql table.
I have these columns:
1) date
2) employee
3) client_id
4) start_time
I am trying to add two constraints:
1) a uniqueness rule on date, employee, start_time, client_id; this will simply work with a unique constraint
BUT THE SECOND CONSTRAINT is for the case where we have already created an entry with date, employee, start_time and client_id is False
-> so if someone tries to create the same entry, we need to check a constraint like:
does any entry already exist with the fields "date, employee_id, start_time" AND client_id = False
In simple words:
1) if all 4 fields already exist (unique constraint) > display a warning that the record exists
2) if a record with the 3 fields and client_id = null exists > display a warning that the record exists and assign the client_id
Any hint would be helpful.
I think you need partial indexes. One will cover the case when client_id is provided and the other will deal with a NULL client_id.
create table uni(val1 int, val2 text, val3 date, client_id int);
create unique index record_exists on uni(val1, val2, val3, client_id)
where client_id is not null;
create unique index record_exists_assign_client_id on uni(val1, val2, val3)
where client_id is null;
insert into uni values (1, 'test', current_date, 43), (2, 'test2', current_date, null);
--OK
insert into uni values (1, 'test', current_date, 43);
--duplicate key value violates unique constraint "record_exists"
insert into uni values (1, 'test', current_date, null);
--OK
insert into uni values (2, 'test2', current_date, null);
--duplicate key value violates unique constraint "record_exists_assign_client_id"
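For point 2 in the question (assigning the client_id to the row that matched only on the first three columns), a plain UPDATE against the NULL row is one option. A sketch using the uni example table above; swap in your real column names:
-- claim the existing row that has no client_id yet; if this updates 0 rows,
-- no such entry exists and a normal insert can proceed
update uni
set client_id = 77
where val1 = 2
  and val2 = 'test2'
  and val3 = current_date
  and client_id is null;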

Conditional composite key in MySQL?

So I have this table with a composite key, basically 'userID'-'data' must be unique (see my other question SQL table - semi-unique row?)
However, I was wondering if it was possible to make this only come into effect when userID is not zero? By that I mean, 'userID'-'data' must be unique for non-zero userIDs?
Or am I barking up the wrong tree?
Thanks
Mala
SQL constraints apply to every row in the table. You can't make them conditional based on certain data values.
However, if you could use NULL instead of zero, you can get around the unique constraint. A unique constraint allows multiple entries that have NULL. The reason is that uniqueness means no two equal values can exist. Equality means value1 = value2 must be true. But in SQL, NULL = NULL is unknown, not true.
-- composite unique constraint on (userid, data), as described above
CREATE TABLE MyTable (id SERIAL PRIMARY KEY, userid INT, data VARCHAR(64), UNIQUE (userid, data));
INSERT INTO MyTable (userid, data) VALUES ( 1, 'foo');
INSERT INTO MyTable (userid, data) VALUES ( 1, 'bar');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'baz');
So far so good, now you might think the following statements would violate the unique constraint, but they don't:
INSERT INTO MyTable (userid, data) VALUES ( 1, 'baz');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'foo');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'baz');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'baz');
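If the rest of the application still expects zero rather than NULL, you can map it back when reading (a small sketch):
-- present NULL userids as 0 to callers that expect the old convention
SELECT id, COALESCE(userid, 0) AS userid, data
FROM MyTable;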