Migrating data from old table to new table Postgres with extra column - sql

Table structure: the old and new table definitions were posted as images in the original question; the new hotel table additionally has NOT NULL created_by (jsonb) and created_date columns.
Query:
INSERT INTO hotel (id, name, hotel_type, active, parent_hotel_id)
SELECT id, name, hotel_type, active, parent_hotel_id
FROM dblink('demopostgres', 'SELECT id, name, hotel_type, active, parent_hotel_id FROM hotel')
AS data(id bigint, name character varying, hotel_type character varying, active boolean, parent_hotel_id bigint);
Following error occurs:
ERROR: null value in column "created_by" violates not-null constraint
DETAIL: Failing row contains (1, Test Hotel, THREE_STAR, t, null,
null, null, null, null, null). SQL state: 23502
I tried to supply the other required columns as well.
Note: created_by is a jsonb column.
created_by = '{
"id": 1,
"email": "tes#localhost",
"login": "test",
"lastName": "Test",
"firstName": "Test",
"displayName": "test"
}'
created_date = '2020-02-22 16:09:08.346'
How can I pass default values for created_by and created_date column while moving data from the old table?

There are several choices.
First, the INSERT is failing because the columns are NOT NULL. You could use ALTER TABLE (https://www.postgresql.org/docs/12/sql-altertable.html) to drop that constraint for the import, update the columns with values, and then set NOT NULL again:
ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
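Applied to the columns from the question, a minimal sketch might be:
ALTER TABLE hotel ALTER COLUMN created_by DROP NOT NULL;
ALTER TABLE hotel ALTER COLUMN created_date DROP NOT NULL;
-- run the dblink INSERT, backfill created_by and created_date, then:
ALTER TABLE hotel ALTER COLUMN created_by SET NOT NULL;
ALTER TABLE hotel ALTER COLUMN created_date SET NOT NULL;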
Second, as @XraySensei said, you could add DEFAULT values to the table using ALTER TABLE:
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
action [, ... ]
ALTER [ COLUMN ] column_name SET DEFAULT expression
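For the question's columns that could look like this (a sketch, assuming created_date is a timestamp column; defaults only apply to columns omitted from the INSERT, which is already the case here):
ALTER TABLE hotel ALTER COLUMN created_date SET DEFAULT '2020-02-22 16:09:08.346';
ALTER TABLE hotel ALTER COLUMN created_by SET DEFAULT '{"id": 1, "email": "tes@localhost", "login": "test", "lastName": "Test", "firstName": "Test", "displayName": "test"}'::jsonb;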
The third option is to embed the defaults into the query itself. Assuming a target table default_test that mirrors orig_test plus a NOT NULL fld_3 column:
create table orig_test(id integer NOT NULL, fld_1 varchar, fld_2 integer NOT NULL);
create table default_test(id integer NOT NULL, fld_1 varchar, fld_2 integer NOT NULL, fld_3 date NOT NULL);
insert into orig_test(id, fld_1, fld_2) values (1, 'test', 4);
insert into orig_test(id, fld_1, fld_2) values (2, 'test', 7);
insert into default_test (id, fld_1, fld_2) select id, fld_1, fld_2 from orig_test;
ERROR: null value in column "fld_3" violates not-null constraint
DETAIL: Failing row contains (1, test, 4, null).
insert into default_test (id, fld_1, fld_2, fld_3) select id, fld_1, fld_2, '06/14/2020' AS fld_3 from orig_test ;
INSERT 0 2
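Applied to the question's migration, the third option might look like this (a sketch, assuming created_date is a timestamp column):
INSERT INTO hotel (id, name, hotel_type, active, parent_hotel_id, created_by, created_date)
SELECT id, name, hotel_type, active, parent_hotel_id,
       '{"id": 1, "email": "tes@localhost", "login": "test", "lastName": "Test", "firstName": "Test", "displayName": "test"}'::jsonb,
       '2020-02-22 16:09:08.346'::timestamp
FROM dblink('demopostgres', 'SELECT id, name, hotel_type, active, parent_hotel_id FROM hotel')
AS data(id bigint, name character varying, hotel_type character varying, active boolean, parent_hotel_id bigint);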


How do we design schema for user settings table for postgresql?

How do we design the schema for a user settings/preferences table in a SQL database like PostgreSQL?
I am interested in the proper way to design the schema of a users_setting table where users are able to modify their settings. This seems to be a 1-to-1 relationship, because each row of the users table corresponds to a single row in the users_setting table.
Is this the wrong way to do it? I have searched online and could not really find any useful example schemas where users manage their settings, so I am asking here. I am certain this will help many other people as well.
Here is what my current design looks like
DROP TABLE if exists users cascade;
DROP TABLE IF EXISTS "users";
DROP SEQUENCE IF EXISTS users_id_seq;
CREATE SEQUENCE users_id_seq INCREMENT 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1;
CREATE TABLE "public"."users" (
"id" bigint DEFAULT nextval('users_id_seq') NOT NULL,
"email" text NOT NULL,
"password" text NOT NULL,
"full_name" text NOT NULL,
"status" text NOT NULL,
"is_verified" boolean NOT NULL,
"role" text NOT NULL,
"created_at" timestamptz NOT NULL,
"updated_at" timestamptz NOT NULL,
"verified_at" timestamptz NOT NULL,
CONSTRAINT "users_email_key" UNIQUE ("email"),
CONSTRAINT "users_pkey" PRIMARY KEY ("id")
) WITH (oids = false);
DROP TABLE if exists users_setting cascade;
DROP TABLE IF EXISTS "users_setting";
DROP SEQUENCE IF EXISTS users_setting_id_seq;
CREATE SEQUENCE users_setting_id_seq INCREMENT 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1;
CREATE TABLE "public"."users_setting" (
"id" bigint DEFAULT nextval('users_setting_id_seq') NOT NULL,
"default_currency" text NOT NULL,
"default_timezone" text NOT NULL,
"default_notification_method" text NOT NULL,
"default_source" text NOT NULL,
"default_cooldown" integer NOT NULL,
"updated_at" timestamptz NOT NULL,
"user_id" bigint,
CONSTRAINT "users_setting_pkey" PRIMARY KEY ("id")
) WITH (oids = false);
ALTER TABLE ONLY "public"."users_setting" ADD CONSTRAINT "users_setting_user_id_fkey" FOREIGN KEY (user_id) REFERENCES "users"(id) NOT DEFERRABLE;
begin transaction;
INSERT INTO "users" ("id", "email", "password", "full_name", "status", "is_verified", "role", "created_at", "updated_at", "verified_at") VALUES
(1, 'users1@email.com', 'password', 'users1', 'active', '1', 'superuser', '2022-07-05 01:05:50.22384+00', '0001-01-01 00:00:00+00', '2022-07-11 14:10:26.615722+00'),
(2, 'users2@email.com', 'password', 'users2', 'active', '0', 'user', '2022-07-05 01:05:50.22384+00', '0001-01-01 00:00:00+00', '2022-07-11 14:10:26.615722+00');
INSERT INTO "users_setting" ("id", "default_currency", "default_timezone", "default_notification_method", "default_source", "default_cooldown", "updated_at", "user_id") VALUES
(1, 'usd', 'utc', 'email', 'google', 300, '2022-07-13 01:05:50.22384+00', 1),
(2, 'usd', 'utc', 'sms', 'yahoo', 600, '2022-07-14 01:05:50.22384+00', 2);
commit;
So let's say I want to return a single row where users.email is users1@email.com; here is the query I can run:
select * from users, users_setting where users.id = users_setting.user_id AND users.email = 'users1@email.com';
id email password full_name status is_verified role created_at updated_at verified_at id default_currency default_timezone default_notification_method default_source default_cooldown updated_at user_id
1 users1@email.com password users1 active 1 superuser 2022-07-05 01:05:50.22384+00 0001-01-01 00:00:00+00 2022-07-11 14:10:26.615722+00 1 usd utc email google 300 2022-07-13 01:05:50.22384+00 1
I could keep all of this in a single table, but the row would get really wide as I add more and more settings. User settings is just one case; there are other tables similar to this. So it would be great to know how to design a situation like this properly.
In your case a JSONB column could do the job:
ALTER TABLE public.users ADD user_settings jsonb NULL;
Updating the settings will look something like:
UPDATE users
SET user_settings = '{"default_currency": "usd", "default_timezone" : "utc"}'
WHERE id = 1;
And select:
select * from users WHERE id = 1;
You will find the settings stored in the user_settings column of the returned row.
Also consider that in PostgreSQL you can index a JSON column, for example to query on a particular setting. See here: https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING
Specifically, the documentation notes:
Still, with appropriate use of expression indexes, the above query can
use an index. If querying for particular items within the "tags" key
is common, defining an index like this may be worthwhile:
CREATE INDEX idxgintags ON api USING GIN ((jdoc -> 'tags'));
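Adapted to the user_settings column above, a minimal sketch (the index name is made up) could be:
CREATE INDEX idx_users_settings ON public.users USING GIN (user_settings);
-- find users whose settings contain a particular key/value pair
SELECT * FROM users WHERE user_settings @> '{"default_currency": "usd"}';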
Alternatively, a plain key/value table lets you avoid JSON. The drawback is that setting_value cannot be tailored to the exact type you need, unlike the typed columns in your first idea.
For example, you can create:
CREATE TABLE public.user_setting (
user_id bigint NOT NULL,
setting_name text NOT NULL,
setting_value text NULL,
CONSTRAINT user_setting_pk PRIMARY KEY (user_id,setting_name)
);
ALTER TABLE public.user_setting ADD CONSTRAINT user_setting_fk FOREIGN KEY (user_id) REFERENCES public.users(id);
At this point I suggest having two queries, one for users and one for settings:
SELECT *
FROM user_setting us
where user_id = 1;
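Rows and lookups for the key/value design could then look like this (a sketch reusing the sample values from the question):
INSERT INTO public.user_setting (user_id, setting_name, setting_value) VALUES
(1, 'default_currency', 'usd'),
(1, 'default_timezone', 'utc'),
(1, 'default_cooldown', '300');
-- fetch a single setting for one user
SELECT setting_value FROM user_setting WHERE user_id = 1 AND setting_name = 'default_cooldown';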

postgresql - key/value lookup to json object

Running Postgres 9.6.
So I have this key/value lookup table which stores the deepest child values of a huge JSON object. Given a table of this structure:
CREATE TABLE myschema.file_items
(
id integer NOT NULL DEFAULT nextval('file_items_id_seq'::regclass),
file_id integer NOT NULL,
key character varying[] COLLATE pg_catalog."default" NOT NULL,
value character varying COLLATE pg_catalog."default",
status character varying COLLATE pg_catalog."default",
CONSTRAINT file_items_pkey PRIMARY KEY (id)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE myschema.file_items
OWNER to postgres;
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog1","description"}', 'val1', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog1","cost"}', '100', null);
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog1","window"}', '[-200,500]', 'not verified');
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog2","description"}', 'val2', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog2","cost"}', '200', null);
insert into file_items (file_id, key, value, status)
values (1, '{"cogs","cog2","window"}', '[-300,500]', null);
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget1","description"}', 'wid1', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget1","cost"}', '100', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget1","window"}', '[-200,500]', 'not verified');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget2","description"}', 'wid2', null);
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget2","cost"}', '300', 'approved');
insert into file_items (file_id, key, value, status)
values (1, '{"widgets","widget2","window"}', '[-1000,700]', null);
I can query all my cogs like so:
select *
from file_items
where 'cogs' = any(key)
How would I reverse-engineer this back into an object? That is, I'd like to somehow generate a JSON object with the following format:
"cogs": {
"cog1": {
"description": "val1",
"cost":100,
"window":[-200,500]
},
"cog2": {
"description": "val2",
"cost":200,
"window":[-300,500]
}
}
Note that I'm deliberately not wanting an array of cogs objects. They are actual properties of the cogs object. It is done this way because the incoming JSON objects can have properties we don't know about in advance, so we use a key/value mapping table to dynamically identify what these property values are (i.e., we don't know beforehand that we're going to have a "cog67" object, or what kind of properties will be attached to that object).
Since this query would ultimately be fired from a Node.js package ('pg' module...), if I can't re-create the json object via query, I may need to do it in the javascript itself. Just wondering if it's possible to to correctly build the json object at the database level and return that though, rather than querying a bunch of rows and re-constructing the object in the server-side code.
Any help would be greatly appreciated! Thank you!
Use jsonb_object_agg() twice, once for each level of aggregation (jsonb_pretty() is not necessary; it is used only for nicer output):
select jsonb_pretty(jsonb_build_object(key, jsonb_object_agg(subkey, value)))
from (
select key[1], key[2] as subkey, jsonb_object_agg(key[3], value) as value
from file_items
where 'cogs' = any(key)
group by key[1], key[2]
) s
group by key;
jsonb_pretty
-------------------------------------
{ +
"cogs": { +
"cog1": { +
"cost": "100", +
"window": "[-200,500]",+
"description": "val1" +
}, +
"cog2": { +
"cost": "200", +
"window": "[-300,500]",+
"description": "val2" +
} +
} +
}
(1 row)
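If you later want the whole object for every top-level key, not just cogs, a sketch following the same pattern (one more level of jsonb_object_agg()) might be:
select jsonb_pretty(jsonb_object_agg(top_key, obj))
from (
    select key as top_key, jsonb_object_agg(subkey, value) as obj
    from (
        select key[1], key[2] as subkey, jsonb_object_agg(key[3], value) as value
        from file_items
        group by key[1], key[2]
    ) s
    group by key
) t;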

SQL - Inserting into postgresql table produces error on semi-colon

I'm trying to insert some test data into a table to check the functionality of a web servlet. However, using pgAdmin 4 to do the insert, I am running into an issue I'm not sure how to rectify. The last value (an image byte stream) should be null for this test data. Here is my insert statement:
INSERT INTO schema.tablename("Test Title", "Test Content", "OldWhovian", "2016-07-29 09:13:00", "1469808871694", "null");
I get back:
ERROR: syntax error at or near ";"
LINE 1: ...ldWhovian", "2016-07-29 09:13:00", "1469808871694", "null");
^
********** Error **********
ERROR: syntax error at or near ";"
SQL state: 42601
Character: 122
I've tried removing the semi-colon just for kicks, and it instead errors on the close parenthesis. Is it an issue related to the null? I tried doing this without putting quotations around the null and I get back the same error but on the null instead of the semi-colon. Any help is appreciated, I am new to DBA/DBD related activities.
Using PostgreSQL 9.6.
An INSERT statement usually has a first part where you specify which columns you want to insert into and a second part where you specify the values you want to insert. Your statement only has the parenthesized list, with no VALUES keyword, so PostgreSQL parses it as a column list and then fails when the statement ends instead of continuing with VALUES.
INSERT INTO table_name (column1, column2) VALUES (value1, value2);
You can omit the column list only if you supply values for all columns in the second part. If you have a table with seven columns, you can omit the first part as long as you supply seven values in the second part.
INSERT INTO table_name VALUES (value1, value2, value3, ...);
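Applied to the statement from the question, the missing piece is the VALUES keyword (the column names below are hypothetical, since the real table definition was not posted):
-- column names are placeholders; use your table's actual columns
INSERT INTO schema.tablename (title, content, author, created_at, legacy_id, image)
VALUES ('Test Title', 'Test Content', 'OldWhovian', '2016-07-29 09:13:00', '1469808871694', NULL);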
Example:
drop table if exists my_table;
create table my_table (
id int not null,
username varchar(10) not null,
nickname varchar(10),
identification_number varchar(10),
created timestamptz default current_timestamp
);
INSERT INTO my_table (id, username) VALUES (1, 'user01');
You insert into the columns id and username only. The column created has a default value specified, so when you do not supply a value in the insert the default is used instead. nickname and identification_number accept null values, so NULL is used when no value is supplied.
INSERT INTO my_table VALUES (2, 'user02', NULL, NULL, current_timestamp);
That is the same as the previous insert, but here the first part is omitted, so you must supply values for all columns. If you did not, you would get an error.
If you want to insert multiple rows, you can use several statements.
INSERT INTO my_table (id, username, identification_number) VALUES (3, 'user03', 'BD5678');
INSERT INTO my_table (id, username, created) VALUES (4, 'user04', '2016-07-30 09:26:57');
Or you can use the PostgreSQL multi-row VALUES syntax for such inserts.
INSERT INTO my_table (id, username, nickname, identification_number) VALUES
(5, 'user05', 'fifth', 'SX59445'),
(6, 'user06', NULL, NULL),
(7, 'user07', NULL, 'AG1123');
At the beginning I wrote that you can omit the first part (where you specify columns) only if you supply values for all columns in the second part. That is not completely true. In the special case where the columns are nullable (can contain NULL) or have DEFAULT values specified, you can even omit the values altogether:
create sequence my_seq start 101;
create table my_table2 (
id int not null default nextval('my_seq'),
username varchar(10) not null default 'default',
nickname varchar(10),
identification_number varchar(10),
created timestamptz default current_timestamp
);
INSERT INTO my_table2 DEFAULT VALUES;
INSERT INTO my_table2 DEFAULT VALUES;
INSERT INTO my_table2 DEFAULT VALUES;
Result:
101 default NULL NULL 2016-07-30 10:28:27.797+02
102 default NULL NULL 2016-07-30 10:28:27.797+02
103 default NULL NULL 2016-07-30 10:28:27.797+02
When you do not specify values, the defaults are used, or NULL. In the example above the id column takes its default value from the sequence, username has the default string 'default', nickname and identification_number are NULL when not specified, and created defaults to the current timestamp.
More information:
PostgreSQL INSERT

SQLite3 UNIQUE constraint failed error

I am trying to create a database which allows users to create 'to do' lists and fill them with items to complete. However, when inserting data into the tables it gives me a UNIQUE constraint failed error and I don't know how to solve it. This is my code for creating the database and inserting data.
CREATE TABLE user (
user_id integer NOT NULL PRIMARY KEY,
first_name varchar(15) NOT NULL,
title varchar(5) NOT NULL,
username varchar(15) NOT NULL,
password varchar(20) NOT NULL,
email varchar(50) NOT NULL,
bio text NOT NULL
);
CREATE TABLE list (
list_id integer NOT NULL PRIMARY KEY,
list_name varchar(10) NOT NULL,
user_user_id integer NOT NULL,
FOREIGN KEY (user_user_id) REFERENCES user(user_id)
);
CREATE TABLE item (
item_id integer NOT NULL PRIMARY KEY,
item text NOT NULL,
completed boolean NOT NULL,
list_list_id integer NOT NULL,
FOREIGN KEY (list_list_id) REFERENCES list(list_id)
);
-- Data:
INSERT INTO user VALUES (1, "Name1", "Title1", "Username1", "Password1", "Email1", "Bio1");
INSERT INTO user VALUES (2, "Name2", "Title2", "Username2", "Password2", "Email2", "Bio2");
INSERT INTO user VALUES (3, "Name3", "Title3", "Username3", "Password3", "Email3", "Bio3");
INSERT INTO list VALUES (1, "user1-list1", 1);
INSERT INTO list VALUES (2, "user1-list2", 1);
INSERT INTO list VALUES (3, "user1-list3", 1);
INSERT INTO list VALUES (1, "user2-list1", 2);
INSERT INTO list VALUES (1, "user3-list1", 3);
INSERT INTO list VALUES (2, "user3-list2", 3);
INSERT INTO item VALUES (1, "user1-list1-item1", "FALSE", 1);
INSERT INTO item VALUES (2, "user1-list1-item2", "FALSE", 1);
INSERT INTO item VALUES (1, "user1-list2-item1", "FALSE", 2);
INSERT INTO item VALUES (1, "user1-list3-item1", "FALSE", 3);
INSERT INTO item VALUES (2, "user1-list3-item2", "FALSE", 3);
INSERT INTO item VALUES (1, "user2-list1-item1", "FALSE", 1);
INSERT INTO item VALUES (2, "user2-list1-item1", "FALSE", 1);
INSERT INTO item VALUES (1, "user3-list1-item1", "FALSE", 1);
INSERT INTO item VALUES (1, "user3-list3-item1", "FALSE", 2);
I have copied the errors I receive below:
Error: near line 43: UNIQUE constraint failed: list.list_id
Error: near line 44: UNIQUE constraint failed: list.list_id
Error: near line 45: UNIQUE constraint failed: list.list_id
Error: near line 49: UNIQUE constraint failed: item.item_id
Error: near line 50: UNIQUE constraint failed: item.item_id
Error: near line 51: UNIQUE constraint failed: item.item_id
Error: near line 52: UNIQUE constraint failed: item.item_id
Error: near line 53: UNIQUE constraint failed: item.item_id
Error: near line 54: UNIQUE constraint failed: item.item_id
Error: near line 55: UNIQUE constraint failed: item.item_id
Any help would be appreciated!
You get a UNIQUE constraint failed error when the data you are inserting contains a value that already exists in a column constrained to be unique (such as a primary key) in the table you are inserting into.
If you want SQLite to ignore that error and continue adding other records, then do this:
INSERT OR IGNORE INTO tablename VALUES (value1, value2, ...);
If you want to replace the values in the table whenever the entry already exists, then do this:
INSERT OR REPLACE INTO tablename VALUES (value1, value2, ...);
This saves a lot of processing on your part and is quite useful.
You have set list_id to be the primary key of the list table, which means that value must be unique for each record. Trying to insert multiple records with the same list_id therefore causes the error.
The issue is the same for the item table.
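If each user's lists (and each list's items) are meant to be numbered independently, as the sample data suggests, one possible fix is a composite primary key instead of a single-column one. A sketch for the list table:
CREATE TABLE list (
list_id integer NOT NULL,
list_name varchar(10) NOT NULL,
user_user_id integer NOT NULL,
PRIMARY KEY (list_id, user_user_id),
FOREIGN KEY (user_user_id) REFERENCES user(user_id)
);
Note that item's foreign key would then need to reference both columns. The simpler alternative is to keep the single-column primary keys and give every list and item a globally unique id in the INSERT statements.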

sql insert fails on postgresql database

I am trying to run the following insert statement using pgadmin3:
INSERT INTO device
VALUES
(12345,
'asdf',
'OY8YuDFLYdv',
'2',
'myname',
'2013-04-24 11:30:08',
Null,Null)
But I keep getting the following error message:
ERROR: invalid input syntax for integer: "asdf"
LINE 4: 'asdf',
^
********** Error **********
ERROR: invalid input syntax for integer: "asdf"
SQL state: 22P02
Character: 42
Here's the table definition:
CREATE TABLE device
(
device_id integer NOT NULL DEFAULT nextval('device_device_id_seq'::regclass),
userid integer NOT NULL,
description character varying(255),
password character varying(255) NOT NULL,
user_id integer NOT NULL,
createdname character varying(255),
createddatetime timestamp without time zone,
updatedname character varying(255),
updateddatetime timestamp without time zone,
CONSTRAINT device_pkey PRIMARY KEY (device_id )
)
WITH (
OIDS=FALSE
);
ALTER TABLE device
OWNER TO appadmin;
Can you tell me where I'm going wrong? I've tried changing the single quotes to double quotes but that didn't help.
I don't want to have to list all the column names in the INSERT if I don't have to.
Thanks.
Apparently you're expecting the INSERT to skip device_id since it is the primary key and has a default that comes from a sequence. That's not going to happen so PostgreSQL thinks you mean this:
insert into device (device_id, userid, ...)
values (12345, 'asdf', ...);
If you insist on not listing your columns explicitly (and making the people that get to maintain your code suffer needlessly) then you can specify DEFAULT in the VALUES to tell PostgreSQL to use the PK's default value; from the fine manual:
INSERT INTO table_name [ ( column_name [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
[ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
[...]
DEFAULT
The corresponding column will be filled with its default value.
For example:
INSERT INTO device
VALUES
(DEFAULT,
12345,
'asdf',
...
But really, you should just specify the columns to make the SQL easier to understand and more robust when the schema changes.
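For example, a sketch of the explicit version, assuming the values in the question line up with the table's columns in order after device_id:
INSERT INTO device (userid, description, password, user_id, createdname, createddatetime, updatedname, updateddatetime)
VALUES (12345, 'asdf', 'OY8YuDFLYdv', 2, 'myname', '2013-04-24 11:30:08', NULL, NULL);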