INSERT or UPDATE tables with SERIAL primary key - sql

Simple question:
How can I UPSERT in PostgreSQL tables that have SERIAL (also known as auto-increment) primary keys?
I couldn't find a really authoritative source, which is why I'm asking here.
This is what I have so far, but it doesn't work:
INSERT INTO public.friendship (Username_1, Username_2, Status)
VALUES ("User1", "User2", "Accepted")
ON CONFLICT (Username_1, Username_2)
DO UPDATE SET (Status) = ("Accepted")
WHERE Username_1 = "User1" AND Username_2 = "User2";
Full schema (ER diagram). The table is "Friendship", but UMLStar exports odd SQL with table names like "public.Friendship". That doesn't matter, though; it works in the other cases (I got Register/Login working successfully).
messenger=# \d
List of relations
Schema | Name | Type | Owner
--------+---------------------------------+----------+----------
public | activitydata | table | postgres
public | activitydata_activitydataid_seq | sequence | postgres
public | chatroom | table | postgres
public | chatroom_chatroomid_seq | sequence | postgres
public | country | table | postgres
public | friendship | table | postgres
public | friendship_friendshipid_seq | sequence | postgres
public | message | table | postgres
public | message_messageid_seq | sequence | postgres
public | participates | table | postgres
public | participates_participatesid_seq | sequence | postgres
public | user | table | postgres
(12 rows)
messenger=# \d Friendship
Table "public.friendship"
Column | Type | Modifiers
--------------+---------+-------------------------------------------------------------------
friendshipid | integer | not null default nextval('friendship_friendshipid_seq'::regclass)
username_1 | text | not null
username_2 | text | not null
status | text | not null
Indexes:
"friendship_pkey" PRIMARY KEY, btree (friendshipid)
"friendship_username_1_idx" btree (username_1)
"friendship_username_2_idx" btree (username_2)
Executed command:
messenger=# INSERT INTO public.friendship (Username_1, Username_2, Status)
messenger-# VALUES ('User1', 'User2', 'Accepted')
messenger-# ON CONFLICT (Username_1, Username_2)
messenger-# DO UPDATE SET (Status) = ('Accepted')
messenger-# WHERE Username_1 = 'User1' AND Username_2 = 'User2';
ERROR: column reference "username_1" is ambiguous
LINE 5: WHERE Username_1 = 'User1' AND Username_2 = 'User2';

Table "public.friendship"
Column | Type | Modifiers
--------------+---------+-------------------------------------------------------------------
friendshipid | integer | not null default nextval('friendship_friendshipid_seq'::regclass)
username_1 | text | not null
username_2 | text | not null
status | text | not null
Indexes:
"friendship_pkey" PRIMARY KEY, btree (friendshipid)
"friendship_username_1_idx" btree (username_1)
"friendship_username_2_idx" btree (username_2)
This doesn't show a unique key on (username_1, username_2). What you should do is:
BEGIN;
DROP INDEX friendship_username_1_idx;
CREATE UNIQUE INDEX ON friendship (username_1, username_2);
COMMIT;
If you never query on username_2 without also querying on username_1, then you should drop friendship_username_2_idx as well and let the one compound UNIQUE index do the work.
For reference, we wanted \d Friendship because if you had a unique index it would show up like this:
Table "public.foo"
Column | Type | Modifiers
--------+---------+-----------
a | integer |
b | integer |
Indexes:
"foo_a_b_idx" UNIQUE, btree (a, b)
Without something there, how can you get a conflict?
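
Once the unique index exists, the statement itself still needs two fixes: string literals take single quotes in Postgres (double quotes denote identifiers), and the column references in the DO UPDATE's WHERE clause were ambiguous between the target table and the excluded row; here the WHERE can simply be dropped, since ON CONFLICT already pins down the row. A sketch of how the corrected statement might look:
INSERT INTO public.friendship (username_1, username_2, status)
VALUES ('User1', 'User2', 'Accepted')
ON CONFLICT (username_1, username_2)
DO UPDATE SET status = EXCLUDED.status;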

Sorry if I have missed the complexity of the question, but this seems simpler:
ALTER TABLE public.friendship
add column id serial;

Related

How to scan join query results into a struct containing a slice of structs using sqlx

Here are the tables that are relevant to this question:
lists:
Table "public.lists"
Column | Type | Collation | Nullable | Default
-------------+--------+-----------+----------+-------------------
id | uuid | | not null | gen_random_uuid()
user_id | uuid | | not null |
list_name | text | | not null |
description | text | | not null |
created_at | bigint | | not null |
updated_at | bigint | | not null |
Indexes:
"lists_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"fk_user_id" FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL
Referenced by:
TABLE "todos" CONSTRAINT "fk_list_id" FOREIGN KEY (list_id) REFERENCES lists(id) ON DELETE SET NULL
and todos:
Table "public.todos"
Column | Type | Collation | Nullable | Default
------------+---------+-----------+----------+-------------------
id | uuid | | not null | gen_random_uuid()
list_id | uuid | | not null |
user_id | uuid | | not null |
content | text | | not null |
done | boolean | | not null | false
created_at | bigint | | not null |
updated_at | bigint | | not null |
Indexes:
"todos_pkey" PRIMARY KEY, btree (id)
"todos_content_key" UNIQUE CONSTRAINT, btree (content)
Foreign-key constraints:
"fk_list_id" FOREIGN KEY (list_id) REFERENCES lists(id) ON DELETE SET NULL
"fk_user_id" FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL
and these are the structs that I would ideally like to use:
type Todo struct {
    ID        string `db:"id"`
    ListID    string `db:"list_id"`
    UserID    string `db:"user_id"`
    Content   string `db:"content"`
    Done      bool   `db:"done"`
    CreatedAt int64  `db:"created_at"`
    UpdatedAt int64  `db:"updated_at"`
}
type ListWithTodos struct {
    ID          string `db:"id"`
    UserID      string `db:"user_id"`
    ListName    string `db:"list_name"`
    Description string `db:"description"`
    Todos       []Todo
    CreatedAt   int64 `db:"created_at"`
    UpdatedAt   int64 `db:"updated_at"`
}
What I would like to do is select a list and attach all of its child todos (todos whose list_id equals the list's id) to the field ListWithTodos.Todos. I currently have this query, which returns all of the todos with the parent list attached (and I know this won't work with the struct I have):
SELECT l.*, t.* FROM lists l RIGHT JOIN todos t ON l.id=t.list_id WHERE l.id='insert uuid here';
and my Go code and structs obviously don't work with the query.
What query would I use to get the results into a single struct, and which sqlx functions would I use to do this?
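
For what it's worth, sqlx has no built-in way to scan a joined row set into a nested slice, so a common pattern (a sketch, not from the original thread; getListWithTodos is a made-up helper name) is to skip the join and issue two queries, Get for the parent row and Select for the children:
import (
    "github.com/jmoiron/sqlx"
)

// getListWithTodos loads one list and then its child todos,
// using the ListWithTodos and Todo structs defined above.
func getListWithTodos(db *sqlx.DB, listID string) (*ListWithTodos, error) {
    var list ListWithTodos
    // Get scans a single row into the tagged fields of ListWithTodos;
    // the untagged Todos field is left untouched.
    if err := db.Get(&list, `SELECT * FROM lists WHERE id = $1`, listID); err != nil {
        return nil, err
    }
    // Select scans every matching row into the slice field.
    if err := db.Select(&list.Todos, `SELECT * FROM todos WHERE list_id = $1`, listID); err != nil {
        return nil, err
    }
    return &list, nil
}
That is two round trips instead of one join, but each result set maps cleanly onto its struct.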

postgresql update column with least value of column from another table based on condition

I'm trying to run an update query on the column answer_date of a table P. I want to fill each row of answer_date in P with the unique date from the create_date column of H where H.postid matches P.acceptance_id and where P.acceptance_id is not null.
The query takes a long while to run, so I checked the interim changes in answer_date, but the entire column is still empty, as if it had just been created.
B-tree indexes exist on all of the mentioned columns.
Is there something wrong with the query?
UPDATE P
SET answer_date = subquery.date
FROM (SELECT DISTINCT H.create_date as date
FROM H, P
where H.postid=P.acceptance_id
) AS subquery
WHERE P.acceptance_id is not null;
Table schema is as follows:
Table "public.P"
Column | Type | Modifiers | Storage | Stats target | Description
-----------------------+-----------------------------+-----------+----------+--------------+-------------
id | integer | not null | plain | |
acceptance_id | integer | | plain | |
answer_date | timestamp without time zone | | plain | |
Indexes:
"posts_pkey" PRIMARY KEY, btree (id)
"posts_accepted_answer_id_idx" btree (acceptance_id) WITH (fillfactor='100')
and
Table "public.H"
Column | Type | Modifiers | Storage | Stats target | Description
-------------------+-----------------------------+-----------+----------+--------------+-------------
id | integer | not null | plain | |
postid | integer | | plain | |
create_date | timestamp without time zone | not null | plain | |
Indexes:
"H_pkey" PRIMARY KEY, btree (id)
"ph_creation_date_idx" btree (create_date) WITH (fillfactor='100')
Table P has 70 million rows and H has 220 million rows.
The Postgres version is 9.6.
The hardware is a Windows laptop with 8 GB of RAM.
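
For what it's worth, the subquery as written is not correlated with the row being updated: the P inside it is a second, independent copy of the table, so every target row is matched against the same arbitrary set of dates. A sketch of a likely rewrite, taking the least create_date per postid as the title suggests (untested against the original data):
UPDATE P
SET answer_date = sub.min_date
FROM (SELECT postid, MIN(create_date) AS min_date
      FROM H
      GROUP BY postid) AS sub
WHERE sub.postid = P.acceptance_id;
The IS NOT NULL test becomes redundant here, because a NULL acceptance_id can never equal sub.postid.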

Setting unique or check constraint to make a field unique in each entry of another unique field

I have a table as follows :
.-----------------------------.
| mytable |
-------------------------------
| id | primary key |
| field | varchar |
| name | varchar |
-------------------------------
id and field will be unique. But I want to make name also unique within each field, e.g.:
id | field | name
------------------------------------------------------
1 | f1 | n1
2 | f1 | n2
3 | f1 | n1 < Should be violated
4 | f2 | n1 < Should be fine
Just add a unique index or constraint:
create unique index idx_mytable_name_field on mytable(name, field)
Note: this solution will not work if you have NULL values in the column and you want the constraint to apply to them as well. By default Postgres ignores NULL values.
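If you are on PostgreSQL 15 or later and do want NULLs treated as equal for the constraint, you can override that default explicitly:
create unique index idx_mytable_name_field on mytable(name, field) nulls not distinct;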

Are there problems with this 'Soft Delete' solution using EAV tables?

I've read some information about the ugly side of just setting a deleted_at field in your tables to signify a row has been deleted.
Namely
http://richarddingwall.name/2009/11/20/the-trouble-with-soft-delete/
Are there any potential problems with taking a row from a table you want to delete and pivoting it into some EAV tables?
For instance.
Let's say I have two tables, deleted and deleted_rows, described as follows.
mysql> describe deleted;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| tablename | varchar(255) | YES | | NULL | |
| deleted_at | timestamp | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
mysql> describe deleted_rows;
+--------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| entity | int(11) | YES | MUL | NULL | |
| name | varchar(255) | YES | | NULL | |
| value | blob | YES | | NULL | |
+--------+--------------+------+-----+---------+----------------+
Now when you wanted to delete a row from any table you would delete it from the table then insert it into these tables as such.
deleted
+----+-----------+---------------------+
| id | tablename | deleted_at |
+----+-----------+---------------------+
| 1 | products | 2011-03-23 00:00:00 |
+----+-----------+---------------------+
deleted_rows
+----+--------+-------------+-------------------------------+
| id | entity | name | value |
+----+--------+-------------+-------------------------------+
| 1 | 1 | Title | A Great Product |
| 2 | 1 | Price | 55.00 |
| 3 | 1 | Description | You guessed it... it's great. |
+----+--------+-------------+-------------------------------+
A few things I see off the bat:
- You'll need to use application logic to do the pivot (Ruby, PHP, Python, etc.)
- The table could grow pretty big because I'm using blob to handle the unknown size of the row value
Do you see any other glaring problems with this type of soft delete?
Why not mirror your tables with archive tables?
create table mytable(
col_1 int
,col_2 varchar(100)
,col_3 date
,primary key(col_1)
)
create table mytable_deleted(
delete_id int not null auto_increment
,delete_dtm datetime not null
-- All of the original columns
,col_1 int
,col_2 varchar(100)
,col_3 date
,index(col_1)
,primary key(delete_id)
)
Then simply add on-delete triggers to your tables that insert the current row into the mirrored table before the deletion. That would give you a dead-simple and very performant solution.
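A minimal sketch of such a trigger for the example tables above (MySQL syntax; the trigger name is made up):
DELIMITER //
CREATE TRIGGER mytable_before_delete
BEFORE DELETE ON mytable
FOR EACH ROW
BEGIN
  -- Copy the outgoing row into the archive before it disappears;
  -- delete_id is filled in by auto_increment.
  INSERT INTO mytable_deleted (delete_dtm, col_1, col_2, col_3)
  VALUES (NOW(), OLD.col_1, OLD.col_2, OLD.col_3);
END//
DELIMITER ;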
You could actually generate the tables and trigger code using the data dictionary.
Note that you might not want a unique index on the original primary key (col_1) in the archive table, because you may actually end up deleting the same row twice over time if you are using natural keys. Unless you plan to hook up the archive tables in your application (for undo purposes), you can drop the index entirely. Also, I added the time of deletion (delete_dtm) and a surrogate key that can be used to delete the deleted (hehe) rows.
You may also consider range partitioning the archive table on delete_dtm. This makes it pretty much effortless to purge data from the tables.

How to change a primary key in SQL to auto_increment?

I have a table in MySQL that has a primary key:
mysql> desc gifts;
+---------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+-------+
| giftID | int(11) | NO | PRI | NULL | |
| name | varchar(80) | YES | | NULL | |
| filename | varchar(80) | YES | | NULL | |
| effectiveTime | datetime | YES | | NULL | |
+---------------+-------------+------+-----+---------+-------+
but I wanted to make it auto_increment.
The following statement failed. How can it be modified so that it works? Thanks.
mysql> alter table gifts modify giftID int primary key auto_increment;
ERROR 1068 (42000): Multiple primary key defined
Leave off the primary key attribute:
ALTER TABLE gifts MODIFY giftID INT AUTO_INCREMENT;
Certain column attributes, such as PRIMARY KEY, aren't exactly properties of the column so much as shortcuts for other things. A column marked PRIMARY KEY, for example, is placed in the PRIMARY index. Furthermore, all columns in the PRIMARY index are given the NOT NULL attribute. (Aside: to have a multi-column primary key, you must use a separate constraint clause rather than multiple PRIMARY KEY column attributes.) Since the column is already in the PRIMARY index, you don't need to specify it again when you modify the column. Try SHOW CREATE TABLE gifts; to see the effects of using the PRIMARY KEY attribute.
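To illustrate that aside, a multi-column key has to be declared as a table-level clause (a hypothetical table, not the one from the question):
CREATE TABLE gift_recipients (
  giftID INT NOT NULL,
  recipientID INT NOT NULL,
  PRIMARY KEY (giftID, recipientID) -- a separate constraint clause, not a column attribute
);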