Insert followed by delete query - sql

I want to apply the delete query first on my table and then apply the insert query on the same table.
The variable_page_category table has 2 columns (page_category_id, variable_id), which together form a composite primary key.
My table can look like this:
page_category_id | variable_id
-----------------+------------
               1 | 1
               2 | 1
               3 | 1
               4 | 2
               5 | 2
result_variable_page_category_delete AS (
    DELETE FROM common.variable_page_category
    WHERE variable_id = (dynamic_variable_json->>'id')::BIGINT
    RETURNING 1
),
result_variable_page_category AS (
    INSERT INTO common.variable_page_category (page_category_id, variable_id)
    SELECT
        (page_category_id::TEXT)::BIGINT,
        (dynamic_variable_json->>'id')::BIGINT
    FROM jsonb_array_elements_text((dynamic_variable_json->>'page_category_id')::JSONB) AS page_category_id
    RETURNING 1
)
but the two statements did not run sequentially, and I get the error below. Both queries are individually correct.
ERROR: duplicate key value violates unique constraint "variable_page_category_pkey"
Detail: Key (page_category_id, variable_id)=(1, 1) already exists.
How can I combine both queries so that the delete completes before the insert?
An update query is also an option, but since I'm new I can't handle an update query with ease, which is why I'm trying this approach first.

Why not simply concatenate both queries using ;? All sub-statements of a single WITH query run against the same snapshot, so the INSERT never sees the rows removed by the DELETE; two separate statements avoid that:
DELETE FROM common.variable_page_category
WHERE variable_id = (dynamic_variable_json->>'id')::BIGINT
;
INSERT INTO common.variable_page_category (page_category_id, variable_id)
SELECT
(page_category_id::TEXT)::BIGINT,
(dynamic_variable_json->>'id')::BIGINT
FROM jsonb_array_elements_text((dynamic_variable_json->>'page_category_id')::JSONB) AS page_category_id
You might want to add a transaction, depending on your client configuration.
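For example, a minimal sketch that makes the two statements atomic (assuming dynamic_variable_json is still available as a parameter, as in the question):

BEGIN;

DELETE FROM common.variable_page_category
WHERE variable_id = (dynamic_variable_json->>'id')::BIGINT;

INSERT INTO common.variable_page_category (page_category_id, variable_id)
SELECT
    (page_category_id::TEXT)::BIGINT,
    (dynamic_variable_json->>'id')::BIGINT
FROM jsonb_array_elements_text((dynamic_variable_json->>'page_category_id')::JSONB) AS page_category_id;

COMMIT;  -- both statements succeed or fail together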

Related

Using Insert Into Select while converting values to match a third table

I don't know how to simply phrase my question, but here is the scenario. I have two tables with data as follows:
CertStatus Table:
CertStatus_KEY  CertStatus_ID
1               ACTIVE
2               EXPIRED
CertImport Table:
Status
ACTIVE
EXPIRED
EXPIRED
ACTIVE
EXPIRED
ACTIVE
What I need to do is take the CertImport.Status column, convert all of those statuses to the CertStatus_KEY that matches the CertStatus_ID, and then copy all of that info into a third table with two columns, so the data would end up as follows.
Certification Table:
Certification_KEY  CertStatus_KEY
1                  1
2                  2
3                  2
4                  1
5                  2
6                  1
I'm trying to use an Insert Into Select statement, but I get an error that says "Subquery returned more than 1 value". Here's what I've got:
INSERT INTO Certification (CertStatus_KEY)
SELECT (
SELECT CertStatus_KEY from CertStatus where CertStatus_ID in (
SELECT Status from CertImport)
)
Simplified, the goal is to convert the CertImport.Status to the CertStatus.CertStatus_KEY value that corresponds to the matching CertStatus.CertStatus_ID, and then insert that value into Certification.CertStatus_KEY.
Thanks.
INSERT INTO Certification (CertStatus_KEY)
SELECT cs.CertStatus_KEY
FROM CertStatus cs
INNER JOIN CertImport ci ON cs.CertStatus_ID = ci.Status;
Note: In SQL Server, the order in which rows come back from a SELECT is not guaranteed unless an ORDER BY clause is specified (or at least there is a clustered index). For a small data set like this you would get the result just as you asked, but for a larger one there is no guarantee. Actually, the design seems bad from the start; shouldn't CertImport have an ID column?
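If insert order matters, one hedged sketch is to give CertImport its own identity column and order by it. The column name CertImport_ID is an assumption, not part of the original schema:

-- Hypothetical identity column so the insert order is deterministic (SQL Server syntax)
ALTER TABLE CertImport ADD CertImport_ID int IDENTITY(1,1);

INSERT INTO Certification (CertStatus_KEY)
SELECT cs.CertStatus_KEY
FROM CertStatus cs
INNER JOIN CertImport ci ON cs.CertStatus_ID = ci.Status
ORDER BY ci.CertImport_ID;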

SQL adding columns together

I have a PostgreSQL table of 7k records. Each record has a unique ID and 3 fields: childcares, schools, hospitals. They are all integer fields. I want to add a new column and calculate the total number of receptors (schools, childcares, hospitals) for each row. I thought this should be pretty straightforward with adding a column and doing an insert with a select, but I am not getting the results I want:
alter table site add total integer;
insert into site(total) select sum(schools+childcares+hospitals) as s from site;
I have also tried a GROUP BY id in the insert-select statement.
You are looking for UPDATE, not INSERT: INSERT adds new rows, while UPDATE modifies existing ones.
UPDATE site
SET total = COALESCE(schools, 0) + COALESCE(childcares, 0) + COALESCE(hospitals, 0);
Added COALESCE to handle NULL values.
For example, 1 + 2 + NULL = NULL, so I used COALESCE to replace NULL with 0.
Now it will be 1 + 2 + 0 = 3.
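As a hedged alternative sketch, on PostgreSQL 12+ the total can be kept in sync automatically with a generated column instead of a one-off UPDATE (assuming you are free to redefine the column you just added):

-- Drop the manually maintained column and recreate it as a generated column
ALTER TABLE site DROP COLUMN total;
ALTER TABLE site ADD COLUMN total integer
    GENERATED ALWAYS AS (COALESCE(schools, 0) + COALESCE(childcares, 0) + COALESCE(hospitals, 0)) STORED;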

SQL Multiple Row Insert w/ multiple selects from different tables

I am trying to do a multiple-row insert based on values that I am pulling from another table. Basically, I need to give all existing users who previously had access to one service access to a different one. Table1 will take the data and run a job to do this.
INSERT INTO Table1 (id, serv_id, clnt_alias_id, serv_cat_rqst_stat)
SELECT
    (SELECT MAX(id) + 1
     FROM Table1),
    '33',            -- the new service id
    clnt_alias_id,
    'PI'             -- the code to let the job know to grant access
FROM Table2
WHERE serv_id = '11' -- the old service id
I am getting a Primary key constraint error on id.
Please help.
Thanks,
Colin
This query cannot work as written: the max(id) sub-select is evaluated only ONCE and returns the same value for all rows in the parent query:
MariaDB [test]> create table foo (x int);
MariaDB [test]> insert into foo values (1), (2), (3);
MariaDB [test]> select *, (select max(x)+1 from foo) from foo;
+------+----------------------------+
| x    | (select max(x)+1 from foo) |
+------+----------------------------+
|    1 |                          4 |
|    2 |                          4 |
|    3 |                          4 |
+------+----------------------------+
3 rows in set (0.04 sec)
You will have to run your query multiple times, once for each record you're trying to copy; that way each max(id) picks up the ID inserted by the previous query.
Is there a requirement that Table1.id be incremental ints? If not, just add clnt_alias_id to MAX(id). This is a nasty workaround, though, and you should really try to get that column changed to auto_increment, like Marc B suggested.
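If the ids do need to stay consecutive and your DBMS supports window functions, a hedged sketch is to hand out distinct ids in a single statement with ROW_NUMBER(), offsetting from the current maximum (column names as in the question; still racy if two sessions run it at once):

INSERT INTO Table1 (id, serv_id, clnt_alias_id, serv_cat_rqst_stat)
SELECT
    (SELECT MAX(id) FROM Table1) + ROW_NUMBER() OVER (ORDER BY clnt_alias_id),
    '33',
    clnt_alias_id,
    'PI'
FROM Table2
WHERE serv_id = '11';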

Insert data from one table to other using select statement and avoid duplicate data

Database: Oracle
I want to insert data from table 1 into table 2, but the catch is that the primary key of table 2 is the combination of the first 4 letters and the last 4 numbers of the primary key of table 1.
For example:
Table 1 - primary key: abcd12349887 / abcd22339887 / abcder019987
In this case, even though the table 1 primary keys are different, when I extract the first 4 and last 4 characters the output is the same: abcd9887.
So when I use select to insert the data, I get a duplicate PK error on table 2.
What I want is: if the PK value is already present, then don't add that record.
Here's my complete stored procedure:
INSERT INTO CPIPRODUCTFAMILIE
  (productfamilieid, rapport, mesh, mesh_uitbreiding, productlabelid)
  (SELECT DISTINCT (CONCAT(SUBSTR(p.productnummer,1,4), SUBSTR(p.productnummer,8,4)))
            productnummer,
          ps.rapport, ps.mesh, ps.mesh_uitbreiding, ps.productlabelid
     FROM productspecificatie ps, productgroep pg,
          product p LEFT JOIN cpiproductfamilie cpf
            ON (CONCAT(SUBSTR(p.productnummer,1,4), SUBSTR(p.productnummer,8,4))) = cpf.productfamilieid
    WHERE p.productnummer = ps.productnummer
      AND p.productgroepid = pg.productgroepid
      AND cpf.productfamilieid IS NULL
      AND pg.productietype = 'P'
      AND p.ROWID IN (SELECT MAX(ROWID) FROM product
                      GROUP BY (CONCAT(SUBSTR(productnummer,1,4), SUBSTR(productnummer,8,4))))
      AND (CONCAT(SUBSTR(p.productnummer,1,2), SUBSTR(p.productnummer,8,4))) NOT IN
          (SELECT productfamilieid FROM cpiproductfamilie));
The p.ROWID IN (SELECT MAX(ROWID) ...) condition seems to be wrong, and because of it no data is picked up.
Please help.
Try using this instead:
p.productnummer IN (SELECT MAX(productnummer) FROM product
                    GROUP BY (CONCAT(SUBSTR(productnummer,1,4), SUBSTR(productnummer,8,4))))
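As a hedged alternative, Oracle's MERGE can express "insert only the keys that are not already present" directly. This is only a sketch reusing the table and column names from the question, with ROW_NUMBER() standing in for the MAX(ROWID) de-duplication:

MERGE INTO cpiproductfamilie cpf
USING (
    SELECT productfamilieid, rapport, mesh, mesh_uitbreiding, productlabelid
    FROM (
        SELECT CONCAT(SUBSTR(p.productnummer,1,4), SUBSTR(p.productnummer,8,4)) AS productfamilieid,
               ps.rapport, ps.mesh, ps.mesh_uitbreiding, ps.productlabelid,
               -- keep one row per derived key, mirroring the MAX(ROWID) trick
               ROW_NUMBER() OVER (PARTITION BY CONCAT(SUBSTR(p.productnummer,1,4), SUBSTR(p.productnummer,8,4))
                                  ORDER BY p.productnummer DESC) AS rn
        FROM product p
        JOIN productspecificatie ps ON ps.productnummer = p.productnummer
        JOIN productgroep pg ON pg.productgroepid = p.productgroepid
        WHERE pg.productietype = 'P'
    )
    WHERE rn = 1
) src
ON (cpf.productfamilieid = src.productfamilieid)
WHEN NOT MATCHED THEN
    INSERT (productfamilieid, rapport, mesh, mesh_uitbreiding, productlabelid)
    VALUES (src.productfamilieid, src.rapport, src.mesh, src.mesh_uitbreiding, src.productlabelid);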

Is it possible to use a PG sequence on a per record label?

Does PostgreSQL 9.2+ provide any functionality to make it possible to generate a sequence that is namespaced to a particular value? For example:
.. | user_id | seq_id | body      | ...
---+---------+--------+-----------+----
 - |       4 |      1 | "abc...."
 - |       4 |      2 | "def...."
 - |       5 |      1 | "ghi...."
 - |       5 |      2 | "xyz...."
 - |       5 |      3 | "123...."
This would be useful to generate custom urls for the user:
domain.me/username_4/posts/1
domain.me/username_4/posts/2
domain.me/username_5/posts/1
domain.me/username_5/posts/2
domain.me/username_5/posts/3
I did not find anything in the PG docs (regarding sequence and sequence functions) to do this. Are sub-queries in the INSERT statement or with custom PG functions the only other options?
You can use a subquery in the INSERT statement like @Clodoaldo demonstrates. However, this defeats the nature of a sequence as being safe in concurrent transactions: it will produce race conditions and, eventually, duplicate key violations.
You should rather rethink your approach: use one plain sequence for the whole table and combine it with user_id to get the sort order you want.
You can always generate the custom URLs with the desired numbers using row_number() with a simple query like:
SELECT format('domain.me/username_%s/posts/%s'
            , user_id
            , row_number() OVER (PARTITION BY user_id ORDER BY seq_id))
FROM tbl;
Maybe this answer is a little off-piste, but I would consider partitioning the data and giving each user their own partitioned table for posts.
There's a bit of overhead to the setup, as you will need triggers to manage the DDL statements for the partitions, but it would effectively give each user their own table of posts, along with their own sequence, with the added benefit of being able to treat all posts as one big table too.
General gist of the concept...
psql# CREATE TABLE posts (user_id integer, seq_id integer);
CREATE TABLE
psql# CREATE TABLE posts_001 (seq_id serial) INHERITS (posts);
CREATE TABLE
psql# CREATE TABLE posts_002 (seq_id serial) INHERITS (posts);
CREATE TABLE
psql# INSERT INTO posts_001 VALUES (1);
INSERT 0 1
psql# INSERT INTO posts_001 VALUES (1);
INSERT 0 1
psql# INSERT INTO posts_002 VALUES (2);
INSERT 0 1
psql# INSERT INTO posts_002 VALUES (2);
INSERT 0 1
psql# select * from posts;
 user_id | seq_id
---------+--------
       1 |      1
       1 |      2
       2 |      1
       2 |      2
(4 rows)
I left out some rather important CHECK constraints in the above setup; make sure you read the docs on how these kinds of setups are used.
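For reference, a minimal sketch of what one of the omitted CHECK constraints could look like, assuming one partition per user_id as in the inserts above:

-- Partition for user 1 only; the CHECK keeps foreign rows out and
-- lets the planner skip this table when querying other users.
CREATE TABLE posts_001 (
    seq_id serial,
    CHECK (user_id = 1)
) INHERITS (posts);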
insert into t (user_id, seq_id) values
(4, (select coalesce(max(seq_id), 0) + 1 from t where user_id = 4))
Check for a duplicate primary key error in the front end and retry if needed.
Update
Although @Erwin's advice is sensible, that is, a single sequence with the ordering done in the select query, it can be expensive.
If you don't use a sequence, there is no sequence whose nature can be defeated. Nor will this approach result in duplicate key violations. To demonstrate, I created a table and wrote a Python script that inserts into it, then launched 3 parallel instances of the script inserting as fast as possible. And it just works.
The table must have a primary key on those columns:
create table t (
user_id int,
seq_id int,
primary key (user_id, seq_id)
);
The python script:
#!/usr/bin/env python
import psycopg2, psycopg2.extensions

query = """
begin;
insert into t (user_id, seq_id) values
    (4, (select coalesce(max(seq_id), 0) + 1 from t where user_id = 4));
commit;
"""

conn = psycopg2.connect('dbname=cpn user=cpn')
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)
cursor = conn.cursor()

for i in range(0, 1000):
    while True:
        try:
            cursor.execute(query)
            break
        except psycopg2.IntegrityError as e:
            # duplicate key: another instance won the race, so roll back and retry
            print(e.pgerror)
            cursor.execute("rollback;")

cursor.close()
conn.close()
After the parallel run:
select count(*), max(seq_id) from t;

 count | max
-------+------
  3000 | 3000
Just as expected. I have developed at least two applications using this logic; one of them is more than 13 years old and has never failed. I concede that if you are Facebook or some other giant you could have a problem.
Yes:
CREATE TABLE your_table
(
    column type DEFAULT NEXTVAL('sequence_name'),
...
);
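Note that this gives one global sequence for the whole table, not a per-user counter. A concrete instance of the pattern, with made-up names for illustration:

CREATE SEQUENCE posts_seq;

CREATE TABLE posts
(
    user_id integer,
    seq_id  integer DEFAULT NEXTVAL('posts_seq'),  -- shared by all users
    body    text
);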
More details here:
http://www.postgresql.org/docs/9.2/static/ddl-default.html