postgres insert into multiple tables after each other and return everything - sql

Given a Postgres database with 3 tables:
users(user_id: uuid, ...)
urls(slug_id:int8 pkey, slug:text unique not null, long_url:text not null)
userlinks(user_id:fkey users.user_id, slug_id:fkey urls.slug_id)
pkey(user_id, slug_id)
The userlinks table exists as a cross-reference to associate URL slugs with one or more users.
When a new slug is created by a user, I'd like to INSERT into the urls table, take the slug_id that was created there, and then INSERT into userlinks with the current user's ID and that slug_id.
Then, if possible, return both results as a table of records.
The current user's ID is accessible with auth.uid().
I'm doing this with a stored procedure in Supabase.
I've gotten this far, but I'm stuck:
WITH urls_row as (
INSERT INTO urls(slug, long_url)
VALUES ('testslug2', 'testlong_url2')
RETURNING slug_id
)
INSERT INTO userlinks(user_id, slug_id)
VALUES (auth.uid(), urls_row.slug_id)
--RETURNING *
--RETURNING (urls_record, userlinks_record)

Try this:
WITH urls_row as (
INSERT INTO urls(slug, long_url)
VALUES ('testslug2', 'testlong_url2')
RETURNING slug_id
), userlink_row AS (
INSERT INTO userlinks(user_id, slug_id)
SELECT auth.uid(), urls_row.slug_id
FROM urls_row
RETURNING *
)
SELECT *
FROM urls_row AS ur
INNER JOIN userlink_row AS us
ON ur.slug_id = us.slug_id
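Since the question mentions doing this as a stored procedure in Supabase, the same two-CTE statement can be wrapped in a SQL function and exposed over RPC. This is only a minimal sketch, assuming the slug and long URL arrive as parameters and that slug_id is generated by default; the function name, parameter names, and the exact column list returned are illustrative rather than taken from the question:

create or replace function create_user_link(p_slug text, p_long_url text)
returns table (slug_id int8, slug text, long_url text, user_id uuid)
language sql
as $$
  with urls_row as (
    -- create the url row and hand back the generated slug_id plus the inserted values
    insert into urls (slug, long_url)
    values (p_slug, p_long_url)
    returning urls.slug_id, urls.slug, urls.long_url
  ), userlink_row as (
    -- link the new slug to the calling user
    insert into userlinks (user_id, slug_id)
    select auth.uid(), ur.slug_id
    from urls_row ur
    returning userlinks.user_id, userlinks.slug_id
  )
  select ur.slug_id, ur.slug, ur.long_url, us.user_id
  from urls_row ur
  join userlink_row us on us.slug_id = ur.slug_id;
$$;

It can then be tested with select * from create_user_link('testslug2', 'testlong_url2'); or called from the Supabase client as an RPC.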

Related

PostgreSQL: Insert if not exist and then Select

Question
Imagine having the following PostgreSQL table:
CREATE TABLE setting (
user_id bigint PRIMARY KEY NOT NULL,
language lang NOT NULL DEFAULT 'english',
foo bool NOT NULL DEFAULT true,
bar bool NOT NULL DEFAULT true
);
From my research, I know that to INSERT a row with the default values if the row for the specific user does not exist, I would do something like this:
INSERT INTO setting (user_id)
SELECT %s
WHERE NOT EXISTS (SELECT 1 FROM setting WHERE user_id = %s)
(where the %s are placeholders where I would provide the User's ID)
I also know that to get the user's setting (i.e. to SELECT it) I can do the following:
SELECT * FROM setting WHERE user_id = %s
However, I am trying to combine the two, where I can retrieve the user's setting, and if the setting for the particular user does not exist yet, INSERT default values and return those values.
Example
So it would look something like this:
Imagine Alice has her setting already saved in the database but Bob is a new user and does not have it.
When we execute the magical SQL query with Alice's user ID, it will return Alice's setting stored in the database. If we execute the same magical SQL query with Bob's user ID, it will detect that Bob does not have any setting saved in the database, so it will INSERT a setting record with all default values and then return Bob's newly created setting.
Given that there is a UNIQUE or PK constraint on user_id, as Frank Heikens said: try to insert; if it violates the constraint, do nothing and return the inserted row (if any) in the t CTE; then union that with a 'proper' select and pick the first row only. The optimizer will take care that no extra select is done if the insert returns a row.
with t as
(
insert into setting (user_id) values (%s)
on conflict do nothing
returning *
)
select * from t
union all
select * from setting where user_id = %s
limit 1;
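If concurrency is a concern (another session could commit the row between the skipped insert and the select's snapshot), a single-statement variant that always returns the row is to turn the no-op into a dummy update; a sketch, assuming user_id is the conflict target and that touching (and locking) the existing row is acceptable:

insert into setting (user_id)
values (%s)
on conflict (user_id) do update
  set user_id = excluded.user_id  -- dummy update so RETURNING also yields the pre-existing row
returning *;

The trade-off is that the existing row is rewritten and locked even when nothing changes, so the do-nothing approach above is preferable when that matters.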
No magic necessary. Use returning and union all:
with inparms as ( -- Put your input parameters in CTE so you bind only once
select %s::bigint as user_id
), cond_insert as ( -- Insert the record if not exists, returning *
insert into setting (user_id)
select i.user_id
from inparms i
where not exists (select 1 from setting where user_id = i.user_id)
returning *
)
select * -- If a record was inserted, get it
from cond_insert
union all
select s.* -- If not, then get the pre-existing record
from inparms i
join setting s on s.user_id = i.user_id;

Foreach insert statement based on where clause

I have a scenario with thousands of Ids (1-1000), and I need to insert each of these Ids once into a table together with another value.
For example, UserCars has the columns CarId and UserId.
I want to INSERT each user Id from my list against CarId 1.
INSERT INTO [dbo].[UserCars]
([CarId]
,[UserId])
VALUES
(
1,
**My list of Ids**
)
I'm just not sure of the syntax for running this kind of insert or if it is at all possible.
Since you write in the comments that your list of Ids is coming from another table, you can simply use an INSERT INTO ... SELECT statement.
insert into UserCars (CarID, UserID)
select CarID, UserID
from othertable
In the select part you can use joins and whatever you need, complex queries are allowed as long as the columns in the result match the columns (CarID, UserID)
Or even this, to stay with your example:
insert into UserCars (CarID, UserID)
select 1, UserID
from dbo.User
If your data exists in a file, you can use the BULK INSERT command, for example:
BULK INSERT UserCars
FROM '\\path\to\your\folder\users-cars.csv';
Just make sure to have the same column structure in both the file and the table (e.g. CarId, UserId).
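For a CSV with a header row and comma-separated values, the file format can be spelled out in the WITH clause; a sketch, with the path and options being illustrative:

BULK INSERT UserCars
FROM '\\path\to\your\folder\users-cars.csv'
WITH (
    FIRSTROW = 2,           -- skip the header line if the file has one
    FIELDTERMINATOR = ',',  -- column separator
    ROWTERMINATOR = '\n'    -- row separator
);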
Otherwise, follow GuidoG's comment to insert your data from another table:
insert into UserCars (CarID, UserID) select CarID, UserID from othertable
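As an aside, if the Ids really are just a contiguous range (1-1000) rather than rows in another table, they can be generated instead of stored; a sketch assuming SQL Server 2022+, where GENERATE_SERIES exposes each number in a column named value (on PostgreSQL the equivalent is generate_series(1, 1000)):

INSERT INTO UserCars (CarId, UserId)
SELECT 1, value
FROM GENERATE_SERIES(1, 1000);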

Create a record in table A and assign its id to table B

I have a set of companies and for each of them, I need to generate a UUID in another table.
companies table
details_id (currently NULL for all companies)
details table
id
uuid
date
...
I'd like to update all companies with a newly created detail like this:
UPDATE companies
SET details_id =
(
INSERT INTO details
(uuid, date)
VALUES (uuid_generate_v1(), now()::date)
RETURNING id
)
But that gives me a syntax error since I can't use INSERT INTO inside UPDATE.
What is a proper way to create a row in the details table and immediately set the newly created id in the companies table?
(I am using Postgres)
You can use a data-modifying CTE:
with new_rows as (
INSERT INTO details ("uuid", "date")
VALUES (uuid_generate_v1(), current_date)
RETURNING id
)
update companies
set details_id = new_rows.id
from new_rows
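Note that this CTE inserts a single details row and assigns its id to every company the update touches. If the intent is one new details row per company, a plain loop keeps the pairing explicit; a sketch, assuming companies has an id primary key (not shown in the question) and that only companies without a detail yet should be filled in:

DO $$
DECLARE
    c companies%ROWTYPE;
    new_id details.id%TYPE;
BEGIN
    FOR c IN SELECT * FROM companies WHERE details_id IS NULL LOOP
        -- create one details row for this company
        INSERT INTO details ("uuid", "date")
        VALUES (uuid_generate_v1(), current_date)
        RETURNING id INTO new_id;

        -- point the company at its own details row (companies.id is assumed here)
        UPDATE companies SET details_id = new_id WHERE id = c.id;
    END LOOP;
END
$$;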

Join with json column?

I need to find rows in the users table by joining on a column in the queries table.
I wrote some SQL, but it takes 0.200s to run, while SELECT * FROM ... takes 0.80s.
How can I improve performance?
The tables are :
CREATE TABLE users (
id INT,
browser varchar
);
CREATE TABLE queries (
id INT,
settings jsonb
);
INSERT INTO users (id,browser) VALUES (1, 'yandex');
INSERT INTO users (id, browser) VALUES (2, 'google');
INSERT INTO users (id, browser) VALUES (3, 'google');
INSERT INTO queries (id, settings) VALUES (1, '{"browser":["Yandex", "TestBrowser"]}');
and the query:
select x2.id as user_id, x1.id as query_id
FROM (
SELECT id, json_array_elements_text((settings->>'browser')::JSON) browser
FROM queries) x1
JOIN users x2 ON lower(x1.browser::varchar) = lower(x2.browser::varchar)
group by 1,2;
json_array_elements_text((settings->>'browser')::JSON)
'->>' converts the result to text. Then you cast it back to JSON. Doing that on one row (if you only have one) is not really going to be a problem, but it is rather pointless.
You could instead do:
jsonb_array_elements_text(settings->'browser')
ON lower(x1.browser::varchar) = lower(x2.browser::varchar)
You can create an index that can be used for this:
create index on users (lower(browser));
It won't do much good on a table with 3 rows. But presumably you don't really have 3 rows.
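Putting the two suggestions together, the query can stay in jsonb and join through the expression index; a sketch of the rewritten query against the same tables:

SELECT u.id AS user_id, q.id AS query_id
FROM queries q
CROSS JOIN LATERAL jsonb_array_elements_text(q.settings -> 'browser') AS b(browser)
JOIN users u ON lower(u.browser) = lower(b.browser)
GROUP BY 1, 2;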

Inserting the data at a time in 3 tables using an array into Postgresql database

I have to insert data into 3 tables of a PostgreSQL database at the same time.
The data for the first and second tables is inserted directly.
But the data for the third table I receive as an array, and it should be inserted into that table as such.
Is inserting data as an array into PostgreSQL available or possible?
If it is possible, how can I do the insert? Can someone correct me?
How can I do the same mechanism with WSO2 DSS 3.0.1?
My query is:
with first_insert as (insert into sample(name,age)
values(?,?)
RETURNING id
),
second_insert as (insert into sample1(empid,desig)
values((select id from first_insert),?)
RETURNING userid
)
insert into sample2(offid,details)
values((select userid from second_insert),?)
Not sure I understood your question exactly, but at least you could use insert into ... select:
with cte_first_insert as
(
insert into sample1(name, age)
values('John', 25)
returning id
), cte_second_insert as (
insert into sample2(empid, desig)
select id, 1 from cte_first_insert
returning userid
)
insert into sample3(offid, details)
select userid, 'test'
from cte_second_insert;
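On the question of inserting the array itself: that depends on the type of the target column, which the question does not show. A sketch of both possibilities, with the literal values as placeholders:

-- if sample3.details is an array column (e.g. text[]), store the whole array in one row
INSERT INTO sample3 (offid, details)
VALUES (42, ARRAY['a', 'b', 'c']);

-- if sample3.details is a plain scalar column, unnest expands the array into one row per element
INSERT INTO sample3 (offid, details)
SELECT 42, elem
FROM unnest(ARRAY['a', 'b', 'c']) AS elem;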