I have a table user (id, firstname, lastname), and id is defined as
id int8 NOT NULL DEFAULT nextval('user_id_seq'::regclass)
But when I first insert a row directly in the database using this SQL:
INSERT INTO user (id, firstname, lastname)
VALUES((SELECT(MAX(id) + 1) FROM user), firstname, lastname);
the data gets inserted. But when I then insert through the API, I get an error:
duplicate key value violates unique constraint "user_pkey"
This is because the earlier insert done directly in the database bypassed the sequence, so the sequence was never advanced.
How to resolve this?
The only good way to prevent that is to use an identity column instead:
ALTER TABLE tab
    ALTER COLUMN id
    ADD GENERATED ALWAYS AS IDENTITY (START WITH 1000000);
That automatically creates a sequence, and a normal INSERT statement is not allowed to override the generated value.
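For an existing table like the one in the question, the conversion could look like this; a sketch, assuming the table and sequence names from the question ("user", user_id_seq) and that nothing else uses the old sequence:

```sql
BEGIN;
-- identity cannot be added while the old serial-style default exists
ALTER TABLE "user" ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE user_id_seq;

ALTER TABLE "user"
    ALTER COLUMN id ADD GENERATED ALWAYS AS IDENTITY;

-- sync the new sequence with the data already in the table
SELECT setval(pg_get_serial_sequence('"user"', 'id'),
              (SELECT max(id) FROM "user"));
COMMIT;
```

After this, the API's plain INSERTs keep working, and an INSERT that tries to supply its own id is rejected unless it explicitly says OVERRIDING SYSTEM VALUE.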
How to add a primary key to an existing table in SQLite? I know that I have to create a new table and copy everything over from the old one. However, it keeps giving me a datatype mismatch error because one table has a primary key and the other does not. When I run the same commands without the primary key, they work.
CREATE TABLE temp_table
(
    id INTEGER PRIMARY KEY,
    age INTEGER,
    address TEXT
);

INSERT INTO temp_table
SELECT *
FROM original_table;
Since I am importing the data from a CSV file, I do not know how to add the PRIMARY KEY in the first place. If anyone knows a solution for that, it would also work.
Assuming that the original table has 2 columns: age and address, you can either list the column names of the new table without the primary key:
INSERT INTO temp_table(age, address)
SELECT age, address FROM original_table
or, include the primary key and pass null for its value:
INSERT INTO temp_table(id, age, address)
SELECT null, age, address FROM original_table
or:
INSERT INTO temp_table
SELECT null, age, address FROM original_table
In all cases the id will be filled in by SQLite: because it is defined as INTEGER PRIMARY KEY, it is auto-generated, starting from 1 in an empty table.
If there is another column in the original table with unique integer values, you can pass that column to fill the new id:
INSERT INTO temp_table(id, age, address)
SELECT col, age, address FROM original_table
Change col to the actual name of the column.
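Putting the steps together, the whole rebuild described in the question could be sketched like this (names as in the question; assumes the original table has only age and address):

```sql
CREATE TABLE temp_table
(
    id INTEGER PRIMARY KEY,
    age INTEGER,
    address TEXT
);

-- copy the data; SQLite fills in id automatically
INSERT INTO temp_table(age, address)
SELECT age, address FROM original_table;

-- swap the new table in place of the old one
DROP TABLE original_table;
ALTER TABLE temp_table RENAME TO original_table;
```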
I want to add large amounts of data to the table.
Before adding, I check whether the data exists in the table or not.
I am dealing with the following:
Example:
Table users
id | name | address
.. | .... | .......
select id from users where id = ... and name = ...
if not exist
insert....
if exist
update ....
My problem is that this takes too long.
Does anyone have a faster solution?
You actually do not need to perform this check manually. It is rather the job of a constraint, e.g. via a primary key.
Table with a primary key constraint based on id and name:
CREATE TABLE users (
id INT, name TEXT, address TEXT,
PRIMARY KEY (id,name));
So, if you try to insert two records with the same id and name you will get an exception. The error message below is in German, but it basically says that the pk constraint was violated:
INSERT INTO users VALUES (1,'foo','add 1');
INSERT INTO users VALUES (1,'foo','add 2');
FEHLER: doppelter Schlüsselwert verletzt Unique-Constraint »users_pkey«
DETAIL: Schlüssel »(id, name)=(1, foo)« existiert bereits.
(In English: ERROR: duplicate key value violates unique constraint "users_pkey"; DETAIL: Key (id, name)=(1, foo) already exists.)
In case you want to update address when id and name already exist, try using an UPSERT:
INSERT INTO users VALUES (1,'foo','add x')
ON CONFLICT (id, name)
DO UPDATE SET address = EXCLUDED.address;
If you want to simply ignore the conflicting insert without raising an exception, just do as follows:
INSERT INTO users VALUES (1,'foo','add x')
ON CONFLICT DO NOTHING;
See this answer for more details.
Regarding speed: check whether your table has a proper index, or whether an index even makes sense at all while performing the insert. Sometimes importing large amounts of data into a temporary UNLOGGED table without an index, and then populating the target table with a query that removes the duplicates, is the best choice.
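As a sketch of that staging approach (the users table from above as the target; the staging table name and file path are invented for illustration):

```sql
-- bulk load into an unlogged staging table with no indexes (fast)
CREATE UNLOGGED TABLE users_staging (id INT, name TEXT, address TEXT);
COPY users_staging FROM '/path/to/data.csv' (FORMAT csv);

-- de-duplicate and upsert into the real table in one statement;
-- DISTINCT ON keeps one row per (id, name), so the upsert never
-- touches the same target row twice
INSERT INTO users (id, name, address)
SELECT DISTINCT ON (id, name) id, name, address
FROM users_staging
ON CONFLICT (id, name) DO UPDATE SET address = EXCLUDED.address;

DROP TABLE users_staging;
```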
I need to insert multiple rows into a table where one of the fields, a foreign key, appears in multiple rows.
Currently, when I try to insert, I get this error:
An exception of type 'System.Data.SqlClient.SqlException' occurred in
System.Data.dll but was not handled in user code
Additional information: Cannot insert duplicate key row in object
'dbo.userGroupMembership' with unique index 'IX_userId'. The duplicate
key value is (264673).
Query I'm using:
INSERT INTO userGroupMembership(userId, usergroupId, created, adultAdminAccessLevel)
SELECT [userId], 12, GETDATE(), 0
FROM [dbo].[userGroupMembership]
where usergroupId = #UserGroupId
UserId is the foreign key field.
Do I need to make any configuration change to the table, or how else can I insert multiple rows with the same foreign key?
You have a unique index allowing one row per userID. If you truly want more than one row per userID just drop the unique index.
DROP INDEX IX_userId ON dbo.userGroupMembership;
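If each user should still be limited to one row per group, a sketch of an alternative is to replace the single-column index with a composite unique index (the new index name is invented here):

```sql
DROP INDEX IX_userId ON dbo.userGroupMembership;

-- still prevents exact duplicates of the same membership
CREATE UNIQUE INDEX IX_userId_usergroupId
    ON dbo.userGroupMembership (userId, usergroupId);
```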
I am trying to copy one table into another. The fields are not the same (that is not a problem), and some of the fields in the destination table are optional.
I do something like this:
INSERT INTO data(Email, Title, FirstName, LastName)
SELECT champs5, champs1, champs3, champs4
FROM tmp
But the problem comes from the id field, which is of course required:
ERROR: null value in column "id" violates not-null constraint
How can I tell Postgres to auto-generate the id for each row of the INSERT?
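One possible fix, sketched under the assumption that data.id is an integer column that should simply be auto-generated: attach an identity to it, after which the INSERT above works unchanged:

```sql
ALTER TABLE data
    ALTER COLUMN id ADD GENERATED BY DEFAULT AS IDENTITY;

-- if the table already contains rows, move the sequence past them
SELECT setval(pg_get_serial_sequence('data', 'id'),
              COALESCE((SELECT max(id) FROM data), 1));
```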
I have the following two tables in my Postgres database:
CREATE TABLE User (
Id serial UNIQUE NOT NULL,
Login varchar(80) UNIQUE NOT NULL,
PRIMARY KEY (Id,Login)
);
CREATE TABLE UserData (
Id serial PRIMARY KEY REFERENCES Users (Id),
Password varchar(255) NOT NULL
);
Say, I add a new user with INSERT INTO Users(Id, Login) VALUES(DEFAULT, 'John') and also want to add VALUES(id, 'john1980') in UserData where id is John's new id.
How do I get that id? Running a query for something just freshly created seems superfluous. I have multiple such situations across the database. Maybe my design is flawed in general?
(I'm obviously not storing passwords like that.)
1) Fix your design
CREATE TABLE usr (
usr_id serial PRIMARY KEY
,login text UNIQUE NOT NULL
);
CREATE TABLE userdata (
usr_id int PRIMARY KEY REFERENCES usr
,password text NOT NULL
);
Start by reading the manual about identifiers and key words.
user is a reserved word. Never use it as identifier.
Use descriptive identifiers. id is useless.
Avoid mixed case identifiers.
serial is meant for a unique column that can be pk on its own. No need for a multicolumn pk.
The referencing column userdata.usr_id cannot be a serial, too. Use a plain integer.
I am just using text instead of varchar(n); that's optional.
You might consider merging the two tables into one ...
2) Query to INSERT in both
The key is the RETURNING clause, available for INSERT, UPDATE, and DELETE, which returns values from the current row immediately.
Best use in a data-modifying CTE:
WITH ins1 AS (
INSERT INTO usr(login)
VALUES ('John') -- just omit default columns
RETURNING usr_id -- return automatically generated usr_id
)
INSERT INTO userdata (usr_id, password )
SELECT i.usr_id, 'john1980'
FROM ins1 i;
You can consider using a trigger. The Id column of the newly inserted row can be accessed by the name NEW.Id.
References:
CREATE TRIGGER documentation in the PostgreSQL manual
Trigger Procedures
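A minimal sketch of that trigger, using the Users/UserData tables from the question as written there (the function and trigger names and the placeholder password are invented; EXECUTE FUNCTION needs PostgreSQL 11+):

```sql
CREATE FUNCTION add_userdata() RETURNS trigger AS $$
BEGIN
    -- NEW.Id is the freshly generated id of the inserted user
    INSERT INTO UserData (Id, Password)
    VALUES (NEW.Id, 'placeholder');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_add_userdata
AFTER INSERT ON Users
FOR EACH ROW EXECUTE FUNCTION add_userdata();
```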