SQL set UNIQUE only for two columns

I would like, for example, for a user to be able to add the name JOHN with age 25, and later add JOHN, 26 or ALEX, 25, but NOT JOHN, 25 again.
So I'm looking for a unique constraint across the two columns together, not on each column separately.
P.S. I'm sorry if the same question has been asked before.
EDIT:
This is my example:
I would like the userIdG and doWithCar columns to end up like this:
102163096246025413003 View
102163096246025413003 Buy
102163096246025413003 Let
102163096246025413003 Sell
And for userIdG = 102163096246025413003 you can't add any more rows, because the doWithCar column has only four possible values: View, Buy, Let and Sell.

You could specify more than one column in UNIQUE:
CREATE TABLE tab(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    name VARCHAR(100),
    age INT,
    UNIQUE(name, age)
);
INSERT INTO tab(name, age) VALUES ('John', 25);
INSERT INTO tab(name, age) VALUES ('John', 26);
-- INSERT INTO tab(name,age) VALUES ('John', 25);
-- Violation of UNIQUE KEY constraint 'UQ__tab__CF0426FD76D3370A'.
-- Cannot insert duplicate key in object 'dbo.tab'.
-- The duplicate key value is (John, 25).
-- The statement has been terminated.
SELECT * FROM tab;
Note:
You should store the date of birth rather than the age itself (or make age a computed column and set UNIQUE(name, dob)).
This is what I do not understand: how will the database know that the two columns should be unique as a pair rather than each column being unique on its own?
These are different concepts. The DB "knows" it from the UNIQUE constraint definition:
UNIQUE(userIdG, doWithCar) -- the pair of columns is unique
!=
UNIQUE(userIdG), UNIQUE(doWithCar) -- each column is unique on its own
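Applied to the edited example, the same idea could look like the sketch below (SQL Server syntax as in the answer; the table name and column sizes are assumptions, and the CHECK limits doWithCar to the four allowed values):
CREATE TABLE userCars(
    userIdG VARCHAR(30) NOT NULL,
    doWithCar VARCHAR(10) NOT NULL,
    CONSTRAINT UQ_userCars UNIQUE (userIdG, doWithCar),
    CONSTRAINT CK_doWithCar CHECK (doWithCar IN ('View', 'Buy', 'Let', 'Sell'))
);
-- the pair (userIdG, doWithCar) must be unique and doWithCar has only four allowed values,
-- so a given userIdG can appear in at most four rows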

Allow only one row of a given Type per key with UNIQUE or CHECK constraints

I need to validate data in a table where one column has multiple allowed values, but one of those values may only be present in one row per unique key.

Example:
A table keyed by unique Name and Email Address, with a column for Type.
The Type column can have the values Original, Work and Personal,
where you can have multiple Work and Personal emails but only one Original email.
I am using DB2 for i SQL and I want to constrain the data using UNIQUE or CHECK constraints, but I am not sure how to do this for this data set.
Scott scott#hotmail.com Original
Scott scott#gmail.com Personal
Scott scott#live.com Personal
Scott scott#NBC.com Work
Scott scott#ABC.com Work
Scott scott#yahoo.com Original
I want to enforce that I can't have the yahoo address as Original if I already have the hotmail address as Original;
the rest are valid.
Let me know if I need to add more.
If you have Db2 for IBM i, then you may create a UNIQUE INDEX with the corresponding WHERE clause.
CREATE TABLE TEST_IND_EXPR
(
NAME VARCHAR (20) NOT NULL
, EMAIL VARCHAR (20) NOT NULL
, TYPE VARCHAR (20) NOT NULL
);
CREATE UNIQUE INDEX TEST_IND_EXPR1 ON TEST_IND_EXPR (NAME, EMAIL);
CREATE UNIQUE INDEX TEST_IND_EXPR2 ON TEST_IND_EXPR (NAME, TYPE) WHERE TYPE = 'Original';
INSERT INTO TEST_IND_EXPR VALUES ('Scott', 'scott#hotmail.com', 'Original');
INSERT INTO TEST_IND_EXPR VALUES ('Scott', 'scott#gmail.com', 'Personal');
INSERT INTO TEST_IND_EXPR VALUES ('Scott', 'scott#live.com', 'Personal');
INSERT INTO TEST_IND_EXPR VALUES ('Scott', 'scott#yahoo.com', 'Original');
The last statement returns SQL0803 as this row violates uniqueness of the TEST_IND_EXPR2 index.
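For completeness, the Work rows from the question should still insert without error, because TEST_IND_EXPR2 only covers rows with TYPE = 'Original':
INSERT INTO TEST_IND_EXPR VALUES ('Scott', 'scott#NBC.com', 'Work');
INSERT INTO TEST_IND_EXPR VALUES ('Scott', 'scott#ABC.com', 'Work');
-- both succeed: TEST_IND_EXPR1 sees distinct (NAME, EMAIL) pairs,
-- and TEST_IND_EXPR2 ignores rows whose TYPE is not 'Original'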

PostgreSQL to find the next available value

I have followed the example here to find the next available value in a table column: the generated value will be used by an application to insert data into another table. But if multiple concurrent application instances run the same query, some of them could get the same value. How could I avoid these collisions without changing the application? Is it possible to write a PostgreSQL function to handle this task?
You can use an IDENTITY column or a SEQUENCE.
Identity Column Example
create table t (
id int primary key not null generated always as identity,
name varchar(10)
);
insert into t (name) values ('New York');
insert into t (name) values ('Chicago');
Result:
id name
--- --------
1 New York
2 Chicago
Each INSERT statement will produce a different value for the id column, even when they are executed on separate simultaneous threads.
Sequence Example
create table u (
id int primary key not null,
name varchar(10)
);
create sequence sequ;
insert into u (id, name) values (nextval('sequ'), 'New York');
insert into u (id, name) values (nextval('sequ'), 'Chicago');
Result:
id name
--- --------
1 New York
2 Chicago
Again, each INSERT statement will produce a different value for the id column, even when they are executed on separate simultaneous threads.
See running example for both cases at DB Fiddle.
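If the application cannot be changed to call nextval itself, a variation of the sequence example is to attach the sequence as the column default (a sketch, assuming PostgreSQL and the table and sequence names from the example above):
-- existing INSERT statements that omit id then get a collision-free value automatically
alter table u alter column id set default nextval('sequ');
alter sequence sequ owned by u.id;  -- optional: tie the sequence's lifetime to the column
insert into u (name) values ('Boston');  -- id is assigned from sequ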

INSERT + SELECT data type mismatch on similar fields

I'm running the following SQLite workaround to add a primary key to a table that did not have one. I am getting a datatype mismatch on
INSERT INTO cities
SELECT id, name FROM old_cities;
However, the fields have exactly the same type. Is it possible that this happens because I am running the queries from DB Browser for SQLite?
CREATE table cities (
id INTEGER NOT NULL,
name TEXT NOT NULL
);
INSERT INTO cities (id, name)
VALUES ('pan', 'doul');
END TRANSACTION;
PRAGMA foreign_keys=off;
BEGIN TRANSACTION;
ALTER TABLE cities RENAME TO old_cities;
--CREATE TABLE cities (
-- id INTEGER NOT NULL PRIMARY KEY,
-- name TEXT NOT NULL
--);
CREATE TABLE cities (
id INTEGER NOT NULL,
name TEXT NOT NULL,
PRIMARY KEY (id)
);
SELECT * FROM old_cities;
INSERT INTO cities
SELECT id, name FROM old_cities;
DROP TABLE old_cities;
COMMIT;
You have defined the column id of the table cities to be INTEGER, but with this:
INSERT INTO cities (id, name) VALUES ('pan', 'doul');
you insert the string 'pan' as id.
SQLite does not do any type checking in this case and allows it.
Did you mean to insert two rows with the names 'pan' and 'doul'?
If so, you should do something like:
INSERT INTO cities (id, name) VALUES (1, 'pan'), (2, 'doul');
Later you rename the table cities to old_cities and recreate cities, but this time you define id as INTEGER and PRIMARY KEY.
An INTEGER PRIMARY KEY column is the one case where SQLite does enforce the declared type.
So, when you try to copy the rows from old_cities into the new cities, you get an error because 'pan' is not allowed in the column id as it is now defined.
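To see that behaviour in isolation, here is a minimal sketch (with a hypothetical cities_demo table) that reproduces the error:
CREATE TABLE cities_demo (
    id INTEGER NOT NULL,
    name TEXT NOT NULL,
    PRIMARY KEY (id)
);
INSERT INTO cities_demo (id, name) VALUES (1, 'pan');       -- accepted: id is an integer
INSERT INTO cities_demo (id, name) VALUES ('pan', 'doul');  -- fails: datatype mismatch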

How can I insert a row that references another postgres table via foreign key, and creates the foreign row too if it doesn't exist?

In Postgres, is there a way to atomically insert a row into a table where one column references another table, looking up whether the desired row exists in the referenced table and inserting it as well if it does not?
For example, say we have a US states table and a cities table which references the states table:
CREATE TABLE states (
state_id serial primary key,
name text
);
CREATE TABLE cities (
city_id serial,
name text,
state_id int references states(state_id)
);
When I want to add the city of Austin, Texas, I want to be able to see whether Texas exists in the states table, and if so use its state_id in the new row I'm inserting in the cities table. If Texas doesn't exist in the states table, I want to create it and then use its id in the cities table.
I tried this query, but I got an error saying
ERROR: WITH clause containing a data-modifying statement must be at the top level
LINE 2: WITH inserted AS (
^
WITH state_id AS (
    WITH inserted AS (
        INSERT INTO states(name)
        VALUES ('Texas')
        ON CONFLICT DO NOTHING
        RETURNING state_id),
    already_there AS (
        SELECT state_id FROM states
        WHERE name='Texas')
    SELECT * FROM inserted
    UNION
    SELECT * FROM already_there)
INSERT INTO cities(name, state_id)
VALUES
    ('Austin', (SELECT state_id FROM state_id));
Am I overlooking a simple solution?
Here is one option:
with inserted as (
    insert into states(name) values ('Texas')
    on conflict do nothing
    returning state_id
)
insert into cities(name, state_id)
values (
    'Dallas',
    coalesce(
        (select state_id from inserted),
        (select state_id from states where name = 'Texas')
    )
);
The idea is to attempt the insert in a CTE and then, in the main insert, check whether a value was inserted; otherwise select it.
For this to work properly, you need a unique constraint on states(name):
create table states (
state_id serial primary key,
name text unique
);
Demo on DB Fiddle
You can force the insert statement to return a value:
WITH inserted AS (
INSERT INTO states (name)
VALUES ('Texas')
ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.NAME
RETURNING state_id
)
. . .
The DO UPDATE SET forces the INSERT to return something.
I notice that you don't have a unique constraint, so you also need that:
ALTER TABLE states ADD CONSTRAINT unq_state_name
UNIQUE (name);
Otherwise the ON CONFLICT doesn't have anything to work with.
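Putting that together with the insert into cities, the complete statement could look like this sketch (same tables and the 'Austin'/'Texas' values from the question):
WITH inserted AS (
    INSERT INTO states (name)
    VALUES ('Texas')
    ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name
    RETURNING state_id
)
INSERT INTO cities (name, state_id)
SELECT 'Austin', state_id FROM inserted;
-- the CTE always returns the state_id, whether 'Texas' was just inserted or already existed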

INSERT INTO with default values for a single column

I have a problem inserting data into a table that has a single column.
Table name: user_id
Column: id
I am trying to add one row to this table with this query:
INSERT INTO user_id (id) VALUES ()
The problem is that the query above is invalid; I want id to take the last id value + 1.
It is not a general syntax problem, because this query works:
INSERT INTO user_id (id) VALUES (4)
So, I do not really know how to solve this problem.
Assuming the id column is defined as serial or identity, you can specify a column list and set the column value to DEFAULT:
insert into user_id (id) values (default);
This also works if you have more columns, e.g:
insert into users (id, firstname, lastname)
values (default, 'Arthur', 'Dent');
Or you can leave out the column list completely and request the default value(s) for all columns:
insert into user_id default values;
SQL supports the default values statement.
So this will work:
create table t (id serial primary key);
insert into t
default values;
The syntax is described in the documentation.
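If the id column is currently a plain integer with no default, one way to get the "last value + 1" behaviour is to turn it into an identity column first (a sketch, assuming PostgreSQL 10 or later and the table name from the question):
ALTER TABLE user_id
    ALTER COLUMN id ADD GENERATED ALWAYS AS IDENTITY;
-- if rows already exist, restart the identity above the current maximum id, for example:
ALTER TABLE user_id
    ALTER COLUMN id RESTART WITH 5;
-- after that, both forms from the answers work:
INSERT INTO user_id (id) VALUES (DEFAULT);
INSERT INTO user_id DEFAULT VALUES;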