Is it possible to create a table where one column (but not the primary key column) is auto-increment? That way, when I insert a row, I don't need to fill in the value for that column myself; the database fills it in for me and increments it on every new insert.
Thank you.
Yes, of course it is possible. Just make this column a unique key (not a primary key) and declare it with the appropriate attribute: IDENTITY in SQL Server, or AUTO_INCREMENT in MySQL (see the examples below). Another column can then be the primary key.
In MySQL the table could be declared like this:
CREATE TABLE `mytable` (
`Name` VARCHAR(50) NOT NULL,
`My_autoincrement_column` INTEGER(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`Name`),
UNIQUE KEY `My_autoincrement_column` (`My_autoincrement_column`)
);
Yes, you can do this. Here is a sample for SQL Server using IDENTITY:
CREATE TABLE MyTable (
PrimaryKey varchar(10) PRIMARY KEY,
IdentityColumn int IDENTITY(1,1) NOT NULL,
DefaultColumn CHAR(1) NOT NULL DEFAULT ('N')
)
INSERT INTO MyTable (PrimaryKey) VALUES ('A')
INSERT INTO MyTable (PrimaryKey) VALUES ('B')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('C', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('D', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('E', DEFAULT)
--INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('F', NULL) -- ERROR
--> Cannot insert the value NULL into column 'DefaultColumn', table 'tempdb.dbo.MyTable'; column does not allow nulls. INSERT fails.
SELECT * FROM MyTable
Here is a SQL Server example that uses a function to roll your own incrementing column. This is by no means fault tolerant, nor the way I would do it (I'd use the identity feature). However, it is good to know that you can use functions to return default values.
DROP TABLE MyTable
GO
DROP FUNCTION get_default_for_mytable
GO
CREATE FUNCTION get_default_for_mytable
()
RETURNS INT
AS
BEGIN
-- Declare the return variable here
DECLARE @ResultVar int
-- Add the T-SQL statements to compute the return value here
SET @ResultVar = COALESCE((SELECT MAX(HomeBrewedIdentityColumn) FROM MyTable), 0) + 1
-- Return the result of the function
RETURN @ResultVar
END
GO
CREATE TABLE MyTable (
PrimaryKey varchar(10) PRIMARY KEY,
IdentityColumn int IDENTITY(1,1) NOT NULL,
DefaultColumn CHAR(1) NOT NULL DEFAULT ('N'),
HomeBrewedIdentityColumn int NOT NULL DEFAULT(dbo.get_default_for_mytable())
)
GO
INSERT INTO MyTable (PrimaryKey) VALUES ('A')
INSERT INTO MyTable (PrimaryKey) VALUES ('B')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('C', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('D', 'Y')
INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('E', DEFAULT)
--INSERT INTO MyTable (PrimaryKey, DefaultColumn) VALUES ('F', NULL) -- ERROR
--> Cannot insert the value NULL into column 'DefaultColumn', table 'tempdb.dbo.MyTable'; column does not allow nulls. INSERT fails.
SELECT * FROM MyTable
Results
PrimaryKey IdentityColumn DefaultColumn HomeBrewedIdentityColumn
---------- -------------- ------------- ------------------------
A 1 N 1
B 2 N 2
C 3 Y 3
D 4 Y 4
E 5 N 5
I think you can have only one identity/auto-increment column per table. That column doesn't have to be the primary key, but it does mean you have to insert the primary key values yourself.
If you already have a primary key that is auto-incrementing, I would try to use that if possible.
If you are trying to get a row ID to range over for querying, I would look at creating a view that includes the row ID (not possible in SQL 2000 or below); see the sketch below.
Could you add what your primary key is and what you intend to use the auto-increment column for? That might help in coming up with a solution.
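For illustration, here is a minimal sketch of that view idea, reusing the MyTable definition from the IDENTITY example above (the view name and choice of ordering column are made up); ROW_NUMBER() requires SQL Server 2005 or later:
CREATE VIEW vw_MyTableWithRowId
AS
-- Number the rows on the fly; ordering by the primary key keeps the numbering deterministic
SELECT ROW_NUMBER() OVER (ORDER BY PrimaryKey) AS RowId,
       PrimaryKey,
       DefaultColumn
FROM MyTable;
GO
-- Range on the generated row ID
SELECT * FROM vw_MyTableWithRowId WHERE RowId BETWEEN 2 AND 4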
On SQL Server this is called an identity column.
Oracle and DB2 have sequences, but I think you are looking for an identity column, and all major DBMSs (MySQL, SQL Server, DB2, Oracle) support it.
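For comparison, here is a rough sketch of the equivalent declarations in each dialect (the table and column names are made up):
-- SQL Server: IDENTITY on a non-key column
CREATE TABLE demo_sqlserver (
    name varchar(50) PRIMARY KEY,
    counter int IDENTITY(1,1) NOT NULL
);

-- MySQL: AUTO_INCREMENT (the column must be indexed, hence the UNIQUE KEY)
CREATE TABLE demo_mysql (
    name VARCHAR(50) PRIMARY KEY,
    counter INT NOT NULL AUTO_INCREMENT,
    UNIQUE KEY (counter)
);

-- Oracle (before 12c): a sequence, referenced explicitly in the INSERT (or via a trigger)
CREATE SEQUENCE demo_seq START WITH 1 INCREMENT BY 1;
CREATE TABLE demo_oracle (
    name VARCHAR2(50) PRIMARY KEY,
    counter NUMBER NOT NULL
);
INSERT INTO demo_oracle (name, counter) VALUES ('first', demo_seq.NEXTVAL);
-- Oracle 12c and later also support: counter NUMBER GENERATED ALWAYS AS IDENTITY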
Related
How to create a table in SQL with the following attributes?
The table has two columns A and B.
The primary key of the table is (A, B).
All values in A are unique. Pseudo code: COUNT(A) == COUNT(DISTINCT A).
All values in B are also unique.
CREATE TABLE IF NOT EXISTS myTable(
A VARCHAR(32) NOT NULL PRIMARY KEY, -- A HAS DISTINCT VALUES
B VARCHAR(32) NOT NULL UNIQUE       -- B HAS DISTINCT VALUES
);
INSERT INTO myTable VALUES ('A1', 'B1') --> Add value
INSERT INTO myTable VALUES ('A1', 'B2') --> Do not add value
INSERT INTO myTable VALUES ('A2', 'B2') --> Add value
INSERT INTO myTable VALUES ('A3', 'B3') --> Add value
INSERT INTO myTable VALUES ('A4', 'B3') --> Do not add value
INSERT INTO myTable VALUES ('A4', 'B4') --> Add value
INSERT INTO myTable VALUES ('A5', 'B6') --> Add value
To define a compound PRIMARY KEY:
CREATE TABLE myTable
(
A VARCHAR(32) NOT NULL,
B VARCHAR(32) NOT NULL,
CONSTRAINT PK_AB primary key (A,B),
CONSTRAINT UQ_A UNIQUE(A),
CONSTRAINT UQ_B UNIQUE(B)
);
Please note: a table with just two columns, both of which are in the primary key, smells funny.
I'm coming from a Teradata environment where
create table mytable
(
first_column varchar(50),
second_column varchar(50),
third_column varchar(50)
)
insert into mytable values (first_column = 'one', second_column = 'first')
insert into mytable values (first_column = 'two', third_column = 'second')
is possible. This does not seem to be possible in HANA even with default specified
create column table mytable
(
"FIRST_COLUMN" varchar(50) default null,
"SECOND_COLUMN" varchar(50) default null,
"THIRD_COLUMN" varchar(50) default null
)
I could create a row with a unique ID, specifying NULLs for all the fields, and then UPDATE the columns I want using that ID, but that seems time consuming and awkward. Is there a better way?
Use the standard syntax:
insert into mytable (first_column, second_column)
values ('one', 'first');
This should work in both HANA and Teradata -- and in any other database.
I found this: Unique constraint on multiple columns
SQL> CREATE TABLE t (id1 NUMBER, id2 NUMBER);
Table created
SQL> ALTER TABLE t ADD CONSTRAINT u_t UNIQUE (id1, id2);
Table altered
SQL> INSERT INTO t VALUES (1, NULL);
1 row inserted
SQL> INSERT INTO t VALUES (1, NULL);
INSERT INTO t VALUES (1, NULL)
ORA-00001: unique constraint (VNZ.U_T) violated
I want to create a constraint that allows entering several (X, null) values, so that the constraint only kicks in when BOTH values it covers are not null. Is this possible?
Note that you can insert multiple (NULL, NULL) rows, but not multiple (1, NULL) rows. This is how indexes work in Oracle: when all indexed columns are null, there is no entry in the index.
So rather than building a normal index on (id1, id2), we must build a function-based index that turns both values into null when at least one of them is null. We need deterministic functions for this. First DECODE to check for null, then GREATEST, making use of the fact that it returns null when at least one value is null:
create unique index idx_t_unique on t
(
decode(greatest(id1,id2),null,null,id1),
decode(greatest(id1,id2),null,null,id2)
);
EDIT (after acceptance :-)): I just noticed that you don't need deterministic functions; you can also use CASE constructs. Maybe that was always possible, maybe not, I don't know. In any case, you can also write the index as follows, if you find it more readable:
create unique index idx_t_unique on t
(
case when id1 is null or id2 is null then null else id1 end,
case when id1 is null or id2 is null then null else id2 end
);
You need a CHECK constraint in this case:
ALTER TABLE t ADD CONSTRAINT chk_t CHECK (id1 is null or id2 is null);
If you need unique-constraint behaviour, you may try this:
drop table t1;
create table t1 (n number, m number);
create unique index t_inx on t1(case when n is null then null when m is null then null else n || '_' || m end);
insert into t1 values (1, null);
insert into t1 values (1, null);
insert into t1 values (null, 1);
insert into t1 values (null, 1);
insert into t1 values (1, 1);
insert into t1 values (1, 1);
insert into t1 values (1, 2);
This is a unique function-based index.
In SQL Server 2008, is there a way to insert rows while omitting those rows that cause a foreign key constraint to fail?
E.g. I have an insert statement similar to this:
insert into tblFoo(id, name, parent_id, desc) values
(1, 'a', 1, null),
(2, 'c', 3, 'blah'),
....;
parent_id is an FK to another table. How can I get SQL Server to skip the rows on which the FK column is invalid?
Update: I would like to get this to work automatically, without first having to filter out the rows that violate the FK constraint. The reason is that the insert statements are generated by a program, so it is not known beforehand which foreign keys exist on each table.
It's a weird situation you've got there, but you can insert the values into a temporary table and then select only the values with a valid FK.
Something like:
declare @tempTable table (
id int,
name nvarchar(50),
parent_id int,
[desc] nvarchar(50)
)
insert into @tempTable values
(1, 'a', 1, null),
(2, 'c', 3, 'blah')
insert into tblFoo(id, name, parent_id, [desc])
select tempTable.* from @tempTable as tempTable
inner join tblParent on tblParent.id = tempTable.parent_id -- tblParent stands in for whatever table parent_id references
One way would be to use an INSTEAD OF trigger on inserts and updates. You could then evaluate each incoming row before the actual write to the DB takes place. I'm generally not a huge fan of triggers, but you seem to have an unusual requirement here.
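For what it's worth, here is a rough sketch of such an INSTEAD OF INSERT trigger, assuming the tblFoo layout from the question and a made-up parent table name tblParent for the table that parent_id references:
CREATE TRIGGER trg_tblFoo_InsteadOfInsert
ON tblFoo
INSTEAD OF INSERT
AS
BEGIN
    -- Insert only the rows whose parent_id exists in the (assumed) parent table;
    -- rows that would violate the FK are silently skipped
    INSERT INTO tblFoo (id, name, parent_id, [desc])
    SELECT i.id, i.name, i.parent_id, i.[desc]
    FROM inserted AS i
    WHERE i.parent_id IS NULL
       OR EXISTS (SELECT 1 FROM tblParent p WHERE p.id = i.parent_id);
END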
I have a table with one and only one column, which is the identity column (PK) of the table. How do I insert a row into this table?
INSERT INTO table_name
doesn't work, and neither does:
INSERT INTO table_name() VALUES()
VALID SOLUTION FROM THE ANSWER:
INSERT INTO table_name DEFAULT VALUES
DECLARE #TABLE TABLE
(
ID INT IDENTITY(1,1) PRIMARY KEY
)
INSERT INTO #TABLE DEFAULT VALUES
SELECT * FROM #TABLE
If you want to insert explicit values into the identity column, you need to enable identity insert on the table first:
SET IDENTITY_INSERT table_name ON
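With IDENTITY_INSERT on, an explicit column list and value are required, and it should be switched off again afterwards. A short sketch, assuming the identity column is named ID as in the table-variable example above:
INSERT INTO table_name (ID) VALUES (42)
SET IDENTITY_INSERT table_name OFF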