SQL Not Auto-incrementing

I have the following SQL that I run from a C# app.
It all works, except that the ID column doesn't auto-increment. It takes the value 1 for the first entry and then refuses any further inserts because it cannot create a unique ID.
Here is the SQL:
CREATE TABLE of_mapplist_raw (
id integer PRIMARY KEY NOT NULL,
form_name varchar(200) NOT NULL,
form_revi varchar(200) NOT NULL,
source_map varchar(200),
page_num varchar(200) NOT NULL,
fid varchar(200) NOT NULL,
fdesc varchar(200) NOT NULL
)
I'm sure it's a schoolboy error at play here.

You need to specify the IDENTITY property with its seed and increment. (The integer keyword itself is valid, by the way; it's the ISO synonym for int.)
id [int] IDENTITY(1,1) NOT NULL,
The first value is the seed; the second is the delta between increments.
A question you might ask:
A delta between increments? Why do I need that? Isn't it always 1?
Well, yes and no. Sometimes you want to leave a gap between rows so you can insert rows in between later, especially if the table's clustered index is on that key and speed is important; you can pre-design it to leave gaps.
P.S. I'll be glad to hear other scenarios from readers.
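As a small illustration of seed and increment (the table here is made up for the example), a column declared IDENTITY(100, 10) starts at 100 and leaves a gap of 10 between generated ids:
CREATE TABLE demo_identity (
id int IDENTITY(100, 10) NOT NULL, -- seed 100, increment 10
note varchar(50) NOT NULL
);
INSERT INTO demo_identity (note) VALUES ('first'), ('second'), ('third');
SELECT id, note FROM demo_identity; -- ids: 100, 110, 120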

You need to add the IDENTITY property:
id int IDENTITY(1,1) NOT NULL
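Applied to the table from the question, a minimal sketch of the corrected definition (assuming SQL Server, since IDENTITY is being suggested):
CREATE TABLE of_mapplist_raw (
id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
form_name varchar(200) NOT NULL,
form_revi varchar(200) NOT NULL,
source_map varchar(200),
page_num varchar(200) NOT NULL,
fid varchar(200) NOT NULL,
fdesc varchar(200) NOT NULL
);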


Identity Column SQL skip zero

I have the following table:
CREATE TABLE [T_Manufacture](
[ManufactureID] [int] IDENTITY(-2147483648,1) NOT NULL,
[Manufacture] [varchar](25) NULL,
[ManufactureDescription] [varchar](50) NULL)
As you can see, ManufactureID is an identity column that starts from -2147483648. So when the identity reaches -1, the next record should get 0 (zero). Is it possible to skip this value? I mean, I don't want to have an ID with the value zero.
Please help.
Thank you.
This is an elaboration on Deepak's comment.
Why would anyone use negative numbers for an id column? And if you are concerned about running out of 2 billion values, then you should be just as concerned about running out of 4 billion, and use a bigint:
CREATE TABLE T_Manufacture (
ManufactureID bigint IDENTITY(1, 1) PRIMARY KEY, -- might as well declare it
Manufacture varchar(25),
ManufactureDescription varchar(50)
);
This is the simplest way to solve your problem and to ensure that the table can grow as large as you want.
In other words, the right way to "skip" a particular value is to start the identity enumeration after that value. Or, if you really wanted:
CREATE TABLE T_Manufacture (
ManufactureID int IDENTITY(-1, -1) PRIMARY KEY, -- might as well declare it
Manufacture varchar(25),
ManufactureDescription varchar(50)
);
But once again, negative ids seem absurd.
You can control this with a trigger...
CREATE TRIGGER SkipZeroManufacture ON T_Manufacture AFTER INSERT
AS
BEGIN
IF EXISTS (SELECT 'zero record inserted' FROM inserted AS I WHERE I.ManufactureID = 0)
BEGIN
INSERT INTO T_Manufacture (
Manufacture,
ManufactureDescription)
SELECT
I.Manufacture,
I.ManufactureDescription
FROM
inserted AS I
WHERE
i.ManufactureID = 0
DELETE T_Manufacture WHERE ManufactureID = 0
END
END
Although this is an awful idea: it fires on every insert, it only checks for one value, and it will need maintenance whenever columns are added or changed. It also replaces the row that received ID 0 with a copy under a different ID.
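Another option, not from the answers above but offered as a sketch: if a manual (or scheduled) step is acceptable, reseed the identity once it reaches -1 so that the next generated value skips 0:
-- Assumption: the current identity value is -1.
-- RESEED sets the current value to 0, so the next insert receives 0 + 1 = 1.
DBCC CHECKIDENT ('T_Manufacture', RESEED, 0);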

I have a GUID Clustered primary key - Is there a way I can optimize or unfragment a table that might be fragmented?

Here's the code I have. The table actually has 20 more columns but I am just showing the first few:
CREATE TABLE [dbo].[Phrase]
(
[PhraseId] [uniqueidentifier] NOT NULL,
[PhraseNum] [int] NULL,
[English] [nvarchar](250) NOT NULL,
PRIMARY KEY CLUSTERED ([PhraseId] ASC)
) ON [PRIMARY]
GO
From what I remember reading in
Fragmentation and GUID clustered key
it was good to have a GUID for the primary key, but now it's been suggested it's not a good idea, as data has to be re-ordered for each insert, causing fragmentation.
Can anyone comment on this? Now that my table has already been created, is there a way to defragment it? Also, how can I stop this problem getting worse? Can I modify the existing table to add NEWSEQUENTIALID?
That's true, NEWSEQUENTIALID helps to fill the data and index pages completely.
But a uniqueidentifier is four times the size of an int (16 bytes vs. 4), so roughly four times as many pages will be required.
create table #t (col int,
col2 uniqueidentifier DEFAULT NEWSEQUENTIALID())
insert into #t (col) values (1),(2)
select DATALENGTH(col2), DATALENGTH(col) from #t  -- returns 16 and 4
drop table #t
Suppose x data pages are required to hold 100 rows with an int key; with a uniqueidentifier key roughly 4x data pages will be required to hold the same 100 rows.
A query will therefore read more pages to fetch the same number of records.
So, if you can alter the table, you can add an int IDENTITY column and make it the clustered primary key (a sketch of that change follows). You can keep or drop the [uniqueidentifier] column as your requirements dictate.
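A minimal sketch of that change (the constraint names here are made up; the original primary key was declared without a name, so look up its system-generated name first, and test on a copy, since rebuilding the clustered index rewrites the table):
-- Drop the existing clustered PK on the GUID (use the actual system-generated name).
ALTER TABLE [dbo].[Phrase] DROP CONSTRAINT [PK__Phrase__SystemName];
-- Add an int identity column and make it the clustered primary key.
ALTER TABLE [dbo].[Phrase] ADD [PhraseKey] int IDENTITY(1,1) NOT NULL;
ALTER TABLE [dbo].[Phrase]
ADD CONSTRAINT [PK_Phrase_PhraseKey] PRIMARY KEY CLUSTERED ([PhraseKey]);
-- Optionally keep the GUID unique and usable for lookups.
CREATE UNIQUE NONCLUSTERED INDEX [UX_Phrase_PhraseId] ON [dbo].[Phrase] ([PhraseId]);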
This looks like a duplicate of:
INT vs Unique-Identifier for ID field in database
But here's a rehash for your issue:
Rather than a GUID, and depending on your table size, int or bigint would be the better choice, both from a storage and an optimization standpoint. You might also consider defining the column as int IDENTITY NOT NULL so that it populates itself.
GUIDs have a considerable storage impact due to their length.
CREATE TABLE [dbo].[Phrase]
(
[PhraseId] [int] identity NOT NULL
CONSTRAINT [PK_Phrase_PhraseId] PRIMARY KEY,
[PhraseNum] [int] NULL,
[English] [nvarchar](250) NOT NULL,
....
) ON [PRIMARY]
GO
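For the other parts of the question, a sketch assuming you keep the GUID key: rebuilding the clustered index removes the existing fragmentation, and a NEWSEQUENTIALID() default makes future inserts sequential. The default only affects new rows, it does not reorder existing data, and the constraint name below is made up:
-- Remove existing fragmentation by rebuilding the table's indexes.
ALTER INDEX ALL ON [dbo].[Phrase] REBUILD;
-- Generate sequential GUIDs for new rows instead of random ones.
ALTER TABLE [dbo].[Phrase]
ADD CONSTRAINT [DF_Phrase_PhraseId] DEFAULT NEWSEQUENTIALID() FOR [PhraseId];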

T-SQL create table statement will not accept variable

Why can I not use a variable to name a new table?
As a beginning SQL project, I'm making a personal finance database. Each account will have a corresponding table in the database. There is also a table listing all the current accounts. See (simplified) code sample below:
CREATE TABLE accountList
(
[Id] INT NOT NULL PRIMARY KEY IDENTITY,
[Name] NCHAR(30) NOT NULL UNIQUE,
[Active] BIT NOT NULL
)
INSERT INTO accountList(name, active)
VALUES
('Bank_One_Checking', 1);
CREATE TABLE Bank_One_Checking
(
[Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
[payee] NCHAR(30) NOT NULL UNIQUE,
[category] NCHAR(30) NOT NULL UNIQUE,
[amount] INT NOT NULL DEFAULT 0.00
)
This code works. I want to set the account name to a variable (so it can be passed as a parameter to a stored procedure). See code below:
DECLARE @accountName nchar(30);
SET @accountName = 'Bank_One_Savings';
INSERT INTO accountList(name, active)
VALUES
(@accountName, 1);
CREATE TABLE @accountName
(
[Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
[payee] NCHAR(30) NOT NULL UNIQUE,
[category] NCHAR(30) NOT NULL UNIQUE,
[amount] INT NOT NULL DEFAULT 0.00
)
Line 6 in that code (CREATE TABLE @accountName) produces an error:
Incorrect syntax near '@accountName', expecting '.', 'ID', or 'QUOTED_ID'.
Why won't it insert the variable into the command?
SQL doesn't allow table names to be variables. You could use dynamic SQL, if you like, but I strongly recommend against it.
Your code has several flaws. You should learn not only how to fix them but also why they are wrong.
You need a "master" table where the account name is a column. Multiple tables with the same structure are almost always a sign of poor database design.
Strings should be declared as VARCHAR() or NVARCHAR(), unless they are short or known to always be the same length (say, an account number that is always 15 characters). Fixed-length strings just waste space.
I find it unlikely that a column named category would be unique in such a table. It seems to violate what uniqueness means.
Integers are not appropriate for monetary amounts in most of the world (use decimal or money), and they shouldn't be initialized to constants with a decimal point. A sketch of the single-table design is below.
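A minimal sketch of that design (table and column names are illustrative, not from the question): one accounts table and one transactions table that references it, so adding an account is an INSERT rather than a new table.
CREATE TABLE accountList
(
[Id] INT NOT NULL PRIMARY KEY IDENTITY,
[Name] NVARCHAR(30) NOT NULL UNIQUE,
[Active] BIT NOT NULL
);
CREATE TABLE accountTransaction
(
[Id] BIGINT NOT NULL PRIMARY KEY IDENTITY,
[AccountId] INT NOT NULL REFERENCES accountList([Id]),
[Payee] NVARCHAR(30) NOT NULL,
[Category] NVARCHAR(30) NULL,
[Amount] DECIMAL(19, 2) NOT NULL DEFAULT 0.00
);
-- Adding a new account no longer requires a new table:
INSERT INTO accountList([Name], [Active]) VALUES (N'Bank_One_Savings', 1);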

store two values in one field sql

I have to create a table in SQL where one of the columns stores awards for a movie. The schema says it should store something like "Oscar, screenplay". Is it possible to store two values in the same field in SQL? If so, what datatype would that be, and how would you query the table for it?
It's a horrible design pattern to store more than one piece of data in a single column in a relational database. The exact design of your system depends on several things, but here is one possible way to model it:
CREATE TABLE Movie_Awards (
movie_id INT NOT NULL,
award_id INT NOT NULL,
CONSTRAINT PK_Movie_Awards PRIMARY KEY CLUSTERED (movie_id, award_id)
)
CREATE TABLE Movies (
movie_id INT NOT NULL,
title VARCHAR(50) NOT NULL,
year_released SMALLINT NULL,
...
CONSTRAINT PK_Movies PRIMARY KEY CLUSTERED (movie_id)
)
CREATE TABLE Awards (
award_id INT NOT NULL,
ceremony_id INT NOT NULL,
name VARCHAR(50) NOT NULL, -- Ex: Best Picture
CONSTRAINT PK_Awards PRIMARY KEY CLUSTERED (award_id)
)
CREATE TABLE Ceremonies (
ceremony_id INT NOT NULL,
name VARCHAR(50) NOT NULL, -- Ex: "Academy Awards"
nickname VARCHAR(50) NULL, -- Ex: "Oscars"
CONSTRAINT PK_Ceremonies PRIMARY KEY CLUSTERED (ceremony_id)
)
I didn't include Foreign Key constraints here, but hopefully they should be pretty obvious.
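To answer the "how would you query the table" part, a sketch of a query against those tables (the movie title is just a placeholder):
SELECT m.title, c.nickname AS ceremony, a.name AS award
FROM Movies AS m
JOIN Movie_Awards AS ma ON ma.movie_id = m.movie_id
JOIN Awards AS a ON a.award_id = ma.award_id
JOIN Ceremonies AS c ON c.ceremony_id = a.ceremony_id
WHERE m.title = 'Casablanca';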
Anything's possible; that doesn't mean it's a good idea :)
Far better to normalize your structure and store types like so:
AwardTypes:
AwardTypeID
AwardTypeName
Movies:
MovieID
MovieName
MovieAwardType:
MovieID
AwardTypeID
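As DDL, that outline might look like the following sketch (types and lengths are assumptions, since only column names are listed above):
CREATE TABLE AwardTypes (
AwardTypeID int IDENTITY(1,1) PRIMARY KEY,
AwardTypeName varchar(50) NOT NULL
);
CREATE TABLE Movies (
MovieID int IDENTITY(1,1) PRIMARY KEY,
MovieName varchar(50) NOT NULL
);
CREATE TABLE MovieAwardType (
MovieID int NOT NULL REFERENCES Movies(MovieID),
AwardTypeID int NOT NULL REFERENCES AwardTypes(AwardTypeID),
PRIMARY KEY (MovieID, AwardTypeID)
);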
You can serialize your data in JSON format, store the JSON string, and deserialize it on read. That is safer than inventing your own format.
Data presentation doesn't have to be so closely tied to the physical data organisation. Wouldn't it be better to store these two pieces of data in two separate columns and just concatenate them at display time?
It is much less painful to join data than to split it, if you happen to need just the screenplay one day...

Ensuring uniqueness of multiple large URL fields in MS SQL

I have a table with the following definition:
CREATE TABLE url_tracker (
id int not null identity(1, 1),
active bit not null,
install_date int not null,
partner_url nvarchar(512) not null,
local_url nvarchar(512) not null,
public_url nvarchar(512) not null,
primary key(id)
);
And I have a requirement that these three URLs always be unique - any individual URL can appear many times, but the combination of the three must be unique (for a given day).
Initially I thought I could do this:
CREATE UNIQUE INDEX uniques ON url_tracker
(install_date, partner_url, local_url, public_url);
However this gives me back the warning:
Warning! The maximum key length is 900 bytes. The index 'uniques' has maximum
length of 3076 bytes. For some combination of large values, the insert/update
operation will fail.
Digging around I learned about the INCLUDE argument to CREATE INDEX, but according to this question converting the command to use INCLUDE will not enforce uniqueness on the URLs.
CREATE UNIQUE INDEX uniques ON url_tracker (install_date)
INCLUDE (partner_url, local_url, public_url);
How can I enforce uniqueness on several relatively large nvarchar fields?
Resolution
So from the comments and answers and more research I'm concluding I can do this:
CREATE TABLE url_tracker (
id int not null identity(1, 1),
active bit not null,
install_date int not null,
partner_url nvarchar(512) not null,
local_url nvarchar(512) not null,
public_url nvarchar(512) not null,
uniquehash AS HashBytes('SHA1',partner_url+local_url+public_url) PERSISTED,
primary key(id)
);
CREATE UNIQUE INDEX uniques ON url_tracker (install_date,uniquehash);
Thoughts?
I would make a computed column with the hash of the URLs, then make a unique index/constraint on that. Consider making the hash a persisted computed column. It shouldn't have to be recalculated after insertion.
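A minimal sketch of that suggestion, applied to the original table (the '|' separator and the VARBINARY(20) cast are assumptions: the separator keeps different splits of the same concatenated text from hashing identically, and SHA1 output is 20 bytes, so the index key stays well under the 900-byte limit):
CREATE TABLE url_tracker (
id int IDENTITY(1, 1) NOT NULL,
active bit NOT NULL,
install_date int NOT NULL,
partner_url nvarchar(512) NOT NULL,
local_url nvarchar(512) NOT NULL,
public_url nvarchar(512) NOT NULL,
url_hash AS CONVERT(varbinary(20),
HASHBYTES('SHA1', partner_url + N'|' + local_url + N'|' + public_url)) PERSISTED,
primary key(id)
);
CREATE UNIQUE INDEX uniques ON url_tracker (install_date, url_hash);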
Following the ideas from the conversation in the comments: assuming that you can change the datatype of the URL to VARCHAR(900) (or NVARCHAR(450) if you really think you need Unicode URLs) and can live with that limit on URL length, this solution could work. This also assumes SQL Server 2008 or later. Please always specify which version you're working with; sql-server is not specific enough, since solutions can vary greatly depending on the version.
Setup:
USE tempdb;
GO
CREATE TABLE dbo.urls
(
id INT IDENTITY(1,1) PRIMARY KEY,
url VARCHAR(900) NOT NULL UNIQUE
);
CREATE TABLE dbo.url_tracker
(
id INT IDENTITY(1,1) PRIMARY KEY,
active BIT NOT NULL DEFAULT 1,
install_date DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
partner_url_id INT NOT NULL REFERENCES dbo.urls(id),
local_url_id INT NOT NULL REFERENCES dbo.urls(id),
public_url_id INT NOT NULL REFERENCES dbo.urls(id),
CONSTRAINT unique_urls UNIQUE
(
install_date,partner_url_id, local_url_id, public_url_id
)
);
Insert some URLs:
INSERT dbo.urls(url) VALUES
('http://msn.com/'),
('http://aol.com/'),
('http://yahoo.com/'),
('http://google.com/'),
('http://gmail.com/'),
('http://stackoverflow.com/');
Now let's insert some data:
-- succeeds:
INSERT dbo.url_tracker(partner_url_id, local_url_id, public_url_id)
VALUES (1,2,3), (2,3,4), (3,4,5), (4,5,6);
-- fails:
INSERT dbo.url_tracker(partner_url_id, local_url_id, public_url_id)
VALUES(1,2,3);
GO
/*
Msg 2627, Level 14, State 1, Line 3
Violation of UNIQUE KEY constraint 'unique_urls'. Cannot insert duplicate key
in object 'dbo.url_tracker'. The duplicate key value is (2011-09-15, 1, 2, 3).
The statement has been terminated.
*/
-- succeeds, since it's for a different day:
INSERT dbo.url_tracker(install_date, partner_url_id, local_url_id, public_url_id)
VALUES('2011-09-01',1,2,3);
Cleanup:
DROP TABLE dbo.url_tracker, dbo.urls;
Now, if 900 bytes is not enough, you could change the URL table slightly:
CREATE TABLE dbo.urls
(
id INT IDENTITY(1,1) PRIMARY KEY,
url VARCHAR(2048) NOT NULL,
url_hash AS CONVERT(VARBINARY(32), HASHBYTES('SHA1', url)) PERSISTED,
CONSTRAINT unique_url UNIQUE(url_hash)
);
The rest doesn't have to change. And if you try to insert the same URL twice, you get a similar violation, e.g.
INSERT dbo.urls(url) SELECT 'http://www.google.com/';
GO
INSERT dbo.urls(url) SELECT 'http://www.google.com/';
GO
/*
Msg 2627, Level 14, State 1, Line 1
Violation of UNIQUE KEY constraint 'unique_url'. Cannot insert duplicate key
in object 'dbo.urls'. The duplicate key value is
(0xd111175e022c19f447895ad6b72ff259552d1b38).
The statement has been terminated.
*/