I created a script of my database.
But when I run it, the script does not create the database. It skips the CREATE DATABASE statement and only creates the tables (in whatever database I happen to have selected at the moment, so not ideal...).
(The query executes with no errors, by the way.)
Why is this happening? Why can't you create a database and edit its contents in one go?
(I know you can check whether the database exists first, but this shouldn't be happening in the first place.)
--My Script--
CREATE DATABASE [EthicsDB]
USE [EthicsDB]
go
CREATE TABLE [dbo].[TempEmployee](
[PersonnelNumber] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](80) NULL,
[SurName] [varchar](80) NULL,
[ManagerEmail] [varchar](80) NULL,
[ManagerName] [varchar](80) NULL,
CONSTRAINT [PK_TempEmployee] PRIMARY KEY CLUSTERED
(
[PersonnelNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
You must use GO after CREATE DATABASE [EthicsDB].
Try this one:
USE [master]
GO
IF EXISTS (
SELECT 1 FROM sys.databases WHERE name = N'EthicsDB'
)
DROP DATABASE [EthicsDB]
GO
CREATE DATABASE [EthicsDB]
GO --<----
USE [EthicsDB]
GO
CREATE TABLE [dbo].[TempEmployee](
[PersonnelNumber] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](80) NULL,
[SurName] [varchar](80) NULL,
[ManagerEmail] [varchar](80) NULL,
[ManagerName] [varchar](80) NULL,
CONSTRAINT [PK_TempEmployee] PRIMARY KEY CLUSTERED
(
[PersonnelNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
If you run the SQL as provided, you get an error message on the line
USE [EthicsDB]
This happens because SQL Server (when running commands via SSMS or sqlcmd) processes the SQL in batches.
Because you have no GO statement after the CREATE DATABASE statement, SQL Server may not yet recognise that the new database EthicsDB has been created, so when you attempt to switch to it with USE [EthicsDB] the statement fails.
As your SQL statements are not wrapped in a transaction and you are not checking for errors, when SQL Server encounters an error it raises the error but also continues to process the rest of the query.
In the query provided, this leads to the new tables being created in the current database.
To correct the problem, modify your query to:
CREATE DATABASE [EthicsDB]
go
USE [EthicsDB]
go
You should probably wrap each action in a transaction block.
Also, when creating a table, I generally check first to see whether it already exists.
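A minimal, untested sketch of both of those suggestions, using the TempEmployee table from the script above (trimmed to two columns for brevity):
IF NOT EXISTS (SELECT 1 FROM sys.tables
               WHERE name = N'TempEmployee' AND schema_id = SCHEMA_ID(N'dbo'))
BEGIN
    BEGIN TRANSACTION;
    -- create the table only when it is not already there
    CREATE TABLE [dbo].[TempEmployee](
        [PersonnelNumber] [int] IDENTITY(1,1) NOT NULL,
        [Name] [varchar](80) NULL,
        CONSTRAINT [PK_TempEmployee] PRIMARY KEY CLUSTERED ([PersonnelNumber] ASC)
    );
    COMMIT;
END
GO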
If you run only the CREATE DATABASE statement, what happens?
Updated question for better understanding, and because I found the solution I was looking for:
My script goes like this:
CREATE TABLE [dbo].[MyTable]
(
[Columndata1] [nvarchar] (255) NOT NULL,
[Columndata2] [nvarchar] (max) NOT NULL,
[Columndata3] [nvarchar] (max) NOT NULL,
[ColumndataTime] [datetime] NOT NULL,
CONSTRAINT [PK_MyTable]
PRIMARY KEY CLUSTERED ([Columndata1] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[MyTable]
ADD CONSTRAINT [DF_MyTable_ColumndataTime] DEFAULT (getutcdate()) FOR [ColumndataTime]
GO
I am trying to do a workaround for the duplicate PK case, so that if a duplicate occurs the request is ignored, and otherwise the entry is created.
I guess I wasn't clear about that in my initial question.
I think you already have your answer: you are inserting duplicate values into the column you designated as your primary key, and the error you are getting tells you exactly that. I can see that you assumed something more "sinister" was happening, but it seems that wasn't the case; it wasn't a "race condition" or anything more complex.
However, I thought it might be worth pointing out something that I see as a bit of a "red flag". Maybe this doesn't qualify as an answer, but it does address some points in your original question, particularly where you ask about the options in the "complicated" part of your CREATE TABLE script, and it's too long for a comment.
If you have a "default", out-of-the-box installation of SQL Server, then 90% of the statements in your CREATE TABLE script are simply redundant defaults.
I can run this script:
CREATE TABLE [dbo].[MyTable] (
[Columndata1] [nvarchar] (255) NOT NULL,
[Columndata2] [nvarchar] (max) NOT NULL,
[Columndata3] [nvarchar] (max) NOT NULL,
[ColumndataTime] [datetime] NOT NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED ([Columndata1]));
Then I can generate the create script for the table, directly from SSMS, to get this:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MyTable](
[Columndata1] [nvarchar](255) NOT NULL,
[Columndata2] [nvarchar](max) NOT NULL,
[Columndata3] [nvarchar](max) NOT NULL,
[ColumndataTime] [datetime] NOT NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[Columndata1] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
That second script looks very similar to the one in your post, doesn't it?
Now, there's nothing necessarily wrong with specifying all of these default options, and there are cases where it might work to your advantage, but my personal preference (and the preference of everyone I have ever worked with) is to omit the default options, as they just make scripts harder to peer review and are essentially "clutter". You can argue over whether it's worth specifying the ASC in the PRIMARY KEY section; some of this is assumed knowledge, and there's always the possibility that Microsoft might decide to change the defaults in the future (in which case the first script would no longer generate what you wanted). However, taking a pragmatic view of how these things work, the chances of Microsoft changing these options in a future version are incredibly slim, as it would break so many databases out in the wild.
Take this as you want, but I thought it was worth a stab at explaining, as you seem to be a little fixated (maybe the wrong word :)) on the "long part" (your words) of your original query.
My solution was to use a WHERE NOT EXISTS condition, greatly inspired by this answer: https://stackoverflow.com/a/3025332
INSERT INTO `table` (`value1`, `value2`)
SELECT 'stuff for value1', 'stuff for value2' FROM DUAL
WHERE NOT EXISTS (SELECT * FROM `table`
WHERE `value1`='stuff for value1' AND `value2`='stuff for value2' LIMIT 1)
This way I won't get duplicates.
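Note that the snippet above is written in MySQL syntax (backticks, DUAL, LIMIT). A rough T-SQL equivalent for the MyTable definition from the updated question, with @value1..@value3 standing in for the incoming data, might look something like this:
DECLARE @value1 nvarchar(255) = N'stuff for Columndata1',
        @value2 nvarchar(max) = N'stuff for Columndata2',
        @value3 nvarchar(max) = N'stuff for Columndata3';
-- insert only if no row with this primary key value exists yet
INSERT INTO [dbo].[MyTable] ([Columndata1], [Columndata2], [Columndata3], [ColumndataTime])
SELECT @value1, @value2, @value3, GETUTCDATE()
WHERE NOT EXISTS (SELECT 1 FROM [dbo].[MyTable] WHERE [Columndata1] = @value1);
Since ColumndataTime has a DEFAULT of getutcdate(), that last column could also be left out of the insert entirely.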
Kudos to Richard Hansell for the explanation, which made me move my focus away from the "complicated" part.
Using version:
Microsoft SQL Server 2008 R2 (SP3-OD) (KB3144114) - 10.50.6542.0 (Intel X86)
Feb 22 2016 18:12:09
Copyright (c) Microsoft Corporation
Standard Edition on Windows NT 5.2 <X86> (Build : )
I have a heavy table (135K rows) that I moved from another DB.
It transferred with the [id] column as a plain int column instead of being the identity key and seed column.
When I try to edit that field to become an identity specification with a seed value, it errors out and gives me this error:
Execution Timeout Expired.
The timeout period elapsed prior to completion of the operation...
I even tried deleting that column, so I could recreate it later, but I get the same issue.
Thanks
UPDATE:
Table structure:
CREATE TABLE [dbo].[tblEmailsSent](
[id] [int] IDENTITY(1,1) NOT NULL, -- this is what it should be; currently it is just [int] NOT NULL
[Sent] [datetime] NULL,
[SentByUser] [nvarchar](50) NULL,
[ToEmail] [nvarchar](150) NULL,
[StudentID] [int] NULL,
[SubjectLine] [nvarchar](200) NULL,
[MessageContent] [nvarchar](max) NULL,
[ReadStatus] [bit] NULL,
[Folder] [nvarchar](50) NULL,
CONSTRAINT [PK_tblMessages] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I think your question is a duplicate of Adding an identity to an existing column. That question has an answer that should be perfect for your situation; I'll reproduce its essential part below.
But before that, let's clarify why you see the timeout error.
You are trying to add the IDENTITY property to an existing column, and you are using the SSMS GUI for it. A simple ALTER COLUMN statement can't do that, and even if it could, SSMS generates a script that creates a new table, copies the data over into the new table, drops the old table and renames the new table to the old name. When you do this operation via the SSMS GUI, it runs its scripts with a predefined timeout of 30 seconds.
Of course, you can change this setting in SSMS and increase the timeout, but there is a much better way.
Simple/lazy way
Use SSMS GUI to change the column definition, but then instead of clicking "Save", click "Generate Change Script" in the table designer.
Then save this script to a file and review the generated T-SQL code that the GUI runs behind the scenes.
You'll see that it creates a temp table with the required schema, copies the data over, re-creates foreign keys and indexes, drops the old table and renames the new table.
The script itself is usually correct, but pay close attention to the transactions in it. For some reason SSMS often doesn't use a single transaction for the whole operation, but several transactions. I'd recommend manually reviewing the script and making sure that there is only one BEGIN TRANSACTION at the top and one COMMIT at the end. You don't want to end up with a half-done operation, with, say, a table where all the indexes and foreign keys were dropped.
If it is a one-off operation, this could be enough for you. Your table is only 2.4 GB, so it may take a few minutes, but it should not take hours.
If you run the T-SQL script yourself in SSMS, then by default there is no timeout. You can stop it yourself if it takes too long.
A smart and fast way to do it is described in detail in this answer by Justin Grant.
The main idea is to use the ALTER TABLE...SWITCH statement to make the change by touching only the metadata, without touching each page of the table.
BEGIN TRANSACTION;
-- create a new table with required schema
CREATE TABLE [dbo].[NEW_tblEmailsSent](
[id] [int] IDENTITY(1,1) NOT NULL,
[Sent] [datetime] NULL,
[SentByUser] [nvarchar](50) NULL,
[ToEmail] [nvarchar](150) NULL,
[StudentID] [int] NULL,
[SubjectLine] [nvarchar](200) NULL,
[MessageContent] [nvarchar](max) NULL,
[ReadStatus] [bit] NULL,
[Folder] [nvarchar](50) NULL,
CONSTRAINT [PK_tblEmailsSent] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
-- switch the tables
ALTER TABLE [dbo].[tblEmailsSent] SWITCH TO [dbo].[NEW_tblEmailsSent];
-- drop the original (now empty) table
DROP TABLE [dbo].[tblEmailsSent];
-- rename new table to old table's name
EXEC sp_rename 'NEW_tblEmailsSent','tblEmailsSent';
COMMIT;
After the new table has the IDENTITY property, you should normally set the current identity value to the maximum of the actual values in your table. If you don't, new rows inserted into the table will start from 1.
One way to do it is to run DBCC CHECKIDENT after you have switched the tables:
DBCC CHECKIDENT('dbo.tblEmailsSent')
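If you prefer to set the seed explicitly rather than letting CHECKIDENT work it out, a small sketch along these lines (using the table and id column above) should also do it:
-- reseed so that the next inserted row gets MAX(id) + 1
DECLARE @maxId int = (SELECT ISNULL(MAX(id), 0) FROM dbo.tblEmailsSent);
DBCC CHECKIDENT ('dbo.tblEmailsSent', RESEED, @maxId);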
Alternatively, you can specify the new seed in the table definition:
CREATE TABLE [dbo].[NEW_tblEmailsSent](
[id] [int] IDENTITY(<max value of id + 1>, 1) NOT NULL,
I have a customized application in my company where I can create a place for users to input their values into a database.
The table where I am submitting the data has 5 columns; its SQL CREATE query is below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Log_Ongoing](
[ID] [int] IDENTITY(1,1) NOT NULL,
[LogType] [int] NULL,
[ActivityDate] [datetime] NOT NULL,
[ActivityDescription] [text] NULL,
[Train] [int] NULL,
CONSTRAINT [PK_Log_Ongoing] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Log_Ongoing] WITH CHECK ADD CONSTRAINT [FK_Log_Ongoing_Trains] FOREIGN KEY([Train])
REFERENCES [dbo].[Trains] ([Id])
GO
ALTER TABLE [dbo].[Log_Ongoing] CHECK CONSTRAINT [FK_Log_Ongoing_Trains]
GO
The purpose of this table is to record the ongoing activities in the plant.
The user can come back later and modify those activities, updating, adding or deleting them through the application, by choosing the report date and then modifying the data.
My thinking was that, before the user submits the data, I would first delete the old data with the same report date and then insert the new data again.
Unfortunately, the data is submitted successfully but not deleted.
I ran a SQL trace to check the queries that the application sends to the database, and I found the two statements below:
exec sp_executesql N'DELETE FROM Log_Ongoing WHERE ActivityDate = @StartDate',N'@startDate datetimeoffset(7)',@startDate='2017-02-12 07:00:00 +02:00'
exec sp_executesql N'INSERT INTO Log_Ongoing (LogType, ActivityDate, ActivityDescription, Train ) VALUES (1,@StartDate, @Activity, @Train)',N'@Train int,@Activity nvarchar(2),@startDate datetimeoffset(7)',@Train=1,@Activity=N'11',@startDate='2017-02-12 07:00:00 +02:00'
When I tested the INSERT statement in SSMS it worked fine, but when I tested the DELETE statement it didn't work. What is wrong with this query?
We've put in place the following filtered index on a table in our SQL Server 2016 database:
CREATE UNIQUE NONCLUSTERED INDEX [fix_SystemPKeyExecutionOrder] ON [DataInt].[TaskMaster]
(
[SystemPkey] ASC,
[ExecutionOrder] ASC
)
WHERE ([ExecutionOrder] IS NOT NULL)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 95)
GO
This is now causing SQL code to fail with the following error:
UPDATE failed because the following SET options have incorrect
settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for
use with indexed views and/or indexes on computed columns and/or
filtered indexes and/or query notifications and/or XML data type
methods and/or spatial index operations. [SQLSTATE 42000] (Error
1934). The step failed.
When the filtered index is removed, the code runs perfectly.
Looking on MSDN for Index Options, there's nothing about QUOTED_IDENTIFIER.
None of the UPDATE statements in our SQL code have double quotes for any of the values. The only double-quotes we can see are the following:
SET @ROWCOUNT = @@ROWCOUNT
If (@ROWCOUNT = 0)
BEGIN
RAISERROR('The "File Import" task ACTIVE_YN could not be updated to "Y". Either the task does not exist or the system "File Import To Stage" does not exist.', 16, 1)
END
ELSE
BEGIN
Print 'Successfully updated the "File Import" task ACTIVE_YN to "Y".'
END
Even if we change those double quotes " to two single quotes '', the code still fails with the same error.
The table itself was created with:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [DataInt].[TaskMaster](
[Pkey] [bigint] IDENTITY(1,1) NOT NULL,
[ScheduleMasterPkey] [int] NOT NULL,
[SystemPkey] [int] NOT NULL,
[SourcePkey] [int] NOT NULL,
[TargetPkey] [int] NOT NULL,
[TaskName] [varchar](255) NOT NULL,
[TaskTypePkey] [int] NOT NULL,
[Active_YN] [char](1) NOT NULL,
[ModifiedDate] [datetime] NULL,
[ModifiedBy] [varchar](100) NULL,
[RowVersion] [timestamp] NOT NULL,
[ExecutionOrder] [int] NULL,
CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED
(
[Pkey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 95) ON [PRIMARY],
CONSTRAINT [uc_TaskName] UNIQUE NONCLUSTERED
(
[TaskName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 95) ON [PRIMARY]
) ON [PRIMARY]
GO
Like I said though, the entirety of the code runs perfectly if we do not create the filtered index; it only fails with the index.
So why is the filtered index suddenly causing our SQL to bomb and how can we fix it?
UPDATE: here is a small snippet of code that reproduces the failure. This code is run through a SQL Agent job. When the index is removed, this code runs as expected, reporting the error that the task does not exist:
DECLARE @ROWCOUNT INT = 0
UPDATE [DataIntegrationMaster].[DataInt].[TaskMaster]
Set Active_YN = 'Y'
where TaskName = 'File Import'
and SystemPkey = 0
SET @ROWCOUNT = @@ROWCOUNT
If (@ROWCOUNT = 0)
BEGIN
RAISERROR('The "File Import" task ACTIVE_YN could not be updated to "Y". Either the task does not exist or the system "File Import To Stage" does not exist.', 16, 1)
END
ELSE
BEGIN
Print 'Successfully updated the "File Import" task ACTIVE_YN to "Y".'
END
UPDATE2 with ANSWER:
As pointed out by the helpful answers below, I had to put
SET QUOTED_IDENTIFIER ON
at the top of the SQL for it to work properly.
SET QUOTED_IDENTIFIER ON
has NO EFFECT when I use it while creating the index.
There is: SET QUOTED_IDENTIFIER (Transact-SQL)
To prevent similar issues, I would recommend checking the exact requirements for creating a filtered index: CREATE INDEX (Transact-SQL). It has a nice, neat table that shows the SET options required for a filtered index to be created.
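From memory, the options in that table (required both when creating the filtered index and for later DML against the table) are roughly the following; treat this as a checklist to verify against the current documentation rather than an authoritative list:
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;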
As pointed out in @Roger Wolf's answer, creating a filtered index requires the QUOTED_IDENTIFIER setting to be ON, which is what you did. Had you not done so, you would have been unable to create the filtered index in the first place.
However, once the index is created, it seems that any DML operation (not just updates) on that table also requires the QUOTED_IDENTIFIER setting to be ON. This is what you are currently missing, and the reason why you get the error.
I don't know what the context of your update is, whether you are running it as an ad-hoc statement or as part of a stored procedure. Either way, make sure to include the SET QUOTED_IDENTIFIER ON statement somewhere at the beginning.
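Applied to the job-step snippet from the question, that would look roughly like this (same table, task name and messages as above, with only the SET statement added at the top):
SET QUOTED_IDENTIFIER ON -- required for DML against a table that has a filtered index
GO
DECLARE @ROWCOUNT INT = 0
UPDATE [DataIntegrationMaster].[DataInt].[TaskMaster]
Set Active_YN = 'Y'
where TaskName = 'File Import'
and SystemPkey = 0
SET @ROWCOUNT = @@ROWCOUNT
If (@ROWCOUNT = 0)
BEGIN
RAISERROR('The "File Import" task ACTIVE_YN could not be updated to "Y". Either the task does not exist or the system "File Import To Stage" does not exist.', 16, 1)
END
ELSE
BEGIN
Print 'Successfully updated the "File Import" task ACTIVE_YN to "Y".'
END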
In our Azure SQL database we had a table, App_Tracking, that is/was used to track user actions. We needed to increase the size of the log buffer, so I first copied over all the records to an archive table that was defined using this SQL statement:
CREATE TABLE [dbo].[App_Tracking_Nov20_2015](
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [nvarchar](50) NOT NULL,
[App_Usage] [nvarchar](1024) NOT NULL,
[Timestamp] [datetime] NOT NULL )
Then, using SQL Server Management Studio 2012, I recreated the original table using Drop/Create script generation:
USE [tblAdmin]
GO
/****** Object: Table [dbo].[App_Tracking] Script Date: 11/21/2015 11:42:01 AM ******/
DROP TABLE [dbo].[App_Tracking]
GO
/****** Object: Table [dbo].[App_Tracking] Script Date: 11/21/2015 11:42:01 AM ******/
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[App_Tracking](
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [nvarchar](50) NOT NULL,
[App_Usage] [nvarchar](4000) NOT NULL,
[Timestamp] [datetime] NOT NULL,
CONSTRAINT [PrimaryKey_7c88841f-aaaa-bbbb-cccc-c26fe6a5720e] PRIMARY KEY CLUSTERED (
[ID] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) )
GO
This is the automated drop/create script that SSMS 2012 creates for you.
I then updated statistics on App_Admin using EXEC sp_updatestats.
The gotcha is that I can no longer programmatically add records to this table.
If I open App_Admin from manage.windowsazure.net and "open in Visual Studio", I can manually add a record to it. But if in SSMS 2012 I run the code:
USE [tblAdmin]
GO
UPDATE [dbo].[App_Tracking] SET
[UserID] = 'e146ba22-930c-4b22-ac3c-15da47722e75' ,
[App_Usage] = 'search search: Bad Keyword: asdfadsfs' ,
[Timestamp] = '2015-11-20 20:00:18.700'
GO
nothing gets updated, but no error is thrown.
If, programmatically, I use
var adminContext = new App_AdminEntities();
string prunedAction = action.Length <= 4000 ? action : action.Trim().Substring(0, 4000); // ensure we don't fault on overflow of too long a keyword list
var appTracking = new App_Tracking
{
UserID = userId,
App_Usage = prunedAction,
Timestamp = DateTime.Now
};
try {
adminContext.App_Tracking.Add(appTracking);
adminContext.SaveChanges();
adminContext.Dispose();
}
I get an error thrown on SaveChanges (the .NET call that writes to the SQL database). What did I do wrong?
OK, so I found the problem. It turns out I had not updated the associated EDMX file, and thus the error was being thrown by internal entity validation, which is kind of hidden under the covers.