I'm running SQL Server Management Studio 2008 against a SQL Server 2005 back-end. SSMS just exhibited a behavior I have never seen before. I don't know if this is something new in SSMS 2008 or just a function of something else.
Basically, what happened is that I added some new columns to an existing table. After adding those columns, I executed "Script Table as... CREATE" within the IDE on the table. I expected to get a single CREATE TABLE statement with all the columns, previous and new. However, the generated code was the CREATE statement for the original definition of the table, plus an individual ALTER TABLE T ADD [Column]... statement for each of the new columns.
This isn't a problem (and actually could be useful from a change-management point of view ... sorta), but it is behavior I've never seen before.
I thought this might have to do with row length, but the length comes in under the 8,000-byte limit before the table page gets split (forgive my terminology ... I'm a developer, not a DBA). Granted, it's not a small table (127 columns now with the additions, and a row length of a little over 7,000 bytes).
What am I seeing? Is this a feature/function of SSMS or SQL Server itself? Is this a side effect of the large table definition?
The following sample does not repeat the behavior, but it illustrates (simplified) what I'm seeing:
CREATE TABLE dbo.Table_1
(
ID int NOT NULL,
title nvarchar(50) NOT NULL
)
Then,
ALTER TABLE dbo.Table_1 ADD
[description] [varchar](50) NULL,
[numthing] [nchar](10) NULL
I expected to have this generated:
CREATE TABLE [dbo].[Table_1](
[ID] [int] NOT NULL,
[title] [nvarchar](50) NOT NULL,
[description] [varchar](50) NULL,
[numthing] [nchar](10) NULL,
However, this was generated:
CREATE TABLE [dbo].[Table_1](
[ID] [int] NOT NULL,
[title] [nvarchar](50) NOT NULL)
ALTER TABLE [dbo].[Table_1] ADD [description] [varchar](50) NULL
ALTER TABLE [dbo].[Table_1] ADD [numthing] [nchar](10) NULL
I suspect that you "cleaned up" the SQL in your post since it would normally contain many of other SET and GO statements. I am assuming that you removed the "SET ANSI_PADDING" statements.
Some columns in the table may have ANSI_PADDING set to ON while others are OFF. The ANSI_PADDING option affects all columns created after it was set Since the columns are going to be created in table order, the the ANSI_PADDING option will need to be used a few times depending on the table. The real problem is that MS SQL Server cannot set the ANSI_PADDING option within the CREATE TABLE statement. So it needs to do some of that work after the initial CREATE by using ALTER TABLE statements after the appropriate SET ANSI_PADDING statement.
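Something like the following illustrates the kind of script this produces (a hypothetical sketch based on your sample table; the actual SET statements depend on the setting each column was originally created under):
-- Hypothetical: [description] was created while ANSI_PADDING was OFF,
-- so the scripter has to toggle the option between statements.
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Table_1](
    [ID] [int] NOT NULL,
    [title] [nvarchar](50) NOT NULL
)
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[Table_1] ADD [description] [varchar](50) NULL
GO
SET ANSI_PADDING ON
GO
ALTER TABLE [dbo].[Table_1] ADD [numthing] [nchar](10) NULL
GO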
See: http://kevine323.blogspot.com/2011/03/ansipadding-and-scripting-tables-from.html
I believe you're seeing Microsoft re-using two sections of SQL generation code.
I'm guessing you hadn't saved the changes when you clicked to generate the CREATE script, so the generation code doesn't have an up-to-date version of the table to work from. Instead, it runs the normal generation code on the old table, and then the code behind the "Script changes" option to bring it up to date.
Clever code reuse with peculiar results.
I am attempting to create a backup table without having to re-create it every single time. If the table already exists on the next run, it should simply be truncated.
But it doesn't seem to be working: it says backup_reportsettings is already in the database. Can anyone assist me with this?
--Only re-create table if table does not exist otherwise truncate the existing table.
IF NOT EXISTS (SELECT * FROM [Misc].sys.tables where name= 'dbo.backup_reportsettings')
CREATE TABLE [MISC].dbo.backup_reportsettings
(
[datestamp] [datetime] NULL,
[reportsettingid] [char](8) NOT NULL,
[description] [char](30) NOT NULL,
[formname] [char](30) NOT NULL,
[usersid] [char](8) NOT NULL,
[settings] [text] NOT NULL,
[notes] [varchar](255) NOT NULL,
[userdefault] [char](1) NOT NULL
)
ELSE
TRUNCATE TABLE [Misc].dbo.backup_reportsettings;
What am I doing wrong? Note: this is done within a transaction.
Object names in sys.tables don't have the schema as part of the name. Remove the table schema when verifying whether the table exists:
IF NOT EXISTS (SELECT * FROM [Misc].sys.tables where name= 'backup_reportsettings')
Despite the use of IF, SQL Server needs to parse/compile all the statements in your script, so when it sees a CREATE TABLE statement it will give you a compilation error if the table already exists, even though the IF would prevent that code from being executed in that case.
The way to get around this is to put your CREATE TABLE statement in dynamic SQL, which will not be parsed/compiled before execution.
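For example, a minimal sketch of that workaround, using the corrected sys.tables check from above (untested; the point is that the CREATE TABLE text inside EXEC is not compiled as part of the outer batch):
IF NOT EXISTS (SELECT * FROM [Misc].sys.tables WHERE name = 'backup_reportsettings')
    -- The CREATE TABLE lives in a string, so it is only parsed when executed
    EXEC('CREATE TABLE [Misc].dbo.backup_reportsettings
    (
        [datestamp] [datetime] NULL,
        [reportsettingid] [char](8) NOT NULL,
        [description] [char](30) NOT NULL,
        [formname] [char](30) NOT NULL,
        [usersid] [char](8) NOT NULL,
        [settings] [text] NOT NULL,
        [notes] [varchar](255) NOT NULL,
        [userdefault] [char](1) NOT NULL
    )');
ELSE
    TRUNCATE TABLE [Misc].dbo.backup_reportsettings;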
I need to write a SQL view over a series of tables in a database. The problem is that each table only contains one month of history, and each month a new table is created. For example, dbo.LOG_2015_09 would be September's table.
I need to write a view that shows me the last 60 days of history.
SELECT * FROM dbo.LOG_2015_09
UNION ALL
SELECT * FROM dbo.LOG_2015_08
The problem is that next month this will not be valid anymore.
I am limited to using SQL Views. Stored procedures are not an option.
One thought I had was to create a Table Function to get the relevant tables but I don't think we can use dynamic SQL to generate the code.
Thank you for any help.
EDIT: This is a sample of the table definition created by the application. I do not have any access to modify the table; I can only create views or functions:
CREATE TABLE [dbo].[CLOG201509](
[LASTUPD] [datetime] NULL,
[CREDATE] [datetime] NULL,
[SERIALNO] [int] NOT NULL,
[LSEQNO] [int] NOT NULL,
[EVENTNO] [int] NOT NULL,
[EVDATE] [datetime] NOT NULL,
[LOGDATE] [datetime] NOT NULL,
CONSTRAINT [PK__CLOG2015__D08461DA672EF3E9] PRIMARY KEY CLUSTERED
([SERIALNO] ASC,
[EVENTNO] ASC,
[LSEQNO] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
These tables contain over a million records each.
EDIT: I have attempted to create a table UDF to combine the tables, but without being able to use dynamic SQL I don't know how to make it change tables based on the date. I would want this month's table and last month's table joined together to reference. The problem is that I have to remember to update this UDF each month, and until I do, the data is unavailable.
CREATE FUNCTION [dbo].[KS_ManitouSync_OPT_CLOG]()
RETURNS TABLE
AS
RETURN
(
SELECT *
FROM CLOG201511
UNION ALL
SELECT *
FROM CLOG201512
)
As a reminder, I cannot use stored procs; it must be usable by a program that supports only a single SQL SELECT query.
Thanks for any help.
You could create a partitioned view which unions all the tables, but each table must have a constraint to limit the time interval allowed in it. In this case, when you select from the partitioned view, using a WHERE clause on the partitioning column, only the relevant tables will be accessed.
See https://technet.microsoft.com/en-us/library/ms190019(v=sql.105).aspx
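A rough sketch of the idea against your tables (constraint and view names are made up, and it assumes you are allowed to add CHECK constraints on LOGDATE, which may not be the case given your access restrictions):
-- Each monthly table needs a trusted CHECK constraint on the partitioning column
ALTER TABLE dbo.CLOG201509 ADD CONSTRAINT CK_CLOG201509_LOGDATE
    CHECK (LOGDATE >= '20150901' AND LOGDATE < '20151001');
ALTER TABLE dbo.CLOG201508 ADD CONSTRAINT CK_CLOG201508_LOGDATE
    CHECK (LOGDATE >= '20150801' AND LOGDATE < '20150901');
GO
-- The view simply unions the monthly tables
CREATE VIEW dbo.CLOG_ALL
AS
SELECT * FROM dbo.CLOG201509
UNION ALL
SELECT * FROM dbo.CLOG201508;
GO
-- With a WHERE clause on LOGDATE, only the relevant monthly tables are touched
SELECT * FROM dbo.CLOG_ALL
WHERE LOGDATE >= DATEADD(DAY, -60, GETDATE());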
Create your view via a stored procedure. There you can figure out which log tables exist and then build your CREATE VIEW command dynamically.
Have a look at a quite similar question I had in the past:
Creating View with dynamic columns by stored procedure
If you really must use a view (you cannot use a stored procedure) and you cannot update the view automatically (using a SQL Server Agent job or a DDL trigger), you could use dynamic SQL in a view via OPENROWSET, but there are a lot of caveats; see http://www.sommarskog.se/share_data.html#OPENQUERY
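For completeness, the OPENROWSET variant looks roughly like this (a sketch only: it assumes the 'Ad Hoc Distributed Queries' option is enabled, and the names YourDatabase and dbo.usp_RecentLog are made up; that procedure would build the UNION over the current monthly tables in dynamic SQL, while the client only ever selects from the view):
CREATE VIEW dbo.RecentLog
AS
SELECT *
FROM OPENROWSET('SQLNCLI',
    'Server=(local);Trusted_Connection=yes;',
    -- Loopback call into a proc that assembles the monthly UNION dynamically
    'EXEC YourDatabase.dbo.usp_RecentLog') AS L;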
I have a table
CREATE TABLE [misc]
(
[misc_id] [int] NOT NULL,
[misc_group] [nvarchar](255) NOT NULL,
[misc_desc] [nvarchar](255) NOT NULL
)
where misc_id [int] NOT NULL should have been IDENTITY(1,1) but is not, and now I'm having issues with a simple form that inserts into this table, since misc_id expects a number that a user would not know unless they had access to the database.
I know one option would be to create another column, make it IDENTITY(1,1), and copy the data over.
Is there another way I can get around this?
INSERT INTO misc (misc_group, misc_desc)
VALUES ('#misc_group#', '#misc_desc#')
I have SQL Server 2012
You should re-create your table with the desired identity column. The following statements will get you close. SQL Server will automatically adjust the table's identity seed to MAX(misc_id) + 1 as you migrate the data.
You'll obviously need to stop trying to insert misc_id with new records. You'll want to retrieve the SCOPE_IDENTITY() value after inserting records.
-- Note: I'd recommend having SSMS generate your base create statement so you know you didn't miss anything. You'll have to export the indexes and foreign keys as well. Add them after populating data to improve performance and reduce fragmentation.
CREATE TABLE [misc_new]
(
[misc_id] [int] NOT NULL IDENTITY(1,1),
[misc_group] [nvarchar](255) NOT NULL,
[misc_desc] [nvarchar](255) NOT NULL
-- TODO: Don't forget the primary key. It can be added later, but that's not recommended.
)
GO
SET IDENTITY_INSERT misc_new ON;
INSERT INTO misc_new
(
[misc_id],
[misc_group],
[misc_desc]
)
SELECT
[misc_id],
[misc_group],
[misc_desc]
FROM misc
ORDER BY misc_id;
SET IDENTITY_INSERT misc_new OFF;
GO
EXEC sp_rename 'misc', 'misc_old';
EXEC sp_rename 'misc_new', 'misc';
GO
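Once the swap is done, new inserts omit misc_id entirely and read back the generated value, along these lines (placeholder values as in your question):
INSERT INTO misc (misc_group, misc_desc)
VALUES ('#misc_group#', '#misc_desc#');

-- Returns the identity value generated by the insert above
SELECT SCOPE_IDENTITY() AS new_misc_id;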
If altering the table is not an option, you can try having a separate table holding the latest [misc_id] value, so that whenever you insert a new record into the table, you retrieve this value, add 1, and use it as your new id. Just don't forget to update the helper table afterwards.
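A rough sketch of that approach (the helper table misc_ids and its column last_id are made-up names, and the table is assumed to hold a single row; the UPDLOCK/HOLDLOCK hints keep two concurrent inserts from reading the same value):
BEGIN TRANSACTION;

DECLARE @new_id int;

-- Read the latest value, blocking other writers until we commit
SELECT @new_id = last_id + 1
FROM dbo.misc_ids WITH (UPDLOCK, HOLDLOCK);

INSERT INTO dbo.misc (misc_id, misc_group, misc_desc)
VALUES (@new_id, '#misc_group#', '#misc_desc#');

-- Don't forget to record the value we just used
UPDATE dbo.misc_ids SET last_id = @new_id;

COMMIT TRANSACTION;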
Changing an int column to an identity can cause problems, because by default you cannot insert a value into an identity column without turning the IDENTITY_INSERT option on. So if you have existing code that inserts a value into the identity column, it will fail. However, it's much easier to let SQL Server insert the values (that is, change it to an identity column), so I would change misc_id into an identity column and make sure that no programs insert values into misc_id.
In MSSQL 2012 you can use SEQUENCE objects:
CREATE SEQUENCE [dbo].[TestSequence]
AS [BIGINT]
START WITH 1
INCREMENT BY 1
GO
Replace the 1 in START WITH 1 with the MAX value of [misc_id] + 1.
Usage:
INSERT INTO misc (misc_id, misc_group, misc_desc)
VALUES (NEXT VALUE FOR TestSequence, '#misc_group#','#misc_desc#')
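If you'd rather not touch the INSERT statements at all, the sequence can also be bound to the column as a default (a sketch; the constraint name is arbitrary):
-- Bind the sequence as the column default
ALTER TABLE misc
    ADD CONSTRAINT DF_misc_id DEFAULT (NEXT VALUE FOR TestSequence) FOR misc_id;

-- Existing inserts can then continue to omit misc_id:
INSERT INTO misc (misc_group, misc_desc)
VALUES ('#misc_group#', '#misc_desc#');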
I've been struggling with this for a while now. I'm trying to define my table so that a specific column uses data sent in and stored in other columns to look up data in a separate table, and then stores the result in said column.
I've tried using stored procedures, but it wouldn't allow me to call the procedure in the following manner:
[to] AS EXECUTE procName #parameter1 = [info],
#parameter2 = [info2],
#outputparameter = #output OUTPUT
I've also tried to call functions, but found that you couldn't use SELECT statements to search through a table inside a function. (<-- Not sure if this is entirely accurate, but from my extensive searching this is the conclusion I've come to. Feel free to correct me on it, or just clarify whether it's true or false.)
After that, I attempted to use a function calling a stored procedure: doing the search of the other table in the procedure, passing the result back to the function, which in turn passed it back to the column. This seemed to be okay until I executed my query, at which point it blew up and told me that I can't have an EXECUTE statement within a function.
If you guys can shed any light on any of these issues and ideally a solution to my problem, I'd appreciate the help.
EDIT:
Table Users SAMPLE (Everything here works.)
CREATE TABLE [dbo].Users
(
[userId] [int] NOT NULL IDENTITY,
[firstname] [nvarchar](50) NOT NULL,
[lastname] [nvarchar](50) NOT NULL,
[dateOfBirth] [date] NOT NULL,
[age] AS [dbo].getAge([dateOfBirth])
)
Table Messages SAMPLE (Herein lies the problem)
CREATE TABLE [dbo].Messages
(
[messageId] [int] NOT NULL IDENTITY,
[fromFirstName] [nvarchar](50) NOT NULL,
[fromLastName] [nvarchar](50) NOT NULL,
[from] [int], --This needs to search for data within the Users table using the [fromFirstName] and [fromLastName] to get the userId for the specified user.
[toFirstName] [nvarchar](50) NOT NULL,
[toLastName] [nvarchar](50) NOT NULL,
[to] [int], --This needs to search for data within the Users table using the [toFirstName] and [toLastName] to get the userId for the specified user.
[content] [nvarchar](999) NOT NULL,
[dateSent] [date] DEFAULT GETDATE()
)
I tried the following:
1. Stored procedure - unable to call the procedure as shown in the original post.
2. User-defined function - unable to use a SELECT statement to get the data from the Users table.
3. User-defined function calling a stored procedure - unable to use an EXECUTE statement within a function.
Per this post, you can use a UDF to fill in a computed column, and a UDF can access other tables. I have not tested this functionality; perhaps this example will help expand upon your previous attempts. A related question seems to be using this technique: SQL Creating UDF Computed columns
Quote:
Accessing a column outside of the computed column table
A computed column can not directly access any column outside its
table. This limitation may be overcome by using a User Defined
Function. A UDF may be used in the expression to access any column
outside the computed column table.
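Applied to your tables, the technique would look something like this (an untested sketch; the function name dbo.getUserId is made up):
-- Scalar UDF that looks up a user's id by name in the Users table
CREATE FUNCTION dbo.getUserId (@first nvarchar(50), @last nvarchar(50))
RETURNS int
AS
BEGIN
    RETURN (SELECT userId
            FROM dbo.Users
            WHERE firstname = @first AND lastname = @last);
END
GO

-- [from] and [to] would then become computed columns in the Messages definition:
-- [from] AS [dbo].getUserId([fromFirstName], [fromLastName]),
-- [to] AS [dbo].getUserId([toFirstName], [toLastName]),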
Is it possible to consolidate the history of all the tables into a single table?
I tried to use the CDC feature provided by SQL Server 2012 Enterprise Edition, but for that it creates a copy of every table, which increases the number of tables in the database.
Is it also possible to track and insert the table name and column name in which the DML occurred into the history table? Will this cause any performance issues?
Here is one solution using triggers.
1 - Create a trigger for each table that you want history on.
2 - Copy the modified data (INS, UPD, DEL) from base table to audit table during the action.
3 - Store all the data in XML format so that multiple tables can store data in the same audit table.
I covered this in one of my blog articles. It is a great solution for auditing small amounts of data. There might be an overhead concern when dealing with thousands of record changes per second.
Please test before deploying to a production environment!
Here is the audit table that keeps track of the table name as well as the type of change.
/*
Create data level auditing - table.
*/
-- Remove table if it exists
IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[ADT].[LOG_DML_CHANGES]') AND type IN (N'U'))
    DROP TABLE [ADT].[LOG_DML_CHANGES]
GO
CREATE TABLE [ADT].[LOG_DML_CHANGES]
(
[ChangeId] [bigint] IDENTITY(1,1) NOT NULL,
[ChangeDate] [datetime] NOT NULL,
[ChangeType] [varchar](20) NOT NULL,
[ChangeBy] [nvarchar](256) NOT NULL,
[AppName] [nvarchar](128) NOT NULL,
[HostName] [nvarchar](128) NOT NULL,
[SchemaName] [sysname] NOT NULL,
[ObjectName] [sysname] NOT NULL,
[XmlRecSet] [xml] NULL,
CONSTRAINT [pk_Ltc_ChangeId] PRIMARY KEY CLUSTERED ([ChangeId] ASC)
)
GO
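A stripped-down sketch of one such trigger, here for a hypothetical table [dbo].[Customers] (the article below has the full implementation):
CREATE TRIGGER [dbo].[TRG_TRACK_DML_CHGS_CUSTOMERS]
ON [dbo].[Customers]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Nothing to do if the statement touched no rows
    IF NOT EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
        RETURN;

    -- Both pseudo-tables populated means an UPDATE
    DECLARE @ChangeType varchar(20) =
        CASE
            WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'UPDATE'
            WHEN EXISTS (SELECT 1 FROM inserted) THEN 'INSERT'
            ELSE 'DELETE'
        END;

    -- Capture the before/after images as XML so any table fits the same audit table
    INSERT INTO [ADT].[LOG_DML_CHANGES]
        ([ChangeDate], [ChangeType], [ChangeBy], [AppName], [HostName],
         [SchemaName], [ObjectName], [XmlRecSet])
    VALUES
        (GETDATE(), @ChangeType, SUSER_SNAME(), APP_NAME(), HOST_NAME(),
         N'dbo', N'Customers',
         (SELECT
             (SELECT * FROM inserted FOR XML RAW('row'), ROOT('inserted'), TYPE),
             (SELECT * FROM deleted  FOR XML RAW('row'), ROOT('deleted'),  TYPE)
          FOR XML PATH('change'), TYPE));
END
GO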
Here is the article.
http://craftydba.com/?p=2060
The image below shows a single [LOG_DML_CHANGES] table with multiple [TRG_TRACK_DML_CHGS_XXX] triggers.
If you want to record more than the fact that user x updated/deleted/inserted id y in table z at time t, then it will cause problems.
Choose the tables you want to audit, create audit tables for them, and update those from triggers on the base tables. It's a lot of work, but it's the best way of doing it.