TFS 2015 Update 1 – TF255430: the database was partially upgraded during a failed upgrade

TF255356: The following error occurred when configuring the Team Foundation databases:
TF400711: Error occurred while executing servicing step 'Upgrade Process Template Description column' for component FrameworkToDev14M85 during ToDev14M85: 2 error(s) occurred while executing upd_ProcessTemplateToDev14M85.sql script.
Failed batch starts on line: 6.
Error: 5074, Level: 16, State: 1, Batch Line: 6, Script Line: 11
Message: The statistics 'Description' is dependent on column 'Description'.
Error: 4922, Level: 16, State: 9, Batch Line: 6, Script Line: 11
Message: ALTER TABLE ALTER COLUMN Description failed because one or more objects access this column.
================ Failed batch begin ================
--small table, so no need to batch
UPDATE tbl_ProcessTemplateDescriptor
SET Description = LEFT(Description, 1024)
--no race condition as binaries aren't allowing people to save > 1024 length templates
ALTER TABLE tbl_ProcessTemplateDescriptor
ALTER COLUMN Description NVARCHAR(1024)
================ Failed batch end ================
You get this error while upgrading Team Foundation Server 2012 to Update 1, together with "TF254027: You must correct all errors before you continue", "TF255375: The configuration database that you specified cannot be used", and "TF255430: The database was partially upgraded during a failed upgrade".

The error looks confusing, but the cause is simple enough: the ALTER TABLE statement fails because statistics had been generated on the tbl_ProcessTemplateDescriptor table. To generate the DROP STATISTICS statements for them, run the following query in the Tfs_Configuration database:
SELECT 'DROP STATISTICS ' + SCHEMA_NAME(d.schema_id) + '.[' + OBJECT_NAME(a.object_id) + '].[' + a.name + ']'
FROM sys.stats a
INNER JOIN sys.objects d
ON d.object_id = a.object_id
WHERE auto_created = 0
AND user_created = 1
Steps to fix it:
1. Restore the Tfs_Configuration database from the backup taken before Update 1 was installed.
2. In another query window against the restored database, execute the DROP STATISTICS statements returned by the query above (this drops all of those statistics).
3. Rerun the upgrade wizard for Update 1.
4. The upgrade should now succeed.
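Step 2 can also be automated in a single batch. A minimal sketch, assuming the restored Tfs_Configuration database is the current database, that builds all of the DROP STATISTICS statements into one string and executes it:

```sql
-- Concatenate a DROP STATISTICS statement for every user-created statistic,
-- review the generated script, then run it.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'DROP STATISTICS ' + SCHEMA_NAME(d.schema_id)
    + N'.[' + OBJECT_NAME(a.object_id) + N'].[' + a.name + N'];' + NCHAR(13)
FROM sys.stats a
INNER JOIN sys.objects d
    ON d.object_id = a.object_id
WHERE a.auto_created = 0
  AND a.user_created = 1;

PRINT @sql;            -- inspect before executing
EXEC sp_executesql @sql;
```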

Exclude Query Output SQL Server Failed to initialize sqlcmd library with error number -2147467259

I determined that having @exclude_query_output = 0 is the failing piece in the SQL code, but I have no idea how to correct it.
I've been working on correcting this issue for quite a while (months) when sending Database Mail from a SQL Agent job. I created a SQL job that executes a stored procedure which queries another server's database and tables. The results are stored in a variable and emailed to users daily.
Here is my setup:
Linked server A to linked server B using a SQL user named 'xxxdba'
Server type SQL Server in Linked Server Properties
On both servers I'm using "Be made using this security context"
The user has the database role "DatabaseMailUserRole"
Job owner is NT Service\SQLSERVERAGENT
The job is run as xxxdba in the job step's Advanced area
On both servers A & B the user has read/write user mappings to the tables in the query, which I have supplied below. Under the database itself, the user has EXECUTE on the stored procedure that I created.
Job step properties:
Step name
Type T-SQL
Run As (Blank)
Database (Win)
Command: Execute dbo.procedure_db_mail
Procedure that was created:
CREATE PROCEDURE PROCEDURE_DB_MAIL
AS
DECLARE @Qry varchar(5000) = 'EXEC XXXXXX.WIN.DBO.[Table_Roll_View]' --VIEW
SET @Qry = N'SELECT [Previous Day Table Roll Information] --Stored Procedure
From MSDB.DBO.sysjobs'
EXEC msdb.dbo.sp_send_dbmail
@recipients = 'Jwendt@xxxxxxx.com;',
@query = @Qry,
@query_result_header = 1,
@exclude_query_output = 0,
@execute_query_database = 'Win',
@subject = 'Table Roll Data',
@attach_query_result_as_file = 1,
@query_attachment_filename = 'Table Roll Data.txt',
@append_query_error = 1,
@profile_name = 'SQL Mail Alert'
View Information:
CREATE VIEW Table_Roll_View
AS
SELECT DISTINCT a.Timestamp, a.DocGamingDate,
a.Document_ID, a.DocType,
t1.InternalStatus, a.DocStatus, a.DocNumber, b.Name, t1.Description, T1.XXXXGameName, b.Location_ID, T1.PitID, T1.TableID
From OPENQUERY([XXXXITDB], 'Select DISTINCT b.PitID, b.TableID, b.Description, a.GameCode, b.XXXXGameName, b.XXXXTableName, b.InternalStatus,
b.ExternalGameDay, b.InternalGameDay
From [XXXXIT].[dbo].[CurrentTableConfig] a
INNER JOIN [XXXXIT].[dbo].[Tables] b
on a.SAKEY = b.TableSAKEY'
) t1
INNER JOIN [Win].[dbo].[XX_Location] b
on b.Name = t1.XXXXTableName
INNER JOIN [Win].[dbo].[XX_Document] a
ON a.Location_ID = b.location_ID
Where a.DocGamingDate = CAST(GETDATE() -1 AS Date) --Looking at yesterday's table processes
--OR a.DocGamingDate = GETDATE() --Including today's table openers
and t1.description NOT LIKE '%zz%'
Execute the job (I'm a sys admin on all servers)
SQL History Message:
Executed as user: xxxdba. Failed to initialize sqlcmd library with error number -2147467259. [SQLSTATE 42000] (Error 22050). The step failed.
Windows Application Log:
SQL Server Scheduled Job 'Previous Day Table Roll Information' (0x74F50053DC25DD439C6438B586359845) - Status: Failed - Invoked on: 2021-02-24 10:19:46 - Message: The job failed. The Job was invoked by User XXXXXXXXXXX\jwendt. The last step to run was step 1 (Execute and email).
Additional notes:
I made the linked server credential a system admin for testing purposes and got the same error.
I get the same result if I execute the SP by itself outside of the job
If I run as myself, the results are the same.
Shared Memory, Named Pipes and TCP/IP are enabled on both servers
Data Access, RPC, RPC Out, Use Remote Collation and Enable Promotion of Distributed Transaction is set to True in the linked server properties
I have the same issue on another server, and here is where it gets VERY confusing. I have the same setup and permissions for the user above. The difference is that an email goes out pre-load and another post-load.
Management gets an email for a point multiplier that customers earn playing Bingo on Sundays. The pre-load email works correctly. Once the post-load time has passed, the email fails. If the local server time is 10 AM, management would already have received their email at 7 AM. If the post-load time is set to 11 AM and the local server time is >= 11 AM, the job fails with the sqlcmd library error.
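One way to narrow a failure like this down (a debugging sketch, not part of the question) is to call sp_send_dbmail directly in a query window under the same login, with the linked-server and job-specific parts stripped out. If this minimal call succeeds, the problem lies in the job or linked-server context rather than in Database Mail itself. The profile and recipient names below are the ones from the question:

```sql
-- Minimal Database Mail smoke test: no linked servers, no Agent job context.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'SQL Mail Alert',
    @recipients   = 'Jwendt@xxxxxxx.com',
    @subject      = 'sqlcmd library test',
    @query        = N'SELECT TOP (1) name FROM msdb.dbo.sysjobs';
```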

ANSI_PADDING and Partition Switching

I am in the process of migrating several on-premises SQL Server 2008 R2 Enterprise Edition instances into AWS. The new SQL Servers are 2017 Enterprise Edition hosted on EC2 instances.
Currently, in our SQL Server 2008 R2, some of the larger tables are partitioned (by date). I have a partition management script that, on the first of every month, creates a new partition and switches out the oldest partition. This works fine. However, a lot of the columns in the partitioned tables were created with ANSI_PADDING OFF. When the partition management process runs, it creates a staging table, then iteratively adds new columns to the staging table, setting ANSI_PADDING according to the source table's column setting. This staging table is subsequently used in the partition switch. Code snippet follows:
SELECT @AnsiPadding = is_ansi_padded
FROM sys.columns
WHERE object_id = OBJECT_ID(@SourceTable) AND [name] = @columnname

IF @AnsiPadding = 1
BEGIN
SET @CreateTablestmt = 'SET ANSI_NULLS ON SET QUOTED_IDENTIFIER ON SET ANSI_PADDING ON ALTER TABLE ' + @Newtablename + ' ADD [' + @columnname + '] ' + @DatatypeFormat + ' ' + @nullvalue
PRINT @CreateTablestmt
EXEC (@CreateTablestmt)
END
ELSE
BEGIN
SET @CreateTablestmt = 'SET ANSI_PADDING OFF ALTER TABLE ' + @Newtablename + ' ADD [' + @columnname + '] ' + @DatatypeFormat + ' ' + @nullvalue
PRINT @CreateTablestmt
EXEC (@CreateTablestmt)
END
I have just been testing this same procedure in SQL Server 2017 EE and it fails.
I receive the following error on creation of the Staging table:
Msg 50000, Level 16, State 1, Line 533
CONDITIONAL failed because the following SET options have incorrect settings: 'ANSI_PADDING'.
Verify that SET options are correct for use with indexed views and/or indexes on
computed columns and/or filtered indexes and/or query notifications
and/or XML data type methods and/or spatial index operations.
Simple reproduction on Microsoft SQL Server 2017 (RTM-CU17) (KB4515579) - 14.0.3238.1 (X64) Sep 13 2019 15:49:57 Copyright (C) 2017 Microsoft Corporation Enterprise Edition (64-bit) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor)
SET ANSI_PADDING ON;
CREATE TABLE testANSI (col1 VARCHAR(15))
GO
SET ANSI_PADDING OFF;
ALTER TABLE testANSI ADD col2 VARCHAR(20)
yields:
Msg 1934, Level 16, State 1, Procedure DDLEventLogging, Line 13 [Batch Start Line 4]
SELECT failed because the following SET options have incorrect settings: 'ANSI_PADDING'.
The partitioned table has a multitude of data types in it; UNIQUEIDENTIFIER, SMALLINT, MONEY, CHAR, VARCHAR.
I can't change the definition of the staging table, because then the schemas don't match and I get errors such as:
ALTER TABLE SWITCH statement failed because column
'COLUMNA' does not have the same ANSI trimming semantics
in tables 'DB.SCHEMA.MAINTABLE' and 'DB.SCHEMA.STAGINGTABLE'.
Rock and a hard place.
Other than creating a new table with ANSI_PADDING ON, and pumping all my data into it, do I have any other options?
Many thanks
Apologies. Unknown to me, there is a DDL trigger on the database that uses EVENTDATA() to capture DDL events. That trigger, rather than the partition management script, is the source of the error.
As Dan suggested, I added SET ANSI_PADDING ON; in the trigger, and the partition maintenance job now runs to completion.
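The fix can be sketched as follows. This is a hypothetical reconstruction of such a trigger (the name DDLEventLogging comes from the error message; the dbo.DDLLog table and its columns are assumptions). EVENTDATA() returns xml, and XML data type methods are among the features that require ANSI_PADDING ON, which is why the explicit SET inside the trigger resolves the error:

```sql
-- Hypothetical logging trigger, with the SET ANSI_PADDING fix applied.
CREATE OR ALTER TRIGGER DDLEventLogging
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- Force the required SET option inside the trigger, regardless of
    -- what the session that fired the DDL event had in effect.
    SET ANSI_PADDING ON;

    DECLARE @evt xml = EVENTDATA();

    INSERT INTO dbo.DDLLog (EventTime, EventType, CommandText)
    VALUES (
        SYSDATETIME(),
        @evt.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
```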

SQL Code Evaluation stopping a valid transaction

As part of my work for my current company, I need to create some database upgrade scripts to replace some work by a previous contractor.
The code that runs before the following block creates the new ID column; this script then populates the values and drops some columns.
IF EXISTS (
SELECT *
FROM sys.columns
WHERE object_id = OBJECT_ID(N'[Central].[Core.Report].[ReportLessonComp]')
AND name = 'Name')
and
EXISTS (
SELECT *
FROM sys.columns
WHERE object_id = OBJECT_ID(N'[Central].[Core.Report].[ReportLessonComp]')
AND name = 'Code')
BEGIN
UPDATE
[Central].[Core.Report].[ReportLessonComp]
SET
CompetencyId = rc.Id
FROM
[Central].[Core.Report].[ReportLessonComp] rlc
INNER JOIN
[Core.Lookup].ReportCompetency rc
ON
rc.Code = rlc.Code and rc.Name = rlc.Name
ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN CODE
ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN [Name]
ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN [Description]
END
GO
When I run the IF EXISTS / NOT EXISTS checks followed by a SELECT GETDATE(), this works perfectly fine and gives me the result I expect.
However, when I run the code block above, I get the error:
Msg 207, Level 16, State 1, Line 23
Invalid column name 'Code'.
Msg 207, Level 16, State 1, Line 23
Invalid column name 'Name'.
This script is part of a larger upgrade script and is used in a system called RoundHouse (https://github.com/chucknorris/roundhouse), which is the system chosen by the company.
Prior to the above IF EXISTS check, I also tried:
IF (SELECT COUNT(1) FROM sys.columns
WHERE object_id = OBJECT_ID('[Central].[Core.Report].[ReportLessonComp]')
AND name IN ('Name','Code')) = 2
which gave the same issue. I have five tables that I need to update, and this is going to stop the team from working if I can't resolve this by my next PR.
What can I do to stop this from causing the upgrade scripts to fail?
EDIT -- The reason I am also joining on varchar fields is that the previous developer did not create relationships between tables; strings were inserted into tables rather than related by ID, creating the potential for unlinked/inconsistent data.
The table edit prior to this creates the new ID column, and this script populates its values and drops the columns that are no longer needed.
SQL Server parses the whole batch prior to execution, so the EXISTS check does not protect you from the UPDATE being parsed. If the columns have already been dropped, that makes the statement invalid and you get a parse error. The UPDATE statement would have to be executed as dynamic SQL, basically via sp_executesql, so that the text of the UPDATE is not parsed until it actually runs.
For SQL Server 2016 and above the drop column can be protected a bit more as well:
ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN IF EXISTS CODE
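The answer above can be sketched against the question's own tables. The UPDATE goes into a string executed with sp_executesql, so it is only compiled once the guard has confirmed the columns still exist; the DROP COLUMN IF EXISTS statements can stay inline:

```sql
IF EXISTS (SELECT * FROM sys.columns
           WHERE object_id = OBJECT_ID(N'[Central].[Core.Report].[ReportLessonComp]')
             AND name = 'Name')
AND EXISTS (SELECT * FROM sys.columns
            WHERE object_id = OBJECT_ID(N'[Central].[Core.Report].[ReportLessonComp]')
              AND name = 'Code')
BEGIN
    -- Deferred compilation: the column references are resolved at EXEC time,
    -- not when the surrounding batch is parsed.
    EXEC sp_executesql N'
        UPDATE rlc
        SET CompetencyId = rc.Id
        FROM [Central].[Core.Report].[ReportLessonComp] rlc
        INNER JOIN [Core.Lookup].ReportCompetency rc
            ON rc.Code = rlc.Code AND rc.Name = rlc.Name;';

    ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN IF EXISTS Code;
    ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN IF EXISTS [Name];
    ALTER TABLE [Central].[Core.Report].[ReportLessonComp] DROP COLUMN IF EXISTS [Description];
END
GO
```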

Get list of SQL server databases which files have been deleted

My goal is to get a list of SQL Server databases whose files have been deleted.
In other words: I attach a database from a mount, then close the mount, so I end up with an attached database whose files no longer exist.
At first it seemed easy, just a simple SELECT:
SELECT
'DB_NAME' = db.name,
'FILE_NAME' = mf.name,
'FILE_TYPE' = mf.type_desc,
'FILE_PATH' = mf.physical_name
FROM
sys.databases db
INNER JOIN sys.master_files mf
ON db.database_id = mf.database_id
WHERE
--and specific condition here
But it turned out differently: SQL Server holds almost the same information about a regular database and about a database whose files are gone. So I had to try something else.
Next I tried to use the state of the database, and that was quite strange.
Unfortunately, the following query gives me wrong (or stale) information:
SELECT state
FROM sys.databases
WHERE name = N'TestDB'
state
-----
0
And 0 means ONLINE according to this link
But actually the database is in the RECOVERY_PENDING state. It looks like SQL Server's information about my TestDB is out of date and should be refreshed, but I have no idea how to achieve this. After executing any of the following statements, this information (the database state) does get refreshed:
EXEC sp_helpdb N'TestDB'
ALTER DATABASE [TestDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
USE [TestDB]
--etc
--all requests are terminated with the same error
Msg 5120, Level 16, State 101, Line 10
Unable to open the physical file "C:\MOUNT\b4c059e8-3ba6-425f-9a2a-f1713e7719ca\TestDB.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
File activation failure. The physical file name "C:\MOUNT\b4c059e8-3ba6-425f-9a2a-f1713e7719ca\TestDB_log.ldf" may be incorrect.
File activation failure. The physical file name "C:\MOUNT\b4c059e8-3ba6-425f-9a2a-f1713e7719ca\TestDB_log-2.ldf" may be incorrect.
Msg 5181, Level 16, State 5, Line 10
Could not restart database "TestDB". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 10
ALTER DATABASE statement failed.
So do you have any idea ?
I have also asked a similar question here, phrased differently.
Finally, I've found what I actually need.
I can check whether a specific file exists from within SQL Server:
CREATE FUNCTION dbo.fn_FileExists(@path varchar(512))
RETURNS BIT
AS
BEGIN
DECLARE @result INT
EXEC master.dbo.xp_fileexist @path, @result OUTPUT
RETURN CAST(@result AS BIT)
END;
GO
So I just need to execute the function above for each file, which I can get by running, for example, the following query:
SELECT
DISTINCT 'FILE_PATH' = physical_name
FROM sys.master_files
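The two pieces above can be combined into a single query. A sketch, assuming dbo.fn_FileExists was created in the current database and the caller has permission to run xp_fileexist, that lists every database with at least one missing file:

```sql
-- Databases for which at least one physical file no longer exists on disk.
SELECT DISTINCT
    DB_NAME(mf.database_id) AS [DB_NAME]
FROM sys.master_files mf
WHERE dbo.fn_FileExists(mf.physical_name) = 0;
```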

Early execution of "sp_rename" causes query to fail

I'm having a strange problem with an MSSQL query that I'm trying to run in Microsoft SQL Server 2014. It is an update script for my database structure. It should basically rename a column (from Price to SellingPrice) of a table after its content has been merged into another one.
USE db_meta
GO
DECLARE @BakItemPrices TABLE
(
ItemNum int,
Price int,
CashPrice int
)
-- backup old prices
insert into @BakItemPrices (ItemNum, Price)
select ItemNum, Price from dbo.ItemInfo
-- merge into other table
alter table ShopInfo
add Price int NOT NULL DEFAULT ((0))
update s
set s.Price = i.Price
from ShopInfo s
inner join @BakItemPrices i
on s.ItemNum = i.ItemNum
GO
-- rename the column
exec sp_rename 'ItemInfo.Price', 'SellingPrice', 'COLUMN' -- The Debugger executes this first
GO
This query always gave me the error:
Msg 207, Level 16, State 1, Line 13
Invalid column name 'Price'.
I couldn't understand this error until I debugged the query. I was amazed to see that the debugger wouldn't even hit the breakpoint I placed at the backup code, saying that it was "unreachable because another batch is being executed at the moment".
Looking further down, I saw that the debugger instantly starts with the exec sp_rename ... line before it executes the query code that I wrote above it. So at the point my backup code runs, the column is named SellingPrice and not Price, which obviously causes it to fail.
I thought queries were processed from top to bottom? Why is the sp_rename being executed before the code that I wrote above it?
The script is executed from top to bottom, but some schema changes only become "visible" after the transaction containing them is committed. Split your script into two scripts; that can help.
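The suggested split can be sketched with the question's own object names. The table-variable backup is no longer needed, because nothing renames Price until the second script runs on its own:

```sql
-- Script 1: add the column to ShopInfo and merge the prices in.
USE db_meta
GO
ALTER TABLE ShopInfo
ADD Price int NOT NULL DEFAULT ((0))
GO
UPDATE s
SET s.Price = i.Price
FROM ShopInfo s
INNER JOIN dbo.ItemInfo i
    ON s.ItemNum = i.ItemNum
GO

-- Script 2: run separately, once Script 1 has completed and committed.
USE db_meta
GO
EXEC sp_rename 'ItemInfo.Price', 'SellingPrice', 'COLUMN'
GO
```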