Processing ProcessParameters as XML in SQL Server

I am trying to extract values from an XML column. Unfortunately, whatever combination I try, I can't get any meaningful result out of it.
A test script with data can be found here
Related questions that did not shed any light on this for me:
Getting values from XML type field
XML query() works, value() requires singleton
Getting rowsets from XQuery and SQL Server 2005
Example of the contents of one item
<Dictionary xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib" xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:TypeArguments="x:String, x:Object">
<mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/Projects/BpABA/Dev/V6/DUnit/FrameworkTests.dproj">
<mtbwa:BuildSettings.PlatformConfigurations>
<mtbwa:PlatformConfigurationList Capacity="1">
<mtbwa:PlatformConfiguration Configuration="Debug" Platform="Win32" />
</mtbwa:PlatformConfigurationList>
</mtbwa:BuildSettings.PlatformConfigurations>
</mtbwa:BuildSettings>
<mtbwa:SourceAndSymbolServerSettings SymbolStorePath="{x:Null}" x:Key="SourceAndSymbolServerSettings" />
<mtbwa:AgentSettings x:Key="AgentSettings" MaxExecutionTime="01:00:00" MaxWaitTime="04:00:00" Tags="Delphi 5" />
<x:Boolean x:Key="CreateWorkItem">False</x:Boolean>
<x:Boolean x:Key="PerformTestImpactAnalysis">False</x:Boolean>
</Dictionary>
Latest attempt
;WITH XMLNAMESPACES('http://schemas.microsoft.com/winfx/2006/xaml' AS mtbwa)
, q AS (
SELECT CAST(bd.ProcessParameters AS XML) p
FROM dbo.tbl_BuildDefinition bd
)
SELECT X.Doc.value('mtbwa:BuildSettings[0]', 'VARCHAR(50)') AS 'Test'
FROM q CROSS APPLY p.nodes('/mtbwa:Dictionary') AS X(Doc)
Background
The column ProcessParameters is part of the TFS build system in the tbl_BuildDefinition table.
The complete DDL is as follows
USE [Tfs_ProjectCollection]
GO
/****** Object: Table [dbo].[tbl_BuildDefinition] Script Date: 06/19/2012 16:28:56 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tbl_BuildDefinition](
[DefinitionId] [int] IDENTITY(1,1) NOT NULL,
[GroupId] [int] NOT NULL,
[DefinitionName] [nvarchar](260) NOT NULL,
[ControllerId] [int] NOT NULL,
[DropLocation] [nvarchar](260) NULL,
[ContinuousIntegrationType] [tinyint] NOT NULL,
[ContinuousIntegrationQuietPeriod] [int] NOT NULL,
[LastBuildUri] [nvarchar](64) NULL,
[LastGoodBuildUri] [nvarchar](64) NULL,
[LastGoodBuildLabel] [nvarchar](326) NULL,
[Enabled] [bit] NOT NULL,
[Description] [nvarchar](2048) NULL,
[LastSystemQueueId] [int] NULL,
[LastSystemBuildStartTime] [datetime] NULL,
[ProcessTemplateId] [int] NOT NULL,
[ProcessParameters] [nvarchar](max) NULL,
[ScheduleJobId] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_tbl_BuildDefinition] PRIMARY KEY CLUSTERED
(
[GroupId] ASC,
[DefinitionName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tbl_BuildDefinition] ADD DEFAULT (newid()) FOR [ScheduleJobId]
GO

I think you have the wrong namespace defined for your mtbwa prefix in your XML/XQuery text, and you need to use 1-based indexing to get at the data when using the .value() function (not 0-based as is common in many programming languages).
So try this:
;WITH XMLNAMESPACES('clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow' AS mtbwa,
DEFAULT 'clr-namespace:System.Collections.Generic;assembly=mscorlib')
, q AS (
SELECT CAST(bd.ProcessParameters AS XML) p
FROM dbo.tbl_BuildDefinition bd
WHERE DefinitionId = 1
)
SELECT
X.Doc.query('mtbwa:BuildSettings') AS 'Node',
X.Doc.value('(mtbwa:BuildSettings/@ProjectsToBuild)[1]', 'VARCHAR(50)') AS 'ProjectsToBuild'
FROM
q
CROSS APPLY
p.nodes('/Dictionary') AS X(Doc)
This should give you the whole <mtbwa:BuildSettings> node as XML (using the .query() function), as well as the value of the single attribute ProjectsToBuild ($/Projects/BpABA/Dev/V6/DUnit/FrameworkTests.dproj) of that node.
If you want a whole node (as XML), then you need to use .query('xpath') - the .value() function can get you the inner text of a node (if present), or the value of a single attribute.
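Along the same lines, if you wanted one of the scalar dictionary entries (say the CreateWorkItem flag), you could address it via the xaml namespace the same way. This is just an illustrative, untested sketch reusing the CTE pattern above; it is not from the original post:
;WITH XMLNAMESPACES('http://schemas.microsoft.com/winfx/2006/xaml' AS x,
DEFAULT 'clr-namespace:System.Collections.Generic;assembly=mscorlib')
, q AS (
SELECT CAST(bd.ProcessParameters AS XML) p
FROM dbo.tbl_BuildDefinition bd
WHERE DefinitionId = 1
)
SELECT
-- element text of the x:Boolean entry whose x:Key attribute is "CreateWorkItem"
p.value('(/Dictionary/x:Boolean[@x:Key="CreateWorkItem"])[1]', 'VARCHAR(10)') AS 'CreateWorkItem'
FROM q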
Does that help at all?


Re-seeding a large sql table

Using version:
Microsoft SQL Server 2008 R2 (SP3-OD) (KB3144114) - 10.50.6542.0 (Intel X86)
Feb 22 2016 18:12:09
Copyright (c) Microsoft Corporation
Standard Edition on Windows NT 5.2 <X86> (Build : )
I have a heavy table (135K rows) that I moved from another DB.
It transferred with the [id] column being a standard int column instead of being the identity key & seed column.
When I try to edit that field to add an identity specification with a seed value, it errors out and gives me this error:
Execution Timeout Expired.
The timeout period elapsed prior to completion of the operation...
I even tried deleting that column, to recreate it later, but I get the same issue.
Thanks
UPDATE:
Table structure:
CREATE TABLE [dbo].[tblEmailsSent](
[id] [int] IDENTITY(1,1) NOT NULL, -- this is what it should be; currently it's just an [int] NOT NULL
[Sent] [datetime] NULL,
[SentByUser] [nvarchar](50) NULL,
[ToEmail] [nvarchar](150) NULL,
[StudentID] [int] NULL,
[SubjectLine] [nvarchar](200) NULL,
[MessageContent] [nvarchar](max) NULL,
[ReadStatus] [bit] NULL,
[Folder] [nvarchar](50) NULL,
CONSTRAINT [PK_tblMessages] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I think your question is a duplicate of Adding an identity to an existing column. That question has an answer that should be perfect for your situation; I'll reproduce its essential part below.
But before that, let's clarify why you see the timeout error.
You are trying to add the IDENTITY property to an existing column, and you are using the SSMS GUI for it. A simple ALTER COLUMN statement can't do that, so SSMS generates a script that creates a new table, copies the data into the new table, drops the old table, and renames the new table to the old name. When you do this operation via the SSMS GUI, it runs its scripts with a predefined timeout of 30 seconds.
Of course, you can change this setting in SSMS and increase the timeout, but there is a much better way.
Simple/lazy way
Use the SSMS GUI to change the column definition, but then instead of clicking "Save", click "Generate Change Script" in the table designer.
Then save this script to a file and review the generated T-SQL code that the GUI runs behind the scenes.
You'll see that it creates a temp table with the required schema, copies data over, re-creates foreign keys and indexes, drops the old table and renames the new table.
The script itself is usually correct, but pay close attention to the transactions in it. For some reason SSMS often doesn't use a single transaction for the whole operation, but several transactions. I'd recommend manually reviewing the script and making sure that there is only one BEGIN TRANSACTION at the top and one COMMIT at the end (a sketch of that overall shape follows below). You don't want to end up with a half-done operation, say, a table where all indexes and foreign keys have been dropped.
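For illustration only, the reviewed script should end up with roughly this shape. The inner steps are whatever SSMS generated for your table; the TRY/CATCH wrapper is just a suggestion, not part of the generated script:
BEGIN TRY
    BEGIN TRANSACTION;
    -- the generated steps go here: create the temp table with IDENTITY,
    -- copy the data with IDENTITY_INSERT ON, re-create indexes and foreign keys,
    -- drop the old table and rename the new one
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1); -- re-raise so the failure is visible
END CATCH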
If it is a one-off operation, it could be enough for you. Your table is only 2.4GB, so it may take a few minutes, but it should not take hours.
If you run the T-SQL script yourself in SSMS, then by default there is no timeout. You can stop it yourself if it takes too long.
A smart and fast way to do it is described in detail in this answer by Justin Grant.
The main idea is to use the ALTER TABLE ... SWITCH statement to make the change by touching only the metadata, without touching each page of the table.
BEGIN TRANSACTION;
-- create a new table with required schema
CREATE TABLE [dbo].[NEW_tblEmailsSent](
[id] [int] IDENTITY(1,1) NOT NULL,
[Sent] [datetime] NULL,
[SentByUser] [nvarchar](50) NULL,
[ToEmail] [nvarchar](150) NULL,
[StudentID] [int] NULL,
[SubjectLine] [nvarchar](200) NULL,
[MessageContent] [nvarchar](max) NULL,
[ReadStatus] [bit] NULL,
[Folder] [nvarchar](50) NULL,
CONSTRAINT [PK_tblEmailsSent] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
-- switch the tables
ALTER TABLE [dbo].[tblEmailsSent] SWITCH TO [dbo].[NEW_tblEmailsSent];
-- drop the original (now empty) table
DROP TABLE [dbo].[tblEmailsSent];
-- rename new table to old table's name
EXEC sp_rename 'NEW_tblEmailsSent','tblEmailsSent';
COMMIT;
After the new table has the IDENTITY property, you normally should set the current identity value to the maximum of the actual values in your table. If you don't, new rows inserted into the table will start from 1.
One way to do it is to run DBCC CHECKIDENT after you have switched the tables:
DBCC CHECKIDENT('dbo.tblEmailsSent')
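If you prefer to be explicit about the new seed, something like this should also work (a sketch only, assuming the table and column names from above):
-- compute the current maximum id and reseed to it explicitly
DECLARE @maxId INT = (SELECT ISNULL(MAX(id), 0) FROM dbo.tblEmailsSent);
DBCC CHECKIDENT('dbo.tblEmailsSent', RESEED, @maxId);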
Alternatively, you can specify the new seed in the table definition:
CREATE TABLE [dbo].[NEW_tblEmailsSent](
[id] [int] IDENTITY(<max value of id + 1>, 1) NOT NULL,

temporary table | multi-part identifier could not be bound

Every other article I see has something to do with JOINs... I'm not even trying to do a join! I'm just trying to run a simple UPDATE based on information in a temporary table. Here's the code:
BEGIN TRAN ArchiveMigration
-- insert into temporary table
CREATE TABLE #tblTemp(
[theID] [int] NOT NULL,
[ScheduleID] [int] NOT NULL,
[OverridingCustomerID] [int] NOT NULL,
[Timestamp] [datetime] NOT NULL,
[DeviceName] [nvarchar](max) NULL,
[DestinationTempCool] [int] NULL,
[DestinationMode] [nvarchar](max) NULL,
[DestinationTempHeat] [int] NULL,
CONSTRAINT [PK_#tblTemp] PRIMARY KEY CLUSTERED
(
[theID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
INSERT INTO #tblTemp ([theID], [ScheduleID], [OverridingCustomerID], Timestamp, DeviceName, DestinationTempCool, DestinationMode, DestinationTempHeat)
SELECT Id, ScheduleId, OverridingCustomerId, Timestamp, DeviceName, DestinationTempCool, DestinationMode, DestinationTempHeat
FROM CustomerScheduleOverride
WHERE Id = 836;
-- modify the extended info table
UPDATE ExtendedOverrideInfo
SET ExtendedOverrideInfo.OverrideId = Null
WHERE ExtendedOverrideInfo.OverrideId = #tblTemp.[theID];
COMMIT TRAN
All I want to do is nullify the values of ExtendedOverrideInfo.OverrideId if said ID exists in #tblTemp (the statement is towards the bottom of the script). Any idea why I might be getting this message? Thanks in advance!
Your current UPDATE syntax is incorrect; you will need to use a JOIN on your temporary table. This article from Pinal Dave provides a more detailed explanation.
UPDATE ExtendedOverrideInfo
SET ExtendedOverrideInfo.OverrideId = Null
FROM ExtendedOverrideInfo
INNER JOIN #tblTemp t on t.[theID]=ExtendedOverrideInfo.OverrideId
Your UPDATE statement is incorrect; the WHERE clause references a table that is not part of the statement. You have several options to resolve the problem:
Make a join with the temp table.
Use EXISTS in your WHERE clause (see the sketch below).
Or simply, if the only purpose of creating the temp table is to nullify these values, why not use a cursor, or change your WHERE clause to search for the records by ID directly?
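A sketch of the EXISTS variant, assuming the same table and column names as above:
UPDATE ExtendedOverrideInfo
SET OverrideId = NULL
WHERE EXISTS (SELECT 1
              FROM #tblTemp t
              WHERE t.[theID] = ExtendedOverrideInfo.OverrideId);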

cannot write to newly created table in SQL Azure

In our Azure SQL database we had a table App_Tracking that is/was used to track user actions. We needed to increase the size of the log buffer, so I first copied over all the records to an archive table that was defined using this SQL statement:
CREATE TABLE [dbo].[App_Tracking_Nov20_2015](
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [nvarchar](50) NOT NULL,
[App_Usage] [nvarchar](1024) NOT NULL,
[Timestamp] [datetime] NOT NULL )
Then, using SQL Server Management Studio 2012, I recreated the original table using Drop/Create script generation:
USE [tblAdmin]
GO
/****** Object: Table [dbo].[App_Tracking] Script Date: 11/21/2015 11:42:01 AM ******/
DROP TABLE [dbo].[App_Tracking]
GO
/****** Object: Table [dbo].[App_Tracking] Script Date: 11/21/2015 11:42:01 AM ******/
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[App_Tracking](
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [nvarchar](50) NOT NULL,
[App_Usage] [nvarchar](4000) NOT NULL,
[Timestamp] [datetime] NOT NULL,
CONSTRAINT [PrimaryKey_7c88841f-aaaa-bbbb-cccc-c26fe6a5720e] PRIMARY KEY CLUSTERED (
[ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) )
GO
This is the automated drop/create script that SSMS 2012 generates for you.
I then updated statistics on App_Admin using EXEC sp_updatestats.
The gotcha is that I can no longer programmatically add records to this table.
If I open App_Admin from manage.windowsazure.net and "Open in Visual Studio", I can manually add a record to it, but if I run the following code in SSMS 2012:
USE [tblAdmin]
GO
UPDATE [dbo].[App_Tracking] SET
[UserID] = 'e146ba22-930c-4b22-ac3c-15da47722e75' ,
[App_Usage] = 'search search: Bad Keyword: asdfadsfs' ,
[Timestamp] = '2015-11-20 20:00:18.700'
GO
nothing gets updated, but no error is thrown.
If, programmatically, I use:
var adminContext = new App_AdminEntities();
string prunedAction = action.Length <= 4000 ? action : action.Trim().Substring (0, 4000); // ensure we don't fault on overflow of too long a keyword list
var appTracking = new App_Tracking
{
UserID = userId,
App_Usage = prunedAction,
Timestamp = DateTime.Now
};
try {
adminContext.App_Tracking.Add(appTracking);
adminContext.SaveChanges();
adminContext.Dispose();
}
I get an error thrown on SaveChanges (the .NET call that writes the changes to the SQL database). What did I do wrong?
OK, so I found the problem. It turns out I had not updated the associated EDMX file, so the error was being thrown by internal Entity Framework validation, which is somewhat hidden under the covers.

SQL statement takes a long time to execute

I have a SQL Server database with a table containing a very large number of records. It was working fine before, but now the SQL statement takes a long time to execute,
and it sometimes causes the database to use too much CPU.
This is the DDL for the table:
CREATE TABLE [dbo].[tblPAnswer1](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[AttrID] [int] NULL,
[Kidato] [int] NULL,
[Wav] [int] NULL,
[Was] [int] NULL,
[ShuleID] [int] NULL,
[Mwaka] [int] NULL,
[Swali] [float] NULL,
[Wilaya] [int] NULL,
CONSTRAINT [PK_tblPAnswer1] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
And the following is the stored procedure for the statement:
ALTER PROC [dbo].[uspGetPAnswer1](@ShuleID int, @Mwaka int, @Swali float, @Wilaya int)
as
SELECT ID,
AttrID,
Kidato,
Wav,
Was,
ShuleID,
Mwaka,
Swali,
Wilaya
FROM dbo.tblPAnswer1
WHERE [ShuleID] = @ShuleID
AND [Mwaka] = @Mwaka
AND [Swali] = @Swali
AND Wilaya = @Wilaya
What is wrong with my SQL statement? I need help.
Just add an index on the ShuleID, Mwaka, Swali and Wilaya columns. The order of the columns in the index should depend on the distribution of the data (the columns with the most diverse values should come first in the index, and so on); see the sketch below.
And if you need it super-fast, also include all the remaining columns used in the query, to have a covering index for this particular query.
EDIT: You should probably move the float column (Swali) from the key columns to the included columns.
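For illustration only, the suggested covering index could look something like this (the index name and the exact column order are assumptions; tune them to your data):
CREATE NONCLUSTERED INDEX IX_tblPAnswer1_ShuleID_Mwaka_Wilaya
ON dbo.tblPAnswer1 (ShuleID, Mwaka, Wilaya)
INCLUDE (Swali, AttrID, Kidato, Wav, Was);
-- ID is the clustered primary key, so it is carried in the nonclustered index automatically.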
Add an Index on the ID column and include ShuleID, Mwaka, Swali and Wilaya columns. That should help improve the speed of the query.
CREATE NONCLUSTERED INDEX IX_ID_ShuleID_Mwaka_Swali_Wilaya
ON tblPAnswer1 (ID)
INCLUDE (ShuleID, Mwaka, Swali, Wilaya);
What is the size of the table? You may need additional indices as you are not using the primary key to query the data. This article by Pinal Dave provides a script to identify missing indices.
http://blog.sqlauthority.com/2011/01/03/sql-server-2008-missing-index-script-download/
It provides a good starting point for index optimization.

how to set default value for a column using a scalar function

I have a table like this:
CREATE TABLE [dbo].[tbl](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Year] [int] NOT NULL,
[Month] [int] NOT NULL,
[Fields] [xml] NOT NULL,
[NameFromXML] [nvarchar](1000) NULL
) ON [PRIMARY]
In the Fields column I store XML like this:
<Employees>
<Person>
<ID>0</ID>
<Name>Ligha</Name>
<LName>Agha</LName>
</Person>
</Employees>
OK, I want to persist the value of the Name element to NameFromXML. I wrote this function:
CREATE FUNCTION dbo.GetName
(
@xml XML
)
RETURNS NVARCHAR(100)
WITH RETURNS NULL ON NULL INPUT
AS
BEGIN
RETURN @xml.value('/Employees[1]/Person[1]/Name[1]', 'nvarchar(100)')
END
GO
But when I write this code to add the default:
ALTER TABLE tbl_Test_XML_Index_View ADD CONSTRAINT df_f DEFAULT(dbo.GetName(Fields))
FOR namefromXML
I got this error:
The name "Fields" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.
How can I solve this problem?
Why not have it as a computed column, which will always be correct, and won't require any more maintenance effort:
CREATE FUNCTION dbo.GetName
(
@xml XML
)
RETURNS NVARCHAR(100)
WITH RETURNS NULL ON NULL INPUT
, SCHEMABINDING
AS
BEGIN
RETURN @xml.value('/Employees[1]/Person[1]/Name[1]', 'nvarchar(100)')
END
GO
CREATE TABLE [dbo].[tbl](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Year] [int] NOT NULL,
[Month] [int] NOT NULL,
[Fields] [xml] NOT NULL,
[NameFromXML] AS dbo.GetName(Fields) persisted
) ON [PRIMARY]
insert into dbo.tbl (Year,Month,Fields)
select 1,1,'<Employees>
<Person>
<ID>0</ID>
<Name>Ligha</Name>
<LName>Agha</LName>
</Person>
</Employees>'
select * from dbo.tbl
I had to add SCHEMABINDING to the function in order for it to be treated as deterministic. That, in turn, allowed me to mark the computed column as PERSISTED, which means it can also be indexed if needed.
It seems that you are trying to store redundant data for each row. What happens if the name in the XML is changed? How do you make sure the NameFromXML column gets updated? By using triggers? My advice is to store only the data you need in regular columns. Write insert/update stored procedures that take the XML as a parameter and insert the data from it into the appropriate columns (a sketch follows below).
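A minimal sketch of that idea, assuming the original table definition from the question (with NameFromXML as a regular column); the procedure name and the XPath are illustrative assumptions based on the sample XML above:
CREATE PROCEDURE dbo.usp_tbl_Insert
    @Year INT,
    @Month INT,
    @Fields XML
AS
BEGIN
    SET NOCOUNT ON;
    -- shred the Name value once on the way in and store it in a regular column
    INSERT INTO dbo.tbl ([Year], [Month], [Fields], [NameFromXML])
    VALUES (@Year, @Month, @Fields,
            @Fields.value('(/Employees/Person/Name)[1]', 'nvarchar(1000)'));
END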