SSDT drops and recreates tables when nothing has changed - sql

I'm using SSDT database project to create deployment scripts for my database.
One of the tables, [AdrInfo].[IL], is dropped and then recreated every time the deployment runs.
Nothing has changed in the table's definition in the project files.
Definition of the table:
CREATE TABLE [AdrInfo].[IL] (
[IL_ID] NVARCHAR (50) NULL,
[IL_ADI] NVARCHAR (50) NULL,
[XCOOR] VARCHAR (50) NULL,
[YCOOR] VARCHAR (50) NULL,
[IL_ADI_KEY] AS (CONVERT (NVARCHAR (255), replace(replace([IL_ADI], ' ', ''), '.', ''), 0) COLLATE SQL_Latin1_General_Cp850_CI_AI) PERSISTED );
CREATE CLUSTERED INDEX [index_IX_IL_CI1] ON [AdrInfo].[IL]([IL_ADI_KEY] ASC);
Snippet from deployment script:
GO
PRINT N'Starting rebuilding table [AdrInfo].[IL]...';
GO
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [AdrInfo].[tmp_ms_xx_IL] (
[IL_ID] NVARCHAR (50) NULL,
[IL_ADI] NVARCHAR (50) NULL,
[XCOOR] VARCHAR (50) NULL,
[YCOOR] VARCHAR (50) NULL,
[IL_ADI_KEY] AS (CONVERT (NVARCHAR (255), replace(replace([IL_ADI], ' ', ''), '.', ''), 0) COLLATE SQL_Latin1_General_Cp850_CI_AI) PERSISTED );
CREATE CLUSTERED INDEX [tmp_ms_xx_index_IX_IL_CI1]
ON [AdrInfo].[tmp_ms_xx_IL]([IL_ADI_KEY] ASC);
I would expect SSDT not to touch this table during deployment. What can cause such behavior?

SSDT is very picky when deploying computed column and default expressions for table columns.
Please compare the two expressions below:
(CONVERT (NVARCHAR (255), replace(replace([IL_ADI], ' ', ''), '.', ''), 0) COLLATE SQL_Latin1_General_Cp850_CI_AI) PERSISTED
((CONVERT (NVARCHAR (255), replace(replace([IL_ADI], ' ', ''), '.', ''), 0)) COLLATE SQL_Latin1_General_Cp850_CI_AI) PERSISTED
Using the first one will cause the table to be redeployed every time; using the second one stops this behavior. SQL Server does not store these expressions as the text you wrote, but normalizes them. SSDT applies its own normalization and then compares the result to SQL Server's normalized expression.
If the two sets of normalization rules do not produce the same expression, SSDT redeploys the column expression every time, which in your case meant rebuilding the whole table.
To avoid this, script the table in SSMS to get the normalized expression and save that version in the project file.
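As an alternative to scripting the table in SSMS, you can read SQL Server's normalized form of the expression straight out of the catalog; a minimal sketch, using the schema, table, and column names from the question:
SELECT cc.definition
FROM sys.computed_columns AS cc
WHERE cc.object_id = OBJECT_ID(N'[AdrInfo].[IL]')
AND cc.name = N'IL_ADI_KEY';
Paste the returned definition into the project's CREATE TABLE statement so that SSDT's normalization and SQL Server's agree on every deployment.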

Related

Can I retrieve Arabic text stored in a varchar column with an Arabic collation, when it was copied from another database with a Latin collation?

I have two databases; one of them uses a Latin collation and contains Arabic data. In some cases I take data stored in the first database and store it in a table in the second database, which has an Arabic collation, but the columns in both databases use the varchar data type. When I store the data in the second database, the text is stored with question marks.
If I change the type to nvarchar everything goes well, but the old data still has question marks. How can I retrieve the old data?
When I try to get the old data, I get unreadable text with question marks.
CREATE TABLE dbo.cust_supp (
company_id t_id_char2 NOT NULL,
acc_name_a t_id_var70 NULL,
acc_name_e t_id_var70 NULL,
closed_by varchar(510) NULL COLLATE SQL_Latin1_General_CP850_CI_AS -- assumed character type; COLLATE applies only to character columns
);
CREATE TABLE dbo.dcl_item_update (
item_update_id numeric(18,0) NOT NULL IDENTITY,
item_group_id numeric(18,0) NOT NULL,
change_column_id numeric(9,0) NOT NULL,
dcl_record_key numeric(9,0) NOT NULL,
old_value varchar(510) NULL COLLATE Arabic_CI_AS,
new_value varchar(510) NULL COLLATE Arabic_CI_AS
);
I store the acc_name_a value into the columns old_value and new_value.
Did this work?
(I don't have a SQL Server right now to try, and I'm not sure if it's really what you want either...)
INSERT INTO dbo.dcl_item_update (old_value)
SELECT closed_by COLLATE Arabic_CI_AS as closed_by
FROM dbo.cust_supp
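For context, the question marks are not a display problem: the code-page conversion happens at insert time and is lossy, so Arabic text that was stored in a varchar column under a Latin collation is gone for good. A small sketch (hypothetical temp table) illustrating why:
CREATE TABLE #collation_demo (
latin_col varchar(50) COLLATE SQL_Latin1_General_CP850_CI_AS NULL, -- code page 850, no Arabic characters
arabic_col varchar(50) COLLATE Arabic_CI_AS NULL -- code page 1256, Arabic characters supported
);
INSERT INTO #collation_demo (latin_col, arabic_col)
VALUES (N'مرحبا', N'مرحبا');
SELECT latin_col, arabic_col FROM #collation_demo;
-- latin_col comes back as '?????': the characters were replaced when the row
-- was inserted, so no later COLLATE or CONVERT can recover the original text.
DROP TABLE #collation_demo;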

How to retrieve German characters from a large CSV File into SQL Server 2017 script

I have a CSV file containing a list of employees, some of whom have German characters like 'ö' in their names. I need to create a temp table in my SQL Server 2017 script and fill it with the content of the CSV file. My script is:
CREATE TABLE #AllAdUsers(
[PhysicalDeliveryOfficeName] [NVARCHAR](255) NULL,
[Name] [NVARCHAR](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DisplayName] [NVARCHAR](255) NULL,
[Company] [NVARCHAR](255) NULL,
[SAMAccountName] [NVARCHAR](255) NULL
)
--import AD users
BULK INSERT #AllAdUsers
FROM 'C:\Employees.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
However, even though I use the NVARCHAR type with the SQL_Latin1_General_CP1_CI_AS collation, the German characters do not come out right; for instance, "Kösker" appears as:
"K├╢sker"
I've tried many other collations but couldn't find a fix for it. Any help would be very much appreciated.
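For what it's worth, "K├╢sker" is the classic sign of UTF-8 bytes being decoded under a single-byte code page ('ö' is the two bytes 0xC3 0xB6, which render as '├╢'). The column collation is not the issue here, because the columns are NVARCHAR; what matters is how BULK INSERT decodes the file. Assuming the CSV is saved as UTF-8, a hedged sketch of one common fix:
BULK INSERT #AllAdUsers
FROM 'C:\Employees.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --use to shift the control to the next row
CODEPAGE = '65001', --decode the file as UTF-8 (supported in SQL Server 2016 and later)
TABLOCK
)
If the file is actually UTF-16, DATAFILETYPE = 'widechar' would be the option to try instead.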

MS SQL explicitly using "default defaults" on NOT NULL fields - why?

I stumbled upon this definition:
CREATE TABLE dbo.whatever (
[flBlahBlah] BIT DEFAULT ((0)) NOT NULL,
[txCity] NVARCHAR (50) DEFAULT ('') NOT NULL,
[cdFrom] VARCHAR (10) DEFAULT ('') NOT NULL
);
I can't think of a reason to add those default values: the NOT NULL strings are defaulted to '' and the bit is defaulted to 0. Is there a reason for defining these default values? Am I missing something? Is this in some best-practice handbook I'm not aware of?
I'd just use:
CREATE TABLE dbo.whatever (
[flBlahBlah] BIT NOT NULL,
[txCity] NVARCHAR (50) NOT NULL,
[cdFrom] VARCHAR (10) NOT NULL
);
The database is in MS SQL Server 2012, now migrating to Azure Database.
For example, create the table from the first batch in your question, then insert a value like this:
INSERT INTO dbo.whatever (flBlahBlah) VALUES (1)
You will get one row in dbo.whatever:
flBlahBlah  txCity  cdFrom
1           ''      ''
So if you "forget" to insert into one of the columns that has a default defined, SQL Server will take care of it; without the defaults, this INSERT would fail, because txCity and cdFrom are NOT NULL.
This is very useful when you need to add a new column to an existing table: with a default defined, you don't have to change the stored procedures, queries, and other code that work with the table.
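A quick sketch of that scenario, with a hypothetical new column: adding a NOT NULL column to a table that already contains rows only succeeds when a default is supplied, and existing INSERT statements keep working unchanged.
ALTER TABLE dbo.whatever
ADD [flArchived] BIT DEFAULT ((0)) NOT NULL; -- hypothetical new column
-- Existing rows get 0 for [flArchived], and older INSERT statements that
-- do not mention the column continue to succeed.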

SQL Insert Out of Sync

I have a bit of SQL here which is throwing an error:
DROP TABLE HACP_TEMP_PIC_HCV_Imported;
CREATE TABLE HACP_TEMP_PIC_HCV_Imported
(
HeadSSN varchar(255) NOT NULL,
HeadFName varchar(255) NOT NULL,
HeadMName varchar(255),
HeadLName varchar(255) NOT NULL,
ModifiedDate varchar(255) NOT NULL,
ActionType varchar(255) NOT NULL,
EffectiveDate varchar(255) NOT NULL
);
BULK INSERT HACP_TEMP_PIC_HCV_Imported
FROM 'C:\Work\MTWAdhocReport.csv'
WITH
(
FIRSTROW = 11,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
ERRORFILE = 'C:\Work\Import_ErrorRows_HCV.csv',
TABLOCK
);
UPDATE HACP_TEMP_PIC_HCV_Imported
SET HeadSSN = REPLACE(HeadSSN, '"', ''),
HeadFName = REPLACE(HeadFName, '"', ''),
HeadMName = REPLACE(HeadMName, '"', ''),
HeadLName = REPLACE(HeadLName, '"', ''),
ModifiedDate = REPLACE(ModifiedDate, '"', ''),
ActionType = REPLACE(ActionType, '"', ''),
EffectiveDate = REPLACE(REPLACE(EffectiveDate, '"', ''),',','');
DROP TABLE HACP_PIC_HCV_Imported;
CREATE TABLE HACP_PIC_HCV_Imported
(
HeadSSN varchar(255) NOT NULL,
HeadFName varchar(255) NOT NULL,
HeadMName varchar(255),
HeadLName varchar(255) NOT NULL,
ModifiedDate varchar(255) NOT NULL,
ActionType int NOT NULL,
EffectiveDate varchar(255) NOT NULL
);
INSERT INTO HACP_PIC_HCV_Imported(HeadSSN, HeadFName, HeadMName, HeadLName, ModifiedDate, ActionType, EffectiveDate)
SELECT
LTRIM(HeadSSN),
LTRIM(HeadFName),
LTRIM(HeadMName),
LTRIM(HeadLName),
LTRIM(ModifiedDate),
CONVERT(int, LTRIM(ActionType)),
LTRIM(EffectiveDate)
FROM
HACP_TEMP_PIC_HCV_Imported;
Stepping through this, creating the temp table and importing the CSV into it works fine. Updating the table to remove quotes and a trailing comma from the EffectiveDate column works. Creating the new table itself works.
When trying to copy the data into the second table (and converting ActionType into an INT), I get this error message:
Conversion failed when converting the varchar value '4/07/2016' to data type int.
That data is the second row value in ModifiedDate, so the columns are apparently getting out of sync after importing the first row. I have double-checked that all of the data is in the proper columns after being imported into the temp table initially.
Any thoughts? I feel like I'm missing something obvious.
Your code suggests that you are using "proper" CSV format, which allows fields to be enclosed in double quotes; such quoted fields can contain commas. This is the format produced and read by Excel.
My guess is that you have a comma in one of those quoted fields, and this is throwing off the import.
But this format is not read properly by BULK INSERT. Ironically, at least one other database does import CSV files with commas inside quoted fields correctly.
In the past, when I've had this problem, it has only been with smallish files. I simply loaded the data into Excel and then saved it out using tabs or vertical bars as delimiters. That solved the problem in my case.
I'm not sure if there is a more advanced solution now, but I'm pretty sure your problem is that some fields have embedded commas in the text.
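For what it's worth, SQL Server 2017 and later add native CSV parsing to BULK INSERT, which understands quoted fields with embedded commas; a hedged sketch using the file from the question:
BULK INSERT HACP_TEMP_PIC_HCV_Imported
FROM 'C:\Work\MTWAdhocReport.csv'
WITH
(
FORMAT = 'CSV', -- RFC 4180 parsing, SQL Server 2017 and later
FIELDQUOTE = '"', -- quoted fields may contain commas
FIRSTROW = 11,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
TABLOCK
);
This would also strip the enclosing double quotes during the load, making the REPLACE cleanup step unnecessary.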

SQL Server - Insufficient result space to convert uniqueidentifier value to char

I am getting the error below when I run a SQL query that copies data from one table to another:
Msg 8170, Level 16, State 2, Line 2
Insufficient result space to convert uniqueidentifier value to char.
My SQL query is:
INSERT INTO dbo.cust_info (
uid,
first_name,
last_name
)
SELECT
NEWID(),
first_name,
last_name
FROM dbo.tmp_cust_info
My CREATE TABLE scripts are:
CREATE TABLE [dbo].[cust_info](
[uid] [varchar](32) NOT NULL,
[first_name] [varchar](100) NULL,
[last_name] [varchar](100) NULL)
CREATE TABLE [dbo].[tmp_cust_info](
[first_name] [varchar](100) NULL,
[last_name] [varchar](100) NULL)
I am sure there is some problem with NEWID(); if I take it out and replace it with some string, it works.
I appreciate any help. Thanks in advance.
A GUID needs 36 characters (because of the dashes). You only provide a 32-character column. Not enough, hence the error.
You need to use one of three alternatives:
1. A uniqueidentifier column, which stores the value internally as 16 bytes. When you select from this column, it is automatically rendered for display in the 8-4-4-4-12 format.
CREATE TABLE [dbo].[cust_info](
[uid] uniqueidentifier NOT NULL,
[first_name] [varchar](100) NULL,
[last_name] [varchar](100) NULL)
2. (not recommended) Change the field to char(36) so that it fits the formatted value, including the dashes.
CREATE TABLE [dbo].[cust_info](
[uid] char(36) NOT NULL,
[first_name] [varchar](100) NULL,
[last_name] [varchar](100) NULL)
3. (not recommended) Store it without the dashes, as just the 32 hex characters:
INSERT INTO dbo.cust_info (
uid,
first_name,
last_name
)
SELECT
replace(NEWID(),'-',''),
first_name,
last_name
FROM dbo.tmp_cust_info
I received this error when I was trying to perform simple string concatenation on the GUID. Apparently a VARCHAR without an explicit length is not big enough.
I had to change:
SET @foo = 'Old GUID: {' + CONVERT(VARCHAR, @guid) + '}';
to:
SET @foo = 'Old GUID: {' + CONVERT(NVARCHAR(36), @guid) + '}';
...and all was good. Huge thanks to the prior answers on this one!
Increase the length of your uid column from varchar(32) to varchar(36), because a GUID takes 36 characters. For example, in .NET:
Guid.NewGuid().ToString() -> 36 characters
outputs: 12345678-1234-1234-1234-123456789abc
You can try this; it worked for me. Specify a length for VARCHAR when you cast/convert a value; for a uniqueidentifier, use VARCHAR(36) as below:
SELECT CONVERT(varchar(36), NEWID()) AS NEWID;
If you don't specify a length during CAST/CONVERT, the default length for the VARCHAR data type is 30.
Credit: Krishnakumar S
Reference: https://social.msdn.microsoft.com/Forums/en-US/fb24a153-f468-4e18-afb8-60ce90b55234/insufficient-result-space-to-convert-uniqueidentifier-value-to-char?forum=transactsql
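A quick way to see that default length in action (a sketch; the generated GUID values will vary):
SELECT CONVERT(varchar(36), NEWID()); -- succeeds: 36 characters is enough
SELECT CONVERT(varchar, NEWID()); -- fails with Msg 8170: CONVERT defaults to varchar(30)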