Allow trailing white space differences in SQL Server unique constraint [duplicate] - sql

I'm using Microsoft SQL Server 2008 R2 (with latest service pack/patches) and the database collation is SQL_Latin1_General_CP1_CI_AS.
The following code:
SET ANSI_PADDING ON;
GO
CREATE TABLE Test (
Code VARCHAR(16) NULL
);
CREATE UNIQUE INDEX UniqueIndex
ON Test(Code);
INSERT INTO Test VALUES ('sample');
INSERT INTO Test VALUES ('sample ');
SELECT '>' + Code + '<' FROM Test WHERE Code = 'sample ';
GO
produces the following results:
(1 row(s) affected)
Msg 2601, Level 14, State 1, Line 8
Cannot insert duplicate key row in object 'dbo.Test' with unique index 'UniqueIndex'. The duplicate key value is (sample ).
The statement has been terminated.
------------
>sample<
(1 row(s) affected)
My questions are:
I assume the index cannot store trailing spaces. Can anyone point me to official documentation that specifies/defines this behavior?
Is there a setting to change this behavior, that is, to make it recognize 'sample' and 'sample ' as two different values (which they are, by the way) so both can be in the index?
Why on Earth is the SELECT returning a row? SQL Server must be doing something really funny/clever with the spaces in the WHERE clause because if I remove the uniqueness in the index, both INSERTs will run OK and the SELECT will return two rows!
Any help/pointer in the right direction would be appreciated. Thanks.

Trailing blanks explained:
SQL Server follows the ANSI/ISO SQL-92 specification (Section 8.2, "Comparison Predicate", General rules #3) on how to compare strings
with spaces. The ANSI standard requires padding for the character
strings used in comparisons so that their lengths match before
comparing them. The padding directly affects the semantics of WHERE
and HAVING clause predicates and other Transact-SQL string
comparisons. For example, Transact-SQL considers the strings 'abc' and
'abc ' to be equivalent for most comparison operations.
The only exception to this rule is the LIKE predicate. When the right
side of a LIKE predicate expression features a value with a trailing
space, SQL Server does not pad the two values to the same length
before the comparison occurs. Because the purpose of the LIKE
predicate, by definition, is to facilitate pattern searches rather
than simple string equality tests, this does not violate the section
of the ANSI SQL-92 specification mentioned earlier.
Here's a well known example of all the cases mentioned above:
DECLARE @a VARCHAR(10)
DECLARE @b VARCHAR(10)
SET @a = '1'
SET @b = '1 ' -- with trailing blank
SELECT 1
WHERE
@a = @b
AND @a NOT LIKE @b
AND @b LIKE @a
Here's some more detail about trailing blanks and the LIKE clause.
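Incidentally, LEN ignores trailing blanks while DATALENGTH does not, which gives a quick way to confirm that the two stored values really are different even though `=` treats them as equal (using the Test table from the question):

```sql
-- LEN ignores trailing blanks; DATALENGTH counts every byte,
-- so it can tell 'sample' and 'sample ' apart even though = cannot.
SELECT Code,
       LEN(Code)        AS len_chars,   -- 6 for both rows
       DATALENGTH(Code) AS len_bytes    -- 6 vs. 7 (varchar: one byte per char)
FROM Test
WHERE Code = 'sample';                  -- padding makes both rows match
```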
Regarding indexes:
An insertion into a column whose values must be unique will fail if you supply a value that is differentiated from existing values by
trailing spaces only. The following strings will all be considered
equivalent by a unique constraint, primary key, or unique index.
Likewise, if you have an existing table with the data below and try to
add a unique restriction, it will fail because the values are
considered identical.
PaddedColumn
------------
'abc'
'abc '
'abc '
'abc '
(Taken from here.)
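If you truly need both values in the index, there is no setting to switch off the padded comparison, but one workaround (a sketch, not an officially sanctioned mechanism) is to make the byte length part of the key via a persisted computed column, so values differing only in trailing spaces no longer collide:

```sql
-- Sketch: include the byte length in the unique key so that
-- 'sample' (6 bytes) and 'sample ' (7 bytes) become distinct keys.
ALTER TABLE Test ADD CodeByteLen AS DATALENGTH(Code) PERSISTED;

DROP INDEX UniqueIndex ON Test;
CREATE UNIQUE INDEX UniqueIndex ON Test(Code, CodeByteLen);

INSERT INTO Test (Code) VALUES ('sample');
INSERT INTO Test (Code) VALUES ('sample ');  -- no longer a duplicate
```

Equality searches will still treat the two values as the same, so pair this with a DATALENGTH predicate when you need to target one of them.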

Related

Differentiate Exponents in T-SQL

In SQL Server 2017 (14.0.2)
Consider the following table:
CREATE TABLE expTest
(
someNumbers [NVARCHAR](10) NULL
)
And let's say you populate the table with some values:
INSERT INTO expTest VALUES ('²'), ('2')
Why does the following SELECT return both rows?
SELECT *
FROM expTest
WHERE someNumbers = '2'
Shouldn't nvarchar treat '²' and '2' as distinct values? How (without using the UNICODE() function) could I identify this data as being nonequivalent?
Here is a db<>fiddle. This shows the following:
Your observation is true even when the values are entered as national character set constants.
The "ASCII" versions of the characters are actually different.
The problem goes away with a case-sensitive collation.
I think the exponent is just being treated as a different "case" of the number, so they are considered the same in a case-insensitive collation.
The comparison is what you expect with a case-sensitive collation.
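Building on that observation, you can force a case-sensitive or binary collation just for the comparison, without changing the column itself (collation names here are common choices, not the only valid ones):

```sql
-- '²' and '2' compare equal under the case-insensitive default,
-- but a case-sensitive or binary collation separates them.
SELECT *
FROM expTest
WHERE someNumbers COLLATE Latin1_General_CS_AS = N'2';

-- A binary collation is the strictest option:
SELECT *
FROM expTest
WHERE someNumbers COLLATE Latin1_General_BIN2 = N'2';
```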

Include wildcards in sql server in the values themselves - not when searching with LIKE

Is there a way to include wildcards in sql server in the values themselves - not when searching with LIKE?
I have a database that users search on. They search for model numbers that contain different wildcard characters but do not know that these wildcard characters exist.
For example, a model number may be 123*abc in the database, but the user will search for 1234abc because that's what they see for their model number on their unit at home.
I'm looking for a way to allow users to search without knowledge of wildcards but have a systematic way to include model numbers with wildcard characters in the database.
What you could do is add a PERSISTED computed column to your table holding a valid pattern expression for SQL Server. You stated that * should be any letter or numerical character, and that comma-delimited values in parentheses can be any one of those characters. Provided that neither commas nor parentheses appear in your main data, this should work:
USE Sandbox;
GO
CREATE TABLE SomeTable (SomeString varchar(15));
GO
INSERT INTO SomeTable
VALUES('123abc'),
('abc*987'),
('def(q,p,r,1)555');
GO
ALTER TABLE SomeTable ADD SomeString_Exp AS REPLACE(REPLACE(REPLACE(REPLACE(SomeString,'*','[0-9A-Za-z]'),'(','['),')',']'),',','') PERSISTED; --What you're interested in
SELECT *
FROM SomeTable;
GO
DECLARE @String varchar(15) = 'defp555';
SELECT *
FROM SomeTable
WHERE @String LIKE SomeString_Exp; --And how to search
GO
DROP TABLE SomeTable;
If * means any character at all, not just alphanumerics, and provided you're on SQL Server 2017 or later, you could shorten the whole thing to:
ALTER TABLE SomeTable ADD SomeString_Exp AS REPLACE(TRANSLATE(SomeString,'*()','_[]'),',','') PERSISTED;
I'm thinking either:
where @model_number like replace(model_number, '*', '%')
or
where @model_number like replace(model_number, '*', '_')
Depending on whether '*' means any string (first example) or exactly one character (second example).

SQL Server stored procedure to search list of values without special characters

What is the most efficient way to search a column and return all matching values while ignoring special characters?
For example if a table has a part_number column with the following values '10-01' '14-02-65' '345-23423' and the user searches for '10_01' and 140265 it should return '10-01' and '14-02-65'.
Processing the input with a regex to remove those characters is possible, so the stored procedure could be passed a parameter '1001 140265', which it could split to form a SQL statement like
SELECT *
FROM MyTable
WHERE part_number IN ('1001', '140265')
The problem here is that this will not match anything. In this case the following would work
SELECT *
FROM MyTable
WHERE REPLACE(part_number,'-','') IN ('1001', '140265')
But I need to remove all special characters, or at the very least all of these: ~!@#$%^&*()_+?/\{}[];. With a REPLACE for each of those characters, the query takes several minutes even when the number of parts in the IN clause is less than 200.
Performance is improved by creating a function that does the replaces, bringing the query under a minute. But without the removals the query takes around 1 second. Is there any way to create some kind of functional index that will work on multiple SQL Server engines?
You could use a computed column and index it:
CREATE TABLE MyTable (
part_number VARCHAR(10) NOT NULL,
part_number_int AS CAST(replace(part_number, '-', '') AS int)
);
ALTER TABLE dbo.MyTable ADD PRIMARY KEY (part_number);
ALTER TABLE dbo.MyTable ADD UNIQUE (part_number_int);
INSERT INTO dbo.MyTable (part_number)
VALUES ('100-1'), ('140265');
SELECT *
FROM dbo.MyTable AS MT
WHERE MT.part_number_int IN (1001, 140265);
Of course your replace statement will be more complex and you'll have to sanitize user input the same way you sanitize column values. But this is going to be the most efficient way to do it.
The index on the computed column lets this query seek efficiently.
But to be honest, I'd just create a separate column to store cleansed values for querying purposes and keep the actual values for display. You'll have to take care of extra update/insert logic, but that's minimal damage.
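A sketch of that separate-column idea, using a persisted computed column so inserts and updates stay automatic (the column name is illustrative, and the REPLACE chain would need extending to cover every special character you care about):

```sql
-- Persisted cleansed copy of part_number for searching; the original stays for display.
-- Extend the REPLACE chain to strip each special character you need removed.
ALTER TABLE dbo.MyTable ADD part_number_clean AS
    REPLACE(REPLACE(REPLACE(part_number, '-', ''), '_', ''), '/', '') PERSISTED;

CREATE INDEX IX_MyTable_part_number_clean ON dbo.MyTable (part_number_clean);

-- Cleanse the user's input the same way, then the index can be used:
SELECT *
FROM dbo.MyTable
WHERE part_number_clean IN ('1001', '140265');
```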

Alter column from varchar to decimal when nulls exist

How do I alter a sql varchar column to a decimal column when there are nulls in the data?
I thought:
ALTER TABLE table1
ALTER COLUMN data decimal(19,6)
But I just get an error, I assume because of the nulls:
Error converting data type varchar to numeric. The statement has been terminated.
So I thought to remove the nulls I could just set them to zero:
ALTER TABLE table1
ALTER COLUMN data decimal(19,6) NOT NULL DEFAULT 0
but I don't seem to have the correct syntax.
What's the best way to convert this column?
edit
People have suggested it's not the nulls that are causing me the problem, but non-numeric data. Is there an easy way to find the non-numeric data and either disregard it, or highlight it so I can correct it?
If it were just the presence of NULLs, I would just opt for doing this before the alter column:
update table1 set data = '0' where data is null
That would ensure all nulls are gone and you could successfully convert.
However, I wouldn't be too certain of your assumption. It seems to me that your new column is perfectly capable of handling NULL values since you haven't specified not null for it.
What I'd be looking for is values that aren't NULL but also aren't something you could turn in to a real numeric value, such as what you get if you do:
insert into table1 (data) values ('paxdiablo is good-looking')
though some may argue that should be treated as 0, a falsy value :-)
The presence of non-NULL, non-numeric data seems far more likely to be causing your specific issue here.
As to how to solve that, you're going to need a where clause that can recognise whether a varchar column is a valid numeric value and, if not, change it to '0' or NULL, depending on your needs.
I'm not sure if SQL Server has regex support but, if so, that'd be the first avenue I'd investigate.
Alternatively, provided you understand the limitations (a), you could use isnumeric() with something like:
update table1 set data = NULL where isnumeric(data) = 0
This will force all non-numeric values to NULL before you try to convert the column type.
And, please, for the love of whatever deities you believe in, back up your data before attempting any of these operations.
If none of the above solutions works, it may be worth adding a brand new column and populating it bit by bit. In other words, set it to NULL to start with, and then find a series of updates that will copy data to this new column.
Once you're happy that all data has been copied, you should then have a series of updates you can run in a single transaction if you want to do the conversion in one fell swoop. Drop the new column and then do the whole lot in a single operation:
create new column;
perform all updates to copy data;
drop old column;
rename new column to old name.
(a) From the linked page:
ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).
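On SQL Server 2012 or later, TRY_CONVERT avoids ISNUMERIC's quirks entirely: it returns NULL for any value that cannot actually be converted to the target type, so you can preview the offending rows before touching anything:

```sql
-- Preview the rows that would fail the conversion:
SELECT data
FROM table1
WHERE data IS NOT NULL
  AND TRY_CONVERT(decimal(19,6), data) IS NULL;

-- Then NULL them out (back up first!) and alter the column:
UPDATE table1
SET data = NULL
WHERE data IS NOT NULL
  AND TRY_CONVERT(decimal(19,6), data) IS NULL;

ALTER TABLE table1 ALTER COLUMN data decimal(19,6);
```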
Possible solution:
CREATE TABLE test
(
data VARCHAR(100)
)
GO
INSERT INTO test VALUES ('19.01');
INSERT INTO test VALUES ('23.41');
ALTER TABLE test ADD data_new decimal(19,6)
GO
UPDATE test SET data_new = CAST(data AS decimal(19,6));
ALTER TABLE test DROP COLUMN data
GO
EXEC sp_rename 'test.data_new', 'data', 'COLUMN'
As people have said, that error doesn't come from nulls; it comes from varchar values that can't be converted to decimal. The most typical reason I've found (after checking that the column doesn't contain outright invalid values, like non-digit characters or double commas) is varchar values that use a comma as the decimal separator instead of a period.
For instance, if you run the following:
DECLARE @T VARCHAR(256)
SET @T = '5,6'
SELECT @T, CAST(@T AS DEC(32,2))
You will get an error.
Instead:
DECLARE @T VARCHAR(256)
SET @T = '5,6'
-- Let's change the comma to a period
SELECT @T = REPLACE(@T,',','.')
SELECT @T, CAST(@T AS DEC(32,2)) -- Now it works!
Should be easy enough to look if your column has these cases, and run the appropriate update before your ALTER COLUMN, if this is the cause.
You could also apply a similar idea and search the column for all values that don't match a digits, or digits+'.'+digits, pattern, but I'm bad with regex so someone else can help with that. :)
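T-SQL has no built-in regex, but LIKE's bracket character classes are enough for this particular check; a sketch of flagging values containing anything other than digits or a period:

```sql
-- Rows containing any character that is not a digit or a period:
SELECT data
FROM table1
WHERE data LIKE '%[^0-9.]%';
```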
Also, the American convention uses commas as grouping separators, so the number 123100.5 would appear as '123,100.5'; in those cases you might want to just replace the commas with empty strings before converting.

Oracle varchar2 equivalent in sql server

create table #temp(name nvarchar(10))
insert into #temp values('one')
select * from #temp where name = 'one'
select * from #temp where name = 'one ' --one with space at end
drop table #temp
In the above I have used nvarchar for name.
My requirement is the result should be exist for the first select query, and it should not return for 2nd query. Do not trim the name. Advise me which data type can I use for this in sql server?
It's not the data type that can resolve this issue. You need to see this article:
INF: How SQL Server Compares Strings with Trailing Spaces
SQL Server follows the ANSI/ISO SQL-92 specification (Section 8.2, "Comparison Predicate", General rules #3) on how to compare strings
with spaces. The ANSI standard requires padding for the character
strings used in comparisons so that their lengths match before
comparing them. The padding directly affects the semantics of WHERE
and HAVING clause predicates and other Transact-SQL string
comparisons. For example, Transact-SQL considers the strings 'abc' and
'abc ' to be equivalent for most comparison operations.
There are several ways to overcome this, one is to use Like.
select * from #temp where name like 'one ' --one with space at end
This will return no result.
You should also see this blog post by Anthony Bloesch: Testing strings for equality counting trailing spaces.
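If changing the data type isn't an option, another workaround in the same spirit is to add a DATALENGTH check, since DATALENGTH (unlike =) counts the trailing space:

```sql
create table #temp(name nvarchar(10));
insert into #temp values('one');

-- 'one ' padding-matches 'one', but the byte lengths differ (8 vs. 6),
-- so this returns no row, as required:
select * from #temp
where name = 'one '
  and DATALENGTH(name) = DATALENGTH(N'one ');

-- Searching for 'one' still returns the row (6 = 6):
select * from #temp
where name = 'one'
  and DATALENGTH(name) = DATALENGTH(N'one');

drop table #temp;
```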