SQL Server CONVERT(NUMERIC(18,0), '') fails but CONVERT(INT, '') succeeds?

PRINT CONVERT(NUMERIC(18,0), '')
produces Error converting data type varchar to numeric.
However,
PRINT CONVERT(INT, '')
produces 0 without error...
Question: Is there some SQL Server flag for this, or will I need to write CASE statements for every varchar-to-numeric conversion? (Aside from the obvious question: why?)

Use ISNUMERIC:
declare @a varchar(20)
set @a = 'notanumber'
select case when isnumeric(@a) = 0 then 0 else convert(numeric(18,0), @a) end

ISNUMERIC doesn't always work as you might expect: in particular, it returns 1 for some values that can't subsequently be converted to numeric.
This article describes the issue and suggests how to work around it with UDFs.
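For illustration only, here is a minimal sketch of such a UDF. The name dbo.IsConvertibleToNumeric is made up for this example; it is deliberately stricter than ISNUMERIC, accepting only an optional leading sign, digits, and at most one decimal point:
-- Hypothetical helper, stricter than ISNUMERIC.
create function dbo.IsConvertibleToNumeric (@s varchar(50))
returns bit
as
begin
    -- Strip one optional leading sign, then require at least one digit,
    -- only digits and '.', and at most one decimal point.
    declare @body varchar(50) = case when left(@s, 1) in ('+', '-') then stuff(@s, 1, 1, '') else @s end
    if @body like '%[0-9]%'
       and @body not like '%[^0-9.]%'
       and len(@body) - len(replace(@body, '.', '')) <= 1
        return 1
    return 0
end
Used as a filter:
select dbo.IsConvertibleToNumeric('')      -- 0
select dbo.IsConvertibleToNumeric('1.2E3') -- 0
select dbo.IsConvertibleToNumeric('-12.5') -- 1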

An empty string converts to zero for the float and int types, but not for decimal. (And it converts to 01 Jan 1900 for datetime, which is the zero date.) I don't know why; it just does.
If you need decimal(18,0), use bigint instead, or cast via float first.
ISNUMERIC will accept '-', '.' and '1.2E3' as numbers, but all of them fail to convert to decimal.
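For reference, a small script illustrating these behaviours and the cast-via-float workaround mentioned above:
select convert(int, '')                            -- 0
select convert(float, '')                          -- 0
select convert(datetime, '')                       -- 1900-01-01 00:00:00.000
-- select convert(numeric(18,0), '')               -- error converting data type varchar to numeric
select convert(numeric(18,0), convert(float, ''))  -- 0: cast via float first
select isnumeric('-'), isnumeric('.'), isnumeric('1.2E3')  -- all return 1, yet none converts to decimal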

Related

Trim decimal off a varchar value

I've got a legacy SQL Server stored procedure that stopped working some time ago. While looking at it today, I found an inner join where one table stores the value as an int and the other stores it as a varchar in a (##.#) format. Not sure why or how that happened, but SQL Server is none too happy about it.
I need a simple programmatic bit of string manipulation to pull out everything to the left of the decimal point so I can cast or convert it to an int to fix the join.
I started with the following, however SUBSTRING requires a fixed length and the data could be 1-3 digits to the left of the decimal, so I'm having trouble with the dynamic aspect of it. For clarity's sake, I don't care what's to the right of the decimal.
cast(substring(H.Variable, 1, 1) as int)
First, find the index of the decimal by using CHARINDEX(). Then, you can pass that index to the LEFT() function:
LEFT(H.Variable, CHARINDEX('.', H.Variable) - 1)
Try:
CAST(TRY_CAST(H.Variable AS Float) AS Int)
That should get you the integer value of the varchar string--if it cannot be converted, it will come back as NULL.
It goes in the other direction from your question, but it is likely to be more accurate and perform better.
Note that you need SQL Server 2012 or later to use the TRY_CAST conversion...
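Applied to the join that broke, that might look something like the sketch below. The table and column names other than H.Variable are placeholders, since the question doesn't show them:
-- Placeholder names: only H.Variable comes from the question.
select *
from dbo.IntTable as i
inner join dbo.HistoryTable as H
    on i.IntValue = cast(try_cast(H.Variable as float) as int)
Rows whose varchar value cannot be converted produce NULL on the right-hand side and simply fall out of the join.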
If the data can contain values both with and without a decimal point, you need to account for that.
declare #table table (c1 varchar(64))
insert into #table
values
('123')
,('5465465.465465')
select
case when CHARINDEX('.', c1) = 0 then c1 else LEFT(c1, CHARINDEX('.', c1) - 1) end
from #table
Otherwise, using only LEFT() and CHARINDEX() will result in:
Invalid length parameter passed to the LEFT or SUBSTRING function.
Another way is
substring(c1,0,case when charindex('.',c1) = 0 then 9999 else charindex('.',c1) end)
Try:
CONVERT(INT, H.Variable)
It could be more tolerant...

Error converting data type varchar to float even when ISNUMERIC = 1

When I run the script:
select
cast(s as float)
from
t
where
ISNUMERIC(s) = 1
it stops with the error:
Error converting data type varchar to float.
Why does it happen? I'm trying to convert only the numeric values to float. How do I find out which row causes the error?
The ISNUMERIC function thinks just about everything is a number. Use TRY_CONVERT instead: if the value can't convert to your destination data type, it returns NULL.
select convert(float, '1,0,1')
where try_convert(float, '1,0,1') is not null
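To answer the second part of the question (finding which rows cause the error), the same idea works as a filter; a minimal sketch against the table and column from the question:
-- Rows that ISNUMERIC accepts but that cannot actually be converted to float
select s
from t
where isnumeric(s) = 1
  and try_convert(float, s) is null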
If you are on an older version of SQL, I would write my own function.
I usually run into this when the value in the column I'm trying to convert to float contains a comma (,) as a thousands separator:
SELECT ISNUMERIC('140,523.86')
The result is 1, but the value cannot be cast to float.
Replacing the commas works fine for me:
SELECT
CAST( replace(s,',','') AS float ) AS Result
FROM t
WHERE ISNUMERIC(replace(s,',','')) = 1
The ISNUMERIC() function will return 1 for values like 123e3 because they are interpreted as numeric values: SQL Server reads this as 123 times 10 to the power of 3, which really is a numeric (float) value.
You should try something like....
Select *
From tableName
WHERE Col NOT LIKE '%[^0-9]%'
This excludes any row that contains a non-digit character, including values that contain a decimal point.

t-sql: Different data types possible in a CASE?

I have this query
SELECT
CASE WHEN dbo.CFE_PPHY.P77 IS NOT NULL OR dbo.CFE_PPHY.P77 <>''
THEN MONTH(dbo.CFE_PPHY.P77)
WHEN dbo.CFE_PPHY.P70 IS NOT NULL OR dbo.CFE_PPHY.P70 <>''
THEN MONTH(dbo.CFE_SERVICE_EVTS.C10_2)
ELSE COALESCE(CONVERT(VARCHAR,dbo.CFE_PPHY.P77)+
CONVERT(VARCHAR,dbo.CFE_SERVICE_EVTS.C10_2),'toto') END
AS CFELiasse_DateEffetEIRL_MM_N
FROM CFE_PPHY LEFT JOIN CFE_SERVICE_EVTS ON CFE_PPHY.colA = CFE_SERVICE_EVTS.colB
The ELSE part is giving me headaches.
The columns CFE_PPHY.P77 and CFE_SERVICE_EVTS.C10_2 are datetime columns. I'm turning them into varchar, yet when I run the query I get the following error:
Msg 245, Level 16, State 1, Line 1 Conversion failed when converting the varchar value 'toto' to data type int.
Obviously, I cannot turn 'toto' into an integer. Fair enough. However, from my point of view I've converted the datetime values to varchar, so it should work.
Where am I wrong?
Thanks
You have to convert all of your CASE branches to varchar. SQL Server is deciding to cast the expression as int, so 'toto' is invalid. If all branches are converted to varchar, this error should be resolved.
http://blog.sqlauthority.com/2010/10/08/sql-server-simple-explanation-of-data-type-precedence/
Have a closer look at your CASE expression: in the first and second branches you're returning MONTH(...), which is obviously an integer.
But in the third branch you're returning a varchar, so SQL Server tries to convert it to int to match the data type of the previous branches, and fails.
Try it like this:
SELECT CASE
WHEN dbo.CFE_PPHY.P77 IS NOT NULL
OR dbo.CFE_PPHY.P77 <> ''
THEN convert(VARCHAR, MONTH(dbo.CFE_PPHY.P77))
WHEN dbo.CFE_PPHY.P70 IS NOT NULL
OR dbo.CFE_PPHY.P70 <> ''
THEN convert(VARCHAR, MONTH(dbo.CFE_SERVICE_EVTS.C10_2))
ELSE COALESCE(CONVERT(VARCHAR, dbo.CFE_PPHY.P77) + CONVERT(VARCHAR, dbo.CFE_SERVICE_EVTS.C10_2), 'toto')
END AS CFELiasse_DateEffetEIRL_MM_N
FROM CFE_PPHY
LEFT JOIN CFE_SERVICE_EVTS ON CFE_PPHY.colA = CFE_SERVICE_EVTS.colB
First, when converting to a string, always include a length (in SQL Server). The default length varies by context and may not be correct.
Second, the comparison of date/time values to '' is not necessary. This is not really a valid value for a date/time, although it does get converted to 0, which is 1900-01-01. The NULL comparison should be sufficient; otherwise, be explicit.
Third, string concatenation will return NULL if any of the arguments are NULL.
Fourth, table aliases make a query easier to write and to read.
As far as I can tell, your CASE is a bit overcomplicated. In the ELSE, we know that dbo.CFE_PPHY.P77 is NULL because of the first condition. So, how about:
SELECT (CASE WHEN p.P77 IS NOT NULL
THEN CAST(MONTH(p.P77) as VARCHAR(255))
WHEN p.P70 IS NOT NULL
THEN CAST(MONTH(e.C10_2) as VARCHAR(255))
ELSE 'toto'
END) AS CFELiasse_DateEffetEIRL_MM_N
FROM CFE_PPHY p LEFT JOIN
CFE_SERVICE_EVTS e
ON p.colA = e.colB;
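For what it's worth, two quick illustrations of the first and third points above (nothing here depends on the tables in the question):
-- Point 1: converting to varchar without a length defaults to 30 characters
-- in CAST/CONVERT (and to 1 character in a DECLARE), which can silently truncate.
select cast(replicate('x', 40) as varchar)   -- returns 30 x's, not 40
-- Point 3: '+' concatenation yields NULL as soon as any argument is NULL
select convert(varchar(20), cast(null as datetime), 120) + 'anything'   -- NULL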

SQL Server 2012 blank string comparison with 0

I am running this query on SQL Server 2012:
select 'weird'
where '' = 0
It's returning 'weird'.
As far as I understand, '' is quite different from 0. So please explain why the above happens.
Thanks
So, taking a look at the data types as they stand in the WHERE clause:
SELECT SQL_VARIANT_PROPERTY(0, 'BaseType'),SQL_VARIANT_PROPERTY('', 'BaseType')
They return int and varchar, respectively.
When comparing two different data types, the data type with the lower precedence will convert to the higher precedence, per MSDN.
In this case, Varchar converts to int.
select cast('' AS int)
The above returns 0.
Thus, your query effectively becomes:
select 'weird' where 0 = 0
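One way to see that it is the implicit conversion doing the work: compare against a string literal instead, so both sides are already varchar and nothing gets converted:
select 'weird'
where '' = '0'    -- varchar vs varchar: no implicit conversion, returns nothing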

Why does IsNull(LTrim(RTrim(Lower(null))), -1) return *?

Today I was testing something at work place and came across this one
Case 1:
Declare @a nvarchar(20)
Set @a = null
Select IsNull(LTrim(RTrim(Lower(@a))), -1)
Case 2:
Select IsNull(LTrim(RTrim(Lower(null))), -1)
The result in case 1 is -1 but * in case 2
I was expecting same results in both cases. Any reason?
Without a declared data type, the NULL in this case is treated as varchar(1). You can observe this by selecting the results into a #temp table:
Select IsNull(LTrim(RTrim(Lower(null))), -1) as x INTO #x;
EXEC tempdb..sp_help '#x';
Among the results you'll see:
Column_name Type Length
----------- ------- ------
x varchar 1
Since -1 can't fit in a varchar(1), you are getting * as output. This is similar to:
SELECT CONVERT(VARCHAR(1), -1);
If you want to collapse to a string, then I suggest enclosing the integer in single quotes so there is no confusion caused by integer <-> string conversions that aren't intended:
SELECT CONVERT(VARCHAR(1), '-1'); -- yields "-"
SELECT CONVERT(VARCHAR(30), '-1'); -- yields "-1"
I would not make any assumptions about how SQL Server will handle a "value" explicitly provided as null, especially when complex expressions make it difficult to predict which evaluation rules might trump data type precedence.
In SQL Server, there are "typed NULLs" and "untyped NULLs".
In the first case, the NULL is typed: SQL Server knows it is a varchar(20), and as your functions wrap the inner value, that data type is propagated throughout the expression.
In the second case, the NULL is untyped, so SQL Server has to infer its type from the surrounding expressions. The IsNull function evaluates the data type of the first operand and applies that to the whole expression, and thus the NULL defaults to varchar(1):
PRINT sql_variant_property(IsNull(LTrim(NULL), -1), 'BaseType'); -- varchar
PRINT sql_variant_property(IsNull(LTrim(NULL), -1), 'MaxLength'); -- 1
Another complication is that IsNull does not do type promotion in the same way that Coalesce does (though Coalesce has its own problems due to not being a true function: it is expanded to a CASE expression, sometimes causing unexpected side effects due to repeated expression evaluation). Look:
SELECT Coalesce(LTrim(NULL), -1);
This results in -1 with data type int!
Check out Sql Server Data Type Precedence and you'll see that int is much higher than varchar, so the whole expression becomes int.
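The same kind of probe used earlier for IsNull confirms this (shown here with SELECT):
select Coalesce(LTrim(NULL), -1) AS Result,
       sql_variant_property(Coalesce(LTrim(NULL), -1), 'BaseType') AS BaseType
-- Result = -1, BaseType = int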
The naked NULL is being passed to LOWER(), which expects a character type; that type defaults to one character wide. The value -1 doesn't fit in that field, so it is returned as "*".
You can get the same effect with:
select isnull(CAST(NULL as varchar(1)), -1)
The following code also causes the problem:
declare @val varchar;
set @val = -1
select @val
Note that COALESCE() does not cause this problem.
I'm pretty sure this is fully documented behavior.