SQL converting Text fields

I'm bumping into some NULL fields in a SQL Server 2005 database.
Some report NULL and others have values like 1.00 or 713.00.
I'd like a bulletproof way to convert the NULLs to 0 and values like '1.00' and '713.00' into the MONEY type.

CREATE TABLE dbo.Test_Text_Convert
(
my_string TEXT NULL
)
GO
INSERT INTO dbo.Test_Text_Convert VALUES (NULL)
INSERT INTO dbo.Test_Text_Convert VALUES ('7.10')
INSERT INTO dbo.Test_Text_Convert VALUES ('xxx')
INSERT INTO dbo.Test_Text_Convert VALUES ('$20.20')
INSERT INTO dbo.Test_Text_Convert VALUES ('20.2020')
GO
SELECT
CASE
WHEN ISNUMERIC(CAST(my_string AS VARCHAR(MAX))) = 1
THEN CAST(ISNULL(CAST(my_string AS VARCHAR(MAX)), '0') AS MONEY)
ELSE 0
END
FROM
dbo.Test_Text_Convert
GO
I've set invalid strings to be 0, but you could easily change that behavior.
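For example, a minimal variation (just a sketch) keeps the NULL-to-0 behaviour the question asks for, but lets genuinely invalid strings come back as NULL instead of being masked as 0:
SELECT
CASE
WHEN my_string IS NULL THEN 0 -- NULLs still become 0, as requested
WHEN ISNUMERIC(CAST(my_string AS VARCHAR(MAX))) = 1
THEN CAST(CAST(my_string AS VARCHAR(MAX)) AS MONEY)
ELSE NULL -- invalid strings such as 'xxx' show up as NULL
END
FROM
dbo.Test_Text_Convert
GO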

You can convert from null via the coalesce function:
coalesce(a, b) -- returns b if a is null, a otherwise

To cast from text to money and handle nulls, try this:
CAST(COALESCE(column, '0') AS MONEY)
See COALESCE and CAST for details.
An alternative when things are more complex is CASE..WHEN.

I'm not sure what the performance will be like, but SQL Server 2005 introduced the varchar(MAX) and nvarchar(MAX) data types (among others, like varbinary(MAX)). These are true varchars and nvarchars, and can be used anywhere a regular varchar or nvarchar can (such as in CONVERT), but they also have the capacity of TEXT and IMAGE fields. In fact, the TEXT and IMAGE types are deprecated, so going forward it's recommended that you use the new types.
That being said, you could just use this:
convert(Money, coalesce(text_column_name, 0))
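Combining that with the test table from the question, a sketch might look like the following; the CAST to VARCHAR(MAX) is there so COALESCE and CONVERT operate on the newer type rather than on TEXT directly. Note it will still throw on genuinely non-numeric strings such as 'xxx', so keep the ISNUMERIC guard from the first answer if those can occur:
SELECT
CONVERT(MONEY, COALESCE(CAST(my_string AS VARCHAR(MAX)), '0')) -- NULL rows come back as 0, numeric strings as MONEY
FROM
dbo.Test_Text_Convert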

Related

Error converting data type varchar to float on non varchar data type

I've come across an issue (that I've partially solved) but can't seem to find a reason behind the failure in the first place.
I have a field in a table which holds a combination of alpha and numerical values. The field is a char(20) data type (which is wrong, but unchangeable) and holds either a NULL value, 'Unknown', or the "numbers" 0, 50, 100. The char field pads the values with trailing white space. This is a known issue and we can't do a thing about it.
To remove the 'Unknown' values, we have a series of coalesce statements in place, and these two return the error message from the title.
,coalesce(DHMCC.[HESA Module Total Proportion Taught], 'Missing')
,cast(isnull(DHMCC.[HESA Module Total Proportion Taught] ,'Missing') as varchar(10))
The question I have is why I'm getting this error when I'm not converting a varchar data type to float (or am I?).
Does anyone have an idea as to where to look next to try to fix this error?
The STR() function accepts a float datatype as the first argument, therefore SQL Server is implicitly converting whatever you pass to this function, which in your case is the CHAR(20) column. Since 'Unknown' can't be converted to a float, you get the error.
If you run the following with the actual execution plan enabled:
DECLARE @T TABLE (Col CHAR(20));
INSERT @T VALUES (NULL);
SELECT Result = ISNULL(STR(Col, 25, 0), 'Missing')
FROM @T
Then check the execution plan XML and you will see the implicit conversion:
<ScalarOperator ScalarString="isnull(str(CONVERT_IMPLICIT(float(53),[Col],0),(25),(0)),'Missing')">
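To see the failure itself rather than just the implicit conversion, the same sketch with an 'Unknown' row reproduces the error from the title:
DECLARE @T TABLE (Col CHAR(20));
INSERT @T VALUES ('Unknown');
SELECT Result = ISNULL(STR(Col, 25, 0), 'Missing')
FROM @T;
-- Error converting data type varchar to float.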
The simplest solution is probably to use a case expression and not bother with any conversion at all (but only if you know you will only ever have the five values you listed):
DECLARE @T TABLE (Col CHAR(20));
INSERT @T VALUES (NULL), ('0'), ('50'), ('100'), ('Unknown');
SELECT Result = CASE WHEN Col IS NULL OR Col = 'Unknown' THEN 'Missing' ELSE Col END
FROM @T;
Result
---------
Missing
0
50
100
Missing
If you really want the STR() function, you can make the conversion explicit, but use TRY_CONVERT() so that anything that is not a float simply returns NULL:
DECLARE @T TABLE (Col CHAR(20));
INSERT @T VALUES (NULL), ('0'), ('50'), ('100'), ('Unknown');
SELECT Result = ISNULL(STR(TRY_CONVERT(FLOAT, Col), 25, 0), 'Missing')
FROM @T
Result
------------
Missing
0
50
100
Missing
Although, since the numbers you have stated are integers, I would be inclined to convert them to integers rather than floats:
DECLARE @T TABLE (Col CHAR(20));
INSERT @T VALUES (NULL), ('0'), ('50'), ('100'), ('Unknown');
SELECT Result = ISNULL(CONVERT(VARCHAR(10), TRY_CONVERT(INT, Col)), 'Missing')
FROM @T;
Result
---------
Missing
0
50
100
Missing
Thanks to @GarethD.
I've only just come across TRY_CONVERT and this seems like the better option, so thanks to him for that pointer; I'm also trying TRY_CAST as well.
The data really should be held in a varchar field; it's referential and not used for calculation. This seems to work equally well:
-- Declare @varText as varchar(16) = '10 '
-- Declare @varText as char(16) = 'Unknown'
-- Declare @varText as char(16) = ''
SELECT
ISNULL(NULLIF(TRY_CAST(LTRIM(RTRIM(@varText)) as varchar(16)), ''), 'Missing') AS HESA
I've created this test scenario which works ok.

'LIKE' issues with FLOAT: SQL query needed to find values >= 4 decimal places

I have a conundrum....
There is a table with one NVARCHAR(50) Float column that has many rows with numbers of various decimal lengths:
'3304.063'
'3304.0625'
'39.53'
'39.2'
I need to write a query to find only numbers with decimal places >= 4
First the query I wrote was:
SELECT
Column
FROM Tablename
WHERE Column LIKE '%.[0-9][0-9]%'
The above code finds all numbers with decimal places >= 2:
'3304.063'
'3304.0625'
'39.53'
Perfect! Now, I just need to add two more [0-9] patterns...
SELECT
Column
FROM Tablename
WHERE Column LIKE '%.[0-9][0-9][0-9][0-9]%'
This returned nothing! What?
Does anyone have an explanation as to what went wrong and/or a possible solution? I'm kind of stumped, and my hunch is that it is some sort of LIKE limitation.
Any help would be appreciated!
Thanks.
After your edit, you stated you are using FLOAT, which is an approximate-value type stored in 4 or 8 bytes, giving 7 or 15 digits of precision. The documentation explicitly states that not all values in the data type range can be represented exactly. It also says you can use the STR() function when converting it, which you'll need in order to get your formatting right. Here is how:
declare @table table (columnName float)
insert into @table
values
('3304.063'),
('3304.0625'),
('39.53'),
('39.2')
--see the conversion
select * , str(columnName,20,4)
from @table
--now use it in a where clause:
--return all values where the last digit of the STR() conversion isn't 0
select *
from @table
where right(str(columnName,20,4),1) != 0
OLD ANSWER
Your LIKE statement would do it, and here is another way just to show they both work.
declare @table table (columnName varchar(64))
insert into @table
values
('3304.063'),
('3304.0625'),
('39.53'),
('39.2')
select *
from @table
where len(right(columnName,len(columnName) - charindex('.',columnName))) >= 4
select *
from @table
where columnName like '%.[0-9][0-9][0-9][0-9]%'
One thing that could be causing this is a space in the number somewhere. Since you said the column type was VARCHAR this is a possibility, and it could be avoided by storing the value as DECIMAL:
declare @table table (columnName varchar(64))
insert into @table
values
('3304.063'),
('3304. 0625'), --notice the space here
('39.53'),
('39.2')
--this would return nothing
select *
from @table
where columnName like '%.[0-9][0-9][0-9][0-9]%'
How to find out if this is the case?
select *
from @table
where columnName like '% %'
Or, anything but numbers and decimals:
select *
from @table
where columnName like '%[^.0-9]%'
The following is working fine for me:
declare @tab table (val varchar(50))
insert into @tab
select '3304.063'
union select '3304.0625'
union select '39.53'
union select '39.2'
select * from @tab
where val like '%.[0-9][0-9][0-9][0-9]%'
Assuming your table only has numerical data, you can cast them to decimal and then compare:
SELECT COLUMN
FROM tablename
WHERE CAST(COLUMN AS DECIMAL(19,4)) <> CAST(COLUMN AS DECIMAL(19,3))
You'd want to test the performance of this against using the character data type solutions that others have already suggested.
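For illustration, here is that predicate run against the sample values in a table variable (a quick sketch, not a performance test):
DECLARE @table TABLE (columnName VARCHAR(64));
INSERT INTO @table VALUES ('3304.063'), ('3304.0625'), ('39.53'), ('39.2');
SELECT columnName
FROM @table
WHERE CAST(columnName AS DECIMAL(19,4)) <> CAST(columnName AS DECIMAL(19,3));
-- only '3304.0625' is returned: rounding it to 3 decimal places changes its value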
You can use REVERSE:
declare @vals table ([Val] nvarchar(50))
insert into @vals values ('3304.063'), ('3304.0625'), ('39.53'), ('39.2')
select [Val]
from @vals
where charindex('.',reverse([Val]))>4

SQL Round if numerical?

(Beginner at SQL)
I've been getting the error
'Error converting data type nvarchar to float.'
Which is because I was trying to round an nvarchar(10) column containing both characters and numbers, and obviously it can't round the characters. (I can't make two separate columns with different data types, as they both need to be in this column.)
I'm looking for a way to round the numbers in the nvarchar column whilst also returning the characters
I've been trying CAST/CONVERT but nothing seems to work.
I've also tried
CASE WHEN ISNUMERIC(Tbl1.Column1) = 1
THEN cast(Round(Tbl1.Column1, 0) AS float)
ELSE Tbl1.Column1 END AS 'Column1'
in the select statement
I can't figure out what else will solve this!
Sample Data in this column would be
8.1
2
9.0
9.6
A
-
5.3
D
E
5.1
-
I would go for try_convert() instead of isnumeric():
COALESCE(CONVERT(VARCHAR(255), TRY_CONVERT(DECIMAL(10, 0), Tbl1.Column1)),Tbl1.Column1) as Column1
A conversion problem arises with your approach because a case expression returns a single type. One of the branches is numeric, so the return type is numeric, and the conversion in the else branch fails.
You can fix your version by converting the then clause to a string after converting to a float.
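A minimal, table-free sketch of the type-precedence rule being described; because one branch is FLOAT, the whole expression is FLOAT, so the string branch must also be convertible:
SELECT CASE WHEN 1 = 0 THEN CAST(1.0 AS FLOAT) ELSE 'A' END;
-- Error converting data type varchar to float.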
Since you hold both types in this column, you need to cast your rounded value back to varchar:
declare @Tbl1 table (Column1 varchar(10))
insert into @Tbl1 (Column1) values ('8.1'), ('2'), ('9.0'),
('9.6'), ('A'), ('5.3'),
('D'), ('E'), ('5.1'), ('-')
select case when TRY_CONVERT(float, Column1) IS NULL then Column1
else cast(cast(Round(Column1, 0) as float) as varchar(10))
end AS 'Column1'
from @Tbl1
The outcome is:
Column1
-------
8
2
9
10
A
5
D
E
5
-
In case you get the error that TRY_CONVERT is not a recognized built-in function name, your database compatibility level is lower than that of SQL Server 2012.
You can correct that using this command:
ALTER DATABASE your_database SET COMPATIBILITY_LEVEL = 120;
Also note that after this statement Gordon's answer works as well, and I agree that it is a better answer than mine.
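If you are unsure what level a database is currently on, a quick check (it just reads sys.databases) is:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
-- TRY_CONVERT needs level 110 (SQL Server 2012) or higher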

SQL automatically rounding off values

I have two tables. The first table (Table1) is used to get the records and the second table (Table2) is used to insert the first table's records into. But I am a little bit confused by the result.
In Table1 and Table2 the column "Amount" has the same data type, i.e. nvarchar(max).
Table1
Id Amount
1 Null
2 -89437.43
2 -533.43
3 22403.88
If I run this query
Insert into Table2(Amount)
Select Amount from Table1
Then I get a result like this; I don't know why the values are automatically rounded off:
Table2
Id Amount
1 Null
2 -89437.4
2 -533.43
3 22403.9
SQL Server will round float values when converting to and from string types.
And then you have the fun bits of the empty string becoming 0, as well as other strange effects:
SELECT CAST(CAST('' AS float) AS nvarchar(MAX))
SELECT CAST(CAST('0.E0' AS float) AS nvarchar(MAX))
Use decimal.
If you need to store "blank" (how does this differ from NULL?), use a separate bit column to allow that extra value.
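A rough sketch of that shape, with made-up table and column names:
CREATE TABLE dbo.Payments
(
Amount DECIMAL(19, 4) NULL, -- NULL for genuinely unknown amounts
IsBlank BIT NOT NULL DEFAULT 0 -- separate flag if "blank" must be distinguishable from NULL
);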
Here is a good explanation of your question:
either you explicitly cast to float, decimal, or numeric(x,x) (where x is a numeric value), in which case the data converts as-is; otherwise the last digits get rounded off.
Insert into Table2(Amount)
Select cast(Amount as numeric(18,2)) -- or: cast(Amount as float)
from Table1
Check this link:
TSQL Round up decimal number
In my case I was doing the conversion to the correct data type but had decimal(18,0) for the column in the table. So make sure the decimal places are represented properly for the column, e.g. decimal(18,2).
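In other words (hypothetical table and column names), make sure the target column itself carries the scale, assuming the existing values fit the new definition:
ALTER TABLE dbo.TargetTable ALTER COLUMN Amount DECIMAL(18, 2) NULL;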
Perhaps it's your query tool that's truncating to 8 characters.
Check the actual fields lengths to see if the problem is really in the database:
SELECT LEN(Amount)
FROM Table2
WHERE Amount LIKE '%-89437.%'
Unreproducible. Running this script on SQL Server 2012:
DECLARE @T1 TABLE ([Amount] nvarchar(max) NULL);
DECLARE @T2 TABLE ([Amount] nvarchar(max) NULL);
INSERT INTO @T1 ([Amount])
VALUES (NULL),('-89437.43'),('-533.43'),('22403.88');
Insert into @T2(Amount)
Select Amount from @T1;
SELECT * FROM @T2;
Produces this result:
Amount
NULL
-89437.43
-533.43
22403.88
The problem you describe does not exist.
This will show you the problem:
DECLARE @T1 TABLE ([Amount123456789] money NULL);
DECLARE @T2 TABLE ([Amount123456789] nvarchar(max) NULL);
INSERT INTO @T1 ([Amount123456789])
VALUES (NULL),('-89437.43123'),('-533.43456'),('22403.88789'),(22403.88789);
Insert into @T2(Amount123456789)
Select Amount123456789 from @T1;
SELECT * FROM @T1;
SELECT * FROM @T2;

SSIS / SQL Server - dealing with various money type notations

In a SQL Server money column, how can I deal with different currency notations coming in from country-specific Excel files via SSIS (arriving as varchar and transformed to money), taking care of comma and dot representations to make sure the values stay correct?
For example if these are three column values in Excel:
22,333.44
22.333,44
22333,44
the first notation above will result in 22,3334, which of course is incorrect.
What do I need to do with the data? Is it a string replace or something more elegant?
thank you.
UPDATED:
After discussion in the comments the problem has been clarified. The values in the Excel column can be in many different regional formats (English using commas to separate thousands and '.' for the decimal point, German using '.' for separating thousands and a comma for the decimal point).
Assuming that the destination format is English and you don't have an accompanying column to indicate the format, then you're going to have to implement a kludge of a workaround. If you can guarantee there will always be 2 digits after the "decimal place" (a comma in the German format) then REPLACE(REPLACE(@Value,',',''),'.','') will get rid of every comma/point. Then you will have to get the length of the resulting varchar and manually insert a decimal point before the last 2 characters. Here's a sample implementation:
declare @number varchar(12), @trimmednumber varchar(12), @inserteddecimal varchar(12)
set @number = '22.333,44'
select @trimmednumber = REPLACE(REPLACE(@number,',',''),'.','')
select @inserteddecimal = (LEFT(@trimmednumber,len(@trimmednumber)-2) + '.' + RIGHT(@trimmednumber,2))
select @number AS [Original], @trimmednumber AS [Trimmed], @inserteddecimal AS [Result]
And the results:
Original Trimmed Result
------------ ------------ ------------
22.333,44 2233344 22333.44
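The same logic can be written set-based; this is only a sketch, using a hypothetical @Staging table, and it still assumes every value ends in exactly two decimal digits:
DECLARE @Staging TABLE (RawValue VARCHAR(20));
INSERT INTO @Staging VALUES ('22,333.44'), ('22.333,44'), ('22333,44');
SELECT RawValue,
CAST(STUFF(REPLACE(REPLACE(RawValue, ',', ''), '.', ''), -- strip every separator
LEN(REPLACE(REPLACE(RawValue, ',', ''), '.', '')) - 1, -- position just before the last two digits
0, '.') AS MONEY) AS NormalisedValue
FROM @Staging;
-- all three rows come back as 22333.44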
Original Answer:
I may be misunderstanding your question but if you take in those values as VARCHAR and insert them into MONEY columns then the implicit conversion should be correct.
Here's what I've knocked together to test:
declare @money_varchar1 varchar(12), @money_varchar2 varchar(12), @money_varchar3 varchar(12)
set @money_varchar1 = '22,333.44'
set @money_varchar2 = '22.333,44'
set @money_varchar3 = '22333,22'
declare @table table (Value money)
insert into @table values (@money_varchar1)
insert into @table values (@money_varchar2)
insert into @table values (@money_varchar3)
select * from @table
And the results:
Value
---------------------
22333.44
22.3334
2233322.00