I am loading data from a CSV file into a staging table (using BULK INSERT) where all column types are NVARCHAR(100). This works well.
The idea is then to insert that data into the production table, converting the data types along the way.
When trying to convert a column with numeric values from NVARCHAR to DECIMAL, all decimals are gone.
Create and insert from staging table to production table:
DROP TABLE IF EXISTS [dbo].[factFinanzbuchhaltung]
GO
CREATE TABLE [dbo].[factFinanzbuchhaltung]
(
Wert DECIMAL
)
GO
INSERT INTO [dbo].[factFinanzbuchhaltung]
SELECT CONVERT(DECIMAL(25, 2), ROUND(Wert,2))
FROM [dbo].[Stage_factFinanzbuchhaltung]
[Screenshot: how the data looks before and after conversion]
What am I doing wrong? I feel like I tried every combination of CONVERT, CAST and number of decimals, with or without rounding.
Decimal is a fixed-point number with a declared number of decimal places (a.k.a. scale). When you declare a column as type DECIMAL with no arguments, you get a decimal with precision 18 and scale 0 (source). In other words, it can only store integer values (whole numbers), and anything after the decimal point is rounded away.
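You can see the effect directly (a quick sketch; the literal is just an example value):
select cast(12.789 as decimal);        -- returns 13: plain DECIMAL means DECIMAL(18, 0)
select cast(12.789 as decimal(18, 2)); -- returns 12.79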
You need to declare with the desired number of decimals, e.g. DECIMAL(18, 2) for two decimals. A quick look at your screenshot suggests you need DECIMAL(18, 8). The only other option would be to use FLOAT (double precision), but that could lead to loss of precision. In some database systems you also have a DECFLOAT (decimal floating point) type, but SQL Server does not have this type.
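Applied to the tables from your question, a corrected version might look like this (the scale of 8 is an assumption based on the screenshot; adjust it to your data):
DROP TABLE IF EXISTS [dbo].[factFinanzbuchhaltung]
GO
CREATE TABLE [dbo].[factFinanzbuchhaltung]
(
Wert DECIMAL(18, 8) -- declare precision and scale explicitly
)
GO
INSERT INTO [dbo].[factFinanzbuchhaltung]
SELECT CONVERT(DECIMAL(18, 8), Wert)
FROM [dbo].[Stage_factFinanzbuchhaltung]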
I have a Redshift table which has a decimal column of (38, 29), but the original data's maximum integer part is 6 digits and its scale is 12, i.e. DECIMAL(18, 12). But the table is created using the max precision and scale, so all the data in it has 0's at the end of the scale part as padding.
For Example:
12345.123456789112300000000000000000000
All the data in the table is like the above example.
Now I'm retrieving the data from the table using the below query.
select cast(column as decimal(30,6)) from table;
The output I'm getting is
12345.123456
But when I try the below query
select cast(12345.123456789112300000000000000000000 as decimal(30,6)) from table;
The output I'm getting is
12345.123457
I want to know why this is happening. When I cast the column in the table, it is not rounding to the nearest value, it is just truncating.
But when I try with the decimal literal itself, it is not truncating, it is rounding.
I also want to know how to achieve the second query's result on the table itself.
So this comes down to: when is a cast not a cast? If I cast an integer to an int, it does nothing. Casting a varchar to a shorter varchar is nearly as simple, as long as the data fits. Casting a decimal to a lower-scale decimal is also a simple operation, as it is not changing the data type, just an attribute of it (scale). What you desire is that Redshift implicitly ROUNDS the values when you make this conversion, and it does not. (I'll let the database philosophers debate whether this is a bug or not.)
Here's a simple example to highlight this:
drop table if exists goo;
create table goo (rownum int, num decimal(30,6));
insert into goo select 1, 12345.123456789112300000000000000000000::text;             -- via text
insert into goo select 2, 12345.123456789112300000000000000000000::decimal(38,29);   -- via a higher-scale decimal
insert into goo select 3, 12345.123456789112300000000000000000000::double precision; -- via floating point
select rownum, num::text from goo;
In all 3 of these examples there is an implicit cast to the data type of the column 'num' in the table. However, you can see that what gets into the table differs. Lots of experiments can be set up like this. (Note that I'm casting the result to text to avoid any display-precision changes in the client.)
The answer in your case is to explicitly ROUND() the value.
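A minimal sketch of that fix, where column_name and your_table stand in for the placeholder names in your query (round to the target scale before casting):
select cast(round(column_name, 6) as decimal(30,6)) from your_table;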
I have a column in a table with a varchar datatype. It has 15 digits after the decimal point. Now I am having a hard time converting it to a numeric format: float, double, etc.
Does anyone have any suggestions?
Example :
Table1
Column1
-------------------
-28.851540616246499
-22.857142857142858
-26.923076923076923
76.19047619047619
I tried using the following statement and it doesn't seem to work:
update table1
set Column1 = Convert(float, column1)
Any suggestions?
You can use the decimal data type and specify the precision to state how many digits are after the decimal point. So you could use decimal(28,20) for example, which would hold 28 digits with 20 of them after the decimal point.
Here's a SQL Fiddle, showing your data in decimal format.
Fiddle sample:
create table Table1(MyValues varchar(100))
insert into Table1(MyValues)
values
('-28.851540616246499'),
('-22.857142857142858'),
('-26.923076923076923'),
('76.19047619047619')
So the values are held as varchar in this table, but you can cast them to decimal as long as they are all valid values, like so:
select cast(MyValues as decimal(28,20)) as DecimalValues
from table1
Your Sample
Looking at your sample update statement, you wouldn't be able to convert the values from varchar to a numeric type and insert them back into the same column, as the column is of type varchar. You would be better off adding a new column with a numeric data type and updating that.
So if you had 2 columns:
create table Table1(MyValues varchar(100), DecimalValues decimal(28,20))
You could do the below to update the numeric column with the varchar values that have been cast to decimal:
update Table1
set DecimalValues = cast(MyValues as decimal(28,20))
I think you're trying to actually change the data type of that column?
If that is the case you want to ALTER the table and change the column type over to float, like so:
alter table table1
alter column column1 float
See fiddle: http://sqlfiddle.com/#!6/637e6/1/0
You would use CONVERT if you're changing the text values to numbers for temporary use within a query (not to actually permanently change the data).
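For example (a sketch against the question's table; the conversion only affects the query result, and the stored values stay varchar):
select column1, convert(float, column1) as NumericValue
from table1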
Can someone clarify for me why I get an error when I try to set the variable @a in the example below?
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*31
/*
ERROR:
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
*/
What I could figure out till now is that, internally, SQL starts doing the math by evaluating the multiplication and placing the temporary result into an INT, then casting it to a BIGINT.
However, if I add a 1.0 * to my list of numbers, there is no error, hence I believe that this time SQL uses float as a temporary result, then casts it to BIGINT:
DECLARE @b BIGINT
SET @b = 1.0 * 7*11*13*17*19*23*29*31
/*
NO ERROR
*/
Frankly, I don't see anything wrong with the code... it's so simple...
[ I am using SQL 2008 ]
[EDIT]
Thanks Nathan for the link.
That's good information I didn't know about, but I still don't understand why I get the error and why I have to do "tricks" to get a simple script like this working.
Is it something that I should know how to deal with as a programmer?
Or is this a bug? If so, I will consider this question closed.
When you're doing calculations like this, the individual numbers are stored just large enough to hold that number, ie: numeric(1,0). Check this out:
Caution
When you use the +, -, *, /, or % arithmetic operators to perform implicit or explicit conversion of int, smallint, tinyint, or bigint constant values to the float, real, decimal or numeric data types, the rules that SQL Server applies when it calculates the data type and precision of the expression results differ depending on whether the query is autoparameterized or not.
Therefore, similar expressions in queries can sometimes produce different results. When a query is not autoparameterized, the constant value is first converted to numeric, whose precision is just large enough to hold the value of the constant, before converting to the specified data type. For example, the constant value 1 is converted to numeric (1, 0), and the constant value 250 is converted to numeric (3, 0).
When a query is autoparameterized, the constant value is always converted to numeric (10, 0) before converting to the final data type. When the / operator is involved, not only can the result type's precision differ among similar queries, but the result value can differ also. For example, the result value of an autoparameterized query that includes the expression SELECT CAST (1.0 / 7 AS float) will differ from the result value of the same query that is not autoparameterized, because the results of the autoparameterized query will be truncated to fit into the numeric (10, 0) data type. For more information about parameterized queries, see Simple Parameterization.
http://msdn.microsoft.com/en-us/library/ms187745.aspx
Edit
This isn't a bug in SQL Server. From that same page, it states:
The int data type is the primary integer data type in SQL Server.
and
SQL Server does not automatically promote other integer data types (tinyint, smallint, and int) to bigint.
This is defined behavior. As a programmer, if you have reason to believe that your data will overflow the data type, you need to take precautions to avoid that situation. In this case, simply converting one of those numbers to a BIGINT will solve the problem.
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*CONVERT(BIGINT, 31)
In the first example SQL Server multiplies a list of INTs together, discovers the result is too big to fit in an INT, and the error is generated. In the second example, it notices the 1.0 literal (which is typed numeric, not float), so it converts all the INTs to numeric first and then does the multiplication.
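You can verify which type an expression takes with SQL_VARIANT_PROPERTY (a quick sketch using the constants from the question):
SELECT SQL_VARIANT_PROPERTY(7 * 11, 'BaseType');       -- int
SELECT SQL_VARIANT_PROPERTY(1.0 * 7 * 11, 'BaseType'); -- numeric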
Similarly, you can do this:
DECLARE @a BIGINT,
@b BIGINT
set @b = 1
SET @a = @b*7*11*13*17*19*23*29*31
This works fine because it notices there's a BIGINT, so it converts all the INTs to BIGINTs and then does the multiplication.
This is on SQL Server 2008.
I have several columns I want to convert from money and decimal to varchar, for example a column called item_amount.
How will these values be converted?
Will it be the same as convert(varchar, item_amount)? Running a query like select item_amount, convert(varchar, item_amount) from <table> renders the columns identically, which is what I would expect and want.
I should be safe from possible truncation, correct?
Assuming there are enough characters in the varchar column (which would be 39, since the max precision for a decimal column is 38 + 1 character for the decimal point). None of the numeric values are even close to 38 digits, most in the 3-5 range.
I've run this command successfully on a test table and want to make sure I'm not overlooking or forgetting something that's going to screw me: alter table <mytable> alter column item_amount varchar(39) default '0' (this is after dropping the existing default ((0)) constraint).
With regard to the way conversion is done, yes, you are correct: as long as the VARCHAR column you are placing it in has the right number of characters available, you will be set to go.
With regard to your change of amount to varchar, you should be fine here, as it will do the conversion.
I just have to note that it doesn't sound like a good idea to do this, as you are no longer working with numbers for sorting, filtering, etc., but that's just a note.
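For example, a quick illustration of the sorting pitfall (hypothetical table and values, not from your schema):
create table #amounts (item_amount varchar(39));
insert into #amounts values ('9.50'), ('10.25'), ('100.00');
select item_amount from #amounts order by item_amount;
-- lexicographic order: 10.25, 100.00, 9.50 (not the numeric order 9.50, 10.25, 100.00)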
I have a table with two decimal(18,0) fields.
I am inserting two decimal values into this table, for example 1.11.
When I select from the table (with no casts), I get 1.
I'm losing all precision and I have no clue why.
insert into TEST values (153, 'test', 'test', 1, 1, 1.11, 1.11)
Select * from TEST shows 1 and 1 instead of 1.11, 1.11.
Any Ideas?
When you declare a field as decimal(18,0), you are saying that you want 0 digits after the decimal point (a scale of 0). You're going to want to define those columns as decimal(18,2) (or however many decimal places you desire) in order to maintain a value of 1.11.
Refer to the MSDN page on decimal and numeric types for the grisly details.
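As a minimal sketch (TEST_fixed is a hypothetical stand-in for your table, reduced to the two decimal fields):
create table TEST_fixed (amount1 decimal(18,2), amount2 decimal(18,2));
insert into TEST_fixed values (1.11, 1.11);
select * from TEST_fixed; -- returns 1.11 and 1.11 as expected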
Define the precision and scale for DECIMAL every time; otherwise it stores only integer values, not decimal values.
Try changing the type to decimal(9,2).
Maybe try creating the columns as
decimal(18,2)