I am investigating a performance issue with the following SQL statement:
Update tableA
set columnA1 = columnB1
from tableB
where tableA.columnA2 = tableB.columnB2
The problem is that tableA.columnA2 is of type nvarchar(50) while tableB.columnB2 is of type bigint. My question is: how does SQL Server execute such a query? Does it cast the bigint to nvarchar and compare with nvarchar comparison operators, or does it cast the nvarchar to bigint and compare with bigint comparison operators?
Another thing: if I have to leave those column types as they are (tableA.columnA2, tableB.columnB2), how can I rewrite this query to improve performance?
Note: this query only touches around 100,000 records, but it takes forever.
Thanks in advance, I really appreciate your help.
In the comparison, the nvarchar will be converted to bigint, because bigint has a higher data type precedence.
See http://msdn.microsoft.com/en-us/library/ms190309.aspx
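To illustrate what that precedence rule means in practice, here is a small hedged sketch (the temp tables and values are made up, not from the question): since the nvarchar side is the one converted to bigint, every nvarchar value must be convertible, and a single non-numeric value makes the whole comparison fail at runtime.
-- Hypothetical repro: two throw-away tables with the same type mismatch as in the question
CREATE TABLE #A (keyCol nvarchar(50));
CREATE TABLE #B (keyCol bigint);
INSERT INTO #A VALUES (N'12345');
INSERT INTO #B VALUES (12345);

-- Works: the nvarchar value is implicitly converted to bigint before comparing
SELECT * FROM #A JOIN #B ON #A.keyCol = #B.keyCol;

-- A non-numeric value cannot be cast to bigint, so the same SELECT now fails at runtime
INSERT INTO #A VALUES (N'not a number');
SELECT * FROM #A JOIN #B ON #A.keyCol = #B.keyCol;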
EDIT:
I was assuming that the conversion is always to the data type of the updated table, but this is wrong! #podiluska's answer is correct. I tested with a statement similar to the one in the question, and in the plan for the update statement you can see that when a bigint and an nvarchar column are compared, the conversion is always to bigint, no matter whether the bigint or the nvarchar column belongs to the updated table: the query plan always contains an expression Scalar Operator(CONVERT_IMPLICIT(bigint,[schema1].[table1].[col1],0)) for the nvarchar column.
To help performance, you can create a computed column on the table that holds the nvarchar column, using the expression cast(columnA2 as bigint). Then you could build an index on it (and possibly a second index on tableB covering columnB2 and columnB1).
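A minimal sketch of that suggestion, assuming tableA is the table with the nvarchar column (the computed column and index names here are made up):
-- Persisted computed column that exposes columnA2 as bigint
ALTER TABLE tableA ADD columnA2_AsBigint AS CAST(columnA2 AS bigint) PERSISTED;

-- Index it so the join can seek instead of converting every row during a scan
CREATE INDEX IX_tableA_columnA2_AsBigint ON tableA (columnA2_AsBigint);

-- The update then joins bigint to bigint with no implicit conversion
UPDATE tableA
SET columnA1 = tableB.columnB1
FROM tableB
WHERE tableA.columnA2_AsBigint = tableB.columnB2;
Note that adding the persisted column computes it for the existing rows, so if any columnA2 value is not numeric the ALTER TABLE will fail; the data has to be clean first.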
Related
I have a column in a table with a varchar data type. It has 15 digits after the decimal point, and I am having a hard time converting it to a numeric format (float, double, etc.).
Does anyone have any suggestions?
Example :
Table1
Column1
-------------------
-28.851540616246499
-22.857142857142858
-26.923076923076923
76.19047619047619
I tried using the following statement and it doesn't seem to work:
update table1
set Column1 = Convert(float, column1)
Any suggestions?
You can use the decimal data type and specify the precision to state how many digits are after the decimal point. So you could use decimal(28,20) for example, which would hold 28 digits with 20 of them after the decimal point.
Here's a SQL Fiddle, showing your data in decimal format.
Fiddle sample:
create table Table1(MyValues varchar(100))
insert into Table1(MyValues)
values
('-28.851540616246499'),
('-22.857142857142858'),
('-26.923076923076923'),
('76.19047619047619')
So the values are held as varchar in this table, but you can cast them to decimal as long as they are all valid values, like so:
select cast(MyValues as decimal(28,20)) as DecimalValues
from table1
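As a side note beyond the original fiddle: if you are on SQL Server 2012 or later, try_convert returns NULL instead of raising an error, which makes it easy to find any rows that would not cast cleanly before you rely on the plain cast above:
-- rows whose values cannot be converted to decimal(28,20)
select MyValues
from Table1
where try_convert(decimal(28,20), MyValues) is null
  and MyValues is not null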
Your Sample
Looking at your sample update statement, you wouldn't be able to convert the values from varchar to a numeric type and store them back in the same column, as the column is of type varchar. You would be better off adding a new column with a numeric data type and updating that.
So if you had 2 columns:
create table Table1(MyValues varchar(100), DecimalValues decimal(28,20))
You could do the below to update the numeric column with the varchar values cast to decimal:
update Table1
set DecimalValues = cast(MyValues as decimal(28,20))
I think you're trying to actually change the data type of that column?
If that is the case you want to ALTER the table and change the column type over to float, like so:
alter table table1
alter column column1 float
See fiddle: http://sqlfiddle.com/#!6/637e6/1/0
You would use CONVERT if you're changing the text values to numbers for temporary use within a query (not to actually permanently change the data).
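For example, a quick sketch using the table from the question, converting only for the duration of the query while the stored column stays varchar:
-- the stored data is untouched; the conversion only exists in the result set
select Column1, convert(float, Column1) as NumericValue
from table1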
I am having to change one of the columns in a table, which is currently float, to varchar, but when I use the ALTER command it stores some of the longer numbers in scientific notation.
Can I avoid this?
If not, is there a way to easily update the table later to store the scientific notation as a normal integer?
Thanks
Please check the link
convert float into varchar in SQL server without scientific notation
You can cast the float to varchar(max)
How to convert float to varchar in SQL Server
I have a workaround for this that I have used in the past. Dan isn't far off, but just casting it won't work. You can alter the table by adding a new varchar column, then use str and ltrim to update the new column from the old float column. Say your varchar column is varchar(50); use something like:
update YourTable set NewColumn = ltrim(str(OldColumn, 50))
str() converts to character data, and ltrim() gets rid of any extra blanks on the left. You can then drop the old column. Feels janky, but it should work for you.
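Putting that workaround together as a sketch (YourTable, OldColumn and NewColumn are the same placeholders as in the snippet above):
-- add the new varchar column next to the existing float column
alter table YourTable add NewColumn varchar(50)
go
-- str() formats the float without scientific notation; ltrim() strips the left padding.
-- an optional third argument to str() controls how many decimal places are kept (default 0)
update YourTable set NewColumn = ltrim(str(OldColumn, 50))
-- once the data is verified, the old float column can be dropped:
-- alter table YourTable drop column OldColumn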
I have data larger than this number. If I attempt to add several of these values together, like:
1,22826520941614E+24 + 1,357898350941614E+34 + 1,228367878888764E+26
I get NULL as the result. How should I define the table data type for that kind of field?
I am using float, but it does not work.
If you're getting NULL back, it's not the data type. It's because you have a null value in one of the rows of data. NULL + anything is NULL.
Change your Sum() to include a WHERE YourNumericColumn IS NOT NULL, or use COALESCE().
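A quick sketch of both suggestions (the table and column names are placeholders, not from the question):
-- filter the NULL rows out before aggregating
SELECT SUM(YourNumericColumn) AS Total
FROM YourTable
WHERE YourNumericColumn IS NOT NULL;

-- or treat NULL as zero when adding values from several columns
SELECT COALESCE(Col1, 0) + COALESCE(Col2, 0) + COALESCE(Col3, 0) AS Total
FROM YourTable;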
A float is sufficiently large to contain data of that magnitude. It can store binary floating-point values from -1.79E+308 to 1.79E+308. I suspect an error elsewhere in your statement.
Can someone clarify for me why I get an error when I try to set the variable @a in the example below?
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*31
/*
ERROR:
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
*/
What I could figure out so far is that, internally, SQL Server starts doing the math by evaluating the multiplication and placing the temporary result into an INT, and only then casts it to a BIGINT.
However, if I add a 1.0 * to my list of numbers, there is no error, so I believe that in this case SQL Server uses float for the temporary result and then casts it to BIGINT:
DECLARE @b BIGINT
SET @b = 1.0 * 7*11*13*17*19*23*29*31
/*
NO ERROR
*/
Frankly, I don't see anything wrong with the code... it's so simple...
[ I am using SQL 2008 ]
[EDIT]
Thanks Nathan for the link.
That's good information I didn't know about, but I still don't understand why I get the error and why I have to do "tricks" to get a simple script like this working.
Is it something that I should know how to deal with as a programmer?
Or is this a bug? If so, I will consider this question closed.
When you're doing calculations like this, the individual numbers are stored just large enough to hold that number, i.e. numeric(1,0). Check this out:
Caution
When you use the +, -, *, /, or % arithmetic operators to perform implicit or explicit conversion of int, smallint, tinyint, or bigint constant values to the float, real, decimal or numeric data types, the rules that SQL Server applies when it calculates the data type and precision of the expression results differ depending on whether the query is autoparameterized or not. Therefore, similar expressions in queries can sometimes produce different results. When a query is not autoparameterized, the constant value is first converted to numeric, whose precision is just large enough to hold the value of the constant, before converting to the specified data type. For example, the constant value 1 is converted to numeric (1, 0), and the constant value 250 is converted to numeric (3, 0).
When a query is autoparameterized, the constant value is always converted to numeric (10, 0) before converting to the final data type. When the / operator is involved, not only can the result type's precision differ among similar queries, but the result value can differ also. For example, the result value of an autoparameterized query that includes the expression SELECT CAST (1.0 / 7 AS float) will differ from the result value of the same query that is not autoparameterized, because the results of the autoparameterized query will be truncated to fit into the numeric (10, 0) data type. For more information about parameterized queries, see Simple Parameterization.
http://msdn.microsoft.com/en-us/library/ms187745.aspx
Edit
This isn't a bug in SQL Server. From that same page, it states:
The int data type is the primary integer data type in SQL Server.
and
SQL Server does not automatically promote other integer data types (tinyint, smallint, and int) to bigint.
This is defined behavior. As a programmer, if you have reason to believe that your data will overflow the data type, you need to take precautions to avoid that situation. In this case, simply converting one of those numbers to a BIGINT will solve the problem.
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*CONVERT(BIGINT, 31)
In the first example, SQL Server multiplies a list of INTs together, discovers that the result is too big to be an INT, and the error is generated. In the second example, it notices there's a float, so it converts all the INTs to floats first and then does the multiplication.
Similarly, you can do this:
DECLARE @a BIGINT,
        @b BIGINT
SET @b = 1
SET @a = @b*7*11*13*17*19*23*29*31
This works fine because it notices there's a BIGINT, so it converts all the INTs to BIGINTs and then does the multiplication.
I have a bunch of NVARCHAR columns which I suspect contain data that would store perfectly well in VARCHAR columns. However, I can't just go and change the columns' type to VARCHAR and hope for the best; I need to do some sort of check.
I want to do the conversion because the data is static (it won't change in the future) and the columns are indexed and would benefit from a smaller (varchar) index compared to the actual (nvarchar) index.
If I simply say
ALTER TABLE TableName ALTER COLUMN columnName VARCHAR(200)
then I won't get an error or a warning. Unicode data will be truncated/lost.
How do I check?
Why not cast there and back to see what data gets lost?
This assumes column is nvarchar(200) to start with
SELECT *
FROM TableName
WHERE columnName <> CAST(CAST(columnName AS varchar(200)) AS nvarchar(200))
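If you just want a count of how many rows would lose data before running the ALTER, the same round-trip comparison works inside an aggregate (a small variation on the query above, same placeholder names):
SELECT COUNT(*) AS RowsThatWouldLoseData
FROM TableName
WHERE columnName <> CAST(CAST(columnName AS varchar(200)) AS nvarchar(200))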
Hmm, interesting.
I'm not sure you can do this in a SQL query itself. Are you happy to do it in code? If so, you can get all the records, then loop over all the chars in each string and check. But man, it's a slow way.