ORA-01722 invalid number on different numbers? - sql

I have a condition in a query:
ABS ( FIB.QUANT1 ) = ( OI.KLINE * :intv1000 ) / 1000.000000
when I run the query with :intv1000 = 1000000, the query runs OK.
when I run the query with :intv1000 = 1000, I get ORA-01722 (not immediately, but after about 5-6 seconds).
Any idea why?
QUANT1 - NUMBER(16,2)
KLINE - NUMBER(38)
The condition is self-generated by the application, so I can't really change it.
Thank you

ORA-01722 is an "invalid number" error.
You get this error when a non-number -- typically a string -- has to be converted to a number.
This conversion can occur in many different ways, although the three most common are:
to_number()
cast()
implicit conversion
The expression you have highlighted may or may not have anything to do with where the error actually occurs. What the behavior is telling you is that:
When :intv1000 = 1000000, the row(s) with the problem data are filtered out.
When :intv1000 = 1000, the row(s) with the problem data are being processed.
Arrggh. This is very hard to track down. I would suggest starting by looking at the query and finding all explicit conversions to see if any of them is the problem.
If you find no bad data there, then you need to resort to looking at all comparisons (including joins), arithmetic expressions, and function calls to find the problem.
In general, I strongly recommend avoiding implicit conversion. Use explicit conversion to avoid such problems! Note: I do make an exception for conversion to strings with string functions and operators. These are usually pretty safe.
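If you are on a recent Oracle release (12.2 or later), VALIDATE_CONVERSION and the ON CONVERSION ERROR clause can help locate the offending rows. A sketch, with placeholder table and column names (the question does not show which column holds the bad data):

```sql
-- Find rows whose value cannot be converted to a number.
-- VALIDATE_CONVERSION returns 1 when the value converts cleanly, 0 otherwise.
SELECT *
FROM   oi
WHERE  VALIDATE_CONVERSION(some_varchar_col AS NUMBER) = 0;

-- Alternatively, make the conversion safe instead of letting it fail:
SELECT TO_NUMBER(some_varchar_col DEFAULT NULL ON CONVERSION ERROR) AS safe_num
FROM   oi;
```

On older releases you would have to fall back on filtering with REGEXP_LIKE or similar to isolate the non-numeric values.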

Related

ORA 06502 Error PL/SQL

I am trying to execute a simple statement and I got an error while executing:
begin
dbms_output.put_line('Addition: '||4+2);
end;
Error:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 2
But when I executed it with the * operator, it worked fine.
begin
dbms_output.put_line('Addition: '||4*2);
end;
Does anyone know the reason behind it?
It is due to operator precedence.
Multiplication has higher precedence than concatenation. So, 'Addition: '||4*2 evaluates to 'Addition: '||8 and then to 'Addition: 8'.
Addition has the same precedence as concatenation, and operators with equal precedence are evaluated from left to right.
So, 'Addition: '||4+2 evaluates to 'Addition: 4' + 2, which subsequently fails because you cannot add a number to a character string.
In such cases, you should always use brackets to make the order of evaluation explicit, like this: 'Addition: '||(4+2)
In my opinion, the actual problem is that this code relies on implicit data type conversion by the Oracle kernel. Always use explicit data type conversion. For instance:
begin
dbms_output.put_line('Addition: ' || to_char(4+2));
end;
There are many other cases where you will run into unexpected errors due to implicit data type conversion, such as an equality join between a varchar column and a number. As long as the varchar contains only numeric values, it works fine (although it may be slow because the index is not used). But as soon as one row with non-numeric data is inserted, you will run into an error whenever that row is hit. Making the data type conversion explicit ensures that Oracle does not accidentally choose the wrong side of the join to convert.
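As a sketch of that join case (table and column names invented for illustration): convert the number side to a string explicitly, so Oracle never has to run TO_NUMBER on the varchar column's rows:

```sql
-- Implicitly, Oracle would convert r.order_ref (varchar) to a number,
-- failing on any non-numeric row. Converting the number side instead
-- is always safe:
SELECT *
FROM   orders o
JOIN   legacy_refs r
  ON   r.order_ref = TO_CHAR(o.order_id);
```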

SQL Server: +(unary) operator on non-numeric Strings

I am surprised! This statement below is valid in SQL SERVER:
SELECT +'ABCDEF'
Has SQL Server defined + as a Unary operator for string types?
Here is my own answer to this question (Please also see the update at the end):
No, there is no such unary operator defined on string expressions. It is possible that this is a bug.
Explanation:
The given statement is valid and it generates the below result:
(No column name)
----------------
ABCDEF
(1 row(s) affected)
which is equivalent to doing the SELECT statement without using the + sign:
SELECT 'ABCDEF'
Being compiled without any errors -- in fact, being executed successfully -- gives the impression that + operates as a unary operator on the given string. However, the official T-SQL documentation makes no mention of such an operator. In the section entitled "String Operators", + appears in two string operations, + (String Concatenation) and += (String Concatenation); neither is a unary operation. In the section entitled "Unary Operators", three operators are introduced, only one of which is + (Positive). For this one seemingly relevant operator, it soon becomes clear that it, too, has nothing to do with non-numeric string values, as its documentation explicitly states that it applies only to numeric values: "Returns the value of a numeric expression (a unary operator)".
Perhaps this operator is there to accept string values that can be successfully evaluated as numbers, such as the one used here:
SELECT +'12345'+1
When the above statement is executed, it produces a number: the sum of the given string evaluated as a number and the numeric value added to it, which is 1 here but could obviously be any other amount:
(No column name)
----------------
12346
(1 row(s) affected)
However, I doubt this explanation is correct, as it raises the two questions below:
Firstly, if we accept this explanation, then we can conclude that expressions such as +'12345' are evaluated as numbers. If so, why can these values appear in string-related functions such as DATALENGTH, LEN, etc.? A statement such as this:
SELECT DATALENGTH(+'12345')
is quite valid and produces the following:
(No column name)
----------------
5
(1 row(s) affected)
which means +'12345' is being evaluated as a string, not a number. How can this be explained?
Secondly, similar statements with the - operator, such as this:
SELECT -'ABCDE'
or even this:
SELECT -'12345'
generate the error below:
Invalid operator for data type. Operator equals minus, type equals varchar.
Why shouldn't a similar error be generated when the + operator is wrongly used with a non-numeric string value?
So, these two questions prevent me from accepting the explanation that this is the same + (unary) operator that the documentation introduces for numeric values. And since there is no other mention of it anywhere else, it does not appear to have been deliberately added to the language; maybe it is a bug.
The problem looks even more severe when we see that no error is generated for statements such as this one either:
SELECT ++++++++'ABCDE'
I do not know if there are any other programming languages out there that accept this sort of statement. But if there are, it would be nice to know for what purpose(s) they use a + (unary) operator applied to a string. I cannot imagine any usage!
UPDATE
Here it says this has been a bug in earlier versions but it won't be fixed because of backward compatibility:
After some investigation, this behavior is by design since + is an unary operator. So the parser accepts "+ , and the '+' is simply ignored in this case.
Changing this behavior has lot of backward compatibility implications so we don't intend to change it & the fix will introduce unnecessary changes for application code.

Stop concat removing leading 0

Hoping someone can help.
I am attempting to append ||'m' in my query, but when I add the concatenation it removes the leading zero.
Without the ||'m' I get this result:
0.00
With the concat I get this result, for example:
.0m
edit:
Here is the query:
round(MAX(city_longitude),1) - round(MIN(city_longitude),1)||'m'
Cheers
Try wrapping the whole ROUND statement in a TO_CHAR giving it a format mask.
TO_CHAR(round(MAX(city_longitude),1) - round(MIN(city_longitude),1), '0.00')||'m'
This way you concatenate a string with a string, whereas currently you are concatenating a number with a string, forcing an implicit conversion.
Implicit conversions are usually frowned upon as they can lead to unexpected results.
Oracle says:
Oracle recommends that you specify explicit conversions, rather than
rely on implicit or automatic conversions, for these reasons:
• SQL statements are easier to understand when you use explicit
datatype conversion functions.
• Implicit datatype conversion can have a negative impact on
performance, especially if the datatype of a column value is converted
to that of a constant rather than the other way around.
• Implicit conversion depends on the context in which it occurs and
may not work the same way in every case. For example, implicit
conversion from a datetime value to a VARCHAR2 value may return an
unexpected year depending on the value of the NLS_DATE_FORMAT
parameter.
• Algorithms for implicit conversion are subject to change across
software releases and among Oracle products. Behavior of explicit
conversions is more predictable.
Number formats are here:
http://www.oradev.com/oracle_number_format.jsp
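One more detail worth knowing (an aside, not part of the original answer): TO_CHAR reserves a leading character position for the sign, so the '0.00' mask produces ' 0.00' for non-negative values. If that leading space matters, the FM modifier suppresses it:

```sql
SELECT TO_CHAR(0.0, '0.00')   || 'm' AS with_sign_space,  -- ' 0.00m'
       TO_CHAR(0.0, 'FM0.00') || 'm' AS trimmed           -- '0.00m'
FROM   dual;
```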
Hope it helps...

convert 'null' varchar to decimal

I have a requirement to create some XML structs (to borrow a C phrase) in sql-server-2005. In order to do this, I change all my values to varchar. The problem arises when I want to make USE of these values: I have to convert them back to decimal.
So, my xml code looks like this:
set @result = @result + '<VAL>' + coalesce(cast(@val as varchar(20)), '-.11111') + '</VAL>'
This way, if VAL is null, I return a special decimal value and I can check for that value. The drawback of doing this is that I can't use coalesce on the other end when I use the value; I have to check whether its converted value is equal to my special decimal.
like this:
case when cast(InvestmentReturn.fn_getSTRUCT(...args...).value('results[1]/VAL[1]', 'varchar(40)')as decimal(10,5)) = -.11111
Since performance is unacceptable right now, I thought one way to improve it might be to use coalesce instead of a nested case statement that checks the value for equality with my special 'null' equivalent.
Any thoughts?
Also, I see that select cast('null' as decimal(10,5)) gives me:
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to numeric.
Performance issues can be caused by a number of factors.
The first one is using XML in SQL Server 2005. I don't know the size of the XML data you are using, but when I tried this 5 years ago, crossing a certain size barrier (I think it was 32k, might have been 64k) made processing performance fall off a cliff. 1 extra byte would cause a query to go from 500ms to 60 seconds. We had to abandon letting SQL Server deal with XML data itself at that point. It was MUCH faster to do that processing in C#.
The second one is making calls to functions inside a select statement. If that function has to run for multiple rows, performance goes down. One example I always use to illustrate this is GETDATE(). If you set a variable to the return of GETDATE() and then use that variable in a select query, it will run an order of magnitude faster than calling GETDATE() in the query itself. The little code example you provided could be a killer just because it calls a function.
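The GETDATE() illustration, as a sketch (the table and column names are made up):

```sql
-- Per-row function call inside the query:
SELECT * FROM orders WHERE order_date < GETDATE();

-- Capture the value once, then use the variable:
DECLARE @now datetime;
SET @now = GETDATE();
SELECT * FROM orders WHERE order_date < @now;
```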
This may not be a good answer to your immediate problem, but I really believe you would be much better served yanking any XML processing code out of SQL server and doing it in ANY OTHER language of your choice.
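On the sentinel-checking itself, one possible simplification (a sketch; the -.11111 sentinel is from the question, the surrounding names are assumptions): turn the sentinel back into NULL with NULLIF, after which COALESCE works again on the consuming side:

```sql
DECLARE @raw varchar(40);
SET @raw = '-.11111';  -- value as extracted from the XML

-- NULLIF maps the sentinel back to NULL, so COALESCE can supply any default
SELECT COALESCE(NULLIF(CAST(@raw AS decimal(10,5)), -0.11111), 0) AS val;
```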

MySQL Type Conversion: Why is float the lowest common denominator type?

I recently ran into an issue where a query was causing a full table scan, and it came down to a column having a different definition than I thought: it was a VARCHAR, not an INT. When queried with "string_column = 17" the query ran; it just couldn't use the index. That really threw me for a loop.
So I went searching and found what happened, the behavior I was seeing is consistent with what MySQL's documentation says:
In all other cases, the arguments are compared as floating-point (real) numbers.
So my question is... why a float?
I could see trying to convert numbers to strings (although the points in the MySQL page linked above are good reasons not to). I could also understand throwing some sort of error, or generating a warning (my preference). Instead it happily runs.
So why convert everything to a float? Is that from the SQL standard, or based on some other reason? Can anyone shed some light on this choice for me?
I feel your pain. We have a column in our DB that holds what is well known in the company as an "order number". But it's not always a number; in certain circumstances it can contain other characters too, so we keep it in a varchar. With SQL Server 2000, this means that selecting on "order_number = 123456" is bad. SQL Server effectively rewrites the predicate as "CAST(order_number AS INT) = 123456", which has two undesirable effects:
the index is on order_number as a varchar, so it starts a full scan
those non-numeric order numbers eventually cause a conversion error to be thrown to the user, with a rather unhelpful message.
In a way it's good that we do have those non-numeric "numbers", since at least badly-written queries that pass the parameter as a number get trapped rather than just sucking up resources.
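A workaround for this situation is to push the conversion onto the literal (or parameter) side, so the varchar index stays usable (a sketch; table name assumed):

```sql
-- Implicit: SQL Server converts the column to INT, forcing a scan
-- and risking conversion errors on non-numeric rows:
-- SELECT * FROM orders WHERE order_number = 123456;

-- Explicit: convert the literal instead, or just pass a string,
-- so the index on order_number can be used:
SELECT * FROM orders WHERE order_number = CAST(123456 AS varchar(20));
SELECT * FROM orders WHERE order_number = '123456';
```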
I don't think there is a standard. I seem to remember PostgreSQL 8.3 dropped some of the default casts between number and text types so that this kind of situation would throw an error when the query was being planned.
Presumably "float" is considered to be the widest-ranging numeric type and therefore the one that all numbers can be silently promoted to?
Oh, and there are similar problems (though no conversion errors) when you have varchar columns and a Java application that passes all string literals as nvarchar: suddenly your varchar indexes are no longer used, and good luck finding the occurrences of that happening. Of course you can tell the Java app to send strings as varchar, but then we're stuck with only using characters in windows-1252, because that's what the DB was created with 5-6 years ago when it was just a "stopgap solution". Ah-ha.
Well, it's easily understandable: float can hold the greatest range of numbers.
If the underlying datatype is datetime, for instance, it can simply be converted to a float number with the same intrinsic value.
If the datatype is a string, it is easy to parse it into a float, the performance degradation notwithstanding.
So float is the best datatype to fall back on.
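Whatever the rationale, the practical fix on the querying side is to compare like with like (a sketch using the column name from the question; the table name is assumed):

```sql
-- Compared as floats: full scan, and 17 also matches '17.0', ' 17', etc.
SELECT * FROM t WHERE string_column = 17;

-- Compared as strings: the index on string_column can be used
SELECT * FROM t WHERE string_column = '17';
```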