TO_CHAR(number) Function returns ORA-01722: invalid number - sql

Query:
Select To_Number(qty) From my_table Where Id=12345;
Output:
ORA-01722: invalid number
01722. 00000 - "invalid number"
Query: Select qty From my_table Where Id=12345;
Output: 0.00080
Query:
Select To_Number(0.00080) From Dual;
Output:
0.00080 (no error)
This is an odd situation I am facing in Oracle. Can anybody suggest why it happens? The column qty is of NUMBER type, so it is very hard to imagine that it contains an invalid number, but it happened.
I want to clarify that it happened only for this specific value, although we have thousands of records in the same column.
Added more: The same error appears if I use the TO_CHAR(qty) function. The qty column is NUMBER type, not VARCHAR2. In fact we were using SUM(qty), which raised the error; hence I went dissecting and found this row to be the culprit.

I'm assuming that qty is defined as a varchar2 in my_table-- otherwise, there would be no purpose served by calling to_number. If that assumption is correct, I'll wager that there is some other row in the table where qty has non-numeric data in it.
SQL is a set-based language so Oracle (or any other database) is perfectly free to evaluate things in whatever order it sees fit. That means that Oracle is perfectly free to evaluate the to_number(qty) expression before applying the id=12345 predicate. If Oracle happens to encounter a row where the qty value cannot be converted to a number, it will throw an error.
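If that is the situation, a defensive rewrite along these lines (a minimal sketch, assuming qty really is a VARCHAR2) keeps a bad value in some other row from breaking the query; on 12.2+ you could instead use TO_NUMBER(qty DEFAULT NULL ON CONVERSION ERROR):
SELECT CASE
         WHEN REGEXP_LIKE(TRIM(qty), '^-?(\d+(\.\d*)?|\.\d+)$')
         THEN TO_NUMBER(qty)   -- CASE short-circuits, so the conversion only runs on numeric-looking rows
       END AS qty_num
FROM   my_table
WHERE  id = 12345;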
It is also possible that there is some non-numeric data in the particular row where id = 12345 that happens not to be displaying (control characters for example). You can check that by running the query
SELECT dump(qty, 1016)
FROM my_table
WHERE id = 12345
(if you want decimal rather than hexadecimal, use 1010 as the second parameter to dump) and checking to see whether there is anything unexpected in the data.

The only way I can see you could get the results you've shown, given that qty really is a number field, is if it holds corrupt data (which is why there has been scepticism about that assumption). I'm also assuming your client is formatting the value with a leading zero, but is not forcing the trailing zero, which wouldn't normally appear; you can of course force it with to_char(.0008, '0.00000'), but you don't appear to be doing that; still, the leading zero makes me wonder.
Anyway, to demonstrate corruption you can force an invalid value into the field via PL/SQL - don't try this with real data or a table you care about:
create table t42(qty number);
table T42 created.
declare
n number;
begin
dbms_stats.convert_raw_value('bf0901', n);
insert into t42 (qty) values (n);
end;
/
anonymous block completed
select qty from t42;
QTY
----------
.00080
select to_number(qty) from t42;
Error starting at line : 12 in command -
select to_number(qty) from t42
Error report -
SQL Error: ORA-01722: invalid number
01722. 00000 - "invalid number"
Note the plain query shows the number as expected - though with a trailing zero, and no leading zero - and running it through to_number() throws ORA-01722. Apart from the leading zero, that is what you've shown.
It also fails with to_char(), as in your question title:
select to_char(qty) from t42;
Error starting at line : 13 in command -
select to_char(qty) from t42
Error report -
SQL Error: ORA-01722: invalid number
... which makes sense; your to_number() is doing an implicit conversion, so it's really to_number(to_char(qty)), and it's the implicit to_char() that actually generates the error, I think.
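You can see the same thing by making the conversion explicit against the demo table above (just a sanity check, assuming the corrupt row from the example is still in t42):
select to_number(to_char(qty)) from t42;
-- fails the same way: ORA-01722: invalid number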
Your comments suggest you have a process that is loading and removing data. It would be interesting to see exactly what that is doing, and if it could be introducing corruption. This sort of effect can be achieved through OCI as the database will trust that the data it's passed is valid, as it does in the PL/SQL example above. There are bug reports suggesting imp can also cause corruption. So the details of your load process might be important, as might the exact database version and platform.

I encountered nearly the same problem, and I found that the mysterious number behaved differently from a normal number after dump(). For example, assuming my qty=500 (datatype: number(30,2)), then:
select dump(qty) from my_table where Id=12345;
Typ=2 Len=3: 194,6,1
select dump(500.00) from dual;
Typ=2 Len=2: 194,6
If we know how the NUMBER datatype is stored internally (if not, please see http://translate.google.com/translate?langpair=zh-CN%7Cen&hl=zh-CN&ie=UTF8&u=http%3A//www.eygle.com/archives/2005/12/how_oracle_stor.html ), we can see that there is a trailing zero (the extra last "1" in Typ=2 Len=3: 194,6,1) encoded into the mysterious number.
So I used a trick to eliminate the trailing zero, and it works around the problem.
select dump(trunc(qty+0.001,2)) from my_table where Id=12345;
Typ=2 Len=2: 194,6
I hope someone can explain the underlying mechanism.
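Building on that trick, if you want to repair the stored value rather than just work around it at query time, an arithmetic rewrite may force Oracle to re-encode it cleanly. This is only a sketch: test on a copy first, pick a TRUNC scale that matches your data, and verify with dump() afterwards.
-- Re-encode the suspect value through arithmetic so Oracle writes it back in canonical form.
-- The scale of 5 is an assumption based on the 0.00080 value shown in the question.
update my_table
set    qty = trunc(qty + 0, 5)
where  id  = 12345;
commit;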

try this:
Select To_Number(trim(qty)) From my_table Where Id=12345;

Related

SQL Decode format numbers only

I want to format amounts to salary format, e.g. 10000 becomes 10,000, so I use to_char(amount, '99,999,99')
SELECT SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0)) Salary,
SUM(DECODE(e.element_name,'Transportation Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Transportation,
SUM(DECODE(e.element_name,'GOSI Processing',to_char(v.screen_entry_value,'99,999,99'),0)) GOSI,
SUM(DECODE(e.element_name,'Housing Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Housing
FROM values v,
values_types vt,
elements e
WHERE vt.value_type = 'Amount'
This gives an "invalid number" error because not all values are numbers unless value_type equals 'Amount', but I guess DECODE checks all values anyway, although as far as I know execution begins with FROM, then WHERE, then SELECT. What's going wrong here?
You said you added decode(...), but it looks like you might have actually added sum(decode(...)).
You are converting your values to strings with to_char(v.screen_entry_value,'99,999,99'), so your decode() generates a string - the default 0 will be converted to '0' - giving you a value like '1,234,56'. Then you are aggregating those, so sum() has to implicitly convert those strings to numbers - and it is throwing the error when it tries to do that:
select to_number('1,234,56') from dual
will also get "ORA-01722: invalid number", unless you supply a similar format mask so it knows how to interpret it. You could do that, e.g.:
SUM(to_number(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0),'99,999,99'))
... but it's maybe more obvious that something is strange, and even if you did, you would end up with a number, not a formatted string.
So instead of doing:
SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0))
you should format the result after aggregating:
to_char(SUM(DECODE(e.element_name,'Basic Salary',v.screen_entry_value,0)),'99,999,99')
fiddle with dummy tables, data and joins.
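Putting that together, the whole query would look something like this sketch; the join conditions are assumptions, since the question only shows the value_type filter, so substitute the real keys:
SELECT to_char(SUM(DECODE(e.element_name, 'Basic Salary',             v.screen_entry_value, 0)), '99,999,99') Salary,
       to_char(SUM(DECODE(e.element_name, 'Transportation Allowance', v.screen_entry_value, 0)), '99,999,99') Transportation,
       to_char(SUM(DECODE(e.element_name, 'GOSI Processing',          v.screen_entry_value, 0)), '99,999,99') GOSI,
       to_char(SUM(DECODE(e.element_name, 'Housing Allowance',        v.screen_entry_value, 0)), '99,999,99') Housing
FROM   values v
JOIN   values_types vt ON vt.value_type_id = v.value_type_id  -- assumed join key
JOIN   elements e      ON e.element_id     = v.element_id     -- assumed join key
WHERE  vt.value_type = 'Amount';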

Invalid digits on Redshift

I'm trying to load some data from a staging environment to a relational environment, and something is happening that I can't figure out.
I'm trying to run the following query:
SELECT
CAST(SPLIT_PART(some_field,'_',2) AS BIGINT) cmt_par
FROM
public.some_table;
The some_field is a column that has data with two numbers joined by an underscore like this:
some_field -> 38972691802309_48937927428392
And I'm trying to get the second part.
That said, here is the error I'm getting:
[Amazon](500310) Invalid operation: Invalid digit, Value '1', Pos 0,
Type: Long
Details:
-----------------------------------------------
error: Invalid digit, Value '1', Pos 0, Type: Long
code: 1207
context:
query: 1097254
location: :0
process: query0_99 [pid=0]
-----------------------------------------------;
Execution time: 2.61s
Statement 1 of 1 finished
1 statement failed.
It's literally saying some numbers are not valid digits. I've already tried to find the exact data which is throwing the error, and it appears to be a normal field just like I was expecting. It happens even if I throw out NULL fields.
I thought it would be an encoding error, but I've not found any references to solve that.
Anyone has any idea?
Thanks everybody.
I just ran into this problem and did some digging. It seems like the Value '1' in the error is the misleading part; the actual problem is that these fields are simply not valid as numerics.
In my case they were empty strings. I found the solution to my problem in this blogpost, which is essentially to find any fields that aren't numeric, and fill them with null before casting.
select cast(colname as integer)
from (
  select
    case when colname ~ '^[0-9]+$' then colname
         else null
    end as colname
  from tablename
) t;  -- the derived table needs an alias in Redshift
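If empty strings are the only offenders (an assumption; any other non-numeric junk would still need the regex approach above), a shorter variant is to NULL them out before the cast:
select cast(nullif(colname, '') as integer)
from tablename;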
Bottom line: this Redshift error is completely confusing and really needs to be fixed.
When you are using a Glue job to upsert data from any data source to Redshift:
Glue will rearrange the data and then copy it, which can cause this issue. This happened to me even after using ApplyMapping.
In my case, the datatype was not an issue at all; in the source the columns were typecast to exactly match the fields in Redshift.
Glue was rearranging the columns into alphabetical order of their names and then copying the data into the Redshift table (which will obviously throw an error, because my first column is an ID key, not a string like the other columns).
To fix the issue, I used a SQL query within Glue to run a SELECT with the correct order of the columns in the table.
It's weird that Glue did that even after using ApplyMapping, but the workaround helped.
For example: the source table has fields ID|EMAIL|NAME with values 1|abcd#gmail.com|abcd, and the target table also has fields ID|EMAIL|NAME. But when Glue upserts the data, it rearranges it by column name before writing, so Glue tries to write abcd#gmail.com|1|abcd into ID|EMAIL|NAME. This throws an error because ID expects an int value and EMAIL expects a string. I used a SQL query transform with the query "SELECT ID, EMAIL, NAME FROM data" to rearrange the columns before writing the data.
Hmmm. I would start by investigating the problem. Are there any non-digit characters?
SELECT some_field
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) ~ '[^0-9]';
Is the value too long for a bigint?
SELECT some_field
FROM public.some_table
WHERE LEN(SPLIT_PART(some_field, '_', 2)) > 18
A bigint tops out at 9,223,372,036,854,775,807 (19 digits), so if you need more digits than that, consider a decimal rather than a bigint.
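If the second part really can be longer than a bigint allows, a wider numeric type is one way out (a sketch; DECIMAL(38,0) is Redshift's maximum precision):
SELECT CAST(SPLIT_PART(some_field, '_', 2) AS DECIMAL(38,0)) cmt_par
FROM public.some_table;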
If you get an error message like "Invalid digit, Value ‘O’, Pos 0, Type: Integer", try executing your COPY command with the header row excluded. Use the IGNOREHEADER parameter in your COPY command to ignore the first line of the data file.
So the COPY command will look like below:
COPY orders FROM 's3://sourcedatainorig/order.txt' credentials 'aws_access_key_id=<your access key id>;aws_secret_access_key=<your secret key>' delimiter '\t' IGNOREHEADER 1;
For my Redshift SQL, I had to wrap my columns with Cast(col As Datatype) to make this error go away.
For example, setting my columns datatype to Char with a specific length worked:
Cast(COLUMN1 As Char(xx)) = Cast(COLUMN2 As Char(xxx))

ORA-01722: invalid number in column with numbers only?

I created a rather simple view to return only rows where there is a number in the CONTRACT_ID column. CONTRACT_ID has data type number(8).
CREATE OR REPLACE VIEW cid AS
SELECT *
FROM transactions
WHERE contract_id IS NOT NULL
AND LENGTH(contract_id) > 0;
View works just fine until I scroll down to row ~2950 where I get ORA-01722. Same thing happens if I want to export data to Excel, my file gets only ~2950 rows instead of expected ~20k.
Any idea what might be causing this and how to resolve this issue?
Many thanks!
You wrote too much SQL. The following will provide all the results you require:
CREATE OR REPLACE VIEW cid AS
SELECT *
FROM transactions
WHERE contract_id IS NOT NULL
You can't LENGTH() a number - a number is either null or it's a value, so you don't need this kind of check.
Passing a number to LENGTH() will turn it into a string first, i.e. LENGTH(TO_CHAR(numbercolumn)). You don't even need a LENGTH() check for null strings, as to Oracle a NULL string and a zero-length string are equivalent, and calling LENGTH() on an empty string or a null returns null, not 0 (so LENGTH(myNullStr) = 0 doesn't work out; it's not comparing 0 = 0, it's comparing null = 0, and null compared with anything is never true).
The only time this seems to cause confusion is when the string columns in the table are CHAR types rather than VARCHAR types: people forget that assigning an empty string to a CHAR causes it to become space-padded out to the CHAR length, and hence no longer a zero-length string.
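A quick way to see the behaviour described above for yourself:
SELECT LENGTH('')     AS len_empty,   -- null: Oracle treats '' as NULL
       LENGTH(NULL)   AS len_null,    -- null
       LENGTH(123.45) AS len_number   -- 6: implicit TO_CHAR gives '123.45'
FROM   dual;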
First of all, you should remove the redundant LENGTH() condition; it's pointless. I'm not sure how it could produce such an error, but check whether the error disappears once it's gone.
If not, replace the star (*) with specific column names, say just contract_id. If that fixes the error, it points to the error coming from one of the removed columns (for instance, if a generated column is involved).
I can't imagine how the error could still be alive after that, but if it is, I'd try moving the table to another tablespace, and adding to the column list a call to a logging function that stores the rowids of the rows read, to check which row produces the error.
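A simpler way to hunt down the offending row than a logging function, if it comes to that, is to select the ROWID alongside the suspect column and note the last ROWID fetched before the error appears (a sketch):
SELECT rowid AS row_loc, contract_id
FROM   transactions
WHERE  contract_id IS NOT NULL;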

PL/SQL archiving LONG datatype error

I'm using Oracle 11g, attempting to move anything older than 90 days to the history table using PL/SQL... BUT one of the columns uses the LONG datatype. I found SQL that I thought should work, but it gives errors:
BEGIN
FOR ROW IN
(SELECT MESSSAGE_KEY,
DISTRIBUTION_ID,
MESSAGE,
SYSTEM_NAME,
MESSAGE_TYPE,
MESSAGE_NAME,
MESSAGE_STATUS,
LATEST_INBOUND,
CREATETS,
MODIFYTS,
CREATEUSERID,
MODIFYUSERID,
CREATEPROGID,
MODIFYPROGID,
LOCKID,
ENTITY_KEY,
ENTITY_NAME,
ENTITY_VALUE
FROM NWCG_INBOUND_MESSAGE
WHERE TO_CHAR (createts, 'YYYYMMDD') >= TO_CHAR ((sysdate-90), 'YYYYMMDD')
)
LOOP
INSERT INTO NWCG_INBOUND_MESSAGE_H
VALUES (
ROW.MESSSAGE_KEY,
ROW.DISTRIBUTION_ID,
ROW.MESSAGE,
ROW.SYSTEM_NAME,
ROW.MESSAGE_TYPE,
ROW.MESSAGE_NAME,
ROW.MESSAGE_STATUS,
ROW.LATEST_INBOUND,
ROW.CREATETS,
ROW.MODIFYTS,
ROW.CREATEUSERID,
ROW.MODIFYUSERID,
ROW.CREATEPROGID,
ROW.MODIFYPROGID,
ROW.LOCKID,
ROW.ENTITY_KEY,
ROW.ENTITY_NAME,
ROW.ENTITY_VALUE
);
END LOOP;
END;
This is the error i am getting:
Error report:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 2
06502. 00000 - "PL/SQL: numeric or value error%s"
*Cause:
*Action:
From my research it looks like this error has come up a lot, but I can't get any of the posted solutions to work... any ideas?
The LONG datatype has been one of the reasons why I've always advised against storing documents or long strings in an Oracle database. Without resorting to C and OCI, it is hard to use.
Now we have CLOB and BLOB, which are reasonably usable in PL/SQL and SQL. But there are still many occurrences of the LONG datatype to be found, including in the Oracle data dictionary. Especially in the XXX_VIEWS views (user_views, all_views, dba_views) it is a real problem. Maybe the original developer should have named it UNUSABLE :-).
There is a workaround when the LONG contents are smaller than 32 KB; for full functionality I would recommend migrating to CLOB or using C. Good luck!
--
-- This sample code works when the long is smaller than 32 KB.
-- It is known to work on 9i, 10g, 11g r1, 11g r2, but it assumes
-- that a LONG smaller than 32 KB can be put in a PL/SQL variable.
-- And then cast.
--
-- You might want to add an exception handler to handle exceptions
-- when the size is larger than 32 KB. In this sample, this situation
-- can not occur; the where clause with text_length ensures that.
--
declare
  l_object_name  varchar2(30) := 'MY_VIEW'; -- hypothetical view name; the original snippet referenced l_object_name without declaring it
  l_text_as_long long;
  l_text_as_clob clob;
  l_text_length  user_views.text_length%type;
begin
  select viw.text
       , viw.text_length
    into l_text_as_long
       , l_text_length
    from user_views viw
   where viw.view_name = upper(l_object_name)
     and viw.text_length <= 32767 /* To fix a problem when accessing a view that is larger than 32K, we have this condition. */
  ;
  l_text_as_clob := cast(l_text_as_long as clob);
  -- ... do something interesting with the CLOB here ...
end;
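For the archiving scenario in the question itself, another option is a plain INSERT ... SELECT with TO_LOB(), which avoids pulling the LONG through a PL/SQL variable at all. This assumes the history table defines that column as a CLOB; only a few of the columns are shown for brevity:
-- TO_LOB() can only appear in the select list of an INSERT ... SELECT,
-- and the target column must be a CLOB (or BLOB for a LONG RAW source).
INSERT INTO nwcg_inbound_message_h (messsage_key, message, createts)
SELECT messsage_key, TO_LOB(message), createts
FROM   nwcg_inbound_message
WHERE  createts < SYSDATE - 90;  -- rows older than 90 days, per the question's description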

Invalid floating point operation error when counting logarithm in SQL Server 2008

In Microsoft SQL Server 2008, I have a table, say myTable, containing about 600k rows (actually, it is a result of joining several other tables, but I suppose this is not important). One of its columns, say value, is of type numeric(6,2).
The simple query SELECT value FROM myTable ORDER BY value returns of course about 600k numbers, starting with 1.01 (i.e. the lowest) and ending with 70.00 (highest); no NULLs or other values.
Please note that all these values are numeric and positive. However, when calling SELECT LOG(value) FROM myTable, I obtain the error message "An invalid floating point operation occurred".
This error always appears after about 3 minutes of the query running. When copying the 600k values to Excel and computing their LN(), there is absolutely no problem.
I have tried converting value to real or float, which did not help at all. Finally I found a workaround: SELECT LOG(CASE WHEN value>0 THEN value ELSE 1 END) FROM myTable. This works. But why, when all the values are positive? I took the result and compared the logarithms with those computed by Excel: they are all the same (only differences of the order of 10^(-15) or smaller occurred in some rows, which is almost surely due to differences in precision). That means the condition in the CASE statement is always true, I suppose.
Does anyone have any idea why this error occurs? Any help appreciated. Thanks.
You can identify the specific value that's causing the problem:
declare @f numeric(6,2), @r float
begin try
  select @f = value, @r = LOG(value)
  from mytable
end try
begin catch
  select error_message(), 'value=', @f
end catch
You would get this error ("An invalid floating point operation occurred") when you do LOG(0). LOG(zero) is undefined in the world of Maths, hence the error.
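You can reproduce the failure mode in isolation (a trivial check, not a diagnosis of why the original query hits a zero):
SELECT LOG(1.01);  -- fine, roughly 0.00995 (natural log)
SELECT LOG(0);     -- raises: An invalid floating point operation occurred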
Cheers.