Oracle Error, moving data between databases - sql

I am moving some data between two databases and have had much success, but then I encountered a problem doing the same kind of query that I've been doing.
The query:
INSERT INTO INTERNET.WEBSECURITY#crmtest SELECT * FROM INTERNET.WEBSECURITY;
The Error:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
Any ideas on what this might be?

You are trying to assign a value to a PL/SQL variable that is not big enough to hold it, or the value is larger than the target column's data type allows.

In addition: the same error is raised when you assign or insert a non-numeric value into a numeric variable or column.
Most likely the two tables' columns differ slightly in data types and sizes. I do not see any PL/SQL variables in your example.
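For illustration only, here is a minimal sketch (with a made-up variable and value) of how ORA-06502 typically arises when a character buffer is too small:
DECLARE
  v_code VARCHAR2(2);                    -- buffer only two characters wide
BEGIN
  SELECT 'ABCDE' INTO v_code FROM dual;  -- raises ORA-06502: character string buffer too small
END;
/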

Related

"Numeric value '' is not recognized" - what column?

I am trying to insert data from a staging table into the master table. The table has nearly 300 columns with a mix of data types: varchar, integer, decimal, date, etc.
Snowflake gives the unhelpful error message of "Numeric value '' is not recognized"
I have gone through and cut out various parts of the query to try and isolate where it is coming from. After several hours and cutting every column, it is still happening.
Does anyone know of a Snowflake diagnostic query (like Redshift has) which can tell me a specific column where the issue is occurring?
Unfortunately not at the point you're at. If you went back to the COPY INTO that loaded the data, you could use the VALIDATE() function to get information down to the record and byte-offset level.
I would query your staging table for just the numeric fields and look for blanks, or you can wrap all of the fields destined for numeric columns in try_to_number() functions. A bit tedious, but it might not be too bad if you don't have a lot of numbers.
https://docs.snowflake.com/en/sql-reference/functions/try_to_decimal.html
As a note, when you stage, you should use the NULL_IF option to get rid of bad characters, and/or try to load into the stage table using the actual target data types, so that you can leverage the VALIDATE() function to make sure the data types are correct before loading into Snowflake.
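For example, a hedged sketch of such a staged load using NULL_IF (the stage name, table name, and file format options here are made up and will need adjusting):
COPY INTO stg_master
FROM @my_stage/master/
FILE_FORMAT = (TYPE = CSV NULL_IF = ('', 'NULL') FIELD_OPTIONALLY_ENCLOSED_BY = '"');  -- empty strings land as true NULLs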
Query your staging table using try_to_number() and/or try_to_decimal() for the number and decimal fields, and then use MINUS to get the difference:
Select $1, $2, ...$300 from #stage
minus
Select $1, try_to_number($2), ...$300 from #stage
If any number field contains a string that cannot be converted, try_to_number() returns NULL, so the MINUS should return the rows that have a problem. Once you have those rows, analyze the columns in the result set for errors.
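A narrower variant of the same idea, assuming a hypothetical staging table STG_MASTER with a varchar column AMOUNT that is destined for a numeric column, is to list only the values that fail conversion:
SELECT amount
FROM stg_master
WHERE amount IS NOT NULL
  AND TRY_TO_NUMBER(amount) IS NULL;   -- non-numeric values (including '') show up here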

query where '0'

I need to query a varchar field in SQL for '0' values.
When I query where field = '0', I get the following error message:
Conversion failed when converting the varchar value 'N' to data type int.
I'm having trouble figuring out where the issue is coming from. My Googling is failing me on this one; could someone point me in the right direction?
EDIT:
Thanks for the help on this one, guys. There were 'N's in the data, just very few of them, so they weren't showing up in my TOP 100 query until I limited the search results further.
Apparently SQL didn't have any issue comparing ints to a varchar(1) column as long as the values were ints as well. I didn't even realize I was comparing against an int in the WHERE clause farther up in my query.
Oh, and sorry for not sharing my query; it was long and complicated, and I was trying to share what I thought was relevant from it. I'll include a simplified query in future questions.
Anyone know how to mark this as solved?
If your field is a varchar(), then this expression:
where field = '0'
cannot return a type conversion error.
This version can:
where field = 0
It would return an error if field has the value 'N'. I am guessing that is the situation.
Otherwise, some other expression in your code is causing the problem by converting strings to numbers.
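A minimal sketch of the trap (SQL Server, using a made-up temp table) that reproduces both behaviours:
CREATE TABLE #t (field VARCHAR(1));
INSERT INTO #t VALUES ('0'), ('N');
SELECT * FROM #t WHERE field = '0';  -- fine: string compared with string
SELECT * FROM #t WHERE field = 0;    -- fails: 'N' cannot be implicitly converted to int
DROP TABLE #t;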

I have an issue trying to UNION All in SQL Server 2008

I have to create a second header line and am using the first record of the query to do this. I am using a UNION ALL to create this header record, and the second part of the UNION to extract the data required.
I have one issue on one column.
,'Active Energy kWh'
UNION ALL
,SUM(cast(invc.UNITS as Decimal (15,0)))
Each side of the UNION has 11 lines before and after it, and I have tried all sorts of combinations, but it always results in an error message.
The above gives me "Error converting data type varchar to numeric."
Any help would be much appreciated.
The error message indicates that one of the values in the INVC table's UNITS column is non-numeric. I would hazard a guess that UNITS is a string (VARCHAR or similar) column and one of its values has ended up in a state where it cannot be parsed as a number.
Unfortunately, there is no way other than checking small ranges of the table to gradually locate the 'bad' row (i.e. try running the query for a few million rows at a time, then reducing the range until you home in on the bad data). SQL Server 2014, if you can get a database restored to it, has the TRY_CONVERT function, which permits a conversion to fail and so allows a more direct check, but you'll need to play with this on another system
(I'm assuming that an upgrade to 2014 for this feature is out of the question - your best bet is likely just looking for the bad row).
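If TRY_CONVERT is available, a hedged sketch of the direct check (the column and target type are taken from the question; the table name INVC is assumed):
SELECT *
FROM INVC
WHERE UNITS IS NOT NULL
  AND TRY_CONVERT(DECIMAL(15,0), UNITS) IS NULL;  -- rows whose UNITS value will not convert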
The problem is that you are trying to mix header information with data information in a single query.
Obviously, all your header columns will be strings. But not all your data columns will be strings, and SQL Server is unhappy when you mix data types this way.
What you are doing is equivalent to this:
select 'header1' as col1 -- string
union all
select 123.5 -- decimal
The above query produces the following error:
Error converting data type varchar to numeric.
...which makes sense, because you are trying to mix both a string (the header) with a decimal field.
So you have 2 options:
Remove the header columns from your query, and deal with header information outside your query.
Accept the fact that you'll need to convert the data type of every column to a string type. So when you have numeric data, you'll need to cast the column to varchar(n) explicitly.
In your case, it would mean adding the cast like this:
,'Active Energy kWh'
UNION ALL
,CAST(SUM(cast(invc.UNITS as Decimal (15,0))) AS VARCHAR(50)) -- Change 50 to appropriate value for your case
EDIT: Based on comment feedback, changed the cast to varchar to have an explicit length (varchar(n)) to avoid relying on the default length, which may or may not be long enough. OP knows the data, so OP needs to pick the right length.

Finding which column caused the postgresql exception in a query.

I have a staging table with around 200 columns in Redshift. I first copy data from S3 to this table, and then copy data from this table to another table using a large INSERT INTO ... SELECT FROM query. Most of the fields in the staging table are varchar, which I convert to the proper data types in the query.
Some field in the staging table is causing a numeric overflow:
org.postgresql.util.PSQLException: ERROR: Numeric data overflow (addition)
Detail:
-----------------------------------------------
error: Numeric data overflow (addition)
code: 1058
context:
query: 9620240
location: numeric.hpp:112
process: query1_194 [pid=680]
How can I find which field is causing this overflow, so that I can sanitize my input or correct my query?
I use Netezza, which can also use regex functions to grep out rows. Fortunately, Redshift supports regexp as well. Please take a look at
http://docs.aws.amazon.com/redshift/latest/dg/REGEXP_COUNT.html
So the idea in your case is to use a regexp in the WHERE clause; that way you can find which values exceed what the numeric cast in your INSERT can hold. The hard part will be finding identifying data that lets you determine which rows in the physical file are causing the issue. You could create another copy of the data with row numbers in a temporary table, and use that temporary table as your source of analysis. How large is the numeric field you are going into? You may need to repeat this analysis against more than one column if you have multiple columns being cast to numeric.
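As a rough illustration (the table name, column name, and target precision are all assumptions), a regexp filter like this flags values whose integer part is too wide for, say, a NUMERIC(10,2) target:
SELECT amount_raw
FROM stg
WHERE amount_raw IS NOT NULL
  AND REGEXP_COUNT(amount_raw, '^-?[0-9]{1,8}([.][0-9]+)?$') = 0;  -- 8 = precision 10 minus scale 2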

ORA-01438: value larger than specified precision allows for this column

We sometimes get the following error from our partner's database:
ORA-01438: value larger than specified precision allows for this column
The full response looks like the following:
<?xml version="1.0" encoding="windows-1251"?>
<response>
<status_code></status_code>
<error_text>ORA-01438: value larger than specified precision allows for this column ORA-06512: at "UMAIN.PAY_NET_V1_PKG", line 176 ORA-06512: at line 1</error_text>
<pay_id>5592988</pay_id>
<time_stamp></time_stamp>
</response>
What can be the cause for this error?
The number you are trying to store is too big for the field. Look at the SCALE and PRECISION. The difference between the two (precision minus scale) is the number of digits ahead of the decimal place that you can store.
select cast (10 as number(1,2)) from dual
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
select cast (15.33 as number(3,2)) from dual
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
Anything beyond the scale gets rounded away (silently)
select cast (5.33333333 as number(3,2)) from dual;
CAST(5.33333333ASNUMBER(3,2))
-----------------------------
5.33
The error is not about a character field but a numeric one. (If it were a string problem, as WW mentioned, you'd get a 'value too large' error or something similar.) Probably you are using more digits than are allowed, e.g. 1000000001 in a column defined as NUMBER(10,2).
Look at the source code, as WW mentioned, to figure out which column may be causing the problem. Then, if possible, check the data being inserted there.
Further to the previous answers, note that a column defined as VARCHAR2(10) will store 10 bytes, not 10 characters, unless you define it as VARCHAR2(10 CHAR).
[The OP's question seems to be number related... this is just in case anyone else has a similar issue]
This indicates you are trying to put something too big into a column. For example, you have a VARCHAR2(10) column and you are putting in 11 characters. The same applies to numbers.
This is happening at line 176 of package UMAIN.PAY_NET_V1_PKG. You would need to go and have a look at that line to see what it is doing. Hopefully you can look it up in your source control (or from user_source). Later versions of Oracle report this error better, telling you which column and what value.
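If you have access to the partner schema, a hedged sketch of pulling the offending lines out of the data dictionary (the package name and line number are taken from the error text above; the line range is arbitrary):
SELECT line, text
FROM   all_source
WHERE  owner = 'UMAIN'
AND    name  = 'PAY_NET_V1_PKG'
AND    type  = 'PACKAGE BODY'
AND    line BETWEEN 170 AND 180
ORDER  BY line;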
FYI:
Numeric field size violations will give
ORA-01438: value larger than specified precision allowed for this column
VARCHAR2 field length violations will give
ORA-12899: value too large for column...
Oracle distinguishes between the column data types through the error code and message.
One issue I've had, and it was horribly tricky, was that the OCI call to describe a column's attributes behaves differently depending on the Oracle version. Describing a simple NUMBER column created without any precision or scale returns different results on 9i, 10g and 11g.
From http://ora-01438.ora-code.com/ (the definitive resource outside of Oracle Support):
ORA-01438: value larger than specified precision allowed for this column
Cause: When inserting or updating records, a numeric value was entered that exceeded the precision defined for the column.
Action: Enter a value that complies with the numeric column's precision, or use the MODIFY option with the ALTER TABLE command to expand the precision.
http://ora-06512.ora-code.com/:
ORA-06512: at stringline string
Cause: Backtrace message as the stack is unwound by unhandled exceptions.
Action: Fix the problem causing the exception or write an exception handler for this condition. Or you may need to contact your application administrator or DBA.
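The ALTER TABLE ... MODIFY action mentioned for ORA-01438 looks roughly like this (the table, column, and new precision are made-up examples):
ALTER TABLE payments MODIFY (amount NUMBER(12,2));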
It might be a good practice to define variables like below:
v_departmentid departments.department_id%TYPE;
NOT like below:
v_departmentid NUMBER(4)
It is also possible to get this error code if you are using PHP and bound integer variables (oci_bind_by_name with SQLT_INT).
If you try to insert NULL via the bound variable, then you get this error, or sometimes the value 2 is inserted (which is even worse).
To solve this issue, you must bind the variable as a string (SQLT_CHR) with a fixed length instead. Before inserting, NULL must be converted into an empty string (which Oracle treats as NULL), and all other integer values must be converted into their string representation.
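As a quick sanity check of the empty-string-is-NULL behaviour mentioned above, this runs in any Oracle session:
SELECT CASE WHEN '' IS NULL THEN 'empty string is treated as NULL'
            ELSE 'empty string is not NULL' END AS result
FROM dual;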