Why am I getting a "[SQL0802] Data conversion of data mapping error" exception?

I am not very familiar with iSeries/DB2. However, I work on a website that uses it as its primary database.
A new column was recently added to an existing table. When I view it via AS400, I see the following data type:
Type: S
Length: 9
Dec: 2
This tells me it's a zoned numeric field with 7 digits before the decimal point and 2 digits after the decimal point.
When I query the data with a simple SELECT (SELECT MYCOL FROM MYTABLE), I get back all the records without a problem. However, when I try using a DISTINCT, GROUP BY, or ORDER BY on that same column I get the following exception:
[SQL0802] Data conversion of data mapping error
I've deduced that at least one record has invalid data - what my DBA calls "blanks" or "4 O". How is this possible, though? Shouldn't the database throw an exception when an attempt is made to add invalid data to that column?
Is there any way I can get around this, such as filtering out those bad records in my query?

"4 O" means 0x40 which is the EBCDIC code for a space or blank character and is the default value placed into any new space in a record.
Legacy programs / operations can introduce the decimal data error. For example if the new file was created and filled using the CPYF command with the FMTOPT(*NOCHK) option.
The easiest way to fix it is to write an HLL program (RPG) to read the file and correct the records.

The only solution I could find was to write a script that checks for blank values in the column and then updates them to zero when they are found.
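A hedged sketch of such a script, using the MYTABLE/MYCOL names from the question: a zoned NUMERIC(9,2) field that is all blanks is stored as nine X'40' bytes, so HEX() of the raw value is '404040404040404040'. This assumes HEX() on your release reads the internal bytes without attempting a numeric conversion; test it on a copy of the file first.
-- Find the rows whose zoned bytes are all blanks (nine bytes of X'40').
SELECT RRN(T) AS REL_REC_NBR, HEX(MYCOL) AS RAW_BYTES
FROM MYTABLE T
WHERE HEX(MYCOL) = '404040404040404040';
-- Once the rows are confirmed, reset the bad values to zero.
UPDATE MYTABLE
SET MYCOL = 0
WHERE HEX(MYCOL) = '404040404040404040';
The same HEX() predicate can also be used to filter the bad rows out of a SELECT until the data has been repaired.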

If the file has record format level checking turned off [i.e. LVLCHK(*NO)], or is overridden to that, then an HLL program (e.g. RPG, COBOL, etc.) that was not recompiled with the new record format might write out records with invalid data in this column, especially if the new column is not at the end of the record.
Make sure that all programs that use native I/O to write or update records on this file are recompiled.

I was able to solve this error by force-casting the key columns to integer. I changed the join from this...
FROM DAILYV INNER JOIN BXV ON DAILYV.DAITEM=BXV.BXPACK
...to this...
FROM DAILYV INNER JOIN BXV ON CAST(DAILYV.DAITEM AS INT)=CAST(BXV.BXPACK AS INT)
...and I didn't have to make any corrections to the tables. This is a very old, very messy database with lots of junk in it. I've made many corrections, but it's a work in progress.

Related

"Numeric value '' is not recognized" - what column?

I am trying to insert data from a staging table into the master table. The table has nearly 300 columns with a mix of data types: varchars, integers, decimals, dates, etc.
Snowflake gives the unhelpful error message of "Numeric value '' is not recognized"
I have gone through and cut out various parts of the query to try and isolate where it is coming from. After several hours and cutting every column, it is still happening.
Does anyone know of a Snowflake diagnostic query (like Redshift has) which can tell me a specific column where the issue is occurring?
Unfortunately not at the point you're at. If you went back to the COPY INTO that loaded the data, you'd be able to use the VALIDATE() function to get better information down to the record and byte-offset level.
I would query your staging table for just the numeric fields and look for blanks, or you can wrap all of the fields destined for numeric columns in try_to_number(). A bit tedious, but it might not be too bad if you don't have a lot of numbers.
https://docs.snowflake.com/en/sql-reference/functions/try_to_decimal.html
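As a minimal sketch of that check (table and column names here are hypothetical), rows where a value is present but cannot be parsed as a number can be listed with:
SELECT *
FROM my_staging_table
WHERE col_a IS NOT NULL
  AND TRY_TO_NUMBER(col_a) IS NULL;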
As a note, when you stage, you should try and use the NULL_IF options to get rid of bad characters and/or try to load them into stage using the actual datatypes in your stage table, so you can leverage the VALIDATE() function to make sure the data types are correct before loading into Snowflake.
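A hedged sketch of that load-time approach, with hypothetical stage, file, and table names:
-- Load with NULL_IF so empty strings become NULL instead of failing numeric columns.
COPY INTO my_staging_table
FROM @my_stage/data.csv
FILE_FORMAT = (TYPE = CSV NULL_IF = ('', 'NULL'));
-- Inspect the errors from the most recent COPY INTO this table.
SELECT * FROM TABLE(VALIDATE(my_staging_table, JOB_ID => '_last'));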
Query your staging using try_to_number() and/or try_to_decimal() for the number and decimal fields of the table, and then use MINUS to get the difference:
Select $1, $2, ... $300 from #stage
minus
Select $1, try_to_number($2), ... $300 from #stage
If any number field holds a string that cannot be converted, it will come back as NULL, and the MINUS should then return the rows that have a problem. Once you have those rows, analyze the columns in the result set for errors.

Import PostgreSQL dump into SQL Server - data type errors

I have some data which was dumped from a PostgreSQL database (allegedly, using pg_dump) which needs to get imported into SQL Server.
While the data types are OK, I am running into an issue where there seems to be a placeholder for a NULL. I see a backslash followed by an uppercase N in many fields. Below is a snippet of the data, as viewed from within Excel. The left column has a Boolean data type, and the right one has an integer data type.
Some of these are supposed to be of the Boolean datatype, and having two characters in there is most certainly not going to fly.
Here's what I tried so far:
Importing via a dirty read, keeping whatever data types SSIS decided each field had; to no avail, as there were error messages about truncation on all of the Boolean fields.
Creating a table for the data based on the correct data types, though this was more fun... I needed to do the same as in the dirty read, as the source would otherwise not load properly. There was also a need to transform the data into the correct data type for insertion into the destination, yet I am getting truncation issues when there most certainly shouldn't be any.
Here is a sample expression in my derived column transformation editor:
(DT_BOOL)REPLACE(observation,"\\N","")
The data type should be Boolean.
Any suggestion would be really helpful!
Thanks!
Since I was unable to circumvent the SSIS rules in order to get my data into my tables without an error, I took the quick-and-dirty approach.
The solution which worked for me was to have the source read each column as if it were a string, and to make every field in the destination table VARCHAR. This destination table is used as a staging table; once the data is in SQL Server, I can manipulate it as needed.
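For example (a hedged sketch with hypothetical target and column names, assuming SQL Server 2012+ for TRY_CAST), the \N placeholder can then be turned into a real NULL and the strings converted inside the database; the Boolean column will likely need a CASE expression mapping whatever literals the dump uses:
-- Map the '\N' placeholder to NULL, then convert; TRY_CAST returns NULL rather than failing.
INSERT INTO dbo.FinalTable (observation, some_int_col)
SELECT
    CASE NULLIF(observation, '\N') WHEN 't' THEN 1 WHEN 'f' THEN 0 END,  -- adjust literals to the dump
    TRY_CAST(NULLIF(some_int_col, '\N') AS int)
FROM dbo.StagingTable;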
Thank you #cha for your input.

Insert data via SSIS package and different datatypes

I have a table with a column1 nvarchar(50) null. I want to insert this into a tighter table with an nvarchar(30) not null column. My idea was to insert a Derived Column task between the source and destination tasks with this expression: Replace column1 = (DT_WSTR,30)Column1
I get the "truncation may occur error" and I am not allowed to insert the data into the new tighter table.
Also I am 100% sure that no values are over 30 characters in the column. Moreover I do not have the possibility to change the column data type in the source.
What is the best way to create the ETL process?
JotaBe recommended using a Data Conversion transformation. Yes, that is another way to achieve the same thing, but it will also error out if truncation occurs. Your way should work (I tried it), provided the input data really is no longer than 30 characters.
You could modify your derived column expression to
(DT_WSTR,30)SUBSTRING([Column1], 1, 30)
Consider changing the truncation error disposition of the Derived Column component within your Data Flow. By default, a truncation will cause the Derived Column component to fail. You can configure the component to ignore or redirect rows which are causing a truncation error.
To do this, open the Derived Column Transformation editor and click the 'Configure Error Output...' button in the bottom-left of the dialog. From here, change the setting in the 'Truncation' column for any applicable columns as required.
Be aware that any data which is truncated for columns ignoring failure will not be reported by SSIS during execution. It sounds like you've already done this, but it's important to be sure you've analysed your data as it currently stands and taken into consideration any possible future changes to the nature of the data before disabling truncation reporting.
To do so you must use a Data Conversion Transformation, which allows you to change the data type from the original nvarchar(50) to the desired nvarchar(30).
You'll get a new column with the required data type.
Of course, you can decide what to do in case of a truncation error by configuring this component.
UPDATE
As there are people who have downvoted this answer, let's add 3 more comments:
this solution is checked and works. Create a table with an nvarchar(50) column and a new table with an nvarchar(30) column, add a data flow that uses a Data Conversion transform, and it works without a glitch. Please check it, I guarantee it. Besides, as the OP states, "Also I am 100% sure that no values are over 30 characters in the column", so in his case there will be no truncation problems. However, I recommend handling the possible errors, just in case they happen.
from MSDN: "a package can perform the following types of data conversions: ... Set the column length of string data"
from MSDN: "If the length of an output column of string data is shorter than the length of its corresponding input column, the output data is truncated."

Querying text file with SQL converts large numbers to NULL

I am importing data from a text file and have hit a snag. I have a numeric field which occasionally has very large values (10 billion+), and some of these values are being converted to NULLs.
Upon further testing I have isolated the problem as follows: the first 25 rows of data are used to determine the field size, and if none of the first 25 values is large then it throws out any value >= 2,147,483,648 (2^31) which comes after.
I'm using ADO and the following connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=FILE_ADDRESS;Extended Properties=""text;HDR=YES;FMT=Delimited""
Therefore, can anyone suggest how I can get around this problem without having to sort the source data descending on the large-value column? Is there some way I could define the data types of the recordset prior to importing, rather than letting it decide for itself?
Many thanks!
You can use a Schema.ini file placed in the directory you are connecting to, which describes the column types.
See here for details:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms709353(v=vs.85).aspx
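A minimal sketch of such a Schema.ini (file and column names are placeholders; the section header must match the name of the text file, and Schema.ini sits in the same folder as the data):
[myexport.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=SomeTextField Text
Col2=BigValueField Double
Declaring the large column as Double (or Currency) instead of letting the driver infer a Long keeps values above 2^31 from being dropped.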

T-SQL: How to log erroneous entries during import

I do an import and convert columns to other data types. During the import I check whether I can convert the values stored in the columns into the target data types. If not, I put in a default value; e.g. I convert a string to int, and when I cannot convert the string I insert -1.
It would be great if I could log the erroneous entries into a separate table; e.g. instead of a parsable string like '1234', 'xze' arrives, so I put -1 into the target table and 'xze' into the log table.
Is this possible with (T-)SQL?
Cheers,
Andreas
This is also very easy to do with SSIS data flow tasks. Any of the data conversion or lookup steps you might try have a "success" output and an "error" output. You simply direct all the error outputs to a Union All transform, and from there into a common error table. The results of all the successes go into your "success" table. You can even add extra details to the error table, to give you clear error messages.
The nice thing about this is that you still get high performance, as entire buffers move through the system. You'll eventually have buffers full of valid data being bulk written to the success table, and small buffers full of errors being written to the error table. When errors happen on a row, that row will simply be moved from one buffer of rows into another.
If you have a staging table, you can filter the good stuff into the final table and do a join back (NOT EXISTS) to find the rubbish for the log table.
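A hedged sketch of that approach, with hypothetical table, key, and column names (TRY_CAST assumes SQL Server 2012 or later):
-- 1. Move only the rows whose value converts cleanly.
INSERT INTO dbo.TargetTable (RowKey, SomeIntValue)
SELECT RowKey, TRY_CAST(SomeIntValue AS int)
FROM dbo.StagingTable
WHERE TRY_CAST(SomeIntValue AS int) IS NOT NULL;
-- 2. Join back to log the raw values that did not make it.
INSERT INTO dbo.ErrorLog (RowKey, RawValue)
SELECT s.RowKey, s.SomeIntValue
FROM dbo.StagingTable s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.RowKey = s.RowKey);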