How to capture Data Truncation as an Error in Informatica - error-handling

I want to make my session fail whenever data truncation is encountered.
In my current scenario the session uses a Teradata TPT script to load data from source to target.
Sometimes, due to a data length mismatch, truncated data gets loaded to the target without any error being thrown.
EX: the source is VARCHAR(15) and the target is VARCHAR(10). In this scenario my session only throws a warning; internally it trims off the remaining 5 characters and loads the first 10 characters into the target. I want the session to fail if any truncation occurs.
So far, through googling, I have tried two options:
1) Reject truncated/overflowed rows - I checked this in the target properties
2) Stop on errors - I set this to 1
But this still doesn't solve the problem. Please suggest any other way to achieve this.

When you select 'Reject truncated/overflowed rows', add a post-session check on the number of rows in the reject file and mark the session to fail the parent if that count is greater than zero.

Related

DB2 LOAD Modifier - GeneratedOverride or IdentityOverride

I am performing a DB2 load, and I am struggling to understand the impact of using GeneratedOverride over IdentityOverride. When I run the following command:
db2 load from tab123.ixf of ixf replace into application.table_abc
All rows are rejected, with the following error being the culprit:
SQL3550W The field value in row row-number and column column-number is not NULL, but the target column has been defined as GENERATED ALWAYS.
So to try and step around this, I executed:
db2 load from tab123.ixf of ixf modified by identityoverride replace into application.table_abc
But this immediately returned this error:
SQL3526N The modifier clause "IDENTITY OVERRIDE" is inconsistent with the current load command. Reason code: "3".
From checking the reason code I see that the issue is "Generated or identity related file type modifiers have been specified but the target table contains no such columns." .. but the SQL3550W error seems to imply that the columns are generated always!
The only way I can get these rows to commit to the table is to run..
db2 load from tab123.ixf of ixf modified by generatedoverride replace into application.table_abc
Can anyone enlighten me as to why I am receiving the SQL3526N error, or what the implications of running generatedoverride are?
Thanks for sticking with me..
Generated columns are not necessarily identity columns; apparently that's the case in your situation. Check the CREATE TABLE syntax to see what other ways there are to generate column values.
By using the GENERATEDOVERRIDE option during the load you are obviously replacing (overriding) the generated values with those from the input file.
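As a rough sketch of that difference (the table and column names here are made up for illustration only), a DB2 table can contain an identity column, an expression-generated column, or both:

-- illustrative only: id is an identity column, order_tax is an expression-generated column.
-- IDENTITYOVERRIDE relates to the former kind, GENERATEDOVERRIDE to the latter.
create table application.demo_generated (
    id          integer      generated always as identity,
    order_total decimal(9,2) not null,
    order_tax   decimal(9,2) generated always as (order_total * 0.07)
);

If your table only has the expression-generated kind, that would explain why IDENTITY OVERRIDE is rejected with reason code 3 while GENERATEDOVERRIDE is accepted.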

Pentaho table output step not showing proper error in log

In Pentaho, I have a Table output step where I load a huge number of records into a Netezza target table.
One of the rows fails and the log shows me which values are causing the problem. But the log is probably not right, because when I create an insert statement with those values and run it separately on the database, it works fine.
My question is:
In Pentaho, is there a way to identify that when a db insert fails, exactly which values caused the problem and why?
EDIT: The error is 'Column width exceeded' and it shows me the values that are supposedly causing the problem. But I made an insert statement with those values and it works fine. So I think Pentaho is not showing me the correct error message; it is a different set of values that is causing the problem.
Another way I've used to deal with this kind of problem is to create another table in the DB with widened column types. Then in your transform, add a Table output step connected to the new table. Then connect your original Table output to the new step, but when asked, choose 'Error handling' as the hop type.
When you run your transform, the offending rows will end up in the new table. Then you can investigate exactly what the problem is with that particular row.
For example you can do something like:
insert into [original table] select * from [error table];
You'll probably get a better error message from your native DB interface than from the JDBC driver.
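To pin down which values actually exceed the original column width, a quick check along these lines against the widened error table can help (the table name, column name, and the width of 10 are placeholders, not taken from your transform):

-- placeholder names: list values in the error table that are longer than the
-- original column width (assumed here to be 10).
select some_char_column, length(some_char_column) as actual_len
from error_table
where length(some_char_column) > 10;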
I don't know exactly what your problem is, but I think I had the same problem before.
Everything seemed right, but the problem was that in some transformations, when converting a numeric value to a string for example, the transformation added a whitespace at the end of the field, so the length of the field was n+1 instead of n, and that is very difficult to see.
A practical example: if you use the YEAR() function in a Calculator step to extract the year from a date field, the new year field may get a whitespace appended, so a year that had a length of 4 ends up with a length of 5. When you then load a row (with that string(5) year field) into a data warehouse column that expects a string(4), you get the same error you are getting now.
You think is happening --> year = "2013" --> length 4
Really is happening --> year = "2013 " --> length 5
I recommend paying close attention to string fields and their lengths, because if some transformation adds a whitespace you don't expect, you can lose a lot of time finding the error (my own experience).
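If you want to confirm this from the database side, one simple check (table and column names here are placeholders) is to compare the raw and trimmed lengths of the suspect field wherever the intermediate data lands:

-- placeholder names: find values whose raw length differs from their trimmed
-- length, i.e. values carrying leading or trailing whitespace.
select year_field,
       length(year_field)       as raw_len,
       length(trim(year_field)) as trimmed_len
from staging_table
where length(year_field) <> length(trim(year_field));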
I hope this can be useful for you!
EDIT: I'm guessing you are working with PDI (Spoon, formerly known as Kettle) and the error occurs when you are loading a data warehouse, so correct me if I'm wrong.
Can you use the file with the nzload command? With this command you can find the exact error, and the bad records end up in the bad-records file you provide, for detailed analysis.
e.g. -
nzload -u <username> -pw <password> -host <netezzahost> -db <database> -t <tablename> -df <datafile> -lf <logfile> -bf <badrecords file name> -delim <delimiter>

Oracle error: Application failover does not support non-single-SELECT statement

We are getting the following error using Oracle:
[Oracle JDBC Driver]Application failover does not support non-single-SELECT statement
The error occurs when we try to run a delete or insert over a large number of rows (tens of millions of rows).
I know that the script works, because it had been working for almost a year before these error messages started to pop up.
We know that no one changed any database configuration, so we figured that the problem must be the volume of processed data (the row count keeps growing as time goes by...).
But we have never seen that kind of error before! What does it mean? It seems that a failover engine tries to recover from an error, but when Oracle is 'taken over' by this engine, it enters a more restricted state where some kinds of queries do not work (like Windows Safe Mode...).
Well, if this is what is happening, how can I get the real error message? The one that triggers the failover mechanism?
BTW, below is one of the deletes that triggers the error:
delete from odf_ca_rnv_av_snapshot_week
(we tried this one just to test the simplest delete we could think of... a truncate won't help us with the real deal :) )
Check this link.
The error seems to come not from Oracle or JDBC, but from "progress". It means that it can only recover from SELECT statements and not from DML.
You'll have to figure out why the failover occurs in the first place.

SSIS package fails with error: “Text was truncated or one or more characters had no match in the target code page.”

I recently updated an SSIS package that had been working fine and now I receive the following error:
Text was truncated or one or more characters had no match in the target code page.
The package effectively transferred data from tables in one database to a table in another database on another server. The update I made was to add another column to the transfer. The column is Char(10) in length and it is the same length on both the source and destination servers. Before the data is transferred it is Char(10) there as well. I've seen people reporting this error in blog posts as well as on Stack Overflow, but none of what I have read has helped. One solution I read about involved using a data conversion to explicitly convert the offending column; this did not help (or I misapplied the fix).
Which version of SQL Server and SSIS are you using?
I would say to take a look at the output and input fields of your components. CHAR always occupies its full length (I mean, char(10) will always use 10 bytes), and since you are getting a truncation error, that may be a starting point. Try increasing the size of the field or casting to varchar in the query that loads the data (not as a permanent solution, just to try to isolate the problem).
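For example, something along these lines in the source query can help show whether the new column is the one being truncated (the table and column names are placeholders, not from your package):

-- placeholder names: cast the suspect CHAR(10) column wider in the source
-- query only, to see whether the truncation error goes away.
select cast(new_char_column as varchar(50)) as new_char_column,
       some_other_column
from dbo.source_table;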
Which connection are you using, ADO.NET or OLEDB?
Try deleting and recreating the source and destination if there are not too many changes to make. Sometimes the metadata causes these problems. If this doesn't solve your problem, post a screenshot of the error.

Bulk Insert Error Messages SQL Server 2008 R2

Is there any way to capture the error messages that occur during a bulk insert?
If I specify an error file, I get two separate files: one that contains the error information and one that contains the rejected rows.
The messages that are displayed for errors contain more information:
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 6 (temp_batch_date).
Is it possible to write these messages into a temp table so I can handle them accordingly?
Check out the SqlBulkCopy class to perform the operation and you should have programmatic access to any errors (exceptions) that are generated. This should allow you to attempt recovery/logging.
If you don't want to use Visual Studio (why not? it's free), you could also use PowerShell.
Handling these errors outside of SQL Server really opens up what/how you can work around hurdles with your source data.
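If you would rather stay inside T-SQL, a rough sketch like the following (table, file, and column names are placeholders) wraps the BULK INSERT in TRY...CATCH and logs whatever error text SQL Server raises into a temp table. Not every bulk-load error is catchable this way, so treat it as a starting point rather than a guaranteed catch-all:

-- placeholder names throughout. Running the BULK INSERT through dynamic SQL
-- makes batch-aborting errors more likely to reach the CATCH block.
if object_id('tempdb..#load_errors') is not null drop table #load_errors;
create table #load_errors (
    error_number  int,
    error_message nvarchar(4000),
    logged_at     datetime default getdate()
);

begin try
    exec ('
        bulk insert dbo.temp_batch
        from ''C:\loads\batch.txt''
        with (fieldterminator = '','',
              rowterminator   = ''\n'',
              errorfile       = ''C:\loads\batch_errors.txt'',
              maxerrors       = 0)
    ');
end try
begin catch
    insert into #load_errors (error_number, error_message)
    values (error_number(), error_message());
end catch;

select * from #load_errors;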