VS 2005 SSIS Error value origin - sql

I have an ssis package created in vs 2005 that has started to give me the following error:
[Lawson Staging Table [4046]] Error: There was an error with input column "JOB_CODE" (4200) on input "OLE DB Destination Input" (4059). The column status returned was: "The value violated
the integrity constraints for the column.".
My first question is: what are the 4046, 4200 & 4059 values following my table, column and destination?
My second question is about the integrity constraint message. The destination table is a heap (no keys or indexes) with no constraints. The destination column is defined as a varchar(10). The input column is from oracle, is defined as char(9) and is called job_code. So - where is there an integrity constraint defined?
The final question is about the select statement, which looks like the following:
Select ...
,lpad(trim(e.job_code),10,'0') as job_code ...
If I take the lpad and trim functions out, it works, but I need these functions in place because my spec calls for a fixed-length column padded with leading zeros. This column returns data as expected in TOAD but fails in the SSIS package. Does anyone see an issue with how the functions are being used?
Since this package worked in the past but suddenly started to throw this error, I'm assuming that new invalid data has come into play. However, recently added rows don't seem to be any different than historical records.

Those numbers are most likely the IDs assigned to each task/table/column, etc.
You could probably go to the advanced editor of the data flow task and look at the input and output properties. There you can see that each input and each column has an ID assigned.
Next: the error that you are getting usually occurs when the "Allow Nulls" option is unchecked.
Try this (a T-SQL sketch follows these steps):
Look at the name of the column for this error/warning.
Go to SSMS and find the table.
Allow Nulls for that column.
Save the table.
Rerun the SSIS package.
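For the "Allow Nulls" step, here is a minimal T-SQL sketch; the table name dbo.LawsonStaging is an assumption based on the component name in the error, so adjust it to your schema:
-- Hypothetical destination table: allow NULLs on the failing column
ALTER TABLE dbo.LawsonStaging ALTER COLUMN JOB_CODE varchar(10) NULL;
As for where NULLs might come from: in Oracle, TRIM of an all-blank string returns NULL, and LPAD(NULL, 10, '0') is still NULL, so lpad(trim(e.job_code),10,'0') can produce NULLs that the raw char(9) column never did. That would also explain why the package works once the functions are removed. You can check on the Oracle side with something like:
-- source_table stands in for whatever table the alias e refers to
SELECT e.job_code FROM source_table e WHERE lpad(trim(e.job_code),10,'0') IS NULL;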

How to fix this error: Microsoft.Data.SqlClient.SqlException (0x80131904): Invalid column name 'NormalizedEmail', etc.

My application runs off a database that I created. I added a migration, ran update-database, and all that jazz, so it works perfectly fine.
Now that I have to convert my project to use the LIVE database, I'm getting this error message:
Microsoft.Data.SqlClient.SqlException (0x80131904): Invalid column name 'NormalizedEmail'.
Invalid column name 'ConcurrencyStamp'.
Invalid column name 'LockoutEnd'.
Invalid column name 'NormalizedEmail'.
Invalid column name 'NormalizedUserName'.
Invalid column name 'UserType'.
I'm not sure how to go about fixing this, but when I use my old database it works; with this new one it just keeps giving me this error when I try to log in a user or do anything with my database.
Help please!
Thank you!
Remove these columns from the code.
I think it's a mismatch with the database.
The problem is very simple to state and very difficult to solve. But there IS a solution.
Manual solution
If all automatic approaches fail and you do not have any extra information, then you can at least ensure that your schema is technically compatible with the application's expectations.
Depending on the RDBMS that you use (which was not specified in the question), you can get all the table names and column names: in MySQL and PostgreSQL you could query information_schema.columns; in SQL Server you can join sys.tables and sys.columns for that purpose. Make sure that you order the results by table name and column name, and export them. Do it both for your old DB and the prod DB, find out what the differences are, and implement ALTER statements to add the missing columns (see the sketch below).
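A minimal sketch of those inventory queries, plus example ALTER statements. The AspNetUsers column types below are the standard ASP.NET Core Identity defaults, and UserType is app-specific, so verify every type against your dev database before running anything:
-- SQL Server: list every table/column, ordered for diffing
SELECT t.name AS table_name, c.name AS column_name
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
ORDER BY t.name, c.name;
-- MySQL / PostgreSQL equivalent
SELECT table_name, column_name
FROM information_schema.columns
ORDER BY table_name, column_name;
-- Example ALTERs for the missing Identity columns (SQL Server syntax)
ALTER TABLE dbo.AspNetUsers ADD NormalizedEmail nvarchar(256) NULL;
ALTER TABLE dbo.AspNetUsers ADD NormalizedUserName nvarchar(256) NULL;
ALTER TABLE dbo.AspNetUsers ADD ConcurrencyStamp nvarchar(max) NULL;
ALTER TABLE dbo.AspNetUsers ADD LockoutEnd datetimeoffset(7) NULL;
-- UserType is a custom column; copy its exact type from the dev database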
Automatic solution
If you have some scripts versioned somewhere that were doing the ALTERs and possibly filling the new columns with data, then run those either by hand or with a migration tool. If such files exist, make sure you find them.
Removal
You can also remove the column references from code, as Cemil suggested in his/her answer, but you should avoid doing this, unless you are absolutely sure that it is feasible for your situation. The basic assumption is that the code references these for a reason and you are missing the columns from the database where they need to be created. Do not remove the column references from code until this basic assumption is proven wrong.

Finding the column throwing exception during data migration with SSIS from Oracle to MS SQL

I am working on a data migration project. In the current task, I have to select data from n tables in Oracle, join them, and insert the data into a single SQL Server table. The number of rows is in the millions.
Issue: There is data in Oracle which, when we try to insert it into SQL Server, throws an exception. For example, the data type of the Oracle column is VARCHAR2 whereas in SQL Server it's int. The data is numeric, but a few columns contain special characters like ','. This is one example that will fail when we try to insert into the SQL table; it's failing for many such columns.
I am using SSIS for this task. I am moving the IDs of the rows that throw such errors, as in the example above, into an error table.
Question: I want the column name for which the insertion is failing for each row. Is there an option in SSIS? On error, I want to store the ID and the column name in an error table.
I searched the internet but didn't find anything. In SSIS, we do have an option to configure the rows having errors, but I didn't find an option that gives the column name to insert into an error table.
Edit: The data will come in on a daily basis, i.e. the SSIS package will be executed daily.
The Error Output contains many columns providing information about it.
The list of columns includes the columns in the component input, the ErrorCode and ErrorColumn columns added by previous error outputs, and the ErrorCode and ErrorColumn columns added by this component.
If you are using an OLE DB Destination, you cannot redirect the error rows while using the Fast Load option. And since you mentioned that
The number of rows is in the millions.
it is not recommended to use row-by-row insertion.
If there are only a few columns, I suggest adding a Data Conversion Transformation and using its error output to get the error information (see the error-table sketch below).
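As a minimal sketch, the redirected error rows could land in a table like the one below; all the names other than ErrorCode and ErrorColumn (the two columns SSIS adds to every error output) are illustrative:
-- Hypothetical error table for rows redirected from the error output
CREATE TABLE dbo.EtlErrorLog (
    SourceId    int       NOT NULL, -- the row's ID from the source
    ErrorCode   int       NULL,     -- SSIS error code from the error output
    ErrorColumn int       NULL,     -- lineage ID of the failing column
    LoadDate    datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
Keep in mind that ErrorColumn holds a lineage ID rather than a name; you can map it back to a column name via the IDs shown in the component's Advanced Editor.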
References and helpful links
Configuring Error Output Columns
SSIS how to redirect the rows in OLEDB Destination when the fast load option is turned on and maximum insert commit size set to zero
Error Handling in Data
Error Handling With OLE DB Destinations

DB2 LOAD Modifier - GeneratedOverride or IdentityOverride

I am performing a DB2 load, and I am struggling to understand the impact of using GeneratedOverride over IdentityOverride. When I run the following command:
db2 load from tab123.ixf of ixf replace into application.table_abc
All rows are rejected, with the following error being the culprit:
SQL3550W The field value in row row-number and column column-number is not NULL, but the target column has been defined as GENERATED ALWAYS.
So to try and step around this, I executed:
db2 load from tab123.ixf of ixf modified by identityoverride replace into application.table_abc
But this immediately returned this error:
SQL3526N The modifier clause "IDENTITY OVERRIDE" is inconsistent with the current load command. Reason code: "3".
From checking the reason code, I see that the issue is "Generated or identity related file type modifiers have been specified but the target table contains no such columns." .. but the SQL3550W error seems to imply that the columns are GENERATED ALWAYS!
The only way I can get these rows to commit to the table is to run:
db2 load from tab123.ixf of ixf modified by generatedoverride replace into application.table_abc
Can anyone enlighten me as to why I am receiving the SQL3526N error, or what the implications of running generatedoverride are?
Thanks for sticking with me.
Generated columns are not necessarily identity columns; apparently that's the case in your situation. Check the CREATE TABLE syntax to see what other ways there are to generate column values (a sketch follows below).
By using the GENERATEDOVERRIDE option during the load you are, obviously, replacing (overriding) the generated values with those from the input file.
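For illustration, a DB2 sketch (the table and column names here are made up) of a table where a column is GENERATED ALWAYS without being an identity column:
-- An identity column and an expression-generated column are different things
CREATE TABLE application.demo_tab (
    id        INTEGER GENERATED ALWAYS AS IDENTITY,         -- identity: IDENTITYOVERRIDE applies
    amount    DECIMAL(9,2) NOT NULL,
    amount_x2 DECIMAL(9,2) GENERATED ALWAYS AS (amount * 2) -- expression: GENERATEDOVERRIDE applies
);
If the generated columns in table_abc are expression-based (or, say, ROW CHANGE TIMESTAMP columns) rather than identity columns, that would explain why IDENTITYOVERRIDE reports "no such columns" while GENERATEDOVERRIDE succeeds.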

SSIS string truncation error

I have a SQL Server 2005 SP2 database which has a table with a poc_resp_city attribute which is nvarchar(35).
It was changed to nvarchar(80) two months ago without aligning the very same attribute in the data warehouse (which still has nvarchar(35)).
The SSIS data loading package (after two months of working properly) now fails every time I run it, with the following error:
There was an error with output column "poc_resp_city" (2250) on output
"OLE DB Source Output" (11). The column status returned was: "Text was
truncated or one or more characters had no match in the target code
page.". There was an error with output column "poc_resp_city"
(2250) on output "OLE DB Source Output" (11). The column status
returned was: "Text was truncated or one or more characters had no
match in the target code page.".
SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on
component "Source Table" (1) returned error code 0xC020902A. The
component returned a failure code when the pipeline engine called
PrimeOutput(). The meaning of the failure code is defined by the
component, but the error is fatal and the pipeline stopped executing.
There may be error messages posted before this with more information
about the failure.
Neither the package nor the databases were modified regarding this issue. I know that I could ignore this error or make the arrangements to get it working, but I want a proper and acceptable answer as to why this error appears two months after the modification. Maybe I'm missing an important step in this situation.
Important note: I don't have even a single record with more than 35 characters, so truncation never actually occurs. (This error comes from some kind of SSIS validation step.)
Now I think that maybe, after a period of time, the SSIS package recompiles itself and now sees this misalignment in its metadata (35 =/= 80), and because the TruncationRowDisposition attribute is set to RD_FailComponent, it fails the component.
And I would exclude the code page option, because every database column is nvarchar, not varchar, so this shouldn't be the case.
Thanks!
You need to refresh the size of the column (a catalog query to verify the lengths follows these steps):
Right-click the OLE DB Source -> Show Advanced Editor.
Choose the Input and Output Properties tab -> OLE DB Source Output -> Output Columns.
In the right panel, enter your new size in the Length row.
Click OK.
Or you can copy your query from the OLE DB Source, delete the OLE DB Source, insert a new OLE DB Source, and paste the query. This will automatically refresh your columns.
Just remember that there are probably more elements in the Data Flow where you need to edit the length of your column, like a Data Conversion...
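To confirm which side still carries the old length, a quick check against the catalog (dbo.YourTable is a placeholder):
-- nvarchar max_length is in bytes, i.e. 2 per character
SELECT c.name, c.max_length / 2 AS declared_chars
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('dbo.YourTable')
  AND c.name = 'poc_resp_city';
Run it against both the source database and the warehouse; you should see 80 on one side and 35 on the other.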

Import Package Error - Cannot Convert between Unicode and Non Unicode String Data Type

I have made a dtsx package on my computer using SQL Server 2008. It imports data from a semicolon-delimited CSV file into a table where all of the field types are NVARCHAR(MAX).
It works on my computer, but it needs to run on the client's server. Whenever they create the same package with the same CSV file and destination table, they receive the error above.
We have gone through the creation of the package step by step, and everything seems OK. The mappings are all correct, but when they run the package in the last step, they receive this error. They are using SQL Server 2005.
Can anyone advise where to begin looking for this problem?
The problem of converting from any non-unicode source to a unicode SQL Server table can be solved by:
add a Data Conversion transformation step to your Data Flow
open the Data Conversion and select Unicode for each data type that applies
take note of the Output Alias of each applicable column (they are named Copy Of [original column name] by default)
now, in the Destination step, click on Mappings
change all of your input mappings to come from the aliased columns in the previous step (this is the step that is easily overlooked and will leave you wondering why you are still getting the same errors)
At some point, you're trying to convert an nvarchar column to a varchar column (or vice versa).
Moreover, why is everything (supposedly) NVARCHAR(MAX)? That's a code smell if I ever saw one. Are you aware of how SQL Server stores those columns? The rows hold pointers to where the column values are actually stored, since values that large don't fit within the 8 KB pages.
Non-Unicode string data types:
Use DT_STR for the text file and VARCHAR for SQL Server columns.
Unicode string data types:
Use DT_WSTR for the text file and NVARCHAR for SQL Server columns.
The problem is that your data types do not match, so there could be a loss of data during the conversion.
Two solutions:
1- If the type of the target column is [nvarchar], change it to [varchar].
2- Add a "Derived Column" component to the SSIS package and add a new column with the following expression:
(DT_WSTR, «length») [ColumnName]
Length is the length of the column in the target table and ColumnName is the name of the column in the target table (a concrete example follows).
Finally, in the mapping step, use this newly added column instead of the original column.
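For instance, assuming a hypothetical target column City defined as nvarchar(50), the derived-column expression would be:
(DT_WSTR, 50) [City]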
Not sure if this is a best practice with SSIS, but sometimes I find their tools are a bit clunky when you want to do this type of activity.
Instead of using their components, you can convert the data within your query.
Instead of doing
SELECT myField = myNvarchar20Field
FROM myTable
You could do
SELECT myField = CONVERT(VARCHAR(20),myNvarchar20Field)
FROM myTable
This is a solution that uses the IDE to fix it:
Add a Data Conversion item to your data flow.
Double-click on the Data Conversion item and set the applicable columns to the Unicode string type.
Now double-click on the DB Destination item, click on Mapping, and ensure that your input column actually comes from the "Copy of [your column name]" column, which is the Data Conversion output, NOT the DB Source output (be careful here).
And that's it: save and run.
Mike, I had the same problem with SSIS in SQL Server 2005...
Apparently, the data flow destination object will always attempt to validate the incoming data as Unicode. Go to that object, Advanced Editor, Component Properties pane, and change the "ValidateExternalMetaData" property to False. Now go to the Input and Output Properties pane, Destination Input, External Columns, and set each column's Data Type and Length to match the database table it's going to. When you close that editor, those column changes will be saved and not validated over, and it will work.
Follow the steps below to avoid the "cannot convert between unicode and non-unicode string data types" error:
i) Add the Data Conversion transformation tool to your data flow.
ii) Open the Data Conversion and select the [string DT_STR] data type for the applicable columns.
iii) Then go to the destination flow and select Mapping.
iv) Change your input column to the "Copy of" column name.
Go to the registry configuration of the client and change the language setting.
For Oracle, go to HKLM\SOFTWARE\ORACLE\KEY_ORACLIENT...HOME\NLS_LANG and change it to the appropriate language.
The DTS Data Conversion task is time-consuming if there are 50-plus columns! I found a fix for this at the link below:
http://rdc.codeplex.com/releases/view/48420
However, it does not seem to work for versions above 2008, so this is how I had to work around the problem:
*Open the .DTSX file in Notepad++. Choose the language as XML.
*Go to the <DTS:FlatFileColumns> tag. Select all items within this tag.
*Find the string DTS:DataType="129" and replace it with DTS:DataType="130" (129 is the non-Unicode DT_STR type; 130 is the Unicode DT_WSTR type).
*Save the .DTSX file.
*Open the project again in Visual Studio BIDS.
*Double-click on the Source task. You should get the message:
the metadata of the following output columns does not match the metadata of the external columns with which the output columns are associated:
...
Do you want to replace the metadata of the output columns with the metadata of the external columns?
*Now click Yes. We are done!
Resolved - back to the original ask:
I've seen this before. The easiest way to fix it (you don't need all those data conversion steps, as ALL of the metadata is available from the source connection):
Delete the OLE DB Source & OLE DB Destination.
Make sure Delayed Validation is FALSE (you can set it to True later).
Recreate the OLE DB Source with your query, etc.
Verify in the Advanced Editor that all of the output data column types are correct.
Recreate your OLE DB Destination, map, create a new table (or remap to the existing one), and you'll see that SSIS got all the data types correct (same as the source).
So much easier than the stuff above.
Not sure if this is still a problem but I found this simple solution:
Right-Click Ole DB Source
Select 'Edit'
Select Input and Output Properties Tab
Under "Inputs and Outputs", Expand "Ole DB Source Output" External Columns and Output Columns
In Output columns, select offending field, on the right-hand panel ensure Data Type Property matches that of the field in External Columns properties
Hope this was clear and easy to follow
Sometimes we get this error when we select a static string as a field in the source query/view/procedure and the destination field's data type is Unicode.
Below is the issue I faced:
I used the script below at the source
and got the error message: Column "CATEGORY" cannot convert between Unicode and non-Unicode string data types.
Resolution:
I tried multiple options, but none worked for me. Then I prefixed the static value with N to make it Unicode, as below:
SELECT N'STUDENT DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM STUDENTS
UNION
SELECT N'FACULTY DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM FACULTY
If anyone is still experiencing this issue, I found that it was related to a difference in Oracle client versions.
I have posted my full experience and solution here: https://stackoverflow.com/a/43806765/923177
1. Add a Data Conversion tool from the toolbox.
2. Open it; it shows all the columns from the Excel file. Convert them to the desired output, and take note of the Output Alias of each applicable column (they are named "Copy of [original column name]" by default).
3. Now, in the Destination step, click on Mappings.
I changed ValidateExternalMetadata=False for each transformation task. It worked for me.