SSIS package fails with error: “Text was truncated or one or more characters had no match in the target code page.”

I recently updated an SSIS package that had been working fine and now I receive the following error:
Text was truncated or one or more characters had no match in the target code page.
The package transfers data from tables in one database to a table in another database on another server. The update I made was to add another column to the transfer. The column is Char(10) on both the source and destination server, and before the data is transferred it is Char(10) there as well. I've seen people reporting this error in blog posts as well as on Stack Overflow, but none of what I have read has helped. One solution I read about involved using a Data Conversion to explicitly change the offending column; this did not help (or I misapplied the fix).

Which version of SQL Server and SSIS are you using?
I would take a look at the output and input fields of your components. CHAR always occupies its full length (char(10) will always use 10 bytes), and since you are getting a truncation error, that may be a place to start. Try increasing the size of the field, or cast to varchar in the query that loads the data (not as a permanent solution, just to isolate the problem).
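For example, as a quick diagnostic only (the table and column names here are placeholders, not from the original package):

-- Cast the CHAR(10) column to a wider VARCHAR in the source query;
-- if the truncation error disappears, the source data is wider than expected.
SELECT CAST(MyCharColumn AS VARCHAR(50)) AS MyCharColumn
FROM dbo.SourceTable;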

Which connection are you using, ADO.NET or OLE DB?
Try deleting and recreating the source and destination if there aren't many changes you'd have to redo. Sometimes stale metadata causes these problems. If this doesn't solve your problem, post a screenshot of the error.

Related

Getting Error "The conversion of the varchar value '6160382514d97' overflowed an int column" in an SSIS Package

I have been searching for a solution to this and haven't had any luck.
I have an SSIS package which is loading data from one table and after some lookups, etc. writes it out to another table.
The above error is occurring during the first step of the Data Flow which is an OLEDB Source (SQL 2016 db).
The column in question is an nvarchar(250) and there is nothing that changes it to an int at any point.
I'm thinking that it must be some sort of implicit conversion, but why, when it is nvarchar all the way through?
I'm pulling my hair out with this, does anyone have any ideas please?
Thanks for the responses.
The issue seems to have been a buffer size issue with a lookup task.
When I checked the lookup table, it didn't have an appropriate index. I have added one and it seems to have solved the issue.
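For reference, the fix was an index along these lines (object and column names are hypothetical, not from the original package):

-- Index the key column that the Lookup task joins on
CREATE NONCLUSTERED INDEX IX_LookupTable_Key
    ON dbo.LookupTable (LookupKey);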
Sometimes SSIS keeps the old metadata with the auto-convert.
I suggest that you delete your OLE DB Source component and rebuild a new one with your SELECT. You can also open the Advanced Editor (right-click on the source component); in the last tab you can check the data type of your column and change it to varchar if it shows as an integer.
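If an implicit conversion is suspected, one way to test is to force the type explicitly in the source query (the names below are placeholders):

-- Keep the column character-typed so the SSIS metadata cannot infer an integer
SELECT CAST(ReferenceCode AS NVARCHAR(250)) AS ReferenceCode
FROM dbo.SourceTable;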

Oracle to SQL Server (SSIS) special character problems

I am trying to import data from Oracle to SQL Server through SSIS package data flow task.
One column has special characters and that column is also part of composite key in that table.
So after loading the data into SQL Server, enforcing the uniqueness fails, because during the load the special characters are converted to something else.
Is there any property or alternative so that the special characters are imported exactly as they are in Oracle?
Thanks in advance.
This seems to be a common issue from my experience.
I will try to describe some steps that might help you.
First, disable the primary key constraint and run the dtsx again.
Second, locate the duplicate values that cause the primary key violation.
This probably means that two different characters are being mapped to the same character, producing the violation.
Keep in mind that SSIS cannot read UTF-8 from Oracle; you can see this for yourself if you open the Advanced Editor on the Oracle Source object and inspect the input/output columns; usually the code page is 1251.
The only solution is to use a Derived Column and replace the problematic characters manually before the insert. It also helps to structure the flow like this:
OracleDB->File->Derived Column->SQLServer
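As a minimal sketch, the Derived Column expression could look like the following, where [ProblemColumn] and the two character strings are placeholders for whatever is mis-mapping in your data:

REPLACE([ProblemColumn], "?", "-")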

Can't convert String to Numeric/Decimal in SSIS

I have five or six OLE DB Sources with a String[DT_STR], with a length of 500 and 1252 (Latin) as Code Page.
The format of the column is like 0,08 or 0,10 etc etc. As you can see, it is separated with a comma.
All of them are equal except one. In that one source, I have a POINT as the separator. That one works when I set the data type in the Advanced Editor of the OLE DB Source. Another one (comma-separated) also works if I set the data type in the Advanced Editor. BUT the weird thing is that it isn't working for the remaining sources, although they are the same (separated with a comma).
I tested Numeric(18,2) and decimal(2).
Another attempt to solve the problem with the conversion task and/or the derived column task failed.
I'm using SQL Server 2008 R2
Slowly, I think SSIS is fooling me :)
Has anyone an idea?
/// EDIT
Two screenshots were attached in the original post: one showing a source that works, and one showing a source that doesn't.
I would not set the Data Type in the Advanced Editor of the OLE DB Source. I would convert the data in the SQL code of the OLE DB Source, or in a Script Transformation, e.g. using Decimal.TryParse, which would populate a new column.
SSIS is unbelievably fussy over data types, and trying to mess with its internals is not productive.
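If you go the SQL route, a sketch like this can isolate the problem (table and column names are placeholders; it assumes the raw value arrives as text with a comma separator):

-- Normalize the decimal separator, then cast to the target precision
SELECT CAST(REPLACE(RawAmount, ',', '.') AS DECIMAL(18,2)) AS Amount
FROM dbo.SourceTable;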
Check whether there are any spaces between the commas; SSIS may be throwing the error while trying to convert a blank space to a number. A blank space is not the same as an empty string.
Redirect the error rows and output the data to a file. Then you can examine the data being rejected by SSIS and determine why it's causing the error.
Reasons for the error:
1) NULLs are not properly handled, either in the destination database or during SSIS package creation. It is quite possible that the source contains NULL values but the destination does not accept NULL data, generating the above error.
2) Data types between the source and destination do not match. For example, the source column has varchar data and the destination column has an int data type. This can easily generate the above error. Certain data types will automatically convert to another data type without error, but incompatible data types will generate the error The value could not be converted because of a potential loss of data.
The issue arises when there is an unhandled space or NULL. I have worked around it using the conditional (ternary) operator, which checks the length:
LEN(TRIM([Column Name])) >= 1 ? (DT_NUMERIC,38,8)[Column Name] : 0
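With this expression, a blank or whitespace-only value falls through to the 0 literal instead of failing the numeric cast.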

SQL error message when querying table containing one varbinary (blob) field

We are using Delphi XE3 together with a TSQLDataSet and a TClientDataset to read a table into memory from SQL server 2012.
The table contains various fields, one of them a blob "varbinary(max)" where we store the content from a text file.
My problem is that we get an error message saying "connection is busy with results for another command" when we do an Open on the ClientDataset.
The commandtext is a simple "select * from tablename".
This happens only if there is more than one row in the table. It also happens only if there is data in the blob field (<> NULL).
Everything works fine if we add a second varbinary field to the table. The second field does not need to contain any data.
This is driving me crazy, please help.
EDIT: As a workaround we have simply added a "dummy" varbinary field to the table. Because of this strange behavior, we have come to the conclusion that this has to be a bug in the TClientDataset component. We tried the same thing in an older version of Delphi (XE2 SP3) with the same result.

Import Package Error - Cannot Convert between Unicode and Non Unicode String Data Type

I have made a dtsx package on my computer using SQL Server 2008. It imports data from a semicolon-delimited CSV file into a table where all of the field types are NVARCHAR(MAX).
It works on my computer, but it needs to run on the clients server. Whenever they create the same package with the same csv file and destination table, they receive the error above.
We have gone through the creation of the package step by step, and everything seems OK. The mappings are all correct, but when they run the package in the last step, they receive this error. They are using SQL Server 2005.
Can anyone advise where to begin looking for this problem?
The problem of converting from any non-unicode source to a unicode SQL Server table can be solved by:
add a Data Conversion transformation step to your Data Flow
open the Data Conversion and select Unicode for each data type that applies
take note of the Output Alias of each applicable column (they are named Copy Of [original column name] by default)
now, in the Destination step, click on Mappings
change all of your input mappings to come from the aliased columns in the previous step (this is the step that is easily overlooked and will leave you wondering why you are still getting the same errors)
At some point, you're trying to convert an nvarchar column to a varchar column (or vice-versa).
Moreover, why is everything (supposedly) NVARCHAR(MAX)? That's a code smell if I ever saw one. Are you aware of how SQL Server stores those columns? The rows store pointers to where the column data actually lives, since values that large don't fit within the 8K pages.
Non-Unicode string data types:
Use DT_STR for the text file and VARCHAR for SQL Server columns.
Unicode string data types:
Use DT_WSTR for the text file and NVARCHAR for SQL Server columns.
The problem is that your data types do not match, so there could be a loss of data during the conversion.
Two solutions:
1- If the type of the target column is [nvarchar], it should be changed to [varchar].
2- Add a "Derived Column" component to the SSIS package and add a new column with the following expression:
(DT_WSTR, «length») [ColumnName]
Length is the length of the column in the target table and ColumnName is the name of the column in the target table.
Finally, in the mapping, use this newly added column instead of the original column.
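For example, if the target column were an nvarchar(50) named Address, the expression would be (DT_WSTR, 50) [Address] (names here are illustrative).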
Not sure if this is a best practice with SSIS but sometimes I find their tools are a bit clunky when you want to do this type of activity.
Instead of using their components you can convert the data within your query
Instead of doing
SELECT myField = myNvarchar20Field
FROM myTable
You could do
SELECT myField = CONVERT(VARCHAR(20),myNvarchar20Field)
FROM myTable
This is a solution that uses the IDE to fix it:
Add a Data Conversion item to your dataflow.
Double-click the Data Conversion item and set the conversion for each applicable column.
Now double-click the DB Destination item, click Mappings, and ensure that your input column is actually coming from Copy of [your column name], which is the Data Conversion output, NOT the DB Source output (be careful here).
And that's it: save and run.
Mike, I had the same problem with SSIS in SQL Server 2005...
Apparently, the DataFlowDestination object will always attempt to validate the incoming data as Unicode. Go to that object, open the Advanced Editor, and in the Component Properties pane change the "ValidateExternalMetadata" property to False. Now go to the Input and Output Properties pane, Destination Input, External Columns, and set each column's Data Type and Length to match the database table it's going into. When you close the editor, those column changes will be saved and not re-validated, and it will work.
Follow the steps below to avoid the "cannot convert between unicode and non-unicode string data types" error:
i) Add the Data Conversion transformation tool to your Data Flow.
ii) Open the Data Conversion and select the string [DT_STR] data type for the applicable columns.
iii) Then go to the destination, and select Mappings.
iv) Change your input column to use the Copy of [column name] output.
Go into the registry configuration of the client and change the language (NLS_LANG).
For Oracle, go to HKLM\SOFTWARE\ORACLE\KEY_ORACLIENT...HOME\NLS_LANG and change it to the appropriate language.
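For example, a Western European client might use a value along the lines of AMERICAN_AMERICA.WE8MSWIN1252; the correct value depends on your environment.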
The Data Conversion task is time-consuming if there are 50-plus columns! I found a fix for this at the link below:
http://rdc.codeplex.com/releases/view/48420
However, it does not seem to work for versions above 2008, so this is how I had to work around the problem:
* Open the .dtsx file in Notepad++ and choose XML as the language.
* Go to the <DTS:FlatFileColumns> tag and select all items within this tag.
* Find the string DTS:DataType="129" and replace it with DTS:DataType="130" (129 is the non-Unicode DT_STR type, 130 the Unicode DT_WSTR type).
* Save the .dtsx file.
* Open the project again in Visual Studio BIDS.
* Double-click the Source task. You will get the message:
the metadata of the following output columns does not match the metadata of the external columns with which the output columns are associated:
...
Do you want to replace the metadata of the output columns with the metadata of the external columns?
* Now click Yes. We are done!
Resolved, for the original question:
I've seen this before. The easiest way to fix it (you don't need all those data conversion steps, as ALL of the metadata is available from the source connection):
Delete the OLE DB Source & OLE DB Destinations
Make sure Delayed Validation is FALSE (you can set it to True later)
Recreate the OLE DB Source with your query, etc.
Verify in the Advanced Editor that all of the output data column types are correct
Recreate your OLE DB Destination, map, create a new table (or remap to the existing one), and you'll see that SSIS got all the data types correct (same as the source).
So much easier than the stuff above.
Not sure if this is still a problem but I found this simple solution:
Right-click the OLE DB Source
Select 'Edit'
Select the Input and Output Properties tab
Under "Inputs and Outputs", expand "OLE DB Source Output", then External Columns and Output Columns
In Output Columns, select the offending field; in the right-hand panel, ensure the DataType property matches that of the field in the External Columns properties
Hope this was clear and easy to follow
Sometimes we get this error when we select a static character string as a field in the source query/view/procedure and the destination field's data type is Unicode.
Below is the issue I faced: I used a script at the source and got the error message Column "CATEGORY" cannot convert between Unicode and non-Unicode string data types. (The script and the error were shown as screenshots in the original post.)
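The source script was along these lines (reconstructed from the resolution below, without the N prefix):

SELECT 'STUDENT DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM STUDENTS
UNION
SELECT 'FACULTY DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM FACULTY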
Resolution:
I tried multiple options but none worked for me. Then I prefixed the static value with N to make it Unicode, as below:
SELECT N'STUDENT DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM STUDENTS
UNION
SELECT N'FACULTY DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM FACULTY
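The N prefix makes the literal an nvarchar (Unicode) literal, so it matches the Unicode destination column without needing a conversion.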
If anyone is still experiencing this issue, I found that it was related to a difference in Oracle Client versions.
I have posted my full experience and solution here: https://stackoverflow.com/a/43806765/923177
1. Add a Data Conversion tool from the toolbox.
2. Open it; it shows all columns from the Excel source. Convert each to the desired output, and take note of the Output Alias of each applicable column (they are named Copy of [original column name] by default).
3. Now, in the Destination step, click on Mappings.
I changed ValidateExternalMetadata=False for each transformation task. It worked for me.