Change SQL decimal delimiter - sql

I am trying to import data into my table using
INPUT INTO
The problem is that my decimals use ',' as the decimal delimiter, while INPUT INTO expects '.', so the import fails.
How can I change this? Search and replace in the input file is not an option!
I am using SQL Anywhere 10

I don't believe that it's possible to change the decimal delimiter. You could either preprocess the file (which I know you said is not an option) or load it into a temporary table with the decimal column defined as a string and then use an insert from the temporary table to your real table, performing the necessary conversion at that point.
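A minimal sketch of that staging-table approach, run from Interactive SQL; the table names (staging, real_table), column names, file path, and the semicolon field delimiter are placeholders and assumptions, so adjust the INPUT INTO clause to match your actual file:
DECLARE LOCAL TEMPORARY TABLE staging (amount_raw VARCHAR(50)) ON COMMIT PRESERVE ROWS;

INPUT INTO staging FROM 'c:\data\import.txt' FORMAT ASCII DELIMITED BY ';';

-- swap the comma decimal delimiter for a point while copying into the real table
INSERT INTO real_table (amount)
SELECT CAST(REPLACE(amount_raw, ',', '.') AS DECIMAL(10, 2))
FROM staging;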

Related

Replacing Null Values Using Derived Column in SSIS

I have a data flow task which picks up data from a non-unicode flat file to a SQL Server table destination.
I'm using a Derived Column task to replace NULL values in a date column with the string "1900-01-01". The destination table column is a varchar data type.
I'm using this SSIS Expression (DT_STR,10,1252)REPLACENULL(dateColumn,"1900-01-01") and the task executes successfully but I still see NULLs instead of the "1900-01-01" string at the destination.
Why is this? I've tried replacing the column, adding a new column but whatever I do I still see NULLs and not the replacement string. I can see my new derived column in the Advanced Editor so can see no reason why this isn't working. Any help would be most welcome.
If your source is non-Unicode, why are you using DT_STR? varchar is already a non-Unicode data type. You should just be able to do it with
REPLACENULL(dateColumn,"1900-01-01")
Also, did you put in a lookup transformation to update the column? Have you made sure the right keys are being looked up and updated?

INSERT Statement in SQL Server Strips Characters, but using nchar(xxx) works - why?

I have to store some strange characters in my SQL Server DB which are used by an Epson Receipt Printer code page.
Using an INSERT statement, all are stored correctly except one - [SCI] (nchar(154)). I realise that this is a control character that isn't representable in a string, but the character is replaced by a '?' in the stored DB string, suggesting that it is being parsed (unsuccessfully) somewhere.
The collation of the database is LATIN1_GENERAL_CI_AS so it should be able to cope with it.
So, for example, if I run this INSERT:
INSERT INTO Table(col1) VALUES ('abc[SCI]123')
Where [SCI] is the character, a resulting SELECT query will return 'abc?123'.
However, if I use NCHAR(154), by directly inserting or by using a REPLACE command such as:
UPDATE Table SET col1 = REPLACE(col1, '?', NCHAR(154))
The character is stored correctly.
My question is, why? And how can I store it directly from an INSERT statement? The latter is preferable as I am writing from an existing application that produces the INSERT statement that I don't really want to have to change.
Thank you in advance for any information that may be useful.
When you write a literal string in SQL Server it is created as a VARCHAR unless you prefix it with N. This means that any Unicode characters not representable in the database's code page will be replaced (typically with '?'). Instead, write your INSERT statement like this:
INSERT INTO Table(col1) VALUES (N'abc[SCI]123')
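A quick way to see the difference, sketched with the same NCHAR(154) character from the question:
-- VARCHAR round trip: the character is not in the Latin1 code page, so it becomes '?' (code 63)
SELECT UNICODE(CAST(NCHAR(154) AS VARCHAR(1)));
-- kept as NVARCHAR (which is what the N prefix gives you), the character survives
SELECT UNICODE(NCHAR(154));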

Integration Services double to string

I'm using Integration Services to load data from an Excel file to SQL Server table. When I try to send a number stored as double (DT_R8) into a database column where data are stored as varchar(50) I find a queer rounding.
For example, consider the data in the first row, first column of the image above. The original value is 31.35, but as a string it is stored as shown below.
I already tried to use a Derived Column transformation to cast to a string before exporting to SQL, and I also added a ROUND(x, 5), but I get the same result.
How can I solve this problem given that I can't change SQL column data type?
The only working solution was changing the input type from double (DT_R8) to currency [DT_CY]. It seems that the rounding performed on double (DT_R8) makes it difficult to use when parsing is somehow involved in the export process.
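If you would rather keep the cast in a Derived Column instead of changing the source output type, a sketch of the expression is below; the column name is a placeholder, and whether this avoids the rounding in your case would need testing:
(DT_WSTR, 50)((DT_CY)[MyDoubleColumn])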

BULK INSERT is not working correctly

I used BULK INSERT in SQL Server Management Studio 2008 R2 to load 10 words from a UTF-8 text file into a single column.
However, the words do not appear correctly; I get extra spaces in front of some words.
Note: None of the answers have solved my problem, so far. :(
[Screenshot of the problem]
This issue may occur if you are not using the correct collation (language settings). You need to use the appropriate collation in order to display your data in the correct format.
See the link http://technet.microsoft.com/en-us/library/ms187582(v=sql.105).aspx for more details.
You can also try using a different row terminator:
bulk insert table_name
from 'filename.txt' WITH (ROWTERMINATOR='\n')
Look at this post How to write UTF-8 characters using bulk insert in SQL Server?
Quote: You can't. You should first use a N type data field, convert your file to UTF-16 and then import it. The database does not support UTF-8.
Original answers
Look at the encoding of your text file. It should be UTF-8; if not, this could cause problems.
Open it with Notepad, go to File -> Save As, and choose the encoding.
After this, try the bulk import again.
Secondly, make sure the column data type is nvarchar and not varchar.
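Following the quoted advice above (convert the file to UTF-16, since this version of SQL Server cannot bulk-load UTF-8), a minimal sketch of the load once the file has been re-saved as Unicode and the column is nvarchar; the table and file names are placeholders:
CREATE TABLE dbo.Words (word NVARCHAR(100));

BULK INSERT dbo.Words
FROM 'C:\data\words_utf16.txt'
WITH (
    DATAFILETYPE = 'widechar',   -- read the file as UTF-16
    ROWTERMINATOR = '\n'
);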

Import Package Error - Cannot Convert between Unicode and Non Unicode String Data Type

I have made a dtsx package on my computer using SQL Server 2008. It imports data from a semicolon delimited csv file into a table where all of the field types are NVARCHAR MAX.
It works on my computer, but it needs to run on the clients server. Whenever they create the same package with the same csv file and destination table, they receive the error above.
We have gone through the creation of the package step by step, and everything seems OK. The mappings are all correct, but when they run the package in the last step, they receive this error. They are using SQL Server 2005.
Can anyone advise where to begin looking for this problem?
The problem of converting from any non-unicode source to a unicode SQL Server table can be solved by:
add a Data Conversion transformation step to your Data Flow
open the Data Conversion and select Unicode for each data type that applies
take note of the Output Alias of each applicable column (they are named Copy Of [original column name] by default)
now, in the Destination step, click on Mappings
change all of your input mappings to come from the aliased columns in the previous step (this is the step that is easily overlooked and will leave you wondering why you are still getting the same errors)
At some point, you're trying to convert an nvarchar column to a varchar column (or vice-versa).
Moreover, why is everything (supposedly) nvarchar(max)? That's a code smell if I ever saw one. Are you aware of how SQL Server stores those columns? Values too large for the 8 KB page are stored off-row, with the row holding a pointer to them.
Non-Unicode string data types:
Use DT_STR for text file columns and VARCHAR for SQL Server columns.
Unicode string data types:
Use DT_WSTR for text file columns and NVARCHAR for SQL Server columns.
The problem is that your data types do not match, so there could be a loss of data during the conversion.
Two solutions:
1- If the type of the target column is [nvarchar], it should be changed to [varchar].
2- Add a "Derived Column" component to the SSIS package and add a new column with the following expression:
(DT_WSTR, «length») [ColumnName]
Length is the length of the column in the target table and ColumnName is the name of the column in the target table.
Finally, in the mapping step, you should use this newly added column instead of the original column.
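For example, assuming a target column named CustomerName defined as nvarchar(50) (both names are placeholders), the expression would be:
(DT_WSTR, 50)[CustomerName]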
Not sure if this is a best practice with SSIS but sometimes I find their tools are a bit clunky when you want to do this type of activity.
Instead of using their components you can convert the data within your query
Instead of doing
SELECT myField = myNvarchar20Field
FROM myTable
You could do
SELECT myField = CONVERT(VARCHAR(20),myNvarchar20Field)
FROM myTable
This is a solution that uses the IDE to fix it:
Add a Data Conversion item to your dataflow as shown below;
Double click on the Data Conversion item, and set it as shown:
Now double-click on the DB Destination item, click on Mappings, and ensure that your input column is actually mapped from Copy of [your column name], which is the Data Conversion output, NOT the DB Source output (be careful here). Here is a screenshot:
And that's it... save and run.
Mike, I had the same problem with SSIS in SQL Server 2005...
Apparently, the DataFlowDestination object will always attempt to validate the incoming data against Unicode. Go to that object, open the Advanced Editor, and in the Component Properties pane change the "ValidateExternalMetadata" property to False. Now, go to the Input and Output Properties pane, Destination Input, External Columns - set each column's Data Type and Length to match the database table it's going to. When you close that editor, those column changes will be saved and not validated over, and it will work.
Follow the steps below to avoid the "cannot convert between Unicode and non-Unicode string data types" error:
i) Add the Data Conversion transformation tool to your Data Flow.
ii) Open the Data Conversion and select the [string DT_STR] data type.
iii) Then go to the destination flow and select Mappings.
iv) Change your input column to the "Copy of" column name.
Go into the registry configuration of the client and change the language setting.
For Oracle, go to HKLM\SOFTWARE\ORACLE\KEY_ORACLIENT...HOME\NLS_LANG and change it to the appropriate language.
The DTS Data Conversion task is time-consuming if there are 50-plus columns! I found a fix for this at the link below:
http://rdc.codeplex.com/releases/view/48420
However, it does not seem to work for versions above 2008, so this is how I had to work around the problem:
*Open the .DTSX file in Notepad++ and set the language to XML.
*Go to the <DTS:FlatFileColumns> tag and select all items within this tag.
*Find the string DTS:DataType="129" (DT_STR) and replace it with DTS:DataType="130" (DT_WSTR).
*Save the .DTSX file.
*Open the project again in Visual Studio BIDS.
*Double-click on the Source Task. You will get the message:
the metadata of the following output columns does not match the metadata of the external columns with which the output columns are associated:
...
Do you want to replace the metadata of the output columns with the metadata of the external columns?
*Now click Yes. We are done!
Resolved - to the original ask:
I've seen this before. Easiest way to fix (don't need all those data conversion steps as ALL of the meta data is available from the source connection):
Delete the OLE DB Source & OLE DB Destinations
Make sure Delayed Validation is FALSE (you can set it to True later)
Recreate the OLE DB Source with your query, etc.
Verify in the Advanced Editor that all of the output data column types are correct
Recreate your OLE DB Destination, map, create new table (or remap to existing) and you'll see that SSIS got all the data types correct (same as source).
So much easier than the stuff above.
Not sure if this is still a problem but I found this simple solution:
Right-Click Ole DB Source
Select 'Edit'
Select Input and Output Properties Tab
Under "Inputs and Outputs", Expand "Ole DB Source Output" External Columns and Output Columns
In Output columns, select offending field, on the right-hand panel ensure Data Type Property matches that of the field in External Columns properties
Hope this was clear and easy to follow
Sometimes we get this error when we select a static string as a field in the source query/view/procedure and the destination field's data type is Unicode.
Below is the issue I faced:
I used the script below at the source and got the error message: Column "CATEGORY" cannot convert between Unicode and non-Unicode string data types.
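The failing query, reconstructed here for clarity (the original post showed it only as a screenshot), was the same as the one in the resolution below but without the N prefix on the static values:
SELECT 'STUDENT DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM STUDENTS
UNION
SELECT 'FACULTY DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM FACULTY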
Resolution:
I tried multiple options, but none worked for me. Then I prefixed the static value with N to make it Unicode, as below:
SELECT N'STUDENT DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM STUDENTS
UNION
SELECT N'FACULTY DETAIL' CATEGORY, NAME, DATEOFBIRTH FROM FACULTY
If anyone is still experiencing this issue, I found that it related to a difference in Oracle Client versions.
I have posted my full experience and solution here: https://stackoverflow.com/a/43806765/923177
1. Add a Data Conversion tool from the toolbox.
2. Open it; it shows all columns from the Excel file. Convert them to the desired output, and take note of the Output Alias of each applicable column (they are named Copy of [original column name] by default).
3. Now, in the Destination step, click on Mappings.
I changed ValidateExternalMetadata=False for each transformation task. It worked for me.