How to insert Japanese (double-byte) characters from a flat file (.txt) into a SQL Server table - sql-server-2012

I am using ODI to load data from a .txt file into a table in SQL Server. The file contains Japanese and Russian characters, but when ODI loads the data into the table, they show up as "?????".
Can anyone please help me here?
Thank you.
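The "?????" symptom almost always means the text passed through a non-Unicode (single-byte) code page somewhere between the file and the table, so the fix is usually NVARCHAR target columns plus a Unicode-aware file/datastore definition in ODI. A minimal Python sketch of the mechanism (the sample strings are arbitrary, not the asker's data):

```python
# Japanese and Russian sample text (arbitrary examples)
text = "日本語 Привет"

# A non-Unicode target (e.g. a VARCHAR column with a Latin collation)
# behaves like encoding to a single-byte code page: every character
# that has no slot in that code page becomes '?'.
lossy = text.encode("cp1252", errors="replace").decode("cp1252")
print(lossy)  # -> "??? ??????"

# A Unicode target (NVARCHAR, stored as UTF-16) keeps the text intact.
intact = text.encode("utf-16-le").decode("utf-16-le")
print(intact)  # -> "日本語 Привет"
```

Note that once characters have been replaced by '?', the originals are gone for good, so the correction has to happen at load time, not afterwards.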

Related

SQL encoding after restore from backup (mariadb, unix)

I need to restore a SQL table from a daily backup, but there are problems with the encoding. The backup is made by Virtualmin, with the encoding set to "default". The text is in French, so it contains accents...
Here is the dump from the Webmin backup file:
For the table (a WordPress table; the interesting fields are:)
I need to insert part of this table into the live table (after deleting some lines). So the table is already created with
default collation utf8mb4_unicode_ci
When I import the table lines into the table, the text is not "converted" into the right charset. For example, the French "é" shows up as "Ã©". And so on.
I tried a few things, such as adding SET commands for utf8mb4 before the INSERT, but the encoding is never handled correctly. Text in the database itself shows "Ã©" instead of "é", and of course the same when displaying in a browser.
Any suggestion? Thank you!
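The "Ã©"-for-"é" symptom is the classic double-encoding problem: UTF-8 bytes being interpreted as Latin-1/cp1252 at some point in the restore. A small Python sketch of the mechanism (illustrative only, not tied to the actual dump):

```python
# 'é' encoded as UTF-8 is two bytes: 0xC3 0xA9.
raw = "é".encode("utf-8")
assert raw == b"\xc3\xa9"

# Reading those two bytes as Latin-1 yields the mojibake seen in the table.
mojibake = raw.decode("latin-1")
print(mojibake)  # -> "Ã©"

# As long as no byte was lost along the way, this is reversible:
recovered = mojibake.encode("latin-1").decode("utf-8")
assert recovered == "é"
```

On the MySQL/MariaDB side, the equivalent fix is making sure the client connection charset matches the dump so the bytes are interpreted exactly once, e.g. `mysql --default-character-set=utf8mb4 db < dump.sql`.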

Japanese ANSI character in CSV file

I have a CSV file generated from a Japanese source system. The Japanese characters are shown as given below: ¬¼ˆã—Ê튔Ž®‰ïŽÐ ‘åã‰c‹ÆŠ. I have changed the file type to UTF-8 and also the ETL settings to incorporate that, but that works on new data only.
How can I change the existing data in my table, which shows characters like ‘åã‰c‹ÆŠ?
Is it possible to get the original Japanese characters back using SQL functions? I am using SQL Server as the database.
Thanks in advance.
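That garbage pattern looks like Shift-JIS (code page 932) bytes that were decoded as a Western code page (cp1252). If no byte was replaced or dropped, the original text can sometimes be recovered by reversing the mis-decode. A hedged Python sketch; the sample string 株式会社 ("Co., Ltd.") is an assumption, chosen because its mojibake matches the pattern in the question:

```python
original = "株式会社"  # assumed sample value, not the actual table data

# Shift-JIS (cp932) bytes read as Windows-1252 produce garbage of the
# kind shown in the question:
garbled = original.encode("cp932").decode("cp1252")
print(garbled)  # -> "Š”Ž®‰ïŽÐ"

# Reverse the mis-decode: re-encode as cp1252, then decode as cp932.
recovered = garbled.encode("cp1252").decode("cp932")
assert recovered == original
```

Two caveats: SQL Server has no built-in function for a cross-code-page round trip like this, so the practical route is exporting the bad rows and fixing them with a script along these lines; and if any original byte fell on a slot cp1252 leaves undefined (0x81, 0x8D, 0x8F, 0x90, 0x9D), that character is unrecoverable.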

Adding Urdu language text while exporting csv file from MSSQL Server

I have a column in my database containing some Urdu-language text. When I use bcp to export the data and open the exported file in Excel, all I get there are question marks.
What am I missing?
Thanks in advance
You're using Unicode Urdu characters here. Use bcp -w instead, which exports character data as Unicode (UTF-16LE).
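The difference between the two bcp modes can be mimicked in a few lines of Python: -c writes bytes in the client ANSI code page (where Urdu letters have no slot), while -w writes "wide" UTF-16LE bytes. A sketch with an arbitrary Urdu sample word:

```python
import os
import tempfile

urdu = "اردو"  # "Urdu"; arbitrary sample text
tmp = tempfile.mkdtemp()

# Roughly what bcp -c produces: client ANSI code page bytes.
# Urdu letters have no slot in cp1252, so each degrades to '?'.
with open(os.path.join(tmp, "ansi.txt"), "w",
          encoding="cp1252", errors="replace") as f:
    f.write(urdu)

# Roughly what bcp -w produces: UTF-16LE bytes, lossless.
with open(os.path.join(tmp, "wide.txt"), "w", encoding="utf-16-le") as f:
    f.write(urdu)

with open(os.path.join(tmp, "ansi.txt"), encoding="cp1252") as f:
    print(f.read())  # -> "????"
with open(os.path.join(tmp, "wide.txt"), encoding="utf-16-le") as f:
    print(f.read())  # -> "اردو"
```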

error on converting persian characters from excel to sql

I'm trying to convert an Excel file to SQL,
but I'm getting the error below.
When I change the On Truncation value to "Ignore", the conversion process completes, but the Persian characters show up as "?".
Make sure that all your destination columns are NVARCHAR if they are going to accept Unicode text.
See here for a detailed explanation: What is the difference between varchar and nvarchar?
As you may know, importing Excel files into a SQL Server database is done by the Access engine. In most of the cases where you see this error, it can be solved by the following method:
First, create an Access file and save it in MDB format (this format can be used by SQL Server),
and then import your Excel file into this Access .mdb file. When all your data is in place, you can import this MDB file from SQL Server into your database.

How to export Japanese characters from SQL Server 2008 database to a text file?

I have an odd problem. I need to export Japanese characters from a table to a raw text file. When I run the SELECT statement in SQL, I can see the characters displayed correctly. However, when I run the SSIS package to export these values to a text file, they are displayed as ?'s.
The data type of the field is NTEXT. Has anyone run into this problem before?
SQL statement:
select cast(body as nvarchar(max)) as body from msgsMarket
In the SSIS package's flat file connection manager, I have set the output encoding to code page 932 (Japanese Shift-JIS).
This is not a solution, but it might help you identify the problem in your case.
I created a sample SSIS package using SSIS 2008 R2 with UTF-8 and Unicode encodings, and the SQL Server data exported correctly to flat files.
Here is the sample SQL data in the file; the Description field was of data type NVARCHAR. The sample was also tried after changing the data type of the Description field to NTEXT, and the flat files were still exported correctly.
The SSIS package was created with a data flow task that has two outputs, one for UTF-8 and one for Unicode.
First flat file connection manager to generate flat file with encoding UTF-8.
Output file generated with UTF-8 encoding.
Second flat file connection manager to generate flat file with encoding Unicode.
Output file generated with Unicode encoding.
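As a sanity check on the encodings involved: the asker's code page 932 and both encodings tested in the answer can all represent Japanese text without loss, which a minimal Python sketch confirms (the sample string is an assumption, not the actual msgsMarket data):

```python
# Code page 932 (Shift-JIS), UTF-8, and UTF-16 ("Unicode" in SSIS
# terms) can each round-trip Japanese text without loss:
text = "日本語のサンプル"  # arbitrary sample value for the NTEXT column

for enc in ("cp932", "utf-8", "utf-16"):
    assert text.encode(enc).decode(enc) == text

# So if the flat file still shows '?', the text was most likely already
# damaged upstream (e.g. an implicit conversion to a non-Japanese code
# page somewhere in the data flow) before reaching the destination.
```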