Read SQL Server field text value in Delphi XE with simultaneous character conversion - sql-server-2005

I have a SQL Server 2005 database with collation SQL_Latin1_General_CP1_CI_AS and I want to run a query from Delphi XE via ADO. The data in SQL Server contains Greek and Latin characters, but in Delphi I get unreadable character strings. How can I handle this problem with Delphi XE?

Since you say that you have both Greek and Latin characters in the db, I guess you are already using nvarchar there.
In Delphi you should then use TWideStringField for nvarchar fields. TStringField is for varchar (AnsiString).
Field1 contains "γειά σου".
StringField := ADODataSet1.FieldByName('Field1') as TStringField;
ShowMessage(StringField.Value);
ShowMessage shows "?e??s??".
This works fine:
WideStringField := ADODataSet1.FieldByName('Field1') as TWideStringField;
ShowMessage(WideStringField.Value);
Edit 1
If you have varchar fields in the db you should use TStringField, and you need to make sure that the "Language for non-Unicode programs" setting is Greek (Greece):
"Control Panel - Region and Language - Administrative - Change system locale..."

I have found that sometimes UTF-8 is stored in databases in VarChar fields, usually from Java programs.
If you see things like â€", there's a good chance that's what is going on.
You could try
// Delphi 2009+
UTF8ToUnicodeString(RawByteString( db_value ))
// Delphi 2007 and older
UTF8Decode( db_value )
If this is the case, you can also use a SQL function to convert the VarChar fields to NVarChar.
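A rough sketch of what such a function could look like on SQL Server 2005 follows. This is only an illustration, not a tested or production-ready decoder: the name dbo.Utf8ToNvarchar is made up, it handles only 1- to 3-byte UTF-8 sequences (the Basic Multilingual Plane), and it does no validation of the input bytes.
CREATE FUNCTION dbo.Utf8ToNvarchar (@src VARBINARY(8000))
RETURNS NVARCHAR(4000)
AS
BEGIN
    DECLARE @out NVARCHAR(4000), @i INT, @len INT, @b1 INT, @b2 INT, @b3 INT, @cp INT
    SET @out = N''
    SET @i = 1
    SET @len = DATALENGTH(@src)
    WHILE @i <= @len
    BEGIN
        SET @b1 = CAST(SUBSTRING(@src, @i, 1) AS INT)
        IF @b1 < 128                          -- 1-byte sequence (ASCII)
        BEGIN
            SET @cp = @b1
            SET @i = @i + 1
        END
        ELSE IF @b1 < 224                     -- 2-byte sequence (covers e.g. Greek)
        BEGIN
            SET @b2 = CAST(SUBSTRING(@src, @i + 1, 1) AS INT)
            SET @cp = ((@b1 & 31) * 64) + (@b2 & 63)
            SET @i = @i + 2
        END
        ELSE                                  -- 3-byte sequence (rest of the BMP)
        BEGIN
            SET @b2 = CAST(SUBSTRING(@src, @i + 1, 1) AS INT)
            SET @b3 = CAST(SUBSTRING(@src, @i + 2, 1) AS INT)
            SET @cp = ((@b1 & 15) * 4096) + ((@b2 & 63) * 64) + (@b3 & 63)
            SET @i = @i + 3
        END
        SET @out = @out + NCHAR(@cp)
    END
    RETURN @out
END
It would then be used along these lines (table and column names are placeholders):
UPDATE dbo.MyTable
SET NVarcharColumn = dbo.Utf8ToNvarchar(CAST(VarcharColumn AS VARBINARY(8000)))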

Related

Which data type can save the Bangla language in SQL Server?

I want to save the Bangla language in SQL Server. Which data type can I use to do it in SQL Server 2005 or SQL Server 2008?
I tried the varchar and varbinary types, but they cannot save Bangla text.
How can it be done?
You're using SQL_Latin1_General_CP1_CI_AS for your collation, which is suited to the Latin character set (ISO-8859-1). To store characters from other character sets, you can use NVARCHAR(), which can store the full Unicode range irrespective of collation. This does mean it will need to be treated as NVARCHAR() all the way: as quoted constants (e.g. N'বাংলা Bangla'), as the data types for parameters to stored procedures, etc.
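A minimal sketch of what that looks like end to end (the table and column names are invented for illustration):
CREATE TABLE dbo.Messages
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Body NVARCHAR(400) NOT NULL    -- NVARCHAR holds the full Unicode range, independent of collation
)
-- The N prefix keeps the literal Unicode all the way; without it the Bangla characters become '?'
INSERT INTO dbo.Messages (Body) VALUES (N'বাংলা Bangla')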

Date stored as a string in sql server when upsized from access 2007

When I upsize from Access 2007 to SQL Server 2008, I have a few issues...
1. text to nvarchar(255)
Fields with the text data type in Access are automatically converted to nvarchar(255) (I have Unicode data) in SQL Server, but in reality the column length is not that big, so can I change the data type to nvarchar(55) or varchar(100)? Will there be any problem?
2. Date stored as text
Some tables threw an error when I tried to upsize because of the date column (mm/dd/yyyy). What I did was change the date/time column data type to the text data type in Access; then the upsizing was successful and the column was converted to nvarchar(255) in SQL Server. I have since converted the nvarchar data type to the date data type in SQL Server, but that does not show me a calendar symbol in the Access front end. How do I get a calendar symbol in the date field in my Access front end?
I have tried the solution given in this link, but it did not work... Please give me some suggestions.
The text data type in SQL Server is deprecated; use nvarchar if you need to store Unicode (multi-language support). Otherwise you can use varchar.
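As for question 1, shrinking the upsized nvarchar(255) columns is safe as long as the existing data fits, and the conversion to a real date column that the question mentions (question 2) can be scripted the same way. A sketch, with table and column names as placeholders:
-- Question 1: check the longest value before shrinking so nothing is truncated
SELECT MAX(LEN(SomeTextColumn)) FROM dbo.SomeTable
ALTER TABLE dbo.SomeTable ALTER COLUMN SomeTextColumn NVARCHAR(100) NULL

-- Question 2: convert the mm/dd/yyyy text into a real date column (style 101 = mm/dd/yyyy)
ALTER TABLE dbo.SomeTable ADD OrderDate DATE NULL
GO
UPDATE dbo.SomeTable SET OrderDate = CONVERT(DATE, OrderDateText, 101)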

Japanese characters are saved as question marks in SQL Server

I'm working on SQL Server 2005, in which I have a database. When I use Japanese characters in my application, they are stored as question marks in the database. I would like to know which collation I should use to save the Japanese characters properly.
Note: additional info (if it helps): in MySQL, we used UTF8 as the default character set in the startup variable and it works fine.
Thank you,
Pavan
Japanese_90 appears to be the new collation name.
http://msdn.microsoft.com/en-us/library/bb330962%28v=sql.90%29.aspx#intlftrql2005_topic24
Note: you might want to consider the _KS suffix if you want Hiragana and Katakana to be treated as distinct while sorting.
Like Marc_S says, you will also want to ensure your column data type is nvarchar.
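A sketch of how the two pieces fit together (the table, column, and sample text are illustrative; pick the exact Japanese_90 variant that matches your sorting needs):
CREATE TABLE dbo.Products
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    -- nvarchar stores the characters as Unicode; the collation only controls sorting and comparison
    NameJa NVARCHAR(200) COLLATE Japanese_90_CI_AS NOT NULL
)

INSERT INTO dbo.Products (NameJa) VALUES (N'日本語のテキスト')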

SQL Server database with Latin1 codepage shows Japanese Chars as "?"

Three questions with the following scenario:
SQL Server 2005 production db with a Latin1 codepage and showing "?" for invalid chars in Management Studio.
SomeCompanyApp client as a service that populates the data from servers and workstations.
SomeCompanyApp management console that shows "?" for Asian characters.
Since this is a prod db I will not write to it.
I don't know if the client app that is storing the data in the database is actually storing it correctly as Unicode and it simply doesn't show because they are using Latin1 for the console.
Q1: As I understand it, SQL Server stores nvarchar text as Unicode regardless of the codepage. Or am I completely wrong, and if the codepage is Latin1 then everything that is not in that codepage gets converted to "?"?
Q2: Is it the same with a text column?
Q3: Is there a way using SQL Server Management Studio or Visual Studio and some code (don't care which language :)) to query the db and show me if the chars really do show up as Japanese, Chinese, Korean, etc.?
My final goal is to extract data from the db and store it in another db using UTF-8, to show Japanese and other Asian chars as what they are in my own client webapp. I will settle for an answer to Q3. I can code in several languages and at the very least understand some others, but I'm just not knowledgeable enough about Unicode. In case you want to know, my webapp will be using pyodbc and Cassandra, but for these questions that doesn't matter.
When inserting into an NVARCHAR column in SSMS, you need to make absolutely sure you're prefixing your string with an N:
This will NOT work:
INSERT INTO dbo.MyTable(NVarcharColumn) VALUES('Some Text with Special Char')
SQL Server will interpret the string in the VALUES(..) as VARCHAR and thus turn any characters outside the database's code page into "?".
You need this:
INSERT INTO dbo.MyTable(NVarcharColumn) VALUES(N'Some Text with Special Char')
Prefixing your text literal with an N'..' tells SQL Server to treat this as NVARCHAR all the way.
Does this help you solve your Q3?
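For actually checking what is stored (Q3), one way from Management Studio is to look at the raw bytes and code points; a sketch with placeholder table and column names:
SELECT NVarcharColumn,
       CONVERT(VARBINARY(200), NVarcharColumn) AS RawBytes,   -- the UTF-16 bytes as stored
       UNICODE(NVarcharColumn) AS FirstCodePoint              -- code point of the first character
FROM dbo.MyTable
If the Japanese text really made it into the column, FirstCodePoint will be a value in the CJK ranges (for example 12354 for あ) rather than 63, which is the code point of '?'.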

Inserting Japanese characters to Sybase db from Excel

I can see the Japanese text in the Excel cells. I've built the insert query using ADO. It does the insert in the DB, but the Japanese characters are simply represented as "????".
Any help would be appreciated.
Is it the Sybase client where you are seeing the Japanese characters misrepresented? If you are lucky, it's just a mix-up between the server and a client. You can try running:
set char_convert off
in the Sybase client, which will turn off the automatic character conversion that Sybase attempts to do.
If the above doesn't work, you have to find out what your Sybase server's default charset is. You can do this with:
sp_default_charset
This will return the default charset for your Sybase server (e.g. roman8). Check that the charset your server returns supports Japanese characters.