Convert DB2 EBCDIC column to ASCII in a SQL statement - sql

I know I can do the conversion very easily in C# or any number of other programming languages, but I want to know whether I can convert a column containing EBCDIC-encoded text entirely in SQL, so that the query result is a readable, ASCII-encoded string.
Ultimately I will import the data into SQL Server, and I know SSIS can do the conversion, but before I do that I want to exhaust every pure-SQL option.
For example, a little-known combination of built-in functions available in DB2 SQL or SQL Server 2008.
Here is an example of the data:
Data as stored in DB2: 0xC6C3C3C1E3C5D9D7C9D3D360F8F840
Text: FCCATERPILL-88
The C# conversion is easy, so I have included it here:
// CCSID 37 = IBM EBCDIC (US/Canada); allBytes holds the raw column bytes
System.Text.Encoding ei = System.Text.Encoding.GetEncoding(37);
textBox1.Text = ei.GetString(allBytes.ToArray());
I'm not an AS/400 admin, so I'm not sure what to do with what is being suggested in the comments.
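For what it's worth, here is a hedged sketch of the kind of built-in conversion being asked about, assuming the source is DB2 for z/OS or DB2 for i (the CAST specification accepts a CCSID there; the table and column names are hypothetical):

-- Hedged sketch, assuming DB2 for z/OS: CAST accepts a CCSID clause,
-- so the re-encoding can happen inside the statement itself.
SELECT CAST(part_code AS VARCHAR(30) CCSID ASCII) AS part_code_ascii
FROM parts;
-- On DB2 for i the CCSID is given numerically, e.g. CCSID 367 for ASCII:
-- SELECT CAST(part_code AS CHAR(30) CCSID 367) FROM parts;

Whether this applies depends on the DB2 platform and the database's encoding scheme, so treat it as a starting point rather than a definitive answer.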

Related

DB2 to COBOL String Losing Line Feed and Carriage Returns

I'm trying to grab some data out of a table. The column is VARCHAR(30000), and when I use COBOL EXEC SQL to retrieve it, the string comes back without the expected (hex) line feeds and carriage returns. I inspect the string one character at a time looking for the hex values 0A or 0D, but they never come up.
The LF and CR seem to be lost as soon as I move the string into my COBOL variable.
Ideas?
If the data is stored as / converted to EBCDIC when retrieved on the mainframe, you should get the EBCDIC new-line character x'15' (decimal 21) rather than 0A or 0D.
It is only if you are retrieving the data as ASCII / UTF-8 that you would get 0A or 0D.
Most Java editors can edit EBCDIC (with the EBCDIC new-line character x'15') just as easily as ASCII (with \n); I'm not sure about Eclipse, though.
I have seen situations where CR and LF were present in data in the database. These are valid characters, so it is possible for them to be stored there.
Have you tried to confirm that there really are CR and LF characters in the database using some other tool or method? My zSeries experience is quite limited, so I'm unable to suggest options, but there must be some equivalent of SSMS and SQL Server on the zSeries side for querying the DB2 database.
Check out this SO link on querying DB2 and cleaning up CR and LF characters.
DB2/iSeries SQL clean up CR/LF, tabs etc
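As a hedged way to do that confirmation in plain SQL, assuming any recent DB2 (HEX() and SUBSTR() are standard DB2 scalar functions; the table and column names are hypothetical):

-- Dump the first bytes of the column as hex so you can see whether
-- x'0D'/x'0A' (ASCII CR/LF) or x'15' (EBCDIC NL) are really stored.
SELECT HEX(SUBSTR(note_text, 1, 32)) AS first_bytes_hex
FROM notes;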
Well, I believe this could be dialect-dependent (both COBOL and DB2), but if it were me, I would use FOR BIT DATA on the VARCHAR in the table definition. Your issue could also relate to the code page defined for the database in which the table resides.
I routinely store all kinds of binary, EBCDIC and Unicode data mixed within the same VARCHAR FOR BIT DATA column with no problems, and all you are trying to do is include CR and LF. This approach works in both DB2 z/OS and DB2 LUW; see the sketch below.
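A minimal sketch of that table definition, assuming DB2 z/OS or LUW (the table and column names are hypothetical):

-- FOR BIT DATA tells DB2 to skip CCSID conversion on this column,
-- so control characters such as x'0D' and x'0A' come back untouched.
CREATE TABLE notes (
    id INTEGER NOT NULL PRIMARY KEY,
    note_text VARCHAR(30000) FOR BIT DATA
);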
I hope this helps.

How to pass an image as a parameter to a SQL function

I am a quality analyst, and I have been given the task of testing a database function n_execute(), which takes 3 parameters:
1. image_name
2. image_data (a byte array; this is a BLOB type)
3. date_created
Now my problem is that, to test this function, I want to call it but don't know how to pass an image to its second parameter.
I know we can do it by writing some Java code that calls the function, but I primarily want to execute it through a SQL editor only.
You haven't mentioned which database platform you are working with, but in a SQL Server T-SQL script, IMAGE data would be a binary constant. A simple example would be 0x0102030405, which represents five bytes: 01, 02, 03, 04 and 05.
Edit for PostgreSQL
For PostgreSQL, take a look at the docs, Binary data types. Note:
The bytea type supports two external formats for input and output: PostgreSQL's historical "escape" format, and "hex" format.
The PostgreSQL equivalent to my SQL Server example would be E'\\x0102030405'.
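Putting it together, a hedged sketch of calling the function from a SQL editor, assuming it lives in PostgreSQL (the signature is inferred from the question; the file name and bytes are made up):

-- Pass the blob as a bytea literal in escaped-hex format.
SELECT n_execute('test.png', E'\\x0102030405'::bytea, now());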

Lookup transformation between DB2 packed decimal and SQL Server DT_NUMERIC in SSIS

We use DB2 as our main production database, but we use SQL Server for many other things, e.g. integration with customers and vendors via EDI.
I have a table in SQL Server with SO numbers, and I am trying to do a lookup in DB2 to get all the invoices for the SOs in my table. Here's what I did:
Created a connection to DB2 using the Microsoft OLE DB Provider for DB2.
Created a data flow with a source using a SQL Server connection.
Added a Data Conversion transformation to convert the INT so value to a decimal with a precision of 12, but I couldn't change the precision of a DT_DECIMAL; the only data type where I have the option to change the precision is DT_NUMERIC.
Added a Lookup transformation to look up the data within DB2.
Now, when I try to create the join between the source table and DB2, I get an error: Cannot map the input column, 'so', to the lookup column, 'orno', because the data types do not match.
According to Microsoft, this is not a bug, and they suggest using DT_NUMERIC, where you can change the precision.
If I try to convert the SO to a DT_DECIMAL without changing the precision, I get the same error mentioned above.
Is there any way to work around this SSIS limitation and change the precision in a DT_DECIMAL conversion so I can do the match?
Or any other suggestions?
The simple answer is to change the connection property in the DB2 connection to treat DECIMAL as NUMERIC.
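Alternatively, a hedged workaround on the SQL Server side: do the cast in the source query itself, so the column already arrives as DT_NUMERIC with precision 12 and no Data Conversion transformation is needed (the table and column names are hypothetical):

-- SSIS maps SQL Server NUMERIC(12,0) to DT_NUMERIC with precision 12.
SELECT CAST(so AS NUMERIC(12, 0)) AS so
FROM dbo.sales_orders;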

Encoding in database SQL commands

I would like to know which component is responsible for the encoding conversions necessary to execute a SQL command successfully. For example, there are several places where a SQL command can be produced:
SELECT title from T1 where title='título'
This may be executed from within the database client (which, I assume, reads the database encoding and encodes its commands accordingly), but what happens when the command is a string in a programming language whose string encoding is not the same as the database's?
Where does the conversion take place? In the class that connects to the database? Do the database and the connector reach some kind of agreement while they are handshaking?
I'd love some information about this topic, or some link where I can read about it.
Thanks in advance.
Case: Java + MySQL
Internally, Java String text is Unicode (UTF-16) encoded.
Java source text should be in the same encoding that the Java compiler uses; a mismatch between editor and compiler would mess up string literals.
Java thus hands a Unicode string to the JDBC driver, the database client library.
The MySQL connection string can indicate which encoding the client library uses to communicate with the database server; characterEncoding=UTF-8 (i.e. Unicode) would be a good international choice, e.g. jdbc:mysql://host/db?useUnicode=true&characterEncoding=UTF-8.
The database can set a default encoding, as can any table, and even each individual column (say, one for Hindi, one for Chinese); see the sketch below.
Besides the encoding, the collation (the sort order of strings) is also language- and encoding-specific, and has to be considered too.
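A hedged MySQL sketch of those per-database, per-table and per-column settings (all names are made up; utf8mb4 and gbk are standard MySQL character sets):

-- Database-level default encoding and collation:
CREATE DATABASE docs CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- Table-level default, with per-column overrides:
CREATE TABLE docs.titles (
    title VARCHAR(200),
    title_hi VARCHAR(200) CHARACTER SET utf8mb4, -- e.g. for Hindi
    title_zh VARCHAR(200) CHARACTER SET gbk      -- e.g. for Chinese
) CHARACTER SET latin1;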

SQL Server database with Latin1 codepage shows Japanese Chars as "?"

Three questions with the following scenario:
A SQL Server 2005 production DB with a Latin1 code page that shows "?" for invalid chars in Management Studio.
A SomeCompanyApp client, running as a service, that populates the data from servers and workstations.
A SomeCompanyApp management console that shows "?" for Asian characters.
Since this is a prod DB, I will not write to it.
I don't know whether the client app that is storing the data in the database is actually storing it correctly as Unicode, and it simply doesn't show because they are using Latin1 for the console.
Q1: As I understand it, SQL Server stores nvarchar text as Unicode regardless of the code page. Or am I completely wrong, and if the code page is Latin1, does everything that is not in that code page get converted to "?"?
Q2: Is it the same with a text column?
Q3: Is there a way, using SQL Server Management Studio or Visual Studio and some code (don't care which language :)), to query the DB and show me whether the chars really are stored as Japanese, Chinese, Korean, etc.?
My final goal is to extract the data from the DB and store it in another DB using UTF-8, so that Japanese and other Asian chars show up as what they are in my own client webapp. I will settle for an answer to Q3. I can code in several languages and at the very least understand some others, but I'm just not knowledgeable enough about Unicode. In case you want to know, my webapp will be using pyodbc and Cassandra, but for these questions that doesn't matter.
When inserting into an NVARCHAR column in SSMS, you need to make absolutely sure you are prefixing your string with an N:
This will NOT work:
INSERT INTO dbo.MyTable(NVarcharColumn) VALUES('Some Text with Special Char')
SQL Server will interpret your string in the VALUES(..) as VARCHAR and thus turn any special characters it cannot represent into "?".
You need this:
INSERT INTO dbo.MyTable(NVarcharColumn) VALUES(N'Some Text with Special Char')
Prefixing your text literal with N'..' tells SQL Server to treat it as NVARCHAR all the way.
Does this help you solve your Q3?
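And for Q3 itself, a hedged T-SQL sketch (reusing the hypothetical dbo.MyTable from above): CONVERT to VARBINARY exposes the raw UTF-16 bytes of an NVARCHAR value, and UNICODE() returns the code point of its first character, so genuinely stored Japanese text shows CJK code points rather than 63 ('?').

SELECT NVarcharColumn,
       CONVERT(VARBINARY(400), NVarcharColumn) AS raw_utf16_bytes,
       UNICODE(NVarcharColumn) AS first_code_point
FROM dbo.MyTable;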