Update Oracle table and replace one character with another - SQL

I have a table where one of the fields contains values like ¤1¤. It is a Unicode database and the column is NVARCHAR2.
I want to replace the ¤ with a ? and write this:
update table1 set col1 = REPLACE(col1,'¤','?');
commit;
But col1 is not updated.
What am I doing wrong?

select ascii('¤') from dual;
On a non-Unicode database this returns 164, but on a Unicode database it does not, so the REPLACE as written will not find the character and will not work.
select chr(164 USING NCHAR_CS) from dual;
This, on the other hand, does return '¤'.
Hence the following REPLACE should work:
select replace(col1, chr(164 USING NCHAR_CS), chr(63)) from table1;
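If that SELECT shows the character being replaced, the same expression can be applied in the UPDATE. A minimal sketch, reusing the table and column names from the question:
update table1 set col1 = replace(col1, chr(164 USING NCHAR_CS), chr(63)); -- chr(63) is '?'
commit;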

Sometimes a simple cut and paste does not carry the character into your query correctly, so instead of typing the character into the query, get its character code (you can use the ASCII or DUMP function in Oracle to find it) and use that code in your REPLACE.
Your character appears to have code 164, so try:
update table1 set col1 = replace(col1,chr(164),'?');
commit;
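If you are not sure which code the stored character actually has (it can differ between database character sets), one way to check first is to look at the raw bytes with DUMP. A sketch against the question's table:
-- 1016 = show the bytes in hex plus the character set name
select col1, dump(col1, 1016) as stored_bytes from table1 where rownum <= 5;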

Related

How to save Russian character in Oracle Database [duplicate]

I have a database with one column of the type nvarchar. If I write
INSERT INTO table VALUES ("玄真")
It shows ¿¿ in the table. What should I do?
I'm using SQL Developer.
Use single quotes, rather than double quotes, to create a text literal; for an NVARCHAR2/NCHAR text literal you also need to prefix it with N.
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE table_name ( value NVARCHAR2(20) );
INSERT INTO table_name VALUES (N'玄真');
Query 1:
SELECT * FROM table_name
Results:
| VALUE |
|-------|
| 玄真 |
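For comparison, a plain literal without the N prefix is routed through the database character set first, which is where the ¿¿ can come from if that character set cannot hold the characters. A quick sketch against the same table:
INSERT INTO table_name VALUES ('玄真');  -- plain literal: converted through the database character set
INSERT INTO table_name VALUES (N'玄真'); -- N literal: kept in the national (NCHAR) character set
SELECT value, DUMP(value, 1016) FROM table_name;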
First, using NVARCHAR might not even be necessary.
The 'N' character data types are for storing data that doesn't fit in the database's defined character set. There's an auxiliary character set, defined as the NCHAR character set, for exactly this purpose. It's something of a band-aid: once you create a database, it can be difficult to change its character set. Moral of this story: take great care in choosing the character set when creating your database, and do not just accept the defaults.
Here's a scenario (LiveSQL) where we're storing a Chinese string in both NVARCHAR and VARCHAR2.
CREATE TABLE SO_CHINESE ( value1 NVARCHAR2(20), value2 varchar2(20 char));
INSERT INTO SO_CHINESE VALUES (N'玄真', '我很高興谷歌翻譯。');
select * from SO_CHINESE;
Note that both character sets are in the Unicode family. Note also that I declared my VARCHAR2 column to hold 20 characters (20 CHAR). That is because some characters may require up to 4 bytes of storage; a plain (20) definition, which defaults to byte semantics, would leave room for only 5 such characters.
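A quick way to see the byte-versus-character difference is to compare LENGTH and LENGTHB on the stored value; a small sketch against the SO_CHINESE table above:
-- LENGTH counts characters, LENGTHB counts the bytes actually stored
select value2, length(value2) as char_count, lengthb(value2) as byte_count from SO_CHINESE;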
Let's look at the same scenario using SQL Developer and my local database.
And to confirm the character sets:
SQL> clear screen
SQL> set echo on
SQL> set sqlformat ansiconsole
SQL> select *
2 from database_properties
3 where PROPERTY_NAME in
4 ('NLS_CHARACTERSET',
5 'NLS_NCHAR_CHARACTERSET');
PROPERTY_NAME             PROPERTY_VALUE   DESCRIPTION
NLS_NCHAR_CHARACTERSET    AL16UTF16        NCHAR Character set
NLS_CHARACTERSET          AL32UTF8         Character set
First of all, you should establish a Chinese-capable character encoding on your database, for example
UTF-8, Chinese_Hong_Kong_Stroke_90_BIN, Chinese_PRC_90_BIN, Chinese_Simplified_Pinyin_100_BIN ...
Here is an example with SQL Server 2008 (Management Studio), which includes all of these collations; you can find the same character encodings in other databases (MySQL, SQLite, MongoDB, MariaDB...).
Create the database with Chinese_PRC_90_BIN, though you can choose another collation:
Select a Page (Left Header) Options > Collation > Choose the Collation
Create a Table with the same Collation:
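For example (a sketch only; the column name is made up, and the collation can also be set per column):
CREATE TABLE ChineseTable
(
ChineseText VARCHAR(50) COLLATE Chinese_PRC_90_BIN
);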
Execute the Insert Statement
INSERT INTO ChineseTable VALUES ('玄真');

SQL REPLACE not working as expected

I have a temp table that I'm trying to eliminate all the white spaces from a specific column. However my replace isn't working at all. Here's the code I have
IF OBJECT_ID('tempdb..#attempt1temptable') IS NOT NULL
BEGIN
DROP TABLE #attempt1temptable
END
GO
CREATE TABLE #attempt1temptable
(
temp_description varchar(MAX),
temp_definition varchar(MAX)
)
INSERT INTO #attempt1temptable
SELECT graphic_description, graphic_definition
FROM graphic
UPDATE #attempt1temptable SET temp_description=REPLACE(temp_description, ' ', '')
UPDATE #attempt1temptable SET temp_description=REPLACE(temp_description, char(160), '')
--I have no idea why it won't update correctly here
select temp_description, LEN(temp_description) from #attempt1temptable
The INSERT and SELECT work as expected; however, the UPDATE is not removing the white space from temp_description. The query returns temp_description completely unchanged. What am I doing wrong here?
Try replacing some other whitespace characters:
select replace(replace(replace(replace(
temp_description
,char(9)/*tab*/,'')
,char(10)/*newline*/,'')
,char(13)/*carriage return*/,'')
,char(32)/*space*/,'')
from #attempt1temptable
You are probably dealing with characters other than a plain space; it could be a tab, for example.
I would suggest to copy and paste the character to remove from the actual data into your replace statement to ensure you have the right character(s).
Edit:
Also, you seem to be using LEN to verify whether the data was updated. Keep in mind that LEN does not count trailing white space, so the count might not change even if the data was updated.
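If you want a count that does include trailing white space, DATALENGTH (which returns the number of bytes stored) can be compared with LEN, for example:
-- LEN ignores trailing spaces; DATALENGTH does not
select temp_description, LEN(temp_description) as len_chars, DATALENGTH(temp_description) as stored_bytes
from #attempt1temptable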

Inserting statements from .sql file into Oracle database resulting in ORA-01704: string literal too long

I exported some data from my database table into a .sql file as INSERT statements.
Now I want to run them, but I get the error ORA-01704: string literal too long.
The cause is probably a CLOB column holding XML data longer than 4000 characters.
What would be the best workaround?
I have about 50 SQL INSERT statements in that file.
Rather than using INSERT statements, you could leave the data in a delimited file and look at using either SQL*Loader (SQLLDR) or external tables. External tables are awesome.
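As a rough sketch of the external-table approach (the directory object, file name and column layout below are made up for illustration; the exact access parameters depend on your Oracle version and file format, and older versions may need SQL*Loader with a LOBFILE clause for large CLOB data instead):
-- an external table reads the delimited file directly, so no INSERT scripts are needed
CREATE TABLE clob_data_ext
(
id NUMBER,
xml_data CLOB
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir -- directory object pointing at the folder holding the file
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY '|'
(
id CHAR(10),
xml_data CHAR(100000)
)
)
LOCATION ('clob_data.txt')
)
REJECT LIMIT UNLIMITED;
-- then load the target table with a plain INSERT ... SELECT
INSERT INTO target_table (id, xml_col) SELECT id, xml_data FROM clob_data_ext;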
The way to get something larger than 4000 bytes in is to use PL/SQL, which supports strings of up to 32767 bytes. Here is an example of how to solve the ORA-01704: string literal too long error:
declare
vClobVal varchar2(32767) := '<Add text string here>';
begin
update CLOBTAB set CLOBCOL = vClobVal;
end;
You can also change your column type from VARCHAR2 to CLOB.
Also see if this link can help you: http://www.dba-oracle.com/t_ora_01704_string_literal_too_long.htm
Just a thought: for that particular column you could split the data across two columns. Say your current column is RANDOM_TEXT VARCHAR2(4000) and it is exceeding the limit; you can split it into two columns, say RANDOM_TEXT_1 and RANDOM_TEXT_2. When writing, put the first 4000 characters into RANDOM_TEXT_1 and the remainder into RANDOM_TEXT_2; when handing the value back to an application or API, combine them into a single string again, as sketched below.
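A rough sketch of that idea (the table name, column names and sample value here are illustrative only; the split is done in PL/SQL, where a VARCHAR2 variable can hold up to 32767 bytes):
DECLARE
long_text_ VARCHAR2(8000) := '<long text here>';
part1_ VARCHAR2(4000);
part2_ VARCHAR2(4000);
BEGIN
-- split in PL/SQL so no single SQL literal or value exceeds 4000 characters
part1_ := SUBSTR(long_text_, 1, 4000);
part2_ := SUBSTR(long_text_, 4001);
INSERT INTO my_table (random_text_1, random_text_2) VALUES (part1_, part2_);
COMMIT;
END;
-- read: concatenate for the application; TO_CLOB keeps the result from hitting the 4000-byte VARCHAR2 limit
SELECT TO_CLOB(random_text_1) || random_text_2 AS random_text FROM my_table;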
You can try this, it worked for me:
DECLARE
big_text_ CLOB := 'very very very very long text or XML.......';
BEGIN
INSERT INTO table_name (column1, column2, column3_CLOB) VALUES ('value1', 'value2', big_text_);
COMMIT;
END;

Convert HEX value to CHAR on DB2

In connection with data replication from SQL Server to DB2 I have the following question:
On DB2 I have a table containing (for simplicity) two columns: COL1 and COL2.
COL1 is defined as CHAR(20). COL2 is defined as CHAR(10).
COL1 is replicated from SQL by converting a string into hex, e.g. "abcdefghij" to "6162636465666768696A" or "1111111111" to "31313131313131313131" by using the following SQL query:
CONVERT(char(20), CAST(#InputString AS binary), 2)
where #InputString would be 'abcdefghij'.
In other words COL1 contains the hex value, but as a string (sorry if the wording is incorrect).
I need to convert the hex value back to a string and put this value into COL2.
What should the SQL query be on DB2 to do the conversion? I know how to do this on SQL Server, but not on DB2.
Note: The reason the hex-value is not pre-fixed with "0x" is because style 2 is used in the CONVERT statement.
select hex('A') from sysibm.sysdummy1;
returns 41.
and
select x'41' from sysibm.sysdummy1;
gives you 'A'. So you could put that in a loop over each pair of hex characters to rebuild the original string, or write your own UNHEX function.
Taken from dbforums.com /db2/1627076-display-hex-columns.html (edit Nov 2020: original source link is now a spam site)
DB2 has built-in encoding/decoding.
For the OP's question, use:
select CAST(ColumnName as char(20) CCSID 37) as ColumnName from TableName where SomeConditionExists
http://www-01.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.intro/src/tpc/db2z_introcodepage.dita
This is one of the topics closest to the subject of my problem:
I lost two days figuring out how to migrate XML files stored in a DB2 BLOB field using SQL Developer.
(Yes, migrating and running the queries from SQL Developer - we were migrating data from DB2 to Oracle, so that was the tool we used!)
How to show an XML file/string stored in a BLOB?
Let's start with what the problem was:
The data in the BLOB was an XML file.
When selected directly in a query, the output was unreadable.
When cast, like this:
select CAST(BLOBCOLUMN as VARCHAR(1000)) from TABLE where id = 100;
the output was in hex.
Nothing worked - not even the solutions from the links in this topic.
By accident I found a solution.
Create this function in DB2:
CREATE FUNCTION unhex(in VARCHAR(32000) FOR BIT DATA)
RETURNS VARCHAR(32000)
LANGUAGE SQL
CONTAINS SQL
DETERMINISTIC NO EXTERNAL ACTION
BEGIN ATOMIC
RETURN in;
END
Run SELECT:
select UNHEX( CAST(BLOBCOLUMN as VARCHAR(32000) FOR BIT DATA)) from TABLE where id = 100;
Result: the BLOB content comes back as readable text.
I use this to convert FOR BIT DATA to characters:
cast (colvalue as varchar(2000) ccsid ebcdic for sbcs data)

using trim in a select statement

I have a table, my_table, that has a field myfield. myfield is defined as VARCHAR(7). When I do:
SELECT myfield
FROM my_table;
I get what appears to be the entire 7 characters, but I only want the actual data.
I tried:
SELECT TRIM(myfield)
FROM my_table;
and several variations. But instead of getting 'abcd', I get 'abcd '.
How do I get rid of the trailing blanks?
As others have said:
trim whitespace before the data enters the database ("mop the floor...");
ensure this is not actually a column of type CHAR(7).
Additionally, add a CHECK constraint to ensure no trailing spaces ("...fix the leak"). While you are at it, also prevent leading spaces, double spaces and zero-length strings, e.g.
CREATE TABLE my_table
(
myfield VARCHAR(7) NOT NULL
CONSTRAINT myfield__whitespace
CHECK (
NOT (
myfield = ''
OR myfield LIKE ' %'
OR myfield LIKE '% '
OR myfield LIKE '% %'
)
)
);
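With that constraint in place, rows with stray whitespace are rejected at write time; a quick illustration:
-- rejected by the myfield__whitespace constraint
INSERT INTO my_table (myfield) VALUES ('abcd '); -- trailing space
INSERT INTO my_table (myfield) VALUES (' abcd'); -- leading space
-- accepted
INSERT INTO my_table (myfield) VALUES ('abcd');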
VARCHAR columns will not pad the string you insert, meaning if you are getting 'ABCD ', that's what you stored in the database. Trim your data before inserting it.
Make sure you are not using the CHAR datatype, which will pad your data in the way you suggest. In any case:
SELECT TRIM(myfield) FROM mytable;
will work.
Also make sure you are not confusing the actual response with the padding characters the SQL interpreter adds to format the data as a table.
Make sure that you are not inserting data into this column from a CHAR(7) field.
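For example, if the data is copied in from a CHAR(7) column, the blank padding comes along with it unless it is trimmed on the way in (the source table and column below are made up for illustration):
-- CHAR(7) values are blank-padded to 7 characters; RTRIM strips the padding before it is stored
INSERT INTO my_table (myfield)
SELECT RTRIM(char7_col) FROM legacy_table;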
You need to trim your result when selecting as opposed to when inserting, eg:
SELECT TRIM(myfield) FROM my_table;