Number formatting in SQL

I have a number that needs to be formatted like this:
Thousands need to be separated with .
Decimals need to be separated with ,
For example, number 1,234,567.89 needs to look like 1.234.567,89.
Is there any way I can do this with a simple SQL function, or do I have to write my own function?

Use to_char() together with a specification that , is the decimal separator and . is the thousands separator (which is not the default in Oracle):
select to_char(1234567.89, '9G999G999G999D00', 'NLS_NUMERIC_CHARACTERS = '',.''')
from dual;
Results in: 1.234.567,89
Details about format models: http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#SQLRF00211
Details about the to_char() function: http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions201.htm#SQLRF51882

Alternatively, you can work with the session territory setting.
create table mytest (field1 number);
insert into mytest values (1234567.89);
alter session set NLS_TERRITORY=GERMANY;
select field1, to_char(field1,'9G999G999G999D00') from mytest;
alter session set NLS_TERRITORY=AMERICA;
select field1, to_char(field1,'9G999G999G999D00') from mytest;
Output:
Table created.
1 row created.
Session altered.
FIELD1 TO_CHAR(FIELD1,'9G999G999G999D00')
---------- ----------------------------------
1234567,89 1.234.567,89
1 row selected.
Session altered.
FIELD1 TO_CHAR(FIELD1,'9G999G999G999D00')
---------- ----------------------------------
1234567.89 1,234,567.89
1 row selected.

Related

"not a valid month" while inserting timestamp into table

I get an error while trying to run an INSERT with a query that transforms merged date values (e.g. 20230208065521019355) into a proper timestamp format for a new column.
INSERT INTO NEWTABLE2(RC_DATETIME)
SELECT TO_CHAR(TO_TIMESTAMP(RC_TIMESTAMP, 'YYYY-MM-DD HH:MI:SS:FF'), 'YYYY-MM-DD HH:MI:SS.FF')
FROM NEWTABLE;
When I execute just the SELECT statement I get results, but as soon as I include the INSERT I get the 'not a valid month' error.
The data in RC_TIMESTAMP (a VARCHAR column) are the merged values, which look like this:
20230208065521019355, 20230208065523019356, 20230208065532019357, etc.
RC_DATETIME has VARCHAR(35) datatype.
I have tried reordering the TO_CHAR format, e.g. from 'YYYY-MM-DD HH:MI:SS.FF' to 'Mon-DD-YYYY HH:MI:SS.FF', to name a few.
From what you posted:
Source table:
SQL> CREATE TABLE newtable (rc_timestamp)
2 AS (SELECT '20230208065521019355' FROM DUAL);
Table created.
Target table:
SQL> CREATE TABLE newtable2
2 (rc_datetime VARCHAR2 (35));
Table created.
Insert:
SQL> INSERT INTO newtable2 (rc_datetime)
2 SELECT TO_CHAR (TO_TIMESTAMP (rc_timestamp, 'yyyymmddhh24missff6'),
3 'yyyy-mm-dd hh24:mi:ss:ff')
4 FROM newtable;
1 row created.
However, you should store timestamps in a TIMESTAMP column, not as strings. What benefit do you expect from strings? They only cause problems in later data processing.
SQL> DROP TABLE newtable2;
Table dropped.
SQL> CREATE TABLE newtable2
2 (rc_datetime TIMESTAMP);
Table created.
SQL> INSERT INTO newtable2
2 SELECT TO_TIMESTAMP (rc_timestamp, 'yyyymmddhh24missff6') FROM newtable;
1 row created.
SQL>
You commented that you still have the "not a valid month" error.
It means that the data - at the position where TO_TIMESTAMP expects a valid month value (01, 02, ..., 12) - contains something else. What? No idea; you have all the data. Try to find it by selecting a substring (the month starts at position 5 and is 2 characters long):
SQL> SELECT rc_timestamp, SUBSTR (rc_timestamp, 5, 2) month FROM newtable;
RC_TIMESTAMP MO
-------------------- --
20230208065521019355 02
SQL>
Invalid data is most probably here:
SELECT rc_timestamp
FROM newtable
WHERE SUBSTR (rc_timestamp, 5, 2) NOT BETWEEN '01' AND '12';
Once you find the invalid values, you'll decide what to do with them. Maybe you'll ignore those values (so you'd include an appropriate WHERE clause in the INSERT statement, as shown below), or fix them (somehow; can't tell how, as it depends on what you find), or ...
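For example, a sketch of the "ignore invalid months" variant using the tables from above (it only filters the month part; other parts of the string could still be invalid):
INSERT INTO newtable2 (rc_datetime)
SELECT TO_TIMESTAMP (rc_timestamp, 'yyyymmddhh24missff6')
  FROM newtable
 WHERE SUBSTR (rc_timestamp, 5, 2) BETWEEN '01' AND '12';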
If you want to identify invalid values during insert, a simple option is a loop with an inner begin-exception-end block which lets you capture those values and still proceed with other row(s). Something like this:
create table invalid_values as
  select id, value from source_table where 1 = 2;

begin
  for cur_r in (select * from source_table) loop
    begin
      insert into newtable2 ...
    exception
      when others then
        insert into invalid_values (id, value) values (cur_r.id, cur_r.value);
    end;
  end loop;
end;
Once you're done, select * from invalid_values so that you can deal with what's left.
That should be OK as you have 10,000 rows, so the loop won't take forever to complete. True, it will be slower than a set-oriented operation, but ... you have to catch the invalid rows somehow.
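Applied to the tables above, that sketch might look like this (the invalid_rc_timestamps table name is just made up for illustration):
create table invalid_rc_timestamps as
  select rc_timestamp from newtable where 1 = 2;

begin
  for cur_r in (select rc_timestamp from newtable) loop
    begin
      insert into newtable2 (rc_datetime)
      values (to_timestamp(cur_r.rc_timestamp, 'yyyymmddhh24missff6'));
    exception
      when others then
        -- remember the offending value and carry on with the next row
        insert into invalid_rc_timestamps (rc_timestamp)
        values (cur_r.rc_timestamp);
    end;
  end loop;
end;
/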

I am stuck using SQL*Loader with to_date() for one of my fields of type date

I am using SQL*Loader to load a table from DB2 into an Oracle DB using
LOAD DATA INFILE '<path><File_name>.del'
replace
into table schema.tableName fields terminated by ','
(
col1,
col2,
col3,....
)
Rejected - Error on table schema_name.table_name, column col3.
ORA-01861: literal does not match format string.
As col3 is of type DATE, we need to convert the input to a date format Oracle accepts.
Could anyone please tell me how to use to_date() in SQL*Loader?
Here's an example:
My test table, which will hold input data:
SQL> desc test
Name Null? Type
----------------------------------------- -------- -------------------
ID NUMBER
DATUM DATE
SQL>
Control file; note the to_date function call (in double quotes, with a date format mask that has to match the format of the data you're loading) and the 3rd row, whose format is "invalid":
load data
infile *
replace
into table test
fields terminated by ','
trailing nullcols
(
id,
datum "to_date(:datum, 'yyyy-dd-mm')"
)
begindata
1,2020-20-05
2,2020-28-12
3,20200215
Loading session:
SQL> alter session set nls_date_format = 'dd.mm.yyyy';
Session altered.
SQL> $sqlldr scott/tiger control=test20.ctl log=test20.log
SQL*Loader: Release 11.2.0.2.0 - Production on Sri Svi 20 08:10:53 2020
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Commit point reached - logical record count 2
Commit point reached - logical record count 3
SQL> select * From test;
ID DATUM
---------- ----------
1 20.05.2020
2 28.12.2020
SQL>
As you can see, the 3rd row wasn't loaded. Here are the log file contents; note the ORA-01861 error:
Table TEST, loaded from every logical record.
Insert option in effect for this table: REPLACE
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID FIRST * , CHARACTER
DATUM NEXT * , CHARACTER
SQL string for column : "to_date(:datum, 'yyyy-dd-mm')"
Record 3: Rejected - Error on table TEST, column DATUM.
ORA-01861: literal does not match format string
Table TEST:
2 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
So: make sure that all input data follow the same format mask.

Oracle sort order and greater than operator are not consistent on varchar column

I have a simple table
CREATE TABLE TRIAL
( "COL" VARCHAR2(20 BYTE)
)
and I insert there two values, '0', and 'A'.
The query
select * from trial order by col
returns
A
0
in this order, while the query
select * from trial where col>'A'
returns no results.
What could be the reason for such behaviour, and is there some simple trick, without changing the DB configuration, to get ORDER BY and > to behave in a consistent manner?
EDIT:
to answer the comments:
select * from v$parameter where name like 'nls_sort'
returns
and
select dump(col,16),col from trial
returns
Typ=1 Len=1: 30 0
Typ=1 Len=1: 41 A
It should be sorting by the binary/ASCII value of the string.
http://www.ascii-code.com/
Translating the values
0 => 48
A => 65
When you sort by col, the default is ascending, so I would expect the 0 to come first, then the A.
When you ask for > 'A', you are asking for > 65, and neither 'A' nor '0' is greater, so that makes sense.
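You can verify the code points directly with the standard ASCII() function:
SELECT ASCII('0') AS zero_code, ASCII('A') AS a_code FROM dual;
-- returns 48 and 65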
As mentioned in the comments, I would check your NLS_SORT value to see if something is odd there for the sorting:
https://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams130.htm#REFRN10127
You can also make sure this matches your NLS_COMP value:
https://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams120.htm#REFRN10117
You can find more info in this answer:
https://stackoverflow.com/a/7191170/137649
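A quick way to see the session-level values of both settings (nls_session_parameters is a standard dictionary view):
-- session-level values are what ALTER SESSION changes, so check these too
SELECT parameter, value
FROM nls_session_parameters
WHERE parameter IN ('NLS_SORT', 'NLS_COMP');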
NOTE: The actual issue turned out to be with the NLS_SORT parameter. Please have a look at Oracle – Case Insensitive Sorts & Compares to get a good understanding of that specific parameter.
Actual Issue
The problem is actually due to the NLS_SORT parameter value having been changed from 'BINARY' to 'WEST_EUROPEAN'.
Setup
SQL> CREATE TABLE TRIAL
2 ( "COL" VARCHAR2(20 BYTE)
3 );
Table created.
SQL> INSERT INTO trial(col) VALUES('0');
1 row created.
SQL> INSERT INTO trial(col) VALUES('A');
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM trial ORDER BY col;
COL
--------------------
0
A
NLS Parameter values
SQL> SHOW PARAMETER NLS_SORT;
NAME TYPE VALUE
------------------------------------ ----------- ------
nls_sort string BINARY
SQL> SHOW PARAMETER NLS_COMP;
NAME TYPE VALUE
------------------------------------ ----------- ------
nls_comp string BINARY
Let's change the NLS_SORT parameter value:
SQL> ALTER SESSION SET NLS_SORT='WEST_EUROPEAN';
Session altered.
Reproducing the issue
SQL> SELECT * FROM trial ORDER BY col;
COL
--------------------
A
0
So the sorting of the values has changed along with the NLS_SORT parameter, while the > comparison still follows NLS_COMP (which is still BINARY) - which is why ORDER BY and > no longer agree.
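To get ORDER BY and > consistent again, either reset the session, or force a binary sort for just one query with NLSSORT; a sketch:
ALTER SESSION SET NLS_SORT = 'BINARY';

-- or, without touching session settings, force binary ordering for a single query
SELECT * FROM trial ORDER BY NLSSORT(col, 'NLS_SORT = BINARY');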

Is it possible to update data inside a CLOB using SQL?

I have a table having one clob column which has XML data in it.
Say I want to replace XYZ with ABC in the clob column.
Is it possible using sqlplus?
Why not try it?
SQL> create table nnn(c1 clob);
Table created.
SQL> insert into nnn values ('text ABC end');
1 row created.
SQL> select * from nnn;
C1
-------------------------------------------------
text ABC end
SQL> update nnn set c1=replace(c1,'ABC','XYZ');
1 row updated.
SQL> select * from nnn;
C1
-------------------------------------------------
text XYZ end
SQL>
"i have new line in the column. any
advice?"
Newlines are characters; if you want to amend text which contains them, you need to include them in the search string. You can do this using the CHR() function, which takes an ASCII value as an argument. The precise codes you need to include vary according to OS. Because I ran this example on MS Windows I needed to pass both linefeed (ASCII 10) and carriage return (ASCII 13).
SQL> select * from t42
2 /
TXT
--------------------------------------------------------------------------------
<ABC> ABCD
</ABC>
SQL> update t42 set txt=replace(txt,'ABCD'||chr(10)||chr(13), 'APC woz here')
2 /
1 row updated.
SQL> select * from t42
2 /
TXT
--------------------------------------------------------------------------------
<ABC> APC woz here </ABC>
SQL>
Incidentally, if you are storing XML text it might be worthwhile using the XMLType datatype for the column instead of CLOB. It comes with a lot of useful functionality.
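A minimal sketch of what that could look like; the table and column names here are made up:
CREATE TABLE xml_demo (doc XMLTYPE);

INSERT INTO xml_demo VALUES (XMLTYPE('<ABC>ABCD</ABC>'));

-- replace the text node via XPath instead of plain string replacement
UPDATE xml_demo SET doc = UPDATEXML(doc, '/ABC/text()', 'APC woz here');

SELECT x.doc.getClobVal() FROM xml_demo x;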
Yes, it's possible with one REPLACE() function. Try:
update nnn set c1 = REPLACE(c1,'ABC>','XYZ>')

Oracle vs. Hypersonic SQL

I need to select by date in a SQL query, for example
SELECT * FROM foo WHERE date = '2009-09-09'
That query works in my Hypersonic test database, but not Oracle, which seems to require:
SELECT * FROM foo WHERE date = TO_DATE('2009-09-09', 'yyyy-mm-dd')
Is there a way to select by date uniformly across these two databases?
I found the answer - you can create the TO_DATE function in HyperSonic and then the second query works in both. For example, make the class:
public class Date {
    // Ignore the format argument and return the literal unchanged,
    // so HyperSonic ends up comparing against the plain string.
    public static String toDate( String value, String format ) {
        return value;
    }
}
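The static method then has to be registered in HyperSonic; in HSQLDB 1.8 that is done with CREATE ALIAS (the package name below is just an example, use whichever package your class lives in):
CREATE ALIAS TO_DATE FOR "com.example.Date.toDate";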
And the query
SELECT * FROM foo WHERE date = TO_DATE('2009-09-09', 'yyyy-mm-dd')
works in both.
You could try the H2 database as your in-memory database (http://www.h2database.com). It should have a decent Oracle compatibility mode.
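If you go that route, H2's Oracle mode can be enabled per connection (append ;MODE=Oracle to the JDBC URL) or with a statement; for example:
-- switch the current H2 session into Oracle compatibility mode
SET MODE Oracle;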
HSQLDB 2.0 supports ANSI date literals just as Oracle. So if you can upgrade to HSQLDB 2.0, you can use:
SELECT *
FROM foo
WHERE date_column = DATE '2009-09-09'
in both databases (and in a lot of other databases, actually).
A "date = 'literal string'" predicate in Oracle is usually not recommended - it is sensitive to NLS_DATE_FORMAT settings and often leads to misunderstanding on what you're looking for in a result set (in your example above do you want all records for the day or just those created exactly at midnight?)
If you need a uniform query string for both databases, you might rename the table in Oracle and create a view with the name foo and cast the date datatype to varchar2 in the view logic. You'll probably need to add a function-based index to the table to allow efficient searching on the recast value.
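A rough sketch of that idea; all names here (foo_tab, date_col, date_str) are assumptions for illustration:
-- rename the real table and expose the date as a string through a view named foo
ALTER TABLE foo RENAME TO foo_tab;

CREATE OR REPLACE VIEW foo AS
SELECT t.*, TO_CHAR(t.date_col, 'YYYY-MM-DD') AS date_str
FROM foo_tab t;

-- function-based index so searches on the recast value stay efficient
CREATE INDEX foo_tab_date_str_ix ON foo_tab (TO_CHAR(date_col, 'YYYY-MM-DD'));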
If you can, set your NLS_DATE_FORMAT in the Oracle session; that way you do not need to use the TO_DATE function, as Oracle will do the conversion for you behind the scenes.
SQL> select value from v$nls_parameters where parameter = 'NLS_DATE_FORMAT';
VALUE
----------------------------------------------------------------
DD/MM/YYYY
SQL> create table nls_date_test ( id number(10) , date_entered date );
Table created.
SQL> insert into nls_date_test values ( 1 , '31/05/2009' );
1 row created.
SQL> insert into nls_date_test values ( 2 , '30/05/2009' );
1 row created.
SQL> select * from nls_date_test where date_entered = '2009-09-09';
select * from nls_date_test where date_entered = '2009-09-09'
*
ERROR at line 1:
ORA-01861: literal does not match format string
SQL> alter session set nls_date_format = 'YYYY-MM-DD';
Session altered.
SQL> select * from nls_date_test where date_entered = '2009-05-30';
ID DATE_ENTER
---------- ----------
2 2009-05-30
SQL> select value from v$nls_parameters where parameter = 'NLS_DATE_FORMAT';
VALUE
----------------------------------------------------------------
YYYY-MM-DD
SQL>