I am trying to convert an epoch value into a normal datetime in Oracle SQL Developer. I wrote the code below, but it fails with "missing expression".
My code:
SELECT to_date(CreationDate, 'yyyymmdd','nls_calendar=persian')+ EpochDate/24/60/60
from table1
My table1:
ID    EpochDate
100   16811048
101   16810904
102   12924715
103   15667117
I don't know what is wrong!
If CreationDate is a DATE and EpochDate is a VARCHAR2, you can try this:
SELECT to_date(to_char(CreationDate, 'yyyymmdd','nls_calendar=persian'),'yyyymmdd') +
EpochDate/24/60/60 as newDate
from table1
or:
select to_date(to_char(CreationDate, 'yyyymmdd','nls_calendar=persian'),'yyyymmdd') +
numtodsinterval(EpochDate,'SECOND') as newDate
from table1
Let me show you how with an example.
Demo data
SQL> create table c1 ( id number generated always as identity, date_test date) ;
Table created.
SQL> insert into c1 ( date_test ) values ( sysdate ) ;
1 row created.
SQL> insert into c1 ( date_test ) values ( sysdate-365 ) ;
1 row created.
SQL> insert into c1 ( date_test ) values ( sysdate-4000 ) ;
1 row created.
SQL> insert into c1 ( date_test ) values ( sysdate-7200 ) ;
1 row created.
SQL> commit ;
Commit complete.
Now, let's add a column called epoch, and a small function to make it easier to update the column.
SQL> alter table c1 add epoch number ;
Table altered.
SQL> create or replace function date_to_unix_ts( PDate in date ) return number is
l_unix_ts number;
begin
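  -- a DATE minus a DATE yields days; * 60 * 60 * 24 converts days to seconds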
l_unix_ts := ( PDate - date '1970-01-01' ) * 60 * 60 * 24;
return l_unix_ts;
end;
/
Function created.
We update the epoch column with the real epoch value, based on the date_test field:
SQL> update c1 set epoch=date_to_unix_ts (date_test) ;
4 rows updated.
SQL> select * from c1 ;
ID DATE_TEST EPOCH
---------- ---------------------------------------- -----------------
1 2021-09-15 12:25:25 1631708725
2 2020-09-15 12:25:25 1600172725
3 2010-10-03 12:25:25 1286108725
4 2001-12-29 12:25:26 1009628726
SQL> select to_char(to_date('1970-01-01','YYYY-MM-DD') + numtodsinterval(EPOCH,'SECOND'),'YYYY-MM-DD HH24:MI:SS') from c1 ;
TO_CHAR(TO_DATE('19
-------------------
2021-09-15 12:25:25
2020-09-15 12:25:25
2010-10-03 12:25:25
2001-12-29 12:25:26
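The reverse conversion can be wrapped in a function too, mirroring date_to_unix_ts. A minimal sketch (the name unix_ts_to_date is mine, not part of the demo above):
create or replace function unix_ts_to_date( p_epoch in number ) return date is
begin
  -- 1970-01-01 plus the epoch seconds, added as a day-to-second interval
  return date '1970-01-01' + numtodsinterval(p_epoch, 'SECOND');
end;
/
With it, select id, unix_ts_to_date(epoch) from c1 should return the same values as the TO_CHAR query above.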
Related
Hi, because of a technical limitation of a framework, I need another way to do a similar query without using single quotes.
--> current
Select Json_Value(json, '$.bankReference') From R_O where json like '%12345%';
--> Need a valid query without single quotes which does exactly the same thing, maybe a function or something.
Select Json_Value(json, '$.bankReference') From R_O where json like %12345%;
A similar alternative, but a little bit more dynamic.
Demo
SQL> create table t1 ( c1 varchar2(10) ) ;
Table created.
SQL> insert into t1 values ( 'A12345B' );
1 row created.
SQL> insert into t1 values ( 'A12345C' );
1 row created.
SQL> insert into t1 values ( 'A12345D' );
1 row created.
SQL> insert into t1 values ( 'A12399B' );
1 row created.
SQL> insert into t1 values ( 'A13299B' );
1 row created.
SQL> insert into t1 values ( 'A21399B' );
1 row created.
SQL> commit ;
Commit complete.
SQL> select * from t1 ;
C1
----------
A12345B
A12345C
A12345D
A12399B
A13299B
A21399B
6 rows selected.
Now let's create a function that takes two parameters:
the column we want to check
the value we want to apply the % to (I am guessing it is always a number; if the value contains any string, it won't work — see the varchar2 variant sketched after the tests)
Function
SQL> create or replace function p_chk_json(p_text varchar2, p_val number)
return integer is
begin
if p_text like '%'||p_val||'%' then
return 1;
else
return 0;
end if;
end;
/
Function created.
Then test it for 12345 or 99
SQL> select * from t1 where p_chk_json(c1 , 12345) = 1;
C1
----------
A12345B
A12345C
A12345D
SQL> select * from t1 where p_chk_json(c1 , 99 ) = 1 ;
C1
----------
A12399B
A13299B
A21399B
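If the search value may contain letters as well, the same idea works with the parameter declared as varchar2. A minimal sketch (my own naming; note the caller then needs a bind variable or another column to supply the value, since a string literal would reintroduce the quotes):
create or replace function p_chk_json_str(p_text varchar2, p_val varchar2)
return integer is
begin
  -- same wildcard test, but the value may now contain any characters
  if p_text like '%'||p_val||'%' then
    return 1;
  else
    return 0;
  end if;
end;
/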
In Oracle you can write a function something like this:
create or replace function test1(p_text varchar2)
return integer is
begin
if p_text like '%12345%' then
return 1;
else
return 0;
end if;
end test1;
Your modified SQL statement will be:
Select Json_Value(json, '$.bankReference') From R_O where test1(json) = 1;
I have the following code, which is working fine. If I run the procedure more than once with the same values I get a PRIMARY KEY violation, which I expect. Could the INSERT be converted into a MERGE or NOT EXISTS to avoid this issue?
The examples I saw online appear to be using literal values or an ON statement with the MERGE.
As I am a novice SQL developer, any help or sample code which reflects my requirement would be greatly appreciated.
Thanks in advance to all who answer.
ALTER SESSION SET NLS_DATE_FORMAT = 'MMDDYYYY HH24:MI:SS';
CREATE OR REPLACE TYPE nt_date IS TABLE OF DATE;
/
CREATE OR REPLACE FUNCTION generate_dates_pipelined(
p_from IN DATE,
p_to IN DATE
)
RETURN nt_date PIPELINED DETERMINISTIC
IS
v_start DATE := TRUNC(LEAST(p_from, p_to));
v_end DATE := TRUNC(GREATEST(p_from, p_to));
BEGIN
LOOP
PIPE ROW (v_start);
EXIT WHEN v_start >= v_end;
v_start := v_start + INTERVAL '1' DAY;
END LOOP;
RETURN;
END generate_dates_pipelined;
/
create table schedule_assignment(
schedule_id number(4),
schedule_date DATE,
employee_id NUMBER(6) DEFAULT 0,
constraint sa_chk check (schedule_date=trunc(schedule_date, 'dd')),
constraint sa_pk primary key (schedule_id, schedule_date)
);
CREATE OR REPLACE PROCEDURE
create_schedule_assignment (
p_schedule_id IN NUMBER,
p_start_date IN DATE,
p_end_date IN DATE
)
IS
BEGIN
INSERT INTO schedule_assignment(
schedule_id,
schedule_date
)
SELECT
p_schedule_id,
COLUMN_VALUE
FROM TABLE(generate_dates_pipelined(p_start_date, p_end_date));
END;
EXEC create_schedule_assignment (1, DATE '2021-08-21', DATE '2021-08-30');
Rewrite the procedure to:
SQL> CREATE OR REPLACE PROCEDURE
2 create_schedule_assignment (
3 p_schedule_id IN NUMBER,
4 p_start_date IN DATE,
5 p_end_date IN DATE
6 )
7 IS
8 BEGIN
9 merge into schedule_assignment s
10 using (select p_schedule_id as schedule_id,
11 column_value as schedule_date
12 from table(generate_dates_pipelined(p_start_date, p_end_date))
13 ) x
14 on ( x.schedule_id = s.schedule_id
15 and x.schedule_date = s.schedule_date
16 )
17 when not matched then insert (schedule_id, schedule_date)
18 values (x.schedule_id, x.schedule_date);
19 END;
20 /
Procedure created.
SQL>
Testing: initially, table is empty:
SQL> select schedule_id, min(schedule_date) mindat, max(schedule_date) maxdate, count(*)
2 from schedule_assignment group by schedule_id;
no rows selected
Run the procedure for the 1st time:
SQL> EXEC create_schedule_assignment (1, DATE '2021-08-21', DATE '2021-08-30');
PL/SQL procedure successfully completed.
Table contents:
SQL> select schedule_id, min(schedule_date) mindat, max(schedule_date) maxdate, count(*)
2 from schedule_assignment group by schedule_id;
SCHEDULE_ID MINDAT MAXDATE COUNT(*)
----------- ---------- ---------- ----------
1 21/08/2021 30/08/2021 10
Run the procedure with same parameters again:
SQL> EXEC create_schedule_assignment (1, DATE '2021-08-21', DATE '2021-08-30');
PL/SQL procedure successfully completed.
SQL> EXEC create_schedule_assignment (1, DATE '2021-08-21', DATE '2021-08-30');
PL/SQL procedure successfully completed.
SQL> EXEC create_schedule_assignment (1, DATE '2021-08-21', DATE '2021-08-30');
PL/SQL procedure successfully completed.
Result: nothing changed, no new rows in the table (but no error either):
SQL> select schedule_id, min(schedule_date) mindat, max(schedule_date) maxdate, count(*)
2 from schedule_assignment group by schedule_id;
SCHEDULE_ID MINDAT MAXDATE COUNT(*)
----------- ---------- ---------- ----------
1 21/08/2021 30/08/2021 10
SQL>
Run the procedure with the same SCHEDULE_ID, but different dates:
SQL> EXEC create_schedule_assignment (1, DATE '2021-08-29', DATE '2021-09-02');
PL/SQL procedure successfully completed.
SQL> select schedule_id, min(schedule_date) mindat, max(schedule_date) maxdate, count(*)
2 from schedule_assignment group by schedule_id;
SCHEDULE_ID MINDAT MAXDATE COUNT(*)
----------- ---------- ---------- ----------
1 21/08/2021 02/09/2021 13
SQL>
Right; the number of rows is now 13 (was 10 previously), because 31.08., 01.09. and 02.09. were added.
New SCHEDULE_ID:
SQL> EXEC create_schedule_assignment (2, DATE '2021-09-05', DATE '2021-09-07');
PL/SQL procedure successfully completed.
SQL> select schedule_id, min(schedule_date) mindat, max(schedule_date) maxdate, count(*)
2 from schedule_assignment group by schedule_id;
SCHEDULE_ID MINDAT MAXDATE COUNT(*)
----------- ---------- ---------- ----------
1 21/08/2021 02/09/2021 13
2 05/09/2021 07/09/2021 3
SQL>
Looks OK to me.
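The question also mentioned NOT EXISTS; that works as well. A minimal sketch of the procedure's INSERT guarded that way (same tables and pipelined function as above):
INSERT INTO schedule_assignment (schedule_id, schedule_date)
SELECT p_schedule_id, t.COLUMN_VALUE
FROM TABLE(generate_dates_pipelined(p_start_date, p_end_date)) t
WHERE NOT EXISTS
      (SELECT 1
       FROM schedule_assignment s
       WHERE s.schedule_id = p_schedule_id
       AND s.schedule_date = t.COLUMN_VALUE);
MERGE is the more flexible choice, though, if a WHEN MATCHED branch is ever needed.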
table 1
id  pk1  timestamp
1   a    10-jul-2019
2   h    11-mar-2019
3   k    19-jul-2019
4   j    7-nov-2018
5   h    11-jul-2019
table 2
col  start_date   end_date
a    10-jul-2019
h    11-mar-2019  11-jul-2019
k    19-jul-2019
j    7-nov-2018
h    11-jul-2019
Q:> I want this process to repeat at equal intervals of time.
If a new value gets entered into table 1, the same value should be entered into table 2; but if an existing value enters table 1, then just update the end date of the previous matching value in table 2 and add one new value with a null end date into table 2 (for example, value h in tables 1 and 2).
We need to use only a single query.
With MERGE we are not able to get this.
Your scenario needs a trigger created on table 1. You can put your logic for updating table 2 in the trigger.
See the demo below:
--Table1 DDL
Create table tab1 (
id number,
pk1 varchar(1),
time_stamp date
);
--Table2 DDL
create table tab2 (
col varchar(1),
start_date date,
end_date date
);
Here is the trigger on table 1:
Create or replace trigger t1
before insert on tab1
for each row
declare
  l_exists integer;
begin
  -- does this value already have a row in tab2?
  select count(*)
    into l_exists
    from tab2
   where col = :new.pk1;
  if l_exists > 0 then
    -- close only the currently open version
    update tab2
       set end_date = :new.time_stamp
     where col = :new.pk1
       and end_date is null;
  end if;
  -- insert the new open version
  insert into tab2
  values (:new.pk1, :new.time_stamp, null);
end;
/
--Execution:
insert into tab1 values (1,'a',to_date('10-jul-2019','DD-MON-YYYY'));
insert into tab1 values (2,'h',to_date('11-mar-2019','DD-MON-YYYY'));
insert into tab1 values (3,'k',to_date('19-jul-2019','DD-MON-YYYY'));
insert into tab1 values (4,'j',to_date('07-nov-2019','DD-MON-YYYY'));
insert into tab1 values (5,'h',to_date('11-jul-2019','DD-MON-YYYY'));
Commit;
SQL> select * from tab1;
ID P TIME_STAM
---------- - ---------
1 a 10-JUL-19
3 k 19-JUL-19
4 j 07-NOV-19
2 h 11-MAR-19
5 h 11-JUL-19
SQL> select * from tab2;
C START_DAT END_DATE
- --------- ---------
a 10-JUL-19
k 19-JUL-19
j 07-NOV-19
h 11-MAR-19 11-JUL-19
h 11-JUL-19
This is my insert trigger on the table where I store parameters for my system. When I insert into the table, I want to change the end_date of the last record in order to keep record versioning.
create or replace trigger parameter_version
before insert
on parameters
for each row
declare
  v_is_exist number := 0;
  v_rowid rowid;
begin
  select count(*) into v_is_exist from parameters where name = :new.name; -- check if the parameter exists
  select rowid into v_rowid from parameters where name = :new.name and end_date is null; -- rowid of the record which should be changed
  if v_is_exist <> 0 then
    update parameters
       set end_date = :new.start_date - 1
     where rowid = v_rowid;
  end if;
end;
Situation in table before insert is:
| id | name | value | start_date | end_date |
-----------------------------------------------
| 1 |Par_A | 10 | 2016-09-01 | 2016-10-01 |
-----------------------------------------------
| 2 |Par_A | 20 | 2016-10-02 | 2016-10-03 |
-----------------------------------------------
| 3 |Par_A | 30 | 2016-10-05 | <null> |
-----------------------------------------------
The record with id=3 should get end_date set to :new.start_date - 1 (closing that version), and the record being inserted is the next parameter version with start_date = sysdate.
I've got an ORA-04091 error 'table name is mutating, trigger/function may not see it'.
I know that this case is hard and probably impossible, but maybe someone knows a solution? Or maybe another solution exists for this case?
You can handle this with an After Statement trigger with the LEAD Analytic Function:
DROP TABLE demo;
CREATE TABLE demo( id NUMBER
, name VARCHAR2( 30 )
, VALUE NUMBER
, start_date DATE
, end_date DATE
);
INSERT INTO demo( id, name, VALUE, start_date, end_date )
VALUES ( 1, 'Par_A', 10, TO_DATE( '2016-09-01', 'YYYY-MM-DD' ), TO_DATE( '2016-10-01', 'YYYY-MM-DD' ) );
INSERT INTO demo( id, name, VALUE, start_date, end_date )
VALUES ( 2, 'Par_A', 20, TO_DATE( '2016-10-02', 'YYYY-MM-DD' ), TO_DATE( '2016-10-04', 'YYYY-MM-DD' ) );
INSERT INTO demo( id, name, VALUE, start_date )
VALUES ( 3, 'Par_A', 30, TO_DATE( '2016-10-05', 'YYYY-MM-DD' ) );
INSERT INTO demo( id, name, VALUE, start_date )
VALUES ( 4, 'Par_A', 40, TO_DATE( '2016-10-07', 'YYYY-MM-DD' ) );
INSERT INTO demo( id, name, VALUE, start_date )
VALUES ( 5, 'Par_A', 50, TO_DATE( '2016-10-11', 'YYYY-MM-DD' ) );
COMMIT;
SELECT id
, name
, start_date
, end_date
, LEAD( start_date ) OVER( PARTITION BY name ORDER BY start_date ) - 1 AS new_date
FROM demo
WHERE end_date IS NULL
ORDER BY id;
CREATE OR REPLACE TRIGGER demo_aius
AFTER INSERT OR UPDATE
ON demo
REFERENCING NEW AS new OLD AS old
DECLARE
CURSOR c_todo
IS
SELECT id, new_date
FROM (SELECT id
, name
, start_date
, end_date
, LEAD( start_date ) OVER( PARTITION BY name ORDER BY start_date ) - 1 AS new_date
FROM demo
WHERE end_date IS NULL)
WHERE new_date IS NOT NULL;
BEGIN
FOR rec IN c_todo
LOOP
UPDATE demo
SET end_date = rec.new_date
WHERE id = rec.id;
END LOOP;
END demo_aius;
/
INSERT INTO demo( id, name, VALUE, start_date )
VALUES ( 6, 'Par_A', 60, TO_DATE( '2016-10-15', 'YYYY-MM-DD' ) );
COMMIT;
SELECT id
, name
, start_date
, end_date
FROM demo
ORDER BY id;
As the script shows, such an update can even handle multiple missing end dates, in case the trigger was accidentally disabled. The PARTITION BY name part makes sure that it also works after complex insert statements.
BTW, I agree that autonomous transactions in triggers are a last resort. I try to avoid triggers in general by controlling the user interface and putting all such functionality in packages, as sketched below.
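A minimal sketch of that package-based approach (names are mine; the idea is that the package procedure becomes the only write path, so no trigger is needed):
CREATE OR REPLACE PACKAGE demo_pkg AS
  PROCEDURE add_version( p_id NUMBER, p_name VARCHAR2, p_value NUMBER, p_start DATE );
END demo_pkg;
/
CREATE OR REPLACE PACKAGE BODY demo_pkg AS
  PROCEDURE add_version( p_id NUMBER, p_name VARCHAR2, p_value NUMBER, p_start DATE )
  IS
  BEGIN
    -- close the currently open version, if any
    UPDATE demo
       SET end_date = p_start - 1
     WHERE name = p_name
       AND end_date IS NULL;
    -- insert the new open version
    INSERT INTO demo( id, name, VALUE, start_date )
    VALUES ( p_id, p_name, p_value, p_start );
  END add_version;
END demo_pkg;
/
No mutating-table issue can arise, because no trigger is involved.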
Try something like this:
create or replace trigger parameter_version
before insert
on parameters
for each row
begin
/*Don't care if there's 0 rows updated */
update parameters
set end_date = :new.start_date - 1
where name = :new.name and end_date is null;
:new.end_date := null;
end;
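A quick sanity check, assuming the table contents shown in the question (a hypothetical run, values are illustrative):
INSERT INTO parameters( id, name, value, start_date )
VALUES ( 4, 'Par_A', 40, SYSDATE );
-- row id = 3 should now have end_date = SYSDATE - 1,
-- and the new row id = 4 has a null end_date
Note that a single-row INSERT ... VALUES is exempt from the mutating-table restriction, which is why this trigger can update the same table; a multi-row INSERT ... SELECT into parameters could still raise ORA-04091.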
I must insert two fields into a target table: first the primary key (of the articles) and second their size.
In the source environment I have a table with the primary key (TK Articles) and a concatenation of the sizes in the second field. However, I must insert into the target table the TK Articles and the individual sizes of the articles.
For example,
Source:
ART  SIZE
1 | 28/30
2 | 30/32
3 | Size 10/Size 12/Size 14/Size 16
Target:
ART Size
1 | 28
1 | 30
2 | 30
2 | 32
3 | Size 10
3 | Size 12
3 | Size 14
3 | Size 16
The difficulty is knowing how many '/' are included in the field.
I have written this query:
SELECT ART,
REGEXP_SUBSTR(SIZE,'[^/]+',1,level)
FROM TABLLE
CONNECT BY REGEXP_SUBSTR(SIZE,'[^/]+',1,level) IS NOT NULL;
The SELECT works and displays results in 46 seconds. But the table has 100,000 lines, and the INSERT takes too long and doesn't complete.
Can somebody help me on this point?
Thanks & Regards
Regular expressions are very expensive to compute. If there is a need to process a large number of rows, personally I would go with a pipelined table function:
-- table with 100000 rows
create table Tb_SplitStr(col1, col2) as
select level
, 'Size 10/Size 12/Size 14/Size 14/Size 15/Size 16/Size 17'
from dual
connect by level <= 100000;
PL/SQL package:
create or replace package Split_Pkg as
type T_StrList is table of varchar2(1000);
function Str_Split(
p_str in varchar2,
p_dlm in varchar2
) return T_StrList pipelined;
end;
/
create or replace package body Split_Pkg as
function Str_Split(
p_str in varchar2,
p_dlm in varchar2
) return T_StrList pipelined
is
l_src_str varchar2(1000) default p_str;
l_dlm_pos number;
begin
while l_src_str is not null
loop
l_dlm_pos := instr(l_src_str, p_dlm);
case
when l_dlm_pos = 0
then pipe row (l_src_str);
l_src_str := '';
else pipe row(substr(l_src_str, 1, l_dlm_pos - 1));
l_src_str := substr(l_src_str, l_dlm_pos + 1);
end case;
end loop;
return;
end;
end;
/
SQL Query with regexp functions:
with ocrs(ocr) as(
select level
from ( select max(regexp_count(col2, '[^/]+')) as mx
from tb_splitStr) t
connect by level <= t.mx
)
select count(regexp_substr(s.col2, '[^/]+', 1, o.ocr)) as res
from tb_splitStr s
cross join ocrs o
Result:
-- SQL with regexp
SQL> with ocrs(ocr) as(
2 select level
3 from ( select max(regexp_count(col2, '[^/]+')) as mx
4 from tb_splitStr) t
5 connect by level <= t.mx
6 )
7 select count(regexp_substr(s.col2, '[^/]+', 1, o.ocr)) as res
8 from tb_splitStr s
9 cross join ocrs o
10 ;
Res
------------------------------
700000
Executed in 4.093 seconds
SQL> /
Res
------------------------------
700000
Executed in 3.812 seconds
--Query with pipelined table function
SQL> select count(*)
2 from Tb_SplitStr s
3 cross join table(split_pkg.Str_Split(s.col2, '/'))
4 ;
COUNT(*)
----------
700000
Executed in 2.469 seconds
SQL> /
COUNT(*)
----------
700000
Executed in 2.406 seconds
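For the actual INSERT the question asks about, the pipelined function plugs straight in. A sketch using the demo table above (Tb_Target is an assumed name for the target table):
insert into Tb_Target (art, siz)
select s.col1, t.column_value
from Tb_SplitStr s
cross join table(split_pkg.Str_Split(s.col2, '/')) t;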
This blogpost of mine shows six different techniques to handle this query.
The difference is that it handles dates and you need to handle strings. You can solve this by using regexp_count(size,'/') + 1 as your iteration stopper and regexp_substr(size,'[^/]+',1,i) in the select list.
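A minimal sketch of that adaptation (table and column names follow the question; note SIZE is a reserved word in Oracle, so the column is renamed siz here, and 50 is an assumed upper bound on sizes per row):
SELECT t.art,
       REGEXP_SUBSTR(t.siz, '[^/]+', 1, n.i) AS single_size
FROM   tablle t
JOIN   (SELECT LEVEL AS i FROM dual CONNECT BY LEVEL <= 50) n
  ON   n.i <= REGEXP_COUNT(t.siz, '/') + 1
ORDER  BY t.art, n.i;
The join condition is the "iteration stopper": occurrence n.i is kept only while an i-th element exists.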
How about using some XML?
> set serveroutput on
> drop table test_tab
table TEST_TAB dropped.
> create table test_tab
(
art number,
siz varchar2(100)
)
table TEST_TAB created.
> insert into test_tab values (1, '28/30')
1 rows inserted.
> insert into test_tab values (2, '30/32')
1 rows inserted.
> insert into test_tab values (3, 'Size 10/Size 12/Size 14/Size 14')
1 rows inserted.
> commit
committed.
> drop table test_tab2
table TEST_TAB2 dropped.
> create table test_tab2 as
select * from test_tab where 1=0
table TEST_TAB2 created.
> insert into test_tab2 (art, siz)
select art, extractvalue(x.column_value, 'e')
from test_tab, xmltable ('e' passing xmlparse( content '<e>' || replace(siz, '/', '</e><e>') || '</e>')) x
8 rows inserted.
> commit
committed.
> select * from test_tab2
ART SIZ
---------- ----------------------------------------------------------------------------------------------------
1 28
1 30
2 30
2 32
3 Size 10
3 Size 12
3 Size 14
3 Size 14
8 rows selected
Here it is again, but with 100,000 rows initially, and showing timings. Insert of 400,000 rows took just over 2 minutes:
> set serveroutput on
> set timing on
> drop table test_tab
table TEST_TAB dropped.
Elapsed: 00:00:00.055
> create table test_tab
(
art number,
siz varchar2(100)
)
table TEST_TAB created.
Elapsed: 00:00:00.059
> --insert into test_tab values (1, '28/30');
> --insert into test_tab values (2, '30/32');
> --insert into test_tab values (3, 'Size 10/Size 12/Size 14/Size 14');
> insert into test_tab (art, siz)
select level, 'Size 10/Size 12/Size 14/Size 16'
from dual
connect by level <= 100000
100,000 rows inserted.
Elapsed: 00:00:00.191
> commit
committed.
Elapsed: 00:00:00.079
> drop table test_tab2
table TEST_TAB2 dropped.
Elapsed: 00:00:00.081
> create table test_tab2 as
select * from test_tab where 1=0
table TEST_TAB2 created.
Elapsed: 00:00:00.076
> -- perform inserts. This will result in 400,000 rows inserted
> -- note inserts are done conventionally (timing is acceptable)
> insert into test_tab2 (art, siz)
select art, extractvalue(x.column_value, 'e')
from test_tab, xmltable ('e' passing xmlparse( content '<e>' || replace(siz, '/', '</e><e>') || '</e>')) x
400,000 rows inserted.
Elapsed: 00:02:17.046
> commit
committed.
Elapsed: 00:00:00.094
> -- show some data in target table
> select * from test_tab2
where art = 1
ART SIZ
---------- ----------------------------------------------------------------------------------------------------
1 Size 10
1 Size 12
1 Size 14
1 Size 16
Elapsed: 00:00:00.103