How to print each column separated by dots in PL/SQL / SQL

I am a beginner in PL/SQL. Consider that I have three tables: emp, organization, emp_detail. Refer to the image for the table schema and the result format.
I can get the result by joining these three tables on emp_id, but I don't know how to print the dots (....) in the result.

Here's how. I don't have your tables (and didn't feel like creating them, as you didn't feel like providing a test case yourself), so I used Scott's EMP.
If you don't care about nice alignment, omit the RPAD function call and just concatenate the desired number of dots (see the sketch after the session below).
SQL> set serveroutput on
SQL> begin
  2    for cur_r in (select empno, ename, job from emp where deptno = 10) loop
  3      dbms_output.put_line(cur_r.empno || '.....' ||
  4                           rpad(cur_r.ename, 15, '.') ||
  5                           cur_r.job
  6                          );
  7    end loop;
  8  end;
  9  /
7782.....CLARK..........MANAGER
7839.....KING...........PRESIDENT
7934.....MILLER.........CLERK
PL/SQL procedure successfully completed.
SQL>
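For reference, a minimal sketch of the plain-concatenation variant mentioned above (still against Scott's EMP); without RPAD the columns will not line up:
begin
  for cur_r in (select empno, ename, job from emp where deptno = 10) loop
    -- fixed number of dots between values; no padding, so no alignment
    dbms_output.put_line(cur_r.empno || '.....' || cur_r.ename || '.....' || cur_r.job);
  end loop;
end;
/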

select cast(emp.emp_id as varchar2(5)) || '....' || emp_name || '....' || organisation || '....'
       || cast(salary as varchar2(10))
  from emp
  join organisation on emp.emp_id = organisation.emp_id
  join emp_details on emp.emp_id = emp_details.emp_id;
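If you also want the dot-padded alignment from the previous answer, the same RPAD idea works directly in the query. A sketch, assuming the column names used above (the padding widths are illustrative and this is untested against the actual schema from the question):
select rpad(to_char(emp.emp_id), 8, '.')
       || rpad(emp_name, 15, '.')
       || rpad(organisation, 20, '.')
       || salary
  from emp
  join organisation on emp.emp_id = organisation.emp_id
  join emp_details on emp.emp_id = emp_details.emp_id;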

Related

Oracle SQL/PLSQL: change the type of specific columns at one time

Assume the following table named t1:
create table t1(
  clid number,
  A number,
  B number,
  C number
);
insert into t1 values(1, 1, 1, 1);
insert into t1 values(2, 0, 1, 0);
insert into t1 values(3, 1, 0, 1);

clid  A  B  C
   1  1  1  1
   2  0  1  0
   3  1  0  1
The type of columns A, B, and C is NUMBER. What I need to do is change the types of those columns to VARCHAR, but in a quite tricky way.
In my real table I need to change the datatype of hundreds of columns, so it is not convenient to write a statement like the following hundreds of times:
ALTER TABLE table_name
MODIFY column_name datatype;
What I need instead is to convert all columns to VARCHAR except the CLID column, like we can do in Python or R.
Is there any way to do so in Oracle SQL or PL/SQL?
Appreciate your help.
Here is an example of a procedure that can help.
It accepts two parameters: the name of your table and a list of the columns you do not want to change.
At the beginning there is a cursor that gets all the column names for your table except the ones you do not want to change.
It then loops through those columns and changes them:
CREATE OR REPLACE procedure test_proc(p_tab_name in varchar2,
                                      p_col_names in varchar2)
IS
  v_string varchar2(4000);

  cursor c_tab_cols is
    SELECT column_name
      FROM ALL_TAB_COLS
     WHERE table_name = upper(p_tab_name)
       and column_name not in (select regexp_substr(p_col_names, '[^,]+', 1, level)
                                 from dual
                               connect by regexp_substr(p_col_names, '[^,]+', 1, level) is not null);
begin
  FOR i_record IN c_tab_cols
  loop
    v_string := 'alter table ' || p_tab_name || ' modify '
                || i_record.column_name || ' varchar(30)';
    EXECUTE IMMEDIATE v_string;
  end loop;
end;
/
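A hypothetical call for the T1 table from the question, keeping CLID unchanged (note that ALL_TAB_COLS stores column names in upper case, so the exclusion list must be upper case too):
begin
  -- will raise ORA-01439 if the columns still contain data; see the next answer
  test_proc('t1', 'CLID');
end;
/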
You can also extend this procedure with a parameter for the datatype you want to convert to, and with some more options, I am sure.
Unfortunately, that isn't as simple as you'd want it to be. It is not a problem to write a query which will write the query for you (by querying USER_TAB_COLUMNS), but a column must be empty in order to change its datatype:
SQL> create table t1 (a number);
Table created.
SQL> insert into t1 values (1);
1 row created.
SQL> alter table t1 modify a varchar2(1);
alter table t1 modify a varchar2(1)
*
ERROR at line 1:
ORA-01439: column to be modified must be empty to change datatype
SQL>
If there are hundreds of columns involved, maybe you can't even
create additional columns in the same table (of VARCHAR2 datatype)
move values in there
drop "original" columns
rename "new" columns to "old names"
because there's a limit of 1000 columns per table.
Therefore,
creating a new table (with appropriate columns' datatypes),
moving data over there,
dropping the "original" table
renaming the "new" table to "old name"
is probably what you'll finally do. Note that it won't be necessarily easy either, especially if there are foreign keys involved.
A "query that writes query for you" might look like this (Scott's sample tables):
SQL> SELECT 'insert into dept (deptno, dname, loc) '
  2         || 'select '
  3         || LISTAGG ('to_char(' || column_name || ')', ', ')
  4            WITHIN GROUP (ORDER BY column_name)
  5         || ' from emp'
  6    FROM user_tab_columns
  7   WHERE table_name = 'EMP'
  8     AND COLUMN_ID <= 3
  9  /
insert into dept (deptno, dname, loc) select to_char(EMPNO), to_char(ENAME), to_char(JOB) from emp
SQL>
It'll save you from typing names of hundreds of columns.
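In the same spirit, a sketch of a query that generates a complete CREATE TABLE ... AS SELECT for the T1 table from the question, keeping CLID as it is and converting everything else (the target table name T1_NEW is just an example):
SELECT 'create table t1_new as select clid, '
       || LISTAGG('to_char(' || column_name || ') as ' || column_name, ', ')
          WITHIN GROUP (ORDER BY column_id)
       || ' from t1'
  FROM user_tab_columns
 WHERE table_name = 'T1'
   AND column_name <> 'CLID';
Running the generated statement, dropping T1 and renaming T1_NEW back to T1 completes the steps listed above.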
I think it's not possible to change the data type of a column if values are there.
Empty the column by copying the values to a dummy column, then change the data type.
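For a single column, that idea might look like this (a sketch against the T1 table above; repeat per column, or generate the statements from USER_TAB_COLUMNS as shown earlier):
alter table t1 add (a_tmp varchar2(30));
update t1 set a_tmp = to_char(a);
update t1 set a = null;                      -- the column must be empty before the type change
alter table t1 modify (a varchar2(30));
update t1 set a = a_tmp;
alter table t1 drop column a_tmp;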

Convert a single row with dynamic number of columns into a single column in Oracle

I have to convert a single row that is obtained via select statement into a single column with concatenated values of the individual columns of the result. The problem is that the columns are unknown and can vary in number.
Let's say the table looks similar to this:
Table USER
Name  Surname  Age  Logindate  City
Max   Smith    25   20.05.20   NY
I need to SELECT * FROM USER and convert the result into a single string like Max, Smith, 25, 20.05.20, NY or with column names Name: Max, Surname: Smith, Age: 25, Logindate: 20.05.20, City: NY that I can afterwards insert into a column of other table. The name of the table that I'm selecting from is known and hardcoded into the SELECT statement that is executed inside a stored procedure.
Since the number of columns and the column names are unknown, I cannot use the CONCAT function. I would also have been satisfied with the output format of SELECT JSON_OBJECT(*) FROM USER, but that usage of the star operator is not supported in Oracle 18c (it is in Oracle 19c).
The transformation of column values of a single row into a single string seems like a basic operation, but I wasn't able to find any simple solution.
Use the data dictionary to generate the right SQL statement and then use dynamic SQL to execute it.
--Sample tables for input and output:
create table user_table as
select 'Max' Name, 'Smith' Surname, 25 Age, date '2020-05-20' LoginDate, 'NY' City
from dual;
create table concatenated_values(value varchar2(4000));
--Procedure to read all columns from USER_TABLE and write them to CONCATENATED_VALUES.
create or replace procedure concatenate_values(p_table_name varchar2) is
  v_sql varchar2(4000);
begin
  --Generate a SQL statement to concatenate all the values.
  select
    'select ' ||
    listagg(column_name, '||'',''||') within group (order by column_id) ||
    ' from ' || owner || '.' || table_name
  into v_sql
  from all_tab_columns
  where owner = user
    and table_name = p_table_name
  group by owner, table_name;
  --Run the SQL statement and insert the value.
  execute immediate 'insert into concatenated_values ' || v_sql;
end;
/
--Call the procedure.
begin
concatenate_values('USER_TABLE');
end;
/
--Results:
select * from concatenated_values;
VALUE
-------------------------
Max,Smith,25,20-MAY-20,NY
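If you want the "column name: value" format mentioned in the question, only the generated select list changes. A sketch of the modified SELECT ... INTO v_sql inside the same procedure (the labels will come out in upper case, as column names are stored in the data dictionary):
select
  'select ' ||
  listagg('''' || column_name || ': ''||' || column_name, '||'', ''||')
    within group (order by column_id) ||
  ' from ' || owner || '.' || table_name
into v_sql
from all_tab_columns
where owner = user
  and table_name = p_table_name
group by owner, table_name;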

Comparing Substring Against Multiple Columns

I have a table which has 20 similar text attribute columns, text1..text20. These columns are of type CLOB. I am looking to find rows where one of these text attribute columns contain a specific phrase, such as '%unemployed%'. I need to know 2 things, which rows match, and which column was matched on. I thought I could use ANY as a starting point, but I am having issues.
It appears the ANY statement does NOT work with '%'. For example,
select * from emp where 'BLAKE' = ANY(ename, job); -- Returns Data
but
select * from emp where '%BLAKE%' = ANY(ename, job) -- No Data Found
What would be the proper way to do this? Pseudo-code would be...
Select name, addr,
which_column_matched(%unemployed%, text1..text20),
text1..text20
from table
where %unemployed% = ANY(text1..text20);
In Oracle, you can use unpivot for this. It still requires you to enumerate all the columns, but the syntax is quite neat.
If you want one record for each column that matches:
select *
from emp unpivot (col for src in (text1, text2, text3))
where col like '%unemployed%'
If you wanted one additional column with the list of matching columns instead, you can aggregate the resultset:
select ename, listagg(src, ', ')
from emp unpivot (col for src in (text1, text2, text3))
where col like '%unemployed%'
group by ename
You can use a subquery to identify the first column that matches and then return that:
select t.*
from (select t.*,
(case when text1 like '%unemployed%' then 'text1'
when text2 like '%unemployed%' then 'text2'
. . .
when text20 like '%unemployed%' then 'text20'
end) as col_match
from t
) t
where col_match is not null;
I'm always worried about how Oracle treats CLOB data, so here's a test showing that the unpivot solution should do the trick.
drop table emptest;
-- Assuming we are using the venerable EMP table
create table emptest as select * from emp;
alter table emptest add(
text1 CLOB,
text2 CLOB,
text3 CLOB
)
/
declare
v_text clob;
begin
-- set one column to a length well beyond 16k but below 32k, max VARCHAR2 for PL/SQL
v_text := lpad('X', 16000, 'X')||' unemployed ' || lpad('X', 10000, 'X');
update emptest set text2 = v_text where ename = 'SMITH';
-- set others to short values
v_text := 'an unemployed salesman in text 1';
update emptest set text1 = v_text where ename = 'TURNER';
v_text := 'an unemployed manager in text 3';
update emptest set text3 = v_text where ename = 'JONES';
commit;
end;
/
declare
v_clob clob;
begin
-- Set a field to an absurdly long value, with the match value way beyond 32k.
update emptest set text1 = empty_clob() where ename = 'SMITH' returning text1 into v_clob;
for i in 1..10000 loop
dbms_lob.writeappend(v_clob, 36, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890');
end loop;
dbms_lob.writeappend(v_clob, 18, 'unemployed manager');
commit;
end;
/
select empno, ename, clob_name, clob_value, length(clob_value) clob_length
from emptest unpivot (clob_value for clob_name in (text1, text2, text3))
where clob_value like '%unemployed%'
/
The result of this will be:
EMPNO  ENAME   CLOB_NAME  CLOB_VALUE  CLOB_LENGTH
-----  ------  ---------  ----------  -----------
 7566  JONES   TEXT3      <excluded>           31
 7369  SMITH   TEXT1      <excluded>       360018
 7369  SMITH   TEXT2      <excluded>        26012
 7844  TURNER  TEXT1      <excluded>           32
Of key importance is how Oracle handles the LIKE keyword when dealing with TEXT1 for SMITH: notice that the column has a length of >360k characters.
Much of the standard syntax we try using with CLOB datatypes works only because Oracle coerces the CLOB into a VARCHAR2, but that has the inherent length limitations.
As this test shows, the LIKE comparison does work for fat CLOB values--at least in Oracle 12c, where I tested it.
Things will be somewhat different if you try to display the actual content that matches: you will need to familiarize yourself with the DBMS_LOB package and its subprograms such as DBMS_LOB.INSTR and DBMS_LOB.SUBSTR if you are dealing with long CLOB values.
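For example, a sketch that reports the offset of the first match and a small window of text around it, built on the same EMPTEST data as above:
select empno, ename, clob_name,
       dbms_lob.instr(clob_value, 'unemployed') as match_offset,
       -- 80-character window starting a little before the match (offsets here are illustrative)
       dbms_lob.substr(clob_value, 80,
                       greatest(dbms_lob.instr(clob_value, 'unemployed') - 20, 1)) as context
  from emptest unpivot (clob_value for clob_name in (text1, text2, text3))
 where dbms_lob.instr(clob_value, 'unemployed') > 0
/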

Select records from a table whose name comes from another table

We generate tables dynamically, e.g. T_1, T_2, T_3, etc., and we can get those table names from another table with the following query.
SELECT CONCAT('T_', T_ID) AS T_NAME FROM T_NAMES WHERE T_KEY = 'ABC';
Now I want to get the records from this retrieved table name. What can I do?
I'm doing it like the following, but that doesn't work:
SELECT * FROM (SELECT CONCAT('T_', T_ID) AS T_NAME FROM T_NAMES WHERE T_KEY = 'ABC')
FYI: I'm running two individual queries as of now, though I want to eliminate one, and I cannot follow a cursor/procedure approach due to some limitations.
A procedure which utilizes refcursor seems to be the most appropriate to me. Here's an example:
SQL> -- creating test case (your T_NAMES table and T_1 which looks like Scott's DEPT)
SQL> create table t_names (t_id number, t_key varchar2(3));
Table created.
SQL> insert into t_names values (1, 'ABC');
1 row created.
SQL> create table t_1 as select * from dept;
Table created.
SQL> -- a procedure; accepts KEY and returns refcursor
SQL> create or replace procedure p_test
  2    (par_key in varchar2, par_out out sys_refcursor)
  3  as
  4    l_t_name varchar2(30);
  5  begin
  6    select 'T_' || t_id
  7      into l_t_name
  8      from t_names
  9     where t_key = par_key;
 10
 11    open par_out for 'select * from ' || l_t_name;
 12  end;
 13  /
Procedure created.
OK, let's test it:
SQL> var l_out refcursor
SQL> exec p_test('ABC', :l_out)
PL/SQL procedure successfully completed.
SQL> print l_out
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
SQL>
I would propose dynamic SQL.
First of all, create a cursor that iterates over the dynamic table names. Then use dynamic SQL to build the query and execute it.
For example:
https://livesql.oracle.com/apex/livesql/file/content_C81136WLRFYZF8ION6Q57GWE1.html - detailed cursor example.
https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/dynamic.htm#i13057 - dynamic SQL in Oracle
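A minimal sketch of that idea as an anonymous block, assuming all the dynamic tables share the structure of T_1 from the answer above (so a collection of t_1%rowtype can hold the rows):
declare
  l_t_name varchar2(30);
  type t_rows is table of t_1%rowtype;
  l_rows   t_rows;
begin
  -- look up the dynamic table name
  select 'T_' || t_id
    into l_t_name
    from t_names
   where t_key = 'ABC';

  -- fetch all of its rows with one dynamic statement
  execute immediate 'select * from ' || l_t_name bulk collect into l_rows;

  dbms_output.put_line(l_rows.count || ' rows fetched from ' || l_t_name);
end;
/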

Alternative to using NVL with an IN clause in Oracle 11g

I have a workplace problem to which I am looking for an easy solution.
I am trying to replicate it in a smaller scenario.
Problem in short
I want to use nvl inside an in clause. Currently I have an input string which consists of a name. It is used in a where clause like below
and column_n = nvl(in_parameter,column_n)
Now I want to pass multiple comma-separated values in the same input parameter. But if I replace = with IN and transpose the input comma-separated string into rows, I cannot use NVL with it.
Problem in Detail
Let's consider an employee table emp1.
Emp1
+-------+-------+
| empno | ename |
+-------+-------+
| 7839 | KING |
| 7698 | BLAKE |
| 7782 | CLARK |
+-------+-------+
Now this is a simple version of an existing stored procedure
create or replace procedure emp_poc(in_names IN varchar2)
as
cnt integer;
begin
select count(*)
into cnt
from emp1
where
ename = nvl(in_names,ename); --This is one of the condition where we will make the change.
dbms_output.put_line(cnt);
end;
So this will give the count of the employees passed as the input parameter. But if we pass null, it will return the count of all employees because of the NVL.
So these procedure calls will render the given outputs.
Procedure Call        Output
exec emp_poc('KING')  1
exec emp_poc('JOHN')  0
exec emp_poc(null)    3
Now what I want to achieve is to add another functionality. So exec emp_poc('KING,BLAKE') should give me 2. So I figured a way to split the comma separated string to rows and used that in the procedure.
So I changed the where clause to use IN, as below:
create or replace procedure emp_poc2(in_names IN varchar2)
as
cnt integer;
begin
select count(*)
into cnt
from emp1
where
ename in (select trim(regexp_substr(in_names, '[^,]+', 1, level))
from dual
connect by instr(in_names, ',', 1, level - 1) > 0
);
dbms_output.put_line(cnt);
end;
So exec emp_poc2('KING,BLAKE') gives me 2. But passing null gives 0, whereas I want to get 3, as with emp_poc.
And I cannot use NVL with IN, as it expects the subquery to return a single value.
One way I can think of is rebuilding the whole query in a variable based on the input parameter and then using EXECUTE IMMEDIATE. But I am using some variables to collect values, and it would be difficult to achieve that with EXECUTE IMMEDIATE.
I am again emphasizing that this is a simplified version of a complex procedure which captures many variables, joins many tables, and has multiple AND conditions in the where clause.
Any ideas on how to make this work?
This may help you
CREATE OR REPLACE PROCEDURE emp_poc2(in_names IN varchar2)
AS
cnt integer;
BEGIN
SELECT COUNT(*) INTO cnt
FROM emp1
WHERE
in_names IS NULL
OR ename IN (
SELECT TRIM(REGEXP_SUBSTR(in_names, '[^,]+', 1, level))
FROM dual
CONNECT BY INSTR(in_names, ',', 1, level - 1) > 0
);
dbms_output.put_line(cnt);
END;
Another way could be to use IF ELSE or UNION ALL; a sketch of the IF ELSE variant follows.
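A sketch of that IF ELSE variant (emp_poc3 is just an illustrative name; same emp1 table as above), which gives the optimizer two simpler queries instead of the OR:
create or replace procedure emp_poc3(in_names IN varchar2)
as
  cnt integer;
begin
  if in_names is null then
    -- no filter: count everything
    select count(*) into cnt from emp1;
  else
    -- split the comma-separated list and filter, as in emp_poc2
    select count(*)
      into cnt
      from emp1
     where ename in (select trim(regexp_substr(in_names, '[^,]+', 1, level))
                       from dual
                    connect by instr(in_names, ',', 1, level - 1) > 0);
  end if;
  dbms_output.put_line(cnt);
end;
/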
If your real code is much more complex, then its readability might be greatly enhanced by using a proper collection type instead.
In the example below I have created a user-defined type str_list_t that is a real collection of strings.
I also use a common table expression (CTE) in the SQL query to enhance readability. In this simple example the readability benefits of the CTE are not obvious, but for all non-trivial queries it's a valuable tool.
Test data
create table emp1(empno number, empname varchar2(10));
insert into emp1 values(5437, 'GATES');
insert into emp1 values(5438, 'JOBS');
insert into emp1 values(5439, 'BEZOS');
insert into emp1 values(5440, 'MUSK');
insert into emp1 values(5441, 'CUBAN');
insert into emp1 values(5442, 'HERJAVEC');
commit;
Supporting data type
create or replace type str_list_t is table of varchar2(4000 byte);
/
Subprogram
create or replace function emp_count(p_emps in str_list_t) return number is
v_count number;
v_is_null_container constant number :=
case
when p_emps is null then 1
else 0
end;
begin
-- you can also test for empty collection (that's different thing than a null collection)
with
-- common table expression (CTE) gives you no benefit in this simple example
emps(empname) as (
select * from table(p_emps)
)
select count(*)
into v_count
from emp1
where v_is_null_container = 1
or empname in (select empname from emps)
;
return v_count;
end;
/
show errors
Example run
SQL> select 2 as expected, emp_count(str_list_t('BALLMER', 'CUBAN', 'JOBS')) as emp_count from dual
union all
select 0, emp_count(str_list_t()) from dual
union all
select 6, emp_count(null) from dual
;
  EXPECTED  EMP_COUNT
---------- ----------
         2          2
         0          0
         6          6