SQL - Combining information with similar primary key/ID

I know this question was asked before but the solution given for that question did not work for me. I couldn't write a comment stating my problem as I needed 50 reputation.
Details:
Oracle version 10g
Solution Provided:
select book_id,
'Total Hours of Loan: ' || to_char(sum(hours_of_loan)) || chr(10) ||
lpad('-', 30, '-') || chr(10) ||
xmlagg(xmlelement(x, descr, chr(10) || lpad('-', 30, '-') || chr(10)).extract('//text()')
order by date_of_loan) as book_description
from (
select book_id, date_of_loan, hours_of_loan,
'Written by: ' || checked_by || chr(10) ||
'Date of Loan : ' || to_char(date_of_loan, 'mm/dd/yy') || chr(10) ||
'Hour(s) of Loan: ' || to_char(hours_of_loan) || chr(10) ||
'Comments: ' || comments || chr(10) ||
'Student: ' || student as descr
from student_book
)
group by book_id
order by book_id
;
I created a dummy table with fake data to test the logic, and it worked. But when I ran the query against my real data, it raised this error:
ORA-22813: operand value exceeds system limits
After reading about it, I found that this happens because my data is too large.
I then tried creating a summary table and a PL/SQL block to populate it:
DECLARE
  sSUM_TEXT CLOB;
  sBOOK_ID  VARCHAR2(10) := '1';
  nCount    NUMBER := 1;
BEGIN
  FOR rRECORDRec IN (SELECT BOOK_ID, CHECKED_BY, DATE_OF_LOAN, HOURS_OF_LOAN, COMMENTS, STUDENT
                       FROM STUDENT_BOOK
                      WHERE BOOK_ID = '1')
  LOOP
    IF nCount != 1 THEN
      sSUM_TEXT := sSUM_TEXT || chr(10) ||
                   'Written by: ' || rRECORDRec.checked_by || chr(10) ||
                   'Date of Loan : ' || rRECORDRec.date_of_loan || chr(10) ||
                   'Hour(s) of Loan: ' || rRECORDRec.hours_of_loan || chr(10) ||
                   'Comments: ' || rRECORDRec.comments || chr(10) ||
                   'Student: ' || rRECORDRec.student || chr(10) ||
                   '--------------------------------';
    ELSE
      sSUM_TEXT := 'Written by: ' || rRECORDRec.checked_by || chr(10) ||
                   'Date of Loan : ' || rRECORDRec.date_of_loan || chr(10) ||
                   'Hour(s) of Loan: ' || rRECORDRec.hours_of_loan || chr(10) ||
                   'Comments: ' || rRECORDRec.comments || chr(10) ||
                   'Student: ' || rRECORDRec.student || chr(10) ||
                   '--------------------------------';
    END IF;
    nCount := nCount + 1;
  END LOOP;
  INSERT INTO COMMENTS_SUMMARY (BOOK_ID, SUM_TEXT)
  VALUES (sBOOK_ID, sSUM_TEXT);
  COMMIT;
END;
This did not really solve my problem: it combined all the "COMMENTS" for that specific BOOK_ID, but without the formatting I need.
Tried Solution
select book_id,
'Total Hours of Loan: ' || to_char(sum(hours_of_loan)) || chr(10) ||
lpad('-', 30, '-') || chr(10) ||
xmlagg(xmlelement(x, descr, chr(10) || lpad('-', 30, '-') || chr(10)).extract('//text()')
order by date_of_loan) as book_description
from (
select book_id, date_of_loan, hours_of_loan,
to_clob('Written by: ' || checked_by || chr(10) ||
'Date of Loan : ' || date_of_loan || chr(10) ||
'Hour(s) of Loan: ' || hours_of_loan || chr(10) ||
'Comments: ' || comments || chr(10) ||
'Student: ' || student) as descr
from student_book
)
group by book_id
order by book_id;
The same error was raised.
2nd SELECT Statement
I tried running the queries separately. I ran the 2nd SELECT statement against my real data and no error was raised, even without the TO_CLOB:
select book_id, date_of_loan, hours_of_loan,
'Written by: ' || checked_by || chr(10) ||
'Date of Loan : ' || date_of_loan || chr(10) ||
'Hour(s) of Loan: ' || hours_of_loan || chr(10) ||
'Comments: ' || comments || chr(10) ||
'Student: ' || student as descr
from student_book
order by book_id;
My Question:
How can I overcome this problem and group the data based on their ID?

There are probably at least a few books with so many loans that the string built by XMLAGG exceeds 4000 characters, so you have to deal with CLOBs.
You need to add .getclobval() here:
... order by date_of_loan).getclobval() as book_description
It is also possible that even the text for a single "loan" is already too long (more than 4000 characters). This is a problem for the concatenation: if all the inputs are VARCHAR2, Oracle treats the result as VARCHAR2 as well (limited to 4000 characters). To "force" it to treat the whole expression as a CLOB, it is enough to make the first input a CLOB.
Instead of 'Written by: ' (the first literal in the inner query) use
... to_clob('Written by: ')
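Putting both changes together, the full query would look roughly like this (a sketch based on the query and columns shown above):
select book_id,
       'Total Hours of Loan: ' || to_char(sum(hours_of_loan)) || chr(10) ||
       lpad('-', 30, '-') || chr(10) ||
       xmlagg(xmlelement(x, descr, chr(10) || lpad('-', 30, '-') || chr(10)).extract('//text()')
              order by date_of_loan).getclobval() as book_description
from (
  select book_id, date_of_loan, hours_of_loan,
         to_clob('Written by: ') || checked_by || chr(10) ||
         'Date of Loan : ' || to_char(date_of_loan, 'mm/dd/yy') || chr(10) ||
         'Hour(s) of Loan: ' || to_char(hours_of_loan) || chr(10) ||
         'Comments: ' || comments || chr(10) ||
         'Student: ' || student as descr
  from student_book
)
group by book_id
order by book_id;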

How do I make a select statement for the following question?

Sorry for asking a simple question but how do you create this Select statement and what is your thought process?
I've listed the following tables that I think are needed for this select statement.
Question: Create a SELECT statement to display the number of times each service was requested by each staff member. Display the staff id, staff name, service_no, service description and the total number of times requested.
Table: SERVICES
Column Name || Constraints || Default Value || Data Type || Length
SERVICE_NO || Primary Key || || VarChar2 || 10
DESCRIPTION || || || VarChar2 || 50
CONTRACTOR || || || VarChar2 || 20
CCONTACT_NO || || || VarChar2 || 10
Table: SERVICE_REQUEST
Column Name || Constraints || Default Value || Data Type || Length
SR_ID || Primary Key || || Number || 10
SERVICE_NO || Foreign Key to the SERVICES table || || VarChar2 || 10
STAFF_ID || Foreign Key to the STAFF table || || VarChar2 || 10
TEC_ID || Foreign Key to the TECHNICIANS table || || VarChar2 || 10
REQUEST_DATE || || || Date ||
REQUEST_TIME || || || VarChar2 || 10
Table: STAFF
Column Name || Constraints || Default Value || Data Type || Length
STAFF_ID || Primary key || || VarChar2 || 10
SNAME || || || VarChar2 || 30
SIC_NO || Secondary key || || VarChar2 || 10
SADDRESS || || || VarChar2 || 70
SPHONE || || || VarChar2 || 8
POSITION || || || VarChar2 || 30
HIRE_DATE || || || Date ||
SALARY || || || Number || 7,2
SCH_ID || Foreign Key to the SCHOOL table || || VarChar2 || 10
Use this query:
select s.staff_id,
       s.sname,
       ser.service_no,
       ser.description,
       count(*) as times_requested
from staff s, service_request sr, services ser
where s.staff_id = sr.staff_id
and ser.service_no = sr.service_no
group by s.staff_id, s.sname, ser.service_no, ser.description;
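The same query with explicit ANSI JOIN syntax, which some find easier to read (a sketch using the table and column names listed in the question):
select s.staff_id,
       s.sname,
       ser.service_no,
       ser.description,
       count(*) as times_requested
from staff s
join service_request sr on sr.staff_id = s.staff_id
join services ser on ser.service_no = sr.service_no
group by s.staff_id, s.sname, ser.service_no, ser.description;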

error signaled in parallel query server P000 ORA-01722: invalid number

I am getting the error:
error signaled in parallel query server P000
ORA-01722: invalid number
while executing the code below.
There is a cursor that fetches the values from the table:
/* VALIDATE MANDATORY CFA FIELDS START */
CURSOR c_cfa_fields IS
select storage_col_name,
view_col_name
from cfa_attrib
where group_id = 150
and value_req = 'Y'
order by display_seq;
---
TYPE cfa_fields_tbl IS TABLE OF c_cfa_fields%ROWTYPE;
cfa_fields_rec cfa_fields_tbl;
/* VALIDATE MANDATORY CFA FIELDS END */
I then fetch the cursor:
OPEN c_cfa_fields;
FETCH c_cfa_fields BULK COLLECT INTO cfa_fields_rec;
CLOSE c_cfa_fields;
---
IF cfa_fields_rec.COUNT > 0 THEN -- Skip if no mandatory CFA Field exists for the GROUP_ID
FOR i IN 1 .. cfa_fields_rec.COUNT
LOOP
L_val_query := 'INSERT /*+ APPEND NOLOGGING PARALLEL */ INTO dmf_val_error_log' || CHR(10) ||
'SELECT /*+ PARALLEL (s,50) */ ' || CHR(10) ||
' ''' || I_TABLE_NAME || ''', -- error_table ' || CHR(10) ||
' ''STORE='' || s.store || ''; GROUP_ID='' || s.group_id, -- pk_value' || CHR(10) ||
' ''' || cfa_fields_rec(i).storage_col_name || ''', -- error_column' || CHR(10) ||
' ''' || cfa_fields_rec(i).storage_col_name || '='' || ' ||
'NVL(s.'|| cfa_fields_rec(i).storage_col_name || ',''(null)''), -- error_value' || CHR(10) ||
' ''' || cfa_fields_rec(i).view_col_name || ' (' ||
cfa_fields_rec(i).storage_col_name || ') cannot be NULL'', -- error_desc' || CHR(10) ||
' SYSDATE -- error_datetime' || CHR(10) ||
' FROM ' || I_TABLE_NAME || ' s' || CHR(10) ||
' WHERE ' || cfa_fields_rec(i).storage_col_name || ' IS NULL';
---
EXECUTE IMMEDIATE L_val_query;
One thing I have noticed: if the data type of storage_col_name is NUMBER, I get the error; it works fine when storage_col_name is a VARCHAR2 column.
STORAGE_COL_NAME VIEW_COL_NAME
VARCHAR2_1 LATITUDE
VARCHAR2_2 LONGITUDE
VARCHAR2_3 IS_CROSS_DOCK
VARCHAR2_4 CLEARANCE_STORE
VARCHAR2_5 CLIMATE
VARCHAR2_6 DEMOGRAPHY
NUMBER_11 DEFAULT_WH
When errors occur within dynamically generated code, it's worth checking what is actually being generated and testing it.
Test to reproduce your situation:
declare
cursor c_cfa_fields is
with cfa_attrib (storage_col_name, view_col_name) as
( select 'VARCHAR2_1', 'LATITUDE' from dual union all
select 'VARCHAR2_2', 'LONGITUDE' from dual union all
select 'VARCHAR2_3', 'IS_CROSS_DOCK' from dual union all
select 'VARCHAR2_4', 'CLEARANCE_STORE' from dual union all
select 'VARCHAR2_5', 'CLIMATE' from dual union all
select 'VARCHAR2_6', 'DEMOGRAPHY' from dual union all
select 'NUMBER_11', 'DEFAULT_WH' from dual )
select storage_col_name, view_col_name
from cfa_attrib;
type cfa_fields_tbl is table of c_cfa_fields%rowtype;
cfa_fields_rec cfa_fields_tbl;
l_val_query long;
i_table_name varchar2(30) := 'SAMPLE_TABLE_NAME';
begin
open c_cfa_fields;
fetch c_cfa_fields bulk collect into cfa_fields_rec;
close c_cfa_fields;
if cfa_fields_rec.count > 0 then -- skip if no mandatory cfa field exists for the group_id
for i in 1 .. cfa_fields_rec.count loop
l_val_query := 'INSERT /*+ APPEND PARALLEL */ INTO dmf_val_error_log' || chr(10) ||
'SELECT /*+ PARALLEL (s,50) */ ' || chr(10) ||
' ''' || i_table_name || ''', -- error_table ' || chr(10) ||
' ''STORE='' || s.store || ''; GROUP_ID='' || s.group_id, -- pk_value' || chr(10) ||
' ''' || cfa_fields_rec(i).storage_col_name || ''', -- error_column' || chr(10) ||
' ''' || cfa_fields_rec(i).storage_col_name || '='' || ' ||
'NVL(s.'|| cfa_fields_rec(i).storage_col_name || ',''(null)''), -- error_value' || chr(10) ||
' ''' || cfa_fields_rec(i).view_col_name || ' (' ||
cfa_fields_rec(i).storage_col_name || ') cannot be NULL'', -- error_desc' || chr(10) ||
' SYSDATE -- error_datetime' || chr(10) ||
'FROM ' || i_table_name || ' s' || chr(10) ||
'WHERE ' || cfa_fields_rec(i).storage_col_name || ' IS NULL';
dbms_output.put_line(l_val_query);
dbms_output.new_line();
end loop;
end if;
end;
I've used a with clause to generate your sample data, rather than reading an actual table, and I just print the generated INSERTs rather than executing them (I don't know what your dmf_val_error_log looks like, or the table you are dynamically querying in the generated code).
(I took out the word NOLOGGING from your hint. If you want dmf_val_error_log to be nologging, then use alter table dmf_val_error_log nologging. There's no hint for it.)
The output is a series of INSERTs like this:
INSERT /*+ APPEND PARALLEL */ INTO dmf_val_error_log
SELECT /*+ PARALLEL (s,50) */
'SAMPLE_TABLE_NAME', -- error_table
'STORE=' || s.store || '; GROUP_ID=' || s.group_id, -- pk_value
'VARCHAR2_1', -- error_column
'VARCHAR2_1=' || NVL(s.VARCHAR2_1,'(null)'), -- error_value
'LATITUDE (VARCHAR2_1) cannot be NULL', -- error_desc
SYSDATE -- error_datetime
FROM SAMPLE_TABLE_NAME s
WHERE VARCHAR2_1 IS NULL
Substituting dummy data via a WITH clause, you can test how that will behave with different datatypes.
If the VARCHAR2_1 column is a string, it works:
with sample_table_name(store, group_id, varchar2_1) as
( select 'London', 123, cast(null as varchar2(1)) from dual )
select 'SAMPLE_TABLE_NAME', -- error_table
'STORE=' || s.store || '; GROUP_ID=' || s.group_id, -- pk_value
'VARCHAR2_1', -- error_column
'VARCHAR2_1=' || nvl(s.varchar2_1,'(null)'), -- error_value
'LATITUDE (VARCHAR2_1) cannot be NULL', -- error_desc
sysdate -- error_datetime
from sample_table_name s
where varchar2_1 is null;
If it's a number, it fails:
with sample_table_name(store, group_id, varchar2_1) as
( select 'London', 123, cast(null as number) from dual )
select 'SAMPLE_TABLE_NAME', -- error_table
'STORE=' || s.store || '; GROUP_ID=' || s.group_id, -- pk_value
'VARCHAR2_1', -- error_column
'VARCHAR2_1=' || nvl(s.varchar2_1,'(null)'), -- error_value
'LATITUDE (VARCHAR2_1) cannot be NULL', -- error_desc
sysdate -- error_datetime
from sample_table_name s
where varchar2_1 is null;
ERROR at line 6:
ORA-01722: invalid number
Essentially, it is attempting something like this:
select nvl(1, '(null)')
from dual;
when it would need to be:
select nvl(to_char(1), '(null)')
from dual;
Applying that back to your code, it should be
l_val_query := 'INSERT /*+ APPEND PARALLEL */ INTO dmf_val_error_log' || chr(10) ||
'SELECT /*+ PARALLEL (s,50) */ ' || chr(10) ||
' ''' || i_table_name || ''', -- error_table ' || chr(10) ||
' ''STORE='' || s.store || ''; GROUP_ID='' || s.group_id, -- pk_value' || chr(10) ||
' ''' || cfa_fields_rec(i).storage_col_name || ''', -- error_column' || chr(10) ||
' ''' || cfa_fields_rec(i).storage_col_name || '='' || ' ||
'NVL(to_char(s.'|| cfa_fields_rec(i).storage_col_name || '),''(null)''), -- error_value' || chr(10) ||
' ''' || cfa_fields_rec(i).view_col_name || ' (' ||
cfa_fields_rec(i).storage_col_name || ') cannot be NULL'', -- error_desc' || chr(10) ||
' SYSDATE -- error_datetime' || chr(10) ||
'FROM ' || i_table_name || ' s' || chr(10) ||
'WHERE ' || cfa_fields_rec(i).storage_col_name || ' IS NULL';
which generates INSERT statements like this:
INSERT /*+ APPEND PARALLEL */ INTO dmf_val_error_log
SELECT /*+ PARALLEL (s,50) */
'SAMPLE_TABLE_NAME', -- error_table
'STORE=' || s.store || '; GROUP_ID=' || s.group_id, -- pk_value
'VARCHAR2_1', -- error_column
'VARCHAR2_1=' || NVL(to_char(s.VARCHAR2_1),'(null)'), -- error_value
'LATITUDE (VARCHAR2_1) cannot be NULL', -- error_desc
SYSDATE -- error_datetime
FROM SAMPLE_TABLE_NAME s
WHERE VARCHAR2_1 IS NULL

How to quickly see what columns in a table have data?

We are currently undertaking a testing phase which requires us to see if there is any data in each column for each table. Now, the route that is long and labour-intensive is:
SELECT COUNT(Col1), COUNT(Col2)...FROM TABLE
Is there any easier way to do this? We can go down this route by concatenating each column name from our data lineage document with the COUNT() function, but we have a lot of tables and a lot of columns in each table, making this a bit unfeasible.
Essentially we just need a count of records in each column for each table, without having to write long COUNT(Col) queries.
Thanks
This query will return accurate results if the table statistics were recently gathered with the default value for ESTIMATE_PERCENT:
SELECT utab.table_name
, tcol.column_name
, utab.num_rows
from user_tables utab,
user_tab_cols tcol
where utab.table_name = tcol.table_name
and utab.num_rows > 0
and utab.num_rows = tcol.num_nulls;
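If the statistics are stale, refresh them first with DBMS_STATS, for example (a sketch; MY_SCHEMA is a placeholder for your schema name):
BEGIN
  -- gather statistics for every table in the schema, using the default (AUTO) sample size
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA');
END;
/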
You could use a query against the data dictionary to build the COUNT queries. This generates one query per column:
SELECT 'SELECT COUNT(' || t.column_name || ') FROM ' || t.owner || '.' || t.table_name || ';' FROM dba_tab_columns t
You can generate all the select statements like so:
SELECT CASE
         WHEN column_id = 1 AND column_id_desc != 1 THEN
           'SELECT ''' || LOWER(owner) || '.' || LOWER(table_name) || ''' table_name, ' || CHR(10) ||
           'COUNT(' || LOWER(column_name) || ') ' || SUBSTR(LOWER(column_name), 1, 26) || '_cnt,'
         WHEN column_id = 1 AND column_id_desc = 1 THEN
           'SELECT ''' || LOWER(owner) || '.' || LOWER(table_name) || ''' table_name, ' || CHR(10) ||
           'COUNT(' || LOWER(column_name) || ') ' || SUBSTR(LOWER(column_name), 1, 26) || '_cnt FROM ' ||
           LOWER(owner) || '.' || LOWER(table_name) || ';'
         WHEN column_id_desc = 1 THEN
           ' COUNT(' || LOWER(column_name) || ') ' || SUBSTR(LOWER(column_name), 1, 26) || '_cnt' || CHR(10) ||
           'FROM ' || LOWER(owner) || '.' || LOWER(table_name) || ';'
         ELSE
           ' COUNT(' || LOWER(column_name) || ') ' || SUBSTR(LOWER(column_name), 1, 26) || '_cnt,'
       END sql_text
FROM (SELECT owner,
             table_name,
             column_name,
             column_id,
             row_number() OVER (PARTITION BY owner, table_name ORDER BY column_id DESC) column_id_desc
      FROM all_tab_columns)
WHERE <predicates to filter on the tables you're interested in>
ORDER BY owner,
         table_name,
         column_id;
This goes through all the tables you're interested in plus their columns and outputs text that will, when taken together, form a select statement for each table.
The text that is output in the sql_text column depends on whether the column in the list is the first or last (or both!); this way you get the full statement which queries each table once, rather than one per table and column.
You can then copy and paste the results and run that as a script.
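For example, for a hypothetical two-column table my_schema.orders with columns order_id and status, the generated text would read:
SELECT 'my_schema.orders' table_name,
COUNT(order_id) order_id_cnt,
 COUNT(status) status_cnt
FROM my_schema.orders;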
This can help you:
SELECT
a.table_name,
a.column_name
FROM
ALL_TAB_COLUMNS a
WHERE owner = '<your user>'
AND a.SAMPLE_SIZE = a.NUM_NULLS

Oracle SQL: Alternative to aggregate large texts (when exceeding Listagg limit)

I have a simple Select query that aggregates one column containing large texts.
The following worked for me with small texts but I am now exceeding the Listagg character limit (4000 bytes ?).
I am very new to Oracle and couldn't find a proper solution for this online that I could apply here.
Can someone tell me the best alternative to this ?
My Query (simplified):
SELECT
m.S_ID AS SID
, LISTAGG
(
'ITEM NO.: ' || m.ITEM ||
' -nl-ARTICLE: ' || a.ARTICLE ||
' -nl-NET: ' || m.NET ||
' -nl-TAX: ' || NVL(m.TAX, 0) ||
' -nl-GROSS: ' || (m.NET + m.TAX),
' -nl--nl-'
) WITHIN GROUP (ORDER BY m.S_ID) AS Details
/* ... */
FROM
myTable m
/* ... */
Many thanks for any help with this,
Mike
One possible method:
select m.S_ID,
       xmlagg(xmlelement(x,
                'ITEM NO.: ' || m.ITEM ||
                ' -nl-ARTICLE: ' || a.ARTICLE ||
                ' -nl-NET: ' || m.NET ||
                ' -nl-TAX: ' || NVL(m.TAX, 0) ||
                ' -nl-GROSS: ' || (m.NET + m.TAX),
                ' -nl--nl-'   -- separator, appended after each element
              ).extract('//text()')
              order by m.S_ID
       ).getClobVal() as Details
from myTable m
/* ... other joins as in your query ... */
group by m.S_ID;
2nd method:
Oracle also allows you to create your own user-defined aggregate function (see the sketch below).
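A minimal sketch of such a CLOB-concatenating aggregate, using the ODCIAggregate interface (the names clobagg_type and clobagg are made up for illustration; they are not built-ins):
create or replace type clobagg_type as object (
  total clob,
  static function ODCIAggregateInitialize(sctx in out clobagg_type) return number,
  member function ODCIAggregateIterate(self in out clobagg_type, value in clob) return number,
  member function ODCIAggregateTerminate(self in clobagg_type, returnvalue out clob, flags in number) return number,
  member function ODCIAggregateMerge(self in out clobagg_type, ctx2 in clobagg_type) return number
);
/
create or replace type body clobagg_type is
  static function ODCIAggregateInitialize(sctx in out clobagg_type) return number is
  begin
    sctx := clobagg_type(null);              -- start with an empty accumulator
    return ODCIConst.Success;
  end;
  member function ODCIAggregateIterate(self in out clobagg_type, value in clob) return number is
  begin
    self.total := self.total || value;       -- append each row's value
    return ODCIConst.Success;
  end;
  member function ODCIAggregateTerminate(self in clobagg_type, returnvalue out clob, flags in number) return number is
  begin
    returnvalue := self.total;               -- hand back the accumulated CLOB
    return ODCIConst.Success;
  end;
  member function ODCIAggregateMerge(self in out clobagg_type, ctx2 in clobagg_type) return number is
  begin
    self.total := self.total || ctx2.total;  -- combine results from parallel execution
    return ODCIConst.Success;
  end;
end;
/
create or replace function clobagg(input clob) return clob
  parallel_enable aggregate using clobagg_type;
/
It would then be used like the LISTAGG call in the question, e.g. clobagg('ITEM NO.: ' || m.ITEM || ... || ' -nl--nl-') with GROUP BY m.S_ID, returning a CLOB per group. Note that, unlike LISTAGG, a simple aggregate like this does not guarantee the order of the concatenated elements.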

Trying To Select A Foreign Key Column Using Concatenation Operator

I have been trying to select all columns from one table into a single column. The problem is I can't seem to get the foreign key "department_id" to show up. If I run this code:
select employee_id || ',' || ' ' || last_name || ',' || ' ' || job_id || ',' || ' ' ||TO_CHAR(hire_date, 'DD Mon YY' ) || ',' || ' '|| department_id
AS "The_Output"
FROM employee;
The information from the "department_id" simply doesn't show. While this code:
select employee_id || ',' || ' ' || last_name || ',' || ' ' || job_id || ',' || ' ' ||TO_CHAR(hire_date, 'DD Mon YY' ) || ',' || ' '|| department_id
AS "The_Output"
FROM employee, department;
gives me this error:
ORA-00918: column ambiguously defined
I have tried to UNION them but that didn't work
That is because department_id exists in both the employee and department tables, so use a table alias when selecting it.
Also, your query is missing the join condition, so you will get a Cartesian product of the two tables. Use something like the query below.
SELECT E.EMPLOYEE_ID || ',' || ' ' || E.LAST_NAME || ',' || ' ' || E.JOB_ID || ',' || ' ' ||TO_CHAR(E.HIRE_DATE, 'DD Mon YY' ) || ',' || ' '|| E.DEPARTMENT_ID
AS "The_Output"
FROM EMPLOYEE E INNER JOIN DEPARTMENT D
ON E.DEPARTMENT_ID=D.DEPARTMENT_ID