Datatype in sql developer is opaque - sql

So I am debugging a part of my code. I created a datatype called ARRAY2D, which is a table of NUMBER_ARR, which is a table of NUMBER.
create or replace type NUMBER_ARR is table of NUMBER;
create or replace TYPE ARRAY2D AS TABLE OF NUMBER_ARR;
I compiled both types for debug, and after doing so I am able to see each row of the table, but not the individual elements stored in it. The element's type is listed as opaque. I had the same for the whole table before I compiled the datatype for debug.

create or replace type NUMBER_ARR is table of NUMBER;
/
create or replace TYPE ARRAY2D AS TABLE OF NUMBER_ARR;
/
create table t (id integer primary key, n array2d)
nested table n store as n_a(nested table column_value store as n_c);
insert into t values(1, array2d(number_arr(1,2), number_arr(3,4)));
insert into t values(2, array2d(number_arr(1,4)));
Select * from t displays:
ID N
----------------------------
1 [unsupported data type]
2 [unsupported data type]
Now, to display the values, you have to convert each element into a table with the TABLE() function and join, like so:
select t1.id, t3.column_value value
from t t1, table(t1.n) t2, table(t2.column_value) t3;
ID value
----------------
1 1
1 2
1 3
1 4
2 1
2 4
To display all the values per ID, you may use LISTAGG:
select t1.id,
listagg(t3.column_value, ', ')
within group (order by t3.column_value) value
from t t1, table(t1.n) t2, table(t2.column_value) t3
group by t1.id;
ID value
----------------
1 1, 2, 3, 4
2 1, 4
Of course, this displays all the elements of the 2D array as a single string of numbers.
The proper display (i.e. each array inside its own brackets, like ((1,2,3,4),(4,5,6))) is not possible with your current type definitions.
This doesn't work:
select id, '(' || listagg(value, ',')
within group (order by value) || ')' from
(select t1.id, t2.column_value c, '(' || listagg(t3.column_value, ',')
within group (order by t3.column_value) ||')' value
from t t1, table(t1.n) t2, table(t2.column_value) t3
group by t1.id, c)
group by id;
To make a query like the one above work, you have to define an object type and give it MAP or ORDER member functions, so Oracle can compare and group the collection values.
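A minimal sketch of that idea (my own names; NUMBER_LIST_OBJ and its MAP logic are illustrative, not part of the types defined above):
create or replace type NUMBER_LIST_OBJ as object (
  arr NUMBER_ARR,
  -- a MAP member function lets Oracle compare, sort and group instances
  -- by reducing each one to a scalar value
  map member function sort_key return varchar2
);
/
create or replace type body NUMBER_LIST_OBJ as
  map member function sort_key return varchar2 is
    s varchar2(4000);
  begin
    -- build a canonical '1,2,3' style string from the elements
    for i in 1 .. arr.count loop
      s := s || case when i > 1 then ',' end || arr(i);
    end loop;
    return s;
  end;
end;
/
Columns of this object type can then be compared, grouped and ordered like scalars.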
Lastly, I want to say you should stick with regular datatypes, as they are much more efficient.
P.S. - If there is no unique key, then you have to use rownum or rowid.
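For example, a sketch of the same unnesting keyed by ROWID instead of ID (same table t as above; ROWID simply stands in for the missing key):
select t1.rowid rid, t3.column_value value
from t t1, table(t1.n) t2, table(t2.column_value) t3;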

Related

How to count how many times a tag appear inside a CLOB PLSQL oracle 11g

I have a CLOB column in a table. In this CLOB I have XML, and I want to count how many times a tag appears inside it.
For example:
<TPQ>
<LTP>N</LTP>
<SUBLTP>N</SUBLTP>
<TIMES>446</TIMES>
<TIMES>321</TIMES>
<TIMES>546</TIMES>
<TIMES>547</TIMES>
<LTP>N</LTP>
<LTP2>N</LTP2>
<SUBLTP>N</SUBLTP>
<NODES>1</NODES>
<NODES>2</NODES>
<NODES>3</NODES>
<NODES>4</NODES>
<SUBLTP>H</SUBLTP>
<SUBLTP3>A</SUBLTP3>
<SUBLTP2>N</SUBLTP2>
<LTP2>N</LTP2>
</TPQ>
I want to know that the tag "TIMES" appears 4 times, and the tag "NODES" appears 4 times.
I'm using this query to get all the TIMES tags, but I need to know how to count them:
SELECT EXTRACT(xmltype.createxml(T.columnCLOB), '//TPQ/LTP/TIMES').getStringVal()
FROM table1 T;
and the result is this:
Result of the Select Statement is
<TIMES>446</TIMES><TIMES>321</TIMES><TIMES>546</TIMES><TIMES>547</TIMES>
This is an example; I need a solution for a dynamic CLOB column that can have any number of tags inside, not always with the same structure. But I only need to know how many times a specified tag appears.
You can use:
SELECT t.id,
x.tag_name,
COUNT(*)
FROM table_name t
CROSS JOIN XMLTABLE(
'//*'
PASSING XMLTYPE(t.xml)
COLUMNS
tag_name varchar2(100) path 'name()'
) x
GROUP BY t.id, x.tag_name
Which, for the sample data:
CREATE TABLE table_name (id NUMBER, xml CLOB);
INSERT INTO table_name (id, xml)
VALUES (1, '<TPQ>
<LTP>N</LTP>
<SUBLTP>N</SUBLTP>
<TIMES>446</TIMES>
<TIMES>321</TIMES>
<TIMES>546</TIMES>
<TIMES>547</TIMES>
<LTP>N</LTP>
<LTP2>N</LTP2>
<SUBLTP>N</SUBLTP>
<NODES>1</NODES>
<NODES>2</NODES>
<NODES>3</NODES>
<NODES>4</NODES>
<SUBLTP>H</SUBLTP>
<SUBLTP3>A</SUBLTP3>
<SUBLTP2>N</SUBLTP2>
<LTP2>N</LTP2>
</TPQ>');
Outputs:
ID  TAG_NAME  COUNT(*)
------------------------
1   LTP       2
1   LTP2      2
1   SUBLTP2   1
1   NODES     4
1   TPQ       1
1   SUBLTP    3
1   TIMES     4
1   SUBLTP3   1
If you only want a specific tag name and want to aggregate the tags' contents then:
SELECT t.id,
x.tag_name,
COUNT(*),
LISTAGG(x.value, ',') WITHIN GROUP (ORDER BY value) AS contents
FROM table_name t
CROSS JOIN XMLTABLE(
'//TIMES'
PASSING XMLTYPE(t.xml)
COLUMNS
tag_name VARCHAR2(100) PATH 'name()',
value VARCHAR2(4000) PATH 'text()'
) x
GROUP BY t.id, x.tag_name
Which outputs:
ID  TAG_NAME  COUNT(*)  CONTENTS
---------------------------------
1   TIMES     4         321,446,546,547
XPath functions can be used:
with
x as
(select xmltype('<TPQ><LTP>N</LTP><SUBLTP>N</SUBLTP>
<TIMES>446</TIMES><TIMES>321</TIMES><TIMES>546</TIMES><TIMES>547</TIMES>
<LTP>N</LTP><LTP2>N</LTP2><SUBLTP>N</SUBLTP>
<NODES>1</NODES><NODES>2</NODES><NODES>3</NODES><NODES>4</NODES>
<SUBLTP>H</SUBLTP><SUBLTP3>A</SUBLTP3><SUBLTP2>N</SUBLTP2><LTP2>N</LTP2></TPQ>') xval
from dual)
select z.*
from x, xmltable ('count(/TPQ/TIMES)' passing x.xval) z;
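If you want the count projected as a plain number rather than an XMLTYPE, a small variant works (the cnt column name is my addition):
with
x as
(select xmltype('<TPQ><TIMES>446</TIMES><TIMES>321</TIMES></TPQ>') xval
from dual)
select z.cnt
from x, xmltable ('count(/TPQ/TIMES)' passing x.xval
                  columns cnt number path '.') z;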

Redshift comma separated string to column [duplicate]

I am wondering how to convert comma-delimited values into rows in Redshift. I am afraid that my own solution isn't optimal. Please advise. I have a table where one of the columns contains comma-separated values. For example:
I have:
user_id|user_name|user_action
-----------------------------
1 | Shone | start,stop,cancell...
I would like to see
user_id|user_name|parsed_action
-------------------------------
1 | Shone | start
1 | Shone | stop
1 | Shone | cancell
....
A slight improvement over the existing answer is to use a second "numbers" table that enumerates all of the possible list lengths and then use a cross join to make the query more compact.
Redshift does not have a straightforward method for creating a numbers table that I am aware of, but we can use a bit of a hack from https://www.periscope.io/blog/generate-series-in-redshift-and-mysql.html to create one using row numbers.
Specifically, if we assume the number of rows in cmd_logs is larger than the maximum number of commas in the user_action column, we can create a numbers table by counting rows. To start, let's assume there are at most 99 commas in the user_action column:
select
(row_number() over (order by true))::int as n
into numbers
from cmd_logs
limit 100;
If we want to get fancy, we can compute the number of commas from the cmd_logs table to create a more precise set of rows in numbers:
select
n::int
into numbers
from
(select
row_number() over (order by true) as n
from cmd_logs)
cross join
(select
max(regexp_count(user_action, '[,]')) as max_num
from cmd_logs)
where
n <= max_num + 1;
Once there is a numbers table, we can do:
select
user_id,
user_name,
split_part(user_action,',',n) as parsed_action
from
cmd_logs
cross join
numbers
where
split_part(user_action,',',n) is not null
and split_part(user_action,',',n) != '';
Another idea is to transform your CSV string into JSON first, followed by JSON extract, along the following lines:
... '["' || replace( user_action, '.', '", "' ) || '"]' AS replaced
... JSON_EXTRACT_ARRAY_ELEMENT_TEXT(replaced, numbers.i) AS parsed_action
Where "numbers" is the table from the first answer. The advantage of this approach is the ability to use built-in JSON functionality.
If you know that there are not many actions in your user_action column, you can use recursive sub-querying with UNION ALL, thereby avoiding the auxiliary numbers table.
But it requires you to know the number of actions for each user, so either adjust the initial table or make a view or a temporary table for it.
Data preparation
Assuming you have something like this as a table:
create temporary table actions
(
user_id varchar,
user_name varchar,
user_action varchar
);
I'll insert some values in it:
insert into actions
values (1, 'Shone', 'start,stop,cancel'),
(2, 'Gregory', 'find,diagnose,taunt'),
(3, 'Robot', 'kill,destroy');
Here's an additional table with a temporary count:
create temporary table actions_with_counts
(
id varchar,
name varchar,
num_actions integer,
actions varchar
);
insert into actions_with_counts (
select user_id,
user_name,
regexp_count(user_action, ',') + 1 as num_actions,
user_action
from actions
);
This would be our "input table", and it looks just as you expected:
select * from actions_with_counts;
id  name     num_actions  actions
----------------------------------
2   Gregory  3            find,diagnose,taunt
3   Robot    2            kill,destroy
1   Shone    3            start,stop,cancel
Again, you can adjust the initial table and thereby skip adding the counts as a separate table.
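For instance, a view could stand in for the counts table (my sketch; note that a regular view cannot reference a temporary table, so this assumes actions is permanent):
create view actions_with_counts as
select user_id as id,
       user_name as name,
       regexp_count(user_action, ',') + 1 as num_actions,
       user_action as actions
from actions;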
Sub-query to flatten the actions
Here's the unnesting query:
with recursive tmp (user_id, user_name, idx, user_action) as
(
select id,
name,
1 as idx,
split_part(actions, ',', 1) as user_action
from actions_with_counts
union all
select user_id,
user_name,
idx + 1 as idx,
split_part(actions, ',', idx + 1)
from actions_with_counts
join tmp on actions_with_counts.id = tmp.user_id
where idx < num_actions
)
select user_id, user_name, user_action as parsed_action
from tmp
order by user_id;
This will create a new row for each action, and the output would look like this:
user_id  user_name  parsed_action
----------------------------------
1        Shone      start
1        Shone      stop
1        Shone      cancel
2        Gregory    find
2        Gregory    diagnose
2        Gregory    taunt
3        Robot      kill
3        Robot      destroy
Here are two ways to achieve this.
In my example, I'm assuming that I am accepting a comma separated list of values. My values look like schema.table.column.
The first involves using a recursive CTE.
drop table if exists #dep_tbl;
create table #dep_tbl as
select 'schema.foobar.insert_ts,schema.baz.load_ts' as dep
;
with recursive tmp (level, dep_split, to_split) as
(
select 1 as level
, split_part(dep, ',', 1) as dep_split
, regexp_count(dep, ',') as to_split
from #dep_tbl
union all
select tmp.level + 1 as level
, split_part(a.dep, ',', tmp.level + 1) as dep_split_u
, tmp.to_split
from #dep_tbl a
inner join tmp on tmp.dep_split is not null
and tmp.level <= tmp.to_split
)
select dep_split from tmp;
the above yields:
|dep_split|
|schema.foobar.insert_ts|
|schema.baz.load_ts|
The second involves a stored procedure.
CREATE OR REPLACE PROCEDURE so_test(dependencies_csv varchar(max))
LANGUAGE plpgsql
AS $$
DECLARE
dependencies_csv_vals varchar(max);
BEGIN
drop table if exists #dep_holder;
create table #dep_holder
(
avoid varchar(60000)
);
IF dependencies_csv is not null THEN
dependencies_csv_vals:='('||replace(quote_literal(regexp_replace(dependencies_csv,'\\s','')),',', '\'),(\'') ||')';
execute 'insert into #dep_holder values '||dependencies_csv_vals||';';
END IF;
END;
$$
;
call so_test('schema.foobar.insert_ts,schema.baz.load_ts');
select
*
from
#dep_holder;
the above yields:
|avoid|
|schema.foobar.insert_ts|
|schema.baz.load_ts|
in conclusion
If you only care about one single column in your input (the X delimited values), then I think the stored procedure is easier/faster.
However, if you have other columns you care about and want to keep those columns along with your comma separated value column now transformed to rows, OR if you want to know the argument (original list of delimited values), I think the recursive CTE is the way to go. In that case, you can just add those other columns to the columns selected in the recursive query.
You can get the expected result with the following query. I'm using UNION ALL to convert columns to rows.
select user_id, user_name, split_part(user_action,',',1) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',2) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',3) as parsed_action from cmd_logs
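One caveat with this approach: when an action string has fewer than three parts, split_part returns an empty string for the missing positions. A refinement (my sketch, same table and columns) wraps the union and filters those out:
select user_id, user_name, parsed_action
from (
  select user_id, user_name, split_part(user_action,',',1) as parsed_action from cmd_logs
  union all
  select user_id, user_name, split_part(user_action,',',2) from cmd_logs
  union all
  select user_id, user_name, split_part(user_action,',',3) from cmd_logs
) parts
where parsed_action != '';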
Here's my equally-terrible answer.
I have a users table, and then an events table with a column that is just a comma-delimited string of users at said event, e.g.
event_id | user_ids
1 | 5,18,25,99,105
In this case, I used the LIKE and wildcard functions to build a new table that represents each event-user edge.
SELECT e.event_id, u.id as user_id
FROM events e
LEFT JOIN users u ON e.user_ids like '%' || u.id || '%'
It's not pretty, but I throw it in a WITH clause so that I don't have to run it more than once per query. I'll likely just build an ETL to create that table every night anyway.
Also, this only works if you have a second table that does have one row per unique possibility. If not, you could do LISTAGG to get a single cell with all your values, export that to a CSV and reupload that as a table to help.
Like I said: a terrible, no-good solution.
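One way to harden it slightly (my variation, not part of the original answer): pad both sides with the delimiter, so that user 5 no longer matches user 105:
SELECT e.event_id, u.id as user_id
FROM events e
LEFT JOIN users u ON ',' || e.user_ids || ',' LIKE '%,' || u.id || ',%'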
Late to the party, but I got something working (albeit very slow):
with nums as (select n::int n
from
(select
row_number() over (order by true) as n
from table_with_enough_rows_to_cover_range)
cross join
(select
max(json_array_length(json_column)) as max_num
from table_with_json_column )
where
n <= max_num + 1)
select *, json_extract_array_element_text(json_column,nums.n-1) parsed_json
from nums, table_with_json_column
where json_extract_array_element_text(json_column,nums.n-1) != ''
and nums.n <= json_array_length(json_column)
Thanks to the answer by Bob Baxley for the inspiration.
Just an improvement over the answer above (https://stackoverflow.com/a/31998832/1265306): generate the numbers table using the following SQL, taken from
https://discourse.looker.com/t/generating-a-numbers-table-in-mysql-and-redshift/482
SELECT
p0.n
+ p1.n*2
+ p2.n * POWER(2,2)
+ p3.n * POWER(2,3)
+ p4.n * POWER(2,4)
+ p5.n * POWER(2,5)
+ p6.n * POWER(2,6)
+ p7.n * POWER(2,7)
as number
INTO numbers
FROM
(SELECT 0 as n UNION SELECT 1) p0,
(SELECT 0 as n UNION SELECT 1) p1,
(SELECT 0 as n UNION SELECT 1) p2,
(SELECT 0 as n UNION SELECT 1) p3,
(SELECT 0 as n UNION SELECT 1) p4,
(SELECT 0 as n UNION SELECT 1) p5,
(SELECT 0 as n UNION SELECT 1) p6,
(SELECT 0 as n UNION SELECT 1) p7
ORDER BY 1
LIMIT 100
"ORDER BY" is there only in case you want paste it without the INTO clause and see the results
Create a stored procedure that will parse the string dynamically, populate a temp table, and select from the temp table.
Here is the magic code:
CREATE OR REPLACE PROCEDURE public.sp_string_split( "string" character varying )
AS $$
DECLARE
cnt INTEGER := 1;
no_of_parts INTEGER := (select REGEXP_COUNT ( string , ',' ));
sql VARCHAR(MAX) := '';
item character varying := '';
BEGIN
-- Create table
sql := 'CREATE TEMPORARY TABLE IF NOT EXISTS split_table (part VARCHAR(255)) ';
RAISE NOTICE 'executing sql %', sql ;
EXECUTE sql;
<<simple_loop_exit_continue>>
LOOP
item = (select split_part("string",',',cnt));
RAISE NOTICE 'item %', item ;
sql := 'INSERT INTO split_table SELECT '''||item||''' ';
EXECUTE sql;
cnt = cnt + 1;
EXIT simple_loop_exit_continue WHEN (cnt >= no_of_parts + 2);
END LOOP;
END ;
$$ LANGUAGE plpgsql;
Usage example:
call public.sp_string_split('john,smith,jones');
select *
from split_table
You can try the COPY command to copy your file into Redshift tables:
copy table_name from 's3://mybucket/myfolder/my.csv' CREDENTIALS 'aws_access_key_id=my_aws_acc_key;aws_secret_access_key=my_aws_sec_key' delimiter ','
You can use the delimiter ',' option. For more details on COPY command options, you can visit
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html

Oracle SQL: using LAG function with user-defined-type returns "inconsistent datatypes"

I have a type MyType defined as follows:
create or replace type MyType as varray(20000) of number(18);
And a table MyTable defined as follows:
create table MyTable (
id number(18) primary key
,widgets MyType
)
I am trying to select the widgets for each row and its logically previous row in MyTable using the following SQL:
select t.id
,lag(t.widgets,1) over (order by t.id) as widgets_previous
from MyTable t
order by t.id;
and I get the response:
ORA-00932: inconsistent datatypes: expected - got MYSCHEMA.MYTYPE
If I run the exact same query using a column of type varchar or number instead of MyType it works fine.
The type of the column in the current row and its previous row must be the same so I can only assume it is something related to the user defined type.
Do I need to do something special to use LAG with a user defined type, or does LAG not support user defined types? If the latter, are there any other utility functions that would provide the same functionality or do I need to do a traditional self join in order to achieve the same?
After reading all the above I've opted for the following as the most effective method for achieving what I need:
select curr.id
,curr.widgets as widgets
,prev.widgets as previous_widgets
from (select a.id
,a.widgets
,lag(a.id,1) over (order by a.id) as previous_id
from mytable a
) curr
left join mytable prev on (prev.id = curr.previous_id)
order by curr.id
i.e. a LAG / self-join hybrid, using LAG on a number field (which it doesn't complain about) to identify the join condition. It's fairly tidy, I think, and I get my collections as desired. Thanks to everyone for the extremely useful input.
You can use LAG with a UDT; the problem is the varray.
Does this give you a result?
select t.id
,lag(
(select listagg(column_value, ',') within group (order by column_value)
from table(t.widgets))
,1) over (order by t.id) as widgets_previous
from MyTable t
order by t.id;
You could try something like:
SQL> create or replace type TestType as varray(20000) of number(18);
Type created.
SQL> create table TestTable (
id number(18) primary key
,widgets TestType
)
Table created.
SQL> delete from testtable
0 rows deleted.
SQL> insert into TestTable values (1, TestType(1,2,3,4))
1 row created.
SQL> insert into TestTable values (2, TestType(5,6,7))
1 row created.
SQL> insert into TestTable values (3, TestType())
1 row created.
SQL> insert into TestTable values (4,null)
1 row created.
SQL> commit
Commit complete.
SQL> -- show all data with widgets
SQL> select t.id, w.column_value as widget_ids
from testtable t, table(t.widgets) w
ID WIDGET_IDS
---------- ----------
1 1
1 2
1 3
1 4
2 5
2 6
2 7
7 rows selected.
SQL> -- show with lag function
SQL> select t.id, lag(w.column_value, 1) over (order by t.id) as widgets_previous
from testtable t, table(t.widgets) w
ID WIDGETS_PREVIOUS
---------- ----------------
1
1 1
1 2
1 3
2 4
2 5
2 6
7 rows selected.

how to give column values as xml element names in Oracle

I have two tables. I am trying to join them and prepare XML by using SQL/XML (SQLX) in Oracle. The problem is that the XMLELEMENT function takes hardcoded values for the element name, but I want those names to come from column data. Is that possible?
create table PRODUCTEDIT
(
PRODUCTEDIT_NUM NUMBER(12) primary key,
API_NAME VARCHAR2(255)
);
create table PRODUCTEDITPARAMETER
(
PRODUCTEDIT_NUM NUMBER(12) not null,
PARAMETER_SEQ NUMBER(9) not null,
PARAMETER_VALUE VARCHAR2(4000),
CONSTRAINT fk_producteditparameter
FOREIGN KEY (PRODUCTEDIT_NUM )
REFERENCES PRODUCTEDIT(PRODUCTEDIT_NUM )
);
There are 2 records in the first table:
PRODUCTEDIT_NUM Api_Name
1 ModifyProd
2 CreateProd
Records in the second table:
PRODUCTEDIT_NUM PARAMETER_SEQ PARAMETER_VALUE
1 1 10
1 2 Data
1 3 1
1 4 Data1
1 5 1
2 1 11
2 2 Voice
2 3 1
Now I want to get XML output like below:
<?xml version='1.0'?>
<ModifyProd>
<1>10</1>
<2>Data</2>
<3>1</3>
<4>Data1</4>
<5>1</5>
</ModifyProd>
<CreateProd>
<1>11</1>
<2>Voice</2>
<3>1</3>
</CreateProd>
In the above XML, the element names (ModifyProd, CreateProd, 1, 2, etc.) come from table data. I am not able to achieve that using SQL/XML in Oracle.
I tried the below, but it doesn't seem to work; XMLELEMENT is not taking the value the way I am passing it.
SELECT XMLROOT(
XMLELEMENT(d.api_name,
(SELECT XMLAGG(
XMLELEMENT(e.parameter_seq,e.parameter_value
)
)
FROM producteditparameter e
WHERE e.productedit_num = d.productedit_num
)
),version '1.0', standalone yes
)
FROM productedit d
I assume you are looking for this:
WITH t AS
(SELECT 'foo' AS ELEMENT_NAME, 'bar' AS ELEMENT_CONTENT FROM dual)
SELECT XMLELEMENT(EVALNAME ELEMENT_NAME, ELEMENT_CONTENT)
FROM t;
<foo>bar</foo>
Update based on additional input
Your result is not well-formed XML. An XML document must have a single root element. So either you want several XML documents, in which case you can do this:
SELECT
XMLELEMENT(
EVALNAME Api_Name,
XMLAGG(XMLELEMENT(EVALNAME parameter_seq, e.parameter_value) ORDER BY parameter_seq)
) AS xml_result
FROM PRODUCTEDITPARAMETER e
JOIN PRODUCTEDIT d USING (productedit_num)
GROUP BY productedit_num, Api_Name;
<ModifyProd><1>10</1><2>Data</2><3>1</3><4>Data1</4><5>1</5></ModifyProd>
<CreateProd><1>11</1><2>Voice</2><3>1</3></CreateProd>
or, if you need a single XML document, you have to enclose it in another element, e.g.
SELECT
XMLELEMENT("Products",
XMLAGG(
XMLELEMENT(
EVALNAME Api_Name,
XMLAGG(XMLELEMENT(EVALNAME parameter_seq, e.parameter_value) ORDER BY parameter_seq)
)
)
) AS xml_result
FROM PRODUCTEDITPARAMETER e
JOIN PRODUCTEDIT d USING (productedit_num)
GROUP BY productedit_num, Api_Name;
<Products><ModifyProd><1>10</1><2>Data</2><3>1</3><4>Data1</4><5>1</5></ModifyProd><CreateProd><1>11</1><2>Voice</2><3>1</3></CreateProd></Products>
Another solution if I understood what you want:
WITH t AS
( SELECT LEVEL col1, 'test' || LEVEL col2
FROM DUAL
CONNECT BY LEVEL < 10)
SELECT XMLSERIALIZE (DOCUMENT (xmltype (CURSOR (SELECT col1, col2 FROM t))) INDENT SIZE = 0)
FROM DUAL;
Then, you can use an XSLT to transform the output as you want.
Hope it helps

Returning rows that had no matches

I've read and read and read but I haven't found a solution to my problem.
I'm doing something like:
SELECT a
FROM t1
WHERE t1.b IN (<external list of values>)
There are other conditions of course, but this is the gist of it.
My question is: is there a way to show which of the manually entered values didn't find a match? I've looked but I can't find one, and I'm going in circles.
Create a temp table with the external list of values, then you can do:
select item
from tmptable t
where t.item not in ( select b from t1 )
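For completeness, the setup might look like this (my sketch; the table and column names are illustrative):
create global temporary table tmptable (item number)
on commit preserve rows;

insert into tmptable values (1);
insert into tmptable values (2);
-- ... one row per externally provided value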
If the list is short enough, you can do something like:
with t as (
select case when t1.b = 'FIRSTITEM' then 1 else 0 end firstfound,
case when t1.b = '2NDITEM' then 1 else 0 end secondfound,
case when t1.b = '3RDITEM' then 1 else 0 end thirdfound
...
from t1 where t1.b in (<external list of values>)
)
select sum(firstfound), sum(secondfound), sum(thirdfound), ...
from t
But with proper rights, I would use Nicholas' answer.
To display which values in the list of values haven't found a match, as one approach, you could create a nested table SQL (schema object) data type:
-- assuming that the values in the list
-- are of number datatype
create type T_NumList as table of number;
and use it as follows:
-- sample of data. generates numbers from 1 to 11
SQL> with t1(col) as(
2 select level
3 from dual
4 connect by level <= 11
5 )
6 select s.column_value as without_match
7 from table(t_NumList(1, 2, 15, 50, 23)) s -- here goes your list of values
8 left join t1 t
9 on (s.column_value = t.col)
10 where t.col is null
11 ;
Result:
WITHOUT_MATCH
-------------
15
50
23
There is no easy way to convert an "externally provided" list into a table that can be used for the comparison. One way is to use one of the (undocumented) system types to generate a table on the fly based on the values supplied:
with value_list (id) as (
select column_value
from table(sys.odcinumberlist (1, 2, 3)) -- this is the list of values
)
select l.id as missing_id
from value_list l
left join t1 on t1.id = l.id
where t1.id is null;
There are ways to get what you have described, but they have requirements which exceed the statement of the problem. From the minimal description provided, there's no way to have the SQL return the list of the manually-entered values that did not match.
For example, if it's possible to insert the manually-entered values into a separate table - let's call it matchtbl, with the column named b - then the following should do the job:
SELECT matchtbl.b
FROM matchtbl
WHERE matchtbl.b NOT IN (SELECT distinct b
FROM t1)
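One caveat worth adding (my note, not part of the original answer): if t1.b can contain NULLs, the NOT IN form returns no rows at all; a NOT EXISTS form avoids that:
SELECT matchtbl.b
FROM matchtbl
WHERE NOT EXISTS (SELECT 1
                  FROM t1
                  WHERE t1.b = matchtbl.b)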
Of course, if the data is being processed by a programming language, it should be relatively easy to keep track of the set of values returned by the original query, by adding the b column to the output, and then perform the set difference.
Putting the list in an in clause makes this hard. If you can put the list in a table, then the following works:
with list as (
select val1 as value from dual union all
select val2 from dual union all
. . .
select valn from dual
)
select list.value, count(t1.b)
from list left outer join
t1
on t1.b = list.value
group by list.value;