How to query a CLOB column in Oracle [duplicate] - sql

I would like to find the distinct values that the column called CLOB_COLUMN (of type CLOB) in the table called COPIA can assume.
I have settled on a PROCEDURAL way to solve this problem, but I would prefer to use a simple SELECT such as SELECT DISTINCT CLOB_COLUMN FROM COPIA, avoiding the error "ORA-00932: inconsistent datatypes: expected - got CLOB".
How can I achieve this?
Thank you in advance for your kind cooperation. This is the procedural way I've come up with:
-- Find the distinct CLOB values that the column called CLOB_COLUMN (of type CLOB)
-- in the table called COPIA can assume.
-- Before the execution of the following PL/SQL script, the CLOB values (including duplicates)
-- are contained in the source table, called S1.
-- At the end of the execution of the PL/SQL script, the distinct values of the column CLOB_COLUMN
-- can be found in the target table called S2.
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE S1 DROP STORAGE';
   EXECUTE IMMEDIATE 'DROP TABLE S1 CASCADE CONSTRAINTS PURGE';
EXCEPTION
   WHEN OTHERS THEN
      NULL;  -- ignore the error if S1 does not exist yet
END;
/
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE S2 DROP STORAGE';
   EXECUTE IMMEDIATE 'DROP TABLE S2 CASCADE CONSTRAINTS PURGE';
EXCEPTION
   WHEN OTHERS THEN
      NULL;  -- ignore the error if S2 does not exist yet
END;
/
CREATE GLOBAL TEMPORARY TABLE S1
ON COMMIT PRESERVE ROWS
AS
   SELECT CLOB_COLUMN FROM COPIA;

CREATE GLOBAL TEMPORARY TABLE S2
ON COMMIT PRESERVE ROWS
AS
   SELECT *
   FROM S1
   WHERE 3 = 9;  -- always false: copy the structure of S1, but no rows
DECLARE
   CONTEGGIO NUMBER;

   CURSOR C1 IS
      SELECT CLOB_COLUMN FROM S1;
BEGIN
   FOR C1_REC IN C1
   LOOP
      -- How many rows in S2 are equal to c1_rec.clob_column?
      SELECT COUNT (*)
        INTO CONTEGGIO
        FROM S2 BETA
       WHERE DBMS_LOB.COMPARE (BETA.CLOB_COLUMN, C1_REC.CLOB_COLUMN) = 0;

      -- If no row equal to c1_rec.clob_column exists in S2 yet,
      -- insert c1_rec.clob_column into the table called S2
      IF CONTEGGIO = 0
      THEN
         INSERT INTO S2
         VALUES (C1_REC.CLOB_COLUMN);

         COMMIT;
      END IF;
   END LOOP;
END;
/

If it is acceptable to truncate your field to 32,767 characters, this works:
select distinct dbms_lob.substr(FIELD_CLOB, 32767) from Table1;
(Note that when DBMS_LOB.SUBSTR is called from plain SQL rather than PL/SQL, the result is capped at the VARCHAR2 limit of 4000 bytes unless MAX_STRING_SIZE = EXTENDED, so you may have to use 4000 instead.)

You could compare the hashes of the CLOB to determine if they are different:
SELECT your_clob
FROM your_table
WHERE ROWID IN (SELECT MIN(ROWID)
FROM your_table
GROUP BY dbms_crypto.HASH(your_clob, dbms_crypto.HASH_SH1))
Edit:
The HASH function doesn't guarantee that there will be no collisions. By design, however, it is extremely unlikely that you will get one. Still, if the collision risk (< 2^-80?) is not acceptable, you could improve the query by comparing (with dbms_lob.compare) the subsets of rows that share the same hash.
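A possible shape for that refinement (just a sketch, reusing your_table and your_clob from above): keep one row per distinct CLOB, using the hash as a cheap pre-filter and dbms_lob.compare as the authoritative test.
-- Sketch only: one representative row per distinct CLOB value.
-- The hash match is a fast pre-filter; dbms_lob.compare confirms true equality.
SELECT t1.your_clob
FROM your_table t1
WHERE NOT EXISTS (
  SELECT 1
  FROM your_table t2
  WHERE t2.ROWID < t1.ROWID
    AND dbms_crypto.HASH(t2.your_clob, dbms_crypto.HASH_SH1)
      = dbms_crypto.HASH(t1.your_clob, dbms_crypto.HASH_SH1)
    AND dbms_lob.compare(t2.your_clob, t1.your_clob) = 0
);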

Add TO_CHAR after the DISTINCT keyword to convert the CLOB to CHAR:
SELECT DISTINCT TO_CHAR(CLOB_FIELD) FROM table1; -- returns the distinct values of CLOB_FIELD
(This only works while every value fits in a VARCHAR2; longer values raise ORA-22835.)

Use this approach. In the table profile, the column content is an NCLOB. I added the where clause to reduce the running time, which is high:
with
r as (select rownum i, content from profile where package = 'intl'),
s as (select distinct (select min(i) from r where dbms_lob.compare(r.content, t.content) = 0) min_i
      from profile t where t.package = 'intl')
select (select content from r where r.i = s.min_i) content
from s;
It is not about to win any prizes for efficiency, but it should work.

select distinct DBMS_LOB.substr(column_name, 3000) from table_name;

If truncating the clob to the size of a varchar2 won't work, and you're worried about hash collisions, you can:
Add a row number to every row;
Use DBMS_LOB.compare in a NOT EXISTS subquery, excluding duplicates (i.e. rows where compare = 0) that have a higher row number.
For example:
create table t (
c1 clob
);
insert into t values ( 'xxx' );
insert into t values ( 'xxx' );
insert into t values ( 'yyy' );
commit;
with rws as (
select row_number () over ( order by rowid ) rn,
t.*
from t
)
select c1 from rws r1
where not exists (
select * from rws r2
where dbms_lob.compare ( r1.c1, r2.c1 ) = 0
and r1.rn > r2.rn
);
C1
xxx
yyy

To bypass the Oracle error, you have to do something like this:
SELECT CLOB_COLUMN FROM COPIA C1
WHERE C1.ID IN (SELECT DISTINCT C2.ID FROM COPIA C2 WHERE ....)

I know this is an old question, but I believe I've figured out a better way to do what you are asking.
It is kind of like a cheat, really... The idea behind it is that you can't do a DISTINCT on a CLOB column, but you can do a DISTINCT on a LISTAGG of the CLOB column... you just need to play with the PARTITION clause of the LISTAGG function to make sure it will only return one value.
With that in mind... here is my solution:
SELECT DISTINCT listagg(clob_column,'| ') within GROUP (ORDER BY unique_id) over (PARTITION BY unique_id) clob_column
FROM copia;

Related

Redshift comma separated string to column [duplicate]

I am wondering how to convert comma-delimited values into rows in Redshift. I am afraid that my own solution isn't optimal. Please advise. I have a table where one of the columns contains comma-separated values. For example:
I have:
user_id|user_name|user_action
-----------------------------
1 | Shone | start,stop,cancell...
I would like to see
user_id|user_name|parsed_action
-------------------------------
1 | Shone | start
1 | Shone | stop
1 | Shone | cancell
....
A slight improvement over the existing answer is to use a second "numbers" table that enumerates all of the possible list lengths and then use a cross join to make the query more compact.
Redshift does not have a straightforward method for creating a numbers table that I am aware of, but we can use a bit of a hack from https://www.periscope.io/blog/generate-series-in-redshift-and-mysql.html to create one using row numbers.
Specifically, if we assume the number of rows in cmd_logs is larger than the maximum number of commas in the user_action column, we can create a numbers table by counting rows. To start, let's assume there are at most 99 commas in the user_action column:
select
(row_number() over (order by true))::int as n
into numbers
from cmd_logs
limit 100;
If we want to get fancy, we can compute the number of commas from the cmd_logs table to create a more precise set of rows in numbers:
select
n::int
into numbers
from
(select
row_number() over (order by true) as n
from cmd_logs)
cross join
(select
max(regexp_count(user_action, '[,]')) as max_num
from cmd_logs)
where
n <= max_num + 1;
Once there is a numbers table, we can do:
select
user_id,
user_name,
split_part(user_action,',',n) as parsed_action
from
cmd_logs
cross join
numbers
where
split_part(user_action,',',n) is not null
and split_part(user_action,',',n) != '';
Another idea is to transform your CSV string into JSON first, followed by a JSON extract, along the following lines:
... '["' || replace( user_action, ',', '", "' ) || '"]' AS replaced
... JSON_EXTRACT_ARRAY_ELEMENT_TEXT(replaced, numbers.n - 1) AS parsed_action
Where "numbers" is the table from the first answer. The advantage of this approach is the ability to use built-in JSON functionality.
If you know that there are not many actions in your user_action column, you can use recursive sub-querying with union all, and thereby avoid the aux numbers table.
But it requires you to know the number of actions for each user, so either adjust the initial table or make a view or a temporary table for it.
Data preparation
Assuming you have something like this as a table:
create temporary table actions
(
user_id varchar,
user_name varchar,
user_action varchar
);
I'll insert some values in it:
insert into actions
values (1, 'Shone', 'start,stop,cancel'),
(2, 'Gregory', 'find,diagnose,taunt'),
(3, 'Robot', 'kill,destroy');
Here's an additional temporary table with the counts:
create temporary table actions_with_counts
(
id varchar,
name varchar,
num_actions integer,
actions varchar
);
insert into actions_with_counts (
select user_id,
user_name,
regexp_count(user_action, ',') + 1 as num_actions,
user_action
from actions
);
This would be our "input table", and it looks just as you expected:
select * from actions_with_counts;
id|name   |num_actions|actions
------------------------------------
2 |Gregory|3          |find,diagnose,taunt
3 |Robot  |2          |kill,destroy
1 |Shone  |3          |start,stop,cancel
Again, you can adjust the initial table and thereby skip adding the counts as a separate table.
Sub-query to flatten the actions
Here's the unnesting query:
with recursive tmp (user_id, user_name, idx, user_action) as
(
select id,
name,
1 as idx,
split_part(actions, ',', 1) as user_action
from actions_with_counts
union all
select user_id,
user_name,
idx + 1 as idx,
split_part(actions, ',', idx + 1)
from actions_with_counts
join tmp on actions_with_counts.id = tmp.user_id
where idx < num_actions
)
select user_id, user_name, user_action as parsed_action
from tmp
order by user_id;
This will create a new row for each action, and the output would look like this:
user_id|user_name|parsed_action
-------------------------------
1      |Shone    |start
1      |Shone    |stop
1      |Shone    |cancel
2      |Gregory  |find
2      |Gregory  |diagnose
2      |Gregory  |taunt
3      |Robot    |kill
3      |Robot    |destroy
Here are two ways to achieve this.
In my example, I'm assuming that I am accepting a comma separated list of values. My values look like schema.table.column.
The first involves using a recursive CTE.
drop table if exists #dep_tbl;
create table #dep_tbl as
select 'schema.foobar.insert_ts,schema.baz.load_ts' as dep
;
with recursive tmp (level, dep_split, to_split) as
(
select 1 as level
, split_part(dep, ',', 1) as dep_split
, regexp_count(dep, ',') as to_split
from #dep_tbl
union all
select tmp.level + 1 as level
, split_part(a.dep, ',', tmp.level + 1) as dep_split_u
, tmp.to_split
from #dep_tbl a
inner join tmp on tmp.dep_split is not null
and tmp.level <= tmp.to_split
)
select dep_split from tmp;
the above yields:
|dep_split|
|schema.foobar.insert_ts|
|schema.baz.load_ts|
The second involves a stored procedure.
CREATE OR REPLACE PROCEDURE so_test(dependencies_csv varchar(max))
LANGUAGE plpgsql
AS $$
DECLARE
dependencies_csv_vals varchar(max);
BEGIN
drop table if exists #dep_holder;
create table #dep_holder
(
avoid varchar(60000)
);
IF dependencies_csv is not null THEN
dependencies_csv_vals:='('||replace(quote_literal(regexp_replace(dependencies_csv,'\\s','')),',', '\'),(\'') ||')';
execute 'insert into #dep_holder values '||dependencies_csv_vals||';';
END IF;
END;
$$
;
call so_test('schema.foobar.insert_ts,schema.baz.load_ts');
select
*
from
#dep_holder;
the above yields:
|dep_split|
|schema.foobar.insert_ts|
|schema.baz.load_ts|
in conclusion
If you only care about one single column in your input (the X-delimited values), then I think the stored procedure is easier/faster.
However, if you have other columns you care about and want to keep those columns along with your comma-separated-value column now transformed to rows, OR if you want to know the argument (the original list of delimited values), I think the recursive CTE is the way to go. In that case, you can just add those other columns to the columns selected in the recursive query.
You can get the expected result with the following query. I'm using "UNION ALL" to convert the column values into rows.
select user_id, user_name, split_part(user_action,',',1) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',2) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',3) as parsed_action from cmd_logs
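One caveat worth noting (a sketch, not part of the original answer): this assumes at most three values per row, and split_part returns an empty string when a row has fewer than three, so you may want to wrap the union and filter those out:
select user_id, user_name, parsed_action
from (
  select user_id, user_name, split_part(user_action, ',', 1) as parsed_action from cmd_logs
  union all
  select user_id, user_name, split_part(user_action, ',', 2) from cmd_logs
  union all
  select user_id, user_name, split_part(user_action, ',', 3) from cmd_logs
) t
where parsed_action <> '';  -- drop the padding rows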
Here's my equally-terrible answer.
I have a users table, and then an events table with a column that is just a comma-delimited string of users at said event, e.g.
event_id | user_ids
1 | 5,18,25,99,105
In this case, I used the LIKE and wildcard functions to build a new table that represents each event-user edge (padding both sides with commas so that user id 5 doesn't also match 25 or 105):
SELECT e.event_id, u.id as user_id
FROM events e
LEFT JOIN users u ON ',' || e.user_ids || ',' LIKE '%,' || u.id || ',%'
It's not pretty, but I throw it in a WITH clause so that I don't have to run it more than once per query. I'll likely just build an ETL to create that table every night anyway.
Also, this only works if you have a second table that has one row per unique possibility. If not, you could do LISTAGG to get a single cell with all your values, export that to a CSV, and re-upload that as a table to help.
Like I said: a terrible, no-good solution.
Late to the party, but I got something working (albeit very slow):
with nums as (select n::int n
from
(select
row_number() over (order by true) as n
from table_with_enough_rows_to_cover_range)
cross join
(select
max(json_array_length(json_column)) as max_num
from table_with_json_column )
where
n <= max_num + 1)
select *, json_extract_array_element_text(json_column,nums.n-1) parsed_json
from nums, table_with_json_column
where json_extract_array_element_text(json_column,nums.n-1) != ''
and nums.n <= json_array_length(json_column)
Thanks to the answer by Bob Baxley for the inspiration.
Just an improvement on the answer above: https://stackoverflow.com/a/31998832/1265306
It generates the numbers table using the following SQL, taken from
https://discourse.looker.com/t/generating-a-numbers-table-in-mysql-and-redshift/482
SELECT
p0.n
+ p1.n*2
+ p2.n * POWER(2,2)
+ p3.n * POWER(2,3)
+ p4.n * POWER(2,4)
+ p5.n * POWER(2,5)
+ p6.n * POWER(2,6)
+ p7.n * POWER(2,7)
as number
INTO numbers
FROM
(SELECT 0 as n UNION SELECT 1) p0,
(SELECT 0 as n UNION SELECT 1) p1,
(SELECT 0 as n UNION SELECT 1) p2,
(SELECT 0 as n UNION SELECT 1) p3,
(SELECT 0 as n UNION SELECT 1) p4,
(SELECT 0 as n UNION SELECT 1) p5,
(SELECT 0 as n UNION SELECT 1) p6,
(SELECT 0 as n UNION SELECT 1) p7
ORDER BY 1
LIMIT 100
"ORDER BY" is there only in case you want paste it without the INTO clause and see the results
Create a stored procedure that parses the string dynamically and populates a temp table, then select from the temp table.
Here is the magic code:
CREATE OR REPLACE PROCEDURE public.sp_string_split( "string" character varying )
AS $$
DECLARE
cnt INTEGER := 1;
no_of_parts INTEGER := (select REGEXP_COUNT ( string , ',' ));
sql VARCHAR(MAX) := '';
item character varying := '';
BEGIN
-- Create table
sql := 'CREATE TEMPORARY TABLE IF NOT EXISTS split_table (part VARCHAR(255)) ';
RAISE NOTICE 'executing sql %', sql ;
EXECUTE sql;
<<simple_loop_exit_continue>>
LOOP
item = (select split_part("string",',',cnt));
RAISE NOTICE 'item %', item ;
sql := 'INSERT INTO split_table SELECT '''||item||''' ';
EXECUTE sql;
cnt = cnt + 1;
EXIT simple_loop_exit_continue WHEN (cnt >= no_of_parts + 2);
END LOOP;
END ;
$$ LANGUAGE plpgsql;
Usage example:
call public.sp_string_split('john,smith,jones');
select *
from split_table
You can try the COPY command to load your file into Redshift tables:
copy table_name from 's3://mybucket/myfolder/my.csv' CREDENTIALS 'aws_access_key_id=my_aws_acc_key;aws_secret_access_key=my_aws_sec_key' delimiter ','
You can use the delimiter ',' option.
For more details of copy command options you can visit this page
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html

Oracle capture count of minus operation

I have 2 tables (TABLE_A & TABLE_B) where I'm using the MINUS command to see if there are differences in the tables.
In my example below you can see that TABLE_A has an additional row.
Is there a way to capture the numeric difference between the two tables, in this case 1 row?
If there is a difference > 0, then display the value. Although my example is small, the tables could contain many rows, so I would only like to run the MINUS once if possible. I'm also amenable to alternative solutions and not tied to the MINUS command; if this can be done with SQL only, that will work too.
Thanks in advance for your expertise and to all who answer.
CREATE TABLE TABLE_A(
seq_num NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL,
nm VARCHAR(30)
);
/
CREATE TABLE TABLE_B(
seq_num NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL,
nm VARCHAR(30)
);
/
BEGIN
   FOR i IN 1..4 LOOP
      INSERT INTO TABLE_A (nm) VALUES ('Name '||i);
   END LOOP;
   FOR i IN 1..3 LOOP
      INSERT INTO TABLE_B (nm) VALUES ('Name '||i);
   END LOOP;
END;
/
-- MINUS operation
SELECT nm FROM TABLE_A
MINUS
SELECT nm FROM TABLE_B;
Output:
NM
Name 4
Pseudo code
Do minus command
If difference >0 then display rows
There are many ways to do this; you can try one like the below:
SELECT COUNT(*)
FROM (SELECT nm FROM TABLE_A
MINUS
SELECT nm FROM TABLE_B);
Another method may be:
SELECT COUNT(*)
FROM TABLE_A A
WHERE NOT EXISTS (SELECT NULL
FROM TABLE_B B
WHERE A.nm = B.nm)
If I understood the question correctly, you can do it using an analytic count:
select *
from (
select v.*,count(*)over() cnt
from (
SELECT nm FROM TABLE_A
MINUS
SELECT nm FROM TABLE_B
) v
)
where cnt>=4;
DBFiddle: https://dbfiddle.uk/?rdbms=oracle_21&fiddle=0ac62f3d1ea835f60427a1da8efb965e
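A slightly simpler variant of the same idea (a sketch; it drops the cnt threshold used in the fiddle and treats any returned row as the "difference > 0" signal):
SELECT v.nm, COUNT(*) OVER () AS diff_count
FROM (SELECT nm FROM TABLE_A
      MINUS
      SELECT nm FROM TABLE_B) v;
-- no rows returned means no difference; otherwise diff_count is the size
-- of the difference, computed in the same single pass as the MINUS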

How to use a global temporary table in a function?

So I have a function that takes in 3 IN parameters (hour, date, code) and returns a number.
My function code is below:
create or replace FUNCTION get_max_value (rhr NUMBER,
rdate VARCHAR2,
rcode VARCHAR2)
RETURN NUMBER
IS
rvalue_day NUMBER;
--
BEGIN
SELECT MAX (v.value)
INTO rvalue_day
FROM table v
JOIN rel_table_1 sv ON (v.value_id = sv.value_id)
JOIN look_up_table ff ON (sv.form_field_id = ff.form_field_id)
WHERE v.date = rdate
AND v.code = rcode
AND v.hr_num = rhr
AND (v.code = 'PASS' OR v.code IS NULL);
RETURN rvalue_day;
END;
Because of performance issues, I am trying to use a global temporary table that grabs the values (v.value) and the primary identifier associated with them (value_id). My code is below:
with table_c as
(
select value, value_id
from table where date = rdate
AND code = rcode
AND hr_num = rhr
)
select MAX (v.value)
FROM table_c v
JOIN rel_table_1 sv ON (v.value_id = sv.value_id)
JOIN look_up_table ff ON (sv.form_field_id = ff.form_field_id)
WHERE ff.code_desc = rcode;
Is there a way I can incorporate the above method into a function so that it can accept values for multiple parameters? I currently have a stored proc that is trying to derive a value by inserting 3 values into those 3 parameters...
Thanks in advance!
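Incidentally, a WITH clause can live inside a function body, so the rewritten query can be parameterized the same way as the original function. A sketch, keeping the question's placeholder names (table, date, and so on would need to be the real identifiers):
create or replace FUNCTION get_max_value_cte (rhr NUMBER,
rdate VARCHAR2,
rcode VARCHAR2)
RETURN NUMBER
IS
rvalue_day NUMBER;
BEGIN
WITH table_c AS (
SELECT value, value_id
FROM table -- the question's placeholder name; use the real table here
WHERE date = rdate
AND code = rcode
AND hr_num = rhr
)
SELECT MAX (v.value)
INTO rvalue_day
FROM table_c v
JOIN rel_table_1 sv ON (v.value_id = sv.value_id)
JOIN look_up_table ff ON (sv.form_field_id = ff.form_field_id)
WHERE ff.code_desc = rcode;
RETURN rvalue_day;
END;
/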
This is not an answer to your question "how can I optimize this function".
I'm going to show you that the very idea of using a function can be the reason for your performance problems, and I'm guessing that this is the case here.
Please look at the very simple case below:
create table ttest as
select * from all_objects
fetch first 10000 rows only;
create index ttest_ix on ttest(object_type);
create or replace function get_max(p_object_type varchar)
return number
is
ret_val number;
begin
select max(object_id) into ret_val
from ttest
where object_type = p_object_type;
return ret_val;
end;
/
create or replace view get_max_view as
select object_type, max(object_id) as max_id
from ttest
group by object_type
;
The view get_max_view is an equivalent of the function get_max; you can use both in queries, for example in this way:
select object_id, object_type, get_max(object_type) as max_id
from ttest;
select object_id, object_type,
(select max_id from get_max_view x where x.object_type = t.object_type) as max_id
from ttest t;
And now please examine the case where both of the above queries are run against 10,000 records. To do so, I nest both queries as subqueries and calculate a sum of all their results:
set timings on;
select sum(max_id)
from (
select object_id, object_type, get_max(object_type) as max_id
from ttest
);
SUM(MAX_ID)
-----------
214087478
Elapsed: 00:00:11.764
select sum(max_id)
from (
select object_id, object_type,
(select max_id from get_max_view x where x.object_type = t.object_type) as max_id
from ttest t
);
SUM(MAX_ID)
-----------
214087478
Elapsed: 00:00:00.011
Please examine the times: 11.76 seconds vs. 11 milliseconds.
That is over 1000 times faster!
This is why I suggested in the comment that you replace this function with a view: the function is the most probable cause of your performance issues, and trying to optimize the function itself is probably the wrong way to go.
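Transferred to the original function (a sketch only; the question's names are placeholders and the grouping columns are my guess at the function's parameters), the equivalent view could look like:
create or replace view get_max_value_view as
select v.date, v.code, v.hr_num, max(v.value) as max_value
from table v -- placeholder names from the question; substitute the real ones
join rel_table_1 sv on (v.value_id = sv.value_id)
join look_up_table ff on (sv.form_field_id = ff.form_field_id)
where (v.code = 'PASS' or v.code is null)
group by v.date, v.code, v.hr_num;
Callers would then join to get_max_value_view on date, code, and hr_num instead of calling the function row by row, letting the optimizer merge the aggregation into the outer query.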

Assigning a unique id to the first instance of a string in PL/SQL

I have a set of data that has multiple rows with the same unique_string_identifier. I want to assign a new unique ID from a sequence to the first instance of a row with that unique_string_identifier, then give any following rows with the same unique_string_identifier the ID times -1. I've tried it three different ways, but I always get
ORA-30483: window functions are not allowed here
Here are my attempts:
UPDATE my_table
set my_id =
CASE WHEN LAG(unique_string_identifier, 1, '-') OVER (order by unique_string_identifier) <> unique_string_identifier THEN my_id_seq.nextval
ELSE LAG(-1 * my_id, 1, '-') OVER (order by unique_string_identifier) END CASE
where import_run_id = a_run_id;
I've also tried this:
UPDATE my_table
set my_id = my_id_seq.nextval
where row_number() over (partition by unique_string_identifier order by line_id) = 1;
-- another update statement to make the rows with null IDs equal to the negative ID, joined on unique_string_identifier
And this:
UPDATE my_Table
set my_id =
decode(unique_string_identifier, LAG(unique_string_identifier, 1, '-') OVER (order by unique_string_identifier), LAG( my_id, 1, '-') OVER (order by unique_string_identifier), my_id_seq.nextval)
where import_run_id = a_run_id;
How can I make this work?
EDIT: Also for my own enrichment, if anyone can explain why these 3 statements (which all seem pretty different to me) end up getting the exact same ORA error, I'd appreciate it.
I couldn't work out a simple MERGE or set of UPDATEs, but here is a potential solution that might work fine, tested on Oracle 11g, using PL/SQL. (Regarding your edit: all three attempts fail the same way because analytic functions such as LAG and ROW_NUMBER are only allowed in a query's SELECT list or ORDER BY, never in an UPDATE's SET or WHERE clause, which is what ORA-30483 is complaining about.)
Test scenario:
create table my_table (unique_string varchar2(100));
insert into my_table values ('aaa');
insert into my_table values ('aaa');
insert into my_table values ('aaa');
insert into my_table values ('bbb');
insert into my_table values ('bbb');
insert into my_table values ('ccc');
alter table my_table add (id number);
create sequence my_seq;
Here's the PL/SQL to do the update:
declare
cursor c is
select unique_string
,row_number()
over (partition by unique_string order by 1)
as rn
from my_table
order by unique_string
for update of id;
r c%rowtype;
begin
open c;
loop
fetch c into r;
exit when c%notfound;
if r.rn = 1 then
update my_table
set id = my_seq.nextval
where current of c;
else
update my_table
set id = my_seq.currval * -1
where current of c;
end if;
end loop;
close c;
end;
/
Results from my test (note that the sequence had advanced a little by this stage):
select * from my_table;
UNIQUE_STRING ID
============= ==
aaa 7
aaa -7
aaa -7
bbb 8
bbb -8
ccc 9
P.S. I've been a bit sneaky and taken advantage of Oracle's tendency to return ROW_NUMBER in the order that the rows are returned; to be more robust and correct, I'd put the query in a subquery and ORDER BY unique_string, rn.

Is it possible to select from multiple tables, having their names as the result of a subquery?

I have some tables with the same structure and I want to run a select across a group of them.
Rather than just looping over all of those tables, I would like to put a subquery after the FROM of the main query.
Is that possible, or will it fail?
Thanks!
(Using Oracle)
Additional info: I don't have the names of the tables right away! They're stored in another table. Is it possible to have a subquery that I could put after the FROM of my main query?
"I don't have the name of the table
right away! They're stored in another
table"
Oracle doesn't do this sort of thing in SQL. You'll need to use PL/SQL and assemble a dynamic query.
create or replace function get_dynamic_rows
return sys_refcursor
is
stmt varchar2(32767) := null;
return_value sys_refcursor;
begin
for r in ( select table_name from your_table )
loop
if stmt is not null then
stmt := stmt||' union all ';
end if;
stmt := stmt||'select * from '||r.table_name;
end loop;
open return_value for stmt;
return return_value;
end;
/
This will assemble a query like this:
select * from table_1 union all select * from table_2
The UNION ALL is a set operator which combines the output of several queries in a single result set without removing duplicates. The columns in each query must match in number and datatype.
Because the generated statement will be executed automatically there's no real value in formatting it (unless the actual bits of the query are more complicated and you perhaps need to debug it).
Ref Cursors are PL/SQL constructs equivalent to JDBC or .Net ResultSets. Find out more.
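For example (a sketch of how the function might be consumed from SQL*Plus or SQLcl):
-- bind a ref cursor variable, call the function, print the rows
variable rc refcursor
exec :rc := get_dynamic_rows
print rc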
Sure, just union them together:
select * from TableA
union all
select * from TableB
union all
select * from TableC
You can union in a subquery:
select *
from (
select * from TableA
union all
select * from TableB
) sub
where col1 = 'value1'
Use union if you're only interested in unique rows, and union all if you want all rows including duplicates.