SELECT POM.TABLE_NAME, POM.COLUMN_NAME
FROM ALL_TAB_COLUMNS POM
WHERE POM.COLUMN_NAME LIKE '%STATUS%'
I want to see all possible values in the columns on this list (in one row per column if possible). How can I modify this select to do it?
I want something like this:
TABLE_NAME | COLUMN_NAME | VALUES
-----------|-------------|----------
CAR        | COLOR       | RED,GREEN
You can use the query below for your requirement. It fetches the distinct values of each matching column.
Because it uses the LISTAGG function, it should only be used for columns with a limited number of distinct values; the concatenated result has to fit in a VARCHAR2, otherwise the query fails with ORA-01489.
SELECT POM.TABLE_NAME, POM.COLUMN_NAME,
       XMLTYPE(DBMS_XMLGEN.GETXML(
           'SELECT LISTAGG(COLUMN_NAME, '','') WITHIN GROUP (ORDER BY COLUMN_NAME) VAL
              FROM (SELECT DISTINCT ' || POM.COLUMN_NAME || ' COLUMN_NAME
                      FROM ' || POM.OWNER || '.' || POM.TABLE_NAME || ')')
       ).EXTRACT('/ROWSET/ROW/VAL/text()').GETSTRINGVAL() VAL
FROM ALL_TAB_COLUMNS POM
WHERE POM.COLUMN_NAME LIKE '%STATUS%';
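The dynamic, metadata-driven idea above can be sketched outside Oracle as well. Below is a minimal Python illustration against SQLite, where sqlite_master and PRAGMA table_info stand in for ALL_TAB_COLUMNS, and GROUP_CONCAT stands in for LISTAGG; the car table and its data are made up for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE car (color TEXT, status TEXT);
    INSERT INTO car VALUES ('RED', 'OK'), ('GREEN', 'OK'), ('RED', 'BROKEN');
""")

# Walk the catalog, then build and run one DISTINCT + GROUP_CONCAT
# query per column (the real query would filter column names first).
rows = []
for (table,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    for col in con.execute(f"PRAGMA table_info({table})"):
        col_name = col[1]
        # Identifiers cannot be bound as parameters, so the dynamic SQL
        # interpolates them, just like the Oracle version concatenates them.
        # Note GROUP_CONCAT does not guarantee the order of the values.
        val, = con.execute(
            f"SELECT GROUP_CONCAT(v, ',') FROM "
            f"(SELECT DISTINCT {col_name} AS v FROM {table})"
        ).fetchone()
        rows.append((table.upper(), col_name.upper(), val))

for r in rows:
    print(r)
```

The same caveat applies as in the Oracle answer: this is only reasonable for columns with few distinct values.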
I need to move some data from Environment A to Environment B. So far, so easy. But some of the columns have FK Constraints and unfortunately the lookup data is already on Environment B and has different PKs. Lucky me, there are other unique columns I could do a mapping on. Therefore I wonder if SQL Developer has an export feature which allows me to replace certain column values by subqueries. To make it clear, I'm looking for a SQL Developer Feature, Query or similar which generates INSERT Statements that look like this:
INSERT INTO table_name(col1, col2, fkcol)
VALUES('value', 'value', (SELECT id FROM lookup_table WHERE unique_value = 'unique'))
My best approach was to try to generate them by hand, like this:
SELECT
'INSERT INTO table_name(col1, col2, fkcol) '
|| 'VALUES( '
|| (SELECT LISTAGG(col1, col2,
'SELECT id FROM lookup_table
WHERE unique_value = ''' || lookup_table.uniquevalue || '''', ', ')
WITHIN GROUP(ORDER BY col1)
FROM table_name INNER JOIN lookup_table ON table_name.fkcol = lookup_table.id)
|| ' );'
FROM table_name;
Which is absolutely a pain. Do you know a better way to achieve this without touching the other DB?
Simply write a query that produces the required data (with the mapped key) using a join of both tables.
For example (see the sample data below), such a query maps the unique_value to the id:
select tab.col1, tab.col2, lookup_table.id fkcol
from tab
join lookup_table on tab.fkcol = lookup_table.unique_value;
COL1 COL2 FKCOL
---------- ------ ----------
1 value1 11
2 value2 12
Now you can use the normal SQL Developer export feature in the INSERT format, which yields the following script. You can transfer it to the other DB, or insert the data directly with INSERT ... SELECT.
Insert into TABLE_NAME (COL1,COL2,FKCOL) values ('1','value1','11');
Insert into TABLE_NAME (COL1,COL2,FKCOL) values ('2','value2','12');
Sample Data
select * from tab;
COL1 COL2 FKCOL
---------- ------ -------
1 value1 unique
2 value2 unique2
select * from lookup_table
ID UNIQUE_
---------- -------
11 unique
12 unique2
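Outside SQL Developer, the same mapped export can also be scripted. Here is a rough Python/SQLite sketch that rebuilds the sample tables above and emits the INSERT statements (the quoting is naive; real values would need proper escaping):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE lookup_table (id INTEGER, unique_value TEXT);
    INSERT INTO lookup_table VALUES (11, 'unique'), (12, 'unique2');
    CREATE TABLE tab (col1 TEXT, col2 TEXT, fkcol TEXT);
    INSERT INTO tab VALUES ('1', 'value1', 'unique'),
                           ('2', 'value2', 'unique2');
""")

# Join the data table to the lookup table so each row already carries
# the mapped key, then render one INSERT statement per row.
stmts = [
    "Insert into TABLE_NAME (COL1,COL2,FKCOL) values ("
    f"'{c1}','{c2}','{fk}');"
    for c1, c2, fk in con.execute("""
        SELECT tab.col1, tab.col2, lookup_table.id
        FROM tab JOIN lookup_table
          ON tab.fkcol = lookup_table.unique_value
        ORDER BY tab.col1
    """)
]
for s in stmts:
    print(s)
```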
I have a table my_table with a column named itinerary in my Postgres 12 DB.
select column_name, data_type from information_schema.columns where table_name = 'my_table' and column_name = 'itinerary';
column_name | data_type
-------------+-----------
itinerary | ARRAY
(1 row)
Every element of itinerary is a JSON object with a field address, which in turn has a field city. I can find the count which matches the condition for the first element of the itinerary by using the following query:
select count(*) from my_table where lower(itinerary[1]->'address'->>'city') = 'oakland';
count
-------
12
(1 row)
and I can also find the length of an array by using the following query:
select array_length(itinerary, 1) from my_table limit 1;
I would like to find all the records that have the city name Oakland anywhere in their itinerary, not only as the first stop. I am not sure how to do that. Thanks in advance.
You can use exists and unnest():
select count(*)
from my_table t
where exists (
    select 1
    from unnest(t.itinerary) as x(obj)
    where lower(x.obj -> 'address' ->> 'city') = 'oakland'
);
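The unnest()/EXISTS pattern is Postgres-specific, but the same shape can be sketched with SQLite's json_each() through Python's sqlite3 (table and data are made up; this assumes an SQLite build with the JSON1 functions, which is standard in modern releases):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (id INTEGER, itinerary TEXT)")
trips = [
    (1, [{"address": {"city": "Oakland"}}, {"address": {"city": "Reno"}}]),
    (2, [{"address": {"city": "Reno"}}, {"address": {"city": "Oakland"}}]),
    (3, [{"address": {"city": "Reno"}}]),
]
con.executemany("INSERT INTO my_table VALUES (?, ?)",
                [(i, json.dumps(it)) for i, it in trips])

# EXISTS over the unpacked array, mirroring unnest() in Postgres:
# json_each() turns each array element into a row of the subquery.
count, = con.execute("""
    SELECT count(*) FROM my_table t
    WHERE EXISTS (
        SELECT 1 FROM json_each(t.itinerary) AS x
        WHERE lower(json_extract(x.value, '$.address.city')) = 'oakland'
    )
""").fetchone()
print(count)
```

Rows 1 and 2 match because Oakland appears somewhere in their itinerary, regardless of position.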
I am using Oracle SQL Developer. I want to run a SQL query that pulls column_name, data_type and nullable values from a certain table. I can accomplish this by running the following code:
select column_name, data_type, nullable
from all_tab_columns
where table_name = 'mytable'
order by column_id asc
This outputs results as such:
Column_Name | Data_Type | Nullable
-----------------------------------
Column 1 | VARCHAR2 | N
Column 2 | NUMBER | Y
Column 3 | DATE | N
In order to make this information useful to me, I need to transpose this data so that column_name becomes a single row (right now it's a single column) with the corresponding data below it. It should look something like this:
Column 1 | Column 2 | Column 3
------------------------------
VARCHAR2 | NUMBER | DATE
N | Y | N
Does anyone know the best way to go about doing this? In Teradata this was as easy as a CASE expression, but that doesn't seem to be the case (pun intended) here in Oracle. Any help is appreciated!
One method is to use listagg() to put everything into one string column:
select listagg(column_name || ',' || data_type || ',' || nullable, ';')
       within group (order by column_id asc)
from all_tab_columns
where table_name = 'mytable';
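If client-side post-processing is acceptable, the transpose itself is trivial once the three-column result is fetched. A small Python sketch, with the sample rows hard-coded in place of the ALL_TAB_COLUMNS fetch:

```python
# Transpose (column_name, data_type, nullable) rows into three aligned
# output lines: one for names, one for types, one for nullability.
rows = [
    ("Column 1", "VARCHAR2", "N"),
    ("Column 2", "NUMBER", "Y"),
    ("Column 3", "DATE", "N"),
]

# zip(*rows) flips rows and columns: each tuple becomes one output line.
names, types, nullables = zip(*rows)
width = max(len(v) for r in rows for v in r)
for line in (names, types, nullables):
    print(" | ".join(v.ljust(width) for v in line))
```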
I have been doing some research but didn't find much. I need to compare two tables to get a list of which columns are in table 1 but not in table 2. I am using Snowflake.
Now, I've found this answer: postgresql - get a list of columns difference between 2 tables
The problem is that when I run the code I get this error:
SQL compilation error: invalid identifier TRANSIENT_STAGE_TABLE
The code works fine if I run it separately, so if I run:
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'your_schema' AND table_name = 'table2'
I actually get a list of column names, but when I chain it to the second expression, the above error is returned.
Any hint on what's going on?
Thank you
The query from the original post should work; maybe you're missing single quotes somewhere? See this example:
create or replace table xxx1(i int, j int);
create or replace table xxx2(i int, k int);
-- Query from the original post
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'XXX1'
AND column_name NOT IN
(
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'XXX2'
);
-------------+
COLUMN_NAME |
-------------+
J |
-------------+
You can also write a slightly more complex query to see all columns not matching, from both tables:
with
s1 as (
select table_name, column_name
from information_schema.columns
where table_name = 'XXX1'),
s2 as (
select table_name, column_name
from information_schema.columns
where table_name = 'XXX2')
select * from s1 full outer join s2 on s1.column_name = s2.column_name;
------------+-------------+------------+-------------+
TABLE_NAME | COLUMN_NAME | TABLE_NAME | COLUMN_NAME |
------------+-------------+------------+-------------+
XXX1 | I | XXX2 | I |
XXX1 | J | [NULL] | [NULL] |
[NULL] | [NULL] | XXX2 | K |
------------+-------------+------------+-------------+
You can of course add WHERE s1.column_name IS NULL or s2.column_name IS NULL to find only the missing columns.
You can also easily extend it to detect column type differences.
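The same diff can also be computed client-side from any catalog. A small Python sketch against SQLite, where PRAGMA table_info stands in for information_schema.columns and set difference replaces the NOT IN / full outer join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xxx1 (i INT, j INT)")
con.execute("CREATE TABLE xxx2 (i INT, k INT)")

def columns(table):
    # PRAGMA table_info plays the role of information_schema.columns;
    # row[1] is the column name.
    return {row[1] for row in con.execute(f"PRAGMA table_info({table})")}

only_in_1 = columns("xxx1") - columns("xxx2")  # like the NOT IN query
only_in_2 = columns("xxx2") - columns("xxx1")  # the symmetric case
print(only_in_1, only_in_2)
```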
I'm not sure how to formulate this query. I think I need a subquery? Here's basically what I'm trying to do in a single query.
This query gives me the list of tables I need:
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'abc_dev_12345'
AND table_name like 'fact_%';
For each table in that list, I then want to do a count (each table has the same columns I need to query):
SELECT table_name,
count (domain_key) key_count,
domain_key,
form_created_datetime
FROM (List of tables above)
GROUP BY domain_key,
form_created_datetime;
Can I iterate through each table listed in the first query to do my count? Can this be done in a single query?
So expected out would be similar to this:
table_name | key_count | domain_key | form_created_datetime
-----------+-----------+------------+---------------------------
fact_1     |      1241 |          5 | 2015-09-22 01:47:36.136789
fact_2     |        32 |          9 | 2015-09-22 01:47:36.136789
Example data:
abc_dev_12345=> SELECT *
FROM information_schema.tables
where table_schema='abc_dev_own_12345'
and table_name='fact_1';
table_catalog | table_schema | table_name | table_type | self_referencing_column_name | reference_generation | user_defined_type_catalog | user_defined_type_schema | use
r_defined_type_name | is_insertable_into | is_typed | commit_action
---------------+-------------------+--------------------+------------+------------------------------+----------------------+---------------------------+--------------------------+----
--------------------+--------------------+----------+---------------
abc_dev_12345 | abc_dev_own_12345 | fact_1 | BASE TABLE | | | | |
| YES | NO |
(1 row)
abc_dev_12345=> SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'abc_dev_own_12345'
AND table_name = 'fact_1';
column_name
------------------------
email_date_key
email_time_key
customer_key
form_created_datetime
client_key
domain_key
As Eelke and Craig Ringer noted, you need a dynamic query in a PL/pgSQL function. The basic statement you want to apply to each table is:
SELECT <table_name>, count(domain_key) AS key_count, domain_key, form_created_datetime
FROM <table_name> GROUP BY 3, 4
and you want to UNION the lot together.
The most efficient way to do this is to first build the query as a text object from the information in information_schema.tables and then EXECUTE that query. There are many ways to build that query, but I particularly like the dirty trick below with string_agg():
CREATE FUNCTION table_domains()
RETURNS TABLE (table_name varchar, key_count bigint, domain_key integer, form_created_datetime timestamp)
AS $$
DECLARE
    qry text;
BEGIN
    -- format() builds the query for an individual table: %1$L quotes the
    -- table name as a literal for the output column, %1$I as an identifier
    -- for the FROM clause.
    -- string_agg() UNIONs the queries from all tables into one statement.
    SELECT string_agg(
             format('SELECT %1$L, count(domain_key), domain_key, form_created_datetime
                     FROM %1$I GROUP BY 3, 4', table_name),
             ' UNION ') INTO qry
    FROM information_schema.tables
    WHERE table_schema = 'abc_dev_12345'
      AND table_name LIKE 'fact_%';

    -- Now EXECUTE the query
    RETURN QUERY EXECUTE qry;
END;
$$ LANGUAGE plpgsql;
No need for loops or cursors, so it's pretty efficient.
Use it like you would any other table:
SELECT * FROM table_domains();
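For comparison, here is the same build-then-execute trick done client-side in Python against SQLite, with sqlite_master playing the role of information_schema.tables (demo tables and data made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fact_1 (domain_key INT, form_created_datetime TEXT);
    INSERT INTO fact_1 VALUES (5, '2015-09-22'), (5, '2015-09-22');
    CREATE TABLE fact_2 (domain_key INT, form_created_datetime TEXT);
    INSERT INTO fact_2 VALUES (9, '2015-09-22');
""")

# Build one SELECT per fact_% table from the catalog, then UNION them --
# the same string_agg()-and-EXECUTE idea, assembled client-side.
tables = [t for (t,) in con.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND name LIKE 'fact_%'")]
qry = " UNION ALL ".join(
    f"SELECT '{t}' AS table_name, count(domain_key) AS key_count, "
    f"domain_key, form_created_datetime FROM {t} GROUP BY 3, 4"
    for t in tables)
for row in con.execute(qry):
    print(row)
```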