How can we see the details of columns based on data type for a table in DB2?
For example, suppose I have a table with 100 columns but I only want to see the columns of data type TIMESTAMP. How can I achieve this?
If you're on Linux/Unix/Windows DB2, then you can use the SYSCAT.COLUMNS catalog view:
SELECT *
FROM SYSCAT.COLUMNS
WHERE TABSCHEMA= 'YOUR_SCHEMA'
AND TABNAME = 'YOUR_TABLE'
AND TYPENAME = 'TIMESTAMP'
Replace YOUR_SCHEMA and YOUR_TABLE accordingly. If you're on mainframe DB2, then you would use the similar SYSIBM.SYSCOLUMNS catalog view:
SELECT *
FROM SYSIBM.SYSCOLUMNS
WHERE TBCREATOR='YOUR_SCHEMA'
AND TBNAME ='YOUR_TABLE'
AND COLTYPE ='TIMESTMP'
Related
In Snowflake, I retrieve different views with the following SQL query:
SELECT * FROM "myDatabase"."mySchema"."VIEWS"
That returns a table with these columns notably:
TABLE_ID
TABLE_NAME
TABLE_SCHEMA_ID
TABLE_SCHEMA
TABLE_CATALOG_ID
TABLE_CATALOG
TABLE_OWNER
VIEW_DEFINITION
For each VIEW_DEFINITION column entry, I am trying to extract all the strings <Schema_Name>.<View_Name> (or at least the <Schema_Name>).
Is it possible to do that with a SQL query (or by any other way)?
Edit
The table I obtain using the initial query is as follows:
TABLE_ID: 0001
TABLE_NAME: MY_TABLE_NAME
TABLE_SCHEMA_ID: 99
TABLE_SCHEMA: MY_TABLE_SCHEMA
TABLE_CATALOG_ID: 20
TABLE_CATALOG: PMY_TABLE_CATALOG
TABLE_OWNER: MY_OWNER_VIEWS_ADMIN
VIEW_DEFINITION: …
where the VIEW_DEFINITION column contains queries like the one below:
"CREATE OR REPLACE VIEW My_Table_Schema_VIEWS.My_Table_Name AS
WITH STUDY_SITE_SCOPE AS (
SELECT
...
FROM (
SELECT
A.SUBJECT_NUMBER
, A.SUBJECT_STATUS
FROM <Schema_Name>.<View_Name_1> X
JOIN <Schema_Name>.<View_Name_2> Y
...
)
JOIN (
SELECT
...
FROM <Schema_Name>.<View_Name_3> X
JOIN <Schema_Name>.<View_Name_4> Y
...
)
..."
From this VIEW_DEFINITION I am trying to extract all the <Schema_Name>.<View_Name_XX> strings (or at least the <Schema_Name>).
I assume you want to get all the base schemas your current view is built on top of.
To answer your question briefly: yes, it is possible.
The following procedure may give you an idea of how to solve it via SQL or a stored procedure (see the sketch after this list):
Query the view definition
Search for all strings within the view definition that follow a "FROM" or "JOIN" clause
Extract them, and possibly check for a database name in front of the schema name
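A minimal sketch of that idea in Snowflake SQL, assuming REGEXP_SUBSTR_ALL and LATERAL FLATTEN are available in your account and that referenced names are unquoted identifiers made of letters, digits and underscores (adjust the pattern otherwise):
-- Sketch only: list every <schema>.<object> that follows FROM or JOIN in each
-- view definition; 'ie' = case-insensitive match, extract capture group 2.
SELECT
    v.TABLE_SCHEMA,
    v.TABLE_NAME,
    f.VALUE::STRING                        AS referenced_object,
    SPLIT_PART(f.VALUE::STRING, '.', 1)    AS referenced_schema
FROM "myDatabase"."mySchema"."VIEWS" v,
     LATERAL FLATTEN(
         INPUT => REGEXP_SUBSTR_ALL(
                      v.VIEW_DEFINITION,
                      '(FROM|JOIN)\\s+([A-Za-z0-9_]+\\.[A-Za-z0-9_]+)',
                      1, 1, 'ie', 2)
     ) f;
Database-qualified references (<db>.<schema>.<view>) would need an extended pattern, as noted in the last step above.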
You can use information_schema.tables:
select t.*
from information_schema.tables t
where t.table_type = 'VIEW'
I have one table, CONFIG_PRAM, which contains columns like colname, datatype and many more details of existing tables.
Example
CREATE TABLE CONFIG_PRAM
( colname varchar(40),
datatype varchar(40)
);
I have to compare the columns and datatypes present in the CONFIG_PRAM table with the existing table's columns.
Example: I have an existing table test1 in the database:
create table test1 ( employee_id NUMBER(6),
sal NUMBER(6,8));
If I find any mismatch, I need to update the CONFIG_PRAM table with the correct data type.
For the table above, CONFIG_PRAM has sal recorded as number,
but it is actually number(6,8) in the table, so I have to update CONFIG_PRAM with the exact datatype.
I have tried this:
select distinct colname, datatype
from CONFIG_PRAM, all_tab_columns
where upper(column_name) = upper(colname)
and data_type = datatype
and table_name in ('TEST1')
But suppose table A has NUMBER(6,8) and the CONFIG_PRAM table contains only NUMBER; then the query does not give correct results.
The issue is that it does not compare precision and scale exactly. Can you please provide a solution for this in SQL/PLSQL?
This query joins your table to ALL_TAB_COLUMNS on the basis of COLUMN_NAME. This means it only works properly when CONFIG_PRAM has entries for just the one table. Perhaps it needs a column for TABLE_NAME as well?
select cp.colname
, cp.datatype as config_datatype
, atc.data_type as actual_datatype
, atc.data_length as actual_length
, atc.data_precision as actual_precision
, atc.data_scale as actual_scale
from CONFIG_PRAM cp
join all_tab_columns atc
on atc.column_name = cp.colname
where atc.owner = user
and atc.table_name in ('TEST1')
and upper(cp.datatype) != case
when atc.data_type = 'VARCHAR2'
then atc.data_type||'('||atc.data_length||')'
when atc.data_type = 'NUMBER'
and instr(cp.datatype, ',') = 0
and atc.data_scale = 0
then atc.data_type||'('||atc.data_precision||')'
when atc.data_type = 'NUMBER'
then atc.data_type||'('||atc.data_precision||','||atc.data_scale||')'
else atc.data_type
end
;
The WHERE clause compares your datatype column with an assembled datatype string. Obviously there are more potential datatypes than this query handles. You will need to extend it as necessary. Also, variations in the formatting of the datatype string will produce false positives. So you should have a proper think about the structure of your CONFIG_PRAM table: the looser the rules you apply on insert or update the more work you have to do when it comes to selecting it for use.
Here is a SQL Fiddle demo.
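If you also want to write the corrected datatype back into CONFIG_PRAM, as the question asks, a MERGE built on the same assembled-string expression is one option. This is only a sketch: the table name is hard-coded as in the question, only a few datatypes are handled, and it assumes CONFIG_PRAM describes just that one table:
merge into CONFIG_PRAM cp
using (
    -- assemble the "actual" datatype string from the Oracle data dictionary
    select atc.column_name
         , case
               when atc.data_type = 'VARCHAR2'
               then atc.data_type||'('||atc.data_length||')'
               when atc.data_type = 'NUMBER' and atc.data_precision is not null
                and nvl(atc.data_scale, 0) = 0
               then atc.data_type||'('||atc.data_precision||')'
               when atc.data_type = 'NUMBER' and atc.data_precision is not null
               then atc.data_type||'('||atc.data_precision||','||atc.data_scale||')'
               else atc.data_type
           end as actual_datatype
    from all_tab_columns atc
    where atc.owner = user
    and atc.table_name = 'TEST1'
) src
on (upper(cp.colname) = src.column_name)
when matched then update
    set cp.datatype = src.actual_datatype
    where upper(cp.datatype) != src.actual_datatype;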
ALL_TAB_COLUMNS contains many more columns than just data_type. You will also need to compare at the very least data_length, data_precision and data_scale.
Your join is also missing table_name and owner, and it is better to use ANSI join syntax.
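A rough sketch of what that corrected join could look like (ANSI syntax, restricted by owner and table name, with the table name hard-coded here as in the question), returning the dictionary columns that would need comparing:
-- Sketch: show the configured datatype next to the actual definition columns.
select cp.colname
     , cp.datatype
     , atc.data_type
     , atc.data_length
     , atc.data_precision
     , atc.data_scale
from CONFIG_PRAM cp
join all_tab_columns atc
  on upper(atc.column_name) = upper(cp.colname)
where atc.owner = user
  and atc.table_name = 'TEST1';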
System is HP VERTICA 7.1
I am trying to create a SQL query which will dynamically find all particular tables in a specific schema that have a Timestamp column named DWH_CREATE_TIMESTAMP from system tables. (I have completed this part successfully)
Then, pass this list of tables to an outer query or some kind of looping statement which will select the MAX(DWH_CREATE_TIMESTAMP) and TABLE_NAME from all the tables in the list (200+) and union all the results together into one list.
The expected output is a two-column table listing every table that has that timestamp field, together with the maximum value in each. Tables are constantly being created and dropped, so the point is to make everything totally dynamic, with no TABLE_NAME values ever hard-coded.
Any ideas for Vertica-specific ways to accomplish this without UDFs would be greatly appreciated.
Inner Query (working):
select distinct(table_name)
from columns
where column_name = 'DWH_CREATE_TIMESTAMP'
and table_name in (select DISTINCT(table_name) from all_tables where schema_name = 'PTG_DWH')
Outer Query (attempted - not working):
SELECT Max(DWH_CREATE_DATE) from
WITH table_name AS (
select distinct(table_name)
from columns
where column_name = 'DWH_CREATE_DATE' and table_name in (select DISTINCT(table_name) from all_tables where schema_name = 'PTG_DWH'))
SELECT MAX(DWH_CREATE_DATE)
FROM table_name
Thanks!!!
There is no way to do that in a single SQL statement.
You can use the method below to get the maximum values of the timestamp columns:
select projections.anchor_table_name, vs_ros.colname, max(max_value)
from vs_ros, vs_ros_min_max_values, storage_containers, projections
where vs_ros.colname ilike 'timestamp'
and vs_ros.salstorageid = storage_containers.sal_storage_id
and vs_ros_min_max_values.rosid = vs_ros.rosid
and storage_containers.projection_name = projections.projection_name
group by projections.anchor_table_name, vs_ros.colname
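Alternatively, a common two-step workaround is to generate the UNION ALL statement text from the catalog and then run the generated SQL as a separate step. A rough sketch, reusing the schema name from the question (the trailing UNION ALL on the last generated row must be removed before executing):
-- Sketch: emit one "SELECT '<table>', MAX(DWH_CREATE_TIMESTAMP) FROM ..." line
-- per qualifying table; concatenate the rows and run the result separately.
SELECT 'SELECT ''' || table_name || ''' AS table_name, '
       || 'MAX(DWH_CREATE_TIMESTAMP) AS max_ts '
       || 'FROM PTG_DWH.' || table_name || ' UNION ALL' AS generated_sql
FROM columns
WHERE column_name = 'DWH_CREATE_TIMESTAMP'
  AND table_schema = 'PTG_DWH';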
Is there any other way to determine whether a table exists, other than the queries below?
select count(*) from <table> where rownum = 1
select * from user_tables where table_name = <table>
Kindly let me know the best way to check whether a table exists using Oracle SQL.
Thanks for the answer. My requirement is to check, starting from the first date of the current month (i.e. 01/12/2010), whether a table named in the format suresh_20101201 exists in the database; if not, it should check for suresh_20101202, and so on through suresh_20101231. Is it possible to do this with an Oracle SQL query?
You can do this (in Oracle; in MSSQL it is a bit different):
select count(*)
from all_objects
where object_type in ('TABLE','VIEW')
and object_name = 'your_table_name';
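For the follow-up requirement of walking through the dated table names for the current month, a small PL/SQL block along these lines might work; the SURESH_ prefix, the YYYYMMDD format and the uppercase object name are assumptions taken from the comment above:
-- Sketch: loop over the days of the current month and report which
-- SURESH_YYYYMMDD tables exist; adapt it (e.g. EXIT on the first hit) as needed.
DECLARE
    v_count  PLS_INTEGER;
    v_table  VARCHAR2(30);
BEGIN
    FOR i IN 0 .. LAST_DAY(TRUNC(SYSDATE, 'MM')) - TRUNC(SYSDATE, 'MM') LOOP
        v_table := 'SURESH_' || TO_CHAR(TRUNC(SYSDATE, 'MM') + i, 'YYYYMMDD');
        SELECT COUNT(*) INTO v_count
        FROM all_objects
        WHERE object_type IN ('TABLE', 'VIEW')
          AND object_name = v_table;
        IF v_count > 0 THEN
            DBMS_OUTPUT.PUT_LINE(v_table || ' exists');
        END IF;
    END LOOP;
END;
/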
In most SQL servers there is a system catalog you can query for a table's existence. It's highly implementation-specific, though. For example, in recent versions of MySQL:
SELECT table_name FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'db_name'
AND table_name LIKE 'whatever'
You need to ask your server's system catalog. I'm not sure which database you meant, but for SQL Server it would be:
select * from sys.tables where name = 'your_table_name'
Used this in Oracle SQL Developer:
SELECT COUNT(*) FROM DUAL WHERE EXISTS (
SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE = 'TABLE' AND OWNER = 'myschema' AND OBJECT_NAME = 'your_table_name')
This will return 1 if your table exists in the ALL_OBJECTS records and 0 if it does not.
The query below can be run against Oracle to check whether a given table is present in the database:
SELECT count(*) count FROM dba_tables where table_name = 'TABLE_NAME'
The query above returns a count of 1 if the table 'TABLE_NAME' is present in the database.
Look in the schema; you might even be able to use sys.objects and check for the object type at the same time.
Something like the following:
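(A sketch in SQL Server syntax; the table name is a placeholder and 'U' is the user-table type.)
-- Sketch: count matching user tables ('U') in sys.objects.
SELECT COUNT(*)
FROM sys.objects
WHERE name = 'your_table_name'
  AND type = 'U';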
How can I find all the indexes available on a table in DB2?
db2 "select * from syscat.indexes where tabname = 'your table name here' \
and tabschema = 'your schema name here'"
You can also execute:
DESCRIBE INDEXES FOR TABLE SCHEMA.TABLE SHOW DETAIL
You can get the details of the indexes with the command below.
describe indexes for table schemaname.tablename show detail
To see all indexes:
select * from user_objects
where object_type = 'INDEX'
To see an index and its columns on a table:
select * from USER_IND_COLUMNS where TABLE_NAME = 'my_table'
This depends upon which version of DB2 you are using.
We have v7r1m0 and the following query works quite well.
WITH IndexCTE (Schema, Table, Unique, Name, Type, Columns) AS
(SELECT i.table_schema, i.Table_Name, i.Is_Unique,
s.Index_Name, s.Index_Type, s.column_names
FROM qsys2.SysIndexes i
INNER JOIN qsys2.SysTableIndexStat s
ON i.table_schema = s.table_schema
and i.table_name = s.table_name
and i.index_name = s.index_name)
SELECT *
FROM IndexCTE
WHERE schema = 'LIBDEK'
AND table = 'ECOMROUT'
If you're not familiar with CTEs, they are worth getting to know. Our AS400 naming conventions are awful, so I've been using CTEs to normalize field names. I ended up making a library of CTEs and having it automatically appended to the top of all my queries.
For checking the indexes of a table on IBM Db2 on Cloud (previously dashDB), the following query should do it:
SELECT * FROM SYSCAT.INDEXES WHERE TABNAME = 'my_tablename' AND TABSCHEMA = 'my_table_schema'
You can also check by index name:
SELECT COUNT(*) FROM SYSCAT.INDEXES WHERE TABNAME = 'my_tablename' AND TABSCHEMA = 'my_table_schema' AND INDNAME='index_name'
The same result can be achieved by using SYSIBM.SYSINDEXES. However, this table is not referenced directly on the product documentation page.
SELECT COUNT(*) FROM SYSIBM.SYSINDEXES WHERE TBNAME = 'my_tablename' AND TBCREATOR = 'my_table_schema' AND NAME='my_index_name'
See SYSCAT.INDEXES catalog view.
One more way is to generate the DDL of the table.
It will give you the complete description of the table, including the indexes on it.
Just right-click the table and click Generate DDL/Scripts.
This works in most database tools.
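On Db2 for Linux/Unix/Windows, the db2look utility can produce the same DDL from the command line; a sketch with placeholder database, schema and table names:
db2look -d YOUR_DB -z YOUR_SCHEMA -t YOUR_TABLE -e
The -e option extracts the DDL statements for the table, which include its indexes.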