Is there any tool to help me query tables by constraint rules? - sql

I have a big database with more than 1,000 tables and many table constraints. Remembering every constraint rule is too hard for me. Every day I need to write many simple SQL queries against several tables, and writing so many of them by hand wastes my time. Is there any tool that can save me time by helping me query tables based on their constraint rules?

If the database is Oracle, have you tried the data dictionary? The USER_CONSTRAINTS / ALL_CONSTRAINTS / DBA_CONSTRAINTS views cover this.
To query the constraints on a particular table owned by the current user:
SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, R_CONSTRAINT_NAME, STATUS
FROM USER_CONSTRAINTS
WHERE TABLE_NAME = 'CRUISES';
Alternatively, ALL_CONSTRAINTS lists constraints on every table the current user has the right to query, and DBA_CONSTRAINTS covers constraints at the database level.
All three views share the same column structure, so I won't repeat the example.
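If you also want to see which columns each constraint touches, the dictionary has that too. A sketch, joining USER_CONSTRAINTS to USER_CONS_COLUMNS ('CRUISES' is a placeholder table name):

```sql
-- List each constraint on a table together with the columns it involves.
-- 'CRUISES' is a placeholder; substitute your table name.
SELECT c.constraint_name,
       c.constraint_type,
       cc.column_name,
       cc.position
FROM   user_constraints  c
JOIN   user_cons_columns cc
       ON cc.constraint_name = c.constraint_name
WHERE  c.table_name = 'CRUISES'
ORDER  BY c.constraint_name, cc.position;
```

POSITION gives the column order within multi-column (composite) constraints.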

If you want a GUI tool, you could do worse than any of the following products:
SQL Developer (Oracle)
TOAD (Quest)
PL/SQL Developer (Allround Automations)

Related

How to check if 2 tables contain the same columns in SQL?

I have to write a scheduled ETL job where I have to load the difference of the data between the source tables and the target tables. However it is possible that the difference is not just in the number of records, but in the data structure itself. Columns can be added/deleted/renamed.
If the difference were only in the number of records, it would be easy: a simple EXCEPT would do the job.
Now in my head the order would be:
Check if the column names are the same in the 2 tables (Main question: How to do this?)
If so, load the differences
If not then it implies another question: What is the best practice? Drop the table and recreate it based on the new source table, or start some altering on the target table?
Every suggestion would be greatly appreciated.
DB2 supports the standard information_schema.columns table -- as well as bespoke naming conventions. You can look at two tables using:
select c.column_name,
       (case when min(c.table_name) = max(c.table_name)
             then min(c.table_name) || ' only'
             else 'both'
        end) as found_in
from information_schema.columns c
where c.table_name in ('source', 'target') and
      c.table_schema = ?
group by c.column_name;
As for what to do when the data structure changes: I think humans need to be involved in that process to figure out the best approach. Typical options are:
Ignore new columns.
Set old columns that are missing to NULL.
However, there may be other approaches depending on the problem.
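For the "are the column names the same?" check itself, EXCEPT works on the metadata just as it does on the data. A sketch, assuming the standard information_schema and placeholder table names 'source' and 'target':

```sql
-- Columns present in 'source' but missing from 'target';
-- swap the two operands to check the reverse direction.
SELECT column_name
FROM   information_schema.columns
WHERE  table_name = 'source'
EXCEPT
SELECT column_name
FROM   information_schema.columns
WHERE  table_name = 'target';
```

An empty result in both directions means the column names match; you may also want to compare data_type to catch type changes, not just renames.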

Oracle: Script to verify that objects belong to their tablespaces

I have 2 tablespaces, one to store tables and other on to store indexes. I created a script that can be run in any of my schemas and it will move objects (tables or indexes) to their respective tablespaces.
However, I am failing to come up with a script that will verify that objects have been moved to the correct tablespaces (meaning tables have been moved to the table tablespace, and indexes have been moved to the index tablespace).
Any thoughts?
You can use the query below to get the information for a specific schema:
select t.table_name, t.tablespace_name as "TS Name For Table",
i.index_name, i.tablespace_name as "TS Name For Indexes"
from user_tables t
join user_indexes i on i.table_name = t.table_name
order by t.table_name, i.index_name;
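To actually verify placement rather than just list it, you can filter for objects that are not where they should be. A sketch, with 'TBS_DATA' and 'TBS_INDEX' as placeholder names for your table and index tablespaces:

```sql
-- Flag any object sitting in the wrong tablespace; an empty
-- result means the move script did its job for this schema.
SELECT 'TABLE' AS object_type, table_name AS object_name, tablespace_name
FROM   user_tables
WHERE  tablespace_name <> 'TBS_DATA'
UNION ALL
SELECT 'INDEX', index_name, tablespace_name
FROM   user_indexes
WHERE  tablespace_name <> 'TBS_INDEX';
```

Note that TABLESPACE_NAME can be NULL for partitioned tables and some special object types, so those may need a separate check against the partition-level views.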

What's the best way to select fields from multiple tables with a common prefix?

I have sensor data from a client which is in ongoing acquisition. Every week we get a table of new data (about one million rows each) and each table has the same prefix. I'd like to run a query and select some columns across all of these tables.
What would be the best way to go about this?
I have seen some solutions that use dynamic SQL, and I was considering writing a stored procedure that would build a dynamic SQL statement and execute it for me. But I'm not sure this is the best way.
I see you are using PostgreSQL. This is an ideal case for partitioning with constraint exclusion based on dates. You create one master table without data, and the weekly tables inherit from it. In your case, you don't even have to worry about the nuisance of INSERT triggers; it sounds like there is never any insertion other than the weekly bulk creation of a new table. See the PostgreSQL partitioning documentation for full details.
Queries can be run against the parent table, and Postgres takes care of looking in all the child tables, plus it is smart enough to skip child tables ruled out by WHERE criteria.
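A minimal sketch of the inheritance setup, with placeholder table and column names:

```sql
-- Master table: holds no rows itself, just defines the shape.
CREATE TABLE sensor_data (
    recorded_at timestamptz NOT NULL,
    value       double precision
);

-- Each weekly table inherits from the master and carries a CHECK
-- constraint, so the planner can skip it entirely when the WHERE
-- clause rules that week out (constraint exclusion).
CREATE TABLE sensor_data_2024w01 (
    CHECK (recorded_at >= DATE '2024-01-01'
       AND recorded_at <  DATE '2024-01-08')
) INHERITS (sensor_data);

-- Querying the parent scans only the qualifying children:
-- SELECT avg(value)
-- FROM   sensor_data
-- WHERE  recorded_at >= DATE '2024-01-01'
--   AND  recorded_at <  DATE '2024-01-08';
```

The weekly bulk load then just creates the new child table and fills it; the parent query never needs to change.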
You could query the meta data for tables with the same prefix.
select table_name from information_schema.tables where table_name like 'week%'
Then you could use union all to combine queries like
select * from week001
union all
select * from week002
[...]
However, I suggest appending new records to one single table and putting an index on the timestamp column. That would especially speed up queries spanning multiple weeks, and it simplifies your queries a lot if you only have to deal with one table. If the table gets too large, you could partition by date, so there should be no need to partition manually by keeping multiple tables.
You are correct, sometimes you have to write dynamic SQL to handle cases such as this.
If all of your tables are loaded you can query for table names within your stored procedure. Something like this:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
Play with that to get the specific table names you need.
How are the table names differentiated? By date? Some incrementing ID?
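If you do go the dynamic-SQL route on PostgreSQL, the stored procedure can glue the per-week SELECTs together from the catalog. A sketch, with a placeholder 'week%' prefix and placeholder column names:

```sql
-- Build and run a UNION ALL across every table whose name starts
-- with 'week'. Returns SETOF record, so the caller supplies the
-- column definition list.
CREATE OR REPLACE FUNCTION all_weeks()
RETURNS SETOF record AS $$
DECLARE
    sql text;
BEGIN
    SELECT string_agg(format('SELECT * FROM %I', table_name),
                      ' UNION ALL ')
    INTO   sql
    FROM   information_schema.tables
    WHERE  table_name LIKE 'week%'
      AND  table_type = 'BASE TABLE';

    RETURN QUERY EXECUTE sql;
END;
$$ LANGUAGE plpgsql;

-- Usage (placeholder columns):
-- SELECT * FROM all_weeks() AS t(recorded_at timestamptz, value float8);
```

Using format('%I', ...) quotes the identifiers, which guards against odd table names leaking into the generated SQL.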

No indexes found in any table - is that possible?

Executing the following command, which should list all indexes in my schema, returned nothing - suggesting that either no indexes have been created, or I do not have sufficient permission.
select * from user_indexes;
Are there any other ways to list the indexes I have in a schema?
Sure it's possible.
Common, even :)
It just means nobody's created any indexes.
If the query returned nothing, it means that you DO have permission ... and there simply aren't any indexes.
Here's a good link on "Managing indexes in Oracle" (it sounds like you're probably running Oracle):
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/indexes.htm
As paulsm4 says, you do not have any indexes in your schema.
You can use
select * from all_indexes;
and you'll see all your indexes, plus the others you have rights on.
Florin is correct: USER_INDEXES is a view over ALL_INDEXES that only shows the indexes created by you. You can query ALL_INDEXES directly to determine whether anyone else has created indexes on the table in question, but you will probably want to add a WHERE clause on TABLE_NAME, since otherwise it lists every index on every table visible to you; you may also want to select only some of the columns.
SELECT TABLE_NAME, INDEX_NAME FROM ALL_INDEXES WHERE TABLE_NAME='XYZ';
You can limit which tables are included using an IN clause if there are several tables you are interested in:
SELECT TABLE_NAME, INDEX_NAME FROM ALL_INDEXES WHERE TABLE_NAME IN ('ABC','XYZ');
And you can use a LIKE clause if the names share a prefix or suffix:
SELECT TABLE_NAME, INDEX_NAME FROM ALL_INDEXES WHERE TABLE_NAME like 'XYZ%';
Also, if you want to see which columns these indexes are on, you can select from ALL_IND_COLUMNS:
SELECT * FROM ALL_IND_COLUMNS WHERE TABLE_NAME = 'XYZ';
Note that whether a table has indexes or not depends on the data and the usage. A small lookup table that has maybe a hundred rows or less would not need an index whereas a table that contains millions of rows but is queried for just a handful when needed would need an index.

Oracle - drop table constraints without dropping tables

I'm doing some bulk migration of a large Oracle database. The first step of this involves renaming a whole load of tables as a preparation for dropping them later (but I need to keep the data in them around for now). Any foreign key constraints on them need to be dropped - they shouldn't be connected to the rest of the database at all. If I were dropping them now I could CASCADE CONSTRAINTS, but rename simply alters the constraints.
Is there a way I can drop all of the constraints that CASCADE CONSTRAINTS would drop without dropping the table itself?
You can do it with dynamic SQL and the data dictionary:
begin
  for r in ( select table_name, constraint_name
             from   user_constraints
             where  constraint_type = 'R' )
  loop
    execute immediate 'alter table ' || r.table_name
                   || ' drop constraint ' || r.constraint_name;
  end loop;
end;
/
If the tables are owned by more than one user you'll need to drive from DBA_CONSTRAINTS and include OWNER in the projection and the executed statement. If you want to touch less than all the tables I'm afraid you'll need to specify the list in the WHERE clause, unless there's some pattern to their names.
You can disable/re-enable constraints without dropping them. Take a look at this article.
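Disabling keeps the constraint definition in the dictionary while suspending enforcement, so it can be turned back on later. A sketch with placeholder table and constraint names:

```sql
-- Suspend enforcement of a foreign key without losing its definition.
ALTER TABLE orders DISABLE CONSTRAINT fk_orders_customer;

-- Re-enable later; Oracle re-validates existing rows unless you
-- add the NOVALIDATE keyword.
ALTER TABLE orders ENABLE CONSTRAINT fk_orders_customer;
```

For the migration described above, the same cursor loop shown in the previous answer could issue DISABLE instead of DROP, which makes the operation reversible.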