How To Query INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT in BigQuery

I am the project owner in my organization and I have the BigQuery Admin role at the organization level. How do I query INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT?
I am using the BigQuery console and following along with this documentation, just trying to view more BigQuery metadata:
SELECT * FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT;
Error:
Not found: Table [My Project ID]:region-us.INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT was not found in location US
I get the same error if I include [My Project ID] in the SELECT statement.
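For reference, the project-qualified form (with a placeholder project ID) looks like this:
SELECT * FROM `my-project-id.region-us`.INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT;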
This query does work:
SELECT * FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA

This is because the TABLE_STORAGE_TIMELINE_BY_* views are still in Preview and not Generally Available yet.
https://cloud.google.com/bigquery/docs/information-schema-tables#table_storage_timeline_by_views
I just had the same issue with TABLE_STORAGE.

You need to specify the schema and the table, and they need to be in the region you query (for example `region-us`).
See this example:
SELECT
  timestamp AS start_time,
  table_name,
  total_logical_bytes
FROM
  `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT
WHERE
  table_schema = "TABLE_SCHEMA"
  AND table_name = "TABLE_NAME"
ORDER BY
  start_time DESC;
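For the asker's case the region qualifier would be `region-us`; a concrete version (the dataset and table names below are placeholders) would look like:
SELECT
  timestamp AS start_time,
  table_name,
  total_logical_bytes
FROM
  `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_PROJECT
WHERE
  table_schema = "my_dataset"
  AND table_name = "my_table"
ORDER BY
  start_time DESC;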

Related

Run SQL for other tables if one does not exist (ignore missing tables, do not fail)

I have many tables in BigQuery. I need to create a table that has the names of the people in each table who are older than 20.
I created something like this, but it fails if one of the tables does not exist.
(I am running it for different projects and their tables are slightly different; for example, one of the projects does not have tableA.)
WITH
a AS (
  SELECT name
  FROM `tableA`
  WHERE age > 20
),
b AS (
  SELECT name
  FROM `tableB`
  WHERE age > 20
)
SELECT name FROM a
UNION ALL
SELECT name FROM b
How can I prevent the failure, i.e. if the table exists then find the people older than 20, otherwise ignore it and run for the other tables?
(This is an Airflow task which fails.)
As I understand it, you have one Composer environment and want it to use the BigQueryOperator() to query data in 3 different projects.
I am assuming you have already created your Composer environment in your project. Then you can follow the steps below:
1) Create 3 different connections between your Composer environment and each project you will query against, such as described here.
2) Create a specific query for each project, where you filter age > 20 and append all the existing tables together, so you address the right tables for each project (an example query is sketched after the operator snippet below).
3) Create one DAG file with 3 BigQueryOperators, each referencing a particular connection and using the appropriate query created in step 2. DAG creation is described here, and an operator would look as follows:
task_custom = bigquery_operator.BigQueryOperator(
    task_id='task_custom_connection_school1',
    # Placeholder query from step 2; the project, dataset and table names are examples.
    bql='SELECT name FROM `your_project.your_dataset.your_table` WHERE age > 20',
    use_legacy_sql=False,
    # Set a connection ID to use a connection that you have created in step 1.
    bigquery_conn_id='my_gcp_connection')
As an alternative, you can create multiple DAG files, one for each connection. Also notice that the connection name is specified with bigquery_conn_id.
Following the above steps, each of your queries would be tailored to a specific project, so they would execute properly.
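To illustrate step 2, each per-project query references only the tables that actually exist in that project. A sketch using the table names from the question (the project and dataset names are placeholders):
-- For a project that has both tableA and tableB:
SELECT name FROM `project_1.my_dataset.tableA` WHERE age > 20
UNION ALL
SELECT name FROM `project_1.my_dataset.tableB` WHERE age > 20;
-- For a project that has no tableA, the query simply omits it:
SELECT name FROM `project_2.my_dataset.tableB` WHERE age > 20;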
A BigQuery-only solution would be to use wildcard tables.
There are several options here:
If all the tables in the dataset are being queried:
SELECT name
FROM `project.dataset.*`
WHERE age > 20
If all the relevant table names in the dataset are known:
SELECT name
FROM `project.dataset.*`
WHERE age > 20
AND _TABLE_SUFFIX IN ('tableA', 'tableB', ..., 'tableN')
If all the relevant tables in the dataset conform to a specific pattern:
SELECT name
FROM `project.dataset.*`
WHERE age > 20
AND _TABLE_SUFFIX LIKE 'table%'
The LIKE operator (combined with logical operators) on the _TABLE_SUFFIX field gives a lot of freedom to match many table-name patterns without having to list all the table names explicitly, as the IN operator requires. If no table matches the specified _TABLE_SUFFIX (i.e. it is not listed in the array of the IN operator), the query returns 0 results instead of failing.
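For example, several suffix patterns can be combined (the patterns below are just illustrative):
SELECT name
FROM `project.dataset.*`
WHERE age > 20
AND (_TABLE_SUFFIX LIKE 'tableA%' OR _TABLE_SUFFIX LIKE 'tableB%')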
More details about querying wildcard tables in the BigQuery documentation.
Note that non-matching schemas could cause some issues, so you might want to include a verification that the matched tables have the right schema with a query to the INFORMATION_SCHEMA.COLUMNS table:
WITH correct_schema_tables AS (
  SELECT table_name
  FROM (
    SELECT * FROM `project.dataset`.INFORMATION_SCHEMA.COLUMNS
    WHERE column_name = 'name'
      AND data_type = 'STRING')
  JOIN (
    SELECT * FROM `project.dataset`.INFORMATION_SCHEMA.COLUMNS
    WHERE column_name = 'age'
      AND data_type = 'INT64')
  USING (table_name)
)
SELECT name
FROM `project.dataset.*`
WHERE age > 20
  AND _TABLE_SUFFIX IN (SELECT table_name FROM correct_schema_tables)
  AND _TABLE_SUFFIX LIKE 'table%'

DSN8a10.emp is an undefined name

I just created a table named TELE by running the following query:
CREATE TABLE TELE
(NAME2 VARCHAR(15) NOT NULL,
NAME1 VARCHAR(12) NOT NULL,
PHONE CHAR(4));
Now, I am trying to populate it with data from the DSN8A10.EMP table, by running the following query:
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM DSN8A10.EMP
WHERE WORKDEPT = 'D21';
But I get the following error:
[42704][-204] "DSN8A10.EMP" is an undefined name.. SQLCODE=-204,
SQLSTATE=42704, DRIVER=4.23.42.
I am using IntelliJ IDEA with com.ibm.db2.jcc.DB2Driver Data Server Driver.
Can you help me with a solution, please?
Thanks in advance!
Some possibilities:
the table does not exist because you have a typo in the schema name or the table name
the table does exist but in a different database
the table exists in the database, but the name or schema has MiXed case, in which case you must use double quotes around the schema name and table name. So "DSN8a10"."emp" is DIFFERENT from DSN8a10.EMP (see the sketch below).
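For example, if the table was actually created with mixed-case identifiers, the INSERT from the question would have to quote them exactly as created (a sketch; whether the column names also need quoting depends on how they were defined):
INSERT INTO TELE
SELECT LASTNAME, FIRSTNME, PHONENO
FROM "DSN8a10"."emp"
WHERE WORKDEPT = 'D21';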
If the Db2-server runs on Linux/Unix/Windows, this query may help to show a mixed case name. It's possible the table is a view or alias or nickname.
select tabschema, tabname from syscat.tables where upper(tabschema)='DSN8A10' and upper(tabname) = 'EMP'
If the Db2-server runs on i-Series: use QSYS2.SYSTABLES instead.
select table_schema, table_name from qsys2.systables where upper(table_schema)='DSN8A10' and upper(table_name)='EMP'
If the Db2-server runs on Z/OS: use SYSIBM.SYSTABLES instead:
select creator, name from sysibm.systables where upper(creator)='DSN8A10' and upper(name) = 'EMP'

Select latest table in a BigQuery dataset - Standard SQL syntax

I have a dataset containing multiple tables with similar names, e.g.:
affilinet_4221_first_20180911_204956
affilinet_4221_first_20180911_160004
affilinet_4221_first_20180911_085559
affilinet_4221_first_20180910_201323
affilinet_4221_first_20180910_201042
affilinet_4221_first_20180910_080006
affilinet_4221_first_20180909_160707
This query identifies the latest table (according to the yyyymmdd_hhmmss naming convention) with the __TABLES_SUMMARY__ method:
SELECT max(table_id) as table_id FROM `modemutti-8d8a6.feed_first.__TABLES_SUMMARY__`
where table_id LIKE "affilinet_4221_first_%"
This query extracts all values from a specific table with the _TABLE_SUFFIX method:
SELECT * FROM `modemutti-8d8a6.feed_first.*`
WHERE _TABLE_SUFFIX = "affilinet_4221_first_20180911_204956"
This query combines __TABLES_SUMMARY__ (which returns affilinet_4221_first_20180911_204956) and _TABLE_SUFFIX
SELECT * FROM `modemutti-8d8a6.feed_first.*`
WHERE _TABLE_SUFFIX = (
SELECT max(table_id) FROM `modemutti-8d8a6.feed_first.__TABLES_SUMMARY__`
where table_id LIKE "affilinet_4221_first_%")
This query fails:
Error: Cannot read field 'modemio_cat_level' of type INT64 as STRING
Any idea why this is happening or how I could solve the issue?
------------EDIT------------
@Mikhail's solution works correctly but processes a huge amount of data compared to an explicit call on the single table. Another solution would have been:
SELECT * FROM `modemutti-8d8a6.feed_first.affilinet_4221_first_*` WHERE _TABLE_SUFFIX =
(
SELECT MAX(_TABLE_SUFFIX) FROM `modemutti-8d8a6.feed_first.affilinet_4221_first_*`
)
but this also processes a much bigger amount of data compared to the explicit query. Is there a way to achieve this through a view in the UI, or should I rather use the Python / Java SDK via the API?
Try the query below. Narrowing the wildcard to affilinet_4221_first_* keeps tables with other schemas out of the scan, which is what caused the INT64/STRING error:
#standardSQL
SELECT * FROM `modemutti-8d8a6.feed_first.affilinet_4221_first_*`
WHERE _TABLE_SUFFIX = (
SELECT REPLACE(MAX(table_id), 'affilinet_4221_first_', '')
FROM `modemutti-8d8a6.feed_first.__TABLES_SUMMARY__`
WHERE table_id LIKE "affilinet_4221_first_%"
)
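Regarding the bytes-processed concern in the edit, a possible sketch using BigQuery scripting (assuming scripting / EXECUTE IMMEDIATE is available in your project): resolve the latest table from the metadata first, then scan only that single table.
DECLARE latest_table STRING;
-- Find the newest table by its yyyymmdd_hhmmss suffix (metadata only, no table scan).
SET latest_table = (
  SELECT MAX(table_id)
  FROM `modemutti-8d8a6.feed_first.__TABLES_SUMMARY__`
  WHERE table_id LIKE 'affilinet_4221_first_%');
-- Query only that table, so only its bytes are processed.
EXECUTE IMMEDIATE FORMAT(
  'SELECT * FROM `modemutti-8d8a6.feed_first.%s`', latest_table);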

How to get the list of external tables in an Oracle 11g schema

I want to get the list of only the external tables in Oracle.
I tried to get the list using Select * from tab, but it returns a list of all the tables, both regular and external. I only want the list of external tables.
Use
select *
from all_external_tables;
to see all external tables your user has access to. To see them for a specific schema/user:
select *
from all_external_tables
where owner = 'ARTHUR';
If you only want to see the ones owned by your current user, use
select *
from user_external_tables;
To see all tables that are not external tables, use this:
select ut.table_name
from user_tables ut
where not exists (select 42
from user_external_tables uet
where uet.table_Name = ut.table_name);
More details in the manual:
http://docs.oracle.com/cd/E11882_01/server.112/e25513/statviews_1092.htm#REFRN20074
http://docs.oracle.com/cd/E11882_01/server.112/e25513/statviews_5490.htm#REFRN26286
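If you have DBA privileges and want every external table in the database regardless of grants, there is also a DBA-level view (a sketch; the selected columns are just a commonly useful subset):
select owner, table_name, default_directory_name
from dba_external_tables
order by owner, table_name;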

How can I find all indexes available on a table in DB2

How do I find all indexes available on a table in DB2?
db2 "select * from syscat.indexes where tabname = 'your table name here' \
and tabschema = 'your schema name here'"
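Note that unquoted identifiers are folded to uppercase in the Db2 catalog, so the names are usually supplied in uppercase. A concrete sketch (the schema and table names below are placeholders):
SELECT INDSCHEMA, INDNAME, UNIQUERULE, COLNAMES
FROM SYSCAT.INDEXES
WHERE TABSCHEMA = 'MYSCHEMA'
  AND TABNAME = 'EMPLOYEE';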
You can also execute:
DESCRIBE INDEXES FOR TABLE SCHEMA.TABLE SHOW DETAIL
You can get the details of indexes with the below command.
describe indexes for table schemaname.tablename show detail
To see all indexes:
select * from user_objects
where object_type = 'INDEX'
To see an index and its columns on a table:
select * from USER_IND_COLUMNS where TABLE_NAME = 'my_table'
This depends upon which version of DB2 you are using.
We have v7r1m0 and the following query works quite well.
WITH IndexCTE (Schema, Table, Unique, Name, Type, Columns) AS
  (SELECT i.table_schema, i.Table_Name, i.Is_Unique,
          s.Index_Name, s.Index_Type, s.column_names
   FROM qsys2.SysIndexes i
   INNER JOIN qsys2.SysTableIndexStat s
     ON  i.table_schema = s.table_schema
     AND i.table_name = s.table_name
     AND i.index_name = s.index_name)
SELECT *
FROM IndexCTE
WHERE Schema = 'LIBDEK'
  AND Table = 'ECOMROUT'
If you're not familiar with CTEs, they are worth getting to know. Our AS400 naming conventions are awful, so I've been using CTEs to normalize field names. I ended up making a library of CTEs and having it automatically appended to the top of all my queries.
For checking the indexes of a table on IBM Db2 on Cloud (previously DashDb) the following query should do it:
SELECT * FROM SYSCAT.INDEXES WHERE TABNAME = 'my_tablename' AND TABSCHEMA = 'my_table_schema'
You can also check by index name:
SELECT COUNT(*) FROM SYSCAT.INDEXES WHERE TABNAME = 'my_tablename' AND TABSCHEMA = 'my_table_schema' AND INDNAME='index_name'
The same result can be achieved by using SYSIBM.SYSINDEXES. However, this table is not referenced directly on the product documentation page.
SELECT COUNT(*) FROM SYSIBM.SYSINDEXES WHERE TBNAME = 'my_tablename' AND TBCREATOR = 'my_table_schema' AND NAME='my_index_name'
See SYSCAT.INDEXES catalog view.
One more way is to generate the DDL of the table.
It will give you the complete description of the table, including the indexes on it.
Just right-click on the table and click on Generate DDL/Scripts.
This works on most databases.