I have my BigQuery table Tab1 in GCP Project A. I have created a new GCP Project B. I have written a query that retrieves data stored in Tab1, and I want to store it as a view in Project B.
I am getting an error like this:
Not found: Dataset Project A:Tab1 not found
Both projects are under the same organization. How do I create views in new projects based on data stored in another project?
If you are going to query a table that is not located in the project that you are using, you also have to specify the project name in the FROM clause.
For instance,
SELECT * FROM `project_A.dataset.tab1`
Based on the error message, you are not doing that properly (`project_ID.dataset.table`).
If the rights are set up correctly, you can do as Alvaro said; otherwise, if it doesn't work, you can grant some rights for your view:
One possibility is to create an authorized view. In the source dataset's permission settings you can then add your view:
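For instance, a minimal sketch of the view itself, with hypothetical dataset and view names (the view lives in Project B and is then authorized on the source dataset in Project A):
-- Hypothetical names: the view is created in Project B but selects from the
-- table in Project A, which must be fully qualified.
CREATE VIEW `project_B.dataset_B.tab1_view` AS
SELECT *
FROM `project_A.dataset_A.Tab1`;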
I have created a sink using Logs Explorer that pushes data to BigQuery. I can get information about tables by using the following query:
SELECT
  SPLIT(REGEXP_EXTRACT(protopayload_auditlog.resourceName, '^projects/[^/]+/datasets/[^/]+/tables/(.*)$'), '$')[OFFSET(0)] AS table_name
FROM `project.dataset.cloudaudit_googleapis_com_data_access`
WHERE
  JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableDataRead") IS NOT NULL
  OR JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableDataChange") IS NOT NULL
However, I am unable to find information about Views. I have tried
Audit logs https://cloud.google.com/bigquery/docs/reference/auditlogs
and BigQuery asset information https://cloud.google.com/asset-inventory/docs/resource-name-format
However, I am unable to find how to get information about a "View". What do I need to include? Is it something in my sink, or is there an alternative resource name I should use?
It seems like audit logs treat tables and views the same way.
I made this query to track view/table changes: InsertJob will tell you about view creations, and UpdateTable/PatchTable will tell you about updates.
SELECT
resource.labels.dataset_id,
resource.labels.project_id,
--protopayload_auditlog.methodName,
REGEXP_EXTRACT(protopayload_auditlog.methodName,r'.*\.([^/$]*)') as method,
--protopayload_auditlog.resourceName,
REGEXP_EXTRACT(protopayload_auditlog.resourceName,r'.*tables\/([^/$]*)') as tableName,
protopayload_auditlog.authenticationInfo.principalEmail,
protopayload_auditlog.metadataJson,
CASE
  WHEN protopayload_auditlog.methodName = 'google.cloud.bigquery.v2.JobService.InsertJob'
    THEN JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableCreation.table.view.query")
  ELSE JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableChange.table.view.query")
END AS query,
receiveTimestamp
FROM `<project-id>.<bq_auditlog>.cloudaudit_googleapis_com_activity_*`
WHERE DATE(timestamp) >= "2022-07-10"
and protopayload_auditlog.methodName in
('google.cloud.bigquery.v2.TableService.PatchTable',
'google.cloud.bigquery.v2.TableService.UpdateTable',
'google.cloud.bigquery.v2.TableService.InsertTable',
'google.cloud.bigquery.v2.JobService.InsertJob',
'google.cloud.bigquery.v2.TableService.DeleteTable' )
Views are virtual tables that are created and queried in the same way as tables. Since you are looking for views in a BigQuery dataset that is set up as a logging sink, you need to create views in BigQuery by using the steps given in this documentation.
Currently two versions are supported, v1 and v2: v1 reports API invocations and v2 reports resource interactions. After creating the views, you can do further analysis in BigQuery by saving or querying them.
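For instance, a minimal sketch (with hypothetical project, dataset, and view names) that saves an audit-log query like the one above as a view for later analysis:
-- Hypothetical names: persist a filtered audit-log query as a view.
CREATE VIEW `project.dataset.table_activity_view` AS
SELECT
  protopayload_auditlog.methodName,
  protopayload_auditlog.resourceName,
  receiveTimestamp
FROM `project.dataset.cloudaudit_googleapis_com_activity_*`
WHERE protopayload_auditlog.resourceName LIKE '%/tables/%';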
I'm looking to save a view which uses federated queries (from a MySQL Cloud SQL connection) between two projects. I'm receiving two different errors (depending on which project I try to save in).
If I try to save it in the project containing the dataset, I get the error:
Not found: Connection my-connection-name
If I try to save it in the project that contains the connection, I get the error:
Not found: Dataset my-project:my_dataset
My example query that crosses projects looks like:
SELECT
bq.uuid,
sql.item_id,
sql.title
FROM
`project_1.my_dataset.psa_v2_202005` AS bq
LEFT OUTER JOIN
EXTERNAL_QUERY( 'project_2.us-east1.my-connection-name',
'''SELECT item_id, title
FROM items''') AS sql
ON
bq.looks_info.query_item.item_id = sql.item_id
The documentation at https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries#known_issues_and_limitations doesn't mention any limitations here.
Is there a way around this so I can save a view using an external connection from one project and dataset from another?
Your BigQuery table is located in the US multi-region and your MySQL data source is located in us-east1. BigQuery automatically runs the query in the location of your BigQuery table (i.e. US); however, your Cloud SQL MySQL instance is in us-east1, and that's why your query fails. The BigQuery dataset and the Cloud SQL instance must be in the same location for this query to succeed.
The solution in this kind of case is to move your BigQuery dataset to the same location as your Cloud SQL instance by following the steps explained in detail in this documentation. However, us-east1 is not currently supported for copying datasets, so I would recommend creating a new connection in one of the locations mentioned in the documentation.
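For instance, a hedged sketch of the same join once a connection exists in the same location as the BigQuery dataset (the connection name and location here are hypothetical):
SELECT
  bq.uuid,
  sql.item_id,
  sql.title
FROM `project_1.my_dataset.psa_v2_202005` AS bq
LEFT OUTER JOIN EXTERNAL_QUERY(
  'project_2.us.my-new-connection',  -- hypothetical connection in the dataset's location
  'SELECT item_id, title FROM items'
) AS sql
ON bq.looks_info.query_item.item_id = sql.item_id;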
I hope you find this information useful.
I clicked a table on the BigQuery dashboard and got this error:
However, I can get data when I run a SELECT on this table, which means the table does exist.
I already have the highest admin privilege, so it shouldn't be a permission issue.
I created this table with a Python script, which collects data, writes it into a CSV file, and uploads the CSV file to BigQuery every day. After I created the table, I changed the schema once, both in the script and on the dashboard. Not sure if that's the cause, but the table loading error occurred several days after I changed the schema.
If you have an ad-blocking extension such as AdBlock, it might be the root cause of this issue. Try disabling it, then run your query again.
Hope it helps.
I have a txt file containing a table with two columns, student ID and GPA. I want to create a similar table in Oracle SQL Developer. Is there some way to copy this data directly into SQL Developer?
If you are looking for a simple GUI method, you can connect to a database, right-click on "Tables", select "Import Data...", and use the Data Import Wizard.
Select the correct options (csv/delimited, table name, columns to import...) and click "Finish".
Sorry, I can't post pics yet; see the example screenshot here: http://i.stack.imgur.com/S7HFx.png
You can create an external table from that file, then copy the external table into a new internal table (a regular table). Here is a full example:
External Tables Concepts
Then go to:
Example: Creating and Loading an External Table Using ORACLE_LOADER
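A minimal sketch of that approach, assuming hypothetical names (a students.txt file, a directory object DATA_DIR, and student_id/gpa columns):
-- Directory object pointing at the folder that holds the txt file.
CREATE OR REPLACE DIRECTORY data_dir AS '/path/to/txt/files';

-- External table that reads the delimited file in place.
CREATE TABLE students_ext (
  student_id NUMBER,
  gpa        NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('students.txt')
);

-- Copy the data into a regular (internal) table.
CREATE TABLE students AS SELECT * FROM students_ext;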
I've never touched Pervasive SQL before, and now I have a bunch of .ddf and .Btr files. I read that all I had to do was create a new database in the Control Center and point it to the folder that contains these files.
When I do this and look at the database, there is nothing in it. Since I am new to Pervasive, I'm more than likely doing something wrong.
EDIT: Added a screenshot after running the command prompt.
To create a database name in the PCC, you need to connect to the engine, then right-click the engine name and select New, then Database. Once you do that, the following dialog should be displayed:
Enter the database name and the path, the path being where the DDFs are located. In most cases the default options are sufficient.
A longer process is documented at http://docs.pervasive.com/products/database/psqlv11/wwhelp/wwhimpl/js/html/wwhelp.htm#href=uguide/using.02.5.html.
If you pointed to a directory that had DDF files (FILE.DDF, FIELD.DDF, and INDEX.DDF) when you created the database name, you should see tables listed.
If you pointed to a directory that does not have DDF files, the database will still be created but will have no tables defined. You'll either need to get DDFs from the vendor, create the table entries using CREATE TABLE (with IN DICTIONARY clauses), or use DDF Builder to add the table entries.
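A hedged sketch of the CREATE TABLE ... IN DICTIONARY approach, with hypothetical table, column, and file names:
-- Adds a dictionary (DDF) entry for an existing Btrieve data file
-- without creating a new data file.
CREATE TABLE Student
IN DICTIONARY
USING 'student.btr'
(
  Student_ID INTEGER,
  GPA        REAL
);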
Based on your screenshot, you only have 10 records in FILE.DDF. This is not enough; there are minimum system tables required (X$FILE, X$FIELD, X$INDEX, and a few others). It appears your DDFs are not a valid set. Contact the client/vendor that provided the DDFs and ask for a set that includes all of the table definitions.
Once you have tables listed in your Database Name, you can use ODBC to access the data.