I'm trying to remove a table from a dataset using bq without success:
BigQuery error in rm operation: Not found: Table carbon-web-...:AS_....Orders_01Jun2014_31May2015_3704438_01
The table is listed whenever I run bq ls AS_....
I'm seeing similar behavior when I try to access the table from the BigQuery UI. When I click on the link to the table, I receive an error message:
Unable to find table: carbon-web-...:AS_....Orders_01May2017_31May2017
Is there a way to force a refresh on the metadata for this dataset?
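For anyone who wants to reproduce the discrepancy, it shows up when you compare tables.list with tables.get, for example via the Python client (a minimal sketch; the project, dataset, and table IDs below are placeholders):

from google.cloud import bigquery
from google.api_core.exceptions import NotFound

client = bigquery.Client(project="my-project")  # placeholder project

# tables.list still returns the phantom table.
for table in client.list_tables("my_dataset"):  # placeholder dataset
    print(table.table_id)

# tables.get on the same table fails with Not found.
try:
    client.get_table("my-project.my_dataset.my_table")  # placeholder table
except NotFound:
    print("listed by tables.list, but tables.get returns Not found")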
These are tables in a transient state that shouldn't have been exposed. We found a bug in a table-listing feature we were rolling out: in some rare scenarios, tables in a transient state would show up in the list. We have reverted that feature now.
I'm currently migrating around 200 tables in BigQuery (BQ) from one dataset (FROM_DATASET) to another (TO_DATASET). Each of these tables has a _TABLE_SUFFIX corresponding to a date (I have three years of data for each table). Each suffix typically contains between 5 GB and 80 GB of data.
I'm doing this with a Python script that asks BQ, for each table and each suffix, to run the following query:
-- example table=T_SOME_TABLE, suffix=20190915
CREATE OR REPLACE TABLE `my-project.TO_DATASET.T_SOME_TABLE_20190915`
COPY `my-project.FROM_DATASET.T_SOME_TABLE_20190915`
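For context, the script is essentially a loop that submits this statement once per table and suffix. A rough sketch of what it does (the table names and suffix list are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="my-project")
tables = ["T_SOME_TABLE"]   # ~200 table names in practice
suffixes = ["20190915"]     # one entry per day over three years

for table in tables:
    for suffix in suffixes:
        sql = (
            f"CREATE OR REPLACE TABLE `my-project.TO_DATASET.{table}_{suffix}` "
            f"COPY `my-project.FROM_DATASET.{table}_{suffix}`"
        )
        client.query(sql).result()  # block until the copy job finishes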
Everything works except for three tables (and all their suffixes) where the copy job fails at each _TABLE_SUFFIX with this error:
An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 4893854
Retrying the job after some time actually works, but it of course slows down the whole process. Does anyone have an idea what the problem might be?
Thanks.
It turned out that those three problematic tables were some legacy ones with lots of columns. In particular, the BQ GUI shows this warning for two of them:
"Schema and preview are not displayed because the table has too many
columns and may cause the BigQuery console to become unresponsive"
This was probably the issue.
In the end, I managed to migrate everything by implementing a backoff mechanism to retry failed jobs.
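The retry logic was nothing sophisticated; it boiled down to something like this (a sketch, with made-up attempt limits and delays):

import time
from google.cloud import bigquery
from google.api_core.exceptions import InternalServerError

client = bigquery.Client(project="my-project")

def run_with_backoff(sql, max_attempts=5, base_delay=10):
    """Run a query job, retrying internal errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return client.query(sql).result()
        except InternalServerError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 10s, 20s, 40s, ...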
I clicked a table on the BigQuery dashboard and got an error.
However, I can get data when I run a SELECT on this table, which means the table does exist.
I already have the highest admin privilege, so it shouldn't be a permission issue.
I created this table with a Python script that collects data, writes it into a CSV file, and uploads the CSV file to BigQuery every day. After I created the table, I changed the schema once, both in the script and on the dashboard. I'm not sure if that's the cause, but the table-loading error occurred several days after I changed the schema.
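The daily upload is a plain CSV load job, roughly like this (a sketch; the file, table, and column names are placeholders, and the schema argument is the part I later changed):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    schema=[
        bigquery.SchemaField("col_a", "STRING"),   # placeholder columns
        bigquery.SchemaField("col_b", "INTEGER"),
    ],
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
with open("daily_export.csv", "rb") as f:
    client.load_table_from_file(
        f, "my-project.my_dataset.my_table", job_config=job_config
    ).result()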
If you have ad-block extensions, they might be the root cause of this issue. Try disabling them, then run your query again.
Hope it helps.
One of our BQ datasets is no longer accessible via BQ Web UI and Cloud Shell.
It shows message "Not found: Dataset project:dataset" immediately upon opening the UI.
We tried a couple of bq shell commands as well:
bq ls: successfully lists the "missing" dataset
bq ls dataset: returns "BigQuery error in ls operation: Not found: Dataset project:dataset"
But we were able to query the views inside the dataset and access its contents via Power BI.
IAM Permission: Owner
Has anyone encountered a similar issue?
I had a similar issue - the BigQuery client library would list the dataset when I called ListDatasets(), but attempting to call UploadCsv() with the same dataset ID would return 404 Dataset not found.
Turns out it was because I had selected 'asia-northeast1' as the Data Location when creating the dataset - it doesn't tell you when you create the dataset that this region is treated differently, but a line in the BigQuery docs says:
If your data is in a location other than the US or EU multi-regional location, you must specify the location when you perform actions such as loading data, querying data, and exporting data.
Re-creating the dataset in the US region fixed my issue. Alternatively, you could follow the docs above and specify the 'asia-northeast1' location every time instead.
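If you keep the dataset in asia-northeast1, passing the location explicitly on every job works too. A minimal sketch with the Python client (project, dataset, and table IDs are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Jobs against a dataset outside the US/EU multi-regions must name the region.
job = client.query(
    "SELECT COUNT(*) FROM `my-project.my_dataset.my_table`",
    location="asia-northeast1",
)
print(list(job.result()))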
At the time, there was a caching issue that caused this to happen; it has since been resolved.
We have a large table (somewhat large: under 15 million rows) that we have been filling up with stress and stability testing. We are trying to delete the table, but it is resisting.
Here's what we have tried:
Deleting the table from the web console: no errors... but it doesn't delete the table.
Deleting the table from the command-line interface: we get an error message, "BigQuery error in rm operation: Backend Error".
Deleting the whole dataset from the console: that fails as well, with no errors reported.
Deleting the whole dataset from the command line: we get the same error message, "BigQuery error in rm operation: Backend Error".
Other tables with the same schema can be deleted without error. Our schema does use 9999 columns (the max), which is the only odd thing we may be doing.
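For reference, the programmatic equivalents of the deletes we attempted look like this (a sketch; the project and dataset IDs are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Drop the single table; not_found_ok suppresses the error if it is gone.
client.delete_table("my-project.stress_test.wide_table", not_found_ok=True)

# Or drop the whole dataset, contents included.
client.delete_dataset(
    "my-project.stress_test", delete_contents=True, not_found_ok=True
)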
You've hit a bug with tables that have a large number of updates and a wide schema. We're working on a fix.
I've run into a situation where a BigQuery table has become stale: I can't even run a count query on it. This happened right after I ran the first load job.
For each query I run I get an error:
Error: Unexpected. Please try again.
See for example Job IDs: job_OnkmhMzDeGpAQvG4VLEmCO-IzoY, job_y0tHM-Zjy1QSZ84Ek_3BxJ7Zg7U
The error is "illegal field name". It looks like the field 69860107_VID is causing it; field names can't start with a digit. BigQuery doesn't support column renames, so if you want to change the schema you'll need to recreate the table.
I've filed a bug to fix the internal error -- this should have been blocked when the table was created.
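Until that's fixed, a cheap client-side guard is to validate field names before creating the table: BigQuery column names must start with a letter or underscore and contain only letters, digits, and underscores. A minimal sketch:

import re

# BigQuery column-name rule: letter or underscore first,
# then letters, digits, or underscores.
VALID_COLUMN = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\Z")

def check_field_names(names):
    bad = [n for n in names if not VALID_COLUMN.match(n)]
    if bad:
        raise ValueError(f"illegal field name(s): {bad}")

check_field_names(["user_id", "69860107_VID"])  # raises: starts with a digit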