Cannot run query: project does not have the reservation in the data region

Since today, I am suddenly and constantly receiving the following error when trying to execute query jobs in Google BigQuery:
Cannot run query: project does not have the reservation in the data region
I have tried several projects and the error still persists.
Has anyone ever encountered this error?

"Reservation" here refers to computing slots. You have slots for computation in one region (or none available at all), but data lies in another region.

Your reservation was configured on Feb 13th, so the problem should now be fixed.

Related

NOAA Forecast data in BigQuery no longer being updated?

It looks like data is no longer being published to the public NOAA forecast table in BigQuery's public datasets. Does anyone know why this is happening? I cannot find any information about the data being discontinued on either website.
project: bigquery-public-data
dataset: noaa_global_forecast_system
table: NOAA_GFS0P25
BigQuery SQL that you can use to test this out:
SELECT * FROM `bigquery-public-data.noaa_global_forecast_system.NOAA_GFS0P25` WHERE DATE(creation_time) >= "2022-04-11" LIMIT 100
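To pin down the last load directly, the same table and creation_time column can be aggregated:
SELECT MAX(creation_time) AS last_loaded FROM `bigquery-public-data.noaa_global_forecast_system.NOAA_GFS0P25`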
New forecast data has not been inserted into the table since 4/10/22. They have missed a day in the past, but we have not seen them miss multiple days in a row before. We would like to know if we need to migrate to a new forecast source, but we cannot find any info on whether this one is being shut down or if they are just having temporary technical difficulties.
Thanks for the heads up! This looks like a temporary technical issue, but we are working on getting this dataset back up and running.

Can't save GBQ results in any format

I ran a query in Google BigQuery that is a basic Select * From [table] where [single column = (name)]
The results came out to about 310 rows and 48 columns, but when I try to save the results in ANY format, nothing happens.
I've tried saving as a view AND a table, which I can do just fine, but downloading the results or exporting them to GCP fails every time. There is no error, no notification that something went wrong; literally nothing happens.
I'm about ready to yank out my hair and throw my computer out the window. I ran a query that was almost identical except for the (name) this morning and had no issue. Now it's after 4pm and it's not working.
All of my browsers are up to date, my logins are fine, my queries aren't reliant on tables that update during that time, I've restarted my computer four times in the hope that SOMETHING will help.
Has anyone had this issue? What else can I do to troubleshoot?
Do you have any field of RECORD (REPEATED) type in your results? I had a similar problem today when trying to save my results to Google Sheets: literally nothing happened, no error message whatsoever. Fortunately (and quite puzzlingly), when I tried to save them as CSV on Google Drive instead, I got this error: "Operation cannot be performed on a nested schema. Field: ...". After removing the "offending" field, which was of RECORD (REPEATED) type, I was able to save to Google Sheets again.
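If you need to keep the nested data in the export, one workaround is to serialize the REPEATED RECORD into a plain string column first. A minimal sketch, with my_dataset.my_table and the events field as hypothetical placeholders:
-- TO_JSON_STRING turns the nested field into a flat STRING column,
-- which CSV and Sheets exports can handle
SELECT * EXCEPT(events), TO_JSON_STRING(events) AS events_json FROM `my_dataset.my_table`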

BigQuery dataset deletion and name reuse

When I delete a dataset in BigQuery and then create another one with the same name, in the same project but in a different region, I get an error. It simply says 'Not found: Dataset projectId:datasetName'.
This is an important problem, as GA360 imports rely on the dataset being named after the view ID. Now that we have BigQuery in Australia, we would like to be able to use it.
How can I fix this problem?
False alarm. It turns out that BigQuery just needs some more time to complete the deletion. I tried again after a few minutes and it now works.
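For reference, the equivalent DDL looks roughly like this; mydataset is a placeholder, and per the above, the name may take a few minutes to become available again after the DROP:
-- CASCADE also deletes any tables still in the dataset
DROP SCHEMA IF EXISTS mydataset CASCADE;
-- Recreate under the same name in the new region once the deletion has propagated
CREATE SCHEMA mydataset OPTIONS (location = 'australia-southeast1');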

Why does BigtableIO write records one by one after a GroupBy/Combine DoFn?

Does anyone know how bundles work within BigtableIO? Everything looks fine until one uses a GroupBy or Combine DoFn. At that point, the pipeline changes the pane of our PCollection elements from PaneInfo.NO_FIRING to PaneInfo{isFirst=true, isLast=true, timing=ON_TIME, index=0, onTimeIndex=0}, and then BigtableIO outputs the following log for each record: INFO o.a.b.sdk.io.gcp.bigtable.BigtableIO - Wrote 1 records. Is the logging causing a performance issue when one has millions of records to output, or is it the fact that BigtableIO opens and closes a writer for each record?
BigtableIO sends multiple records in a batch RPC. However, that assumes there are multiple records in the "bundle". Bundle sizes depend on a combination of the preceding step and the Dataflow framework. The problems you're seeing don't seem to be related to BigtableIO directly.
FWIW, here's the code for logging the number of records that occurs in the finishBundle() method.
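To make the bundle dependence concrete, here is a sketch (not BigtableIO's actual code; bigtableClient.mutateRows is a hypothetical stand-in for the batched RPC) of the bundle-scoped batching pattern that produces exactly this logging behavior when the runner hands the DoFn one-element bundles:
import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BatchingWriteFn extends DoFn<KV<String, String>, Void> {
  private static final Logger LOG = LoggerFactory.getLogger(BatchingWriteFn.class);
  private transient List<KV<String, String>> batch;

  @StartBundle
  public void startBundle() {
    batch = new ArrayList<>(); // one buffer per bundle
  }

  @ProcessElement
  public void processElement(@Element KV<String, String> record) {
    batch.add(record); // buffer within the current bundle
  }

  @FinishBundle
  public void finishBundle() {
    // bigtableClient.mutateRows(batch); // hypothetical batched RPC
    LOG.info("Wrote {} records", batch.size()); // logs "Wrote 1 records" when bundles hold one element
  }
}
If the runner produces single-element bundles after the GroupBy/Combine step, every flush writes one record, so the fix lies on the bundling side rather than in the logging.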

BigQuery issue: query failed with 'Request was blocked to protect the systems operation'

Please can you advise why we are seeing this error for a query we were previously able to run?
Error: Request was blocked to protect the systems operation. Please contact
We have tried running this query several times. Writing an email to the address returned:
you may not have permission to post messages to the group
I get this message when querying a 12TB table with around 25B rows. The query I am trying to run selects from one table, with a cross join on another table where two values in table A are between two values in table B, and I am doing a group by on two fields. As mentioned before, all was working fine for the last 15 months until yesterday.
To address your points in turn:
1 - Copied from shollyman's comment concerning your error:
The short answer: a cross join involving a table of that size is problematic given any reasonably sized second table. The message indicates that the BQ team is explicitly blocking this query due to its behavior.
2 - I think you couldn't email that address because it's a Google Group; you need to join it first. There should be a way for you to do so. It's also possible (notice the error message says "may") that your message just needs to be approved by a member of the group before it goes through.
3 - If your issue is recent, it's most likely because you recently added enough data to one of your tables to make the cross join too big.
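A common way to get such a query unblocked is to turn the range cross join into an equi-join on a coarse bucket key, so the engine no longer has to compare every row pair. A minimal sketch, assuming integer values, with table_a, table_b, value, low_bound, high_bound, key1, key2 and the bucket width 1000 all as hypothetical placeholders:
-- Expand each range in table B into the buckets it overlaps,
-- then join on the bucket key and re-apply the exact range filter
WITH b_expanded AS (
  SELECT b.*, bucket
  FROM table_b AS b,
       UNNEST(GENERATE_ARRAY(DIV(b.low_bound, 1000), DIV(b.high_bound, 1000))) AS bucket
)
SELECT a.key1, a.key2, COUNT(*) AS n
FROM table_a AS a
JOIN b_expanded AS b
  ON DIV(a.value, 1000) = b.bucket
WHERE a.value BETWEEN b.low_bound AND b.high_bound
GROUP BY a.key1, a.key2
Each table B row is duplicated once per bucket its range overlaps, so the join only compares rows whose ranges can actually match; picking a bucket width close to the typical range width keeps the duplication small.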