I am trying to load a CSV file into BigQuery from Google Cloud Storage using the web UI.
But sometimes an error occurs.
The error message is "Cannot process data in separate locations".
What does it mean?
And how can I fix it?
This was an unintended consequence of an update to the BigQuery service. We'll provide additional follow-up on this bug:
https://code.google.com/p/google-bigquery/issues/detail?id=270
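In the meantime, this message generally indicates that the load's source and destination live in different locations, so it is worth confirming that the Cloud Storage bucket and the BigQuery dataset match. A rough check with the gsutil and bq CLIs, using placeholder bucket and dataset names:

# Show the bucket's location (placeholder bucket name)
gsutil ls -L -b gs://my-bucket | grep -i "location constraint"
# Show the dataset's location (placeholder dataset name)
bq show --format=prettyjson my_dataset | grep -i location

If the two don't match, recreating the dataset (or the bucket) so both are in the same location typically clears the error.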
Related
We try to save data to BigQuery on a daily basis. When we run this process manually via the save function, it works without any problem.
Specifically, we try to load some data from a Google Cloud SQL instance into BigQuery.
But if we use the same settings with the planning function, we receive an error without any details about the problem.
Does anybody know about this problem, or where we can get more details about the error message?
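One way to get more detail, assuming the scheduled save ultimately runs as a regular BigQuery load job and the bq CLI is available, is to pull the job's error record directly (the job ID below is a placeholder):

# List recent jobs in the project and note the ID of the failing one
bq ls -j -n 10
# Dump the full job record; status.errorResult and status.errors usually
# carry more detail than the UI shows (placeholder job ID)
bq show --format=prettyjson -j bqjob_r1234abcd_00005678_1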
I've been trying for a week to connect a CSV I have in Google Drive to a BigQuery table, but I keep getting the following error:
"An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 33652656"
Since I have Basic Support, I don't think I can contact Google directly to report it. What can I do?
If you can generate a version of your sheet/CSV file that demonstrates the issue and is suitable for inclusion in a public issue tracker (e.g. any sensitive info is redacted), posting to the BigQuery public issue tracker may be another path forward.
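Until then, the back-off suggestion in the error text can also be scripted rather than retried by hand. A minimal sketch, assuming the Drive-backed table is read through the bq CLI; the project, dataset, and table names are placeholders:

# Retry with exponential back-off (1, 2, 4, 8, 16 seconds) until the job succeeds
for delay in 1 2 4 8 16; do
  bq query --use_legacy_sql=false \
    'SELECT COUNT(*) FROM `my_project.my_dataset.drive_csv_table`' && break
  echo "Attempt failed; sleeping ${delay}s before retrying..."
  sleep "$delay"
done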
I am trying to export a table from BigQuery to Google Cloud Storage from the console/command line. The console job runs for a few minutes and errors out without any error code, and the command-line job, after running for some time, gives the error below:
BigQuery error in extract operation: Error processing job 'data-flow-experiment:bqjob_r308ff0f73d1820a6_00000157f77e8ab9_1': Backend error. Job aborted.
The job ID of the command-line run is given above.
Billing is enabled for the project, and the BigQuery service is also enabled.
I also get the error below when I try to create a bucket in Google Cloud Storage:
AccessDeniedException: 403 The account for the specified project is read only.
This is despite the IAM user I am using having owner access; I have created buckets using this account previously and have also extracted tables in the past.
Please advise.
For the BigQuery issue:
Do you happen to have a timestamp column with out-of-range values (say, far, far into the future)?
If so, you can just wait for two more days, as the fix is being rolled out.
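If you'd rather confirm that before waiting, a quick sanity query over the suspect column will show whether anything is wildly out of range; the table and column names below are placeholders:

# Check the extremes of the timestamp column (placeholder names)
bq query --use_legacy_sql=false \
  'SELECT MIN(event_ts) AS min_ts, MAX(event_ts) AS max_ts FROM `my_project.my_dataset.my_table`'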
Today, after many successful loads into a BigQuery table, I received this error message:
tableUnavailable
Something went wrong with the table you queried. Contact the table owner for assistance
I do not see this error in the error table: https://cloud.google.com/bigquery/troubleshooting-errors#errortable
What conditions could cause this error? Other load jobs, using the same code and in the same dataset, do not produce this error.
What causes a "tableUnavailable" message?
There are two cases that I can think of:
First, this error can be returned for queries over a (small) set of tables that BigQuery exposes access to but that are not directly managed by the BigQuery team itself. You can consider these equivalent to "internalError" from a troubleshooting perspective.
These data sources are typically accessible to GCP customers that have specific relationships with Google product teams exposing their data in BigQuery.
We expose these under a different error code since you will resolve the issue more quickly by contacting the group that granted you access to their data. Going through BigQuery customer support to get this resolved will work too; it'll just take a little longer.
Second, you encountered this through a load job, so this is clearly not the case above! We are testing a new load implementation that is faster than the current one, and I suspect some errors are mapped slightly differently now.
In this case, I suspect you encountered a "backendError" and should try the operation again. If you can give us the project_id:job_id of a job that hit this problem, we can verify this and make sure the error mapping is more consistent.
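For what it's worth, one way to produce a clean project_id:job_id to report is to choose the job ID yourself when retrying the load; everything below other than the flags is a placeholder:

# Re-run the load with an explicit job ID so it can be reported as project_id:job_id
bq --project_id=my_project --job_id=tableunavailable_repro_001 load \
  my_dataset.my_table gs://my-bucket/data.csv ./schema.json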
Thank you!
Whenever I try to load a CSV file stored in Cloud Storage into BigQuery, I get an InternalError (both via the web interface and the command line). The CSV is an (abbreviated) part of the Google Ngram dataset.
A command like:
bq load 1grams.ngrams gs://otichybucket/import_test.csv word:STRING,year:INTEGER,freq:INTEGER,volume:INTEGER
gives me:
BigQuery error in load operation: Error processing job 'otichyproject1:bqjob_r28187461b449065a_000001504e747a35_1': An internal error occurred and the request could not be completed.
However, when I load this file directly through the web interface, using File upload as the source (loading from my local drive), it works.
I need to load from Cloud Storage, since I need to load much larger files (original ngrams datasets).
I have tried different files; the result is always the same.
I'm an engineer on the BigQuery team. I was able to look up your job, and it looks like there was a problem reading the Google Cloud Storage object.
Unfortunately, we didn't log much of the context, but looking at the code, the things that could cause this are:
The URI you specified for the job is somehow malformed. It doesn't look malformed, but maybe there is some odd UTF-8 non-printing character that I didn't notice.
The 'region' for your bucket is somehow unexpected. Is there any chance you've set the data location on your GCS bucket to something other than {US, EU, or ASIA}? See the Cloud Storage documentation for more info on bucket locations. If so, and you've set the location to a specific region rather than a continent, that could cause this error (a quick way to check is sketched at the end of this answer).
There could have been some internal error in GCS that caused this. However, I didn't see this in any of the logs, and it should be fairly rare.
We're adding more logging to detect this in the future, and working to fix the issue with regional buckets (regional buckets may still fail, because BigQuery doesn't support cross-region data movement, but at least they will fail with an intelligible error).
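As mentioned above, the bucket's location constraint is easy to check from the command line:

# Print the bucket's metadata and look at the "Location constraint" line
gsutil ls -L -b gs://otichybucket

If that shows a specific region rather than US, EU, or ASIA, copying the data into a multi-regional bucket (or recreating the bucket as a multi-regional one) may work around the problem until the clearer error reporting lands.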