Running very simple queries against my table and getting unexpected errors. Anybody from the BQ engineering team able to look at my Job IDs and figure out what's happening here?
SELECT field FROM mydataset.mytable
LIMIT 100
Error: Unexpected. Please try again.
Job ID: natural-terra-531:job_OhD8zgm8btXQPNuwGSdIMoDX_f4
Another one with aggregation:
SELECT city, MAX(timestamp) FROM mydataset.mytable
GROUP BY city LIMIT 100
Error: Unexpected. Please try again.
Job ID: natural-terra-531:job_9DjzAgn05yKZSYKCANHNbckE4f8
The BigQuery service had a temporary glitch last night that caused a high rate of failure for some queries between 10 and 10:30 PM due to a misconfiguration. We're conducting a postmortem to make sure we can prevent it in the future. If you retry the queries, they should work. If they do not, please let us know.
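If you want to automate the retry, a minimal sketch from the command line, using the first query above (this assumes the bq CLI is installed and authenticated):
# Retry the query up to three times, pausing between attempts
for i in 1 2 3; do
  bq query 'SELECT field FROM mydataset.mytable LIMIT 100' && break
  sleep 30
done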
Related
I've had this query in BigQuery that I have been updating every day for the last few months. It's been fine - some occasional errors, but retrying has solved the problem.
But the last few days I am getting the error: The job encountered an error during execution. Retrying the job may solve the problem.
The error description says that it's an external error, so how can I fix that?
I have been retrying (with rather long pauses in between), but I still get the error.
JobID example: bquxjob_152ced5d_169917f0145
Does anyone have any idea what's going on? Are there any data/time limitations I might be hitting (but then why only in the last few days)?
You can use GCP Stackdriver to monitor your BigQuery processes using this URL.
Among the interesting information you can find there are the queryTime heatmap and the slot usage, which might help you understand your problem better.
On the subject of external table usage, you can use Google Transfer (see this link for details) to schedule a repeated transfer from CSV to a BigQuery table.
You can get to the transfer set-up page from the web UI.
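If you prefer the CLI, here is a sketch of creating such a transfer with bq mk; the bucket, dataset, table, and schedule details are all hypothetical:
# Create a scheduled CSV-to-BigQuery transfer from Cloud Storage
bq mk --transfer_config \
  --data_source=google_cloud_storage \
  --target_dataset=mydataset \
  --display_name="daily csv load" \
  --params='{"data_path_template":"gs://mybucket/data/*.csv","destination_table_name_template":"mytable","file_format":"CSV","skip_leading_rows":"1"}'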
I encountered this dreadfully useless error in a scheduled query. It was working great, and then one day it stopped working altogether and has been failing ever since, without any other explanation. Stackdriver (now "Logs Explorer") showed nothing more enlightening:
jobStatus: {
  errorResult: {
    code: 14
    message: "Error encountered during execution. Retrying may solve the problem."
  }
  errors: [
    0: {
      code: 14
      message: "Error encountered during execution. Retrying may solve the problem."
    }
  ]
  jobState: "DONE"
}
Figuring out the actual issue takes a long time because scheduled queries start slowly, since they run at BATCH priority. What I found in my case was that the partitioned table and the "Partition field" setting in the scheduled query were the culprit. I dropped the table and removed the partition field, and voilà, the thing works again (although that's far from ideal, since I need partitioning).
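One way to shortcut that slow feedback loop is to run the scheduled query's SQL by hand, which executes at interactive priority and surfaces the error much faster. A sketch, with hypothetical table names and partition field:
# Run the scheduled query's SQL manually against the same kind of
# partitioned destination to reproduce the error quickly
bq query --use_legacy_sql=false \
  --destination_table=mydataset.scheduled_output \
  --time_partitioning_field=event_date \
  'SELECT event_date, COUNT(*) AS n FROM mydataset.source GROUP BY event_date'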
I hope this helps someone else running up against that useless error but in any case, I hope the good folks working on BigQuery find a better error to bubble up.
I ran into this problem when replacing the contents of a partitioned table. Two retries did not help. When I removed the --range_partitioning flag from the command, the update was processed correctly. The table remained partitioned.
So there seems to be an issue with updates to partitioned tables, and when that is the cause, these errors might not benefit from retrying. I don't know whether there are other causes of this error.
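For context, the failing invocation was roughly of this shape; dropping the --range_partitioning flag is what let it through (the table names and partition range here are hypothetical):
# Replace a table's contents; with --range_partitioning present,
# this hit the "retrying may solve the problem" error
bq query --use_legacy_sql=false \
  --destination_table=mydataset.mytable \
  --replace \
  --range_partitioning=customer_id,0,1000000,1000 \
  'SELECT * FROM mydataset.staging'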
This kind of issue probably has a lot to do with BigQuery quota errors (https://cloud.google.com/bigquery/docs/troubleshoot-quotas#ts-number-column-partition-quota), as mentioned in other answers, such as the 4,000-partitions-per-table quota.
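If you want to rule the partition quota in or out, you can count a table's partitions directly with INFORMATION_SCHEMA (a sketch; the dataset and table names are hypothetical):
-- Compare the partition count against the 4,000-partitions-per-table limit
SELECT COUNT(*) AS partition_count
FROM mydataset.INFORMATION_SCHEMA.PARTITIONS
WHERE table_name = 'mytable';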
I'm getting an error when I try to execute the following query:
SELECT r.*
FROM dataset.table1 r
WHERE id NOT IN (SELECT id FROM staging_data.table1);
It's basically a query to load incremental data into a table. dataset.table1 has 360k rows and the incremental batch in staging_data has 40k. But when I try to run this in my script to load into another table, I get the error:
Resources exceeded during query execution: The query could not be executed in the allotted memory
This started happening in the last week; before that it was working well.
I looked for solutions on the internet, but none of them work in my case.
Does anyone know how to solve it?
I changed the cronjob time and it worked. Thank you!
You can try writing the results to another table, as BigQuery has a limit on the maximum response size it can return. You can do that with either legacy or standard SQL, and you can follow the steps to do it in the documentation.
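As a sketch, with legacy SQL that looks something like this; the destination table name is hypothetical, and standard SQL with a destination table does not need the --allow_large_results flag:
# Write the results to a destination table, allowing large results
bq query --destination_table=mydataset.results \
  --allow_large_results \
  --replace \
  'SELECT r.* FROM dataset.table1 r WHERE r.id NOT IN (SELECT id FROM staging_data.table1)'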
I am unable to execute a simple select query (SELECT * FROM [<DatasetName>.<TableName>] LIMIT 1000) with Google BigQuery. It is giving me the error below:
Query Failed
Error: Unexpected. Please try again.
Job ID: job_51MSM2uo1vE5QomZA8Sv_RUE7M8
The table contains around 10 records. I am able to execute queries on other tables.
It looks like there was a temporary issue where we were seeing timeouts when querying tables that had recently been written to via streaming insert (tabledata.insertAll()). We're currently addressing the underlying issue, but it should be working now. Please ping this thread if you see it again.
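For anyone reproducing this, streaming rows in via the CLI goes through the same tabledata.insertAll() path (a sketch; the file and table names are hypothetical):
# Stream newline-delimited JSON rows into a table
bq insert mydataset.mytable rows.json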
Since yesterday, 1-09-2012, I can't run any queries over a table that has been created from the result of another query.
example query:
SELECT region FROM [project.table] LIMIT 1000
result:
Query Failed
Error: Field 'region' is incompatible with the table schema.
Job ID: 49077933619
These kinds of queries passed successfully every day for the last couple of weeks. Has anybody else encountered a similar problem?
We added some additional schema checking on Friday. I was unable to reproduce the problem, but I'll look into your examples (I was able to find your failed job in the logs). In the meantime, I'm turning off the additional schema checking. Please try again and let us know if the problem continues.
I seem to be intermittently receiving the following error:
BigQuery error in query operation: Backend error. Please try again.
When I run queries that look like this:
SELECT campaign_id, cookie, point_type FROM myproject.mytable WHERE campaign_id IN ( [CSV list of ids] ) GROUP BY cookie, point_type, campaign_id
With the following bq command:
bq --format=none query --destination_table=myproject.mytable_unique [query]
The errors seem to happen randomly, though; the exact same query will work a couple of minutes later. Any idea what is causing this?
The Job ID for a recent failed job is job_3c05e162605342acb64fce6f71bb8b71
So this job actually succeeded -- what you're seeing is that when bq tries to get the status of the job, the call fails. We're investigating the issue. If you run bq show -j job_3c05e162605342acb64fce6f71bb8b71, you should see that the job completed successfully.