I am unable to execute a simple select query (select * from [<DatasetName>.<TableName>] LIMIT 1000) in Google BigQuery. It gives me the error below:
Query Failed
Error: Unexpected. Please try again.
Job ID: job_51MSM2uo1vE5QomZA8Sv_RUE7M8
The table contains around 10 records. I am able to execute queries on other tables.
It looks like there was a temporary issue where we were seeing timeouts when querying tables that had recently been written to via streaming insert (tabledata.insertAll()). We're currently addressing the underlying issue, but it should be working now. Please ping this thread if you see it again.
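For context, a streaming insert of the kind that triggered this can be done from the bq CLI as well as through the API. A minimal sketch, assuming a newline-delimited JSON file whose rows match the table schema (the file and table names here are hypothetical):

# rows.json contains one JSON object per line, e.g. {"id": 1, "name": "alpha"}
# bq insert streams the rows via tabledata.insertAll()
bq insert mydataset.mytable rows.json

Tables written this way were the ones affected by the timeouts described above.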
Related
I'm getting an error when I try to execute the following query:
select r.*
from dataset.table1 r
where id NOT IN (select id from staging_data.table1);
It's basically a query to load incremental data into a table. dataset.table1 has 360k rows and the incremental data in staging_data has 40k. But when I run this in my script to load the results into another table, I get the error:
Resources exceeded during query execution: The query could not be executed in the allotted memory
This started happening last week; before that it was working fine.
I looked for solutions online, but none of them work in my case.
Does anyone know how to solve it?
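One workaround that often helps with this class of error (not confirmed as the fix in this case) is to rewrite the NOT IN subquery as an anti-join, which can be cheaper to execute:

SELECT r.*
FROM dataset.table1 r
LEFT JOIN staging_data.table1 s ON r.id = s.id
WHERE s.id IS NULL;

Note that the two forms differ if staging_data.table1 can contain NULL ids: NOT IN returns no rows in that case, while the anti-join still returns the unmatched rows.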
I changed the cronjob time and it worked. Thank you!
You can try writing the results to another table, as BigQuery has a limit on the maximum response size it can return. You can do this with either legacy or standard SQL, and you can follow the steps in the documentation.
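For example, with the bq CLI you can route the results to a destination table rather than returning them in the response; the destination table name below is a placeholder, and legacy SQL additionally needs the --allow_large_results flag for large outputs:

bq query --destination_table=mydataset.results_table --allow_large_results "SELECT r.* FROM dataset.table1 r"

With standard SQL, pass --use_legacy_sql=false; the --allow_large_results flag is then not needed.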
I'm simply doing a select * from one of my tables. It is a daily table containing yesterday's data and was working fine yesterday and earlier today. It is suddenly returning "Error: Unexpected. Please try again."
Are there any production issues currently? The job id for one of these failed queries is job_Tj_vZhAqfZH9jpW52M8IUld4XKE
A configuration change this morning caused a problem querying from tables that had been written to via streaming. The issue should be fixed now. Please ping this thread if you continue to see problems.
Running very simple queries against my table and getting unexpected errors. Anybody from the BQ engineering team able to look at my Job IDs and figure out what's happening here?
SELECT field FROM mydataset.mytable
LIMIT 100
Error: Unexpected. Please try again.
Job ID: natural-terra-531:job_OhD8zgm8btXQPNuwGSdIMoDX_f4
Another one with aggregation:
SELECT city, MAX(timestamp) FROM mydataset.mytable
GROUP BY city LIMIT 100
Error: Unexpected. Please try again.
Job ID: natural-terra-531:job_9DjzAgn05yKZSYKCANHNbckE4f8
Due to a misconfiguration, the BigQuery service had a temporary glitch last night that caused a high failure rate for some queries between 10:00 and 10:30 PM. We're conducting a postmortem to make sure we can prevent it in the future. If you retry the queries, they should work. If they do not, please let us know.
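Since failures like this are transient, wrapping the query in a small retry loop is usually enough. A rough shell sketch (the query and retry count here are arbitrary):

# Retry up to 3 times; bq exits non-zero when the query fails.
for i in 1 2 3; do
  bq query "SELECT field FROM mydataset.mytable LIMIT 100" && break
  sleep 10
done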
Since yesterday, 1-09-2012, I haven't been able to run any queries against a table that was created from the result of another query.
example query:
SELECT region FROM [project.table] LIMIT 1000
result:
Query Failed
Error: Field 'region' is incompatible with the table schema.
49077933619
These kinds of queries have run successfully every day for the last couple of weeks. Has anybody else encountered a similar problem?
We added some additional schema checking on Friday. I was unable to reproduce the problem, but I'll look into your examples (I was able to find your failed job in the logs). I'm turning off the additional schema checking in the meantime. Please try again and let us know if the problem continues.
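If the schema error comes back, it can help to compare the table's stored schema against the fields the query expects; for example (the table name here is a placeholder):

bq show --schema --format=prettyjson mydataset.mytable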
I seem to be intermittently receiving the following error:
BigQuery error in query operation: Backend error. Please try again.
When I run queries that look like this:
SELECT campaign_id, cookie, point_type FROM myproject.mytable WHERE campaign_id IN ( [CSV list of ids] ) GROUP BY cookie, point_type, campaign_id
With the following bq command:
bq --format=none query --destination_table=myproject.mytable_unique [query]
The errors seem to happen randomly, though; the exact same query will work a couple of minutes later. Any idea what is causing this?
The job ID for a recent failed job is job_3c05e162605342acb64fce6f71bb8b71
So this job actually succeeded -- what you're seeing is that when bq tries to get the status of the job, the call fails. We're investigating the issue. If you run bq show -j job_3c05e162605342acb64fce6f71bb8b71, you should see that the job completed successfully.
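To confirm the outcome end to end, you can check the job status and then spot-check the destination table from the bq command above (a rough sketch reusing the names already given):

# Final job state as reported by the backend:
bq show -j job_3c05e162605342acb64fce6f71bb8b71
# Peek at the first rows of the destination table:
bq head -n 10 myproject.mytable_unique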