BigQuery: The job encountered an error during execution - google-bigquery

I've had this query in BigQuery that I have been updating every day for the last few months. It's been fine - some occasional errors, but retrying has solved the problem.
But for the last few days I have been getting the error: The job encountered an error during execution. Retrying the job may solve the problem.
The error description says that it's an external error, so how can I fix that?
I have been retrying (with rather long pauses in between, along the lines of the sketch below), but I still get the error.
JobID example: bquxjob_152ced5d_169917f0145
Does anyone have any idea what's going on? Are there any data/time limitations I might be hitting (but then why only in the last few days)?
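For reference, my retry loop is essentially the following (a minimal sketch using the Python client library; the query and table names here are illustrative, not my real ones):

import time
from google.cloud import bigquery

client = bigquery.Client()
QUERY = "SELECT * FROM `my_project.my_dataset.my_table`"  # illustrative

for attempt in range(5):
    try:
        client.query(QUERY).result()  # waits for the job; raises on failure
        break
    except Exception as exc:  # keeps failing with the same message
        print(f"attempt {attempt + 1} failed: {exc}")
        time.sleep(60 * (attempt + 1))  # rather long pauses in between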

You can use GCP Stackdriver to monitor your BigQuery process using this URL.
Interesting information you can find there includes the queryTime heatmap and the slot usage, which might help you understand your problem better.
On the subject of external table usage, you can use Google Transfer (see this link for details) to schedule a repeated transfer from CSV to a BigQuery table.
The image below shows how to get to the transfer setup page from the web UI.
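If you would rather run the load itself programmatically than through a scheduled transfer, a minimal sketch with the Python client library looks like this (the bucket path and table name are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the CSV header row
    autodetect=True,      # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/data-*.csv",  # placeholder source
    "my_project.my_dataset.my_table",     # placeholder destination
    job_config=job_config,
)
load_job.result()  # blocks until the load finishes; raises on failure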

I encountered this dreadfully useless error in a scheduled query. It was working great, and then one day it stopped working altogether and has been failing ever since, without any other explanation. Stackdriver (now "Logs Explorer") showed nothing more enlightening:
jobStatus: {
  errorResult: {
    code: 14
    message: "Error encountered during execution. Retrying may solve the problem."
  }
  errors: [
    0: {
      code: 14
      message: "Error encountered during execution. Retrying may solve the problem."
    }
  ]
  jobState: "DONE"
}
Figuring out the actual issue takes a long time, because scheduled queries start slowly (they run at BATCH priority). What I found in my case was that the partitioned destination table and the "Partition field" setting in the scheduled query were the culprit. I dropped the table and removed the partition field and, voilà, it works again (although this is far from ideal, since I need partitioning).
I hope this helps someone else running up against this useless error, but in any case, I hope the good folks working on BigQuery find a better error to bubble up.
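One workaround that may let you keep partitioning (a sketch based on my case, not an official fix; the schema and field names are made up): pre-create the partitioned destination table yourself and leave the "Partition field" in the scheduled query blank.

from google.cloud import bigquery

client = bigquery.Client()

# Create the destination table with its own partitioning spec, instead of
# letting the scheduled query's "Partition field" setting create it.
table = bigquery.Table(
    "my_project.my_dataset.my_table",  # placeholder
    schema=[
        bigquery.SchemaField("event_ts", "TIMESTAMP"),  # assumed column
        bigquery.SchemaField("value", "FLOAT"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)
client.create_table(table)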

I ran into this problem when replacing the contents of a partitioned table. Two retries did not help. When I removed the --range_partitioning flag from the command, the update was processed correctly, and the table remained partitioned.
So there seems to be an issue with updates to partitioned tables, and when that is the cause, these errors might not benefit from retrying. I don't know whether there are other causes of this error.
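For reference, the same idea in the Python client library (a sketch; the key point is simply omitting the partitioning spec when overwriting a table that is already partitioned, and all names are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(
    destination="my_project.my_dataset.my_table",  # placeholder
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    # Note: no range_partitioning here. The existing table keeps its
    # partitioning spec; re-specifying it is what triggered the error for me.
)
client.query(
    "SELECT * FROM `my_project.my_dataset.source_table`",  # placeholder
    job_config=job_config,
).result()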

This kind of issue probably has a lot to do with BigQuery quota errors: https://cloud.google.com/bigquery/docs/troubleshoot-quotas#ts-number-column-partition-quota, as mentioned in other answers, such as the 4000-partitions-per-table quota.
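If you suspect the partition quota, a quick way to check how many partitions a table currently has (a sketch; the project, dataset, and table names are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT COUNT(DISTINCT partition_id) AS partition_count
FROM `my_project.my_dataset.INFORMATION_SCHEMA.PARTITIONS`
WHERE table_name = 'my_table'
"""
row = next(iter(client.query(sql).result()))
print(row.partition_count)  # the documented limit is 4000 partitions per table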

Related

Error: Schema changed for Timestamp field

Suddenly some of our queries started to fail with a weird error.
Error: Schema changed for Timestamp field timestamp
Job ID: aerobic-forge-504:job_TX7MOFd-OPDgKGkE7vdamX8Et4E
Job ID: aerobic-forge-504:job_j7SbCVXdzcH9ZhOi2QyYODi58H4
As far as I know, BQ engineers check these threads, so hopefully this will be picked up.
This issue is a result of internal timestamp-handling changes applied tonight; if you are getting this error, you need to contact Google support to have it fixed.
They are also working on handling timestamps better in the future.

Unexpected error from a table that was working correctly

I'm simply doing a select * from one of my tables. It is a daily table containing yesterday's data and was working fine yesterday and earlier today. It is suddenly returning "Error: Unexpected. Please try again."
Are there any production issues currently? The job id for one of these failed queries is job_Tj_vZhAqfZH9jpW52M8IUld4XKE
A configuration change this morning caused a problem when querying tables that had been written to via streaming. The issue should be fixed now; please ping this thread if you continue to see problems.

Oracle error: Application failover does not support non-single-SELECT statement

We are getting the following error using Oracle:
[Oracle JDBC Driver]Application failover does not support non-single-SELECT statement
The error occurs when we try to run a delete or insert over a large number of rows (tens of millions of rows).
I know that the script works, because it ran for almost a year before these error messages started to pop up.
We know that no one changed any database configuration, so we figure the problem must be the volume of processed data (the row count grows as time goes by...).
But we had never seen this kind of error before! What does it mean? It seems that a failover engine tries to recover from an error, but once Oracle is 'taken over' by this engine, it enters a more restricted state in which some kinds of queries do not work (like Windows Safe Mode...).
Well, if that is what is happening, how can I get the real error message? The one that triggered the failover mechanism?
BTW, below is one of the deletes that triggers the error:
delete from odf_ca_rnv_av_snapshot_week
(we tried this one just to test the simplest delete we could think of... a truncate won't help us with the real deal :) )
Check this link.
The error seems to come not from Oracle or JDBC itself, but from "Progress" (the vendor of the driver). It means that the failover mechanism can only recover from SELECT statements, not from DML.
You'll have to figure out why the failover occurs in the first place.
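If the failover is being triggered by how long the huge DELETE runs, one thing worth trying is chunking the DML so that each statement stays short (a hedged sketch in Python with the oracledb driver; the credentials are placeholders, the batch size is arbitrary, and note that each chunk commits separately, which changes the transactional behavior):

import oracledb  # pip install oracledb

conn = oracledb.connect(user="app_user", password="...", dsn="dbhost/service")
cur = conn.cursor()

while True:
    # Delete in bounded chunks so no single statement runs long enough
    # to trip the failover machinery.
    cur.execute(
        "delete from odf_ca_rnv_av_snapshot_week where rownum <= :n",
        n=100_000,
    )
    conn.commit()
    if cur.rowcount == 0:
        break

conn.close()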

Strange result when running query over a table created from a result of another query

Since yesterday, 1-09-2012, I can't run any queries over a table that has been created from the result of another query.
example query:
SELECT region FROM [project.table] LIMIT 1000
result:
Query Failed
Error: Field 'region' is incompatible with the table schema.
49077933619
These kinds of queries have passed successfully every day for the last couple of weeks. Has anybody else encountered a similar problem?
We added some additional schema checking on Friday. I was unable to reproduce the problem, but I'll look into your examples (I was able to find your failed job in the logs). In the meantime, I'm turning off the additional schema checking. Please try again and let us know if the problem continues.

What problems may occur while querying SQL databases with a large amount of data over the internet

I have a big database on an MSSQL server that contains data indexed by a web crawler.
Every day I want to update the SOLR search-engine index using the DataImportHandler, which is situated on another server and another network.
The Solr DataImportHandler uses queries to get data from SQL. For example, this query:
SELECT * FROM DB.Table WHERE DateModified > Config.LastUpdateDate
The ImportHandler does 8 selects of this type. Each select gets around 1000 rows from the database.
To connect to SQL Server I am using com.microsoft.sqlserver.jdbc.SQLServerDriver.
The parameters I can add to the connection are:
responseBuffering="adaptive/all"
batchSize="integer"
So my question is:
What can go wrong while doing these queries every day (except network errors)?
I want to know how SQL Server behaves in this context.
Furthermore, I have to make a decision about how to implement this import and how to handle errors, but first I need to know what errors can arise.
Thanks!
Later edit
My problem is that I don't know how these SQL queries can fail. When I call the importer every day, it does 10 queries against the database. If the 5th query fails, I have two options:
roll back the entire transaction and do it all again, or commit the data I got from the first 4 queries and somehow redo queries 5 to 10 (see the sketch below). But if these queries always fail because of some other problem, I need to think of another way to import this data.
Can these SQL queries over the internet fail because of timeouts or something like that?
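For the second option, the 'commit what succeeded, resume at the failed query' logic could look roughly like this (a sketch: pyodbc stands in for the JDBC driver the DIH actually uses, and index_into_solr is a hypothetical stand-in for the Solr update step):

import pyodbc  # illustrative; the real import goes through the JDBC driver

def index_into_solr(rows):
    """Hypothetical stand-in for posting a batch of rows to Solr."""
    ...

def run_import(conn_str, queries, max_retries=3):
    done = 0  # number of queries already fetched and indexed
    retries = max_retries
    while done < len(queries):
        try:
            with pyodbc.connect(conn_str, timeout=30) as conn:
                cur = conn.cursor()
                for query in queries[done:]:
                    cur.execute(query)
                    index_into_solr(cur.fetchall())
                    done += 1  # resume point if a later query fails
        except pyodbc.OperationalError:
            retries -= 1
            if retries < 0:
                raise  # persistent failure: give up and alert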
The only problem I identified after working with this type of import is:
Network problems - if the network connection fails, SOLR rolls back any changes and the commit doesn't take place. In my program I treat this as an error and don't log the changes in the database.
Thanks #GuidEmpty for providing his comment and clarifying this for me.
There could be issues with permissions (not sure if you control these).
It might be a good idea to catch the exceptions you can think of and include a catch-all (Exception exp).
Then treat the catch-all as the worst case: roll back (where you can) and log the exception for later analysis.
You don't say what types you are selecting either; keep in mind that text/blob columns take a lot more space and could cause issues internally if you buffer any data, etc.
Though on a quick re-read: you don't need to roll back if you are only selecting.
I think you would be better off thinking about what you are hoping to achieve and whether knowing all possible problems will actually help.
HTH