I would like to monitor the state of my BigQuery scheduled queries in a Cloud Monitoring dashboard. I have created several logs-based metrics to track errors in other services/resources, but I'm having trouble finding any indication of scheduled query errors in Cloud Logging.
From the Scheduled Queries page in the BigQuery UI, I can check the run details of failed scheduled queries, and it shows some log entries explaining the error, e.g.
9:02:59 AM Error code 8 : Resources exceeded during query execution: Not enough resources for query planning - too many subqueries or query is too complex..; JobID: PROJECT:12345abc-0000-12a3-1234-123456abcdf
9:00:17 AM Starting to process the query job with no parameters.
9:00:00 AM Dispatched run to data source with id 1234567890
But for some reason I cannot find any of these messages in Cloud Logging. For successful jobs there are some entries in the BigQuery logs, but the failed jobs are missing completely.
Any idea how to view failed scheduled queries in Cloud Logging or Cloud Monitoring?
You can use the following advanced filter to find all the BigQuery errors related to "jobservice.insert":
resource.type="bigquery_resource"
protoPayload.serviceName="bigquery.googleapis.com"
protoPayload.methodName="jobservice.insert"
severity: "ERROR"
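If you want to consume these entries programmatically, for example to feed a custom alert, here is a minimal sketch using the google-cloud-logging Python client. It assumes Application Default Credentials and a placeholder project ID; the filter string is the same one shown above.

# Minimal sketch: list recent BigQuery "jobservice.insert" errors.
# Assumes Application Default Credentials; the project ID is a placeholder.
from google.cloud import logging

client = logging.Client(project="your-project-id")
bq_error_filter = (
    'resource.type="bigquery_resource" '
    'protoPayload.serviceName="bigquery.googleapis.com" '
    'protoPayload.methodName="jobservice.insert" '
    'severity=ERROR'
)
for entry in client.list_entries(filter_=bq_error_filter, page_size=10):
    print(entry.timestamp, entry.payload)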
Even a simple query like:
resource.type="bigquery_resource"
severity: "ERROR"
retrieves all the BigQuery-related errors.
Once you find the one related to failed scheduled queries, you can click on the protoPayload of the result and select "Show matching entries" to start constructing your own advanced query.
I was able to assemble this filter using the Advanced logs queries documentation and the BigQuery documentation.
Please verify the permissions you have. In my case, I was able to see failed scheduled queries in Cloud Logging in a project where I had been set up as Owner; in another project, where I am only an Editor, I was unable to see the errors, just as in your case.
The filter below works (e.g. I am able to get errors related to "Permission denied while getting Drive credentials" - missing credentials on the Google Drive files for the service account running the scheduled query):
resource.type="bigquery_resource"
protoPayload.serviceName="bigquery.googleapis.com"
protoPayload.methodName="jobservice.insert"
severity=ERROR
Related
I am using MicroStrategy to connect to BigQuery using a service account. I want to collect user-level job statistics from MSTR, but since I am using a service account, I need a way to track user-level job statistics in BigQuery for all the jobs executed via MicroStrategy.
Since you are using a service account to make the requests from MicroStrategy, you could list all of your project's jobs and then, using each job ID in the list, retrieve the details of that job, which include the email used to run it.
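A minimal sketch of that approach with the google-cloud-bigquery Python client; the project ID is a placeholder, and all_users=True asks for jobs from every user the caller is allowed to see, not just its own:

# Minimal sketch: list recent jobs and the email that ran each one.
# Assumes Application Default Credentials; the project ID is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")
for job in client.list_jobs(all_users=True, max_results=50):
    # user_email is the identity (user or service account) that submitted the job
    print(job.job_id, job.job_type, job.state, job.user_email)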
A workaround would also be to use Stackdriver Logging advanced filters with a filter that retrieves the jobs made by the service account. For instance:
resource.type="bigquery_resource"
protoPayload.authenticationInfo.principalEmail="<your service account>"
Keep in mind this only shows jobs from the last 30 days, due to the logs retention period.
Hope it helps.
SQL jobs are run from SQL Server Agent. At the moment, we don't have email notification configured for SQL job failures.
I am considering fetching the errors from the log table, and then having a stored procedure generate the error report.
select top 10 * from msdb.dbo.sysjobhistory order by instance_id desc
How do I log SQL job failures to the log table? And when a job is scheduled to run every 5 minutes, will the error be updated in place, or will each error be inserted as a new record?
Follow these steps:
Open the job's Properties
Go to Steps
Open the step by pressing the Edit button
Navigate to Advanced
Change the On failure action to "Quit the job reporting failure"
Then check "Log to table"; if you want the logged output kept across runs rather than overwritten, also check "Append output to existing entry in table". (The job history in msdb.dbo.sysjobhistory always gets a new row per run; nothing is updated in place.)
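If you then want to read the failures back out of msdb programmatically, for the error report mentioned in the question, here is a minimal sketch assuming pyodbc and a locally reachable SQL Server; the connection string is a placeholder.

# Minimal sketch: pull the 10 most recent job/step failures from msdb.
# Assumes pyodbc and a placeholder trusted connection to a local instance.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=msdb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT TOP 10 j.name, h.step_name, h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE h.run_status = 0        -- 0 = Failed
    ORDER BY h.instance_id DESC   -- newest history rows first
""")
for row in cursor.fetchall():
    print(row.name, row.step_name, row.message)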
I ran into the limitations of the built-in logs, job history, etc., where they don't capture the case in which a step fails but the job itself doesn't.
Based on this sqlshack article, I built and implemented something very similar that maintains a permanent history of job/step failures, etc., even if you delete a job/step. This solution will even notify you when the job fails.
Best of luck!
I want to get job information from the job history server for the Tez execution engine.
Currently, all MapReduce jobs are reflected on the job history server, but the Tez ones are not.
The job history server uses some kind of logs to gather all this information. Where can I find those logs? If the information is not available on the job history server, I can parse those logs to get the information I need.
I have already tried parsing the Pig-on-Tez command-line logs, but they do not contain enough information, and that approach does not work for Hive on Tez.
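One possible direction, offered as an assumption rather than a confirmed answer: when Tez is configured to publish history to the YARN Application Timeline Server (the yarn.timeline-service.* settings), DAG information can be fetched from the timeline server's REST API instead of the MapReduce job history server. A minimal sketch; the host, port, and field names are assumptions based on ATS v1 conventions.

# Minimal sketch: query the YARN Application Timeline Server for Tez DAGs.
# The host/port are placeholders; field names assume the ATS v1 JSON layout.
import requests

ATS = "http://timeline-server.example.com:8188"
resp = requests.get(ATS + "/ws/v1/timeline/TEZ_DAG_ID", params={"limit": 10})
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    # "entity" is the DAG id; DAG status is typically kept under "otherinfo"
    print(entity.get("entity"), entity.get("otherinfo", {}).get("status"))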
I have been looking at the new BigQuery logging feature in the Cloud Platform Console, but it seems a bit inconsistent in what is being logged.
I can see some creates, deletes, inserts, and queries. However, when I ran a few queries and copy jobs through the web UI, they did not show up.
Should activity in the BigQuery web UI also be logged?
Does it differ depending on where the request comes from, e.g. console or API access?
There is no difference between console and API access. Activity in the BigQuery web UI should be logged.
Are you using the Cloud Log viewer to view these logs? In some cases there might be a delay of a few seconds before these logs show up in the log viewer, and you might have to refresh.
Logs containing information about queries are written to the Data Access log stream as opposed to the Admin Activity stream. Only users with Project Owner permissions can view the contents of the Data Access logs which might be why you aren't seeing these.
You should check with the project owner to confirm you have the right permissions to see these logs.
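For reference, those query entries live in the data_access audit log stream, so a filter along these lines (the project ID is a placeholder) targets that stream directly once you have the permissions:

logName="projects/YOUR_PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access"
resource.type="bigquery_resource"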
To view the Data Access audit logs, you must have the Cloud IAM role Logging/Private Logs Viewer or Project/Owner. I had a similar issue recently, and after being granted Logging/Private Logs Viewer I was able to see the logs.
Sometimes the query runs perfectly, but lately this error has been appearing: "Backend Error".
I know that my query is huge; it takes about 300 seconds to execute. But I imagine this is a bug on BigQuery's side, so I wonder why this error is happening.
This error started appearing when I was executing some other queries, when I just wanted the results and did not want to export them.
So I started creating a table with the results, hoping that BigQuery would be able to perform the query.
I looked up your job in the BigQuery job database, and it completed successfully after 160 seconds.
BigQuery queries are fundamentally asynchronous. That is, when you run a query, it runs as a named Job by the BigQuery service. Since the original call may time out, the usual best practice is to poll for completion using the jobs.getQueryResults() API. My guess is that this is the API call that actually failed.
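For what it's worth, the modern Python client library implements this polling pattern for you; a minimal sketch, with a placeholder project ID and a trivial query:

# Minimal sketch: insert a query job and poll until it completes.
# Assumes Application Default Credentials; the project ID is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")
job = client.query("SELECT 1")   # inserts the job asynchronously
# result() polls the job status until done and retries transient errors
# on the polling calls, so a flaky status check doesn't fail the query.
rows = job.result(timeout=600)
for row in rows:
    print(row)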
We had reports of an elevated number of Backend Errors yesterday, and we're still investigating. However, these don't appear to be actual failed queries; instead, they are failures in getting the status of queries or getting the results, which should go away on retry.
How did you run the query? Did you use the BigQuery Web UI? If you are using the API, did you call the bigquery.jobs.insert() api or the bigquery.jobs.query() api?