BigQuery: An internal error occurred and the request could not be completed. Error: 7367027

Trying to run a simple delete statement and the command simply times out and throws the error in the title.
If I change the where clause to operate on only a subset of the data, the delete works, and after that I was able to run the where true version successfully. I can't wrap my head around why I'm getting this error.
I've tried running this from the python library as well as from the console, with both returning the same error. I've also tried deleting the table entirely and then recreating it, which did nothing.
delete from `<table>`
where true
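For reference, a minimal sketch of how such a statement can be run through the google-cloud-bigquery Python client (the table name is the same placeholder as above):
from google.cloud import bigquery

client = bigquery.Client()

# Run the DELETE as a standard query job; result() blocks until the
# job finishes and raises an exception if BigQuery reports an error.
job = client.query("delete from `<table>` where true")
job.result()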

Python: If error occurs anywhere, do specific line of code

I have a script I'm trying to write to process a large amount of data. There is, of course, potential for errors. In the script I need to connect to databases. If the script encounters an error, the code never reaches the point where the connection to the database is terminated. I'd like to have something in my Python code that will recognize that an error occurred, no matter where, and if nothing else at least close those databases. Does something like this exist? I know I can use try/except, but that would only work if I knew exactly where the error could occur. I'm basically looking for a catch-all to close my databases in the event an error occurs in a location I didn't anticipate.
To run certain cleanup code even if there is an error, use the finally block:
try:
    ...  # do stuff, possible exception
except Exception:
    ...  # run this if an exception was raised
finally:
    ...  # always run this, even if there was an exception
Reference: https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions
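For example, a minimal sketch of the database case (sqlite3 stands in for whatever driver you actually use, and process() is a hypothetical stand-in for your real work):
import sqlite3  # stand-in for whatever database module you use

def process(conn):
    # Hypothetical stand-in for your real data processing,
    # which may raise an exception anywhere.
    conn.execute("select 1")

conn = sqlite3.connect("example.db")
try:
    process(conn)
except Exception as exc:
    print("processing failed:", exc)
    raise
finally:
    # Always runs, so the connection is closed even for
    # errors raised in places you did not anticipate.
    conn.close()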

NodeJS + Express "Error: -2006, Can not bind parameter(s)"

My company is working on converting from ColdFusion to NodeJS with Express, and I'm running into an error trying to update some data in SQLAnywhere.
I have one update function working with 5 pieces of data. I'm working on my second, with 23 data points, but I'm running into an error stating:
"Error: Code: -2006 Msg: Can not bind parameter(s)."
I can't find any information about this online, not even using the error code. Any help, or pointing me in the right direction, would be appreciated.
It turns out it was trying to save integers into "char" fields in the database. Odd that we never had this issue with ColdFusion, but wrapping the values in String(…) seemed to solve the issue.

Getting exception while reading data from blob in azure

While I am trying to read the list of blob data on azure, I am getting the following error:
Function evaluation disabled because a previous function evaluation timed out. You must continue execution to reenable function evaluation.
How to resolve this?
Please see the following link. Your code likely has an endless loop. https://msdn.microsoft.com/en-us/library/ms234762.aspx

robot framework: exception handling

Is it possible to handle exceptions from the test case? I have 2 kinds of failure I want to track: a test failed to run, and a test ran but received the wrong output. If I need to raise an exception to fail my test, how can I distinguish between the two failure types? So say I have the following:
*** Test Cases ***
Case 1
    Login    1.2.3.4    user    pass
    Check Log For    this log line
If I can't log in, then the Login Keyword would raise an ExecutionError. If the log file doesn't exist, I would also get an ExecutionError. But if the log file does exist and the line isn't in the log, I should get an OutputError.
I may want to immediately fail the test on an ExecutionError, since it means my test did not run and there is some issue that needs to be fixed in the environment or with the test case. But on an OutputError, I may want to continue the test: the failure may affect only a single piece of output, and it may still be valuable to check the rest of the output.
How can this be done?
Robot has several keywords for dealing with errors, such as Run Keyword And Ignore Error, which can be used to run another keyword that might fail. From the documentation:
This keyword returns two values, so that the first is either string PASS or FAIL, depending on the status of the executed keyword. The second value is either the return value of the keyword or the received error message. See Run Keyword And Return Status if you are only interested in the execution status.
That being said, it might be easier to write a python-based keyword which calls your Login keyword, since it will be easier to deal with multiple exceptions.
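For instance, a sketch of what such a library could look like (the file name, log path, and keyword below are hypothetical; ROBOT_CONTINUE_ON_FAILURE is a standard Robot Framework attribute that marks an exception as a continuable failure):
# LoginLibrary.py -- hypothetical keyword library
class ExecutionError(RuntimeError):
    """The test could not run: fail the test case immediately."""

class OutputError(AssertionError):
    """The test ran but produced wrong output; let the test continue."""
    ROBOT_CONTINUE_ON_FAILURE = True

def check_log_for(line, log_path="/var/log/app.log"):
    try:
        with open(log_path) as log_file:
            contents = log_file.read()
    except IOError as err:
        # A missing or unreadable log means the test did not really run.
        raise ExecutionError("could not read %s: %s" % (log_path, err))
    if line not in contents:
        # Wrong output: report it, but keep executing the test case.
        raise OutputError("%r not found in %s" % (line, log_path))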
You can use something like this:
${err_msg}=    Run Keyword And Expect Error    *    <Your keyword>
Should Not Be Empty    ${err_msg}
There are a couple of different variations you could try for the first statement above, like Run Keyword And Continue On Failure, Run Keyword And Expect Error, and Run Keyword And Ignore Error.
Options for the second statement above are Should Be Equal As Strings, Should Contain, and Should Match.
You can explore more of Robot's built-in keywords.

Determine actual errors from a load job

Using the Java SDK I am creating a load job for just a single record with a fairly complicated schema. When monitoring the status of the load job, it takes a surprisingly long time (but perhaps this is due to working out the schema), but then says:
11:21:06.975 [main] INFO xxx.GoogleBigQuery - Job status (21694ms) create_scans_1384744805079_172221126: DONE
11:24:50.618 [main] ERROR xxx.GoogleBigQuery - Job create_scans_1384744805079_172221126 caused error (invalid) with message
Too many errors encountered. Limit is: 0.
11:24:50.810 [main] ERROR xxx.GoogleBigQuery - {
  "message" : "Too many errors encountered. Limit is: 0.",
  "reason" : "invalid"
}
BTW - how do I tell the job that it can have more than zero errors using Java?
This load job does not appear in the list of recent jobs in the console, and as far as I can see, none of the Java objects contains any more details about the actual errors encountered. So how can I programmatically find out what is going wrong? All I can find is:
if (err != null) {
    log.error("Job {} caused error ({}) with message\n{}", jobID, err.getReason(), err.getMessage());
    try {
        log.error(err.toPrettyString());
    }
    ...
In general I am having a difficult time finding good documentation for some of these things and am working it out by trial and error and short snippets of code found here and in older groups. If there is a better source of information than the getting-started guides, I would appreciate any pointers to it. The Javadoc does not really help, and I cannot find any complete examples of loading, querying, testing for errors, cataloging errors and so on.
This job is submitted as a NEWLINE_DELIMITED_JSON record, supplied to the job via:
InputStream dummy = getClass().getResourceAsStream("/googlebigquery/xxx.record");
final InputStreamContent jsonIn = new InputStreamContent("application/octet-stream", dummy);
createTableJob = bigQuery.jobs().insert(projectId, loadJob, jsonIn).execute();
My authentication and so on seems to work correctly, since separate Java code to list the projects and the datasets in the project works correctly. So I just need help working out what the actual error is - does it not like the schema (I have records nested within records, for instance), or does it think there is an error in the data I am submitting?
Thanks in advance for any help. The job number cited above is an actual failed load job if that helps any Google staffers who might read this.
It sounds like you have a couple of questions, so I'll try to address them all.
First, the way to get the status of the job that failed is to call jobs().get(jobId), which returns a job object that has an errorResult object containing the error that caused the job to fail (e.g. "too many errors"). The errors list is a list of all of the errors on the job, which should tell you which lines hit errors.
Note that if you have the job id, it may be easier to use bq to look up the job -- you can run bq show -j <job_id> to get the job error information. If you add --format=prettyjson, it will print out all of the information in the job.
One more hint: consider supplying your own job id when you create the job -- then even if there is an error starting the job (i.e. the insert() call fails, perhaps due to a network error), you can look up the job to see what actually happened.
To tell BigQuery that some errors are allowed during import, you can use the maxBadRecords setting in the load job. See https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/java/latest/com/google/api/services/bigquery/model/JobConfigurationLoad.html#getMaxBadRecords().
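In Java, the JobConfigurationLoad model class linked above pairs that getter with a corresponding setter. For comparison, here is a minimal sketch of both pieces -- allowing bad records and reading back the per-record errors -- using the google-cloud-bigquery Python client; the URI and table names are placeholders:
from google.cloud import bigquery

client = bigquery.Client()

# Allow up to 5 bad records before the load job fails outright.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    max_bad_records=5,
)

load_job = client.load_table_from_uri(
    "gs://bucket/xxx.record",   # placeholder source
    "project.dataset.table",    # placeholder destination
    job_config=job_config,
)

try:
    load_job.result()  # waits for completion; raises if the job failed
except Exception:
    print(load_job.error_result)   # the fatal error, e.g. "invalid"
    for err in load_job.errors or []:
        print(err)                 # every per-record error on the job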