I'm running a simple query which reads from a five-column table with five million rows and copies it to a new table adding an extra constant column.
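For context, the copy can be expressed as a query job whose destination is the new table; a minimal sketch using the Java API client is below (the project, dataset, and table names are placeholders, and the constant column is invented for illustration):

import java.io.IOException;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobConfiguration;
import com.google.api.services.bigquery.model.JobConfigurationQuery;
import com.google.api.services.bigquery.model.TableReference;

public class CopyWithConstant {
    public static Job run(Bigquery bigquery) throws IOException {
        // Destination table for the copy (placeholder names).
        TableReference dest = new TableReference()
                .setProjectId("my_project")
                .setDatasetId("my_dataset")
                .setTableId("copy_with_constant");

        JobConfigurationQuery query = new JobConfigurationQuery()
                // Read all five columns and append a constant sixth column.
                .setQuery("SELECT *, 'fixed_value' AS extra_col FROM [my_dataset.source_table]")
                .setDestinationTable(dest)
                // The result is ~5 million rows, so allow large results.
                .setAllowLargeResults(true);

        Job job = new Job().setConfiguration(new JobConfiguration().setQuery(query));
        return bigquery.jobs().insert("my_project", job).execute();
    }
}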
This query keeps on failing with the following message:
{u'status': {u'errorResult': {u'message': u'Unexpected. Please try again.',
                              u'reason': u'internalError'},
             u'errors': [{u'message': u'Unexpected. Please try again.',
                          u'reason': u'internalError'}],
             u'state': u'DONE'}}
Checking the jobs in BigQuery I get:
Errors encountered during job execution. Unexpected. Please try again.
The following jobIds had this problem:
job_6eae11c7792446a1b600fe5554071d32 query FAILURE 02 Aug 09:46:04 0:01:00
job_92ff013841574399a459e8c4296d7c73 query FAILURE 02 Aug 09:45:10 0:00:49
job_a1a11ee7bbec4c08b5e58b91b27aafad query FAILURE 02 Aug 09:43:40 0:00:55
job_496f8af99da94d8292f90580f73af64e query FAILURE 02 Aug 09:42:46 0:00:51
How can I fix this problem?
This is a known bug with large result sets ... we've got a fix and will release it today.
Related
Trying to import a 2.8GB file via Google Cloud Storage to BigQuery, and the job failed with:
Unexpected. Please try again.
Here is some other output:
Job ID: aerobic-forge-504:job_a6H1vqkuNFf-cJfAn544yy0MfxA
Start Time: 5:12pm, 3 Jul 2014
End Time: 7:12pm, 3 Jul 2014
Destination Table: aerobic-forge-504:wr_dev.phone_numbers
Source URI: gs://fls_csv_files/2014-6-11_Global_0A43E3B1-2E4A-4CA9-BD2A-012B4D0E4C69.txt
Source Format: CSV
Allow Quoted Newlines: true
Allow Jagged Rows: true
Ignore Unknown Values: true
Schema:
area: INTEGER
number: INTEGER
The job failed due to a timeout; there is a maximum of 2 hours allowed for processing, after which the import job is killed. I'm not sure why the import was so slow; from what I can tell we only processed at about 100 KB/sec, which is far slower than expected. It is quite possible that the error is transient.
In the future, you can speed up the import by setting allow_quoted_newlines to false, which will allow BigQuery to process the import in parallel. Alternately, you can partition the file yourself and send multiple file paths in the job.
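For example, a load configuration along these lines (a sketch with the Java API client; the bucket, file, and table names are placeholders) turns off quoted newlines and lists several source files so BigQuery can load them in parallel:

import java.io.IOException;
import java.util.Arrays;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobConfiguration;
import com.google.api.services.bigquery.model.JobConfigurationLoad;
import com.google.api.services.bigquery.model.TableReference;

public class ParallelCsvLoad {
    public static Job run(Bigquery bigquery) throws IOException {
        JobConfigurationLoad load = new JobConfigurationLoad()
                .setSourceFormat("CSV")
                // With quoted newlines disabled, BigQuery can split the
                // input and process the pieces in parallel.
                .setAllowQuotedNewlines(false)
                // Alternately, pre-split the file yourself and list each piece.
                .setSourceUris(Arrays.asList(
                        "gs://my_bucket/part-000.csv",
                        "gs://my_bucket/part-001.csv"))
                .setDestinationTable(new TableReference()
                        .setProjectId("my_project")
                        .setDatasetId("my_dataset")
                        .setTableId("phone_numbers"));

        Job job = new Job().setConfiguration(new JobConfiguration().setLoad(load));
        return bigquery.jobs().insert("my_project", job).execute();
    }
}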
Can you try again and let us know whether it works?
I am getting this error and don't understand why:
Error Executing Database Query.
[Macromedia][SQLServer JDBC Driver][SQLServer]Invalid column name 'buildno'.
The error occurred in C:/data/wwwroot/webappsdev/cfeis/redbook/redbook_bio_load.cfm: line 10
8 : select *
9 : from redbook_bio
10 : where build_num = '#session.build_num#'
11 : </cfquery>
12 :
VENDORERRORCODE: 207
SQLSTATE: 42S22
SQL: select * from redbook_bio where buildno = '4700'
DATASOURCE: xxxx
It is saying buildno is an invalid column name, but I do not have that name in my query. I used to, but changed both the column in the database and the column name in the query to build_num. You can see my exact code with line numbers, and that there is no 'buildno' in there. But looking at the SQL statement below that, it is still trying to use 'buildno'.
I had my editor check the directory for anywhere it says buildno and no results came back. I have restarted the CF Service and cleared the cache. Why would it still be trying to run it with buildno instead of build_num like the code says?
There was a cfquery cache setting in the Administrator. We had it set to 100. Apparently clearing the template and component cache doesn't clear the cfquery cache. I changed the query name and it fixed the problem. It most likely could have been fixed by setting the cfquery cache value to 0.
Using the Java SDK I am creating a load job for just a single record with a fairly complicated schema. When monitoring the status of the load job, it takes a surprisingly long time (but perhaps this is due to working out the schema), but then says:
11:21:06.975 [main] INFO xxx.GoogleBigQuery - Job status (21694ms) create_scans_1384744805079_172221126: DONE
11:24:50.618 [main] ERROR xxx.GoogleBigQuery - Job create_scans_1384744805079_172221126 caused error (invalid) with message
Too many errors encountered. Limit is: 0.
11:24:50.810 [main] ERROR xxx.GoogleBigQuery - {
"message" : "Too many errors encountered. Limit is: 0.",
"reason" : "invalid"
}
BTW - how do I tell the job that it can have more than zero errors using Java?
This load job does not appear in the list of recent jobs in the console, and as far as I can see, none of the Java objects contains any more details about the actual errors encountered. So how can I programmatically find out what is going wrong? All I can find is:
if (err != null) {
    log.error("Job {} caused error ({}) with message\n{}",
            jobID, err.getReason(), err.getMessage());
    try {
        log.error(err.toPrettyString());
    } catch (IOException e) {
        // toPrettyString() serializes the error to JSON and can throw IOException
        log.error("Could not serialize the error", e);
    }
}
In general I am having a difficult time finding good documentation for some of these things and am working it out by trial and error and short snippets of code found on here and in older groups. If there is a better source of information than the getting started guides, then I would appreciate any pointers to it. The Javadoc does not really help, and I cannot find any complete examples of loading, querying, testing for errors, cataloging errors and so on.
This job is submitted as a NEWLINE_DELIMITED_JSON record, supplied to the job via:
InputStream dummy = getClass().getResourceAsStream("/googlebigquery/xxx.record");
final InputStreamContent jsonIn = new InputStreamContent("application/octet-stream", dummy);
createTableJob = bigQuery.jobs().insert(projectId, loadJob, jsonIn).execute();
My authentication and so on seems to work correctly, as separate Java code to list the projects and the datasets in the project all works correctly. So I just need help working out what the actual error is - does it not like the schema (I have records nested within records, for instance), or does it think that there is an error in the data I am submitting?
Thanks in advance for any help. The job number cited above is an actual failed load job if that helps any Google staffers who might read this.
It sounds like you have a couple of questions, so I'll try to address them all.
First, the way to get the status of the job that failed is to call jobs().get(projectId, jobId), which returns a job object whose status has an errorResult object with the error that caused the job to fail (e.g. "too many errors"). The errors list on the status is a list of all of the errors on the job, which should tell you which lines hit errors.
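In code, that looks something like this (a sketch; projectId and jobId are whatever your load job used):

import java.io.IOException;
import java.util.List;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.ErrorProto;
import com.google.api.services.bigquery.model.Job;

public class JobErrors {
    public static void dump(Bigquery bigquery, String projectId, String jobId)
            throws IOException {
        Job job = bigquery.jobs().get(projectId, jobId).execute();

        // The single error that caused the job to fail as a whole.
        ErrorProto errorResult = job.getStatus().getErrorResult();
        if (errorResult != null) {
            System.out.println("Job failed: " + errorResult.getMessage());
        }

        // Every error the job hit, including per-record import errors.
        List<ErrorProto> errors = job.getStatus().getErrors();
        if (errors != null) {
            for (ErrorProto err : errors) {
                System.out.printf("%s (%s) at %s%n",
                        err.getMessage(), err.getReason(), err.getLocation());
            }
        }
    }
}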
Note if you have the job id, it may be easier to use bq to look up the job -- you can run bq show <job_id> to get the job error information. If you add --format=prettyjson, it will print out all of the information in the job.
Another hint: you might want to supply your own job id when you create the job -- then even if there is an error starting the job (i.e. the insert() call fails, perhaps due to a network error), you can look up the job to see what actually happened.
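Something like this (a sketch; the id prefix is arbitrary):

import java.util.UUID;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobReference;

public class JobIds {
    // Attach a caller-chosen id before calling jobs().insert(); if the
    // insert() call itself fails (e.g. a network error), you can still poll
    // jobs().get(projectId, jobId) with this id to see what actually happened.
    public static Job withJobId(Job job, String projectId) {
        String jobId = "load_scans_" + UUID.randomUUID();
        return job.setJobReference(new JobReference()
                .setProjectId(projectId)
                .setJobId(jobId));
    }
}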
To tell BigQuery that some errors are allowed during import, you can use the maxBadRecords setting in the load job. See https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/java/latest/com/google/api/services/bigquery/model/JobConfigurationLoad.html#getMaxBadRecords().
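On the Java side that is a single setter on the load configuration (a sketch; the limit of 10 is arbitrary):

import com.google.api.services.bigquery.model.JobConfigurationLoad;

public class LoadSettings {
    // Tolerate up to 10 bad records before the whole job is failed;
    // the default is 0, which matches the "Limit is: 0." message above.
    public static JobConfigurationLoad withTolerance(JobConfigurationLoad load) {
        return load.setMaxBadRecords(10);
    }
}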
I got the "Backend error. Job aborted" message; the job ID is below.
I know this question has been asked before, but I still need some help to try to resolve this.
What happens if this occurs in production? We want to have 5-minute periodic loads.
Thanks in advance.
Errors:
Backend error. Job aborted.
Job ID: job_744a2b54b1a343e1974acdae889a7e5c
Start Time: 4:32pm, 30 Aug 2012
End Time: 5:02pm, 30 Aug 2012
Destination Table: XXXXXXXXXX
Source URI: gs://XXXXX/XXXXXX.csv.Z
Delimiter: ,
Max Bad Records: 99999999
This job hit an internal error. Since you ran this job, BigQuery has been updated to a new version, and a number of internal errors have been fixed. Can you retry your job?
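On the production question: transient backend errors do occur, so a 5-minute periodic loader should resubmit a job that fails this way. A minimal sketch using the Java API client (the retry count and polling interval are arbitrary choices):

import java.io.IOException;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.Job;

public class RetryingLoader {
    // Submit a load job and, if it finishes with a transient backend error,
    // resubmit it a few times so one-off failures don't break the cycle.
    public static Job loadWithRetry(Bigquery bigquery, String projectId, Job job)
            throws IOException, InterruptedException {
        for (int attempt = 0; attempt < 3; attempt++) {
            Job submitted = bigquery.jobs().insert(projectId, job).execute();
            String jobId = submitted.getJobReference().getJobId();

            // Poll until the job reaches the DONE state.
            Job polled;
            do {
                Thread.sleep(5000);
                polled = bigquery.jobs().get(projectId, jobId).execute();
            } while (!"DONE".equals(polled.getStatus().getState()));

            if (polled.getStatus().getErrorResult() == null) {
                return polled; // success
            }
            // Only retry the transient "backendError" case.
            if (!"backendError".equals(polled.getStatus().getErrorResult().getReason())) {
                break;
            }
        }
        throw new IOException("Load failed after retries");
    }
}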
CRM 2011 Server and SQL Server 2008 R2 (running on the same machine)... I installed CRM 2011 with Reporting Extensions. When I try to import the CRM 4.0 database through the CRM 2011 Deployment Manager, it fails. I map all users, and the validation screen passes with warnings and no errors... Why is it failing? Can someone point me in the right direction by looking at the logs?
It doesn't take long before it fails. I copied the lines of the log from just before it failed...
Your help is much appreciated...
Info| Upgrading the views in the MSCRM database
Info| CrmAction execution time; UpgradeDatabaseAction; 00:21:23.6584839
Error| Installer Complete: OrganizationUpgrader - Error encountered
Error| Exception occured during Microsoft.Crm.Tools.Admin.OrganizationUpgrader:
Action Microsoft.Crm.Tools.Admin.UpgradeDatabaseAction failed.
InnerException: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --->
Microsoft.Crm.CrmException: SqlException: Invalid column name 'New_name'. Invalid column name 'New_Details'. Invalid column name 'New_Test'., View Script:
if exists (select * from sysobjects where name = 'New_action' and xtype = 'V')
begin
    drop view [New_action]
end
go
.......... ..........
System.Data.SqlClient.SqlException: Invalid column name
Could anyone please help me with this problem?
Thanks in advance,
Prajosh
I found a solution: I added the missing columns manually.