“Error: Connection error. Please try again.” when uploading a table - google-bigquery

I am trying to upload a JSON file through the web UI, but I am receiving this generic error message: "Error: Connection error. Please try again." Can you please let me know what's wrong?
Job Id = job_VmEiQY0xYPWjjLa-Knaz-C3INNA
Thanks.

It looks like your job encountered a transient error in one of our data centers that prevented us from loading your data into BigQuery. This problem appears to be resolved as of 2014-02-12.
As always, we recommend that you write client code that retries on error. We also recommend that you generate your own job IDs when loading data. That way, if you encounter an error, you can retry with the same job ID and be assured that at most one of your attempts to load the data will succeed.
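The job-ID pattern described above can be sketched in Python. This is an illustrative sketch, not the real BigQuery client library: `submit_load_job` is a hypothetical stand-in for the load-job API, included only to show how generating the job ID once, before the first attempt, makes retries safe (the real service deduplicates on job ID the same way).

```python
import uuid

# Simulated backend state: job IDs the service has already accepted.
# In real BigQuery, re-submitting a load with an existing job ID
# returns the existing job instead of loading the data a second time.
_accepted_jobs = {}

def submit_load_job(job_id, rows):
    """Hypothetical stand-in for a BigQuery load-job submission."""
    if job_id in _accepted_jobs:
        return _accepted_jobs[job_id]          # duplicate: no second load
    _accepted_jobs[job_id] = {"job_id": job_id, "rows_loaded": len(rows)}
    return _accepted_jobs[job_id]

def load_with_retry(rows, attempts=3):
    # Generate the job ID once, *before* the first attempt, so every
    # retry reuses it and at most one load can succeed.
    job_id = f"my_load_{uuid.uuid4().hex}"
    last_error = None
    for _ in range(attempts):
        try:
            return submit_load_job(job_id, rows)
        except ConnectionError as exc:         # transient network failure
            last_error = exc
    raise last_error
```

Because the ID is fixed up front, a retry after an ambiguous failure (e.g. a dropped connection where the job may or may not have started) cannot duplicate the data.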

Related

Azure Data Factory Failing with Bulk Load

I am trying to extract data from an Azure SQL Database, but I'm getting the following error:
Operation on target Copy Table to EnrDB failed: Failure happened on 'Source' side. ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Cannot bulk load because the file "https://xxxxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/appointment/2022-03.csv" could not be opened. Operating system error code 12(The access code is invalid.).
You might be thinking this is a permissions issue, but if you take a look at error code 12 you will see the issue is related to bulk load. A related answer can be found here:
https://learn.microsoft.com/en-us/answers/questions/988935/cannot-bulk-load-file-error-code-12-azure-synapse.html
I thought I might be able to fix the issue by selecting Bulk lock (see image).
But I still get the error.
Any thoughts on how to resolve this issue?
As I see it, the error refers to the source side (2022-03.csv), so I am not sure why you are making changes on the sink side. As explained in the thread you referred to, it appears the CSV file is being updated by some other process once your pipeline starts executing. Referring back to the same thread: https://learn.microsoft.com/en-us/answers/questions/988935/cannot-bulk-load-file-error-code-12-azure-synapse.html
The changes suggested below should be made on the pipeline/process that is writing to 2022-03.csv.
[![enter image description here][1]][1]
HTH
[1]: https://i.stack.imgur.com/SSzwt.png

BigQuery Error: 33652656 | I can't directly contact Google

I've been trying to connect a CSV I have in Google Drive to a BigQuery table for a week but I've been getting the following error:
"An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 33652656"
Since I have Basic Support I think I can't contact Google directly to report it. What can I do?
If you can generate a version of your sheet/CSV file that demonstrates the issue and is suitable for inclusion in a public issue tracker (e.g. any sensitive info is redacted), posting to the BigQuery public issue tracker may be another path forward.
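The back-off retry the error message recommends can be sketched as a small helper. This is a generic sketch, not an official API: `RuntimeError` stands in for whatever transient exception your client raises, and the delay parameters are illustrative.

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry a callable with exponential back-off plus jitter, as the
    BigQuery SLA suggests for transient internal errors."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError:                   # stand-in for a transient API error
            if attempt == max_attempts - 1:
                raise                          # out of attempts: surface the error
            # Delays grow as base_delay * 1, 2, 4, 8, ... plus random jitter
            # so many clients retrying at once don't synchronize.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```

Wrap the job submission in `retry_with_backoff(lambda: run_my_job())`; if the error really is transient, one of the later attempts should succeed.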

BigQuery preview: Unknown error response from the server

I'm getting an "Unknown error response from the server" error on a lot of my datasets when trying to do a preview. I have had this for a couple of days now with no workaround. Do you have any idea what's happening or how to solve this? Is there a bug tracker for BigQuery or some other way to reach the Google Cloud Platform staff?
This issue happens because the request created by the "preview" command is too large for some tables with a lot of fields. We are implementing a partial solution that reduces the size of the request. The changes are currently being rolled out in all regions (issuetracker).
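While the fix rolls out, a possible workaround is to skip the preview and run a small query over a subset of columns, which keeps the request well under the size that trips the error. A hypothetical helper that builds such a query (table and column names are placeholders):

```python
def preview_query(table, columns, limit=10):
    """Build a small SELECT that can substitute for the UI preview
    when a table has too many fields to preview directly."""
    cols = ", ".join(columns)
    return f"SELECT {cols} FROM `{table}` LIMIT {limit}"
```

For example, `preview_query("myproject.mydataset.mytable", ["id", "created_at"])` produces a ten-row query over just two columns, which you can paste into the query editor instead of clicking preview.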

Google Bigquery: An internal error occurred and the request could not be completed

I am having difficulty getting a BigQuery job to execute from the Web Interface. If I try to run the job I get the error message
Error: An internal error occurred and the request could not be completed.
Job ID: rhi-localytics-db:job_V-6F5YOk0k9ENTgDfGX84Ghnxz8
Does anyone have any idea what this error message means? The query I'm using is not terribly complicated.
Thanks,
Brad
I checked the internal error details and your query appears to have hit a transient internal error. The error should have nothing to do with your specific query. We'll investigate internally to reduce the occurrence of errors like this.
Does your query reliably fail with this error if you rerun it, or did you only receive this error on the one query job?
Thank you for the report. We are now tracking the issue internally.

BigQuery ingestion throws "tableUnavailable" message: What causes this error specifically?

Today, after many successful loads into a BigQuery table, I received this error message:
tableUnavailable
Something went wrong with the table you queried. Contact the table owner for assistance
I do not see this error in the error table: https://cloud.google.com/bigquery/troubleshooting-errors#errortable
What conditions could cause this error? Other load jobs, using the same code and in the same dataset, do not display this error.
What causes a "tableUnavailable" message?
There are two cases that I can think of:
First, this error can be returned for queries over a (small) set of tables that BigQuery exposes access to, but are not directly managed by the BigQuery team itself. You can consider these equivalent to "internalError" from a troubleshooting perspective.
These data sources are typically accessible to GCP customers that have specific relationships with Google product teams exposing their data in BigQuery.
We expose these under a different error code since you will resolve the issue more quickly by contacting the group that granted you access to their data. Going through BigQuery customer support to get this resolved will work too; it will just take a little longer.
Second: you encountered this through a load job, so the case above clearly doesn't apply! We are testing a new load implementation that is faster than the current one, and I suspect some errors are mapped slightly differently now.
In this case, I suspect you encountered a "backendError" and should try the operation again. If you can give us a project_id:job_id of a job that hit this problem, we can verify this and make sure the error mapping is more consistent.
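The advice above, treating a "tableUnavailable" from a load job like a "backendError" and retrying, can be sketched as a small predicate over the job's error list. The error-dict shape follows the `status.errors` field of the BigQuery job resource; the exact set of reasons worth retrying is an assumption for illustration.

```python
# Error reasons that usually indicate a transient condition worth
# retrying (illustrative set; "tableUnavailable" is included per the
# answer above, since in a load job it maps to a backend problem).
RETRYABLE_REASONS = {"backendError", "internalError", "tableUnavailable"}

def should_retry(job_errors):
    """Return True if every error on a failed job looks transient,
    so resubmitting the job is a reasonable next step."""
    if not job_errors:
        return False                 # no errors recorded: nothing to retry
    return all(e.get("reason") in RETRYABLE_REASONS for e in job_errors)
```

A caller would inspect the failed job's errors, e.g. `should_retry(job["status"]["errors"])`, and resubmit only when this returns True; permanent errors such as `invalid` fall through to normal error handling.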
Thank you!