I am trying to upload multiple files simultaneously to Google BigQuery using the command-line tool. I got the following error:
BigQuery error in load operation: Could not connect with BigQuery server.
Http response status: 503
Http response content:
Service Unavailable
Is there any way to work around this problem?
How do I upload multiple files simultaneously to Google BigQuery using the command-line tool?
Multiple file upload should work (and we use it every day). If you're getting a 503, that indicates something is wrong with the service. One thing you might want to make sure of is that if you're using a * in your command line, it is quoted so that the shell doesn't expand it before it gets passed to bq.
If you're getting a 503 error, can you retry the command with the flag --apilog=- (this needs to be one of the first parameters)? It will dump the interaction with the server to stdout. The problem may be obvious from that log, but if it isn't, can you update your question with the relevant portions of the log? If you're not comfortable posting that information on a public forum, you can e-mail it to me at tigani at google dot com.
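For example, a load invocation with the wildcard quoted and the logging flag placed before the command might look like the line below; the dataset, table, bucket and schema names are placeholders, not values from the question:
bq --apilog=- load mydataset.mytable "gs://mybucket/data_*.csv" ./schema.json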
I have developed an application in Mule 3 to transform data and then upload it as a file to an SFTP location. I have handled all the common errors, such as the HTTP 400 series and 500, but what is the proper status code to return when SFTP fails, for example on file upload, connection or permission errors?
I have searched a lot on the internet and the more I search the more I get lost.
Does anyone have experience with this?
Thanks
If you are asking for a table mapping error codes between SFTP and HTTP, there is no standard for it. These are completely different protocols, so you have to define your own mapping. Most of them will probably map to 5xx in HTTP, with authentication errors probably 403.
I'm not sure which connector version you use, but if you open the documentation of the SFTP connector, for example https://docs.mulesoft.com/sftp-connector/1.4/sftp-documentation, you can see that it lists the errors each operation can throw; the copy operation, for instance, has its own set of errors.
Based on those errors you should implement your logic. The HTTP connector throws similar errors, but in the HTTP namespace. If needed, you can also remap errors to a different, new namespace and implement your logic based on the remapped errors.
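To make the idea concrete, here is one possible mapping sketched as a plain lookup table (shown in Python only for brevity; in Mule you would express this with error handlers or DataWeave). The SFTP error type names and the chosen HTTP codes are assumptions, not a standard:
# Illustrative SFTP-error -> HTTP-status mapping; names and codes are assumptions.
SFTP_TO_HTTP = {
    "SFTP:CONNECTIVITY": 503,         # cannot reach the SFTP server
    "SFTP:ACCESS_DENIED": 403,        # authentication / permission failure
    "SFTP:ILLEGAL_PATH": 500,         # bad remote path built by the application
    "SFTP:FILE_ALREADY_EXISTS": 409,  # conflict on upload
    "SFTP:RETRY_EXHAUSTED": 503,      # gave up after reconnection attempts
}

def http_status_for(error_type):
    # Fall back to 500 for anything not explicitly mapped.
    return SFTP_TO_HTTP.get(error_type, 500)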
I am getting an intermittent HTTP error when I try to load the contents of files in Azure Databricks from ADLS Gen2. The storage account has been mounted using a service principal associated with Databricks and has been given Storage Blob Data Contributor access through RBAC on the data lake storage account. A sample load statement is:
df = spark.read.format("orc").load("dbfs:/mnt/{storageaccount}/{filesystem}/{filename}")
The error message I get is:
Py4JJavaError: An error occurred while calling o214.load. : java.io.IOException: GET https://{storageaccount}.dfs.core.windows.net/{filesystem}/{filename}?timeout=90 StatusCode=412 StatusDescription=The condition specified using HTTP conditional header(s) is not met.
ErrorCode=ConditionNotMet ErrorMessage=The condition specified using HTTP conditional header(s) is not met.
RequestId:51fbfff7-d01f-002b-49aa-4c89d5000000
Time:2019-08-06T22:55:14.5585584Z
This error does not occur with all the files in the filesystem; I can load most of them. It happens only with some of the files, and I'm not sure what the issue is.
This has been resolved now. The underlying issue was due to a change at Microsoft's end. This is the RCA I got from Microsoft Support:
There was a storage configuration that was turned on incorrectly during the latest storage tenant upgrade. This type of error would only show up for namespace-enabled accounts on the latest upgraded tenant. The mitigation for this issue is to turn off the configuration on the specific tenant, and we have kicked off the super sonic configuration rollout for all the tenants. We have since added additional storage upgrade validation for ADLS Gen 2 to help cover this type of scenario.
I had the same problem on one file today. Downloading the file, deleting it from storage and putting it back solved the problem.
Tried renaming the file -> didn't work.
Edit: we are seeing it on more files, apparently at random.
We worked around the problem by copying the entire folder to a new folder and renaming it back to the original name. Jobs run without problems again.
Still, the question remains: why did the files end up in this state?
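For reference, a sketch of that copy-and-rename workaround using Databricks dbutils (only available inside a Databricks notebook; the mount path and folder names are placeholders, so verify on a test folder first):
# Illustrative only: copy the affected folder aside, drop the original, rename the copy back.
src = "dbfs:/mnt/{storageaccount}/{filesystem}/data"       # placeholder paths
tmp = "dbfs:/mnt/{storageaccount}/{filesystem}/data_copy"

dbutils.fs.cp(src, tmp, recurse=True)   # copy every file to the new folder
dbutils.fs.rm(src, recurse=True)        # remove the original folder
dbutils.fs.mv(tmp, src, recurse=True)   # move the copy back to the original name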
Same issue here. After some research, it seems it was probably an If-Match ETag condition failure in the HTTP GET request. Microsoft describes how error 412 is returned when this happens in this post: https://azure.microsoft.com/de-de/blog/managing-concurrency-in-microsoft-azure-storage-2/
Regardless, Databricks seems to have resolved the issue on their end now.
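To illustrate the mechanism (not the exact call Databricks makes), here is a small sketch with the Python requests library against a placeholder blob URL, showing how a GET that is conditional on a stale ETag comes back with 412:
import requests

# Placeholder URL and auth header; this only demonstrates the If-Match / 412 behaviour.
url = "https://{storageaccount}.dfs.core.windows.net/{filesystem}/{filename}"
auth = {"Authorization": "Bearer <access-token>"}

head = requests.head(url, headers=auth)
etag = head.headers.get("ETag")          # the ETag the client believes is current

# If the blob's ETag has changed (or the service reports a different one) by the
# time of the read, the conditional GET fails with 412 ConditionNotMet.
resp = requests.get(url, headers={**auth, "If-Match": etag})
print(resp.status_code)                  # 412 when the precondition is not met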
I'm writing a Python CGI script and trying to test the behaviour of the system when I need to return Status: 500 Internal Server Error.
My script is something like this:
#!/usr/bin/env python3
print("Content-type: text/html")
print("Status: 500 Internal Server Error")
print()
When I run this script, the request shows up in the Apache access log with code 500, but nothing is reported in the error log. I also don't get a "500 page" in the browser.
If an internal error is caused by some other means (e.g., a script that is not executable, or one that emits a bad HTTP header), I do get the "normal" internal server error behaviour.
It seems like Apache is somehow ignoring the status returned from my CGI scripts. I've searched for an answer but couldn't find anything.
Just for clarity, CGI is working fine on this server in every other respect.
Any thoughts? Am I missing something?
Thanks,
Amit
Answering my own question: it seems I was barking up the wrong tree. Based on some clues and more empirical results, it seems that when Apache passes a request to an external script (e.g. a CGI script, PHP, etc.), it expects the external script to handle any error itself: it is the external script's responsibility to return a document that includes the error code and an error message. The external script is also responsible for logging the error (it's usually enough to print it to standard error, and it will be picked up by Apache and written to its error log).
So, for example, if my CGI script needs to report an "Internal Server Error", it is not enough to return just the header (as in my question); it should create and return the whole error page, in HTML format. In addition, it should print an error message to standard error.
I haven't found an official source for that, but perhaps I somehow overlooked it.
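Here is a minimal sketch of a CGI script along those lines; the HTML body and the log message are just illustrative:
#!/usr/bin/env python3
import sys

# Log to standard error; Apache picks this up and writes it to its error log.
print("my-script: something went wrong, returning 500", file=sys.stderr)

# Emit the status and headers, then a complete error document of our own.
print("Status: 500 Internal Server Error")
print("Content-type: text/html")
print()
print("<html><head><title>500 Internal Server Error</title></head>")
print("<body><h1>Internal Server Error</h1>")
print("<p>The server encountered an internal error and could not complete the request.</p>")
print("</body></html>")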
Trying to ingest a 7 MB JSON file with Watson Discovery Service. When using the WDS tooling interface to ingest, the interface indicates successful ingestion, but the document then appears to have failed. The error returned when using the API was: "Your request could not be processed because of a problem on the server". The error doesn't really help in troubleshooting the problem. Any ideas? How do we troubleshoot these problems?
Thank you
The interface indicates successful ingestion because the process is asynchronous; it actually means that the document was uploaded and submitted for ingestion.
Check whether your input JSON file has a top-level array. This type of JSON file is currently not supported, so it might be the cause of your issue.
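A quick way to check for this (and one possible workaround) is sketched below; the file names are placeholders, and splitting the array into one document per element is just one option, depending on how you want the data indexed:
import json

# Check whether the input file has an unsupported top-level array.
with open("input.json") as f:            # placeholder file name
    data = json.load(f)

if isinstance(data, list):
    print("Top-level array with", len(data), "elements - not supported as-is")
    # One possible workaround: write each element out as its own JSON document
    # and ingest those files instead.
    for i, doc in enumerate(data):
        with open("doc_{}.json".format(i), "w") as out:
            json.dump(doc, out)
else:
    print("Top-level value is an object - the array limitation is not the cause")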
I have a file on AWS S3 that is public:
https://s3-eu-west-1.amazonaws.com/voxist-greetings/33631222504/33651291239_95113eed-386b-4264-a4cf-46182faae125COUCOU1.wav
Now when RVD tries to play it I get:
INFO [org.mobicents.servlet.restcomm.interpreter.VoiceInterpreter] (RestComm-akka.actor.default-dispatcher-8586) MediaGroupResponse, succeeded: false jain.protocol.ip.mgcp.JainIPMgcpException: The IVR request failed with the following error code 312
I don't know why... The same file used to work with another name.
Thanks for any hint on how to debug this.
The problem seems to happen on the Media Server side. More specifically, it seems the file cannot be opened for some reason.
The relevant line of code can be found here.
Can you please take a tcpdump and share it, so we can see the MGCP Play request?
Hope this helps.
UPDATE:
Here is an example:
The 200 OK simply indicates that the MGCP transaction completed successfully. Now we need to dissect the notification (NTFY) sent from Media Server to RestComm, mainly the ObservedEvents parameter.
If you look at the picture, you will see the event triggered is an OperationFailed (of) with ReturnCode (rc) equal to 312, which is an error.
Relevant link to specs can be found here.
To summarise, the Media Server receives the request to play the file (in this case a cached version of it) but it fails to open the URL for some reason.
Is the URL reachable from the Media Server side?
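As a quick check, something like the following, run on the Media Server host, would tell you whether the file is reachable from there (standard-library Python; the URL is the one from the question):
import urllib.request

# Simple reachability probe for the WAV file, run from the Media Server host.
url = ("https://s3-eu-west-1.amazonaws.com/voxist-greetings/33631222504/"
       "33651291239_95113eed-386b-4264-a4cf-46182faae125COUCOU1.wav")

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.status, resp.headers.get("Content-Type"))
except Exception as exc:
    print("Not reachable:", exc)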