Logic Apps - Bad Gateway 502 executing SQL Server stored procedure - sql-server-2016

I am trying to execute an on-premises stored procedure that inserts 1000 records at a time from a Logic App. I have tested the stored procedure separately and it takes less than a second to insert the data, and I have also tested the on-premises gateway connection from the Logic App with another procedure. When I run the Logic App, which gets data from another web service in JSON format and passes it to the on-premises SQL Server stored procedure, I get a Bad Gateway (502) error.
I see a similar question on Stack Overflow, but it isn't answered. As I don't see much helpful information in the error, can anyone advise how to troubleshoot this issue? All I can see is that the command isn't supported. I appreciate the help!
{
"error": {
"code": 502,
"source": "logic-apis-centralus.azure-apim.net",
"clientRequestId": "xxxxxxx",
"message": "BadGateway",
"innerError": {
"status": 502,
"message": **"The command '**[ AzureConnection = [gateway=\"true\",server=\"xxx.com\",database=\"xxx\"],\r\n request = [Connection = AzureConnection],\r\n dataSet = \"default\",\r\n procedure = \"[dbo].[UpsertEventData]\",\r\n parameters = [jsonObject=[\r\n {\r\n \"eventId\": \"a835795db0f7a13ab03be8e41bde7f56bc9b772905d82e7c111252d998a002be\",\r\n \"startTime\": \"2018-09-26T00:23:06+00:00\",\r\n \"endTime\": \"2018-09-26T00:24:50+00:00\",\r\n \"userId\": \"6aa080743b27d1a9a67afd2683ede38a\",\r\n \"channelId\": \"987fb87a4f7b4d867c1f391d38ec98b2\",\r\n \"shareId\": null,\r\n \"deviceId\": \"bdf8ada28095551a705737900d7c5bf3\",\r\n \"divisionId\": null,\r\n,\r\n \"contactId\": null,\r\n \"type\": \"asset-in-app-viewed\",\r\n \"page\": 1\r\n }, **isn't supported.**\r\n **inner exception:** The command '[ AzureConnection = [gateway=\"true\",server=\"xxx.com\",database=\"xxx\"],\r\n request = [Connection = AzureConnection],\r\n dataSet = \"default\",\r\n procedure = \"[dbo].[UpsertEventData]\",\r\n parameters = [jsonObject=[\r\n {\r\n \"eventId\": \"a835795db0f7a13ab03be8e41bde7f56bc9b772905d82e7c111252d998a002be\",\r\n \"startTime\": \"2018-09-26T00:23:06+00:00\",\r\n \"endTime\": \"2018-09-26T00:24:50+00:00\",\r\n \"userId\": \"6aa080743b27d1a9a67afd2683ede38a\",\r\n \"channelId\": \"987fb87a4f7b4d867c1f391d38ec98b2\",\r\n \"shareId\": null,\r\n \"deviceId\": \"bdf8ada28095551a705737900d7c5bf3\",\r\n \"divisionId\": null,\r\n \"assetId\": \"44ecbff1f7427e1fdc2bc3e1ad47f827\",\r\n \"contactId\": null,\r\n \"type\": \"asset-in-app-viewed\",\r\n \"page\": 1\r\n },\r\n {\r\n \"eventId\": \"a835795db0f7a13ab03be8e41bde7f5666f3195ab9eaf651a159d6ce22b04cc7\",\r\n \"startTime\": \"2018-09-26T00:25:45+00:00\",\r\n \"endTime\": \"2018-09-26T00:25:49+00:00\",\r\n \"userId\": \"6aa080743b27d1a9a67afd2683ede38a\",\r\n \"channelId\": \"7ae82...**' isn't supported.**\r\nclientRequestId: 7d9cbf4d-b50f-4905-8825-554f7e48d54c",
"source": "sql-logic-cp-centralus.logic-ase-centralus.p.azurewebsites.net"
}
}
}

Finally, I was able to fix the issue with some debugging of the Logic App and the stored procedure.
Fix: the issue was passing the 'Parse JSON' output from the Logic App directly as the input to the stored procedure. I used the string conversion function to fix it (see the sketch below).
I wish a future Logic Apps release would return more specific exception details instead of a generic Bad Gateway error.
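For anyone hitting the same thing, this is roughly what the change looks like in the 'Execute stored procedure' action. Assuming the Parse JSON action is named 'Parse_JSON' (the action name in your workflow may differ) and the procedure takes the jsonObject parameter shown in the error, instead of passing the parsed body directly:
@{body('Parse_JSON')}
pass it through the string() conversion:
@{string(body('Parse_JSON'))}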

Related

Big JSON record to BigQuery is not showing up

I wanted to try uploading big JSON record objects to BigQuery.
I am talking about JSON records of 1.5 MB each, with a complex schema nested up to seven levels deep.
For simplicity, I started by loading a file with a single record on one line.
At first I tried to have BigQuery autodetect my schema, but that resulted in a table that is not responsive and that I cannot query, although it says it contains at least one record.
Then, assuming that my schema could be too hard for the loader to infer, I wrote the schema myself and tried to load my file with the single record.
At first I got a simple error saying just "invalid":
bq load --source_format=NEWLINE_DELIMITED_JSON invq_data.test_table my_single_json_record_file
Upload complete.
Waiting on bqjob_r5a4ce64904bbba9d_0000015e14aba735_1 ... (3s) Current status: DONE
BigQuery error in load operation: Error processing job 'invq-test:bqjob_r5a4ce64904bbba9d_0000015e14aba735_1': JSON table encountered too many errors, giving up. Rows: 1; errors: 1.
Checking the error for the job gave me just the following:
"status": {
"errorResult": {
"location": "file-00000000",
"message": "JSON table encountered too many errors, giving up. Rows: 1; errors: 1.",
"reason": "invalid"
},
"errors": [
{
"location": "file-00000000",
"message": "JSON table encountered too many errors, giving up. Rows: 1; errors: 1.",
"reason": "invalid"
}
],
"state": "DONE"
},
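(For anyone wanting to reproduce the check, the job details above can be dumped with something like the following bq command, using the job ID from the load output:)
bq --format=prettyjson show -j invq-test:bqjob_r5a4ce64904bbba9d_0000015e14aba735_1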
Then, after a couple more attempts creating new tables, it actually started to succeed on the command line, without reporting errors:
bq load --max_bad_records=1 --source_format=NEWLINE_DELIMITED_JSON invq_data.test_table_4 my_single_json_record_file
Upload complete.
Waiting on bqjob_r368f1dff98600a4b_0000015e14b43dd5_1 ... (16s) Current status: DONE
with no error on the status checker...
"statistics": {
"creationTime": "1503585955356",
"endTime": "1503585973623",
"load": {
"badRecords": "0",
"inputFileBytes": "1494390",
"inputFiles": "1",
"outputBytes": "0",
"outputRows": "0"
},
"startTime": "1503585955723"
},
"status": {
"state": "DONE"
},
But no actual records are added to my tables.
I tried to do the same from the web UI, but the result is the same: the completed job shows green, but no actual record is added.
Is there something else I can do to check where the data is going? Maybe some more logs?
I can imagine that I may be close to the 2 MB JSON row size limit, but if so, shouldn't this be reported as an error?
Thanks in advance for the help!!
EDIT:
It turned out the complexity of my schema was the devil here.
My JSON files were valid, but my complex schema had several errors.
It turned out that I had to simplify the schema anyway, because I got a new batch of data where single JSON instances were more than 30 MB, and I had to restructure the data in a more relational way while making smaller rows to insert into the database.
Funnily enough, once the schema was split across multiple entities (that is, simplified), the actual errors/inconsistencies in the schema started to show up in the errors returned, and it was easier to fix them. (Mostly it was new, undocumented nested data which I was not aware of anyway... but still my bad.)
The lesson here is that when a table schema is too long (I didn't experiment with how long precisely is too long), BigQuery just hides behind reporting too many errors to show.
That is the point where you should consider simplifying the schema (and structure) of your data.

GoogleApiException: Google.Apis.Requests.RequestError Backend Error [500] when streaming to BigQuery

I've been streaming data to BigQuery for the past year or so from a service in Azure written in C#, and recently started to get an increasing number of the following errors (most of the requests succeed):
Message: [GoogleApiException: Google.Apis.Requests.RequestError An internal error occurred and the request could not be completed. [500]
Errors [
    Message[An internal error occurred and the request could not be completed.] Location[ - ] Reason[internalError] Domain[global]
] ]
This is the code I'm using in my service:
public async Task<TableDataInsertAllResponse> Update(List<TableDataInsertAllRequest.RowsData> rows, string tableSuffix)
{
    var request = new TableDataInsertAllRequest { Rows = rows, TemplateSuffix = tableSuffix };
    var insertRequest = mBigqueryService.Tabledata.InsertAll(request, ProjectId, mDatasetId, mTableId);
    return await insertRequest.ExecuteAsync();
}
Just like any other cloud service, BigQuery doesn't offer a 100% uptime SLA (it's actually 99.9%), so it's not uncommon to encounter transient errors like these. We also receive them frequently in our applications.
You need to build exponential backoff-and-retry logic into your application(s) to handle such errors. A good way of doing this is to use a queue to stream your data to BigQuery. This is what we do and it works very well for us.
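For example, here is a minimal sketch of what backoff-and-retry could look like around the Update method above, using the same .NET client; the attempt count, delays, and status-code check are illustrative, not a recommendation:
// Rough sketch only: retry the streaming insert on 5xx responses with exponential backoff.
public async Task<TableDataInsertAllResponse> UpdateWithRetry(List<TableDataInsertAllRequest.RowsData> rows, string tableSuffix, int maxAttempts = 5)
{
    var delay = TimeSpan.FromSeconds(1);
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            // Reuse the existing Update method from the question.
            return await Update(rows, tableSuffix);
        }
        catch (Google.GoogleApiException e) when (attempt < maxAttempts && (int)e.HttpStatusCode >= 500)
        {
            // Transient server-side error: wait, then retry with a doubled delay.
            await Task.Delay(delay);
            delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2);
        }
    }
}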
Some more info:
https://cloud.google.com/bigquery/troubleshooting-errors
https://cloud.google.com/bigquery/loading-data-post-request#exp-backoff
https://cloud.google.com/bigquery/streaming-data-into-bigquery
https://cloud.google.com/bigquery/sla

BigQuery Load Job [invalid] Too many errors encountered

I'm trying to insert data into BigQuery using the BigQuery API C# SDK.
I created a new job with newline-delimited JSON data.
When I use:
100 lines of input: OK
250 lines of input: OK
500 lines of input: KO
2500 lines of input: KO
The error encountered is:
"status": {
"state": "DONE",
"errorResult": {
"reason": "invalid",
"message": "Too many errors encountered. Limit is: 0."
},
"errors": [
{
"reason": "internalError",
"location": "File: 0",
"message": "Unexpected. Please try again."
},
{
"reason": "invalid",
"message": "Too many errors encountered. Limit is: 0."
}
]
}
The file works well when I use the bq tool with the command:
bq load --source_format=NEWLINE_DELIMITED_JSON dataset.datatable pathToJsonFile
Something seems to be wrong on the server side, or maybe when I transmit the file, but we cannot get any more detail from the logs than "internal server error".
Does anyone have more information on this?
Thank you.
"Unexpected. Please try again." could either indicate that the contents of the files you provided had unexpected characters, or it could mean that an unexpected internal server condition occurred. There are several questions which might help shed some light on this:
does this consistently happen no matter how many times you retry?
does this directly depend on the lines in the file, or can you construct a simple upload file which doesn't trigger the error condition?
One option to potentially avoid these problems is to send the load job request with configuration.load.maxBadRecords higher than zero.
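As a rough illustration with the .NET client (class and property names are from Google.Apis.Bigquery.v2 as I recall them, so double-check against the version you use), the load configuration would look something like:
// Sketch: a load job configuration that tolerates up to 10 bad records
// instead of failing on the first one (configuration.load.maxBadRecords).
var job = new Google.Apis.Bigquery.v2.Data.Job
{
    Configuration = new Google.Apis.Bigquery.v2.Data.JobConfiguration
    {
        Load = new Google.Apis.Bigquery.v2.Data.JobConfigurationLoad
        {
            SourceFormat = "NEWLINE_DELIMITED_JSON",
            MaxBadRecords = 10,
            DestinationTable = new Google.Apis.Bigquery.v2.Data.TableReference
            {
                ProjectId = "your-project", DatasetId = "dataset", TableId = "datatable"
            }
        }
    }
};
// Pass this Job object to the Jobs.Insert request you already use for the load.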
Feel free to comment with more info and I can maybe update this answer.

Frequent 503 errors raised from BigQuery Streaming API

Streaming data into BigQuery keeps failing due to the following error, which has been occurring more frequently recently:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
{
"code" : 503,
"errors" : [ {
"domain" : "global",
"message" : "Connection error. Please try again.",
"reason" : "backendError"
} ],
"message" : "Connection error. Please try again."
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:145)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:312)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1049)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:410)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
Relevant question references:
Getting high rate of 503 errors with BigQuery Streaming API
BigQuery - BackEnd error when loading from JAVA API
We (the BigQuery team) are looking into your report of increased connection errors. From the internal monitoring, there hasn't been a global spike in connection errors in the last several days. However, that doesn't mean that your tables, specifically, weren't affected.
Connection errors can be tricky to chase down, because they can be caused by errors before they get to the BigQuery servers or after they leave. The more information you can provide, the easier it is for us to diagnose the issue.
The best practice for streaming input is to handle temporary errors like this by retrying the request. It can be a little tricky, since when you get a connection error you don't actually know whether the insert succeeded. If you include a unique insertId with your data (see the documentation here), you can safely resend the request (within the deduplication window, which I think is 15 minutes) without worrying that the same row will get added multiple times.
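As a rough illustration (in C#, matching the snippets earlier on this page, even though this question is Java; the idea is the same in both clients), each streamed row can carry an InsertId:
// Sketch: give each row a deterministic insertId so a retried request is
// de-duplicated by BigQuery rather than inserting the row twice.
// "record" is assumed to be an IDictionary<string, object> payload and
// "eventId" an assumed unique field in it.
var row = new TableDataInsertAllRequest.RowsData
{
    InsertId = record["eventId"].ToString(),  // any stable, unique key works
    Json = record
};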

IBM Worklight - How to call a Stored Procedure with the "OUT" parameter?

We are using a SQL adapter and I'm getting the below error while invoking the stored procedure. Our database is Oracle 11g. Below are our adapter and procedure.
function deals(param) {
    return WL.Server.invokeSQLStoredProcedure({
        procedure : "deals_proc",
        parameters : []
    });
}
and the procedure is
create or replace procedure deals_proc(c1 out sys_refcursor ) AS
begin
open c1 for
select CATEGORYNAME from DEALS;
end deals_proc;
and the error I'm getting is
{
"errors": [
"Runtime: Failed to retrieve data with procedure : deals_proc"
],
"info": [
],
"isSuccessful": false,
"warnings": [
]
}
In the console, the error message is
Failed to retrieve data with procedure : deals_proc
FWLSE0101E: Caused by: [project Test]java.sql.SQLException: ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'DEALS_PROC'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
java.lang.RuntimeException: Failed to retrieve data with procedure : deals_proc
Worklight does not support the out parameter in SQL adapters. See this question: IBM Worklight - How to get OUT parameter when invoking a stored procedure?
The question I linked to also contains an answer providing an elaborate workaround if you wish to try it (basically, override Worklight and implement it in Java code).