Inability to call SPROC from Azure Logic Apps - can't find syntax for the parameters - sql

Statement of intent:
I'm trying to automate a workflow, moving data periodically from a CSV in SharePoint into a table in Azure SQL Database. I've gotten as far as 1) formatting a JSON array, and 2) creating a SPROC that successfully takes the text of the JSON array and imports it into the appropriate table.
Array appears like:
JSON = [{"col1":"col1Data","col2":"col2Data", ...}, <600-some more iterations>]
Invocation of stored procedure in SQL Management Studio looks like:
EXECUTE SprocName @json=N'<text of JSON above>'
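For reference, a procedure like that can be written with OPENJSON; here is a minimal sketch (the table and column names are placeholders, assuming SQL Server 2016+ / Azure SQL Database):
CREATE PROCEDURE [dbo].[SprocName]
    @json NVARCHAR(MAX)
AS
BEGIN
    -- Shred the JSON array: one row per array element, one column per property.
    INSERT INTO dbo.TargetTable (col1, col2)
    SELECT col1, col2
    FROM OPENJSON(@json)
    WITH (
        col1 NVARCHAR(200) '$.col1',
        col2 NVARCHAR(200) '$.col2'
    );
END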
===========================================
Problem:
Lack of documentation on how to properly format the parameters of either of the following two SQL connectors so as to link these two statements together:
Both Execute a Query (v2) and Execute a Stored Procedure (v2) require that parameters or query text be provided, but give no indication of how those parameters should be formatted.
For example, for executing a stored procedure that takes a single parameter @json, the following text "looks" correct, but results in an error:
"body": "@json=N'+@string(outputs('Convert_Rows_To_Json').body)+'"
Error:
Failed to save logic app UpdateDomainCoverage. The template validation failed: 'The template action 'Execute_stored_procedure_(V2)' at line '1' and column '3148' is not valid: "The template language expression 'json=N'+@string(outputs('Convert_Rows_To_Json').body)+'' is not valid: the string character '=' at position '4' is not expected.".'.
I've tried a number of variations, both for the @json parameter on Execute Stored Procedure and for building the query from whole cloth in Execute a Query, to no avail. Suggestions?

Here is a sample from Code View of calling a stored procedure with a parameter 'from' that takes a datetime value. When you pick the sproc in the Designer, it should show all the parameters for you to populate.
"Get_jobs": {
"inputs": {
"body": {
"from": "#{convertFromUtc( variables('SelectTime'), variables('timeZone'), 'yyyy-MM-dd HH:mm:ss')}"
},
"host": {
"connection": {
"name": "#parameters('$connections')['sql_2']['connectionId']"
}
},
"method": "post",
"path": "/datasets/default/procedures/#{encodeURIComponent(encodeURIComponent('[dbo].[GetJobs]'))}"
},
"runAfter": {
"Refresh_data_for_BI": [
"Succeeded"
]
},
"type": "ApiConnection"
},

OK, I've been messing with this on-and-off in between other tasks today, and finally got tired of trying to get it done in the input of the "Execute query".
Brute-force solution: I added another JavaScript step with the following code:
var input = workflowContext.actions.Convert_Rows_To_Json.outputs.body;
var sqlQuery = 'EXECUTE [ImportDomainCoverage] N\'' + input + '\'';
return sqlQuery;
It's not pretty (one more step), but it works.
Now to see if I can modify things sufficiently to parameterize the table name, rather than needing six steps for each table.

Finally figured out the syntax. Didn't find any documentation, just tried working from one error message to another.
"Pump_data_into_target_table": {
"inputs": {
"body": {
"json": "#{body('Pull_FeedbackItems_from_source').ResultSets['Table1']}"
},
"headers": {
"Content-Type": "application/json"
},
"host": {
"connection": {
"name": "#parameters('$connections')['sql_2']['connectionId']"
}
},
"method": "post",
"path": "/v2/datasets/#{encodeURIComponent(encodeURIComponent('servername.database.windows.net'))},#{encodeURIComponent(encodeURIComponent('dbname'))}/procedures/#{encodeURIComponent(encodeURIComponent('sprocname'))}"
},
"runAfter": {
"Pull_FeedbackItems_from_Source": [
"Succeeded"
]
},
"type": "ApiConnection"
}
The fundamental answer to my question was: provide the parameter/value pairs as a JSON object. See the value of the "body" element in the listing above. For this to work, though, one also has to supply the "headers" element, which I didn't even see documented on the API call. I was led to that by an error message stating that the content type was plain text, when it was clearly JSON.

Related

Kafka Lenses SQL - How to WHERE filter based on objects nested in an array

I am working in Kafka Lenses v2.2.2. I need to filter based on the value of an object inside an array.
Sample message (redacted for simplicity):
{
    "payload": {
        "Data": {
            "something": "stuff"
        },
        "foo": {
            "bar": [
                {
                    "id": "8177BE12-F69B-4A51-B12E-976D2AE37487",
                    "info": "more_data"
                },
                {
                    "id": "06A846C5-2138-4107-A5B0-A2FC21B9F32D",
                    "info": "more_data"
                }
            ]
        }
    }
}
In Lenses this actually appears as a nested object with integer properties... 0, 1, etc.
So I've tried this, but it throws an error: .0 appears out of place
SELECT *
FROM topic_name
WHERE payload.foo.bar.0.id = "8177BE12-F69B-4A51-B12E-976D2AE37487"
LIMIT 10
I tried wrapping the 0 in double/single quotes as well and that throws a 500 error.
I copied and pasted the UUID from the first message in the topic, so it's definitely there. I also copy and pasted the labels to rule out typos. I am thinking there is some special way to access arrays with nested objects like this, but I'm struggling to find any documentation or videos discussing it.
I can be confident the value is stored in the first array element, but methods that can search all objects would be awesome as well.
The syntax (if you know the array index - as in my initial question) is:
SELECT *
FROM topic_name
WHERE payload.foo.bar[0].id = "8177BE12-F69B-4A51-B12E-976D2AE37487"
LIMIT 10
Though I am still struggling to do this if the array index is unknown and you need to check them all. I'm assuming at this point it's not possible without a series of OR statements in the WHERE clause that checks them all.

Count of documents 0 after inserting data with Nest

I am using Nest with the following connection settings:
var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool, new InMemoryConnection());
settings.DisableDirectStreaming(true); // needed to see good looking debug log on insert
settings.DefaultIndex(Index);
Client = new ElasticClient(settings);
With new InMemoryConnection() I hope to query with Nest while changing data inside an Azure Cloud function.
Strangely, the debug logs look promising when indexing:
/*
var res = await Client.IndexManyAsync(response.Elements, Index); //
Console.WriteLine(res.DebugInformation);
*/
/*
var res = await Client.IndexAsync(response, i => i.Index(Index)); // Index = "data"
Console.WriteLine(res.DebugInformation); // <--
*/
And logging directly after the insertions the count is 0:
// var anyDocs = await Client.CountAsync<OverpassElement>(c => c.Index(Index));
var anyDocs = await Client.CountAsync<OverpassElement>(c => c);
Console.WriteLine("count: " + anyDocs.Count);
...but the entire JSON data is logged with the insertion.
How come I can't count the documents (so that I can search them in a next step) after insertion?
Actually I get:
Invalid NEST response built from a successful (200) low level call on POST: /data/_doc
And there are 0 items on the IndexResponse when inserting.
The data is of type Element and looks like the following part of an array containing 4221 such items:
{
    "type": "relation",
    "id": 8353694,
    "timestamp": "2018-06-04T22:54:27Z",
    "version": 1,
    "changeset": 59551528,
    "user": "asdf2",
    "uid": 1416503,
    "members": [
        {
            "type": "way",
            "ref": 89956942,
            "role": "from"
        },
        {
            "type": "node",
            "ref": 1042756547,
            "role": "via"
        },
        {
            "type": "way",
            "ref": 89956938,
            "role": "to"
        }
    ],
    "tags": {
        "restriction": "no_left_turn",
        "type": "restriction"
    }
},
ElasticSearch has many similarities to a NoSql data store. In this case, "read after write" is not guaranteed by default. When the index API call returns success, it doesn't mean "this document is now available for searching"; it means "ElasticSearch has accepted your document and it will be available for searching shortly". ElasticSearch uses eventual consistency by default.
However, this can be annoying during testing. So ElasticSearch has a Refresh API that essentially just blocks until all documents already indexed are available for searching. I strongly recommend that you do not call this in production; only in test code.
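For example (a sketch assuming NEST 7.x and a client that talks to a real node rather than an InMemoryConnection), a test can force a refresh before counting:
// Index the documents, then force a refresh so they are immediately searchable.
// Do this in test code only; in production, rely on the normal refresh interval.
await Client.IndexManyAsync(response.Elements, Index);
await Client.Indices.RefreshAsync(Index);

var countResponse = await Client.CountAsync<OverpassElement>(c => c.Index(Index));
Console.WriteLine("count: " + countResponse.Count);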
At the risk of reviving an old question, this answer from Russ Cam explains that InMemoryConnection does not actually run the operation against Elasticsearch.
InMemoryConnection doesn't actually send any requests or receive any responses from Elasticsearch; used in conjunction with .SetConnectionStatusHandler() on Connection settings (or .OnRequestCompleted() in NEST 2.x+), it's a convenient way to see the serialized form of requests.
So you can inspect the query that NEST generates from your code but you won't be able to observe the results.
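In other words, to actually index and count documents, drop the InMemoryConnection from the settings. A minimal sketch reusing the variables from the question:
// Without InMemoryConnection, requests really go to the node at localhost:9200.
var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool)
    .DisableDirectStreaming()   // still handy for inspecting the raw request/response
    .DefaultIndex(Index);
Client = new ElasticClient(settings);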
I don't know what Nest is, but I'd bet $100 that if it uses transactional concepts, you may need to commit in order to see the count correctly.

Design pattern - update join table through REST API

I'm struggling with a REST API design concept. I have these classes:
user:
- first_name
- last_name
metadata_fields:
- field_name
user_metadata:
- user_id
- field_id
- value
- unique index on [user_id, field_id]
Ok, so users have many metadata and the type of metadata is defined in metadata_fields. Typical HABTM with extra data in the join table.
If I were to update user_metadata through a Rails form, the data would look like this:
user_metadata: {
id: 1,
user_id: 2,
field_id: 3,
value: 'foo'
}
If I posted to the user#update controller, the data would look like this:
user: {
user_metadata: {
id: 1,
field_id: 3,
value: 'foo'
}
}
The trouble with this approach is that we're ignoring the uniqueness of the user_id/field_id relationship. If I change the field_id in either update, I'm not just changing data, I'm changing the meaning of that data. This tends to work fine in Rails because it's somewhat of a walled garden, but it breaks down when you open up an API endpoint.
If I allow this:
PATCH /api/user_metadata
Then I'm opening myself up to someone modifying the user_id or field_id or both. Similarly with this:
PATCH /api/user/:user_id/metadata
Now user_id is set but field_id can still change. So really the only way to solve this is to limit the update to a single field:
PATCH /api/user/:user_id/metadata/:field_id
Or a bulk update:
PATCH /api/user/:user_id/metadata
But with that call, we have to modify the data structure so that the uniqueness of the user_id/field_id relationship is intact:
user_metadata: {
field_id1: 'value1',
field_id2: 'value2',
...
}
I'd love to hear thoughts here. I've scoured Google and found absolutely nothing. Any recommendations?
As metadata belongs to a certain user, /api/user/{userId}/metadata/{metadataId} is probably the cleanest URI for a single metadata resource of a user. The URI of your resource is already the unique key you are looking for. There can't be two resources with the same URI! Furthermore, the URI already contains the user and field IDs.
A request like GET /api/user/1 HTTP/1.1 could return a HAL-like representation like the one below:
{
    "user": {
        "id": "1",
        "firstName": "Max",
        "lastName": "Sample",
        ...
        "_links": {
            "self": {
                "href": "/api/user/1"
            }
        },
        "_embedded": {
            "metadata": {
                "fields": [{
                    "id": "1",
                    "type": "string",
                    "value": "foo",
                    "_links": {
                        "self": {
                            "href": "/api/user/1/metadata/1"
                        }
                    }
                }, {
                    "id": "2",
                    "type": "string",
                    "value": "bar",
                    "_links": {
                        "self": {
                            "href": "/api/user/1/metadata/2"
                        }
                    }
                }],
                "_links": {
                    "self": {
                        "href": "/api/user/1/metadata"
                    }
                }
            }
        }
    }
}
Of course you could send a PUT or a PATCH request to modify an existing metadata field. Though, the URI of the resource will still be the same (unless you move or delete a resource within a PATCH request).
You also have the possibility to ignore certain fields on incoming PUT requests, which prevents modification of fields like id or _link. I assume this should also be valid for PATCH requests, though I would have to re-read the spec to confirm.
Therefore, I'd suggest ignoring any id or _link fields contained in requests and updating the remaining fields. You also have the option to return a 403 Forbidden or 409 Conflict response if someone tries to update an ID field.
UPDATE
If you want to update multiple fields within a single request, you have two options:
Using PUT and replace the current set of fields with the new version
Using PATCH and send the server the necessary steps to transform the current field-set to the new field-set
Example PUT:
PUT /api/user/1/metadata HTTP/1.1
{
    "metadata": {
        "fields": [{
            "type": "string",
            "value": "newFoo"
        }, {
            "type": "string",
            "value": "newBar"
        }]
    }
}
This request would first delete every stored metadata field of the user the metadata belongs to, and afterwards create a new resource for each field contained in the request. While this still guarantees unique URIs, there are a couple of drawbacks to this approach:
all the data which should be available after the update, even fields that do not change, needs to be transmitted
clients which hold a URI pointing to a certain resource may end up pointing at the wrong representation. For example, a client retrieved /user/1/metadata/2 right before a further client updated all the metadata; if the IDs are dispatched via auto-increment and the update introduced a new second item, the former 2 has moved to position 3. Client 1 now has a reference to /user/1/metadata/2 while the actual data is at /user/1/metadata/3. To prevent this, unique UUIDs could be used instead of auto-increment IDs. If client 1 later tries to retrieve or update the former resource 2, it can be notified that the resource is no longer available, or even redirected to the new location.
Example PATCH:
A PATCH request contains the necessary steps to transform the state of a resource to the new state. The request itself can affect multiple resources at the same time and even create or delete other resources as needed.
The following example is in json-patch+json format:
PATCH /api/user/1/metadata HTTP/1.1
[
    {
        "op": "add",
        "path": "/0/value",
        "value": "newFoo"
    },
    {
        "op": "add",
        "path": "/2",
        "value": { "type": "string", "value": "totally new entry" }
    },
    {
        "op": "remove",
        "path": "/1"
    }
]
The path is defined as a JSON Pointer for the invoked resource.
The add operation of the JSON-Patch type is defined as:
If the target location specifies an array index, a new value is inserted into the array at the specified index.
If the target location specifies an object member that does not already exist, a new member is added to the object.
If the target location specifies an object member that does exist, that member's value is replaced.
For the removal case however, the spec states:
If removing an element from an array, any elements above the specified index are shifted one position to the left.
Therefore the newly added entry would end up at position 2 in the array. If an auto-increment value is not used for the ID, this should not be a big problem though.
Besides add and remove, the spec also contains definitions for replace, move, copy and test.
The PATCH should be transactional - either all operations succeed or none. The spec states:
If a normative requirement is violated by a JSON Patch document, or if an operation is not successful, evaluation of the JSON Patch document SHOULD terminate and application of the entire patch document SHALL NOT be deemed successful.
I'd interpret these lines as: if a request tries to update a field it is not supposed to update, you should return an error for the whole PATCH request and not alter any resources.
A drawback of the PATCH approach is clearly the transactional requirement, as well as the JSON Pointer notation, which might not be that popular (at least I haven't used it often and had to look it up again). As with PUT, PATCH allows adding new resources in between existing resources and shifting further ones to the right, which may lead to an issue if you rely on auto-increment values.
Therefore, I strongly recommend using randomly generated UUIDs as identifiers rather than auto-increment values.

Working with dynamic/anonymous objects and JSON.NET

.NET 4.0; VS 2010.
We're consuming a web service that does not offer a WSDL. The data that is returned is not particularly complicated so we thought we would work with dynamic/anonymous types. Here is an example of the JSON returned from one of the service methods (this string has been verified with JSONLint):
[
    {
        "value": "AAA"
    },
    {
        "value": "BBB"
    },
    {
        "value": "CCC"
    },
    {
        "value": "DDD"
    },
    {
        "value": "EEE"
    },
    {
        "value": "FFF"
    }
]
Tried using:
dynamic respDyn = JsonConvert.DeserializeObject(jsonStringAbove);
In this case, no errors are thrown, but in trying to access the resp variable, the Visual Studio debugger reports "The name 'resp' does not exist in the current context".
Tried LINQ next:
var respLinq = JObject.Parse(jsonStringAbove);
Which results in a runtime error: Error reading JObject from JsonReader. Current JsonReader item is not an object: StartArray. Path '', line 1, position 1.
Found this article that recommended different parsing methods depending on the format of the JSON:
if (jsonStringAbove.StartsWith("["))
{
    var arr = JArray.Parse(jsonStringAbove);
}
else
{
    var obj = JObject.Parse(jsonStringAbove);
}
When var arr = JArray.Parse(jsonStringAbove); is hit, the debugger simply exits the method and returns to the calling procedure. No error is thrown. If the leading and trailing square brackets are removed, another runtime error similar to the results in the second example is encountered.
So. Not sure where to turn at this point. It seems like what we're trying to do is very straightforward, which makes me think I'm missing something blatantly obvious.
Not sure why, but the solution to this was to declare my variables as fields within the class. Variables that were local to the methods I was working with simply did not work. Once declared as class-wide variables, the code behaved as expected. Very odd. I suspect that this problem may be specific to my VS environment and/or solution configuration as it does not appear to be occurring with anyone else. Lucky me.
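For what it's worth, the JSON in the question parses fine with either LINQ to JSON or a typed list; here is a minimal sketch (the Item class and the shortened sample string are made up for illustration):
using System;
using System.Collections.Generic;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

var jsonStringAbove = "[{\"value\":\"AAA\"},{\"value\":\"BBB\"}]"; // shortened sample

// LINQ to JSON: the payload is an array, so JArray.Parse works where JObject.Parse throws.
var arr = JArray.Parse(jsonStringAbove);
Console.WriteLine((string)arr[0]["value"]); // AAA

// Strongly typed: deserialize the array into a list of simple DTOs.
var items = JsonConvert.DeserializeObject<List<Item>>(jsonStringAbove);
Console.WriteLine(items[0].Value); // AAA

public class Item
{
    [JsonProperty("value")]
    public string Value { get; set; }
}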

IBM Worklight - How to construct a JSON object in SQL adapter

To construct a JSON object in a SQL adapter I have tried the following:
{
'PatientID':4,
'FName':'test',
'LName':'test',
'AGE':1,
'DOB':1988-09-01,
'GENDER':'m',
'BG':'A+'
}
However I get an error:
{
"errors": [
"Runtime: Method createSQLStatement was called inside a JavaScript function."
],
"info": [
],
"isSuccessful": false,
"warnings": [
]
}
First, in the "Invoke Procedure Data" window for your adapter, don't wrap the object in quotes. If you do, it will think that the entire thing is a string.
If you remove the beginning and ending quotes, then you almost have it correct. The window will take valid JSON objects, but only if all non-integers are strings. Since 1988-09-01 is not a valid integer, it must be wrapped in quotes. You should be able to copy/paste this object into the wizard:
{
'PatientID':4,
'FName':'test',
'LName':'test',
'AGE':1,
'DOB':"1988-09-01",
'GENDER':'m',
'BG':'A+'
}
The createSQLStatement API should not be used inside of your functions. Use it outside of functions, just like the tutorial shows (slide 10): http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v600/04_03_SQL_adapter_-_Communicating_with_SQL_database.pdf
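For reference, the adapter pattern from that tutorial looks roughly like this (a sketch with made-up table and procedure names):
// Prepared once, at adapter load time -- outside of any function.
var getPatientStatement = WL.Server.createSQLStatement(
    "SELECT PatientID, FName, LName, DOB FROM patients WHERE PatientID = ?");

// The procedure declared in the adapter XML only invokes the prepared statement.
function getPatient(patientId) {
    return WL.Server.invokeSQLStatement({
        preparedStatement : getPatientStatement,
        parameters : [patientId]
    });
}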