In our application, when we create a new object via the API, we send SIM and GSM module-related information within the c8y_Mobile fragment. The object models an embedded device with limited capabilities, so we make use of the HTTPS API directly.
PUT /inventory/managedObjects/myid HTTP/1.1
Host: mytenant.cumulocity.com
Authorization: Basic ....
Content-Type: application/vnd.com.nsn.cumulocity.managedObject+json
Accept: application/vnd.com.nsn.cumulocity.managedObject+json
{
    "c8y_Mobile": {
        "imei": "1234567890123456",
        "imsi": "23456789011234567890",
        "iccid": "01234567890123456789",
        ...
    }
}
The managed object shows the new fragment as expected:
...
"c8y_IsDevice": {},
"c8y_Mobile": {
    "imei": "1234567890123456",
    "imsi": "23456789011234567890",
    "iccid": "01234567890123456789",
    ...
},
...
When a user changes the SIM card on the embedded unit, the IMSI and ICCID properties should be updated within the managed object's c8y_Mobile fragment. But if we send only those properties, the whole fragment is overwritten:
PUT /inventory/managedObjects/myid HTTP/1.1
Host: mytenant.cumulocity.com
Authorization: Basic ....
Content-Type: application/vnd.com.nsn.cumulocity.managedObject+json
Accept: application/vnd.com.nsn.cumulocity.managedObject+json
{
    "c8y_Mobile": {
        "imsi": "23456789011234567890",
        "iccid": "01234567890123456789"
    }
}
So the managed object shows this:
...
"c8y_IsDevice": {},
"c8y_Mobile": {
    "imsi": "23456789011234567890",
    "iccid": "01234567890123456789"
},
...
Please note that the imei property and the others have been lost and are no longer present in the managed object.
In order to save data and minimise transactions, I would like to know whether there is a way to update fragments without having to send all the existing properties again.
I've tried to use HTTP POST instead of PUT, but that gives me a method not allowed error, as stated in the documentation.
There is no direct way to do that, but there is a workaround.
In general, when you PUT to any object, the JSON is merged only at the root level. This means that if your PUT contains c8y_Mobile, it replaces the stored c8y_Mobile fragment entirely, regardless of what it contains.
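To illustrate, the effect is the same as a shallow merge in JavaScript (the values below are illustrative, not the platform's actual implementation):

// Fragments absent from the update are kept, but any fragment that is
// present replaces the stored one wholesale, like a shallow merge.
const existing = { c8y_IsDevice: {}, c8y_Mobile: { imei: "123...", imsi: "234..." } };
const update = { c8y_Mobile: { imsi: "999..." } };
const merged = { ...existing, ...update };
// merged.c8y_Mobile is now { imsi: "999..." }; imei is gone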
Here is what you can do:
First, you invent new fragments that you use as temporary carriers for the values:
PUT /inventory/managedObjects/myid HTTP/1.1
Host: mytenant.cumulocity.com
Authorization: Basic ....
Content-Type: application/vnd.com.nsn.cumulocity.managedObject+json
Accept: application/vnd.com.nsn.cumulocity.managedObject+json
{
    "c8y_Mobile_imsi": "23456789011234567890",
    "c8y_Mobile_iccid": "01234567890123456789"
}
Additionally, you create an event processing rule so that when, for example, "c8y_Mobile_imsi" is updated, the rule merges this value into the existing c8y_Mobile fragment (preserving the other sub-fragments); a sketch of that merge follows after the note below.
Important:
You either send the PUT as transient (so these values are not persisted in the device object), or your rule removes the temporary fragment immediately (in the same update operation as the merge with c8y_Mobile).
This is important because in CEP you do not know which fragment was updated when you listen to ManagedObjectUpdated. If you kept the temporary fragment in the device object, the rule would trigger itself in an endless loop (which would lead to an automatic undeploy of the rule).
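To illustrate what the rule has to do, here is the merge sketched as plain REST calls in JavaScript. This is only an assumption about how you might express the logic (the real implementation lives inside your event processing rule); the endpoints follow the examples above, and in the inventory API updating a property to null removes it:

async function mergeTemporaryFragment(baseUrl, id, auth) {
    // Read the current managed object so the existing c8y_Mobile
    // sub-fragments (imei, ...) can be preserved.
    const res = await fetch(baseUrl + '/inventory/managedObjects/' + id, {
        headers: { 'Authorization': auth }
    });
    const mo = await res.json();

    // Merge the temporary value into c8y_Mobile and remove the
    // temporary fragment in the same PUT (a null value deletes the
    // fragment, so the rule cannot re-trigger on it).
    const update = {
        c8y_Mobile: Object.assign({}, mo.c8y_Mobile, { imsi: mo.c8y_Mobile_imsi }),
        c8y_Mobile_imsi: null
    };

    await fetch(baseUrl + '/inventory/managedObjects/' + id, {
        method: 'PUT',
        headers: {
            'Authorization': auth,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(update)
    });
}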
Related
I have a Lambda written in Kotlin with Serverless, and CORS just is not working. I feel like I've tried everything. I deployed a Node Lambda with identical sls.sh command and YAML files. The function looks like this:
hello:
  handler: handler.hello
  events:
    - http:
        path: hello
        method: post
        cors: true
My responses look like this in both Node and Kotlin:
{
    "statusCode": 200,
    "headers": {
        "Access-Control-Allow-Origin": "*"
    },
    "body": "{\"id\": \"f9f76590-xxxx-xxxx-xxxx-9c8e99238f40\"}"
}
In the Node case this all works great. I make a fetch call like this and it works (Promise resolutions omitted for brevity):
var makeRequest = function (data) {
    fetch('https://{lambda URL}/hello', {
        'headers': {
            'content-type': 'application/json'
        },
        'body': JSON.stringify({ data }),
        'method': 'POST'
    })
}
In the Kotlin case I get this CORS error back
Access to fetch at 'https://{lambda URL}/hello' from origin
'http://127.0.0.1:8080' has been blocked by CORS policy: No
'Access-Control-Allow-Origin' header is present on the requested
resource. If an opaque response serves your needs, set the request's
mode to 'no-cors' to fetch the resource with CORS disabled.
I tried to "enable CORS" in the API Gateway panel, but it tells me it's already enabled. When I hit submit, I get an error, and hovering over the error icon shows "Invalid Response status code specified".
Under Gateway Responses, under every sub-item (Default 4XX, Default 5XX, etc.) there are response headers set. This is the same across my Node and Kotlin Lambdas.
I'm completely out of ideas at this point.
The only potentially odd thing I'm noticing is that in my Node request I see access-control-allow-origin: * among the response headers in the browser network panel, but in the Kotlin one I don't.
From what you've posted, I can see that you haven't created an Integration Response in your POST method. Try creating one, with the CORS headers mapped in its response configuration.
I discovered my CORS issue was caused by server errors: if your server throws an error and API Gateway can't get a response, you get a CORS error, because the Gateway's own error reply doesn't carry the CORS headers.
While the fix is easy (just handle that server error), it was hard to uncover. I wish this were documented better somewhere, so hopefully others will find this :)
For my case specifically, and why it didn't show up in Node but did in Kotlin, it came down to types: the browser was sending a number, Node silently coerced it to the expected string, but Kotlin enforced the declared type and threw a type error.
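A minimal sketch of that fix for a Node handler (field names are illustrative): always return a well-formed response carrying the CORS header, even on failure, so API Gateway never has to answer on your behalf:

exports.hello = async (event) => {
    const headers = { 'Access-Control-Allow-Origin': '*' };
    try {
        // JSON.parse may throw on malformed input; that kind of unhandled
        // server error is what surfaces in the browser as a CORS failure.
        const body = JSON.parse(event.body);
        return { statusCode: 200, headers, body: JSON.stringify({ id: body.id }) };
    } catch (err) {
        return { statusCode: 400, headers, body: JSON.stringify({ error: String(err) }) };
    }
};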
I need to build an HTTP proxy for a JPEG image inside Node-RED. My goal is that the browser gets all page resources in the dashboard from the Node-RED server, while the image itself is only available from another server.
I tried this abstract flow:
http-in -> http-request -> function node -> http response
In the function node I set the headers:
msg.headers = {
    "content-type": "image/jpeg",
    "content-disposition": "inline; filename=\"myimage.jpg\""
};
return msg;
The problem is, that the browser gets these headers (excerpt):
content-type: image/jpeg; charset=utf-8
content-disposition: inline; filename="myimage.jpg"
Where the hell is charset=utf-8 coming from, and how do I stop Node-RED from adding it?
You do not mention what msg.payload is set to in your flow.
If the msg.payload you pass to the HTTP Response node is a String, the content type gets the charset parameter added. This isn't deliberate behaviour of Node-RED; it comes from the underlying http/express framework.
If msg.payload is a Buffer object, then no such parameter is added.
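Building on that, a minimal function-node sketch (this assumes the upstream HTTP Request node is configured to return a binary buffer; the conversion branch is only a guard):

// Ensure the payload is a Buffer, not a String, so the underlying
// express layer does not append "; charset=utf-8" to the content type.
if (!Buffer.isBuffer(msg.payload)) {
    msg.payload = Buffer.from(msg.payload, 'binary');
}
msg.headers = {
    "content-type": "image/jpeg",
    "content-disposition": "inline; filename=\"myimage.jpg\""
};
return msg;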
charset=utf-8 is added by Node-RED to declare the encoding; there should be no issue if the headers carry a charset.
I'm building my own WebhookClient for Dialogflow. My code is the following (using Azure Functions, similar to Firebase Functions):
const { WebhookClient } = require('dialogflow-fulfillment');

module.exports = async function (context, req) {
    const agent = new WebhookClient({ request: context.req, response: context.res });

    function welcome(agent) {
        agent.add(`Welcome to my agent!!`);
    }

    let intentMap = new Map();
    intentMap.set("Look up person", welcome);
    agent.handleRequest(intentMap);
};
I tested the query and the response payload looks like this:
{
    "fulfillmentText": "Welcome to my agent!!",
    "outputContexts": []
}
And the headers in the response look like this:
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 11 Dec 2018 18:16:06 GMT
But when I test my bot in dialog flow, it returns the following:
Webhook call failed. Error: Failed to parse webhook JSON response:
Expect message object but got:
"笀ഀ ∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀㨀 ∀圀攀氀挀漀洀攀 琀漀 洀礀 愀最攀渀琀℀℀∀Ⰰഀ ∀漀甀琀瀀甀琀䌀漀渀琀攀砀琀猀∀㨀 嬀崀ഀ紀".
There's Chinese symbols!? Here's a video of me testing it out in DialogFlow: https://imgur.com/yzcj0Kw
I know this should be a comment (as it isn't really an answer), but it's fairly verbose and I didn't want it to get lost in the noise.
I have the same problem using WebAPI on a local machine (using ngrok to tunnel back to Kestrel). A friend of mine has working code (he's hosting in AWS rather than Azure), so I started examining the differences between our responses. I've noticed the following:
- This occurs with both Azure Functions and WebAPI (so it's not that)
- The JSON payloads are identical (so it's not that)
- The working payload isn't chunked
- The working payload doesn't have a content type
As an experiment, I added this code to Startup.cs, in the Configure method:
app.Use(async (context, next) =>
{
    var original = context.Response.Body;
    var memory = new MemoryStream();
    context.Response.Body = memory;

    await next();

    memory.Seek(0, SeekOrigin.Begin);
    if (!context.Response.Headers.ContentLength.HasValue)
    {
        context.Response.Headers.ContentLength = memory.Length;
        context.Response.ContentType = null;
    }
    await memory.CopyToAsync(original);
});
This code disables response chunking, which now causes a new and slightly more interesting error for me in the Google console:
*Webhook call failed. Error: Failed to parse webhook JSON response: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 94 path $.\u0000\\"\u0000f\u0000u\u0000l\u0000f\u0000i\u0000l\u0000l\u0000m\u0000e\u0000n\u0000t\u0000M\u0000e\u0000s\u0000s\u0000a\u0000g\u0000e\u0000s\u0000\\"\u0000.\
I thought this could be encoding at first, so I stashed my JSON as a string and used the various Encoding classes to convert between them, to no avail.
I fired up Postman and called my endpoint (using the same payload as Google) and I can see the whole response payload correctly - it's almost as if Google's end is terminating the stream part-way through reading...
Hopefully, this additional information will help us figure out what's going on!
Update
After some more digging and various server/lambda configs, I spotted this post here: https://github.com/googleapis/google-cloud-dotnet/issues/2258
It turns out that json.net IS the culprit! I guess it's something to do with the formatters on the way out of the pipeline. To prove this, I added this hard-coded response to my POST controller, and it worked! :)
return new ContentResult()
{
    Content = "{\"fulfillmentText\": null,\"fulfillmentMessages\": [],\"source\": null,\"payload\": {\"google\": {\"expectUserResponse\": false,\"userStorage\": null,\"richResponse\": {\"items\": [{\"simpleResponse\": {\"textToSpeech\": \"Why hello there\",\"ssml\": null,\"displayText\": \"Why hello there\"}}],\"suggestions\": null,\"linkOutSuggestion\": null}}}}",
    ContentType = "application/json",
    StatusCode = 200
};
Despite the HTTP header saying the charset is utf-8, that response is definitely encoded as UTF-16LE, and the receiving side is then decoding it as UTF-16BE. Given you're running on Azure, it sounds like there is some configuration you need to make in Azure Functions to emit the output as UTF-8 instead of UTF-16 strings.
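A quick Node sketch shows why the mojibake looks the way it does (this demonstrates the byte-order mix-up only, not the Azure fix):

// '{' is the bytes 7B 00 in UTF-16LE; read back as UTF-16BE, those
// bytes form the single code unit U+7B00, which is exactly the 笀
// at the start of the garbled response above.
const bytes = Buffer.from('{', 'utf16le');       // <Buffer 7b 00>
console.log(bytes.swap16().toString('utf16le')); // '笀' (i.e. decoded as UTF-16BE)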
I'm using WL.Server.invokeHttp(options) several times in my adapter. I need to have different values for a given cookie in different calls.
If I call
WL.Server.invokeHttp({
    cookies: {
        mycookie: 'firstValue'
    }
    ...
the back-end gets this header "cookie": "mycookie=firstValue", as expected.
If I later want to make another call with a different cookie value,
WL.Server.invokeHttp({
    cookies: {
        mycookie: 'secondValue'
    }
    ...
the back-end gets this header: "cookie": "mycookie=firstValue; mycookie=secondValue".
Is there some way to make it forget the previous value of the cookie?
Update 2015/02/27
Using the headers option instead of the cookies option, as suggested by @YoelNunez, does not solve it.
My first request gets a "set-cookie": "name=value1; Path=/" response header
My second request sets headers: {cookie: 'name=value2'}
The second request gets to the server with the following header: "cookie": "name=value2, name=value1"
Change your invokeHttp call to the following:
WL.Server.invokeHttp({
    headers: {
        cookie: "mycookie=" + myCookieValue
    }
    ...
});
Where myCookieValue is your variable
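As a usage sketch (the method and path are illustrative; note that, per the question's update, a Set-Cookie sent by the back-end may still be merged in on a later call):

function callBackend(myCookieValue) {
    return WL.Server.invokeHttp({
        method: 'get',
        path: '/some/path', // illustrative path
        headers: {
            cookie: 'mycookie=' + myCookieValue
        }
    });
}

// Each invocation sends exactly one value for mycookie.
callBackend('firstValue');
callBackend('secondValue');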
I'm trying to send multipart/form-data from a worker in IE. I've already done it in Chrome, Firefox, and Safari using FormData objects (not supported in IE, so I need to build the body manually).
The binary data I'm sending is crypto-js encrypted data. With FormData objects I do:
var finalEncrypted = new Buffer(encrypted.ciphertext.toString(CryptoJS.enc.Base64), 'base64');
formData.append("userFile", new Blob([finalEncrypted], { type: 'application/octet-binary' }), 'encrypted');
this works fine, generating a multipart body like this (some parts omitted):
request headers:
Accept:*/*
Accept-Encoding:gzip, deflate
Cache-Control:no-cache
Connection:keep-alive
Content-Length:30194
Content-Type:multipart/form-data; boundary=WebKitFormBoundary0.gjepwugw5cy58kt9
body:
--WebKitFormBoundary0.gjepwugw5cy58kt9
Content-Disposition: form-data; name="userFile"; filename="encrypted"
Content-Type: binary
all binary data
--WebKitFormBoundary0.gjepwugw5cy58kt9--
With the manual multipart/form-data:
- IE11 doesn't accept readAsBinaryString (deprecated)
- I would like to avoid sending base64-encoded data via readAsDataURL (33% payload overhead)
I'm trying:
finalEncrypted = new Buffer(encrypted.ciphertext.toString(CryptoJS.enc.Base64), 'base64');
then in my manual multipart I tried to convert the buffer to a binary string:
item.toString('binary')
the multipart result looks like this:
--WebKitFormBoundary642013568702052
Content-Disposition: form-data; name="userfile"; filename="encrypted"
Content-Type: binary
all binary data
ÐçÀôpRö3§]g7,UOÂmR¤¼ÚS"Ê÷UcíMÆÎÚà/,hy¼øsËÂú#WcGvºÆÞ²i¨¬Ç~÷®}éá?'é·J]þ3«áEÁÞ,4üBçðºÇª bUÈú4
T\Ãõ=òEnýR _[1J\O-ïǹ C¨\Ûøü^%éÓÁóJNÓï¹LsXâx>\aÁV×Þ^÷·{|'
On the .NET server we compare the hash calculated on the client against the one calculated on the server. The server replies that the hashes don't match. This makes me think that I'm not sending the file correctly.
It looks like you did not get a solution yet; at least you did not post one here if you had.
On my end I use jQuery, which handles the low-level nitty gritty of the actual POST.
It may be that you are doing one small thing wrong and IE fails on it. Since you do not show what you used with FormData, it is rather difficult to see whether you had a mistake in there.
// step 1. setup POST data
var data = new FormData();
data.append("some_variable_name", "value_for_that_variable");
data.append("some_blob_var_name", my_blob);
data.append("some_file_var_name", my_file);

// step 2. options
var ajax_options =
{
    method: "POST",
    processData: false,
    data: data,
    contentType: false,
    error: function(jqxhr, result_status, error_msg)
    {
        // react on errors
    },
    success: function(data, result_status, jqxhr)
    {
        // react on success
    },
    complete: function(jqxhr, result_status)
    {
        // react on completion (after error/success callbacks)
    },
    dataType: "xml" // server is expected to return XML only
};

// step 3. send
jQuery.ajax(uri, ajax_options);
Step 1.
Create a FormData object and fill in the form data; that includes variables and files. You may even add blobs (JavaScript objects, which will be transformed to JSON if I'm correct).
Step 2.
Create an ajax_options object to your liking, although here I show the method, processData, data, and contentType options as they must be in case you want to send a FormData object. At least, that works for me... It may be possible to change some of those values.
The dataType should be set to whatever type you expect in return.
Step 3.
Send the request using the ajax() function from the jQuery library. It will build the proper headers and body as required for the client's browser.
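If FormData is genuinely unavailable in your environment (e.g. inside an IE worker, as in the question), here is a hedged sketch of building the multipart body manually. The key idea is to keep the binary part as a typed array inside a Blob so the bytes never round-trip through a string (the endpoint and variable names are illustrative; encryptedBytes is assumed to be a Uint8Array of the raw ciphertext):

var boundary = '----ManualBoundary' + Date.now();
var head = '--' + boundary + '\r\n' +
    'Content-Disposition: form-data; name="userFile"; filename="encrypted"\r\n' +
    'Content-Type: application/octet-stream\r\n\r\n';
var tail = '\r\n--' + boundary + '--\r\n';

// Blob parts may mix strings and typed arrays; the binary part is
// passed through untouched, so client and server hashes should match.
var body = new Blob([head, encryptedBytes, tail]);

var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload'); // illustrative endpoint
xhr.setRequestHeader('Content-Type', 'multipart/form-data; boundary=' + boundary);
xhr.send(body);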