Azure IoT Hub device twin: remove desired properties

Following is my device twin payload. By mistake I added a "someKey" property to it.
{
  "desired": {
    "state": {
      "processor": "running",
      "light": "on"
    },
    "someKey": "someValue"
  }
}
I want to permanently remove the "someKey" property from the twin JSON.

To remove "someKey" from the twin JSON, assign null to "someKey"; only then will it be removed from the device twin JSON.
{
  "desired": {
    "state": {
      "processor": "running",
      "light": "on"
    },
    "someKey": null
  }
}
From then on you will receive the JSON below:
{
  "desired": {
    "state": {
      "processor": "running",
      "light": "on"
    }
  }
}
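If you apply the patch from the back end, a minimal sketch with the .NET service SDK (Microsoft.Azure.Devices) could look like the following; the connection string and device ID are assumptions:

using System;
using Microsoft.Azure.Devices; // service SDK, not the device SDK

// Assumed to hold an "iothubowner"-style service connection string
var serviceConnectionString = Environment.GetEnvironmentVariable("IOTHUB_SERVICE_CONN");
var registryManager = RegistryManager.CreateFromConnectionString(serviceConnectionString);

// A twin patch that sets the property to null, which removes it from the twin
var patch = @"{ ""properties"": { ""desired"": { ""someKey"": null } } }";

// Read the twin first so the update is made against its current ETag
var twin = await registryManager.GetTwinAsync("myDevice");
await registryManager.UpdateTwinAsync(twin.DeviceId, patch, twin.ETag);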

From: https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins#back-end-operations
Device operations
The device app operates on the device twin using the following atomic operations:
Partially update reported properties. This operation enables the partial update of the reported properties of the currently connected device. This operation uses the same JSON update format that the solution back end uses for a partial update of desired properties.
And then, under Back-end operations:
Partially update device twin. This operation enables the solution back end to partially update the tags or desired properties in a device twin. The partial update is expressed as a JSON document that adds or updates any property. Properties set to null are removed. The following example creates a new desired property newProperty with value {"nestedProperty": "newValue"}, overwrites the existing value of existingProperty with "otherNewValue", and removes otherOldProperty. No other changes are made to existing desired properties or tags:
{
  "properties": {
    "desired": {
      "newProperty": {
        "nestedProperty": "newValue"
      },
      "existingProperty": "otherNewValue",
      "otherOldProperty": null
    }
  }
}
(...)


What is the correct way to implement update of the entity? (Symfony 6)

I have an event entity.
What is the correct way to implement updating of this entity? Our frontend developer wants everything to be done with a single PUT request to the backend: changing the values of the title and description fields, as well as adding, deleting, and editing prices and event_dates.
I made separate endpoints: PUT /event/{id}, PUT /price/{id}, PUT /event_date/{id}.
What can you recommend?
{
  "id": 504,
  "title": "First Event",
  "description": "Description of First Event",
  "created_at": "2022-08-16T08:42:11+00:00",
  "prices": [
    {
      "id": 4,
      "value": "12.99",
      "type": "regular",
      "is_entrance_free": false,
      "info": "some extra infos",
      "sorting": 7
    }
  ],
  "event_dates": [
    {
      "id": 2,
      "start_date": "2022-12-10",
      "end_date": "2022-12-31",
      "start_time": "13:00",
      "end_time": "16:00",
      "entrance_time": "12:30",
      "is_open_end": false,
      "info": "7"
    }
  ]
}
One of the standard ways is to POST or PUT the JSON for either the complete new record (with everything changed, effectively overwriting the old one but keeping the same ID) or a subset of it.
The request would go to an endpoint for PUT /event/{id}, where the action reads the current record and gets the JSON with the information to update.
<?php
// various use statements as required, e.g.:
// use Symfony\Component\Serializer\SerializerInterface;
// use Symfony\Component\Serializer\Normalizer\AbstractNormalizer;
// use Doctrine\ORM\EntityManagerInterface;
class ApiEventController extends AbstractController
{
    #[Route('/api/event/{id}', methods: ['PUT'])]
    public function eventPut(
        Request $request,
        \App\Entity\Event $event,
        SerializerInterface $serializer,
        EntityManagerInterface $entityManager
    ) {
        // Security here - ensure the current user has permission to access & edit the event

        // A custom deserializer can restrict what is used from the content,
        // for example ensuring the ID, or other fields, cannot be changed.
        $serializer->deserialize(
            $request->getContent(),
            \App\Entity\Event::class,
            'json',
            [
                // takes the new values from the request content
                // and updates the old values, fetched by ID from the URL
                AbstractNormalizer::OBJECT_TO_POPULATE => $event,
            ]
        );

        // $event is now the mix of the old and the new
        $entityManager->persist($event);
        $entityManager->flush();

        // return the updated event details
    }
}
Updating more complex contents (such as replacing an array of prices or event_dates within the main entity) will need further deserializer configuration in the Event entity and the related entities, so that the Symfony Serializer component understands what is required. https://symfony.com/doc/current/components/serializer.html and https://symfonycasts.com/tracks/symfony have more information and tutorials that will assist in learning more.
API Platform can make much of this simpler for the straightforward cases, but an understanding of the basics is still a useful foundation.

Count of documents is 0 after inserting data with NEST

I am using Nest with the following connection settings:
var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool, new InMemoryConnection());
settings.DisableDirectStreaming(true); // needed to see good looking debug log on insert
settings.DefaultIndex(Index);
Client = new ElasticClient(settings);
With new InMemoryConnection() I hope to query with NEST, changing data inside an Azure Cloud Function.
Strangely, the debug logs look promising when indexing:
/*
var res = await Client.IndexManyAsync(response.Elements, Index); //
Console.WriteLine(res.DebugInformation);
*/
/*
var res = await Client.IndexAsync(response, i => i.Index(Index)); // Index = "data"
Console.WriteLine(res.DebugInformation); // <--
*/
And when logging directly after the insertions, the count is 0:
// var anyDocs = await Client.CountAsync<OverpassElement>(c => c.Index(Index));
var anyDocs = await Client.CountAsync<OverpassElement>(c => c);
Console.WriteLine("count: " + anyDocs.Count);
..but the entire JSON data is being logged with the insertion.
How come I can't count it (so that I can search it in a next step) after insertion?
Actually I get:
Invalid NEST response built from a successful (200) low level call on POST: /data/_doc
And there are 0 Items on the IndexResponse when inserting.
The data is of type Element, looking like the following part of an array containing 4221 such items:
{
  "type": "relation",
  "id": 8353694,
  "timestamp": "2018-06-04T22:54:27Z",
  "version": 1,
  "changeset": 59551528,
  "user": "asdf2",
  "uid": 1416503,
  "members": [
    {
      "type": "way",
      "ref": 89956942,
      "role": "from"
    },
    {
      "type": "node",
      "ref": 1042756547,
      "role": "via"
    },
    {
      "type": "way",
      "ref": 89956938,
      "role": "to"
    }
  ],
  "tags": {
    "restriction": "no_left_turn",
    "type": "restriction"
  }
},
ElasticSearch has many similarities to a NoSql data store. In this case, "read after write" is not guaranteed by default. When the index API call returns success, it doesn't mean "this document is now available for searching"; it means "ElasticSearch has accepted your document and it will be available for searching shortly". ElasticSearch uses eventual consistency by default.
However, this can be annoying during testing. So ElasticSearch has a Refresh API that essentially just blocks until all documents already indexed are available for searching. I strongly recommend that you do not call this in production; only in test code.
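For example, a minimal sketch against the code from the question (this assumes NEST 7.x, where the call lives under Client.Indices; older versions expose Client.Refresh(...) instead):

// Index the documents, then force a refresh so they are immediately searchable (test code only)
var res = await Client.IndexManyAsync(response.Elements, Index);
await Client.Indices.RefreshAsync(Index);

var anyDocs = await Client.CountAsync<OverpassElement>(c => c.Index(Index));
Console.WriteLine("count: " + anyDocs.Count);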
At the risk of reviving an old question, this answer from Russ Cam explains that InMemoryConnection does not actually run the operation against Elasticsearch.
InMemoryConnection doesn't actually send any requests or receive any responses from Elasticsearch; used in conjunction with .SetConnectionStatusHandler() on Connection settings (or .OnRequestCompleted() in NEST 2.x+), it's a convenient way to see the serialized form of requests.
So you can inspect the query that NEST generates from your code but you won't be able to observe the results.
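If you want the documents to actually land in the cluster, drop the InMemoryConnection from the settings in the question; a sketch:

var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool); // no InMemoryConnection, requests now hit Elasticsearch
settings.DefaultIndex(Index);
Client = new ElasticClient(settings);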
I don't know what NEST is, but I'd bet $100 that if it uses transactional concepts, you may need to commit in order to see the count correctly.

How to Get a Particular Security Advisory in GraphQL

I have tried this code:
# Type queries into this side of the screen, and you will
# see intelligent typeaheads aware of the current GraphQL type schema,
# live syntax, and validation errors highlighted within the text.
# We'll get you started with a simple query showing your username!
query {
  securityAdvisories(orderBy: {field: PUBLISHED_AT, direction: DESC}, first: 2) {
    nodes {
      description
      ghsaId
      summary
      publishedAt
    }
  }
}
And got the response below:
{
  "data": {
    "securityAdvisories": {
      "nodes": [
        {
          "description": "In Symfony before 2.7.51, 2.8.x before 2.8.50, 3.x before 3.4.26, 4.x before 4.1.12, and 4.2.x before 4.2.7, when service ids allow user input, this could allow for SQL Injection and remote code execution. This is related to symfony/dependency-injection.",
          "ghsaId": "GHSA-pgwj-prpq-jpc2",
          "summary": "Critical severity vulnerability that affects symfony/dependency-injection",
          "publishedAt": "2019-11-18T17:27:31Z"
        },
        {
          "description": "Tapestry processes assets `/assets/ctx` using classes chain `StaticFilesFilter -> AssetDispatcher -> ContextResource`, which doesn't filter the character `\\`, so attacker can perform a path traversal attack to read any files on Windows platform.",
          "ghsaId": "GHSA-89r3-rcpj-h7w6",
          "summary": "Moderate severity vulnerability that affects org.apache.tapestry:tapestry-core",
          "publishedAt": "2019-11-18T17:19:03Z"
        }
      ]
    }
  }
}
But I want to get the response for one specific security advisory, i.e. the GraphQL response for a specific ID; for the example URL below, the ID is GHSA-wmx6-vxcf-c3gr.
Thanks!
The simplest way would be to use the securityAdvisory() query.
query {
  securityAdvisory(ghsaId: "GHSA-wmx6-vxcf-c3gr") {
    ghsaId
    summary
  }
}
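If you want to run that query outside the explorer, here is a hedged sketch using HttpClient against GitHub's GraphQL endpoint (the environment variable and class name are assumptions):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class GhsaLookup
{
    static async Task Main()
    {
        // The same query as above, wrapped in the JSON body GraphQL endpoints expect
        var query = @"query { securityAdvisory(ghsaId: ""GHSA-wmx6-vxcf-c3gr"") { ghsaId summary } }";
        var payload = JsonSerializer.Serialize(new { query });

        using var http = new HttpClient();
        // A personal access token; GitHub's GraphQL API requires authentication
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("GITHUB_TOKEN"));
        http.DefaultRequestHeaders.UserAgent.ParseAdd("ghsa-lookup"); // GitHub also requires a User-Agent

        var response = await http.PostAsync(
            "https://api.github.com/graphql",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}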
If you need to use the securityAdvisories() query for some reason, you simply have to add an identifier filter. The following query should get the single entry for GHSA-wmx6-vxcf-c3gr.
query {
  securityAdvisories(identifier: {type: GHSA, value: "GHSA-wmx6-vxcf-c3gr"}, first: 1) {
    nodes {
      ghsaId
      summary
    }
  }
}

How do I delete a node in appsettings.json using appsettings.Development.json?

I am struggling a bit with how the appsettings.Development.json overrides or otherwise merges with the appsettings.json. I am not sure how to "clear" a node out of appsettings.json by using the appsettings.Development.json file.
For reference, I am using the default builder as seen here https://github.com/aspnet/MetaPackages/blob/rel/2.0.0-preview1/src/Microsoft.AspNetCore/WebHost.cs#L159-L160
appsettings.json
{
  "Policy": {
    "roles": [
      {
        "name": "inventoryAdmin",
        "subjects": [ "bob", "alice" ],
        "identityRoles": [ "ActiveDirectory-Role-Manager" ]
      },
    ]
  }
}
Given that example, why can I not do the following in my appsettings.Development.json:
{ "Policy": { "Roles": [] } }
or
{ "Policy": { "Roles": null } }
When I check the output via something like Configuration.Get<PolicyServer.Local.Policy>().Roles I still get 3 roles back.
This question is hopefully going to guide me on how I can override a node and not just clear it. So I am hoping to start simple and work my way there.
All of the settings that go into your IConfiguration instance are simply key-value pairs. Take the following, simplified example JSON:
{
  "Roles": [
    { "Name": "Role1", "Subjects": [ "Alice", "Bob" ] },
    { "Name": "Role2", "Subjects": [ "Charlie" ] }
  ]
}
Although this is essentially a tree structure, it maps into the following key-value pairs when added to your IConfiguration instance (there are some additional empty values here, but they're not part of this discussion):
Roles =
Roles:0:Name = Role1
Roles:0:Subjects:0 = Alice
Roles:0:Subjects:1 = Bob
Roles:1:Name = Role2
Roles:1:Subjects:0 = Charlie
You can see that this mimics the hierarchy of your JSON, where the names are object properties and the numbers are indexes into arrays. That first one is important: there's a key of Roles which has no value, because values can only be simple strings and it's just a parent in itself.
Now, when you add an extra JSON file to the IConfiguration instance setup, it maps to a new set of key-value pairs that get applied on top of those that exist. Take the following additional JSON:
{
  "Roles": []
}
This simply overwrites the existing Roles key and sets it to, well, the same value it already has: nothing. The same applies if you use null in your JSON file - that's just how this stuff works.
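You can observe these pairs for yourself by dumping the configuration; a minimal sketch (assuming the simplified JSON above sits in appsettings.json with the override in appsettings.Development.json):

using System;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddJsonFile("appsettings.Development.json", optional: true)
    .Build();

// Prints every flattened key-value pair, e.g. "Roles:0:Name = Role1";
// keys from the later file overwrite earlier ones, they are never deleted
foreach (var (key, value) in config.AsEnumerable())
{
    Console.WriteLine($"{key} = {value}");
}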
In terms of a solution here, I suggest seeing if you can rework your appsettings.json approach. For example, you might be able to put the role configuration itself into e.g. an appsettings.Production.json file and leave the default version blank so that it doesn't exist in your development environment. In other words, try and model your different appsettings.json files to be additive themselves.
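For illustration, that additive layout could look like this (a sketch reusing the roles from the question):

appsettings.json (no roles defined at all):
{
  "Policy": {}
}

appsettings.Production.json:
{
  "Policy": {
    "roles": [
      {
        "name": "inventoryAdmin",
        "subjects": [ "bob", "alice" ],
        "identityRoles": [ "ActiveDirectory-Role-Manager" ]
      }
    ]
  }
}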

Design pattern - update join table through REST API

I'm struggling with a REST API design concept. I have these classes:
user:
- first_name
- last_name
metadata_fields:
- field_name
user_metadata:
- user_id
- field_id
- value
- unique index on [user_id, field_id]
OK, so users have many metadata entries, and the type of each entry is defined in metadata_fields. A typical HABTM (has-and-belongs-to-many) relationship with extra data in the join table.
If I were to update user_metadata through a Rails form, the data would look like this:
user_metadata: {
  id: 1,
  user_id: 2,
  field_id: 3,
  value: 'foo'
}
If I posted to the user#update controller, the data would look like this:
user: {
  user_metadata: {
    id: 1,
    field_id: 3,
    value: 'foo'
  }
}
The trouble with this approach is that we're ignoring the uniqueness of the user_id/field_id relationship. If I change the field_id in either update, I'm not just changing data, I'm changing the meaning of that data. This tends to work fine in Rails because it's somewhat of a walled garden, but it breaks down when you open up an API endpoint.
If I allow this:
PATCH /api/user_metadata
Then I'm opening myself up to someone modifying the user_id or field_id or both. Similarly with this:
PATCH /api/user/:user_id/metadata
Now user_id is set but field_id can still change. So really the only way to solve this is to limit the update to a single field:
PATCH /api/user/:user_id/metadata/:field_id
Or a bulk update:
PATCH /api/user/:user_id/metadata
But with that call, we have to modify the data structure so that the uniqueness of the user_id/field_id relationship is intact:
user_metadata: {
  field_id1: 'value1',
  field_id2: 'value2',
  ...
}
I'd love to hear thoughts here. I've scoured Google and found absolutely nothing. Any recommendations?
As metadata belongs to a certain user, /api/user/{userId}/metadata/{metadataId} is probably the cleanest URI for a single metadata resource of a user. The URI of your resource is already the unique key you are looking for. There can't be two resources with the same URI! Furthermore, the URI already contains the user and field IDs.
A request like GET /api/user/1 HTTP/1.1 could return a HAL-like representation like the one below:
{
  "user" : {
    "id": "1",
    "firstName": "Max",
    "lastName": "Sample",
    ...
    "_links": {
      "self" : {
        "href": "/api/user/1"
      }
    },
    "_embedded": {
      "metadata" : {
        "fields" : [{
          "id": "1",
          "type": "string",
          "value": "foo",
          "_links": {
            "self": {
              "href": "/api/user/1/metadata/1"
            }
          }
        }, {
          "id": "2",
          "type": "string",
          "value": "bar",
          "_links": {
            "self": {
              "href": "/api/user/1/metadata/2"
            }
          }
        }],
        "_links": {
          "self": {
            "href": "/api/user/1/metadata"
          }
        }
      }
    }
  }
}
Of course you could send a PUT or a PATCH request to modify an existing metadata field. Though, the URI of the resource will still be the same (unless you move or delete a resource within a PATCH request).
You also have the possibility to ignore certain fields on incoming PUT requests, which prevents modification of fields like id or _links. I assume this should also be valid for PATCH requests, though I would have to re-read the spec to confirm.
Therefore, I'd suggest ignoring any id or _links fields contained in requests and updating the remaining fields. You also have the option to return a 403 Forbidden or 409 Conflict response if someone tries to update an ID field.
UPDATE
If you want to update multiple fields within a single request, you have two options:
Using PUT and replace the current set of fields with the new version
Using PATCH and send the server the necessary steps to transform the current field-set to the new field-set
Example PUT:
PUT /api/user/1/metadata HTTP/1.1
{
  "metadata": {
    "fields": [{
      "type": "string",
      "value": "newFoo"
    }, {
      "type": "string",
      "value": "newBar"
    }]
  }
}
This request would first delete every stored metadata field of the user the metadata belongs to, and afterwards create a new resource for each field contained in the request. While this still guarantees unique URIs, there are a couple of drawbacks to this approach:
all the data which should be available after the update, even fields that do not change, need to be transmitted
clients which hold a URI pointing to a certain resource may end up with a stale reference. E.g. a client has retrieved /user/1/metadata/2 right before a further client updated all the metadata; if IDs are assigned via auto-increment and the update introduced a new second item, the former 2 has moved to position 3. Client 1 now has a reference to /user/1/metadata/2 while the actual data is at /user/1/metadata/3. To prevent this, unique UUIDs could be used instead of auto-increment IDs. If client 1 later tries to retrieve or update the former resource 2, it can be notified that the resource is no longer available, and even a redirect to the new location could be returned.
Example PATCH:
A PATCH request contains the necessary steps to transform the state of a resource to the new state. The request itself can affect multiple resources at the same time and even create or delete other resources as needed.
The following example is in application/json-patch+json format:
PATCH /api/user/1/metadata HTTP/1.1
[
  {
    "op": "add",
    "path": "/0/value",
    "value": "newFoo"
  },
  {
    "op": "add",
    "path": "/2",
    "value": { "type": "string", "value": "totally new entry" }
  },
  {
    "op": "remove",
    "path": "/1"
  }
]
The path is defined as a JSON Pointer for the invoked resource.
The add operation of the JSON-Patch type is defined as:
If the target location specifies an array index, a new value is inserted into the array at the specified index.
If the target location specifies an object member that does not already exist, a new member is added to the object.
If the target location specifies an object member that does exist, that member's value is replaced.
For the removal case however, the spec states:
If removing an element from an array, any elements above the specified index are shifted one position to the left.
Therefore the newly added entry would end up at position 2 in the array. If an auto-increment value is not used for the ID, this should not be a big problem though.
Besides add and remove, the spec also contains definitions for replace, move, copy and test.
The PATCH should be transactional - either all operations succeed or none. The spec states:
If a normative requirement is violated by a JSON Patch document, or if an operation is not successful, evaluation of the JSON Patch document SHOULD terminate and application of the entire patch document SHALL NOT be deemed successful.
I interpret these lines as: if the patch tries to update a field which it is not supposed to update, you should return an error for the whole PATCH request and therefore not alter any resources.
Drawbacks of the PATCH approach are clearly the transactional requirement and the JSON Pointer notation, which might not be that popular (at least I haven't used it often and had to look it up again). As with PUT, PATCH allows adding new resources in between existing ones, shifting further ones to the right, which may lead to issues if you rely on auto-increment values.
Therefore, I strongly recommend using randomly generated UUIDs as identifiers rather than auto-increment values.