FIWARE Orion Context Broker: metadata updates trigger notifications

I'm using 3 FIWARE GEs: IDAS+Orion+CEP.
As reported in the Orion documentation (https://github.com/telefonicaid/fiware-orion/blob/develop/doc/manuals/user/metadata.md) "changing the metadata of a given attribute or adding a new metadata element is considered a change even if attribute value itself hasn't changed".
Is there a way to send notifications from Orion only if the value of the attribute specified in the subscription changes?
I've tried the solution proposed in the documentation (deleting and re-creating the attribute in order to remove the metadata), but since the messages sent to Orion are produced by IDAS, the metadata is re-created with the next communication.
Thanks.
UPDATE:
GE versions:
- Orion - 0.26.1-next
- IoTAgent (IDAS) - 1.3.1
The metadata added by IDAS are:
"attributes" : [
{
"name" : "temperature",
"type" : "int",
"value" : "37",
"metadatas" : [
{
"name" : "TimeInstant",
"type" : "ISO8601",
"value" : "2015-12-29T12:46:04.421859"
}
]
}
]
Specifically, from a MongoDB query:
"temperature" : { "value" : "37", "type" : "int", "md" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-12-29T12:46:04.421859" } ], "creDate" : 1450716887, "modDate" : 1451393164 }

As far as I know, the sending of TimeInstant metadata from IDAS/IoTAgent to Orion cannot be disabled for the time being.
A possible workaround could be to put a proxy between IDAS and Orion in order to remove the TimeInstant metadata (or the whole metadata field in the JSON, in case some other metadata could cause a similar problem).
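A minimal sketch of that idea in Python, assuming IDAS sends NGSI v1 updateContext requests as JSON over HTTP; ORION_URL and the port numbers below are made-up placeholders. Point IDAS at this process instead of Orion, and it forwards every POST after dropping each attribute's metadatas field:

# Hypothetical metadata-stripping proxy between IDAS and Orion
# (a sketch, not a hardened implementation).
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ORION_URL = "http://localhost:1026"  # placeholder: the real Orion endpoint

class StripMetadataProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            payload = json.loads(body)
            # Drop the "metadatas" field from every attribute of every context element.
            for element in payload.get("contextElements", []):
                for attribute in element.get("attributes", []):
                    attribute.pop("metadatas", None)
            body = json.dumps(payload).encode()
        except ValueError:
            pass  # body is not JSON; forward it unchanged
        request = urllib.request.Request(ORION_URL + self.path, data=body,
                                         headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            self.send_response(response.status)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(response.read())

if __name__ == "__main__":
    # IDAS would be configured to send to port 1027 instead of Orion's 1026.
    HTTPServer(("", 1027), StripMetadataProxy).serve_forever()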

Karate - Conditional JSON schema validation

I am just wondering how I can do conditional schema validation. The API response is dynamic, based on the customerType key. If customerType is person, then person details are included, and if customerType is org, organization details are included in the JSON response. So the response can be in either of the following forms:
{
  "customerType" : "person",
  "person" : {
    "firstName" : "A",
    "lastName" : "B"
  },
  "id" : 1,
  "requestDate" : "2021-11-11"
}
{
  "customerType" : "org",
  "organization" : {
    "orgName" : "A",
    "orgAddress" : "B"
  },
  "id" : 2,
  "requestDate" : "2021-11-11"
}
The schema I created to validate the above two scenarios is as follows:
{
  "customerType" : "#string",
  "organization" : "#? response.customerType=='org' ? karate.match(_,personSchema) : karate.match(_,null)",
  "person" : "#? response.customerType=='person' ? karate.match(_,orgSchema) : karate.match(_,null)",
  "id" : "#number",
  "requestDate" : "#string"
}
but the schema fails to match the actual response. What changes should I make to the schema to make it work?
Note: I am planning to reuse the schema in multiple tests, so I will be keeping the schema in separate files, independent of the feature file.
Can you refer to this answer, which I think is the better approach: https://stackoverflow.com/a/47336682/143475
That said, I think you missed that the JS karate.match() API doesn't return a boolean, but a JSON that contains a pass boolean property.
So you have to do things like this:
* def someVar = karate.match(actual, expected).pass ? {} : {}
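Putting that together, a minimal sketch of how the schema could look; note that it swaps personSchema and orgSchema back to their matching keys (the original schema had them crossed) and uses the optional ##? marker so that an absent key passes:

* def personSchema = { "firstName": "#string", "lastName": "#string" }
* def orgSchema = { "orgName": "#string", "orgAddress": "#string" }
* def schema =
"""
{
  "customerType" : "#string",
  "organization" : "##? response.customerType == 'org' ? karate.match(_, orgSchema).pass : false",
  "person" : "##? response.customerType == 'person' ? karate.match(_, personSchema).pass : false",
  "id" : "#number",
  "requestDate" : "#string"
}
"""
* match response == schema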

Push from Filebeat to Elasticsearch with custom _type and _id

The problem is how to push JSON logs collected by Filebeat to Elasticsearch with a defined _type and _id. The default Elasticsearch _type is "log" and the _id is something like "AVryuUKMKNQ7xhVUFxN2".
My log row:
{"unit_id":10001,"node_id":1,"message":"Msg ..."}
Desired record in Elasticsearch:
"hits" : [ {
"_index" : "filebeat",
"_type" : "unit_id",
"_id" : "10001",
...
"_source" : {
"message" : "Msg ...",
"node_id" : 1,
...
}
} ]
I know how to do it with Logstash: just use document_id => "%{unit_id}" and document_type => "unit_id" in the output section. But the goal is to use only Filebeat, because it is a very lightweight solution and no intermediate aggregation is needed here.
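For reference, the Logstash output I have in mind looks something like this (a sketch only, since the goal is to avoid Logstash entirely; the hosts value is a placeholder):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_type => "unit_id"
    document_id => "%{unit_id}"
  }
}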
You can set a custom _type by using the document_type option in Filebeat.
There is no way to set the _id directly in Filebeat as of version 5.x.
filebeat.prospectors:
- paths: ['/var/log/messages']
  document_type: syslog
You could use the Elasticsearch Ingest Node feature to set the _id field. You would need to use a script processor to copy a value from the event into the _id field. Once you have defined your pipeline, you would tell Filebeat to send its data to that pipeline using the output.elasticsearch.pipeline config option.
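A sketch under those assumptions: the pipeline name set-unit-id is hypothetical, the Painless syntax shown is the Elasticsearch 5.x form, and the JSON log fields are assumed to be decoded to the event root (e.g. via Filebeat's json.keys_under_root option).

PUT _ingest/pipeline/set-unit-id
{
  "description" : "Copy unit_id from the event into the document _id",
  "processors" : [
    {
      "script" : {
        "lang" : "painless",
        "inline" : "ctx._id = ctx.unit_id.toString()"
      }
    }
  ]
}

Then, in filebeat.yml, route events through that pipeline:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: set-unit-id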
You can now set a custom _id: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-deduplication.html

Complex attribute that holds another complex attribute

As per RFC 7643, Section 2.3.8:
A complex attribute MUST NOT contain sub-attributes that have sub-attributes (i.e., that are complex).
But when I read the Schema definition in the same RFC, Section 8.7.2 (line 88), I noted that the urn:ietf:params:scim:schemas:core:2.0:Schema description is:
{
  ...
  "attributes" : [
    ...
    {
      "name" : "attributes",
      "type" : "complex",
      "multiValued" : true,
      "description" : "A complex attribute that includes the attributes of a schema.",
      "required" : true,
      "mutability" : "readOnly",
      "returned" : "default",
      "subAttributes" : [
        ...
        {
          "name" : "subAttributes",
          "type" : "complex",
          "multiValued" : true,
          "description" : "Used to define the sub-attributes of a complex attribute.",
          "required" : false,
          "mutability" : "readOnly",
          "returned" : "default",
          "subAttributes" : [
            ...
What did I miss?
For Schema resource definitions specifically, complex attributes may contain another complex attribute; the "Schema" resource is an explicit exception to the general rule.
In RFC 7643, Section 7, we can read:
Unlike other core resources, the "Schema" resource MAY contain a
complex object within a sub-attribute, and all attributes are
REQUIRED unless otherwise specified.

Mongo adding an object to original object

I am not sure if I am asking the correct question, but I assume this is just a basic MongoDB question.
I currently have this:
{
  "_id" : ObjectId("57af98d4d71c4efff5304335"),
  "fullname" : "test",
  "username" : "test",
  "email" : "test@gmail.com",
  "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2"
}
and I would like to add an exercise object like this:
{
  "_id" : ObjectId("57af98d4d71c4efff5304335"),
  "fullname" : "test",
  "username" : "test",
  "email" : "test@gmail.com",
  "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
  "exercises": {
    "benchpress",
    "rows",
    "curls",
  }
I am just unsure how to create exercises as an object without using $push, which would create an array. I don't want an array, I want an object.
Any help would be greatly appreciated.
An object is a collection of key-value pairs. In your representation of the second document, you have a nested document under the exercises key whose value is an object containing only strings. Don't you see something strange there? An object without keys?
It should probably be an array of strings. Note that an array is indeed an object where the key is the numeric index starting from 0 and the value is the string in that position.
(You also have an additional comma and a missing curly brace. Let's fix that.)
This is the document we wish to see after the update:
{
  "_id" : ObjectId("57af98d4d71c4efff5304335"),
  "fullname" : "test",
  "username" : "test",
  "email" : "test@gmail.com",
  "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
  "exercises": [
    "benchpress",
    "rows",
    "curls"
  ]
}
Now, back to your question: how can we update the existing document with the exercises field? It's pretty simple. MongoDB has an update method which does exactly that. Since we don't want to replace the entire document but just add additional fields, we should use $set to update specific fields. Fire up the mongo shell and switch to your database using use db-name, then execute the following command. I assume you have an existing document with id ObjectId("57af98d4d71c4efff5304335"); note that ObjectId is a BSON datatype.
db.scratch.update({ "_id" : ObjectId("57af98d4d71c4efff5304335") }, { $set: {"exercises": ["benchpress", "rows", "curls"] } })
This will update the document as follows:
{
  "_id" : ObjectId("57af98d4d71c4efff5304335"),
  "fullname" : "test",
  "username" : "test",
  "email" : "test@gmail.com",
  "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
  "exercises" : [
    "benchpress",
    "rows",
    "curls"
  ]
}
Here scratch refers to the collection name. The update method takes three parameters:
1. A query to find the document to update.
2. The update parameter (the document to update). You can either replace the whole document or just specific parts of it (using $set).
3. An optional options object which can tell MongoDB to insert the record if the document doesn't exist (upsert) or to update multiple documents that match the criteria (multi), as sketched below.
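For instance, a quick sketch of that third parameter in the mongo shell (the query used here is just an example):

// Insert the document if nothing matches (upsert), and update every
// matching document instead of only the first one (multi).
db.scratch.update(
  { "username" : "test" },
  { $set: { "exercises": ["benchpress", "rows", "curls"] } },
  { upsert: true, multi: true }
)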
EXTRA
Warning: If you execute the following in the mongo shell,
db.scratch.update({ "_id" : ObjectId("57af98d4d71c4efff5304335") }, {"exercises": ["benchpress", "rows", "curls"] })
the entire document would be replaced except the _id field. So, the record would be something like this:
{
  "_id" : ObjectId("57af98d4d71c4efff5304335"),
  "exercises" : [
    "benchpress",
    "rows",
    "curls"
  ]
}
You should only do this when you are aware of the consequences.
Hope this helps.
For more, see https://docs.mongodb.com/manual/reference/method/db.collection.update/

Lookback API: Deleted items

I'd like to use the Lookback API to view the history of a deleted object, which I think should be simple if I know the FormattedID. I just need to query:
{ FormattedID: 'DEXXXX' }
But does the Lookback API record anything special for when an object is deleted (like can I tell exactly when it was deleted or by whom)? Can it help point me to the correct place in the Recycle bin so that I could try to undelete it?
If you know the specific FormattedID, you can just query for its history, as you mentioned above. There isn't a special indicator that a snapshot represents the last valid state before a deletion, but the _ValidTo date will have been changed from the far-future "apocalypse" value (9999-01-01) to the date and time the object was deleted. Unfortunately, the _User field of that last snapshot will be the person that caused the last change to the object before deletion, since we don't record a snapshot for the deletion itself.
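For example, a history query along these lines (the workspace OID and defect ID below are placeholders, and the field list is just a suggestion):

GET https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/12345678919/artifact/snapshot/query.js?find={"FormattedID":"DE1234"}&fields=["_ValidFrom","_ValidTo","Name","_User"]&sort={"_ValidFrom":1}

The most recent snapshot's _ValidTo then gives the deletion date and time described above.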
Interesting. I just ran a REST query on the Recycle Bin with fetch=true and got back a lot more data in the result set than I'm used to:
GET https://rally1.rallydev.com/slm/webservice/1.40/recyclebinentry/12345678914.js
{ "RecycleBinEntry" : { "DeletedBy" : { "_rallyAPIMajor" : "1",
"_rallyAPIMinor" : "40",
"_ref" : "https://rally1.rallydev.com/slm/webservice/1.40/user/12345678910.js",
"_refObjectName" : "User One",
"_type" : "User"
},
"DeletionDate" : "2012-05-15T02:53:10.087Z",
"Errors" : [ ],
"ID" : "DE32",
"Name" : "Error found in TC43: TC07-011",
"ObjectID" : 12345678911,
"Subscription" : { "_rallyAPIMajor" : "1",
"_rallyAPIMinor" : "40",
"_ref" : "https://rally1.rallydev.com/slm/webservice/1.40/subscription/12345678912.js",
"_refObjectName" : "My Subscription",
"_type" : "Subscription"
},
"Type" : "Defect",
"Warnings" : [ ],
"Workspace" : { "_rallyAPIMajor" : "1",
"_rallyAPIMinor" : "40",
"_ref" : "https://rally1.rallydev.com/slm/webservice/1.40/workspace/12345678913.js",
"_refObjectName" : "My Workspace",
"_type" : "Workspace"
},
"_CreatedAt" : "May 14, 2012",
"_objectVersion" : "1",
"_rallyAPIMajor" : "1",
"_rallyAPIMinor" : "40",
"_ref" : "https://rally1.rallydev.com/slm/webservice/1.40/recyclebinentry/12345678914.js",
"_refObjectName" : "Error found in TC43: TC07-011"
}
}
I hadn't realized Rally released an enhancement to this information, but this data includes the Name and Ref of the User that deleted the Object.
You can walk the Recycle bin of the current Workspace/Project using this REST URL:
https://rally1.rallydev.com/slm/webservice/1.40/recyclebin.js?workspace=/workspace/12345678919&project=/project/12345678920&fetch=true
Where 12345678919 and 12345678920 are the Workspace and Project OIDs, respectively.
Unfortunately, the Lookback API doesn't provide anything along the lines of tracking deletions or entries in the Recycle Bin. Its focus is definitely on analytics and providing a robust reporting engine for agile metrics.
This doesn't exclude the possibility that at some point the LBAPI or other aspects of Rally's services could be enhanced with traceability and tracking/accountability functionality. Enhanced traceability in Rally is something customers have expressed a need for, and it is definitely something Rally's Product Management team is aware of as a customer need.