How to fetch field name and structure for additional properties in operation, managedObject, etc. - Cumulocity

I am trying to figure out which fragments are related to each of these objects:
operation
managedObject
event
measurement
alarm
So, is there a way to get all these fragments?
There are also additional properties whose field name is defined as * and whose value can be an object or anything else (*). I have gone through the device management library and the sensor library in the Cumulocity documentation, but they do not contain all the possible fragments, and it is not clear which object a given fragment belongs to, i.e. whether it goes into the operation, the managedObject, or both.

Since every user, device and application can contribute such fragments, there is no "global list" of them that you could refer to. Normally, a client (application or device) knows what data it sends or what data it requests, so in most cases such a list is not required either.
Regarding the relationship between operations and managed objects, there are some typical design patterns. Let's say you want to configure something in a device, like a polling interval:
"mydevice_Configuration": { "pollingRate": 60 }
What your application would do is to send that fragment as an operation to a device:
POST /devicecontrol/operations HTTP/1.1
...
{
"deviceId": "12345",
"mydevice_Configuration": { "pollingRate": 60 }
}
The device would accept the operation (http://cumulocity.com/guides/rest/device-integration/#step-6-finish-operations-and-subscribe) and change its configuration. When it does that successfully, it will update its managed object to contain the new configuration:
PUT /inventory/managedObjects/12345 HTTP/1.1
{
"mydevice_Configuration": { "pollingRate": 60 }
}
This way, your inventory reflects as closely as possible the true state of devices.
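For completeness: "accepting" and later finishing the operation on the device side is also just a status update on the operation itself, along the lines of the linked device-integration guide (a sketch; the operation id 67890 is made up for illustration):
PUT /devicecontrol/operations/67890 HTTP/1.1
...
{
"status": "SUCCESSFUL"
}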
Hope that helps ...

Related

Firebase Realtime DB: How to setup rule to only write to DB if value being passed to be written is not already stored in DB

I am looking for syntax to set up a rule in my Firebase Realtime DB to confirm that the value being passed from my app does not already exist in that particular location before writing another duplicate entry. I am passing an "ExponentPushToken" value for push notifications. This happens anonymously; that is, the write happens without any user authentication, and the DB simply stores a list of "ExponentPushToken" values like so:
{
"users" : {
"push_token" : {
"auto-generated key" : "ExponentPushToken-value"
}
}
}
I attempted to use the following, but it still allows the "duplicate" value to be written. I am wondering if this has anything to do with the random 'auto-generated key' that is generated each time a value is written to the DB, or if I am not thinking through this properly.
{
  "rules": {
    ".read": "false",
    ".write": "true && newData.val() !== data.child('push_token').val()"
  }
}
For context, each ExponentPushToken is a unique value for a given device that uses my app. It is assigned to that device indefinitely, never changes, and is written to the DB. Each time the app is opened, my app's code pushes/sets that token to the DB. I would like to set up a Firebase Realtime DB rule that stops the value passed from the app from being written if it already exists. I am stuck, and I find myself manually parsing the DB to remove duplicate values in order to keep it clean and not send more than one push notification to app users.
Any help or direction you can provide would be greatly appreciated!
Thanks!
This cannot be done using Firebase security rules, as they cannot search for values (that wouldn't scale). The query to check for existing values at a particular location in the database has to be done in the app that performs the push/write of the value into the database.
The app would need to query the database for existing values and have a condition in place to prevent the writing of duplicates.
An example of working code that can be implemented within the app to accomplish this can be found here:
Expo and Firebase Realtime DB - Read values and confirm a particular value does not exist before writing to database
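For illustration only (this is not the linked code): with the Realtime Database REST API, such a duplicate check could be a filtered read before the write. The database URL is a placeholder, and the query may require an ".indexOn": ".value" index on users/push_token in your rules:
GET https://<your-db>.firebaseio.com/users/push_token.json?orderBy="$value"&equalTo="ExponentPushToken-value"
If the result contains no children, the token is not stored yet and the app can go ahead and push it; otherwise it skips the write.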
Rules are evaluated in the context of the node/path in which you declare them. So the .write rule in your question is evaluated on the root, and both data and newData contain the data/new data at the root.
The easiest way to verify that there's no data at the path is to define the rule at that path:
{
  "rules": {
    ".read": "false",
    "push_token": {
      "$pushId": {
        ".write": "!data.exists()"
      }
    }
  }
}
The $pushId here means that the rules under it apply to every child node of push_token, which seems to match your JSON. For more on this, see the documentation on wildcard variables.

IBM Domino 10 - integrating with Resource Reservations via Domino Data Services API

We're trying to integrate with IBM Domino via REST API to pull out information about reservations/events in a specific room and also be able to create new events/reservations remotely. We already integrated with other services such as Microsoft Exchange, but IBM seems to be the toughest of them all.
I have studied it in depth and read thousands of articles & Stack Overflow questions, and got pretty far, but I still can't make any real use of it.
What I currently plan on doing is this:
Pull information about reservations from /api/data/collections/name/($Reservations) or ($Calendar)
Create events/reservations using the documents API, POSTing to /api/data/documents?form=Reservation. I already tried doing this and my reservation even showed up in Domino Admin (not in the Notes client though), but it had some errors (probably just some JSON problem on my side).
While it looks kinda clear and easy, it really isn't. I have a few questions:
How can I get the reservations/calendar for a specific room? ($Calendar) returns all events in the database, without even including which room an event is in; to get that information I would need to additionally query each reservation by its unid, and that would probably kill the entire app.
Is there any way I could filter/search /api/data/documents to return only documents whose form field has a value of Reservation (or any other value)? This way I could get all the reservation documents without querying each document directly (/api/data/documents only returns the href to the document without any interesting data), and I also wouldn't need to additionally enable DAS for each view I want to use.
What are the fields like $25 returned in the JSON, and how can I work out their purpose if they don't have any real name? They often contain interesting data, such as the room name.
I also looked into the FreeBusy API service, and it's pretty interesting; I could easily use it to look for reservations (/busytimes) in the room I want, if only it returned which resource/reservation is causing the busy time. It just shows the start and end time, nothing else.
I also read suggestions that one should create a 'main' user to handle the reservations and use their calendar API (/api/calendar/events), but AFAIK it can't be done that way.
However, I tried creating events in the user's calendar in the specified room, and kind of got it to work by adding the following attendee in the JSON (PHP syntax, actually):
'organizer' => [ 'email' => 'admin/test@test.com' ],
'attendees' => [
[
'role' => 'req-participant',
'userType' => 'room',
'status' => 'accepted',
'rsvp' => true,
'email' => 'testroom@test.com',
],
],
But it doesn't really get displayed in the room reservations, unlike normal events created in IBM Notes. It also cannot be edited or deleted in IBM Notes, it has "Accepted: " in front of the subject, and it says "attendance is delegated for admin". To delete it, I need to delete it via the API through its unid directly. x-lotus-noticetype is being set to A, so I guess it's not being treated as a meeting but as a notice, no idea why though.
I'd really like some help or suggestions on how I could get this working, are there any other ways that would have any sense?
Edit:
After struggling a lot and reading Dave's reply, I think a good solution would be to have a single user that does the reservations via the calendar API, because the direct data API probably won't work. I could then just pull the list of all reservations from the Rooms database, from the ($Calendar) or ($Reservations) view, or create some sort of view of my own.
However! I cannot get the calendar method to work on my local IBM Domino server. Dave pointed out to me that I need to specify a valid email (internet address) for the organizer, so I set my user's internet address to testmail@test.test (test.test is mapped to 127.0.0.1 in the hosts file). Now, as soon as I try to use that address like this:
"organizer": {
"email": "testmail#test.test"
}
I cannot even create the event/reservation (through /mail/admin.nsf/api/calendar/events); it returns a 500 internal error with cserror 1026, and Domino logs:
[CS API]> Error | calendarapi.c(379) : There was an error sending out notices to meeting participants. (0x8E4)
Error connecting to server test/test: The remote server is not a known TCP/IP host.
So it has a problem sending out the notice and doesn't create the event at all. I thought it might not work with localhost, so I set my user's email to an external mail service, and I even received the email, but the event was still created incorrectly (x-lotus-noticetype A is added automatically and overrides whatever I send as the value), and it is not visible in the Room Reservations database.
Here's the json object of an event created via Notes client:
"events": [
{
"href":"\/mail\/admin.nsf\/api\/calendar\/events\/2B35FABBC50EA4D0C12583BC002E26FA-Lotus_Notes_Generated",
"id":"2B35FABBC50EA4D0C12583BC002E26FA-Lotus_Notes_Generated",
"summary":"Notes client meeting",
"location":"Test room\/Test site#test",
"start": {
"date":"2019-03-13",
"time":"09:30:00",
"tzid":"Central European Standard Time"
},
"end": {
"date":"2019-03-13",
"time":"10:30:00",
"tzid":"Central European Standard Time"
},
"class":"public",
"transparency":"opaque",
"sequence":0,
"last-modified":"20190313T082436Z",
"attendees": [
{
"role":"chair",
"status":"accepted",
"rsvp":false,
"displayName":"admin\/test",
"email":"testmail#test.test"
},
{
"role":"req-participant",
"userType":"room",
"status":"needs-action",
"rsvp":true,
"displayName":"Test room\/Test site",
"email":"room#test.test"
}
],
"organizer": {
"displayName":"admin\/test",
"email":"testmail#test.test"
},
"x-lotus-broadcast": {
"data":"FALSE"
},
"x-lotus-notesversion": {
"data":"2"
},
"x-lotus-appttype": {
"data":"3"
}
}
]
As you can see, Notes is able to create the event with testmail@test.test successfully.
Now here's an event created with my API, but with admin/test@test.test as the organizer's email (because a normal email address doesn't let me create the event):
"events": [
{
"href":"\/mail\/admin.nsf\/api\/calendar\/events\/E1D1F752203FC2DFC12583BC002FCB12-Lotus_Auto_Generated",
"id":"E1D1F752203FC2DFC12583BC002FCB12-Lotus_Auto_Generated",
"summary":"Api reservation test",
"location":"Test room\/Test site#test\r\nCN=Test room\/O=Test site",
"description":"API Generated event\r\n",
"start": {
"date":"2019-03-20",
"time":"11:00:00",
"utc":true
},
"end": {
"date":"2019-03-20",
"time":"15:00:00",
"utc":true
},
"class":"public",
"transparency":"opaque",
"sequence":0,
"last-modified":"20190313T084201Z",
"attendees": [
{
"role":"chair",
"status":"accepted",
"rsvp":false,
"displayName":"admin\/test",
"email":"testmail#test.test"
},
{
"role":"req-participant",
"userType":"room",
"status":"needs-action",
"rsvp":true,
"displayName":"Test room\/Test site",
"email":"room#test.test"
}
],
"organizer": {
"displayName":"admin\/test",
"email":"testmail#test.test"
},
"x-lotus-broadcast": {
"data":"FALSE"
},
"x-lotus-notesversion": {
"data":"2"
},
"x-lotus-noticetype": {
"data":"A"
},
"x-lotus-appttype": {
"data":"3"
}
}
]
As you can see, the organizer's & chair emails were automatically updated by Lotus to testmail@test.test, and theoretically everything should work, but it doesn't. In Notes, I see the event as 'Accepted: Api reservation test', I cannot modify things like the room, and the right-click menu doesn't offer an option to delete it (I can delete it with the Del key though).
The only difference is that x-lotus-noticetype gets added, and I don't even know why.
Edit 2:
I got it to work! Dave pointed out that I might have a configuration issue, so I re-installed the server & set everything up again (including the mail services). I used admin@test.test and the meeting was successfully created & added to the room reservations. The server console only showed that the message was delivered.
HOWEVER! I was able to create as many identical meetings as I wanted; they weren't added to the reservations database, but they were successfully created in my calendar (with the room assigned to them) without any errors (not even in the server console), which is obviously bad. Is there any way to check (externally, through the API) whether the reservation was created successfully, and to prevent its creation if the room is busy at that moment? The Notes client prompts an error when the room is busy. I could probably use the FreeBusy API, though that would require another HTTP request before each reservation attempt; if that's the only way, I'll just take it. I see that the status field of the attending room is set to declined, but the response from the POST still contains needs-action, so I'd need to do another delayed request to check whether the status has changed to declined or not.
Also, while it works, I still don't know how I could obtain a list of reservations in a selected room. The existing views in the Reservations database don't give many details, and DAS has to be explicitly enabled for each view in order for this to work. Is there any other way that could work properly?
Another thing: is there any way I could get the current user's email address to use for the reservations, or can I only 'hardcode' it manually? The same goes for the room's email. Currently, I need to have:
User name
User password
User mail database (/mail/admin.nsf/)
User email
Room email
and if I wanted to read some data from the Reservations database directly, then I'd also need the path to that database. This isn't really user-friendly; I'd like to automate some things if possible. Otherwise the integration may be impossible to build.
The reservation database was designed to manage reservations either (1) directly through its own UI, or (2) indirectly by auto-processing notifications from calendar users. By using the DAS data API, you are asserting you can manage reservations (3) programmatically, by manipulating the low-level document items. You might get this to work, but I don't think the reservation database was designed with that in mind.
That's why I think this answer is the best option. It leverages auto-processing (#2 above) and saves you from dealing with the internal design of reservation documents. If you use this approach, you should give the DAS calendar API a list of attendees like this:
"attendees":[
{
"role":"req-participant",
"userType":"room",
"status":"needs-action",
"rsvp":true,
"email":"room#mycorp.com"
}
]
In other words, status must be "needs-action" -- not "accepted" as shown in your original post. Also, make sure you are using the correct email address for both the organizer and the target room. The above example shows an Internet-style address for the room, but administrators don't always give a room an Internet address.
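Putting the pieces together, a create request through the DAS calendar API could then look roughly like this (a sketch only; the mail file path, dates and addresses are taken from the question and are placeholders, not values you can use as-is):
POST /mail/admin.nsf/api/calendar/events HTTP/1.1
...
{
"events": [
{
"summary": "Api reservation test",
"start": { "date": "2019-03-20", "time": "11:00:00", "utc": true },
"end": { "date": "2019-03-20", "time": "15:00:00", "utc": true },
"organizer": { "email": "admin@test.test" },
"attendees": [
{ "role": "req-participant", "userType": "room", "status": "needs-action", "rsvp": true, "email": "room@test.test" }
]
}
]
}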

smartREST template: custom measurement not visible as Datapoint

I created a smartREST template for measurements via "device management -> smartREST templates". I send the reading via MQTT:
s/uc/mytemplateID
777,123,stringValue
The message arrives because I can see it through the API:
{
"time":"2018-07-03T15:36:13.237+01:00",
"id":"47638",
"self":"https://myDomain.mydomain/measurement/measurements/47638",
"source":{
"id":"20018",
"self":"https://myDomain.mydomain/inventory/managedObjects/20018"
},
"type":"myType",
"myStrValue":"stringValue",
"myNumberValue":123
}
But I cannot see it as a data point.
I also cannot see it under "device management -> All devices -> myDevice -> Measurements".
If the cause is that the incoming message does not have the expected format, then the question is: how can I use MQTT to send custom measurements in the expected format?
Thank you
To be able to use Cumulocity standard features on your measurements, they must adhere to a certain standard. Transform your template to create measurements like this:
{
"time":"2018-07-03T15:36:13.237+01:00",
"id":"47638",
"self":"https://myDomain.mydomain/measurement/measurements/47638",
"source":{
"id":"20018",
"self":"https://myDomain.mydomain/inventory/managedObjects/20018"
},
"type":"myType",
"myFragment":{
"mySeries":{
"value":123,
"unit":"aUnit"
},
"myOtherSeries":{
"value":321,
"unit":"anotherUnit"
}
}
}
Note that measurement values are always numeric; using string-based values here may again cause unwanted behavior.
If you want to communicate string-based status variables, sending events or alarms is usually a better approach.
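If a custom template is not strictly necessary, the same structure can also be produced with Cumulocity's built-in SmartREST static templates; a minimal sketch using the static measurement template on the s/us topic, where myFragment, mySeries, 123 and aUnit are the placeholder names/values from the JSON above:
s/us
200,myFragment,mySeries,123,aUnit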
The template configuration to send such measurements should look like this:

Best way to build a "task queue" with RxJava

Currently I'm working on a lot of network-related features. At the moment, I'm dealing with a network channel that allows me to send a single piece of information at a time, and I have to wait for it to be acknowledged before I can send the next piece. I'm representing the server, with 1..n connected clients.
Some of these messages, I have to send in chunks, which is fairly easy to do with RxJava. Currently my "writing" method looks sort of like this:
fun write(bytes: ByteArray, ignoreMtu: Boolean) =
    server.deviceList()
        .first(emptyList())
        .flatMapObservable { devices ->
            Single.fromCallable {
                if (ignoreMtu) {
                    bytes.size
                } else {
                    devices.minBy { device -> device.mtu }?.mtu ?: DEFAULT_MTU
                }
            }
                .flatMapObservable { minMtu ->
                    Observable.fromIterable(bytes.asIterable())
                        .buffer(minMtu)
                }
                .map { it.toByteArray() }
                .doOnNext { server.currentData = bytes }
                .map { devices }
            // part i've left out: waiting for each device acknowledging the message, timeouts, etc.
        }
What's missing here is the part where I only allow one piece of information to be sent at a time. Also, what I require is that when I add a message to my queue, I am able to observe the status of only this message (completed, error).
I've thought about the most elegant way to achieve this. Solutions I've come up with include, for example, a PublishSubject<ByteArray> into which I push the messages (queue-like), adding a subscriber and observing it - but this would, for example, emit onError if the previous message failed.
Another way that crossed my mind was to give each message a number upon creating/queueing it, and to have a global "message counter" Observable which I'd hook into the beginning of the chain with a filter for "currently sent message == MY_MESSAGE_ID". But this feels kind of fragile. I could increment the counter whenever the subscription terminates, but I'm sure there must be a better way to achieve my goal.
Thanks for your help.
For future reference: the most straightforward approach I've found is to add a scheduler that works on a single thread, so that each task is processed sequentially.
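A minimal sketch of that idea, assuming RxJava 2 and the write() function from the question (enqueueWrite and writeScheduler are illustrative names, not an existing API):
import io.reactivex.Completable
import io.reactivex.schedulers.Schedulers
import java.util.concurrent.Executors

// One scheduler backed by a single thread: work subscribed on it runs one task at a time.
private val writeScheduler = Schedulers.from(Executors.newSingleThreadExecutor())

// Each call wraps one message; the returned Completable reports completion/error for that message only.
fun enqueueWrite(bytes: ByteArray, ignoreMtu: Boolean): Completable =
    write(bytes, ignoreMtu)
        .ignoreElements()              // only "done or failed" matters per message
        .subscribeOn(writeScheduler)   // all writes share the single thread, so they run one after another

// Usage: each caller observes only its own message.
// enqueueWrite(payload, ignoreMtu = false).subscribe({ /* sent */ }, { error -> /* handle it */ })
Note that this only serializes the work as far as write() does its work on the subscribing thread; if the chain hops to other schedulers internally, you would need something like concatMap over a queue of pending writes instead.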

Designing an API for the client to a 3rd-party service

I am fairly new to Scala and I'm working on an application (library) which is a client to a 3rd-party service (I'm not able to modify the server side, and it uses a custom binary protocol). I use Netty for networking.
I want to design an API which should allow users to:
Send requests to the server
Send requests to the server and get the response asynchronously
Subscribe to events triggered by the server (having multiple asynchronous event handlers which should be able to send requests as well)
I am not sure how I should design it. Exploring Scala, I stumbled upon a bunch of information about the Actor model, but I am not sure if it can be applied here and, if it can, how.
I'd like to get some recommendations on the direction I should take.
In general, the Scala-ish way to expose asynchronous functionality to user code is to return a scala.concurrent.Future[T].
If you're going the actor route, you might consider encapsulating the binary communication within the context of a single actor class. You can scale the instances of this proxy actor using Akka's router support, and you could produce response futures easily using the ask pattern. There are a few nice libraries (Spray, Play Framework) that make wrapping e.g. a RESTful or even WebSocket layer over Akka almost trivial.
A nice model for the pub-sub functionality might be to define a Publisher trait that you can mix in to some actor subclasses. This could define some state to keep track of subscribers, handle Subscribe and Unsubscribe messages, and provide some sort of convenient method for broadcasting messages:
/**
 * Sends a copy of the supplied event object to every subscriber of
 * the event object class and superclasses.
 */
protected[this] def publish[T](event: T) {
  for (subscriber <- subscribersFor(event)) subscriber ! event
}
These are just some ideas based on doing something similar in some recent projects. Feel free to elaborate on your use case if you need more specific direction. Also, the Akka user list is a great resource for general questions like this, if indeed you're interested in exploring actors in Scala.
Observables
This looks like a good example for the Observable pattern. This pattern comes from the Reactive Extensions of .NET, but is also available for Java and Scala. The library is provided by Netflix and has really good quality.
This pattern has a good theoretical foundation: it is the dual of the iterator in the category-theoretical sense. But more importantly, it has a lot of practical ideas in it. In particular it handles time very well, e.g. you can limit the event rate you want to get.
With an observable you can process events at a very high level. In .NET it looks a lot like an SQL query: you can register for certain events ("FROM"), filter them ("WHERE") and finally process them ("SELECT"). In Scala you can use the standard monadic API (map, filter, flatMap) and of course for-expressions.
An example can look like this:
stackoverflowQuestions.filter(_.tag == "Scala").map(_.subject).throttleLast(1 second).subscribe(println _)
Observables take away a lot of the problems you will have with event-based systems:
Handling subscriptions
Handling errors
Filtering and pre-processing events
Buffering events
Structuring the API
Your API should provide an observable for each event source you have. For procedure calls, you provide a function that maps the function call to an observable. This function calls the remote procedure and provides the result through the observable.
Implementation details
Add the following dependency to your build.sbt:
libraryDependencies += "com.netflix.rxjava" % "rxjava-scala" % "0.15.0"
You can then use the following pattern to convert a callback into an observable (given that your remote API has some way to register and unregister a callback):
private val callbackFunc: (rx.lang.scala.Observer[String]) => rx.lang.scala.Subscription = { o =>
  val listener = {
    case Value(s) => o.onNext(s)
    case Error(e) => o.onError(e)
  }
  remote.subscribe(listener)
  // Return an interface to cancel the subscription
  new Subscription {
    val unsubscribed = new AtomicBoolean(false)
    def isUnsubscribed: Boolean = unsubscribed.get()
    val asJavaSubscription: rx.Subscription = new rx.Subscription {
      def unsubscribe() {
        remote.unsubscribe(listener)
        unsubscribed.set(true)
      }
    }
  }
}
If you have some specific questions, just ask and I can refine the answer.
Additional resources
There is a very nice course on Coursera from Martin Odersky et al., covering Observables and other reactive techniques.
Take a look at the spray-client library. This provides HTTP request functionality (I'm assuming the server you want to talk to is a web service?). It gives you a pretty nice DSL for building requests and is all about being asynchronous. It does use the Akka actor model behind the scenes, but you do not have to build your own actors to use it. Instead, you can just use Scala's Future model for handling things asynchronously. A good introduction to the Future model is here.
The basic building block of spray-client is a "pipeline" which maps an HttpRequest to a Future containing an HttpResponse:
// this is from the spray-client docs
val pipeline: HttpRequest => Future[HttpResponse] = sendReceive
val response: Future[HttpResponse] = pipeline(Get("http://spray.io/"))
You can take this basic building block and build it up into a client API in a couple of steps. First, make a class that sets up a pipeline and defines some intermediate helpers demonstrating ResponseTransformation techniques:
import akka.actor.{ActorSystem, Props}
import scala.concurrent._
import spray.can.client.HttpClient
import spray.client.HttpConduit
import spray.client.HttpConduit._
import spray.http.{HttpRequest, HttpResponse, FormData}
import spray.httpx.unmarshalling.Unmarshaller
import spray.io.IOExtension
type Pipeline = (HttpRequest) => Future[HttpResponse]
// this is basically spray-client boilerplate
def createPipeline(system: ActorSystem, host: String, port: Int): Pipeline = {
val httpClient = system.actorOf(Props(new HttpClient(IOExtension(system).ioBridge())))
val conduit = system.actorOf(props = Props(new HttpConduit(httpClient, host, port)))
sendReceive(conduit)
}
private var pipeline: Pipeline = _
// unmarshalls to a specific type, e.g. a case class representing a datamodel
private def unmarshallingPipeline[T](implicit ec:ExecutionContext, um:Unmarshaller[T]) = (pipeline ~> unmarshal[T])
// for requests that don't return any content. If you get a successful Future it worked; if there's an error you'll get a failed future from the errorFilter below.
private def unitPipeline(implicit ec:ExecutionContext) = (pipeline ~> { _:HttpResponse => () })
// similar to unitPipeline, but where you care about the specific response code.
private def statusPipeline(implicit ec:ExecutionContext) = (pipeline ~> {r:HttpResponse => r.status})
// if you want standard error handling create a filter like this
// RemoteServerError and RemoteClientError are custom exception classes
// not shown here.
val errorFilter = { response:HttpResponse =>
if(response.status.isSuccess) response
else if(response.status.value >= 500) throw RemoteServerError(response)
else throw RemoteClientError(response)
}
pipeline = (createPipeline(system, "yourHost", 8080) ~> errorFilter)
Then you can wrap these up in methods tied to specific requests/responses; these become the public API. For example, suppose the service has a "ping" GET endpoint that returns a string ("pong") and a "form" POST endpoint where you post form data and receive a DataModel in return:
def ping()(implicit ec:ExecutionContext, um:Unmarshaller[String]): Future[String] =
unmarshallingPipeline(Get("/ping"))
def form(formData: Map[String, String])(implicit ec:ExecutionContext, um:Unmarshaller[DataModel]): Future[DataModel] =
unmarshallingPipeline(Post("/form", FormData(formData)))
And then someone could use the API like this:
import scala.util.{Failure, Success}
API.ping() foreach(println) // will print out "pong" when response comes back
API.form(Map("a" -> "b")) onComplete {
case Success(dataModel) => println("Form accepted. Server returned DataModel: " + dataModel)
case Failure(e) => println("Oh noes, the form didn't go through! " + e)
}
I'm not sure if you will find direct support in spray-client for your third bullet point about subscribing to events. Are these events generated by the server and somehow sent to your client outside the scope of a specific HTTP request? If so, then spray-client will probably not be able to help directly (though your event handlers could still use it to send requests). Are the events occurring on the client side, e.g. the completion of deferred processing initially triggered by a response from the server? If so, you could probably get pretty far just by using the functionality in Future, but depending on your use cases, using Actors might make sense.