NOAA provides tidal and weather data through their HTTP API, and I would like to use that API to get data into ThingsBoard (Professional Edition) every six minutes to overlay with my device data (their data are updated every 6 minutes). Can someone walk me through the details of using the correct Integrations or Rule Chains to get the time series data added to the database? It would also be nice to only store the metadata once. Below you can see how to get the most recent tide gauge level (water level) using their API.
For example, to see the latest water level for a tide gauge (in this case, station 8638610), the API allows for getting the most recent water level information -- https://api.tidesandcurrents.noaa.gov/api/prod/datagetter?date=latest&station=8638610&product=water_level&datum=navd&units=metric&time_zone=lst_ldt&application=web_services&format=json
That call produces the following JSON: {"metadata":{"id":"8638610","name":"Sewells Point","lat":"36.9467","lon":"-76.3300"},"data":[{"t":"2022-02-08 22:42", "v":"-0.134", "s":"0.003", "f":"1,0,0,0", "q":"p"}]}
The Data Converter was fairly easy to construct (except maybe the noaa_data.data[0] indexing used in the code below):
//function Decoder(payload, metadata)

// Helper functions used by the converter
function decodeToString(payload) {
   return String.fromCharCode.apply(String, payload);
}

function decodeToJson(payload) {
   var str = decodeToString(payload);
   var data = JSON.parse(str);
   return data;
}

var noaa_data = decodeToJson(payload);

var deviceName = noaa_data.metadata.id;
var dataType = 'water_level';
var latitude = noaa_data.metadata.lat;
var longitude = noaa_data.metadata.lon;
// "data" is an array; the latest observation is its first (and only) element
var waterLevelData = noaa_data.data[0];

var result = {
   deviceName: deviceName,
   dataType: dataType,
   time: waterLevelData.t,
   waterLevel: waterLevelData.v,
   waterLevelStDev: waterLevelData.s,
   latitude: latitude,
   longitude: longitude
};

return result;
which produces the following output:
{
"deviceName": "8638610",
"dataType": "water_level",
"time": "2022-02-08 22:42",
"waterLevel": "-0.134",
"waterLevelStDev": "0.003",
"latitude": "36.9467",
"longitude": "-76.3300"
}
I am not sure what process to use to get the data into ThingsBoard to be displayed as a device alongside my other device data.
Thank you for your help.
If you have a specific (and small) number of stations to grab, then you can do the following:
Create the devices in ThingsBoard manually
Go into rule chains and create a water stations rule chain
For each water station place a 'Generator' node, selecting the originator as required (a sketch of one is shown below)
Route these into an external 'REST API call' node
Route the result of the request into a blue script node and put your decoder in there
Route the result to telemetry (a 'save timeseries' node)
Example rule chain
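As a rough sketch, one of those Generator nodes could contain something like the following (the msg/metadata/msgType shape mirrors what ThingsBoard script nodes return, and the station id is just the one from the question; the REST API call node it routes into would be configured with the NOAA URL for that station):
// Generator node script: fires on the node's configured period (e.g. every 6 minutes).
// The originator is set to the matching ThingsBoard device in the node settings.
var msg = { station: "8638610" };   // placeholder: the NOAA station this generator polls
var metadata = {};
var msgType = "POST_TELEMETRY_REQUEST";
return { msg: msg, metadata: metadata, msgType: msgType };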
A more complex but more scalable solution:
Use a single Generator node
Route the message into a blue script node. This will contain a list of station ids that you want to pull info for. By setting the output of the script to the following shape, you can make it send out multiple messages in sequence (see the fuller sketch after the example output below):
return [{msg: {}, metadata: {}, msgType: ""}, ...etc...];
Route the blue script into the REST API call node and get the station data
Do some post-processing with another blue script node if you need to. Don't decode the data here, though.
Route all of this into another REST API call node and POST the data back to your HTTP integration endpoint (if you don't have one, you will need to create it; it's fairly simple)
Connect your data converter to this integration.
Finally, modify your output so that it matches what the converter expects:
{
"deviceName": "8638610",
"deviceType": "water-station",
"telemetry": {
"dataType": "water_level",
"time": "2022-02-08 22:42",
"waterLevel": "-0.134",
"waterLevelStDev": "0.003",
"latitude": "36.9467",
"longitude": "-76.3300"
}
}
Rough example
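Here is a rough sketch of that fan-out script node (the station list and metadata key are illustrative; each returned element becomes its own message, as described above):
// Script node: turn the single generator message into one message per station.
var stations = ["8638610"]; // add the other station ids you want to poll
var out = [];
for (var i = 0; i < stations.length; i++) {
   out.push({
      msg: { station: stations[i] },
      metadata: { station: stations[i] },
      msgType: msgType // keep the incoming message type
   });
}
return out;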
Above is how I would do it if I didn't want to use any external services. If you're AWS savvy, I'd say set up a cron job to trigger a Lambda function every 6 minutes and post into your platform (a rough sketch follows). Either will work.
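If you go that route, a minimal sketch of such a function might look like this, assuming a Node 18+ Lambda runtime (which has fetch built in) and that the data is pushed straight to ThingsBoard's device HTTP API at /api/v1/&lt;deviceAccessToken&gt;/telemetry; the host, token, and station id are placeholders:
// Scheduled Lambda handler: pull the latest NOAA observation and push it to ThingsBoard.
const NOAA_URL = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter" +
  "?date=latest&station=8638610&product=water_level&datum=navd" +
  "&units=metric&time_zone=lst_ldt&application=web_services&format=json";

exports.handler = async () => {
  // Same call as in the question
  const noaa = await (await fetch(NOAA_URL)).json();
  const obs = noaa.data[0];
  const telemetry = {
    dataType: "water_level",
    time: obs.t,
    waterLevel: obs.v,
    waterLevelStDev: obs.s,
    latitude: noaa.metadata.lat,
    longitude: noaa.metadata.lon
  };
  // Push to the device's HTTP telemetry endpoint; host and token are placeholders.
  await fetch("https://YOUR_THINGSBOARD_HOST/api/v1/YOUR_DEVICE_ACCESS_TOKEN/telemetry", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(telemetry)
  });
};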
I am looking for syntax to set up a rule in my Firebase Realtime DB to confirm that the value being passed from my app does not already exist in that particular location before writing another duplicate entry. I am passing an "ExponentPushToken" value for push notifications. This happens anonymously; that is, the write happens without any user authentication, and the DB simply stores a list of "ExponentPushToken" values like so:
{
"users" : {
"push_token" : {
"auto-generated key" : "ExponentPushToken-value"
}
}
}
I attempted to use the following, but it still allows the "duplicate" value to be written. I am wondering if this has anything to do with the random 'auto-generated key' that is generated each time a value is written to the DB, or if I am not thinking through this properly.
{
"rules": {
".read": "false",
".write": "true && newData.val() !== data.child('push_token').val()",
}
}
For context, each ExponentPushToken is a unique value for a given device that is used to open my app. The token is assigned to that device indefinitely and won't ever change for that device. Each time the app is opened, there is a push/set command in my app's code that writes the token to the DB. I would like to set up a rule for Firebase Realtime DB so that the value passed from the app does NOT get written if it already exists. I am stuck, and I find myself manually parsing the DB to remove duplicate values in order to keep it clean and avoid sending more than one push notification to app users.
Any help or direction you can provide would be greatly appreciated!
Thanks!
This cannot be done using Firebase security rules, as they cannot search for values (that wouldn't scale). The query to check for existing values in a particular location in the database has to be done in the app that is performing the push/write of the value into the database.
The app would need to query the database for existing values and have a condition in place to prevent the writing of duplicate values.
An example of working code that can be implemented within the app to accomplish this can be found here:
Expo and Firebase Realtime DB - Read values and confirm a particular value does not exist before writing to database
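For reference, a minimal sketch of that client-side check, assuming the namespaced (v8-style) Firebase JS SDK and the users/push_token path from the question (the function name is illustrative):
import firebase from "firebase/app";
import "firebase/database";

// Write the token only if no entry with this value exists yet under users/push_token.
// Note: the client needs read access to this path for the query to work, which the
// rules in the question (".read": "false") would currently block.
async function savePushTokenIfNew(token) {
  const ref = firebase.database().ref("users/push_token");
  const snapshot = await ref.orderByValue().equalTo(token).once("value");
  if (!snapshot.exists()) {
    await ref.push(token); // no duplicate found, store the new token
  }
}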
Rules are evaluated in the context of the node/path in which you declare them. So the .write rule in your question is evaluated on the root, and both data and newData contain the data/new data at the root.
The easiest way to verify that there's no data at the path is to define the rules at that path:
{
"rules": {
".read": "false",
"push_token": {
"$pushId": {
".write": "!data.exists()",
}
}
}
}
The $pushId here means that the rules under it apply to every child node of push_token, which seems to match your JSON. For more on this, see the documentation on wildcard variables.
I am getting some attributes in an API, but they all get lost after an HTTP Request connector in Mule 4.
Why is this happening?
Look in the connector's configuration properties -> Advanced tab for the connector configuration (in this case the HTTP connector's "request" operation) and you'll find a Target Variable and a Target Value. If you fill in the target with a name, this performs an enrichment and avoids overwriting the Mule message. If you leave it blank (the default), it saves the message (attributes, payload) over the top of the existing one, which is what you're seeing now. This mirrors the old Mule 3 functionality, but sometimes you want it to leave what you already have alone.
With the Target Value you get to pick exactly what gets saved. If you want just the payload, put that in. If you want both payload and attributes, I'd use "message", as that means both the payload and the attributes get saved in the variable. Of course, you may not want that much saved, so feel free to put in whatever DataWeave expression you like; you could even create something with bits from anywhere, like:
{
statusCode: attributes.statusCode,
headers: attributes.headers,
payload: payload
}
A connector operation may replace the attributes with those of the operation. If you need to preserve the previous attributes, you need to save them to a variable first.
This is the default behaviour of MuleSoft: whenever a request crosses a transport barrier, it loses the existing attributes. You need to preserve the attributes before the HTTP Request.
We're trying to integrate with IBM Domino via REST API to pull out information about reservations/events in a specific room and also be able to create new events/reservations remotely. We already integrated with other services such as Microsoft Exchange, but IBM seems to be the toughest of them all.
I studied it deeply and read thousands of articles & Stack Overflow questions, and got pretty far, but I still can't make any real use of it.
What I currently plan on doing is this:
Pull information about reservations from /api/data/collections/name/($Reservations) or ($Calendar)
Create events/reservations using the documents API by POSTing to /api/data/documents?form=Reservation. I already tried doing this, and my reservation even showed up in Domino Admin (though not in the Notes client), but it had some errors (probably just some JSON problem on my side)
While it looks kinda clear and easy, it really isn't. I have a few questions:
How can I get the reservations/calendar for a specific room? ($Calendar) returns all events in the database without even including which room each one is in; to get that information I would need to additionally query each reservation by its unid, and that would probably kill the entire app
Is there any way I could filter/search /api/data/documents to return only documents whose form field has a value of Reservation (or any other value)? This way I could get all the reservation documents without querying each document directly (/api/data/documents only returns the href to the document without any interesting data), and I also wouldn't need to enable DAS for each view I want to use.
What are the fields like $25 returned in the JSON, and how can I know their purpose if they don't have any real name? They often contain interesting data, such as the room name.
I also looked into the FreeBusy API service; it's pretty interesting, and I could easily use it to look for reservations (/busytimes) in the room I want, if only it returned which resource/reservation is causing the busy time. It just shows the start and end times, nothing else.
I also read suggestions that one should create a 'main' user to handle the reservations and use their calendar API (/api/calendar/events), but AFAIK it can't be done that way.
However, I tried creating events in that user's calendar in the specified room, and kinda got it to work by adding the following attendee in the JSON (PHP syntax, actually):
'organizer' => [ 'email' => 'admin/test@test.com' ],
'attendees' => [
[
'role' => 'req-participant',
'userType' => 'room',
'status' => 'accepted',
'rsvp' => true,
'email' => 'testroom@test.com',
],
],
But it doesn't really get displayed in the room reservations, unlike normal events created in IBM Notes. It also cannot be edited or deleted in IBM Notes, it has "Accepted: " in front of the subject, and it says "attendance is delegated for admin". To delete it, I need to delete it via the API through its unid directly. x-lotus-noticetype is being set to A, so I guess it's not being treated as a meeting but as a notice, though I have no idea why.
I'd really like some help or suggestions on how I could get this working, are there any other ways that would have any sense?
Edit:
After struggling a lot and reading Dave's reply, I think it would be a good solution to have a single user that does the reservations via the calendar API, because the direct data API probably won't work. I could just pull the list of all reservations from the Rooms database ($Calendar) or ($Reservations) view, or make some sort of view of my own.
However! I cannot get the calendar method to work on my local IBM Domino server. Dave pointed out that I need to specify a valid email (internet address) for the organizer, so I set my user's internet address to testmail@test.test (test.test is mapped to 127.0.0.1 in the hosts file). Now, as soon as I try to use that address like this:
"organizer": {
"email": "testmail#test.test"
}
I cannot even create the event/reservation (through /mail/admin.nsf/api/calendar/events); it returns a 500 internal error with cserror 1026, and Domino logs:
[CS API]> Error | calendarapi.c(379) : There was an error sending out notices to meeting participants. (0x8E4)
Error connecting to server test/test: The remote server is not a known TCP/IP host.
So it has a problem sending the notice and doesn't create the event at all. I thought it might not work with localhost, so I set my user's email to an external mail service, and I even received the email, but the event was still created incorrectly (x-lotus-noticetype A is added automatically and overrides whatever I send as the value), and it's not visible in the Room Reservations database.
Here's the json object of an event created via Notes client:
"events": [
{
"href":"\/mail\/admin.nsf\/api\/calendar\/events\/2B35FABBC50EA4D0C12583BC002E26FA-Lotus_Notes_Generated",
"id":"2B35FABBC50EA4D0C12583BC002E26FA-Lotus_Notes_Generated",
"summary":"Notes client meeting",
"location":"Test room\/Test site#test",
"start": {
"date":"2019-03-13",
"time":"09:30:00",
"tzid":"Central European Standard Time"
},
"end": {
"date":"2019-03-13",
"time":"10:30:00",
"tzid":"Central European Standard Time"
},
"class":"public",
"transparency":"opaque",
"sequence":0,
"last-modified":"20190313T082436Z",
"attendees": [
{
"role":"chair",
"status":"accepted",
"rsvp":false,
"displayName":"admin\/test",
"email":"testmail#test.test"
},
{
"role":"req-participant",
"userType":"room",
"status":"needs-action",
"rsvp":true,
"displayName":"Test room\/Test site",
"email":"room#test.test"
}
],
"organizer": {
"displayName":"admin\/test",
"email":"testmail#test.test"
},
"x-lotus-broadcast": {
"data":"FALSE"
},
"x-lotus-notesversion": {
"data":"2"
},
"x-lotus-appttype": {
"data":"3"
}
}
]
As you can see, Notes is able to create the event with testmail@test.test successfully.
Now here's an event created with my API, but with admin/test@test.test as the organizer's email (because the normal email doesn't let me create the event):
"events": [
{
"href":"\/mail\/admin.nsf\/api\/calendar\/events\/E1D1F752203FC2DFC12583BC002FCB12-Lotus_Auto_Generated",
"id":"E1D1F752203FC2DFC12583BC002FCB12-Lotus_Auto_Generated",
"summary":"Api reservation test",
"location":"Test room\/Test site#test\r\nCN=Test room\/O=Test site",
"description":"API Generated event\r\n",
"start": {
"date":"2019-03-20",
"time":"11:00:00",
"utc":true
},
"end": {
"date":"2019-03-20",
"time":"15:00:00",
"utc":true
},
"class":"public",
"transparency":"opaque",
"sequence":0,
"last-modified":"20190313T084201Z",
"attendees": [
{
"role":"chair",
"status":"accepted",
"rsvp":false,
"displayName":"admin\/test",
"email":"testmail#test.test"
},
{
"role":"req-participant",
"userType":"room",
"status":"needs-action",
"rsvp":true,
"displayName":"Test room\/Test site",
"email":"room#test.test"
}
],
"organizer": {
"displayName":"admin\/test",
"email":"testmail#test.test"
},
"x-lotus-broadcast": {
"data":"FALSE"
},
"x-lotus-notesversion": {
"data":"2"
},
"x-lotus-noticetype": {
"data":"A"
},
"x-lotus-appttype": {
"data":"3"
}
}
]
As you can see, the organizer's & chair emails were automatically updated by Lotus to testmail@test.test, and theoretically everything should work, but it doesn't. In Notes, I see the event as 'Accepted: Api reservation test', I cannot modify things like the room, and I don't have the option to delete it from the right-click menu (I can delete it with the Del key, though).
The only difference is that x-lotus-noticetype gets added, and I don't even know why.
Edit 2:
I got it to work! Dave pointed out that I may have had some configuration issue, so I re-installed the server and set everything up again (including the mail services). I used admin@test.test and the meeting was successfully created and added to the room reservations. The server console only showed that the message was delivered.
HOWEVER! I was able to create as many identical meetings as I wanted; they weren't added to the reservations database, but they were successfully created in my calendar (with the room assigned to them) without any errors (not even in the server console), which is obviously bad. Is there any way to check (externally, through the API) whether the reservation was created successfully, and prevent its creation if the room is busy at that moment? The Notes client prompts an error when the room is busy. I could probably use the FreeBusy API, but that would require another HTTP request before each reservation attempt; if that's the only way, then I'll just take it. I see that the status field of the attending room is set to declined, but the response from the POST still contains needs-action, so I'd need to do a delayed request once again to check whether the status has changed to declined or not.
Also, while it works, I still don't know how I could obtain a list of reservations in a selected room. The existing views in the Reservations database don't give many details, and DAS has to be explicitly enabled for each view in order for them to work. Is there any other way that could work properly?
Another thing: is there any way I could get the current user's email address to use for the reservations, or can I only hardcode it manually? The same goes for the room's email. Currently, I need to have:
User name
User password
User mail database (/mail/admin.nsf/)
User email
Room email
and if I wanted to read some data from the Reservations database directly, then I'd also need the path to that database. This isn't really user-friendly; I'd like to automate some things if possible, otherwise the integration may be impossible to make.
The reservation database was designed to manage reservations either (1) directly through its own UI, or (2) indirectly by auto-processing notifications from calendar users. By using the DAS data API, you are asserting you can manage reservations (3) programmatically, by manipulating the low-level document items. You might get this to work, but I don't think the reservation database was designed with that in mind.
That's why I think this answer is the best option. It leverages auto-processing (#2 above) and saves you from dealing with the internal design of reservation documents. If you use this approach, you should give the DAS calendar API a list of attendees like this:
"attendees":[
{
"role":"req-participant",
"userType":"room",
"status":"needs-action",
"rsvp":true,
"email":"room#mycorp.com"
}
]
In other words, status must be "needs-action", not "accepted" as shown in your original post. Also, make sure you are using the correct email address for both the organizer and the target room. The above example shows an Internet-style address for the room, but administrators don't always give a room an Internet address.
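Putting that together with the event structure from the question, a create request sent to the organizer's mail file might look roughly like this (a sketch only; the mail file path, addresses, and times are just the placeholders already used above, and the exact body shape should be checked against the DAS calendar API documentation):
POST /mail/admin.nsf/api/calendar/events HTTP/1.1
Content-Type: application/json

{
"events": [
{
"summary": "Api reservation test",
"start": { "date": "2019-03-20", "time": "11:00:00", "utc": true },
"end": { "date": "2019-03-20", "time": "15:00:00", "utc": true },
"organizer": { "email": "admin@test.test" },
"attendees": [
{
"role": "req-participant",
"userType": "room",
"status": "needs-action",
"rsvp": true,
"email": "room@test.test"
}
]
}
]
}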
I am trying to figure out which fragments are related to an operation and to these objects:
managedObject
event
measurement
alarm
So, is there a way to get all these fragments?
Also, there are additional properties for which the field name is defined as * and the value can be an Object or anything else (*). I have gone through the device management library and the sensor library in the Cumulocity documentation, but they do not contain all the possible fragments, and there is no clarity as to which object a fragment goes in, i.e. does it go in the operation or the managedObject, or both?
Since every user, device and application can contribute such fragments, there is no "global list" of them that you could refer to. Normally, a client (application, device) knows what data it sends or what data it requests, so it's also in most cases not required.
Regarding the relationship between operations and managed objects, there are some typical design patterns. Let's say you want to configure something in a device, like a polling interval:
"mydevice_Configuration": { "pollingRate": 60 }
What your application would do is to send that fragment as an operation to a device:
POST /devicecontrol/operations HTTP/1.1
...
{
"deviceId": "12345",
"mydevice_Configuration": { "pollingRate": 60 }
}
The device would accept the operation (http://cumulocity.com/guides/rest/device-integration/#step-6-finish-operations-and-subscribe) and change its configuration. When it does that successfully, it will update its managed object to contain the new configuration:
PUT /inventory/managedObjects/12345 HTTP/1.1
{
"mydevice_Configuration": { "pollingRate": 60 }
}
This way, your inventory reflects as closely as possible the true state of devices.
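For completeness, the "accept/finish the operation" step referenced above is also just a REST call against the operation that was created; a rough sketch (the operation id 12346 is a made-up placeholder, and per the linked guide the device would typically set the status to EXECUTING first and then to SUCCESSFUL):
PUT /devicecontrol/operations/12346 HTTP/1.1
...
{
"status": "SUCCESSFUL"
}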
Hope that helps ...