Querying all sensor values for a machine - Cumulocity

I have a simulator with 5 custom measurements sent every 5 minutes. I would like to get the results of these custom measurements based on the name of the simulator. Can you please assist me with how to achieve that?

You cannot query measurements directly by device name (only by device ID), so you have to do two queries:
Step 1:
Query for the device based on the name to get the ID:
/inventory/managedObjects?fragmentType=c8y_IsDevice&text={device_name}
Step 2:
Query for measurements based on source ID
/measurement/measurements?source={device ID}&dateFrom={...}&dateTo={...}&revert=true&pageSize=5
I added a couple more query parameters. dateFrom/dateTo should be self-explanatory; the revert parameter gives you the latest measurements first, and pageSize limits the results to 5. So the query should give you the latest 5 measurements for the device, which should be one for each of your 5 measurements.
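Here is a minimal sketch of the two steps in Python, assuming a hypothetical tenant URL, credentials, device name, and date range, and the standard JSON shapes returned by the Cumulocity REST API:

import requests

BASE = "https://tenant.cumulocity.com"  # hypothetical tenant URL
AUTH = ("username", "password")         # hypothetical credentials

# Step 1: resolve the device name to its managed-object ID
resp = requests.get(
    f"{BASE}/inventory/managedObjects",
    params={"fragmentType": "c8y_IsDevice", "text": "mySimulator"},
    auth=AUTH,
)
device_id = resp.json()["managedObjects"][0]["id"]

# Step 2: fetch the latest 5 measurements for that device
resp = requests.get(
    f"{BASE}/measurement/measurements",
    params={
        "source": device_id,
        "dateFrom": "2023-01-01T00:00:00Z",  # placeholder range
        "dateTo": "2023-01-02T00:00:00Z",
        "revert": "true",
        "pageSize": 5,
    },
    auth=AUTH,
)
for m in resp.json()["measurements"]:
    print(m["time"], m["type"])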

Related

Grafana Status timeline not working with PostgreSQL and only one query

I’m creating a dashboard in Grafana with data extracted from Google Servers and stored in a PostgreSQL database.
In one of the visualizations I would like to create a Status Timeline:
I have created a query in PostgreSQL which returns the following table:
As I understand it, that is the data needed to create a Status Timeline (the time, the count of a variable, and the name of the counted variable).
But when I copy that query into Grafana, the chart is not what I had imagined:
I don’t know what else to do or how to fix it.
Has anyone faced this issue before, or does anyone know how to solve it, in order to get a Status Timeline like the one shown above?
Thank you very much!
As you can see in your last image, the metric names used in the panel are the column names, and the status values used are the column values. So you need a table result like this:
executed_on         | dev | asia-dev | ...
2022-06-07 12:00:00 | 1   | 4        | ...
...                 | ... | ...      | ...
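If the raw data arrives as one row per environment, conditional aggregation is one way to produce that shape. A minimal sketch, assuming a hypothetical table deployments(executed_on, environment, value) and hypothetical connection settings:

import psycopg2

conn = psycopg2.connect("dbname=grafana_source")  # hypothetical DSN
# Pivot one row per environment into one column per environment,
# which is the table shape the Status Timeline panel expects.
sql = """
    SELECT executed_on,
           MAX(value) FILTER (WHERE environment = 'dev')      AS dev,
           MAX(value) FILTER (WHERE environment = 'asia-dev') AS "asia-dev"
    FROM deployments
    GROUP BY executed_on
    ORDER BY executed_on;
"""
with conn, conn.cursor() as cur:
    cur.execute(sql)
    for row in cur.fetchall():
        print(row)  # (timestamp, dev value, asia-dev value)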

Azure Stream Analytics : Select data with the last timestamp only

I'm working on a way to stream the status of some jobs that are running on an HPC resource (sort of like trying to create a dashboard to look at real-time flight status). I generate and push data every 60 seconds. Unfortunately, this way I end up with a lot of repeated data, as the status of each 'job' changes unpredictably. I need a way to keep only the latest data. I'm not an SQL pro and do this work in my free time, so any help will be appreciated!
Here is my query:
SELECT
    Job, Ref, Location, Queue, Description, Status, ElapTime,
    CAST(Time AS datetime) AS Time
INTO
    output_source
FROM
    input_source
Here is what my output looks like when I test the query:
Query Test Result
As you can see in the image, there are two sets of data with two different timestamps. I would like the query to return all the columns associated with only the last timestamp. How do I do this? Any ideas? Apologies if this is a repeated question; I have not found an answer that has helped me solve this problem.
Thanks for all your help!
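Not an Azure Stream Analytics answer, but the row-filtering logic the question describes, keeping only the rows that share the most recent timestamp, looks like this pandas sketch (the toy data and column names merely mirror the query above):

import pandas as pd

# Two snapshots of the same jobs, 60 seconds apart (toy data)
df = pd.DataFrame({
    "Job":    ["job1", "job2", "job1", "job2"],
    "Status": ["Q", "R", "R", "C"],
    "Time":   pd.to_datetime([
        "2021-01-01 10:00", "2021-01-01 10:00",
        "2021-01-01 10:01", "2021-01-01 10:01",
    ]),
})

# Keep only the rows from the latest snapshot
latest = df[df["Time"] == df["Time"].max()]
print(latest)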

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that will display a hostname and the number of times that host failed to log in within the past five minutes, when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but the hosts that were listed previously get removed, as they are no longer included in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents) but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved to still only get results where the failed logins were within 5 minutes but be able to search over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.
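For the first question, the underlying windowing logic, counting failures per host in 5-minute buckets over a longer period and keeping hosts with 3 or more, looks like this pandas sketch outside Splunk (the event data and column names are hypothetical):

import pandas as pd

events = pd.DataFrame({
    "host": ["web1", "web1", "web1", "db1"],
    "time": pd.to_datetime([
        "2021-01-01 10:00", "2021-01-01 10:01",
        "2021-01-01 10:03", "2021-01-01 10:00",
    ]),
})

# Bucket failed logins into 5-minute windows per host...
counts = (
    events.groupby(["host", pd.Grouper(key="time", freq="5min")])
          .size()
          .reset_index(name="failures")
)
# ...and keep only the buckets with 3 or more failures
print(counts[counts["failures"] >= 3])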

Import data from csv into database when not all columns are guaranteed

I am trying to build an automatic feature for a database that takes NOAA weather data and imports it into our own database tables.
Currently we have 3 steps:
1. Import the data literally into its own table to preserve the original data
2. Copy its data into a table that better represents our own data in structure
3. Then convert that table into our own data
The problem I am having stems from the data that NOAA gives us. It comes in the following format:
Station Station_Name Elevation Latitude Longitude Date MXPN Measurement_Flag Quality_Flag Source_Flag Time_Of_Observation ...
Starting with MXPN (Maximum temperature for water in a pan), each observation is comprised of its own column and the 4 columns after it, and that same 5-column pattern repeats for each form of weather observation. The problem, though, is that if a particular type of weather was not observed at any of the stations reported, that set of 5 columns is omitted completely.
For example if you look at Central Florida stations, you will find no SNOW (Snowfall measured in mm). However, if you look at stations in New Jersey, you will find this column as they report snowfall. This means a 1:1 mapping of columns is not possible between different reports, and the order of columns may not be guaranteed.
Even worse, some of the weather types include wildcards in their definition, e.g. SN*#, where * is a number from 0-8 representing the type of ground and # is a number from 1-7 representing the depth at which the minimum soil temperature was taken, and we'd like to collect these together.
All of these are column headers, and my instinct is to build a small Java program to map these properly to our data set as we'd like it. However, my superior believes it may be possible to have the database do this on a mass import, but he does not know how to do it.
Is there a way to do this as a mass import, or is it best for me to just write the Java program to convert the data to our format?
Systems in use:
MariaDB for the database.
CentOS 7 for the operating system (if it really becomes an issue).
Java is being done with JPA and Spring Boot, with Hibernate where necessary.
You are creating a new table for each file.
I presume that the first 6 fields are always present, and that you have 0 or more occurrences of the next 5 fields. If you are using SQL Server, I would approach it as follows:
1. Query the information_schema catalog to get a count of the fields in the table. If the count is 6, then no observations are present; if 11 columns, then you have 1 observation; if 16, then you have 2 observations; etc.
2. Now that you know the number of observations, you can write some SQL that will loop over the observations and insert them into a child table with a link back to a parent table which holds the first 6 fields.
Apologies if my assumptions are way off.
-HTH
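If you do end up writing the mapping program instead, the header-driven grouping is straightforward. A sketch in Python rather than Java, assuming the first 6 columns are always fixed and each observation contributes exactly 5 columns (value plus 4 flag columns) as described above; the file name is hypothetical:

import csv

FIXED = 6  # Station .. Date
GROUP = 5  # value, Measurement_Flag, Quality_Flag, Source_Flag, Time_Of_Observation

with open("noaa_daily.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Every 5th header cell after the fixed block names an observation type
    obs_types = header[FIXED::GROUP]  # e.g. ['MXPN', 'SNOW', ...]
    for row in reader:
        station = row[:FIXED]
        for i, obs in enumerate(obs_types):
            start = FIXED + i * GROUP
            value, m_flag, q_flag, s_flag, t_obs = row[start:start + GROUP]
            # insert (station fields, obs, value, flags) into the child table here
            print(station[0], obs, value)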

SoQL query for unique values with Socrata API

I am trying to count the total unique number of heating complaints in the Socrata NYC 311 service requests database: https://data.cityofnewyork.us/Social-Services/All-Heat-complaints-to-311/m5nm-vca4
Ideally I want to use the data to populate a map with unique complaints, as well as the number of complaints by each unique address. So far I have used the following query which only returns about 2 days of data:
http://data.cityofnewyork.us/resource/m5nm-vca4.json?$select=created_date,longitude,latitude,COUNT(*)&$group=created_date,longitude,latitude&$where=complaint_type=%27heating%27
Is there any way to query the database for unique addresses across all dates and count the total complaints for each?
Can you be a little more descriptive about what you're trying to get? An example of the output you want would be perfect.
I think we might be able to get what you're looking for, but we need to get the aggregation right.
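For reference, a per-address aggregation over all dates might look like the sketch below, using SoQL grouping against the same endpoint; the incident_address column name is an assumption about this dataset's schema:

import requests

url = "https://data.cityofnewyork.us/resource/m5nm-vca4.json"
params = {
    "$select": "incident_address, count(*) AS total_complaints",
    "$group": "incident_address",
    "$where": "complaint_type = 'heating'",
    "$order": "total_complaints DESC",
}
resp = requests.get(url, params=params)
for row in resp.json()[:10]:  # top 10 addresses by complaint count
    print(row)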