Are Heart Points available in the REST API? - google-fit-sdk

Are Heart Points available in the REST API for reading? If so, how do we get to them? I'm not seeing them in the documentation. Thanks.
Eric

You should use the Users.dataSources.datasets API endpoint. You can grab the heart points merged from all data points by querying the dataSourceId "derived:com.google.heart_minutes:com.google.android.gms:merge_heart_minutes". It returns a JSON object with an array called "points". Each heart point is in that list, and drilling down into each point gives you the derived source.
The endpoint takes the form:
https://www.googleapis.com/fitness/v1/users/me/dataSources/dataSourceId/datasets/datasetId
Replace the following in the URL above:
dataSourceId: derived:com.google.heart_minutes:com.google.android.gms:merge_heart_minutes
datasetId: The ID is formatted like "startTime-endTime", where startTime and endTime are 64-bit integers (epoch nanoseconds).
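For illustration, here's a minimal Python sketch of that request (the requests library is assumed, and YOUR_ACCESS_TOKEN stands in for an OAuth 2.0 token with the appropriate fitness scopes, obtained elsewhere):
import requests

# Placeholder: obtain an OAuth 2.0 access token with the fitness scopes
# elsewhere; token acquisition is not shown here.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

DATA_SOURCE_ID = "derived:com.google.heart_minutes:com.google.android.gms:merge_heart_minutes"
# datasetId is "startTime-endTime" in epoch nanoseconds.
DATASET_ID = "1607904000000000000-1608057778000000000"

url = (
    "https://www.googleapis.com/fitness/v1/users/me/dataSources/"
    f"{DATA_SOURCE_ID}/datasets/{DATASET_ID}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
body = resp.json()
# The dataset's array of points (key name as described above).
points = body.get("point") or body.get("points") or []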

Expanding on WiteCastle's answer, this datasource will provide you with the heart points.
"derived:com.google.heart_minutes:com.google.android.gms:merge_heart_minutes"
You will need to specify a timeframe via the datasetId parameter, which is a start time and an end time in epoch-nanoseconds format, e.g.:
1607904000000000000-1608057778000000000
The JSON response includes an array of points, essentially one for each time the sensor detected the user's activity. The heart points are accessible within each point's "fpVal". An example point is below:
{
    "startTimeNanos": "1607970900000000000",
    "endTimeNanos": "1607970960000000000",
    "dataTypeName": "com.google.heart_minutes",
    "originDataSourceId": "derived:com.google.heart_rate.bpm:com.google.android.gms:merge_heart_rate_bpm",
    "value": [
        {
            "fpVal": 2,  <--- 2 heart points recorded during this activity
            "mapVal": []
        }
    ],
    "modifiedTimeMillis": "1607976569329"
},
To get the heart points for today, specify the timeframe (00:00-23:59 in epoch format), then loop through each point adding up all the "fpVal" values, as in the sketch below.
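As a rough Python sketch of that aggregation (continuing from the request sketch above, with key names taken from the example point):
from datetime import datetime, timedelta

# Build today's datasetId: local midnight to 23:59:59, in epoch nanoseconds.
start = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
end = start + timedelta(hours=23, minutes=59, seconds=59)
dataset_id = f"{int(start.timestamp() * 1e9)}-{int(end.timestamp() * 1e9)}"

# After fetching the dataset for this window (see the request sketch above),
# sum the fpVal of every point to get the day's total Heart Points.
total = 0.0
for point in points:
    for value in point.get("value", []):
        total += value.get("fpVal", 0)
print(f"Heart Points today: {total}")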

Related

How to set up a pr.job_type.Murnaghan job that for each volume reads the structure output of another Murnaghan job?

For the energy-volume (Ene-Vol) calculations of the non-cubic structures, one has to relax the structures at all volumes.
Suppose that I start with a pr.job_type.Murnaghan() job whose ref_job_relax is a cell-shape and internal-coordinates relaxation. Let's call this Murnaghan job R1, with 7 volumes, i.e. R1-V1, ..., R1-V7.
After one or more rounds of relaxation (R1...RN), one has to perform a static calculation to acquire a precise energy. Let's call the final static round S.
For the final round, I want to create a pr.job_type.Murnaghan() job that reads all the required setup configuration from ref_job_static except the input structures.
Then for each volume S-Vn it should read the corresponding output structure of RN-Vn, e.g. R1-V1-->S-V1, ..., R1-V7-->S-V7 if there were only one round of relaxation.
I am looking for an implementation like below:
murn_relax = pr.create_job(pr.job_type.Murnaghan, 'R1')
murn_relax.ref_job = ref_job_relax
murn_relax.run()
murn_static = pr.create_job(pr.job_type.Murnaghan, 'S', continuation=True)
murn_static.ref_job = ref_job_static
murn_static.structures_from(prev_job='R1')
murn_static.run()
The Murnaghan object has two relevant functions:
get_structure() https://github.com/pyiron/pyiron_atomistics/blob/master/pyiron_atomistics/atomistics/master/murnaghan.py#L829
list_structures() https://github.com/pyiron/pyiron_atomistics/blob/master/pyiron_atomistics/atomistics/master/murnaghan.py#L727
The first returns the predicted equilibrium structure and the second returns the structures at the different volumes.
In addition you can get the IDs of the children and iterate over those:
structure_lst = [
    pr.load(job_id).get_structure()
    for job_id in murn_relax.child_ids
]
to get a list of converged structures.
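Putting the two together, here is an untested sketch of the final static round; the VASP job type, the calc_static() call, and the S_V* job names are assumptions, not a built-in continuation mechanism:
# Collect the relaxed structures from the last relaxation round (RN)...
structure_lst = [
    pr.load(job_id).get_structure()
    for job_id in murn_relax.child_ids
]

# ...and run one static calculation per volume.
for i, structure in enumerate(structure_lst):
    job = pr.create_job(pr.job_type.Vasp, f"S_V{i + 1}")  # hypothetical names
    job.structure = structure   # reuse the relaxed cell at this volume
    job.calc_static()           # switch the DFT job to a static calculation
    job.run()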

InfluxDB 1.8 schema design for industrial application?

I have a node-red-S7PLC link pushing the following data to InfluxDB on a 1.5-second cycle:
msg.payload = {
    name: 'PLCTEST',
    level1_m: msg.payload.a90,       // value payload from PLC passed to InfluxDB
    power1: msg.payload.a93,
    'valvepos_%': msg.payload.a107,  // quoted because '%' is not valid in a bare key
    temp1: msg.payload.a111,
    washer_acidity: msg.payload.a113
    // etc.
};
return msg;
In total there are 130 individual data points, consisting of binary states (alarms, button presses) and measurements (temperature, pressure, flow, ...).
This has been running for a week now as a stress test for DB writes. Writing seems to be fine, but I have noticed that if I switch a Grafana dashboard with 10 temperature measurements from a 30 min query window to a 3 hr window, the load times start to get annoyingly long. A 12 hr window is a no-go. This, I assume, is because everything is pushed as field keys and field values. Without indexes this is straining the database.
The Grafana query inspector gives me 1081 rows per measurement query, so x10 = 10810 rows per dashboard query. But the whole pool InfluxDB has to go through is 130 measurements x 1081 = 140530 rows per 3 hr window.
I would like to get a few pointers on how to optimize the schema. I have the following in mind.
DB: Application_nameX
Measurement: Process_metrics
    Tags: Temp, press, flow, %, Level, acidity, Power
    Tag values: CT-xx1...CT-xxn, CP-xx1...CP-xxn, CF-xx1...CF-xxn, ...
    Field key: Value, field value: value
Measurement: Alarms_On
    Field key: State, field value: "true", "false"
Measurement: Binary_ON
    Field key: State, field value: "true", "false"
In Node-RED this would then be something like the following for a few temperatures (I think):
msg.payload = [{
    Value: msg.payload.xxx, // value payload from PLC passed to InfluxDB
    Value: msg.payload.xxx,
    Value: msg.payload.xxx
},
{
    Temp: "CT_xx1",
    Temp: "CT_xx2",
    Temp: "CT_xx2"
}];
return msg;
EDIT: Following Robert's comments.
I read the InfluxDB manuals for a week, plus other samples online, before writing here. Somehow InfluxDB is just different enough from the normal SQL mindset that I find this unusually difficult. But I did have a few moments of clarity over the weekend.
I think the following would be more appropriate.
DB: Station_name
Measurements: Process_metrics, Alarms, Binary
    Tag: "SI_metric"
    Tag values: "Temperature", "Pressure", etc.
    Field key: "process_position" = CT/P/F_xxx
    Field values: process_values
This should keep the cardinality from going bonkers compared with my original idea.
I think alarms and binary states can be left as field key/field value only, and separating them into their own measurements should give enough filtering. These are also logged only on state change, so they produce a lot less input to the database than the analogs on a 1 s cycle.
Following my original Node-RED flow code, this would translate to the following batch output function:
msg.payload = [
    {
        measurement: "Process_metrics",
        fields: {
            CT_xx1: msg.payload.xxx,
            CT_xx2: msg.payload.xxx,
            CT_xx3: msg.payload.xxx
        },
        tags: {
            metric: "temperature"
        }
    },
    {
        measurement: "Process_metrics",
        fields: {
            CP_xx1: msg.payload.xxx,
            CP_xx2: msg.payload.xxx,
            CP_xx3: msg.payload.xxx
        },
        tags: {
            metric: "pressure"
        }
    },
    {
        measurement: "Process_metrics",
        fields: {
            CF_xx1: msg.payload.xxx,
            CF_xx2: msg.payload.xxx,
            CF_xx3: msg.payload.xxx
        },
        tags: {
            metric: "flow"
        }
    },
    {
        measurement: "Process_metrics",
        fields: {
            AP_xx1: msg.payload.xxx,
            AP_xx2: msg.payload.xxx,
            AP_xx3: msg.payload.xxx
        },
        tags: {
            metric: "Pumps"
        }
    },
    {
        measurement: "Binary_states",
        fields: {
            Binary1: msg.payload.xxx,
            Binary2: msg.payload.xxx,
            Binary3: msg.payload.xxx
        }
    },
    {
        measurement: "Alarms",
        fields: {
            Alarm1: msg.payload.xxx,
            Alarm2: msg.payload.xxx,
            Alarm3: msg.payload.xxx
        }
    }
];
return msg;
EDIT 2:
Final thoughts after testing my above idea and refining it further.
My second idea did not work as intended. The final step with Grafana variables did not work, because the info needed was in the process data's fields rather than in tags. This made the Grafana side annoying, with regex queries to pull the PLC tag-name info out of fields to link to Grafana variable drop-down lists, thus again running resource-intensive field queries.
I stumbled on a blog post about how to get your mind straight with a TSDB, and the above idea is still too SQL-like an approach to the data. I refined the DB structure some more, and I seem to have found a compromise between coding time at the different steps (PLC -> Node-RED -> InfluxDB -> Grafana) and query load on the database: from 1 GB RAM usage when stressing with writes and queries down to 100-300 MB in a normal usage test.
Currently in testing:
A Python script to crunch the PLC-side tags and descriptions from CSV into a copy-pasteable format for Node-RED. Example of extracting the temperature measurements from the CSV and formatting them for Node-RED:
import pandas as pd

file1 = r'C:\\Users\\....pandastestcsv.csv'  # path truncated in the original post
df1 = pd.read_csv(file1, sep=';')

# Keep only the rows whose POS column contains a CT (temperature) tag.
dfCT = df1[df1['POS'].str.contains('CT', regex=False, na=False)]

def my_functionCT(x, y):
    # Print one Node-RED/InfluxDB payload entry per temperature tag.
    print('{measurement:"temperature",fields:{value:msg.payload.' + x + ',},tags:{CT:"' + y + '",},},')

for x, y in zip(dfCT['ID'], dfCT['POS']):
    my_functionCT(x, y)
The output of this is all the CT temperature measurements from the CSV, e.g.:
{measurement:"temperature",fields:{value:msg.payload.a1,},tags:{CT:"tag description with process position CT_310",},},
This list can be copy-pasted into the Node-RED payload that goes to InfluxDB.
InfluxDB:
database: PLCTEST
Measurements: temperature, pressure, flow, pumps, valves, Alarms, on_off, ...
    tag keys: CT, CP, CF, misc_mes, ...
    tag values: "PLC description of the tag"
    field key: value
    field value: "process measurement value from PLC payload"
This keeps the per-measurement cardinality in check within reason, and queries can be better targeted at relevant data without running through the whole DB. RAM and CPU loads are now minor, and jumping from a 1 h to a 12 h query window in Grafana loads in seconds without lock-ups.
While designing an InfluxDB measurement schema, we need to be very careful in selecting the tags and fields.
Each tag value will create a separate series, and as the number of tag values increases, the memory requirement of the InfluxDB server will increase exponentially.
From the description of the measurement given in the question, I can see that you are keeping high-cardinality values like temperature, pressure, etc. as tag values. These values should be kept as fields instead.
By keeping these values as tags, InfluxDB will index them for faster search, and for each tag value a separate series will be created. As the number of tag values increases, the number of series will also increase, eventually leading to out-of-memory errors.
Quoting from the InfluxDB documentation:
"Tags containing highly variable information like UUIDs, hashes, and random strings lead to a large number of series in the database, also known as high series cardinality. High series cardinality is a primary driver of high memory usage for many database workloads."
Please refer to the InfluxDB schema-design documentation for more details:
https://docs.influxdata.com/influxdb/v1.8/concepts/schema_and_data_layout/
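To make the tags-vs-fields point concrete, here is a minimal sketch with the influxdb 1.x Python client (the server address, database name, and readings are assumptions): the metric type goes in a low-cardinality tag, while the actual readings stay in fields.
from influxdb import InfluxDBClient  # pip install influxdb (the 1.x client)

client = InfluxDBClient(host="localhost", port=8086, database="Station_name")

client.write_points([
    {
        "measurement": "Process_metrics",
        # Low cardinality: only a handful of metric types, so few series.
        "tags": {"metric": "temperature"},
        # High-cardinality readings belong in fields; fields are not indexed
        # and therefore do not create new series.
        "fields": {"CT_xx1": 21.4, "CT_xx2": 22.0, "CT_xx3": 20.8},
    }
])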

tzs for Jawbone Moves

I would like some clarification on tzs for the Jawbone Moves endpoint: https://jawbone.com/up/developer/endpoints/moves. Is this key going to be present on all response items? If not, what types of records will have it versus those that won't? Additionally, the docs indicate it will be an array of arrays with the following format:
"tzs": [
[1384963500, "America/Phoenix"],
[1385055720, "America/Los_Angeles"]
]
However, I am getting responses that look like the following:
"tzs": [[1468410383, -14400]]
Is the second value an offset, I presume in seconds?
The tzs key will appear in responses from the moves endpoint that provide data for a given day's move. It will always be present, but it will only contain more than one entry if the user changes timezones during the given time period (e.g., the user is travelling).
Here's the explanation from the documentation:
Each entry in the list contains a unix timestamp and a timezone. In most instances the timezone entry is a string containing the Olson timezone.
When the timezone entry is just a number, then you are correct: it's the GMT offset in seconds, so -14400 corresponds to US/Eastern.
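If you need to handle both shapes in code, here is a small Python sketch (zoneinfo for Olson names, a fixed UTC offset otherwise):
from datetime import timezone, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

def tz_from_entry(entry):
    """Normalize one tzs entry: [unix_timestamp, Olson name or offset in seconds]."""
    _timestamp, tz = entry
    if isinstance(tz, str):
        return ZoneInfo(tz)                  # e.g. "America/Phoenix"
    return timezone(timedelta(seconds=tz))   # e.g. -14400 -> UTC-04:00

# Works for both response shapes shown above.
print(tz_from_entry([1384963500, "America/Phoenix"]))
print(tz_from_entry([1468410383, -14400]))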

Options for questions in the Watson Conversation API

I need to get the available options for a certain question in the Watson Conversation API.
For example, I have a conversation app, and in some cases I need to give the users a list to select an option from.
So I am searching for a way to get the available reply options for a certain question.
I can't answer the NPM part, but you can get a list of the top 10 possible answers by setting alternate_intents to true. For example:
{
    "context": {
        "conversation_id": "cbbea7b5-6971-4437-99e0-a82927607079",
        "system": {
            "dialog_stack": ["root"],
            "dialog_turn_counter": 1,
            "dialog_request_counter": 1
        }
    },
    "alternate_intents": true,
    "input": {
        "text": "Is it hot outside?"
    }
}
This will return at most the top ten answers; if there is only a limited number of intents, it will show just those.
Part of your JSON response will have something like this:
"intents":[{
"intent":"temperature",
"confidence":0.9822100598134365
},
{
"intent":"conditions",
"confidence":0.017789940186563623
}
This won't get you the output text from the node, though, so you will need to have your answers stored elsewhere to cross-reference.
Also be aware that just because an intent is in the list doesn't mean it's a valid answer to give the end user. The confidence level needs to be taken into account.
The confidence level also does not work like a normal confidence. You need to determine your upper and lower bounds. I detail this briefly here.
"Unlike earlier versions of WEA, the confidence is relative to the number of intents you have. So the quickest way to find the lowest confidence is to send a really ambiguous word."
These are the results I get for determining temperature or conditions.
treehouse = conditions / 0.5940327076534431
goldfish = conditions / 0.5940327076534431
music = conditions / 0.5940327076534431
See a pattern? 🙂 So I will set the low confidence level at 0.6. Next is to determine the higher confidence range. You can do this by mixing intents within the same question text. It may take a few goes to get a reasonable result.
These are results from trying this (C = Conditions, T = Temperature).
hot rain = T/0.7710267712183176, C/0.22897322878168241
windy desert = C/0.8597747113239446, T/0.14022528867605547
ice wind = C/0.5940327076534431, T/0.405967292346557
I purposely left out high-confidence ones. In this case I am going to go with 0.8 as the high confidence level.
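In code, that thresholding might look like the following Python sketch (the 0.6/0.8 bounds are the ones determined above; the intents list is the one from the JSON response):
LOW_CONFIDENCE = 0.6   # below this: too ambiguous to answer
HIGH_CONFIDENCE = 0.8  # above this: answer directly

def pick_action(intents):
    # intents: the "intents" array from the conversation response,
    # already sorted by confidence (highest first).
    if not intents:
        return "fallback"
    top = intents[0]
    if top["confidence"] >= HIGH_CONFIDENCE:
        return top["intent"]               # confident: answer directly
    if top["confidence"] > LOW_CONFIDENCE:
        return "confirm:" + top["intent"]  # ask the user to confirm
    return "fallback"                      # treat as unrecognized

intents = [
    {"intent": "temperature", "confidence": 0.9822100598134365},
    {"intent": "conditions", "confidence": 0.017789940186563623},
]
print(pick_action(intents))  # -> temperature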

How to access NOAA data through GrADS?

I'm trying to get some DAP data from NOAA, but can't figure out how to pass variables to it. I've looked and looked and haven't found a way to just poke around at it with my browser. The data is located at http://nomads.ncep.noaa.gov:9090/dods/ruc/ruc20110725/ruc_f17.info (which may become outdated some time after this post sits around).
I want to access the ugrd10m variable with the variables time, latitude, and longitude. Any ideas what url is needed to do this?
According to their documentation, it sounds like you want to point your browser at a URL like:
http://nomads.ncep.noaa.gov:9090/dods/ruc/ruc20110914/ruc_f17.ascii?ugrd10m[0:1][0:1][0:1]
That will return a table of the ugrd10m values for the first two time/lat/lon points:
ugrd10m, [2][2][2]
[0][0], 9.999E20, 9.999E20
[0][1], 9.999E20, 9.999E20
[1][0], 9.999E20, 9.999E20
[1][1], 9.999E20, 9.999E20
time, [2]
734395.0, 734395.0416666666
lat, [2]
16.281, 16.46570909091
lon, [2]
-139.856603, -139.66417731424
The number of time/lat/lon points is given under the long description of ugrd10m at the dataset info address:
http://nomads.ncep.noaa.gov:9090/dods/ruc/ruc20110914/ruc_f17.info
time: Array of 64 bit Reals [time = 0..18]
means that there are 19 different time values, at indexes 0 to 18. In this case, the complete dataset can be retrieved by setting all the ranges to the max values:
http://nomads.ncep.noaa.gov:9090/dods/ruc/ruc20110914/ruc_f17.ascii?ugrd10m[0:18][0:226][0:427]
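Since the .ascii endpoint returns plain text, the same slice can also be pulled programmatically; a minimal Python sketch (the dated URL will need updating, as noted in the question):
import requests

# Fetch a small slice of ugrd10m (first two time/lat/lon indexes) as ASCII.
url = (
    "http://nomads.ncep.noaa.gov:9090/dods/ruc/ruc20110914/"
    "ruc_f17.ascii?ugrd10m[0:1][0:1][0:1]"
)
resp = requests.get(url)
resp.raise_for_status()
print(resp.text)  # the value table plus the matching time/lat/lon axes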
According to this reference, you can access data with this URL:
http://nomads.ncep.noaa.gov:9090/dods/ruc/ruc20110914/ruc_00z.asc?ugrd10m&time=19&time=227&time=428
However, I can't confirm the data's validity.