How can I stop receiving irrelevant disk_usage metrics while using the CloudWatch agent?

Following is the config.json that I'm using:
{
  "agent": {
    "metrics_collection_interval": 300,
    "run_as_user": "root"
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 300,
        "resources": [
          "/"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 300
      }
    }
  }
}
But with this configuration I am receiving many metrics that I don't need; a sample screenshot is pasted below.
I just need the disk_used_percent metric for device: rootfs and path: /.
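For what it's worth, a minimal sketch of how the disk section might be narrowed further. The drop_device and ignore_file_system_types options are documented CloudWatch agent settings; whether they remove exactly the unwanted series seen here is an assumption:

"metrics_collected": {
  "disk": {
    "measurement": [
      "used_percent"
    ],
    "metrics_collection_interval": 300,
    "resources": [
      "/"
    ],
    "ignore_file_system_types": [
      "sysfs", "devtmpfs", "tmpfs"
    ],
    "drop_device": true
  }
}

Setting "drop_device": true stops the agent from publishing the device name as a dimension, which is what typically multiplies one used_percent measurement into many per-device metrics.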

Related

GetBlock and related methods don't return any info about transactions

Usually, TronGrid returns blocks together with their transactions, but as I found today, it is no longer behaving as expected.
How it works right now:
{
  "blockID": "0000000001b16eb8b97ab73b7dc8f161c5f2f786f0937bfed7886baa33926c84",
  "block_header": {
    "raw_data": {
      "number": 28405432,
      "txTrieRoot": "0000000000000000000000000000000000000000000000000000000000000000",
      "witness_address": "41f16412b9a17ee9408646e2a21e16478f72ed1e95",
      "parentHash": "0000000001b16eb7a6f39a1523f35db8b4089d5a03f591958beafd139e0949d5",
      "version": 24,
      "timestamp": 1666102851000
    },
    "witness_signature": "60c7b8b964f103072b7e0fd33b5df636ef4e06d95bb113184ef3266b691b2cf517091960aba72dcd8d5a1ee40374f2124256ddad445429897b066332964ef8d500"
  }
}
And how it worked before:
{
  "blockID": "0000000001b15f18976aee56ff9490303ec64c2007d6034ca03a7a2caefdab73",
  "block_header": {
    "raw_data": {
      "number": 28401432,
      "txTrieRoot": "167b9b1620d76e9855d426453ea726a709582f4ed711701ee22fe730bae3f8d8",
      "witness_address": "41cd8d8ad1b4a5bd7afe46949421d2b411a3601717",
      "parentHash": "0000000001b15f175dc2a20b8c3b29bbbb50860dc57d54d44eff4b02edf849e6",
      "version": 24,
      "timestamp": 1666089210000
    },
    "witness_signature": "b9239d12b2044b1bdfa631115f3c7b9b1c1fc5d37c482809d2c7846d05ab84d61778910a55501f4702fe653b943b22b9270b33151ec15e35aa500388dac0abda01"
  },
  "transactions": [
    {
      "ret": [
        {
          "contractRet": "SUCCESS"
        }
      ],
      "signature": [
        "56a427e32fc0267a2e469ef85530c3145c7de423c8f20d7e11d85dbff98701bdd599d5a45b58ab4f7fa7f212c2ee3cbb5daa50541a83f67dbff533b7e185331501"
      ],
      "txID": "5fd2335105f68de47b82fe3f8065cb3d1cc8ab437aaee55a5d4e61624113730b",
      "raw_data": {
        ...
      },
      "raw_data_hex": "0a025f0522084cf822e1795ff17f40a7e6a2d5be305a67080112630a2d747970652e676f6f676c65617069732e636f6d2f70726f746f636f6c2e5472616e73666572436f6e747261637412320a1541989cc89d2df684c69bed3563c0cd8817be0a11e1121541bc0777bd8f50e5e148ef59bdce2b895b754c452e1888890a70c7919fd5be30"
    }
  ]
}
Is this a new feature, or is it a bug?
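For context, a minimal sketch of the kind of call involved, assuming the block is fetched through TronGrid's getblockbynum endpoint (the block number is taken from the first response above):

POST https://api.trongrid.io/wallet/getblockbynum
{
  "num": 28405432
}

One detail worth noting: in the first response the txTrieRoot is all zeros, which is what a block containing no transactions looks like, so it may be worth checking whether that particular block is simply empty.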

OpenDistro Kibana Index Policy Stopped Working

I've been using the Index Management Policy below for a little less than a year now with no issues, but sometime a few months back it apparently stopped working: all indices which should fall into the "delete" category are now stuck in the "Evaluating transition conditions..." state. I have been searching for possible changes to the syntax, but have not found any. I am also not aware of any updates having been performed on either the host machine or Kibana/Elastic. What could possibly be the issue?
{
  "policy_id": "delete_14d",
  "description": "Deletes old indices after 14 days",
  "last_updated_time": 1661536875977,
  "schema_version": 1,
  "error_notification": null,
  "default_state": "hot",
  "states": [
    {
      "name": "hot",
      "actions": [
        {
          "read_write": {}
        }
      ],
      "transitions": [
        {
          "state_name": "Delete",
          "conditions": {
            "min_index_age": "14d"
          }
        }
      ]
    },
    {
      "name": "Delete",
      "actions": [
        {
          "delete": {}
        }
      ],
      "transitions": []
    }
  ],
  "ism_template": {
    "index_patterns": [
      "staging*"
    ],
    "priority": 100,
    "last_updated_time": 1632510094716
  }
}
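When indices hang in that state, a first debugging step is to ask ISM why the transition is not firing, and to retry the managed index if it has failed. A sketch using the standard Open Distro ISM endpoints (the index name is illustrative):

GET _opendistro/_ism/explain/staging-2022-08-26
POST _opendistro/_ism/retry/staging-2022-08-26

The explain response includes the state and condition ISM is currently evaluating plus any failure info, which usually narrows down whether the policy, the index age, or the plugin itself is the problem.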

AWS IoT rule sql select statement

I am trying to write a SQL select statement for my AWS IoT rule to extract the values 'gateway_id' and 'rssi' from the following MQTT message:
{
  "end_device_ids": {
    "device_id": "imd2",
    "application_ids": {
      "application_id": "pennal"
    },
    "dev_eui": "004E3A0DF76DC9E9",
    "join_eui": "70B3D57ED003CBE8",
    "dev_addr": "260BA9D0"
  },
  "correlation_ids": [
    "as:up:01G30W0J4D65P6D50QH1DN3ZQP",
    "gs:conn:01G2ZZ7FT9BH6J93WRYS4ATVDM",
    "gs:up:host:01G2ZZ7FTN14103H90QN71Q557",
    "gs:uplink:01G30W0HXWMES1Z7X7F2MCFMPF",
    "ns:uplink:01G30W0HXXJM5PNGJAD0W01GGH",
    "rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01G30W0HXWFR3HNGBZS7XJV15E",
    "rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01G30W0J4D18JZW199EM8WERGR"
  ],
  "received_at": "2022-05-14T08:47:25.837680984Z",
  "uplink_message": {
    "session_key_id": "AYBlRLSz9n83bW3WU3+GfQ==",
    "f_port": 1,
    "f_cnt": 5013,
    "frm_payload": "DiAAAA==",
    "decoded_payload": {
      "rainmm": 0,
      "voltage": 3.616
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "pennal-gw2",
          "eui": "AC1F09FFFE057EC6"
        },
        "time": "2022-05-14T08:47:25.065794944Z",
        "timestamp": 114306297,
        "rssi": -126,
        "channel_rssi": -126,
        "snr": -8.25,
        "uplink_token": "ChgKFgoKcGVubmFsLWd3MhIIrB8J//4FfsYQ+dnANhoMCJ3Z/ZMGEOPmv6sCIKjZvump7gYqCwid2f2TBhCA568f"
      }
    ],
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 11
        }
      },
      "coding_rate": "4/5",
      "frequency": "868100000",
      "timestamp": 114306297,
      "time": "2022-05-14T08:47:25.065794944Z"
    },
    "received_at": "2022-05-14T08:47:25.629041670Z",
    "confirmed": true,
    "consumed_airtime": "0.659456s",
    "version_ids": {
      "brand_id": "heltec",
      "model_id": "cubecell-dev-board-class-a-otaa",
      "hardware_version": "_unknown_hw_version_",
      "firmware_version": "1.0",
      "band_id": "EU_863_870"
    },
    "network_ids": {
      "net_id": "000013",
      "tenant_id": "ttn",
      "cluster_id": "eu1",
      "cluster_address": "eu1.cloud.thethings.network"
    }
  }
}
I have tried following the documentation here: AWS Documentation, but I am struggling with the nested part of the message.
My SQL statement at the moment is:
SELECT received_at as datetime,
  end_device_ids.device_id as device_id,
  uplink_message.decoded_payload.rainmm as rainmm,
  uplink_message.decoded_payload.voltage as voltage,
  uplink_message.settings.data_rate.lora.spreading_factor as sprfact,
  uplink_message.consumed_airtime as time_on_air,
  uplink_message.settings.timestamp as ts,
  uplink_message.rx_metadata as rx,
  (select value gateway_ids from uplink_message.rx_metadata) as gw,
  (select value rssi from uplink_message.rx_metadata) as rssi,
  get((select gateway_id from uplink_message.rx_metadata), 0).gateway_id as gwn
FROM 'thethings/lorawan/matt-pennal-ire/uplink'
which returns
{
  "datetime": "2022-05-15T12:19:11.947844474Z",
  "device_id": "md4",
  "rainmm": 5.842001296924288,
  "voltage": 3.352,
  "sprfact": 8,
  "time_on_air": "0.092672s",
  "ts": 3262497863,
  "rx": [
    {
      "gateway_ids": {
        "gateway_id": "pennal-gw2",
        "eui": "AC1F09FFFE057EC6"
      },
      "time": "2022-05-15T12:19:11.178463935Z",
      "timestamp": 3262497863,
      "rssi": -125,
      "channel_rssi": -125,
      "snr": -7.5,
      "uplink_token": "ChgKFgoKcGVubmFsLWd3MhIIrB8J//4FfsYQx4jXkwwaDAi/34OUBhCCy9XhAiDY6prg+ckHKgsIv9+DlAYQv8mMVQ=="
    }
  ],
  "gw": [
    {
      "gateway_id": "pennal-gw2",
      "eui": "AC1F09FFFE057EC6"
    }
  ],
  "rssi": [
    -125
  ]
}
but I would like it to return
{
  "datetime": "2022-05-15T12:19:11.947844474Z",
  "device_id": "md4",
  "rainmm": 5.842001296924288,
  "voltage": 3.352,
  "sprfact": 8,
  "time_on_air": "0.092672s",
  "ts": 3262497863,
  "gwn": "pennal-gw2",
  "rssi": -125
}
Any help to get the values from the nested array would be greatly appreciated!
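Since rx_metadata is an array, one way to pull fields from a single element is the get() function from the AWS IoT SQL reference. A sketch of the statement trimmed to the requested shape, assuming the first array element is the gateway of interest:

SELECT received_at as datetime,
  end_device_ids.device_id as device_id,
  uplink_message.decoded_payload.rainmm as rainmm,
  uplink_message.decoded_payload.voltage as voltage,
  uplink_message.settings.data_rate.lora.spreading_factor as sprfact,
  uplink_message.consumed_airtime as time_on_air,
  uplink_message.settings.timestamp as ts,
  get(uplink_message.rx_metadata, 0).gateway_ids.gateway_id as gwn,
  get(uplink_message.rx_metadata, 0).rssi as rssi
FROM 'thethings/lorawan/matt-pennal-ire/uplink'

get(array, index) returns a single element, so the nested fields can then be addressed with ordinary dot notation instead of the (select value ...) form, which always returns an array.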

Error when creating a chart via a batch request

I'm trying to create a new chart, following the examples presented in the Google Sheets API documentation. I'm getting the following error:
HttpError 400 when requesting
https://slides.googleapis.com/v1/presentations/PRESENTATION_ID:batchUpdate?alt=json
returned "Invalid JSON payload received. Unknown name "add_chart" at
'requests[0]': Cannot find field."
Has anyone encountered this before?
Other requests work normally (replace text, add text, clone presentation, etc.).
The request below is copied from the example in the Google Sheets API documentation; sourceSheetId is the ID of the sheet where the data for the chart is saved.
{
  "addChart": {
    "chart": {
      "spec": {
        "title": "Model Q1 Sales",
        "basicChart": {
          "chartType": "COLUMN",
          "legendPosition": "BOTTOM_LEGEND",
          "axis": [
            {
              "position": "BOTTOM_AXIS",
              "title": "Model Numbers"
            },
            {
              "position": "LEFT_AXIS",
              "title": "Sales"
            }
          ],
          "domains": [
            {
              "domain": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 0,
                      "endColumnIndex": 1
                    }
                  ]
                }
              }
            }
          ],
          "series": [
            {
              "series": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 1,
                      "endColumnIndex": 2
                    }
                  ]
                }
              },
              "targetAxis": "LEFT_AXIS"
            },
            {
              "series": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 2,
                      "endColumnIndex": 3
                    }
                  ]
                }
              },
              "targetAxis": "LEFT_AXIS"
            },
            {
              "series": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 3,
                      "endColumnIndex": 4
                    }
                  ]
                }
              },
              "targetAxis": "LEFT_AXIS"
            }
          ],
          "headerCount": 1
        }
      },
      "position": {
        "newSheet": True
      }
    }
  }
}
I was expecting the chart to be created and to receive a response with a chartId, but instead the request returns the 400 error shown above.
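One thing that stands out in the error is the host: the request was sent to slides.googleapis.com, but addChart is a Sheets API request, so the Slides batchUpdate endpoint has no add_chart field to map it to. A sketch of the same payload posted to the Sheets endpoint instead (SPREADSHEET_ID is a placeholder), assuming the chart is meant to be created in the spreadsheet that holds the data:

POST https://sheets.googleapis.com/v4/spreadsheets/SPREADSHEET_ID:batchUpdate
{
  "requests": [
    {
      "addChart": {
        ...
      }
    }
  ]
}

(If the goal is a chart on a slide, the Slides API uses a different request, createSheetsChart, which embeds a chart that already exists in a spreadsheet.)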

opendaylight bgp-linkstate not making "loc-rib"

ODL version: Carbon
I'm having a problem getting BGP-LS into the Network Topology. As you can see from the REST output below, I set up "bgp-example" and homed it to an external eBGP linkstate peer. "effective-rib-in", "adj-rib-in", and "adj-rib-out" all populate, but "loc-rib" does not. For some reason it is not inheriting the linkstate afi/safi.
I tried debugs for BGP and Karaf but saw nothing out of the ordinary (that I could see). Any help would be much appreciated.
thanks
Erik
*bgp configuration
http://192.168.3.42:8181/restconf/config/openconfig-network-instance:network-instances/network-instance/global-bgp/protocols/protocol/openconfig-policy-types:BGP/bgp-example
{
  "protocol": [
    {
      "name": "bgp-example",
      "identifier": "openconfig-policy-types:BGP",
      "bgp-openconfig-extensions:bgp": {
        "global": {
          "config": {
            "router-id": "192.168.3.42",
            "as": 65000
          }
        },
        "neighbors": {
          "neighbor": [
            {
              "neighbor-address": "192.168.3.41",
              "config": {
                "peer-type": "EXTERNAL",
                "peer-as": 65111
              },
              "afi-safis": {
                "afi-safi": [
                  {
                    "afi-safi-name": "bgp-openconfig-extensions:LINKSTATE"
                  }
                ]
              }
            }
          ]
        }
      }
    }
  ]
}
*loc-rib empty
http://192.168.3.42:8181/restconf/operational/bgp-rib:bgp-rib/rib/bgp-example/loc-rib
{
  "loc-rib": {
    "tables": [
      {
        "afi": "bgp-types:ipv4-address-family",
        "safi": "bgp-types:unicast-subsequent-address-family",
        "bgp-inet:ipv4-routes": {}
      }
    ]
  }
}
As you can see, linkstate is making it into every RIB except loc-rib:
http://192.168.3.42:8181/restconf/operational/bgp-rib:bgp-rib/rib/bgp-example
{
  "rib": [
    {
      "id": "bgp-example",
      "peer": [
        {
          "peer-id": "bgp://x.x.x.x",
          "supported-tables": [
            {
              "afi": "bgp-types:ipv4-address-family",
              "safi": "bgp-types:unicast-subsequent-address-family"
            },
            {
              "afi": "bgp-linkstate:linkstate-address-family",
              "safi": "bgp-linkstate:linkstate-subsequent-address-family"
            }
          ],
          "effective-rib-in": {
            "tables": [
              {
                "afi": "bgp-linkstate:linkstate-address-family",
                "safi": "bgp-linkstate:linkstate-subsequent-address-family",
                "bgp-linkstate:linkstate-routes": {
                  "linkstate-route": [
                    {
                      "route-key": "AAMAMAIAAAAAAAAFMgEAABoCAAAEAAD+VwIBAAQAAAAAAgMABgEAFQmQAAEJAAUgCv0YAQ==",
                      "identifier": 1330,
                      "advertising-node-descriptors": {
                        "as-number": 65111,
                        "domain-id": 0,
                        "isis-node": {
                          "iso-system-id": "AQAVCZAA"
                        }
                      },
                      "prefix-descriptors": {
                        "ip-reachability-information": "x.x.x.x/32"
                      },
                      "attributes": {
                        "origin": {
                          "value": "igp"
                        },
                        "ipv4-next-hop": {
                          "global": "x.x.x.x"
                        },
                        "as-path": {
                          "segments": [
                            {
                              "as-sequence": [
                                65111
                              ]
                            }
                          ]
                        }
                      },
                      "protocol-id": "isis-level2"
                    }
                  }
(rest of output truncated for brevity/readability)
OK, figured this out: it turns out I had not enabled the LINKSTATE afi/safi in the global config for ODL BGP. I had to DELETE my existing global config, then POST it again and re-add neighbors, peers, etc. Now I have the linkstate DB in loc-rib AND it has made it into the network topology, but I have no idea how to view this topology via DLUX.
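For anyone hitting the same thing, a sketch of what the global section looks like with LINKSTATE enabled, mirroring the neighbor-level afi-safi block in the configuration above (the exact layout is assumed from the openconfig model used in this config):

"global": {
  "config": {
    "router-id": "192.168.3.42",
    "as": 65000
  },
  "afi-safis": {
    "afi-safi": [
      {
        "afi-safi-name": "bgp-openconfig-extensions:LINKSTATE"
      }
    ]
  }
}

Without this, the peer can still negotiate linkstate (hence adj-rib-in and effective-rib-in populating), but loc-rib only instantiates tables for the afi/safis enabled globally.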