OpenFlow switch not pushing MPLS tag (OpenDaylight)

I have the following linear SDN architecture with an ODL controller:
Host1 -- ZodiacFX1 -- ZodiacFX2 -- Host2
I am using two laptops as hosts and two ZodiacFX OpenFlow switches.
I want ZodiacFX1 to push an MPLS tag onto all IP packets received from Host1, and ZodiacFX2 to pop the MPLS tag and forward the IP packets to Host2.
I have added a flow for the MPLS push on ZodiacFX1, and I can see the flow active on ZodiacFX1 and in the operational datastore of ODL. But if I ping h1 -> h2, there is no push.
The flow is this:
NOTE: Host1 is connected to port 1 of ZodiacFX1, and port 2 of ZodiacFX1 is connected to port 1 of ZodiacFX2.
GET http://192.168.21.147:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:123917682137538/table/2
{
  "flow-node-inventory:table": [
    {
      "id": 2,
      "opendaylight-flow-table-statistics:flow-table-statistics": {
        "active-flows": 1,
        "packets-looked-up": 0,
        "packets-matched": 0
      },
      "flow": [
        {
          "id": "125",
          "idle-timeout": 0,
          "cookie": 401,
          "flags": "",
          "hard-timeout": 0,
          "instructions": {
            "instruction": [
              {
                "order": 0,
                "apply-actions": {
                  "action": [
                    {
                      "order": 2,
                      "output-action": {
                        "output-node-connector": "2",
                        "max-length": 0
                      }
                    },
                    {
                      "order": 1,
                      "set-field": {
                        "protocol-match-fields": {
                          "mpls-label": 27
                        }
                      }
                    },
                    {
                      "order": 0,
                      "push-mpls-action": {
                        "ethernet-type": 34887
                      }
                    }
                  ]
                }
              }
            ]
          },
          "cookie_mask": 0,
          "opendaylight-flow-statistics:flow-statistics": {
            "duration": {
              "nanosecond": 0,
              "second": 7
            },
            "byte-count": 0,
            "packet-count": 0
          },
          "priority": 0,
          "table_id": 2,
          "match": {
            "in-port": "1",
            "ethernet-match": {
              "ethernet-type": {
                "type": 2048
              }
            }
          }
        },
        {
          "id": "124",
          "idle-timeout": 0,
          "cookie": 401,
          "flags": "",
          "hard-timeout": 0,
          "instructions": {
            "instruction": [
              {
                "order": 0,
                "apply-actions": {
                  "action": [
                    {
                      "order": 2,
                      "output-action": {
                        "output-node-connector": "2",
                        "max-length": 0
                      }
                    },
                    {
                      "order": 1,
                      "set-field": {
                        "protocol-match-fields": {
                          "mpls-label": 27
                        }
                      }
                    },
                    {
                      "order": 0,
                      "push-mpls-action": {
                        "ethernet-type": 34887
                      }
                    }
                  ]
                }
              }
            ]
          },
          "cookie_mask": 0,
          "opendaylight-flow-statistics:flow-statistics": {
            "duration": {
              "nanosecond": 0,
              "second": 180
            },
            "byte-count": 0,
            "packet-count": 0
          },
          "priority": 8,
          "table_id": 2,
          "match": {
            "in-port": "1",
            "ethernet-match": {
              "ethernet-type": {
                "type": 2048
              }
            }
          }
        }
      ]
    }
  ]
}
I can also see the flow in the Zodiac console interface:
Flow 6
Match:
In Port: 1
ETH Type: IPv4
Attributes:
Table ID: 2 Cookie:0x191
Priority: 8 Duration: 247 secs
Hard Timeout: 0 secs Idle Timeout: 0 secs
Byte Count: 0 Packet Count: 0
Last Match: 00:04:07
Instructions:
Apply Actions:
Push MPLS tag
Set MPLS Label: 27
Output Port: 2
What can be the problem? I think the main problem is that the Zodiac is matching the flow below instead. I have also tried my flow with priority 0, and there is still no MPLS push.
Flow 5
Match:
In Port: 1
Attributes:
Table ID: 0 Cookie:0x2b00000000000008
Priority: 2 Duration: 2845 secs
Hard Timeout: 0 secs Idle Timeout: 0 secs
Byte Count: 576265 Packet Count: 5246
Last Match: 00:00:00
Instructions:
Apply Actions:
Output Port: 3
Output Port: 2
Output: CONTROLLER

It looks like you have some other application (maybe l2switch) pushing flows to your switches as well. Maybe install just the openflowplugin feature and try again.
Also, try putting your MPLS flows in table 0 instead of table 2.
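For illustration, a minimal sketch of that last suggestion: the same flow from the question, re-targeted at table 0 with a priority above the l2switch entry's priority 2, written to the config datastore. The controller address and node ID are taken from the question; the flow ID 126 and priority 100 are arbitrary values chosen for this sketch:
PUT http://192.168.21.147:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:123917682137538/table/0/flow/126
{
  "flow": [
    {
      "id": "126",
      "table_id": 0,
      "priority": 100,
      "match": {
        "in-port": "1",
        "ethernet-match": {
          "ethernet-type": { "type": 2048 }
        }
      },
      "instructions": {
        "instruction": [
          {
            "order": 0,
            "apply-actions": {
              "action": [
                { "order": 0, "push-mpls-action": { "ethernet-type": 34887 } },
                { "order": 1, "set-field": { "protocol-match-fields": { "mpls-label": 27 } } },
                { "order": 2, "output-action": { "output-node-connector": "2", "max-length": 0 } }
              ]
            }
          }
        ]
      }
    }
  ]
}
With the entry in table 0 at a priority higher than 2, packets arriving on port 1 should match the MPLS push before the flood flow shown above.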

Related

Prebid - Index video outstream configuration with multiple playerSize in bidders

I would like to create a video outstream adUnit with multiple playerSizes for Index (documentation there).
At first I thought to put the playerSize at the adUnit level, but because I want to define multiple playerSizes, I decided to move it to the bidder level, in params > video > playerSize. Nonetheless, it does not work, whereas the documentation (link above) says:
"If you are using Index's outstream player and have placed the video object at the bidder level, you must include the Index required parameters at the bidder level."
Here is my prebid configuration:
```javascript
{
  "code": slotCode,
  "sizes": [[1,1],[300,250],[160,600],[300,600],[640,360]],
  "bids": [
    {
      "bidder": "criteo",
      "params": {
        "networkId": networkId,
        "video": {
          "playerSize": [640, 480],
          "skip": 0,
          "playbackmethod": 1,
          "placement": 1,
          "mimes": ["video/mp4"],
          "maxduration": 30,
          "api": [1, 2],
          "protocols": [2, 3]
        }
      },
      "userId": {...},
      "userIdAsEids": [...],
      "crumbs": {...}
    },
    {
      "bidder": "index",
      "params": {
        "siteId": siteId,
        "size": [640, 360],
        "video": {
          "playerSize": [640, 480],
          "h": 640,
          "w": 360,
          "protocols": [2, 3, 5, 6],
          "minduration": 5,
          "maxduration": 30,
          "mimes": ["video/mp4", "application/javascript"],
          "playerSize": [640, 360]
        }
      },
      "userId": {...},
      "userIdAsEids": [...],
      "crumbs": {...}
    }
  ],
  "mediaTypes": {
    "video": {
      "context": "outstream"
    }
  },
  "pubstack": {...}
}
```
If I use this configuration, I get this error:
ERROR: IX Bid Adapter: bid size is not included in ad unit sizes or player size.
even though my playerSize for Index ([640, 360]) is in the adUnit sizes.
Is it possible for an adUnit to have multiple playerSizes?
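One avenue worth testing (a sketch, not a confirmed fix): the IX error mentions "ad unit sizes or player size", and Prebid also accepts a playerSize on the mediaTypes.video object itself, so declaring the sizes there may satisfy the adapter's validation. Whether more than one entry is accepted here is exactly the open question:
```javascript
"mediaTypes": {
  "video": {
    "context": "outstream",
    // assumption: an array of [w, h] pairs; Prebid's docs commonly show a single pair
    "playerSize": [[640, 480], [640, 360]]
  }
}
```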

AWS IoT rule SQL SELECT statement

I am trying to write a SQL SELECT statement for my AWS IoT rule to extract the values 'gateway_id' and 'rssi' from the following MQTT message:
{
  "end_device_ids": {
    "device_id": "imd2",
    "application_ids": {
      "application_id": "pennal"
    },
    "dev_eui": "004E3A0DF76DC9E9",
    "join_eui": "70B3D57ED003CBE8",
    "dev_addr": "260BA9D0"
  },
  "correlation_ids": [
    "as:up:01G30W0J4D65P6D50QH1DN3ZQP",
    "gs:conn:01G2ZZ7FT9BH6J93WRYS4ATVDM",
    "gs:up:host:01G2ZZ7FTN14103H90QN71Q557",
    "gs:uplink:01G30W0HXWMES1Z7X7F2MCFMPF",
    "ns:uplink:01G30W0HXXJM5PNGJAD0W01GGH",
    "rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01G30W0HXWFR3HNGBZS7XJV15E",
    "rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01G30W0J4D18JZW199EM8WERGR"
  ],
  "received_at": "2022-05-14T08:47:25.837680984Z",
  "uplink_message": {
    "session_key_id": "AYBlRLSz9n83bW3WU3+GfQ==",
    "f_port": 1,
    "f_cnt": 5013,
    "frm_payload": "DiAAAA==",
    "decoded_payload": {
      "rainmm": 0,
      "voltage": 3.616
    },
    "rx_metadata": [
      {
        "gateway_ids": {
          "gateway_id": "pennal-gw2",
          "eui": "AC1F09FFFE057EC6"
        },
        "time": "2022-05-14T08:47:25.065794944Z",
        "timestamp": 114306297,
        "rssi": -126,
        "channel_rssi": -126,
        "snr": -8.25,
        "uplink_token": "ChgKFgoKcGVubmFsLWd3MhIIrB8J//4FfsYQ+dnANhoMCJ3Z/ZMGEOPmv6sCIKjZvump7gYqCwid2f2TBhCA568f"
      }
    ],
    "settings": {
      "data_rate": {
        "lora": {
          "bandwidth": 125000,
          "spreading_factor": 11
        }
      },
      "coding_rate": "4/5",
      "frequency": "868100000",
      "timestamp": 114306297,
      "time": "2022-05-14T08:47:25.065794944Z"
    },
    "received_at": "2022-05-14T08:47:25.629041670Z",
    "confirmed": true,
    "consumed_airtime": "0.659456s",
    "version_ids": {
      "brand_id": "heltec",
      "model_id": "cubecell-dev-board-class-a-otaa",
      "hardware_version": "_unknown_hw_version_",
      "firmware_version": "1.0",
      "band_id": "EU_863_870"
    },
    "network_ids": {
      "net_id": "000013",
      "tenant_id": "ttn",
      "cluster_id": "eu1",
      "cluster_address": "eu1.cloud.thethings.network"
    }
  }
}
I have tried following the documentation here: AWS Documentation, but I am struggling with the nested part of the message.
My SQL statement at the moment is:
SELECT received_at AS datetime,
       end_device_ids.device_id AS device_id,
       uplink_message.decoded_payload.rainmm AS rainmm,
       uplink_message.decoded_payload.voltage AS voltage,
       uplink_message.settings.data_rate.lora.spreading_factor AS sprfact,
       uplink_message.consumed_airtime AS time_on_air,
       uplink_message.settings.timestamp AS ts,
       uplink_message.rx_metadata AS rx,
       (SELECT VALUE gateway_ids FROM uplink_message.rx_metadata) AS gw,
       (SELECT VALUE rssi FROM uplink_message.rx_metadata) AS rssi,
       get((SELECT gateway_id FROM uplink_message.rx_metadata), 0).gateway_id AS gwn
FROM 'thethings/lorawan/matt-pennal-ire/uplink'
which returns
{
  "datetime": "2022-05-15T12:19:11.947844474Z",
  "device_id": "md4",
  "rainmm": 5.842001296924288,
  "voltage": 3.352,
  "sprfact": 8,
  "time_on_air": "0.092672s",
  "ts": 3262497863,
  "rx": [
    {
      "gateway_ids": {
        "gateway_id": "pennal-gw2",
        "eui": "AC1F09FFFE057EC6"
      },
      "time": "2022-05-15T12:19:11.178463935Z",
      "timestamp": 3262497863,
      "rssi": -125,
      "channel_rssi": -125,
      "snr": -7.5,
      "uplink_token": "ChgKFgoKcGVubmFsLWd3MhIIrB8J//4FfsYQx4jXkwwaDAi/34OUBhCCy9XhAiDY6prg+ckHKgsIv9+DlAYQv8mMVQ=="
    }
  ],
  "gw": [
    {
      "gateway_id": "pennal-gw2",
      "eui": "AC1F09FFFE057EC6"
    }
  ],
  "rssi": [
    -125
  ]
}
but I would like it to return:
{
  "datetime": "2022-05-15T12:19:11.947844474Z",
  "device_id": "md4",
  "rainmm": 5.842001296924288,
  "voltage": 3.352,
  "sprfact": 8,
  "time_on_air": "0.092672s",
  "ts": 3262497863,
  "gwn": "pennal-gw2",
  "rssi": -126
}
Any help to get the values from the nested array would be greatly appreciated!
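For what it's worth, a sketch of one way to pull scalars out of the first rx_metadata element: AWS IoT SQL's get() function can index into an array directly, and the nested fields can then be dereferenced from the result. Untested, built from the statement above:
SELECT received_at AS datetime,
       end_device_ids.device_id AS device_id,
       uplink_message.decoded_payload.rainmm AS rainmm,
       uplink_message.decoded_payload.voltage AS voltage,
       uplink_message.settings.data_rate.lora.spreading_factor AS sprfact,
       uplink_message.consumed_airtime AS time_on_air,
       uplink_message.settings.timestamp AS ts,
       get(uplink_message.rx_metadata, 0).gateway_ids.gateway_id AS gwn,
       get(uplink_message.rx_metadata, 0).rssi AS rssi
FROM 'thethings/lorawan/matt-pennal-ire/uplink'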

How can I stop receiving irrelevant disk_usage metrics while using the CloudWatch Agent?

Following is the config.json that I'm using:
{
  "agent": {
    "metrics_collection_interval": 300,
    "run_as_user": "root"
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 300,
        "resources": [
          "/"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 300
      }
    }
  }
}
But using this configuration I am receiving many metrics that I don't need; a sample screenshot is below.
I just need the disk_used_percent metric for device: rootfs and path: /.
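A possible tweak, sketched under the assumption that the unwanted series come from the extra per-filesystem dimensions (device, fstype) that the disk plugin attaches: the agent's disk section supports a drop_device flag that stops Device from being reported as a dimension, while resources already restricts collection to the root mount:
"metrics_collected": {
  "disk": {
    "measurement": ["used_percent"],
    "metrics_collection_interval": 300,
    "resources": ["/"],
    "drop_device": true
  }
}
Note that series already published under the old dimension sets will remain visible in the console for a while; the change only affects new datapoints.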

Error when creating a chart via a batch request

I'm trying to create a new chart, following the examples presented in the Google Sheets API documentation. I'm getting the following error:
HttpError 400 when requesting
https://slides.googleapis.com/v1/presentations/PRESENTATION_ID:batchUpdate?alt=json
returned "Invalid JSON payload received. Unknown name "add_chart" at
'requests[0]': Cannot find field."
Has anyone encountered this before?
Other requests work normally (replace text, add text, clone presentation, etc.).
This request is copied from the example in the Google Sheets API documentation.
sourceSheetId is the ID of the sheet where the data for the chart is saved.
{
  "addChart": {
    "chart": {
      "spec": {
        "title": "Model Q1 Sales",
        "basicChart": {
          "chartType": "COLUMN",
          "legendPosition": "BOTTOM_LEGEND",
          "axis": [
            {
              "position": "BOTTOM_AXIS",
              "title": "Model Numbers"
            },
            {
              "position": "LEFT_AXIS",
              "title": "Sales"
            }
          ],
          "domains": [
            {
              "domain": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 0,
                      "endColumnIndex": 1
                    }
                  ]
                }
              }
            }
          ],
          "series": [
            {
              "series": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 1,
                      "endColumnIndex": 2
                    }
                  ]
                }
              },
              "targetAxis": "LEFT_AXIS"
            },
            {
              "series": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 2,
                      "endColumnIndex": 3
                    }
                  ]
                }
              },
              "targetAxis": "LEFT_AXIS"
            },
            {
              "series": {
                "sourceRange": {
                  "sources": [
                    {
                      "sheetId": sourceSheetId,
                      "startRowIndex": 0,
                      "endRowIndex": 7,
                      "startColumnIndex": 3,
                      "endColumnIndex": 4
                    }
                  ]
                }
              },
              "targetAxis": "LEFT_AXIS"
            }
          ],
          "headerCount": 1
        }
      },
      "position": {
        "newSheet": True
      }
    }
  }
}
I was expecting the chart to be created and to receive a response with a chartId; however, the request returns a 400 status:
HttpError 400 when requesting
https://slides.googleapis.com/v1/presentations/PRESENTATION_ID:batchUpdate?alt=json
returned "Invalid JSON payload received. Unknown name "add_chart" at
'requests[0]': Cannot find field."
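One detail that may explain this (an observation from the error text, not a verified fix): the failing URL points at the Slides API (slides.googleapis.com/v1/presentations/...), but addChart is a Sheets batchUpdate request, so the Slides endpoint has no add_chart field to bind the payload to. A sketch of the same request body aimed at the Sheets endpoint instead, where SPREADSHEET_ID is a placeholder for the spreadsheet that holds the data:
POST https://sheets.googleapis.com/v4/spreadsheets/SPREADSHEET_ID:batchUpdate
{
  "requests": [
    {
      "addChart": { ... the addChart payload from above ... }
    }
  ]
}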

Rally Lookback API doesn't retrieve records newer than 1 week

I'm running some queries with the Rally Lookback API, and it seems that revisions newer than one week are not being retrieved:
λ date
Wed, Nov 28, 2018 2:26:45 PM
using the query below:
{
  "ObjectID": 251038028040,
  "__At": "current"
}
results:
{
  "_rallyAPIMajor": "2",
  "_rallyAPIMinor": "0",
  "Errors": [],
  "Warnings": [
    "Max page size limited to 100 when fields=true"
  ],
  "GeneratedQuery": {
    "find": {
      "ObjectID": 251038028040,
      "$and": [
        {
          "_ValidFrom": {
            "$lte": "2018-11-21T14:44:34.694Z"
          },
          "_ValidTo": {
            "$gt": "2018-11-21T14:44:34.694Z"
          }
        }
      ],
      "_ValidFrom": {
        "$lte": "2018-11-21T14:44:34.694Z"
      }
    },
    "limit": 10,
    "skip": 0,
    "fields": true
  },
  "TotalResultCount": 1,
  "HasMore": false,
  "StartIndex": 0,
  "PageSize": 10,
  "ETLDate": "2018-11-21T14:44:34.694Z",
  "Results": [
    {
      "_id": "5bfe7e3c3f1f4460feaeaf11",
      "_SnapshotNumber": 30,
      "_ValidFrom": "2018-11-21T12:22:08.961Z",
      "_ValidTo": "9999-01-01T00:00:00.000Z",
      "ObjectID": 251038028040,
      "_TypeHierarchy": [
        -51001,
        -51002,
        -51003,
        -51004,
        -51005,
        -51038,
        46772408020
      ],
      "_Revision": 268342830516,
      "_RevisionDate": "2018-11-21T12:22:08.961Z",
      "_RevisionNumber": 53
    }
  ],
  "ThreadStats": {
    "cpuTime": "15.463705",
    "waitTime": "0",
    "waitCount": "0",
    "blockedTime": "0",
    "blockedCount": "0"
  },
  "Timings": {
    "preProcess": 0,
    "findEtlDate": 88,
    "allowedValuesDisambiguation": 1,
    "mongoQuery": 1,
    "authorization": 3,
    "suppressNonRequested": 0,
    "compressSnapshots": 0,
    "allowedValuesHydration": 0,
    "TOTAL": 93
  }
}
Bear in mind that this artifact has, as of now, 79 revisions, with the latest revision dated 11/21/2018 02:41 PM CST per the Revisions tab in Rally Central.
One other thing: if I run the query a couple of minutes later, the ETL date seems to update, as if some sort of indexing is being run:
{
  "_rallyAPIMajor": "2",
  "_rallyAPIMinor": "0",
  "Errors": [],
  "Warnings": [
    "Max page size limited to 100 when fields=true"
  ],
  "GeneratedQuery": {
    "find": {
      "ObjectID": 251038028040,
      "$and": [
        {
          "_ValidFrom": {
            "$lte": "2018-11-21T14:45:50.565Z"
          },
          "_ValidTo": {
            "$gt": "2018-11-21T14:45:50.565Z"
          }
        }
      ],
      "_ValidFrom": {
        "$lte": "2018-11-21T14:45:50.565Z"
      }
    },
    "limit": 10,
    ... rest of the response omitted.
Is there any reason why the Lookback API wouldn't be processing current data, rather than lagging a week behind the newest records?

It appears that your workspace's data is currently being "re-built". The _ETLDate is the date of the most-current revision in the LBAPI database and should eventually catch up to the current revision's date.