OpenDaylight bgp-linkstate not making it into "loc-rib" - SDN

ODL version: Carbon
I'm having a problem getting BGP-LS into the Network Topology. As you can see from the REST output below, I set up "bgp-example" and homed it to an external eBGP link-state peer. "effective-rib-in", "adj-rib-in", and "adj-rib-out" all populate, but "loc-rib" does not - for some reason it is not inheriting the linkstate afi/safi.
I enabled debug logging for BGP and Karaf but saw nothing out of the ordinary. Any help would be much appreciated.
thanks
Erik
*bgp configuration
http://192.168.3.42:8181/restconf/config/openconfig-network-instance:network-instances/network-instance/global-bgp/protocols/protocol/openconfig-policy-types:BGP/bgp-example
{
"protocol": [
{
"name": "bgp-example",
"identifier": "openconfig-policy-types:BGP",
"bgp-openconfig-extensions:bgp": {
"global": {
"config": {
"router-id": "192.168.3.42",
"as": 65000
}
},
"neighbors": {
"neighbor": [
{
"neighbor-address": "192.168.3.41",
"config": {
"peer-type": "EXTERNAL",
"peer-as": 65111
},
"afi-safis": {
"afi-safi": [
{
"afi-safi-name": "bgp-openconfig-extensions:LINKSTATE"
}
]
}
}
]
}
}
}
]
}
*loc-rib empty
http://192.168.3.42:8181/restconf/operational/bgp-rib:bgp-rib/rib/bgp-example/loc-rib
{
"loc-rib": {
"tables": [
{
"afi": "bgp-types:ipv4-address-family",
"safi": "bgp-types:unicast-subsequent-address-family",
"bgp-inet:ipv4-routes": {}
}
]
}
}
As you can see, linkstate is making it into every RIB except loc-rib:
http://192.168.3.42:8181/restconf/operational/bgp-rib:bgp-rib/rib/bgp-example
{
"rib": [
{
"id": "bgp-example",
"peer": [
{
"peer-id": "bgp://x.x.x.x",
"supported-tables": [
{
"afi": "bgp-types:ipv4-address-family",
"safi": "bgp-types:unicast-subsequent-address-family"
},
{
"afi": "bgp-linkstate:linkstate-address-family",
"safi": "bgp-linkstate:linkstate-subsequent-address-family"
}
],
"effective-rib-in": {
"tables": [
{
"afi": "bgp-linkstate:linkstate-address-family",
"safi": "bgp-linkstate:linkstate-subsequent-address-family",
"bgp-linkstate:linkstate-routes": {
"linkstate-route": [
{
"route-key": "AAMAMAIAAAAAAAAFMgEAABoCAAAEAAD+VwIBAAQAAAAAAgMABgEAFQmQAAEJAAUgCv0YAQ==",
"identifier": 1330,
"advertising-node-descriptors": {
"as-number": 65111,
"domain-id": 0,
"isis-node": {
"iso-system-id": "AQAVCZAA"
}
},
"prefix-descriptors": {
"ip-reachability-information": "x.x.x.x/32"
},
"attributes": {
"origin": {
"value": "igp"
},
"ipv4-next-hop": {
"global": "x.x.x.x"
},
"as-path": {
"segments": [
{
"as-sequence": [
65111
]
}
]
}
},
"protocol-id": "isis-level2"
}
}
rest of output truncated for brevity/readability

OK, I figured this out: it turns out I had not enabled the LINKSTATE afi/safi in the global config for ODL BGP. I had to DELETE my existing global config, POST a new one, and then re-add the neighbors/peers. Now the linkstate DB shows up in loc-rib AND it has made it into the network topology - but I have no idea how to view this topology via DLUX.
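For anyone hitting the same thing, here is a minimal sketch of the global section that worked for me, with the LINKSTATE afi/safi declared alongside IPv4 unicast (the afi-safi-name values are the ones the bgp-openconfig-extensions model exposes; adjust router-id/AS to your own setup):
"global": {
  "config": {
    "router-id": "192.168.3.42",
    "as": 65000
  },
  "afi-safis": {
    "afi-safi": [
      {
        "afi-safi-name": "openconfig-bgp-types:IPV4-UNICAST"
      },
      {
        "afi-safi-name": "bgp-openconfig-extensions:LINKSTATE"
      }
    ]
  }
}
As noted above, I had to DELETE the existing protocol instance and POST it again with this global section before loc-rib picked up the linkstate table.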

Related

"INVALID_CURSOR_ARGUMENTS" from Github graphql API

I am using the following query:
query myOrgRepos {
organization(login: "COMPANY_NAME") {
repositories(first: 100) {
edges {
node {
name
defaultBranchRef {
target {
... on Commit {
history(after: "2021-01-01T23:59:00Z", before: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
edges {
node {
oid
}
}
}
}
}
}
}
}
}
}
}
But even with accurate names for the organization and emails, I am persistently getting the following error for every repo:
{
"type": "INVALID_CURSOR_ARGUMENTS",
"path": [
"organization",
"repositories",
"edges",
20,
"node",
"defaultBranchRef",
"target",
"history"
],
"locations": [
{
"line": 10,
"column": 29
}
],
"message": "`2021-01-01T23:59:00Z` does not appear to be a valid cursor."
},
If I remove the after field, it works just fine. However, I kind of need it. According to all the docs that I have read, both after and before take the same timestamp. I can't tell where I am going wrong here.
I have tried:
narrowing the gap between before and after
returning only a single repository
removing after (works fine without it)
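For what it's worth, in GitHub's GraphQL schema the history connection's after/before arguments are pagination cursors (the opaque strings returned in pageInfo), while since/until are the arguments that take GitTimestamps. A hedged sketch of the same query with that substitution (COMPANY_NAME and USER_EMAIL are the same placeholders as above):
query myOrgRepos {
  organization(login: "COMPANY_NAME") {
    repositories(first: 100) {
      edges {
        node {
          name
          defaultBranchRef {
            target {
              ... on Commit {
                # since/until take GitTimestamps; after/before expect cursors from pageInfo
                history(since: "2021-01-01T23:59:00Z", until: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
                  edges {
                    node {
                      oid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}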

GetBlock and all related methods don't return any info about transactions

Usually, TronGrid returns blocks with their transactions, but as I found today, it is not behaving as expected.
How it works right now:
{
"blockID": "0000000001b16eb8b97ab73b7dc8f161c5f2f786f0937bfed7886baa33926c84",
"block_header": {
"raw_data": {
"number": 28405432,
"txTrieRoot": "0000000000000000000000000000000000000000000000000000000000000000",
"witness_address": "41f16412b9a17ee9408646e2a21e16478f72ed1e95",
"parentHash": "0000000001b16eb7a6f39a1523f35db8b4089d5a03f591958beafd139e0949d5",
"version": 24,
"timestamp": 1666102851000
},
"witness_signature": "60c7b8b964f103072b7e0fd33b5df636ef4e06d95bb113184ef3266b691b2cf517091960aba72dcd8d5a1ee40374f2124256ddad445429897b066332964ef8d500"
}
}
And how it worked before:
{
"blockID": "0000000001b15f18976aee56ff9490303ec64c2007d6034ca03a7a2caefdab73",
"block_header": {
"raw_data": {
"number": 28401432,
"txTrieRoot": "167b9b1620d76e9855d426453ea726a709582f4ed711701ee22fe730bae3f8d8",
"witness_address": "41cd8d8ad1b4a5bd7afe46949421d2b411a3601717",
"parentHash": "0000000001b15f175dc2a20b8c3b29bbbb50860dc57d54d44eff4b02edf849e6",
"version": 24,
"timestamp": 1666089210000
},
"witness_signature": "b9239d12b2044b1bdfa631115f3c7b9b1c1fc5d37c482809d2c7846d05ab84d61778910a55501f4702fe653b943b22b9270b33151ec15e35aa500388dac0abda01"
},
"transactions": [
{
"ret": [
{
"contractRet": "SUCCESS"
}
],
"signature": [
"56a427e32fc0267a2e469ef85530c3145c7de423c8f20d7e11d85dbff98701bdd599d5a45b58ab4f7fa7f212c2ee3cbb5daa50541a83f67dbff533b7e185331501"
],
"txID": "5fd2335105f68de47b82fe3f8065cb3d1cc8ab437aaee55a5d4e61624113730b",
"raw_data": {
...
},
"raw_data_hex": "0a025f0522084cf822e1795ff17f40a7e6a2d5be305a67080112630a2d747970652e676f6f676c65617069732e636f6d2f70726f746f636f6c2e5472616e73666572436f6e747261637412320a1541989cc89d2df684c69bed3563c0cd8817be0a11e1121541bc0777bd8f50e5e148ef59bdce2b895b754c452e1888890a70c7919fd5be30"
}
]
}
Is this a new feature, or is it a bug?
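For context, assuming these blocks come from the full-node HTTP API that TronGrid proxies, the request would be something like the sketch below. Note that the block returned without a transactions array also has an all-zero txTrieRoot, which is typically what an empty block looks like, so it may simply contain no transactions rather than indicating an API change:
POST https://api.trongrid.io/wallet/getblockbynum
{
  "num": 28405432
}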

How to issue a ticket in Amadeus after a flight-order request?

{
"data":{
"type":"flight-order",
"id":"eJzTd9cPijL1Cg8FAAuUAn0%3D",
"associatedRecords":[
{
"reference":"RZ5JWU",
"creationDate":"2022-01-13T05:40:00.000",
"originSystemCode":"GDS",
"flightOfferId":"1"
}
],
"flightOffers":[
{
"type":"flight-offer",
"id":"1",
"source":"GDS",
"nonHomogeneous":false,
"lastTicketingDate":"2022-03-31",
"itineraries":[
{
"segments":[
{
"departure":{
"iataCode":"ISB",
"at":"2022-03-30T01:40:00"
},
"arrival":{
"iataCode":"DXB",
"terminal":"1",
"at":"2022-03-31T03:55:00"
},
"carrierCode":"PK",
"number":"233",
"aircraft":{
"code":"320"
},
"operating":{
},
"id":"1",
"numberOfStops":0,
"co2Emissions":[
{
"weight":141,
"weightUnit":"KG",
"cabin":"ECONOMY"
}
]
}
]
}
],
"price":{
"currency":"PKR",
"total":"25235.00",
"base":"15190.00",
"fees":[
{
"amount":"0.00",
"type":"TICKETING"
},
{
"amount":"0.00",
"type":"SUPPLIER"
},
{
"amount":"0.00",
"type":"FORM_OF_PAYMENT"
}
],
"grandTotal":"25234.00",
"billingCurrency":"PKR"
},
"pricingOptions":{
"fareType":[
"PUBLISHED"
],
"includedCheckedBagsOnly":true
},
"validatingAirlineCodes":[
"PK"
],
"travelerPricings":[
{
"travelerId":"1",
"fareOption":"STANDARD",
"travelerType":"ADULT",
"price":{
"currency":"PKR",
"total":"25234.00",
"base":"15190.00",
"taxes":[
{
"amount":"5000.00",
"code":"RG"
},
{
"amount":"2000.00",
"code":"SP"
},
{
"amount":"2800.00",
"code":"YD"
},
{
"amount":"244.00",
"code":"ZR"
}
],
"refundableTaxes":"10044.00"
},
"fareDetailsBySegment":[
{
"segmentId":"1",
"cabin":"ECONOMY",
"fareBasis":"VLOWPK",
"class":"V",
"includedCheckedBags":{
"weight":30,
"weightUnit":"KG"
}
}
]
}
]
}
],
"travelers":[
{
"id":"1",
"dateOfBirth":"2003-01-03",
"gender":"FEMALE",
"name":{
"firstName":"Fakhar",
"lastName":"Khan"
},
"documents":[
{
"number":"AG324234234",
"issuanceDate":"2015-01-17",
"expiryDate":"2025-01-17",
"issuanceCountry":"PK",
"issuanceLocation":"Pakistan",
"nationality":"PK",
"documentType":"PASSPORT",
"holder":true
}
],
"contact":{
"purpose":"STANDARD",
"phones":[
{
"deviceType":"MOBILE",
"countryCallingCode":"92",
"number":"3452345678"
}
],
"emailAddress":"hamidafridi.droidor#gmail.com"
}
}
],
"remarks":{
"general":[
{
"subType":"GENERAL_MISCELLANEOUS",
"text":"ONLINE BOOKING FROM INCREIBLE VIAJES"
}
]
},
"ticketingAgreement":{
"option":"DELAY_TO_CANCEL",
"delay":"6D"
},
"contacts":[
{
"addresseeName":{
"firstName":"PABLO RODRIGUEZ"
},
"address":{
"lines":[
"Calle Prado, 16"
],
"postalCode":"28014",
"countryCode":"ES",
"cityName":"Madrid"
},
"purpose":"STANDARD",
"phones":[
{
"deviceType":"LANDLINE",
"countryCallingCode":"34",
"number":"480080071"
},
{
"deviceType":"MOBILE",
"countryCallingCode":"33",
"number":"480080072"
}
],
"companyName":"INCREIBLE VIAJES",
"emailAddress":"support#increibleviajes.es"
}
]
},
"dictionaries":{
"locations":{
"ISB":{
"cityCode":"ISB",
"countryCode":"PK"
},
"DXB":{
"cityCode":"DXB",
"countryCode":"AE"
}
}
}
}
I sent a flight-order create request and got back a PNR; now I want to issue the ticket. #Amadeus
As of now, the Flight Create Orders API allows you to book a flight and generate a PNR, but it does not allow for ticketing. Therefore, one of the requirements to use the API in production is to sign a contract with an airline consolidator that will issue the tickets.
Please check the requirements on the API page. If you want help finding a consolidator, get in touch with us via the support channel and we can recommend one.

How can I stop receiving irrelevant disk_usage metrics while using the CloudWatch Agent?

Following is the config.json that I'm using
{
"agent": {
"metrics_collection_interval": 300,
"run_as_user": "root"
},
"metrics": {
"append_dimensions": {
"AutoScalingGroupName": "${aws:AutoScalingGroupName}",
"ImageId": "${aws:ImageId}",
"InstanceId": "${aws:InstanceId}",
"InstanceType": "${aws:InstanceType}"
},
"metrics_collected": {
"disk": {
"measurement": [
"used_percent"
],
"metrics_collection_interval": 300,
"resources": [
"/"
]
},
"mem": {
"measurement": [
"mem_used_percent"
],
"metrics_collection_interval": 300
}
}
}
}
But using this configuration I am receiving many metrics which I don't need; pasting a sample pic below.
I just need the disk_used_percent metric for device: rootfs and path: /
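Not an authoritative fix, but a sketch of the disk section with two documented agent options that are commonly used to trim this down: ignore_file_system_types (skips pseudo-filesystems such as tmpfs/devtmpfs) and drop_device (removes the device dimension so the metric is reported per path only). Verify both against the CloudWatch Agent documentation for your agent version:
"disk": {
  "measurement": [
    "used_percent"
  ],
  "metrics_collection_interval": 300,
  "resources": [
    "/"
  ],
  "ignore_file_system_types": [
    "sysfs", "devtmpfs", "tmpfs", "overlay", "squashfs"
  ],
  "drop_device": true
}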

Logging with Serilog concurrently to AzureTableStorage

Logging to AzureTableStorage causes overhead that blocks subsequent calls compared to logging to a file. Is there a way to make this process faster (a batch or async option)?
My code in appsettings.json:
"Serilog": {
"MinimumLevel": "Information",
"WriteTo": [
{
"Name": "AzureTableStorage",
"Args": {
"storageTableName": "tbl",
"connectionString": "DefaultEndpointsProtocol=http;AccountName=xxx;AccountKey=/xxxxxxxx"
}
}
]
}
I ended up using https://github.com/serilog/serilog-sinks-async and the batch option in AzureTableStorage, similar to:
"WriteTo": [
{
"Name": "Async",
"Args": {
"configure": [
{
"Name": "AzureTableStorage",
"Args": {
"storageTableName": "tbl",
"connectionString": "DefaultEndpointsProtocol=http;AccountName=xxx;AccountKey=xxx"
}
}
]
}
}
]
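Put together, a hedged sketch of the full Serilog section in appsettings.json (the Using entries assume the Serilog.Sinks.Async and Serilog.Sinks.AzureTableStorage packages; the async wrapper moves writes onto a background worker so the calling thread is no longer blocked by table-storage latency):
"Serilog": {
  "Using": [ "Serilog.Sinks.Async", "Serilog.Sinks.AzureTableStorage" ],
  "MinimumLevel": "Information",
  "WriteTo": [
    {
      "Name": "Async",
      "Args": {
        "configure": [
          {
            "Name": "AzureTableStorage",
            "Args": {
              "storageTableName": "tbl",
              "connectionString": "DefaultEndpointsProtocol=http;AccountName=xxx;AccountKey=xxx"
            }
          }
        ]
      }
    }
  ]
}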