ovs-vsctl set port tag command does not work when using OVN at the same time (KVM)

I use OVN (https://docs.ovn.org/en/latest/) as an SDN tool to manage my virtual machines' networking.
There is a KVM interface named 763ec19668dc6b3 bound to an OVN logical switch. Here are the details:
ovn-nbctl list Logical_Switch_port 763ec19668dc6b3
_uuid : 9076cf67-5e2d-4814-a111-0d2b6c6427c0
addresses : ["02:00:00:a1:58:60 10.67.50.163"]
dhcpv4_options : 89242b8c-5696-4755-91b7-ae8e682ad3fe
dhcpv6_options : []
dynamic_addresses : []
enabled : []
external_ids : {}
ha_chassis_group : []
name : "763ec19668dc6b3"
options : {}
parent_name : []
port_security : []
tag : []
tag_request : []
type : ""
up : true
It is also a port belonging to br-int in Open vSwitch.
ovs-vsctl list-ports br-int | grep 763ec19668dc6b3
763ec19668dc6b3
I want to set a VLAN tag on this interface, so I used the command "ovs-vsctl set port 763ec19668dc6b3 tag=50", but I found this command does not work: when I capture the network traffic, the packets do not contain the VLAN tag.
PS: ovn-nbctl set Logical_Switch_Port 763ec19668dc6b3 tag=50 and ovn-nbctl set Logical_Switch_Port 763ec19668dc6b3 tag_request=50 do not work either.
version detail:
ovn-nbctl 21.06.0
Open vSwitch Library 2.15.1
DB Schema 5.32.0
Can anybody answer my question? Thanks!
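For context: ovn-controller programs all of the flows on br-int itself, so a tag set directly with ovs-vsctl on an OVN-managed port is liable to be ignored or overwritten. In OVN, VLAN tagging toward the physical network is normally modeled with a localnet port on the logical switch rather than on the VIF. A hedged sketch (the names ls1, physnet1, and br-provider are illustrative, not from the post):

```shell
# Assumed names: logical switch "ls1", physical network mapping "physnet1".
ovn-nbctl lsp-add ls1 ln-vlan50
ovn-nbctl lsp-set-type ln-vlan50 localnet
ovn-nbctl lsp-set-addresses ln-vlan50 unknown
ovn-nbctl lsp-set-options ln-vlan50 network_name=physnet1
# The VLAN tag lives on the localnet port, not on the VIF:
ovn-nbctl set Logical_Switch_Port ln-vlan50 tag=50
```

The chassis additionally needs a bridge mapping (e.g. `ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=physnet1:br-provider`) so the localnet port has a provider bridge to attach to.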

Check if a request has a response in Zeek language

Good Morning,
I have a Zeek machine generating logs on a Modbus traffic.
Currently, my script generates logs that look like this:
ts tid id.orig_h id.orig_p id.resp_h id.resp_p unit_id func network_direction
1342774501.072580 32 10.2.2.2 51411 10.2.2.3 502 255 READ_HOLDING_REGISTERS request
1342774501.087014 32 10.2.2.2 51411 10.2.2.3 502 255 READ_HOLDING_REGISTERS response
'tid' is the transaction id that identifies a request/response pair. I want to know if a robot hasn't responded to the Controller, by logging only requests that did not get a response within 1 second.
My code is:
module Modbus_Extended;

export {
    redef enum Log::ID += { LOG_DETAILED,
                            LOG_MASK_WRITE_REGISTER,
                            LOG_READ_WRITE_MULTIPLE_REGISTERS };

    type Modbus_Detailed: record {
        ts                : time    &log;            # Timestamp of event
        tid               : count   &log;            # Modbus transaction id
        id                : conn_id &log;            # Zeek connection struct (addresses and ports)
        unit_id           : count   &log;            # Modbus unit-id
        func              : string  &log &optional;  # Modbus function
        network_direction : string  &log &optional;  # Message direction (request or response)
        address           : count   &log &optional;  # Starting address for value(s) field
        quantity          : count   &log &optional;  # Number of addresses/values read or written to
        values            : string  &log &optional;  # Coils, discrete_inputs, or registers read/written to
    };

    global log_modbus_detailed: event(rec: Modbus_Detailed);
}

# Requests still waiting for a response, keyed by transaction id.  Entries
# expire after 1 second; an entry that expires is an unanswered request,
# so it is logged at that point.
global pending: table[count] of Modbus_Detailed &create_expire=1sec
    &expire_func=function(t: table[count] of Modbus_Detailed, tid: count): interval
        {
        Log::write(LOG_DETAILED, t[tid]);
        return 0sec;
        };

event modbus_message(c: connection, headers: ModbusHeaders, is_orig: bool) &priority=-5
    {
    if ( is_orig )
        {
        # Remember the request; it is only logged if no response arrives
        # before the table entry expires.
        pending[headers$tid] = [$ts=network_time(),
                                $tid=headers$tid,
                                $id=c$id,
                                $unit_id=headers$uid,
                                $network_direction="request"];
        }
    else if ( headers$tid in pending )
        delete pending[headers$tid];
    }
My guess is that I have to store transaction ids, check whether I see only one occurrence within this time window, and then log it to a file, but I can't figure out how to do it. Currently I can only generate logs with all the Modbus traffic.
Thank you for your help
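The matching logic being asked for can be sketched outside of Zeek. Here is a minimal plain-Python version of the idea (illustrative only; names like `unanswered` are not from any library): keep pending request timestamps per transaction id, drop an entry when its response arrives, and report entries whose deadline passes first.

```python
TIMEOUT = 1.0  # seconds a request may wait for its response

def unanswered(messages, timeout=TIMEOUT):
    """messages: iterable of (ts, tid, direction) tuples, ordered by ts.
    Returns [(tid, request_ts), ...] for requests with no timely response."""
    pending = {}   # tid -> timestamp of the outstanding request
    missed = []
    for ts, tid, direction in messages:
        # Expire anything whose 1-second deadline passed before this message.
        for p_tid, p_ts in list(pending.items()):
            if ts > p_ts + timeout:
                missed.append((p_tid, p_ts))
                del pending[p_tid]
        if direction == "request":
            pending[tid] = ts
        elif tid in pending:
            del pending[tid]
    missed.extend(pending.items())  # still unanswered at end of trace
    return missed
```

In Zeek itself the same effect is usually achieved with a table carrying `&create_expire=1sec` and an `&expire_func` that writes the log entry.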

Can't handle HTTP multiple attribute values in Perl

I'm facing a really strange issue. I interfaced SAML authentication with OTRS, an ITSM tool written in Perl, and the Identity Provider sends the attributes as follows:
LoginName : dev-znuny02
mail : test2@company.dev
Profile : company.autre.idp.v2()
Profile : company.autre.mcf.sp(dev)
givenName : MyName
sn : Test2
I handle these with a module called Mod_Auth_Mellon and, as you can see, the attribute Profile is multi-valued. In short, I retrieve all of these values with the following snippet:
sub new {
    my ( $Type, %Param ) = @_;

    # allocate new hash for object
    my $Self = {};
    bless( $Self, $Type );

    $Self->{ConfigObject} = $Kernel::OM->Get('Kernel::Config');
    $Self->{UserObject}   = Kernel::System::User->new( %{$Self} );

    # Handle header's attributes
    $Self->{loginName} = 'MELLON_LoginName';
    $Self->{eMail}     = 'MELLON_mail';
    $Self->{Profile_0} = 'MELLON_Profile_0';
    $Self->{Profile_1} = 'MELLON_Profile_1';
    $Self->{gName}     = 'MELLON_givenName';
    $Self->{sName}     = 'MELLON_sn';

    return $Self;
}

sub Auth {
    my ( $Self, %Param ) = @_;

    # get params
    my $lname    = $ENV{$Self->{loginName}};
    my $email    = $ENV{$Self->{eMail}};
    my $profile0 = $ENV{$Self->{Profile_0}};
    my $profile1 = $ENV{$Self->{Profile_1}};
    my $gname    = $ENV{$Self->{gName}};
    my $sname    = $ENV{$Self->{sName}};
    ...
}
I can handle all of the attribute values except the attribute Profile. When I take a look at the documentation, it says:
If an attribute has multiple values, then they will be stored as MELLON_<name>_0, MELLON_<name>_1, MELLON_<name>_2
To be sure, I activated the diagnostics of the Mellon module and indeed I receive the information correctly :
...
MELLON_LoginName : dev_znuny02
MELLON_LoginName_0 : dev_znuny02
MELLON_mail : test2@company.dev
MELLON_mail_0 : test2@company.dev
MELLON_Profile : company.autre.idp.v2()
MELLON_Profile_0 : company.autre.idp.v2()
MELLON_Profile_1 : company.autre.mcf.sp(dev)
...
When I try to manipulate the MELLON_Profile_0 or MELLON_Profile_1 attributes in the Perl script, the variable assigned to them seems empty. Do you have any idea what the issue might be?
Any help is welcome! Thanks a lot, guys.
PS: I have no control over the Identity Provider, so I can't edit the attributes it sends.
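Since the number of Profile values can vary, one option is to probe the numbered variables in a loop rather than hard-coding `Profile_0` and `Profile_1`. A sketch of the idea in Python (the Perl version would walk `%ENV` the same way; `collect_attr` is a hypothetical helper, not part of Mellon):

```python
import os

def collect_attr(name, environ=os.environ):
    """Gather MELLON_<name>_0, MELLON_<name>_1, ... until one is missing."""
    values = []
    i = 0
    while "MELLON_%s_%d" % (name, i) in environ:
        values.append(environ["MELLON_%s_%d" % (name, i)])
        i += 1
    return values
```

This also makes it easy to log exactly which numbered variables the script actually sees, which helps when the diagnostics output and the script's view of the environment disagree.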
I didn't manage to make it work, but I found a workaround to prevent users who don't have the Profile attribute value from logging into the application:
MellonCond Profile company.autre.mcf.sp(dev)
according to the documentation:
You can also utilize SAML attributes to control whether Mellon authentication succeeds (a form of authorization). So even though the IdP may have successfully authenticated the user you can apply additional constraints via the MellonCond directive. The basic idea is that each MellonCond directive specifies one condition that either evaluates to True or False.

Bro script for reading a list of Ips and domains

I am trying to read a file with a list of IP addresses and another one with domains, as a proof of concept of the Input Framework defined in https://docs.zeek.org/en/stable/frameworks/input.html
I've prepared the following Bro scripts:
reading.bro:

type Idx: record {
    ip: addr;
};

type Idx: record {
    domain: string;
};

global ips: table[addr] of Idx = table();
global domains: table[string] of Idx = table();

event bro_init() {
    Input::add_table([$source="read_ip_bro", $name="ips",
                      $idx=Idx, $destination=ips, $mode=Input::REREAD]);
    Input::add_table([$source="read_domain_bro", $name="domains",
                      $idx=Idx, $destination=domains, $mode=Input::REREAD]);
    Input::remove("ips");
    Input::remove("domains");
}
And the bad_ip.bro script, which checks whether an IP is in the blacklist; it loads the previous script:
bad_ip.bro:

@load reading.bro

module HTTP;

event http_reply(c: connection, version: string, code: count, reason: string)
    {
    if ( c$id$orig_h in ips )
        print fmt("A malicious IP is connecting: %s", c$id$orig_h);
    }
However, when I run bro, I get the error:
error: Input stream ips: Table type does not match index type. Need type 'string':string, got 'addr':addr
Segmentation fault (core dumped)
You cannot assign a string type to an addr type. In order to do so, you must use the utility function to_addr(). Of course, it would be wise to verify that the string contains a valid addr first. For example:

if ( is_valid_ip(inputString) )
    inputAddr = to_addr(inputString);
else
    print "addr expected, got a string";
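The same validate-then-convert pattern looks like this in Python, for comparison (an illustrative analogue of `is_valid_ip()`/`to_addr()`, not Zeek code):

```python
import ipaddress

def parse_addr(s):
    """Return an address object if s is a valid IP, else None."""
    try:
        return ipaddress.ip_address(s)
    except ValueError:
        return None
```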

Properly accessing cluster_config '__default__' values

I have a cluster.json file that looks like this:
{
    "__default__" :
    {
        "queue"  : "normal",
        "memory" : "12288",
        "nCPU"   : "1",
        "name"   : "{rule}_{wildcards.sample}",
        "o"      : "logs/cluster/{wildcards.sample}/{rule}.o",
        "e"      : "logs/cluster/{wildcards.sample}/{rule}.e",
        "jvm"    : "10240m"
    },
    "aln_pe" :
    {
        "memory" : "61440",
        "nCPU"   : "16"
    },
    "GenotypeGVCFs" :
    {
        "jvm"    : "102400m",
        "memory" : "122880"
    }
}
In my snakefile I have a few rules that try to access the cluster_config object in their params
params:
    memory=cluster_config['__default__']['jvm']
But this will give me a 'KeyError'
KeyError in line 27 of home/bwubb/projects/Germline/S0330901/haplotype.snake:
'__default__'
Does this have something to do with '__default__' being a special object? It pprints as a visually appealing dictionary, whereas the others are labeled OrderedDict, but when I look at the JSON they look the same.
If nothing is wrong with my json, then should I refrain from accessing '__default__'?
The default value is accessed via the keyword "cluster", not
__default__
See here in this example in the tutorial:
{
    "__default__" :
    {
        "account"   : "my account",
        "time"      : "00:15:00",
        "n"         : 1,
        "partition" : "core"
    },
    "compute1" :
    {
        "time" : "00:20:00"
    }
}
The JSON listed above is the one being accessed in this example; it's unfortunate the two are not on the same documentation page.
To access time, J.K. uses the following call.
#!/usr/bin/env python3
import os
import sys

from snakemake.utils import read_job_properties

jobscript = sys.argv[1]
job_properties = read_job_properties(jobscript)

# do something useful with the threads
threads = job_properties["threads"]

# access property defined in the cluster configuration file (Snakemake >=3.6.0)
time = job_properties["cluster"]["time"]

os.system("qsub -t {threads} {script}".format(threads=threads, script=jobscript))
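For completeness, the merge semantics at work here ("__default__" supplies base values, a rule's own section overrides them) can be sketched as follows. `rule_resources` is a hypothetical helper for illustration, not part of Snakemake:

```python
def rule_resources(cluster_config, rule):
    """Merge a rule's cluster settings over the __default__ section."""
    merged = dict(cluster_config.get("__default__", {}))
    merged.update(cluster_config.get(rule, {}))
    return merged
```

So for the cluster.json in the question, `rule_resources(cfg, "aln_pe")` yields the default queue and jvm settings but the rule's own memory and nCPU.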

Are there any API's for Amazon Web Services PRICING? [closed]

Are there any API's that have up-to-date pricing on Amazon Web Services? Something that can be queried, for example, for the latest price S3 for a given region, or EC2, etc.
thanks
UPDATE:
AWS has pricing API nowadays: https://aws.amazon.com/blogs/aws/new-aws-price-list-api/
Original answer:
This is something I have asked for (via AWS evangelists and surveys) previously, but hasn't been forthcoming. I guess the AWS folks have more interesting innovations on their horizon.
As pointed out by @brokenbeatnik, there is an API for spot-price history. API docs here: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeSpotPriceHistory.html
I find it odd that the spot-price history has an official API, but that they didn't do this for on-demand services at the same time. Anyway, to answer the question, yes you can query the advertised AWS pricing...
The best I can come up with is from examining the (client-side) source of the various services' pricing pages. Therein you'll find that the tables are built in JS and populated with JSON data, data that you can GET yourself. E.g.:
http://aws.amazon.com/ec2/pricing/pricing-on-demand-instances.json
http://aws.amazon.com/s3/pricing/pricing-storage.json
That's only half the battle though, next you have to pick apart the object format to get at the values you want, e.g., in Python this gets the Hi-CPU On-Demand Extra-Large Linux Instance pricing for Virginia:
>>> import json
>>> import urllib2
>>> response = urllib2.urlopen('http://aws.amazon.com/ec2/pricing/pricing-on-demand-instances.json')
>>> pricejson = response.read()
>>> pricing = json.loads(pricejson)
>>> pricing['config']['regions'][0]['instanceTypes'][3]['sizes'][1]['valueColumns'][0]['prices']['USD']
u'0.68'
Disclaimer: Obviously this is not an AWS sanctioned API and as such I wouldn't recommend expecting stability of the data format or even continued existence of the source. But it's there, and it beats transcribing the pricing data into static config/source files!
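Since the hard-coded list indices in the one-liner above silently break if Amazon reorders regions or instance types, a slightly safer sketch is to walk the same JSON by name (assuming only the structure shown above; `find_price` is an illustrative helper, not an AWS API):

```python
def find_price(pricing, region, instance_type, size, currency='USD'):
    """Look up a price in the pricing JSON by names instead of list indices."""
    for r in pricing['config']['regions']:
        if r['region'] != region:
            continue
        for t in r['instanceTypes']:
            if t['type'] != instance_type:
                continue
            for s in t['sizes']:
                if s['size'] == size:
                    # take the first value column; filtering by OS name
                    # would follow the same pattern
                    return s['valueColumns'][0]['prices'][currency]
    return None
```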
For people who want to use this data with the instance-type names the Amazon API uses (things like "t1.micro"), here is a translation table:

type_translation = {
    'm1.small'    : ['stdODI', 'sm'],
    'm1.medium'   : ['stdODI', 'med'],
    'm1.large'    : ['stdODI', 'lg'],
    'm1.xlarge'   : ['stdODI', 'xl'],
    't1.micro'    : ['uODI', 'u'],
    'm2.xlarge'   : ['hiMemODI', 'xl'],
    'm2.2xlarge'  : ['hiMemODI', 'xxl'],
    'm2.4xlarge'  : ['hiMemODI', 'xxxxl'],
    'c1.medium'   : ['hiCPUODI', 'med'],
    'c1.xlarge'   : ['hiCPUODI', 'xl'],
    'cc1.4xlarge' : ['clusterComputeI', 'xxxxl'],
    'cc2.8xlarge' : ['clusterComputeI', 'xxxxxxxxl'],
    'cg1.4xlarge' : ['clusterGPUI', 'xxxxl'],
    'hi1.4xlarge' : ['hiIoODI', 'xxxxl']
}

region_translation = {
    'us-east-1'      : 'us-east',
    'us-west-2'      : 'us-west-2',
    'us-west-1'      : 'us-west',
    'eu-west-1'      : 'eu-ireland',
    'ap-southeast-1' : 'apac-sin',
    'ap-northeast-1' : 'apac-tokyo',
    'sa-east-1'      : 'sa-east-1'
}
I've created a quick & dirty API in Python for accessing the pricing data in those JSON files and converting it to the relevant values (the right translations and the right instance types).
You can get the code here: https://github.com/erans/ec2instancespricing
And read a bit more about it here: http://forecastcloudy.net/2012/04/03/quick-dirty-api-for-accessing-amazon-web-services-aws-ec2-pricing-data/
You can use this file as a module and call its functions to get a Python dictionary with the results, or you can use it as a command-line tool that outputs a human-readable table, JSON, or CSV for use in combination with other command-line tools.
There is a nice API available via the link below which you can query for AWS pricing.
http://info.awsstream.com
If you play around a bit with the filters, you can see how to construct a query to return the specific information you are after e.g. region, instance type etc. For example, to return a json containing the EC2 pricing for the eu-west-1 region linux instances, you can format your query as per below.
http://info.awsstream.com/instances.json?region=eu-west-1&os=linux
Just replace json with xml in the query above to return the information in an xml format.
Note - similar to the URLs posted by other contributors above, I don't believe this is an officially sanctioned AWS API. However, based on a number of spot checks I've made over the last couple of days, I can confirm that at the time of posting the pricing information seems to be correct.
I don't believe there's an API that covers general current prices for the standard services. However, for EC2 in particular, you can see spot price history so that you don't have to guess what the market price for a spot instance is. More on this is available at:
http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/using-spot-instances-history.html
I too needed an API to retrieve AWS pricing. I was surprised to find nothing especially given the large number of APIs available for AWS resources.
My preferred language is Ruby, so I wrote a Gem called AWSCosts that provides programmatic access to AWS pricing.
Here is an example of how to find the on demand price for a m1.medium Linux instance.
AWSCosts.region('us-east-1').ec2.on_demand(:linux).price('m1.medium')
For those who need the comprehensive AWS instance pricing data (EC2, RDS, ElastiCache and Redshift), here is the Python module grown from the one suggested above by Eran Sandler:
https://github.com/ilia-semenov/awspricingfull
It contains previous generation instances as well as current generation ones (including newest d2 family), reserved and on-demand pricing. JSON, Table and CSV formats available.
I made a Gist of the forward and reverse names in YAML, should anyone need them for Rails, etc.
Another quick & dirty approach, but with a conversion to a more convenient final data format:
import json
import urllib2  # this answer pre-dates Python 3's urllib.request


class CostsAmazon(object):
    '''Class for general info on the Amazon EC2 compute cloud.
    '''
    def __init__(self):
        '''Fetch a bunch of instance cost data from Amazon and convert it
        into the following form (as self.table):
        table['us-east']['linux']['m1']['small']['light']['ondemand']['USD']
        '''
        #
        # tables_raw['ondemand']['config']['regions'
        #     ][0]['instanceTypes'][0]['sizes'][0]['valueColumns'][0
        #     ]['prices']['USD']
        #
        # structure of tables_raw:
        #   ┃
        #   ┗━━[key]
        #      ┣━━['use']         # an input 3 x ∈ { 'light', 'medium', ... }
        #      ┣━━['os']          # an input 2 x ∈ { 'linux', 'mswin' }
        #      ┣━━['scheduling']  # an input
        #      ┣━━['uri']         # an input (see dict above)
        #      ┃                  # the core output from Amazon follows
        #      ┣━━['vers'] == 0.01
        #      ┗━━['config']:
        # *       ┣━━['regions']: 7 x
        #         ┃  ┣━━['region'] == ∈ { 'us-east', ... }
        # *       ┃  ┗━━['instanceTypes']: 7 x
        #         ┃     ┣━━['type']: 'stdODI'
        # *       ┃     ┗━━['sizes']: 4 x
        #         ┃        ┗━━['valueColumns']
        #         ┃           ┣━━['size']: 'sm'
        # *       ┃           ┗━━['valueColumns']: 2 x
        #         ┃              ┣━━['name']: ~ 'linux'
        #         ┃              ┗━━['prices']
        #         ┃                 ┗━━['USD']: ~ '0.080'
        #         ┣━━['rate']: ~ 'perhr'
        #         ┣━━['currencies']: ∈ { 'USD', ... }
        #         ┗━━['valueColumns']: [ 'linux', 'mswin' ]
        #
        # The valueColumns thing is weird, it looks like they're trying
        # to constrain actual data to leaf nodes only, which is a little
        # bit of a conceit since they have lists in several levels.  So
        # we can obtain the *much* more readable:
        #
        # tables['regions']['us-east']['m1']['linux']['ondemand'
        #     ]['small']['light']['USD']
        #
        # structure of the reworked tables:
        #   ┃
        #   ┗━━[<region>]: 7 x ∈ { 'us-east', ... }
        #      ┗━━[<os>]: 2 x ∈ { 'linux', 'mswin' }  # oses
        #         ┗━━[<type>]: 7 x ∈ { 'm1', ... }
        #            ┗━━[<scheduling>]: 2 x ∈ { 'ondemand', 'reserved' }
        #               ┗━━[<size>]: 4 x ∈ { 'small', ... }
        #                  ┗━━[<use>]: 3 x ∈ { 'light', 'medium', ... }
        #                     ┗━━[<currency>]: ∈ { 'USD', ... }
        #                        ┗━━> ~ '0.080' or None
        uri_base = 'http://aws.amazon.com/ec2/pricing'
        tables_raw = {
            'ondemand': {'scheduling': 'ondemand',
                         'uri': '/pricing-on-demand-instances.json',
                         'os': 'linux', 'use': 'light'},
            'reserved-light-linux': {
                'scheduling': 'reserved',
                'uri': 'ri-light-linux.json', 'os': 'linux', 'use': 'light'},
            'reserved-light-mswin': {
                'scheduling': 'reserved',
                'uri': 'ri-light-mswin.json', 'os': 'mswin', 'use': 'light'},
            'reserved-medium-linux': {
                'scheduling': 'reserved',
                'uri': 'ri-medium-linux.json', 'os': 'linux', 'use': 'medium'},
            'reserved-medium-mswin': {
                'scheduling': 'reserved',
                'uri': 'ri-medium-mswin.json', 'os': 'mswin', 'use': 'medium'},
            'reserved-heavy-linux': {
                'scheduling': 'reserved',
                'uri': 'ri-heavy-linux.json', 'os': 'linux', 'use': 'heavy'},
            'reserved-heavy-mswin': {
                'scheduling': 'reserved',
                'uri': 'ri-heavy-mswin.json', 'os': 'mswin', 'use': 'heavy'},
        }
        for key in tables_raw:
            # expand to full URIs
            tables_raw[key]['uri'] = (
                '%s/%s' % (uri_base, tables_raw[key]['uri']))
            # fetch the data from Amazon
            link = urllib2.urlopen(tables_raw[key]['uri'])
            # adds keys: 'vers' 'config'
            tables_raw[key].update(json.loads(link.read()))
            link.close()
        # canonicalize the types - the default is pretty annoying.
        self.currencies = set()
        self.regions = set()
        self.types = set()
        self.intervals = set()
        self.oses = set()
        self.sizes = set()
        self.schedulings = set()
        self.uses = set()
        self.footnotes = {}
        self.typesizes = {}  # self.typesizes['m1.small'] = [<region>...]
        self.table = {}
        # grovel through Amazon's tables_raw and convert to something orderly:
        for key in tables_raw:
            scheduling = tables_raw[key]['scheduling']
            self.schedulings.update([scheduling])
            use = tables_raw[key]['use']
            self.uses.update([use])
            os = tables_raw[key]['os']
            self.oses.update([os])
            config_data = tables_raw[key]['config']
            self.currencies.update(config_data['currencies'])
            for region_data in config_data['regions']:
                region = self.instance_region_from_raw(region_data['region'])
                self.regions.update([region])
                if 'footnotes' in region_data:
                    self.footnotes[region] = region_data['footnotes']
                for instance_type_data in region_data['instanceTypes']:
                    instance_type = self.instance_types_from_raw(
                        instance_type_data['type'])
                    self.types.update([instance_type])
                    for size_data in instance_type_data['sizes']:
                        size = self.instance_size_from_raw(size_data['size'])
                        typesize = '%s.%s' % (instance_type, size)
                        if typesize not in self.typesizes:
                            self.typesizes[typesize] = set()
                        self.typesizes[typesize].update([region])
                        self.sizes.update([size])
                        for size_values in size_data['valueColumns']:
                            interval = size_values['name']
                            self.intervals.update([interval])
                            for currency in size_values['prices']:
                                cost = size_values['prices'][currency]
                                self.table_add_row(region, os, instance_type,
                                                   size, use, scheduling,
                                                   currency, cost)

    def table_add_row(self, region, os, instance_type, size, use, scheduling,
                      currency, cost):
        if cost == 'N/A*':
            return
        table = self.table
        for key in [region, os, instance_type, size, use, scheduling]:
            if key not in table:
                table[key] = {}
            table = table[key]
        table[currency] = cost

    def instance_region_from_raw(self, raw_region):
        '''Map a region name as given in the EC2 pricing data to the
        corresponding canonical region name.
        '''
        regions = {
            'apac-tokyo' : 'ap-northeast-1',
            'apac-sin'   : 'ap-southeast-1',
            'eu-ireland' : 'eu-west-1',
            'sa-east-1'  : 'sa-east-1',
            'us-east'    : 'us-east-1',
            'us-west'    : 'us-west-1',
            'us-west-2'  : 'us-west-2',
        }
        return regions[raw_region] if raw_region in regions else raw_region

    def instance_types_from_raw(self, raw_type):
        types = {
            #  ondemand                    reserved
            'stdODI'          : 'm1',  'stdResI'         : 'm1',
            'uODI'            : 't1',  'uResI'           : 't1',
            'hiMemODI'        : 'm2',  'hiMemResI'       : 'm2',
            'hiCPUODI'        : 'c1',  'hiCPUResI'       : 'c1',
            'clusterComputeI' : 'cc1', 'clusterCompResI' : 'cc1',
            'clusterGPUI'     : 'cc2', 'clusterGPUResI'  : 'cc2',
            'hiIoODI'         : 'hi1', 'hiIoResI'        : 'hi1'
        }
        return types[raw_type]

    def instance_size_from_raw(self, raw_size):
        sizes = {
            'u'         : 'micro',
            'sm'        : 'small',
            'med'       : 'medium',
            'lg'        : 'large',
            'xl'        : 'xlarge',
            'xxl'       : '2xlarge',
            'xxxxl'     : '4xlarge',
            'xxxxxxxxl' : '8xlarge'
        }
        return sizes[raw_size]

    def cost(self, region, os, instance_type, size, use, scheduling,
             currency):
        try:
            return self.table[region][os][instance_type][
                size][use][scheduling][currency]
        except KeyError:
            return None
Here is another unsanctioned "api" which covers reserved instances: http://aws.amazon.com/ec2/pricing/pricing-reserved-instances.json
There is no official pricing API, but there are the very nice price rippers mentioned above.
In addition to the EC2 price ripper, I'd like to share my RDS and ElastiCache price rippers:
https://github.com/evgeny-gridasov/rdsinstancespricing
https://github.com/evgeny-gridasov/elasticachepricing
There is a reply to a similar question which lists all the .js files containing the prices; these are essentially JSON files (with only a callback(...); wrapper to remove).
Here is an example for Linux On-Demand prices: http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js
(Get the full list directly on that reply)