IP Structure: Why is HeaderLength ahead of Version in the structure?

This is my ctypes code in Python:
class IP(Structure):
    _fields_ = [
        ('ip_hl', c_ubyte, 4),
        ('ip_version', c_ubyte, 4),
        ('tos', c_ubyte),
        ('len', c_ushort),
        ('id', c_ushort),
        ('offset', c_ushort),
        ('ttl', c_ubyte),
        ('protocol_num', c_ubyte),
        ('sum', c_ushort),
        ('src', c_ulong),
        ('dst', c_ulong)
    ]
As we all know, in the IP header bits 0-3 are the Version and bits 4-7 are the Header Length, so why does this code put HeaderLength ahead of Version?
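For what it's worth, the behaviour can be observed directly: on a little-endian host, ctypes (following the platform's C compiler) allocates bit fields starting from the least-significant bit, so the field declared first lands in the low nibble of the byte. A minimal sketch (assuming a little-endian machine such as x86):

```python
import ctypes

class FirstByte(ctypes.Structure):
    # Declared low bits first: on a little-endian host, ctypes allocates
    # bit fields starting at the least-significant bit of the byte.
    _fields_ = [
        ('ip_hl', ctypes.c_ubyte, 4),       # low nibble  -> header length
        ('ip_version', ctypes.c_ubyte, 4),  # high nibble -> version
    ]

# 0x45 is the usual first byte of an IPv4 header: version 4, IHL 5.
b = FirstByte.from_buffer_copy(bytes([0x45]))
print(b.ip_version, b.ip_hl)  # 4 5 (on a little-endian machine)
```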


How can I include kotlin-reflect in the classpath of the Bazel compiler?

I'm trying to get moshi-kotlin-codegen to run on some Kotlin code via Bazel. After a lot of trial and error, I managed to get the plugin to run, but it's failing due to not having kotlin-reflect on the classpath. This is needed by kotlinpoet, which is used by Moshi, so it should be transitively included, AFAICT. However, even explicitly stating the dependency in the BUILD.bazel file for moshi-kotlin-codegen doesn't make it work, so I can only assume it gets filtered out somewhere.
The WORKSPACE file:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_jvm_external",
    sha256 = "62133c125bf4109dfd9d2af64830208356ce4ef8b165a6ef15bbff7460b35c3a",
    strip_prefix = "rules_jvm_external-3.0",
    url = "https://github.com/bazelbuild/rules_jvm_external/archive/3.0.zip",
)

load("@rules_jvm_external//:defs.bzl", "maven_install")

maven_install(
    artifacts = [
        "com.github.ajalt:clikt:2.6.0",
        "org.eclipse.jgit:org.eclipse.jgit:5.7.0.202003090808-r",
        "io.github.microutils:kotlin-logging:1.7.8",
        "ch.qos.logback:logback-classic:1.2.3",
        "com.github.scribejava:scribejava-core:6.9.0",
        "com.squareup.moshi:moshi:1.9.2",
        "com.squareup.moshi:moshi-kotlin-codegen:1.9.2",
        "org.kohsuke:github-api:1.108",
        "com.github.ben-manes.caffeine:caffeine:2.8.2",
        "javax.xml.bind:jaxb-api:2.3.1",
        "org.junit.jupiter:junit-jupiter:5.6.0",
        "org.junit.jupiter:junit-jupiter-params:5.6.0",
        "com.google.truth:truth:1.0.1",
    ],
    fetch_sources = True,
    repositories = [
        "https://maven.google.com",
        "https://repo1.maven.org/maven2",
        "https://jcenter.bintray.com/",
    ],
    strict_visibility = True,
)

rules_kotlin_version = "legacy-1.4.0-rc3"
rules_kotlin_sha = "da0e6e1543fcc79e93d4d93c3333378f3bd5d29e82c1bc2518de0dbe048e6598"

http_archive(
    name = "io_bazel_rules_kotlin",
    urls = ["https://github.com/bazelbuild/rules_kotlin/releases/download/%s/rules_kotlin_release.tgz" % rules_kotlin_version],
    sha256 = rules_kotlin_sha,
)

load("@io_bazel_rules_kotlin//kotlin:kotlin.bzl", "kotlin_repositories", "kt_register_toolchains")

kotlin_repositories()
kt_register_toolchains()
The BUILD.bazel for moshi-kotlin-codegen:
java_plugin(
    name = "moshi_kotlin_codegen_plugin",
    processor_class = "com.squareup.moshi.kotlin.codegen.JsonClassCodegenProcessor",
    deps = [
        "@maven//:com_squareup_moshi_moshi_kotlin_codegen",
    ],
    generates_api = True,
    visibility = ["//visibility:public"],
)
(I also tried adding a java_library and depending on that, no luck.)
The final BUILD file that tries to include it:
load("@io_bazel_rules_kotlin//kotlin:kotlin.bzl", "kt_jvm_binary")

kt_jvm_binary(
    name = "myproject",
    srcs = glob([
        "**/*.kt",
    ]),
    main_class = "my.project.MainKt",
    plugins = [
        "//third_party/moshi_kotlin_codegen:moshi_kotlin_codegen_plugin",
    ],
    deps = [
        "@maven//:ch_qos_logback_logback_classic",
        "@maven//:com_github_ajalt_clikt",
        "@maven//:com_github_ben_manes_caffeine_caffeine",
        "@maven//:com_github_scribejava_scribejava_core",
        "@maven//:com_squareup_moshi_moshi",
        "@maven//:io_github_microutils_kotlin_logging",
        "@maven//:org_eclipse_jgit_org_eclipse_jgit",
        "@maven//:org_kohsuke_github_api",
        "@maven//:javax_xml_bind_jaxb_api",
    ],
)
The exception during the compilation:
Caused by: kotlin.jvm.KotlinReflectionNotSupportedError: Kotlin reflection implementation is not found at runtime. Make sure you have kotlin-reflect.jar in the classpath
at kotlin.jvm.internal.ClassReference.error(ClassReference.kt:79)
at kotlin.jvm.internal.ClassReference.getQualifiedName(ClassReference.kt:15)
at com.squareup.kotlinpoet.ClassNames.get(ClassName.kt:49)
at com.squareup.moshi.kotlinpoet.classinspector.elements.ElementsClassInspector.<clinit>(ElementsClassInspector.kt:493)
at com.squareup.moshi.kotlin.codegen.JsonClassCodegenProcessor.process(JsonClassCodegenProcessor.kt:99)
at org.jetbrains.kotlin.kapt3.base.incremental.IncrementalProcessor.process(incrementalProcessors.kt)
at org.jetbrains.kotlin.kapt3.base.ProcessorWrapper.process(annotationProcessing.kt:147)
at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:980)
... 48 more
Turns out that this was indeed a bug. A fix is https://github.com/bazelbuild/rules_kotlin/pull/354.

Platform-dependent linker flags in Bazel (for GLUT)

I am trying to build a C++ app with GLUT using Bazel. It should work on both macOS and Linux. The problem is that on macOS it requires passing "-framework OpenGL", "-framework GLUT" to the linker flags, while on Linux I should probably do something like
cc_library(
    name = "glut",
    srcs = glob(["local/lib/libglut*.dylib", "lib/libglut*.so"]),
    ...

in glut.BUILD.
So the question is
1. How to provide platform-dependent linker options to cc_library rules in general?
2. And in particular how to link to glut in platform-independent way using bazel?
You can do this using the Bazel select() function. Something like this might work:
config_setting(
    name = "linux_x86_64",
    values = {"cpu": "k8"},
    visibility = ["//visibility:public"],
)

config_setting(
    name = "darwin_x86_64",
    values = {"cpu": "darwin_x86_64"},
    visibility = ["//visibility:public"],
)

cc_library(
    name = "glut",
    srcs = select({
        ":darwin_x86_64": [],
        ":linux_x86_64": glob(["local/lib/libglut*.dylib", "lib/libglut*.so"]),
    }),
    linkopts = select({
        ":darwin_x86_64": [
            "-framework OpenGL",
            "-framework GLUT",
        ],
        ":linux_x86_64": [],
    }),
    ...
)
Dig around in the Bazel GitHub repository; it has some good real-world examples of using select().
I had a similar problem, but with picking the right compiler depending on the platform, and @zlalanne's solution didn't work for me. After 2 days of frustration, I finally found the following solution:
config_setting(
    name = "darwin",
    constraint_values = ["@bazel_tools//platforms:osx"],
)

config_setting(
    name = "windows",
    constraint_values = ["@bazel_tools//platforms:windows"],
)

I didn't have any need for Linux, but adding this to your BUILD file should work:

config_setting(
    name = "linux",
    constraint_values = ["@bazel_tools//platforms:linux"],
)
Use ":darwin", ":windows" and ":linux" in your selects and you should have a solution that works.

Ryu, openflow v1.5, OFPET_BAD_ACTION, OFPBAC_BAD_OUT_PORT errors

When I run the simple_switch_14.py application in a Mininet environment, pinging between hosts works just fine.
Then I change the following strings of code:
1) "from ryu.ofproto import ofproto_v1_4" to "from ryu.ofproto import ofproto_v1_5"
2) "OFP_VERSIONS = [ofproto_v1_4.OFP_VERSION]" to "OFP_VERSIONS = [ofproto_v1_5.OFP_VERSION]"
3) "out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id, in_port=in_port, actions=actions, data=data)" to "out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id, match=parser.OFPMatch(in_port=in_port), actions=actions, data=data)"
This means I am trying to run the application in an OpenFlow 1.5 environment.
I get the following error:
EVENT ofp_event->SimpleSwitch14 EventOFPPacketIn
packet in 1 00:00:00:00:00:01 ff:ff:ff:ff:ff:ff 1
EventOFPErrorMsg received.
version=0x6, msg_type=0x1, msg_len=0x44, xid=0x703a0cc
-- msg_type: OFPT_ERROR(1)
OFPErrorMsg(type=0x2, code=0x4, data=b'\x06\x0d\x00\x38\x07\x03\xa0\xcc\x00\x00\x01\x00\x00\x10\x00\x00\x00\x01\x00\x16\x80\x00\x00\x04\x00\x00\x00\x01\x80\x00\x06\x06\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x10\xff\xff\xff\xfb\xff\xe5\x00\x00\x00\x00\x00\x00')
|-- type: OFPET_BAD_ACTION(2)
|-- code: OFPBAC_BAD_OUT_PORT(4)
-- data: version=0x6, msg_type=0xd, msg_len=0x38, xid=0x703a0cc
`-- msg_type: OFPT_PACKET_OUT(13)
So I am wondering why it generates a 'bad out port' error. What has changed in 'out ports' or in 'actions' between v1.4 and v1.5 of the OpenFlow protocol?
Thanks

Getting digital signature Error "Error Encountered while BER decoding" using TCPDF

I use TCPDF to create a digital signature on a PDF document.
I use almost exactly the code mentioned here:
http://www.tcpdf.org/examples/example_052.phps
and I use the same .crt file that I use for SSL when browsing to my (Linux) server.
But no matter whether the file exists or not, and no matter which path or which server I use, I always get the same error:
"Error Encountered while BER decoding"
My server is CentOS 6.3 (64-bit), Apache/2.2.15 (CentOS), PHP 5.3.3, MySQLi 5.1.67.
My code is:
$pdf = new TCPDF();

// set document information
$pdf->SetCreator('DejaVuSansCondensed');
$pdf->SetAuthor('Soft1');
$pdf->SetTitle('Soft1 Verified');
$pdf->SetSubject('Soft1 Verified');
$pdf->SetKeywords('TCPDF, PDF, example, test, guide');

// set font
$pdf->SetFont('DejaVuSansCondensed', '', 10);

// set certificate file
$certificatePath = 'file://home/tcpdf.crt';

// set additional information
$info = array(
    'Name'        => 'Soft1',
    'Location'    => 'Server',
    'Reason'      => 'Signing Document',
    'ContactInfo' => 'http://www.soft1.com',
);

// set document signature
$pdf->setSignature($certificatePath, $certificatePath, 'tcpdfdemo', '', 2, $info);

// add a page
$pdf->AddPage();

// set RTL direction
$pdf->setRTL(true);

// output the HTML content
$pdf->writeHTML($printable, true, 0, true, 0);

// create content for signature (image and/or text)
$pdf->Image('themes/default/images/digital_signature.png', 25, 11, 15, 15, 'PNG');
$pdf->setSignatureAppearance(25, 11, 15, 15);

if ($task == 'pdf') {
    //$GLOBALS['log']->fatal("file_name = " . $file_name);
    //$file_name = "hello.pdf";
    $pdf->Output($file_name, "D");
}
Can anyone tell me what is wrong?
I searched the web for this error and found almost nothing except old forum posts.
Had the same issue and solved it by replacing the certificate path with:
$certificatePath = 'file:///home/tcpdf.crt';
Note the extra '/' in the path.
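The extra slash matters because in a file:// URI the two slashes introduce a host component, so an absolute local path needs a third slash. Python's urlparse (used here just to illustrate the generic URI rule; the question's code is PHP) shows what goes wrong with only two:

```python
from urllib.parse import urlparse

bad = urlparse('file://home/tcpdf.crt')    # two slashes: "home" parses as a host
good = urlparse('file:///home/tcpdf.crt')  # three slashes: empty host, full path

print(bad.netloc, bad.path)    # home /tcpdf.crt
print(good.netloc, good.path)  #      /home/tcpdf.crt
```

With `file://home/...`, the certificate path TCPDF actually opens is `/tcpdf.crt`, which explains why the file is never found no matter where it lives.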

Are there any APIs for Amazon Web Services PRICING? [closed]

Are there any APIs that have up-to-date pricing on Amazon Web Services? Something that can be queried, for example, for the latest S3 price in a given region, or EC2, etc.
thanks
UPDATE:
AWS has a pricing API nowadays: https://aws.amazon.com/blogs/aws/new-aws-price-list-api/
Original answer:
This is something I have asked for (via AWS evangelists and surveys) previously, but hasn't been forthcoming. I guess the AWS folks have more interesting innovations on their horizon.
As pointed out by @brokenbeatnik, there is an API for spot-price history. API docs here: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeSpotPriceHistory.html
I find it odd that the spot-price history has an official API, but that they didn't do this for on-demand services at the same time. Anyway, to answer the question, yes you can query the advertised AWS pricing...
The best I can come up with is from examining the (client-side) source of the various services' pricing pages. Therein you'll find that the tables are built in JS and populated with JSON data, data that you can GET yourself. E.g.:
http://aws.amazon.com/ec2/pricing/pricing-on-demand-instances.json
http://aws.amazon.com/s3/pricing/pricing-storage.json
That's only half the battle though, next you have to pick apart the object format to get at the values you want, e.g., in Python this gets the Hi-CPU On-Demand Extra-Large Linux Instance pricing for Virginia:
>>> import json
>>> import urllib2
>>> response = urllib2.urlopen('http://aws.amazon.com/ec2/pricing/pricing-on-demand-instances.json')
>>> pricejson = response.read()
>>> pricing = json.loads(pricejson)
>>> pricing['config']['regions'][0]['instanceTypes'][3]['sizes'][1]['valueColumns'][0]['prices']['USD']
u'0.68'
Disclaimer: Obviously this is not an AWS sanctioned API and as such I wouldn't recommend expecting stability of the data format or even continued existence of the source. But it's there, and it beats transcribing the pricing data into static config/source files!
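Indexing by list position, as above, breaks whenever Amazon reorders the tables. A slightly more robust sketch walks the same JSON layout by name (field names are taken from the snippet above; the live feed may of course differ):

```python
# A tiny sample mimicking the pricing JSON layout shown above.
pricing = {'config': {'regions': [
    {'region': 'us-east', 'instanceTypes': [
        {'type': 'hiCPUODI', 'sizes': [
            {'size': 'xl', 'valueColumns': [
                {'name': 'linux', 'prices': {'USD': '0.68'}},
            ]},
        ]},
    ]},
]}}

def find_price(pricing, region, itype, size, os_name):
    """Look up a price by name instead of by fragile list index."""
    for r in pricing['config']['regions']:
        if r['region'] != region:
            continue
        for t in r['instanceTypes']:
            if t['type'] != itype:
                continue
            for s in t['sizes']:
                if s['size'] != size:
                    continue
                for col in s['valueColumns']:
                    if col['name'] == os_name:
                        return col['prices']['USD']
    return None

print(find_price(pricing, 'us-east', 'hiCPUODI', 'xl', 'linux'))  # 0.68
```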
For people who want to use the data from the Amazon API, which uses names like "t1.micro", here is a translation table:
type_translation = {
    'm1.small'    : ['stdODI', 'sm'],
    'm1.medium'   : ['stdODI', 'med'],
    'm1.large'    : ['stdODI', 'lg'],
    'm1.xlarge'   : ['stdODI', 'xl'],
    't1.micro'    : ['uODI', 'u'],
    'm2.xlarge'   : ['hiMemODI', 'xl'],
    'm2.2xlarge'  : ['hiMemODI', 'xxl'],
    'm2.4xlarge'  : ['hiMemODI', 'xxxxl'],
    'c1.medium'   : ['hiCPUODI', 'med'],
    'c1.xlarge'   : ['hiCPUODI', 'xl'],
    'cc1.4xlarge' : ['clusterComputeI', 'xxxxl'],
    'cc2.8xlarge' : ['clusterComputeI', 'xxxxxxxxl'],
    'cg1.4xlarge' : ['clusterGPUI', 'xxxxl'],
    'hi1.4xlarge' : ['hiIoODI', 'xxxxl']
}

region_translation = {
    'us-east-1'      : 'us-east',
    'us-west-2'      : 'us-west-2',
    'us-west-1'      : 'us-west',
    'eu-west-1'      : 'eu-ireland',
    'ap-southeast-1' : 'apac-sin',
    'ap-northeast-1' : 'apac-tokyo',
    'sa-east-1'      : 'sa-east-1'
}
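To go from an API-style name to the keys used in the pricing JSON files, the two tables combine naturally. A small usage sketch (with a subset of the tables above inlined so it runs standalone):

```python
# Subset of the translation tables above, inlined for a self-contained demo.
type_translation = {
    't1.micro' : ['uODI', 'u'],
    'm1.small' : ['stdODI', 'sm'],
}
region_translation = {
    'us-east-1' : 'us-east',
}

def to_pricing_keys(instance_type, region):
    """Translate e.g. ('t1.micro', 'us-east-1') into the (region, type,
    size) keys used in the pricing JSON files."""
    itype, size = type_translation[instance_type]
    return region_translation.get(region, region), itype, size

print(to_pricing_keys('t1.micro', 'us-east-1'))  # ('us-east', 'uODI', 'u')
```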
I've created a quick & dirty API in Python for accessing the pricing data in those JSON files and converting it to the relevant values (the right translations and the right instance types).
You can get the code here: https://github.com/erans/ec2instancespricing
And read a bit more about it here: http://forecastcloudy.net/2012/04/03/quick-dirty-api-for-accessing-amazon-web-services-aws-ec2-pricing-data/
You can use this file as a module and call the functions to get a Python dictionary with the results, or you can use it as a command-line tool to get the output as a human-readable table, JSON, or CSV for use in combination with other command-line tools.
There is a nice API available via the link below which you can query for AWS pricing.
http://info.awsstream.com
If you play around a bit with the filters, you can see how to construct a query to return the specific information you are after e.g. region, instance type etc. For example, to return a json containing the EC2 pricing for the eu-west-1 region linux instances, you can format your query as per below.
http://info.awsstream.com/instances.json?region=eu-west-1&os=linux
Just replace json with xml in the query above to return the information in an xml format.
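Building the query URL programmatically is straightforward. A sketch (the instances.json path and the region/os parameter names are taken from the example above and are otherwise unverified):

```python
def awsstream_url(region, os_name, fmt='json'):
    """Build a query URL for the endpoint described above.

    fmt may be 'json' or 'xml', per the note about swapping the extension.
    """
    return 'http://info.awsstream.com/instances.%s?region=%s&os=%s' % (
        fmt, region, os_name)

print(awsstream_url('eu-west-1', 'linux'))
# http://info.awsstream.com/instances.json?region=eu-west-1&os=linux
```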
Note - similar to the URLs posted by other contributors above, I don't believe this is an officially sanctioned AWS API. However, based on a number of spot checks I've made over the last couple of days, I can confirm that at the time of posting the pricing information seems to be correct.
I don't believe there's an API that covers general current prices for the standard services. However, for EC2 in particular, you can see spot price history so that you don't have to guess what the market price for a spot instance is. More on this is available at:
http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/using-spot-instances-history.html
I too needed an API to retrieve AWS pricing. I was surprised to find nothing especially given the large number of APIs available for AWS resources.
My preferred language is Ruby so I wrote a Gem to called AWSCosts that provides programmatic access to AWS pricing.
Here is an example of how to find the on demand price for a m1.medium Linux instance.
AWSCosts.region('us-east-1').ec2.on_demand(:linux).price('m1.medium')
For those who need the comprehensive AWS instance pricing data (EC2, RDS, ElastiCache and Redshift), here is the Python module grown from the one suggested above by Eran Sandler:
https://github.com/ilia-semenov/awspricingfull
It contains previous-generation instances as well as current-generation ones (including the newest d2 family), with reserved and on-demand pricing. JSON, table, and CSV formats are available.
I made a Gist of forward and reverse names in YAML, should anyone need them for Rails, etc.
Another quick & dirty approach, but with a conversion to a more convenient final data format:
import json
import urllib2  # Python 2


class CostsAmazon(object):
    '''Class for general info on the Amazon EC2 compute cloud.
    '''
    def __init__(self):
        '''Fetch a bunch of instance cost data from Amazon and convert it
        into the following form (as self.table):
        table['us-east']['linux']['m1']['small']['light']['ondemand']['USD']
        '''
        #
        # tables_raw['ondemand']['config']['regions'
        #            ][0]['instanceTypes'][0]['sizes'][0]['valueColumns'][0
        #            ]['prices']['USD']
        #
        # structure of tables_raw:
        # ┃
        # ┗━━[key]
        #     ┣━━['use']          # an input 3 x ∈ { 'light', 'medium', ... }
        #     ┣━━['os']           # an input 2 x ∈ { 'linux', 'mswin' }
        #     ┣━━['scheduling']   # an input
        #     ┣━━['uri']          # an input (see dict above)
        #     ┃                   # the core output from Amazon follows
        #     ┣━━['vers'] == 0.01
        #     ┗━━['config']:
        # *       ┣━━['regions']: 7 x
        #         ┃   ┣━━['region'] == ∈ { 'us-east', ... }
        # *       ┃   ┗━━['instanceTypes']: 7 x
        #         ┃       ┣━━['type']: 'stdODI'
        # *       ┃       ┗━━['sizes']: 4 x
        #         ┃           ┗━━['valueColumns']
        #         ┃               ┣━━['size']: 'sm'
        # *       ┃               ┗━━['valueColumns']: 2 x
        #         ┃                   ┣━━['name']: ~ 'linux'
        #         ┃                   ┗━━['prices']
        #         ┃                       ┗━━['USD']: ~ '0.080'
        #         ┣━━['rate']: ~ 'perhr'
        #         ┣━━['currencies']: ∈ { 'USD', ... }
        #         ┗━━['valueColumns']: [ 'linux', 'mswin' ]
        #
        # The valueColumns thing is weird, it looks like they're trying
        # to constrain actual data to leaf nodes only, which is a little
        # bit of a conceit since they have lists in several levels. So
        # we can obtain the *much* more readable:
        #
        # tables['regions']['us-east']['m1']['linux']['ondemand'
        #       ]['small']['light']['USD']
        #
        # structure of the reworked tables:
        # ┃
        # ┗━━[<region>]: 7 x ∈ { 'us-east', ... }
        #     ┗━━[<os>]: 2 x ∈ { 'linux', 'mswin' }  # oses
        #         ┗━━[<type>]: 7 x ∈ { 'm1', ... }
        #             ┗━━[<scheduling>]: 2 x ∈ { 'ondemand', 'reserved' }
        #                 ┗━━[<size>]: 4 x ∈ { 'small', ... }
        #                     ┗━━[<use>]: 3 x ∈ { 'light', 'medium', ... }
        #                         ┗━━[<currency>]: ∈ { 'USD', ... }
        #                             ┗━━> ~ '0.080' or None
        uri_base = 'http://aws.amazon.com/ec2/pricing'
        tables_raw = {
            'ondemand': {'scheduling': 'ondemand',
                         'uri': '/pricing-on-demand-instances.json',
                         'os': 'linux', 'use': 'light'},
            'reserved-light-linux': {
                'scheduling': 'ondemand',
                'uri': 'ri-light-linux.json', 'os': 'linux', 'use': 'light'},
            'reserved-light-mswin': {
                'scheduling': 'ondemand',
                'uri': 'ri-light-mswin.json', 'os': 'mswin', 'use': 'light'},
            'reserved-medium-linux': {
                'scheduling': 'ondemand',
                'uri': 'ri-medium-linux.json', 'os': 'linux', 'use': 'medium'},
            'reserved-medium-mswin': {
                'scheduling': 'ondemand',
                'uri': 'ri-medium-mswin.json', 'os': 'mswin', 'use': 'medium'},
            'reserved-heavy-linux': {
                'scheduling': 'ondemand',
                'uri': 'ri-heavy-linux.json', 'os': 'linux', 'use': 'heavy'},
            'reserved-heavy-mswin': {
                'scheduling': 'ondemand',
                'uri': 'ri-heavy-mswin.json', 'os': 'mswin', 'use': 'heavy'},
        }
        for key in tables_raw:
            # expand to full URIs
            tables_raw[key]['uri'] = (
                '%s/%s' % (uri_base, tables_raw[key]['uri']))
            # fetch the data from Amazon
            link = urllib2.urlopen(tables_raw[key]['uri'])
            # adds keys: 'vers' 'config'
            tables_raw[key].update(json.loads(link.read()))
            link.close()
        # canonicalize the types - the default is pretty annoying.
        self.currencies = set()
        self.regions = set()
        self.types = set()
        self.intervals = set()
        self.oses = set()
        self.sizes = set()
        self.schedulings = set()
        self.uses = set()
        self.footnotes = {}
        self.typesizes = {}  # self.typesizes['m1.small'] = [<region>...]
        self.table = {}
        # grovel through Amazon's tables_raw and convert to something orderly:
        for key in tables_raw:
            scheduling = tables_raw[key]['scheduling']
            self.schedulings.update([scheduling])
            use = tables_raw[key]['use']
            self.uses.update([use])
            os = tables_raw[key]['os']
            self.oses.update([os])
            config_data = tables_raw[key]['config']
            self.currencies.update(config_data['currencies'])
            for region_data in config_data['regions']:
                region = self.instance_region_from_raw(region_data['region'])
                self.regions.update([region])
                if 'footnotes' in region_data:
                    self.footnotes[region] = region_data['footnotes']
                for instance_type_data in region_data['instanceTypes']:
                    instance_type = self.instance_types_from_raw(
                        instance_type_data['type'])
                    self.types.update([instance_type])
                    for size_data in instance_type_data['sizes']:
                        size = self.instance_size_from_raw(size_data['size'])
                        typesize = '%s.%s' % (instance_type, size)
                        if typesize not in self.typesizes:
                            self.typesizes[typesize] = set()
                        self.typesizes[typesize].update([region])
                        self.sizes.update([size])
                        for size_values in size_data['valueColumns']:
                            interval = size_values['name']
                            self.intervals.update([interval])
                            for currency in size_values['prices']:
                                cost = size_values['prices'][currency]
                                self.table_add_row(region, os, instance_type,
                                                   size, use, scheduling,
                                                   currency, cost)

    def table_add_row(self, region, os, instance_type, size, use, scheduling,
                      currency, cost):
        if cost == 'N/A*':
            return
        table = self.table
        for key in [region, os, instance_type, size, use, scheduling]:
            if key not in table:
                table[key] = {}
            table = table[key]
        table[currency] = cost

    def instance_region_from_raw(self, raw_region):
        '''Map a raw EC2 pricing region name to the corresponding
        canonical region name.
        '''
        regions = {
            'apac-tokyo' : 'ap-northeast-1',
            'apac-sin'   : 'ap-southeast-1',
            'eu-ireland' : 'eu-west-1',
            'sa-east-1'  : 'sa-east-1',
            'us-east'    : 'us-east-1',
            'us-west'    : 'us-west-1',
            'us-west-2'  : 'us-west-2',
        }
        return regions[raw_region] if raw_region in regions else raw_region

    def instance_types_from_raw(self, raw_type):
        types = {
            # ondemand                  reserved
            'stdODI'          : 'm1',   'stdResI'         : 'm1',
            'uODI'            : 't1',   'uResI'           : 't1',
            'hiMemODI'        : 'm2',   'hiMemResI'       : 'm2',
            'hiCPUODI'        : 'c1',   'hiCPUResI'       : 'c1',
            'clusterComputeI' : 'cc1',  'clusterCompResI' : 'cc1',
            'clusterGPUI'     : 'cc2',  'clusterGPUResI'  : 'cc2',
            'hiIoODI'         : 'hi1',  'hiIoResI'        : 'hi1'
        }
        return types[raw_type]

    def instance_size_from_raw(self, raw_size):
        sizes = {
            'u'         : 'micro',
            'sm'        : 'small',
            'med'       : 'medium',
            'lg'        : 'large',
            'xl'        : 'xlarge',
            'xxl'       : '2xlarge',
            'xxxxl'     : '4xlarge',
            'xxxxxxxxl' : '8xlarge'
        }
        return sizes[raw_size]

    def cost(self, region, os, instance_type, size, use, scheduling,
             currency):
        try:
            return self.table[region][os][instance_type][
                size][use][scheduling][currency]
        except KeyError:
            return None
Here is another unsanctioned "api" which covers reserved instances: http://aws.amazon.com/ec2/pricing/pricing-reserved-instances.json
There is no official pricing API, but there are the very nice price rippers mentioned above.
In addition to the EC2 price ripper, I'd like to share my RDS and ElastiCache price rippers:
https://github.com/evgeny-gridasov/rdsinstancespricing
https://github.com/evgeny-gridasov/elasticachepricing
There is a reply to a similar question which lists all the .js files containing the prices; these are essentially JSON files (with only a callback(...); statement to remove).
Here is an example for Linux On-Demand prices: http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js
(Get the full list directly on that reply)
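Turning one of those .js files into plain JSON is mostly a matter of stripping the callback wrapper. A sketch, assuming the payload has the shape callback({...}); (the real files may also use unquoted keys, in which case a more lenient parser would be needed):

```python
import json
import re

def jsonp_to_obj(text):
    """Strip a callback(...) wrapper and parse the remaining JSON."""
    match = re.search(r'callback\((.*)\)\s*;?\s*$', text, re.DOTALL)
    if match is None:
        raise ValueError('no callback(...) wrapper found')
    return json.loads(match.group(1))

# A tiny sample in the shape of the pricing .js files described above.
sample = 'callback({"vers": 0.01, "config": {"rate": "perhr"}});'
data = jsonp_to_obj(sample)
print(data['config']['rate'])  # perhr
```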