NS-3: how to get the following attributes of a node

I am new to NS-3. I am trying to read the following parameters from a node; can someone please help me figure out how to get them?
1. Computing Power
2. Number of Transactions
3. Energy Consumed for One Transaction
4. Node Latency
5. Bandwidth
6. Residual Energy

All of the available classes and their methods are laid out in the NS-3 documentation.
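Note that most of these are not attributes NS-3 exposes directly on a Node object: computing power and the number of transactions are application-level notions you would have to model yourself, bandwidth and latency are attributes of devices and channels (for example the DataRate attribute of a point-to-point device and the Delay attribute of its channel), and residual energy comes from the energy framework. As a rough sketch of the residual-energy part, assuming the Python bindings with the energy module available (the C++ calls are analogous; the initial energy value is just an illustration):

    import ns.core
    import ns.network
    import ns.energy

    nodes = ns.network.NodeContainer()
    nodes.Create(2)

    # Attach a BasicEnergySource to every node (100 J is an arbitrary budget).
    helper = ns.energy.BasicEnergySourceHelper()
    helper.Set("BasicEnergySourceInitialEnergyJ", ns.core.DoubleValue(100.0))
    sources = helper.Install(nodes)

    # Read back the residual energy of each node's source.
    for i in range(sources.GetN()):
        print("node", i, "residual energy (J):", sources.Get(i).GetRemainingEnergy())

To see the energy actually drain, you would also install a device energy model (for example the Wi-Fi radio energy model) on the nodes' devices.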


How do I find all AWS metrics that use high resolution?

I run into this error in AWS CloudWatch, which does not make sense, as I thought we had no high-resolution metrics (high resolution only records for 3 hours). We typically just do 1-minute interval reporting. How do I find all metrics with high resolution? That way I am hoping I can change them to not be high resolution.
I searched the documentation extensively and looked into the Micrometer code, which seems to default to highResolution = false and a step of 2 minutes (we are using Micrometer). I am trying to figure out the next steps for working out why AWS thinks this data is high-resolution data.
I was also thinking, "OK, perhaps it rolls up to 1-minute data and then 5-minute data," so in my query I tried both 1-minute and 5-minute periods, but I still get the error that only 3 hours of data are available.
The error is thrown because you're using the query syntax (SELECT ...), which only supports the latest 3 hours of data. The feature is called Metrics Insights; you can see its limits here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-metrics-insights-limits.html
The error is not related to high-resolution metrics. Even if your metrics were high resolution, setting the period to 5 minutes would only retrieve datapoints aggregated to 5-minute granularity.
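If you need data older than the last 3 hours, query the metric with a MetricStat-based GetMetricData call instead of a Metrics Insights SELECT expression; those queries are not subject to the 3-hour limit. A rough boto3 sketch (the namespace, metric name, and dimension below are made up for illustration):

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    response = cloudwatch.get_metric_data(
        MetricDataQueries=[
            {
                "Id": "m1",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "MyApp",                   # hypothetical namespace
                        "MetricName": "http.server.requests",   # hypothetical metric
                        "Dimensions": [{"Name": "env", "Value": "prod"}],
                    },
                    "Period": 300,      # 5-minute aggregation
                    "Stat": "Average",
                },
            }
        ],
        StartTime=datetime.utcnow() - timedelta(days=7),  # well beyond 3 hours
        EndTime=datetime.utcnow(),
    )
    print(response["MetricDataResults"][0]["Values"][:5])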

OptaPlanner example for Capacitated Vehicle Routing with Time Windows?

I am new to OptaPlanner.
I want to build a solution where I have a number of locations to deliver items to from one single location, and I also want to use open map distance data for calculating the distances.
Initially I used jsprit, but for more than 300 deliveries it takes more than 8 minutes with 20 threads. That's why I am trying OptaPlanner.
I want to plan 1000 deliveries within 1 minute.
Does anyone know of any reference code or reference material I can start from?
Thanks in advance :)
CVRPTW is a standard example: just open the examples app, choose Vehicle Routing, and import one of the Belgium datasets with time windows. The code is in the distribution zip too.
To scale to 1k deliveries, and especially beyond, you'll want to use "nearby selection" (see the reference manual). It isn't on by default, but it makes a huge difference.

DataStax Enterprise: node vs. instance, correct AMI image, why do I need storage?

We are currently evaluating DataStax Enterprise as our provider of Cassandra and Spark, and we are considering deploying a DataStax cluster on AWS.
I have the following questions:
1) In step 1 of the DataStax on EC2 installation manual, I need to choose the correct AMI image. Currently there are 7 of them. Which is the correct one
(DataStax Auto-Clustering AMI 2.5.1-pv, DataStax Auto-Clustering AMI 2.6.3-1204-pv, DataStax Auto-Clustering AMI 2.6.3-1404-pv, ...)?
2) The moment we launch the cluster, do we pay only for the AWS instances, or also for a DataStax Enterprise licensing fee? I know there is a 30-day enterprise free trial, but nowhere in the installation process did I see a step where we could ask for the free trial. Is there some online calculator we can use to estimate the cost of a cluster on a monthly basis (based on the instance types we create)?
3) In step 3 of the installation process, Configure Instance Details, I am confused by the terms instance and node. What is the difference between them? What happens if I choose:
a) 1 instance, --totalnodes 3 (in the user data)
b) 3 instances, --totalnodes 3
c) 1 instance, --totalnodes 0 --analyticsnodes 3
d) 3 instances, --totalnodes 0 --analyticsnodes 3
4) We are interested in the use case where each of our 3 Cassandra nodes also runs Spark. Is the proper user data configuration:
--totalnodes 0 --analyticsnodes 3
Will we then have 0 nodes with only Cassandra and 3 nodes that run both Cassandra and Spark? What number of instances should we specify in that case?
5) In step 4 of the installation process, Add Storage, we are asked to add storage to the instance. Why do we need this storage? When choosing an instance type, for example m3.large, I already know that my instance has 32 GB of SSD storage, so what is this for?
Thank you for your answers. If there is some mailing list to which I can send these questions, I would appreciate it.
Use whichever AMI has the highest version number and the virtualization type you prefer (-pv or -hvm): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html
You only pay for EC2 usage. DSE is free for testing and development. You do not need to request a trial license. If you want a production license or if you want to become a startup member, contact DataStax.
The AMI installs one "DSE node" per "EC2 instance", so if you want a six-node cluster you need to specify 6 instances. To use your examples:
a) 1 instance, --totalnodes 3 (in the user data)
This won't work
b) 3 instances, --totalnodes 3
This will give you a three-node Cassandra cluster (running on three instances). You have not specified search or analytics nodes, so by default you will just get Cassandra nodes.
c) 1 instance, --totalnodes 0 --analyticsnodes 3
Won't work. The total nodes should equal the number of instances, and the number of analytics nodes can't be greater than the total nodes.
d) 3 instances, --totalnodes 0 --analyticsnodes 3
Won't work. The number of analytics nodes can't be greater than the number of total nodes.
If you want a three-node cluster and you want all of them running both Cassandra and Spark, use this:
3 instances, --totalnodes 3 --analyticsnodes 3
Adding storage is optional, and it is only possible with certain instance types. You should notice with m3.large that there is a default configuration and you can't actually make any changes to it.
Hope this helps!

Maximum transfer rate for an isochronous 128-byte endpoint at full speed

In the USB specification (Table 5-4) it is stated that, given an isochronous endpoint with a maxPacketSize of 128 bytes, as many as 10 transactions can be done per frame. This gives 128 * 10 * 1000 = 1.28 MB/s of theoretical bandwidth.
At the same time it states:
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Isn't that contradictory with the aforementioned table?
I've done some tests and found that only 1 transaction is done per frame on my device. I also found on several websites that just 1 transaction can be done per frame (ms). Of course I assume the spec is the correct reference, so my question is: what could be the cause of receiving only 1 packet per frame? Am I misunderstanding the spec, and are what I think are transactions actually something else?
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Assuming USB full speed, you could still have 10 isochronous 128-byte transactions per frame by using 10 different endpoints.
Table 5-4 seems to leave out the constraints of chapter 5.6.4, "Isochronous Transfer Bus Access Constraints": the 90% rule reduces the maximum number of 128-byte isochronous transactions to nine.
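For intuition, here is the back-of-the-envelope arithmetic behind that figure of nine, assuming the usual full-speed numbers (12 Mbit/s bus, 1 ms frames) and the 9 bytes of per-transaction protocol overhead that Table 5-4 uses:

    # Rough check of the 90% rule for full-speed isochronous transactions.
    FRAME_BYTES = 12_000_000 // 8 // 1000       # 1500 bytes available per 1 ms frame
    PERIODIC_BUDGET = FRAME_BYTES * 90 // 100   # 90% rule for periodic transfers -> 1350
    BYTES_PER_TRANSACTION = 128 + 9             # 128-byte payload + protocol overhead

    print(PERIODIC_BUDGET // BYTES_PER_TRANSACTION)  # -> 9 transactions per frame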

Creating a workable Redis store with several filters

I am working on a system to display information about real estate. It runs in Angular, with the data stored as a JSON file on the server, which is updated once a day.
I have filters on the number of bedrooms, bathrooms, and price, plus a free-text field for the address. It's all very snappy, but the problem is the load time of the app. This is why I am looking at Redis. The trouble is, I just can't get my head around how to fetch data with several different filters applied.
Let's say I have some data like this (leaving out lots of fields for simplicity):
id  beds  price
0   3     270000
1   2     130000
2   4     420000
etc...
I am thinking I could set up three sets, one to hold the whole dataset, one to create an index on bedrooms and another for price:
beds  id
2     1
3     0
4     2
and the same for price:
price   id
130000  1
270000  0
420000  2
Then I was thinking I could use SINTER to return the overlapping sets.
Let's say I am looking for a house with more than 2 bedrooms that costs less than 300000.
From the bedrooms set I get IDs 0 and 2 for beds > 2.
From the prices set I get IDs 0 and 1 for price < 300000.
So the common ID is 0, which I would then look up in the main dataset.
It all sounds good in theory, but being a Redis newbie, I have no clue how to go about achieving it!
Any advice would be gratefully received!
You're on the right track; sets + sorted sets is the right answer.
Two sources for all of the information that you could ever want:
Chapter 7 of my book, Redis in Action - http://bitly.com/redis-in-action
My Python/Redis object mapper - https://github.com/josiahcarlson/rom (it uses ideas directly from chapter 7 of my book to implement sql-like indices)
Both of those resources use Python as the programming language, though chapter 7 has been translated into Java: https://github.com/josiahcarlson/redis-in-action/ (go to the java path to see the code).
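To make the filter flow concrete, here is a minimal redis-py sketch of the approach described in the question (the key names idx:beds, idx:price, and listing:<id> are just made up for illustration): sorted sets hold one index per filter field, range queries pull out the matching IDs, and the intersection gives the final result.

    import redis

    r = redis.Redis(decode_responses=True)

    # Daily load: one hash per property, plus one sorted-set index per filter field.
    listings = [
        {"id": "0", "beds": 3, "price": 270000},
        {"id": "1", "beds": 2, "price": 130000},
        {"id": "2", "beds": 4, "price": 420000},
    ]
    for item in listings:
        r.hset(f"listing:{item['id']}", mapping=item)       # full record
        r.zadd("idx:beds", {item["id"]: item["beds"]})      # score = bedrooms
        r.zadd("idx:price", {item["id"]: item["price"]})    # score = price

    # Query: more than 2 bedrooms AND price under 300000.
    beds_ok = set(r.zrangebyscore("idx:beds", "(2", "+inf"))
    price_ok = set(r.zrangebyscore("idx:price", "-inf", "(300000"))

    for listing_id in beds_ok & price_ok:                   # just id 0 in this example
        print(r.hgetall(f"listing:{listing_id}"))

If the candidate sets get large, you can push the intersection into Redis itself (copy the matching ranges into temporary sets and intersect them server-side), which is essentially the approach chapter 7 develops.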
... That said, a normal relational database (especially one with built-in Geo handling like Postgres) should handle this data with ease. Have you considered a relational database?