CloudWatch: include all the metrics in a namespace? - amazon-cloudwatch

I have a custom namespace in CloudWatch that contains a list of metrics.
Each metric is named after the IP of a server connecting to mine, and the set of IPs changes over time: new ones appear, older ones stop connecting for a while, and so on.
What I'm trying to build is a graph that shows all the metrics inside that namespace, automatically including newly arriving ones and plotting 0 for the ones that have no data in a given time frame.
(For instance, if IP 1.2.3.4 connects at 9:01 and 9:02 but not at 9:03 or 9:04, then reconnects at 9:05, the graph should show 0 at 9:03 and 9:04 for that IP. If a new IP arrives at 9:05, it should be added to the graph automatically.)
Is it possible to do that? How can I do it? I haven't found a way in CloudWatch so far.

The answer depends on how many metrics you have in the namespace.
A dashboard widget can show a maximum of 500 metrics (docs). If you have fewer than 500 metrics in the namespace, you can simply use the metric math SEARCH and FILL functions like this:
"FILL(SEARCH('{YOUR_NAMESPACE}', 'Average', 300), 0)"
SEARCH fetches the metrics and FILL defaults the values to 0 for intervals that have no datapoints. Also note that if a metric hasn't received new datapoints in over two weeks, it won't be returned by the search.
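If you want to sanity-check the expression outside the console, the same search expression also works with the GetMetricData API. Here is a minimal sketch using boto3; the namespace is a placeholder and the one-hour window is an arbitrary assumption:

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder namespace - substitute your own.
EXPRESSION = "FILL(SEARCH('{YOUR_NAMESPACE}', 'Average', 300), 0)"

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "allIps",  # IDs must start with a lowercase letter
            "Expression": EXPRESSION,
            "ReturnData": True,
        }
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
)

# One result per metric found by SEARCH; gaps are filled with 0 by FILL.
for result in response["MetricDataResults"]:
    print(result["Label"], list(zip(result["Timestamps"], result["Values"])))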
If you have between 500 and 2500 metrics in the namespace (the limit is 500 metrics per widget and 2500 metrics per dashboard), you could potentially split the IP ranges across multiple graphs with SEARCH expressions like this:
"FILL(SEARCH('{YOUR_NAMESPACE} MetricName="1.2', 'Average', 300), 0)"
This will include all metrics for IPs starting with 1.2 in one graph. You would then need to create similar graphs for different ranges.
You can still use CloudWatch to graph more than 2500 metrics on a graph/dashboard, but then you need to write a custom widget: a Lambda function that fetches the datapoints from every metric in the namespace and renders the graph using something like Matplotlib.
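As a rough illustration of that last approach, here is a minimal sketch of such a custom-widget Lambda handler in Python. The namespace name and the one-hour window are placeholder assumptions, it assumes Matplotlib is packaged with the function (e.g. as a layer), and it ignores the batching beyond 500 metrics per GetMetricData call that a real implementation would have to handle:

import base64
import datetime
import io

import boto3
import matplotlib
matplotlib.use("Agg")  # headless rendering inside Lambda
import matplotlib.pyplot as plt

NAMESPACE = "YOUR_NAMESPACE"  # placeholder - substitute your own
cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)

    # Enumerate every metric in the namespace (paginated).
    metrics = []
    for page in cloudwatch.get_paginator("list_metrics").paginate(Namespace=NAMESPACE):
        metrics.extend(page["Metrics"])

    # One query per metric; a real version would batch past 500 per call.
    queries = [
        {"Id": f"m{i}",
         "MetricStat": {"Metric": metric, "Period": 300, "Stat": "Average"}}
        for i, metric in enumerate(metrics[:500])
    ]
    data = cloudwatch.get_metric_data(
        MetricDataQueries=queries,
        StartTime=start,
        EndTime=end,
        ScanBy="TimestampAscending",
    )

    # Render every series on a single figure.
    fig, ax = plt.subplots(figsize=(10, 4))
    for result in data["MetricDataResults"]:
        ax.plot(result["Timestamps"], result["Values"])
    ax.set_title(NAMESPACE)

    # Custom widgets may return HTML; embed the chart as a base64 PNG.
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    image = base64.b64encode(buf.getvalue()).decode("ascii")
    return '<img src="data:image/png;base64,' + image + '"/>'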

Related

CloudWatch dashboard Insights graphs - can I set bin size dynamically?

I'm using dashboards to monitor various output stats on AWS.
Let's say it looks something like this:
stats avg(myfield1), min(myfield2), max(myfield3) by bin(1m)
This works fine; however, I am using the default bin size of 1 minute, so the data retention period is only 3 days. If I want to look at a week or a month, I have to use a separate widget with a larger bin size. I still want the 1-minute resolution for shorter time periods, and I'd rather not double up the graphs, as the dashboard is already very busy.
Obviously, all the built-in metrics graphs adjust the bin size they query dynamically as the date range being viewed changes.
Is it possible to do this within a CloudWatch Insights query, and if so, what is the syntax?

OroCommerce: How to add new product unit?

For some strange reason, the otherwise highly configurable OroCommerce has no way to manage product units, and the only few words the docs offer say that it's possible to add units via the web API. I need to add a "day" unit, ideally in code via a migration. Is it enough to make a migration like
INSERT INTO `oro_product_unit` (`code`, `default_precision`) VALUES ('day', '0');
and add translation messages like
oro.product_unit.day.label.full: day
oro.product.product_unit.day.label.full: day
or do I need to do something else?
Product units may be loaded into the database using data fixtures, like this one that loads the default units:
https://github.com/oroinc/orocommerce/blob/4.2.1/src/Oro/Bundle/ProductBundle/Migrations/Data/ORM/LoadProductUnitData.php#L47-L52
In addition, you have to provide translations for the new unit, but there are more messages than the ones you specified in the question:
https://github.com/oroinc/orocommerce/blob/ad94fe9bd63db28eae7d4a73743a4cada4f49080/src/Oro/Bundle/ProductBundle/Resources/translations/jsmessages.en.yml#L26-L35
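To illustrate the shape of those messages, here is a sketch of the translation entries for a "day" unit in the flat key form used above. Only the label.full keys come from the question; the other key names are assumptions by analogy with existing units, so the real key set should be copied from the linked jsmessages.en.yml:

# Sketch only - copy the full key set from the linked jsmessages.en.yml;
# keys other than label.full are assumed by analogy with existing units.
oro.product_unit.day.label.full: day
oro.product_unit.day.label.full_plural: days
oro.product_unit.day.label.short: d
oro.product_unit.day.label.short_plural: d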

How to see absolute values instead of percentages on Events graphs?

I use custom events to track statistics about how often deprecated modules are still used, and I want to complete the migration from a deprecated module to a new one once the number of usages drops below a "waterline".
Tracking this by clicking on a date on a graph and checking the number of events for that date is not very convenient. Can I somehow switch the values shown on a graph to absolute values?
Mike from Fabric here. For the graphs, we will show either the percentage if the custom attribute is a string, or the 25th, median, and 75th percentiles if the custom attribute is a number. However, the top 10 custom attribute counts are shown below the graph.

Cross-project time-record filtering using Active Collab 5 API

For the Active Collab team watching this tag.
I am working on a project that uses the new Active Collab 5 API, and I am having performance issues trying to run reports.
For example, I am trying to build reports for a date range, and currently, to achieve that, I first need to run a call to get all projects,
followed by a loop with this call:
API::get('/projects/'.$id.'/time-records/filtered-by-date?' . http_build_query(['from' => $from, 'to' => $to]))
However, we have a large number of projects; in addition to the many active projects, we also need to include archived projects to get correct billing reports.
I currently work with around 1500 projects in AC,
so I need to make 1500 API calls, which takes a huge performance hit. Is there a way you could build something that would work along these lines:
API::get(/timerecords/filter-by-date);
with a parameter for the project state (all, active, completed)?
Please let me know what you can do, or whether I have missed something in your documentation that already does this.
Thanks
What you need here is not a request that goes through all projects one by one, but a request that is tailored for cross-project reporting. Active Collab 5 has just the right API endpoint for that: /reports/run.
As an example, you can use this command to query time records and expenses from all active projects that were tracked today:
curl -H "X-Angie-AuthApiToken: YOUR-API-TOKEN" "http://your.activecollab.com/api/v1/reports/run?type=TrackingFilter&project_filter=active&tracked_on_filter=today"
Notice the route (/reports/run) and query arguments:
type - specify type of the report, in this case time and expense tracking report,
project_filter - specify project filter. Apart from active, other useful values of this filter are completed (for completed projects), selected_1,2,3,4 (selected projects with a list of project ID-s), client_1,2,3,4 (projects for clients with the given ID-s), category_1,2,3,4 (projects in categories with the given ID-s),
tracked_on_filter - filter by the date when records were tracked. To target a particular date use selected_date_YYYY-MM-DD and to target a date range use selected_range_YYYY-MM-DD:YYYY-MM-DD.
tracked_by_filter - filter by who tracked the time. It can have various values, like anybody, logged_user, selected_1,2,3.
To list only time records, set type_filter to time (or to expenses if you want only expenses to be listed).
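For completeness, the same report call can be made from code. Below is a minimal sketch using Python's requests library; the hostname, token, and date range are placeholders, and the query parameters are exactly the ones described above:

import requests

BASE_URL = "http://your.activecollab.com/api/v1"  # placeholder host
HEADERS = {"X-Angie-AuthApiToken": "YOUR-API-TOKEN"}

# Time records across all active projects for a sample date range.
params = {
    "type": "TrackingFilter",
    "project_filter": "active",
    "tracked_on_filter": "selected_range_2015-01-01:2015-01-31",
    "type_filter": "time",  # use "expenses" to list only expenses
}

response = requests.get(BASE_URL + "/reports/run", headers=HEADERS, params=params)
response.raise_for_status()
print(response.json())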

Build a Kibana Histogram with buckets dynamically created by ElasticSearch terms aggregation

I want to be able to combine the functionality of the Kibana Terms graph (creating buckets based on the unique values of a particular attribute) and the Histogram graph (separating data into buckets based on queries and then illustrating the data over time).
Overall, I want to create a Histogram, but I only want to build it from the results of one query, not multiple queries as is done in the Kibana demo app. Instead, I want each bucket to be created dynamically per unique value of my particular field. For example, consider the following data returned by my query:
{"myValueType": "New York"}
{"myValueType": "New York"}
{"myValueType": "New York"}
{"myValueType": "San Francisco"}
{"myValueType": "San Francisco"}
Also assume that each record has a timestamp field for separating histogram data by date. For that particular date, I want the data to be reported as a count of 3 in the New York bucket and a count of 2 in the San Francisco bucket. However, I am only able to show a count of 5 for my one linked query. When I configure the Histogram, I can specify a field to use for the timestamp, but not a field to create buckets from. I could have specified a field to compute a total/min/max/mean, but that field would have to be numeric, so that is not the solution either.
If I were to use a Terms graph to create a pie or bar chart, I would indeed be able to separate my data into buckets based on the unique values of my specified field (in this case, "myValueType"), but this would total up the data for all time, not split it up by timestamp. Although this is good information to know, it is not ideal because I wouldn't be able to detect trends in my data.
I am looking for a solution that will do one of the following:
Let me dynamically create queries in my Kibana dashboard to create "buckets" in a Histogram
Allow me to run an Elasticsearch terms aggregation to split up my data into buckets based on "myValueType" and integrate these results into my Histogram
Customize the JSON of my dashboard, but this doesn't look possible to me
Create my own custom panel, but this is not desirable
Link a Kibana "TopN" query in Kibana. Actually, this has proven to be a workaround for my problem, because the TopN query dynamically creates one query per unique value/term of the specified field. However, the problem is that I can only assign one colour to this TopN query, and each unique term is placed in a bucket that uses a different shade of that colour. Ideally, every bucket in my Histogram would have a completely different colour. Imagine how difficult it would be to distinguish unique terms as the number of buckets grows.
If all else fails, I can make one query per unique value of my search field. This would give me one unique colour per bucket, but as the number of unique terms in the "myValueType" field changes, I would need to keep adding/removing queries from Kibana, which can get quite messy.
I'm sure there is something that I am missing here. Please help me out. Many thanks.
A highly related Stack Overflow question: Is it Possible to Use Histogram Facet or Its Curl Response in Kibana
This would be a great feature. It looks like it will be supported in Kibana 4, but there doesn't seem to be much more information out there than that.
For reference: https://github.com/elasticsearch/kibana/issues/1249
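Under the hood, what Kibana 4 builds for this is a date_histogram aggregation with a terms sub-aggregation, which you can also run directly against Elasticsearch. Here is a sketch using the official Python client; the host, index name, and interval are placeholders, and the "timestamp"/"myValueType" field names are taken from the question:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host

body = {
    "size": 0,  # aggregation results only, no individual hits
    "aggs": {
        "per_interval": {
            "date_histogram": {"field": "timestamp", "interval": "1h"},
            "aggs": {
                "per_value": {"terms": {"field": "myValueType"}}
            },
        }
    },
}

result = es.search(index="my-index", body=body)  # placeholder index name

# One row per (time bucket, term) pair, e.g. "... New York 3".
for bucket in result["aggregations"]["per_interval"]["buckets"]:
    for term in bucket["per_value"]["buckets"]:
        print(bucket["key_as_string"], term["key"], term["doc_count"])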
Maybe a little late, but it is actually possible in the newest beta release:
Kibana 4 Beta 3 installation download