The SNS dashboard shows all Topics and Subscriptions. I want to see that data from CloudWatch, but the available CloudWatch metrics do not seem to provide it. Is there any way to do this?
I am trying to graph IncomingLogEvents, IncomingBytes, PutLogEvents, or any other metric that would help me understand which log streams are sending the most logs to a specific log group in CloudWatch.
Did anyone run into this? I was able to graph the IncomingBytes metric for log groups, but not for the log streams within those log groups.
I have several containers in my environment, and each container sends its logs through a separate log stream within the same log group.
Costs suddenly started rising due to errors in the containers. I was able to identify which log group is causing it, but I cannot find a way to identify which log stream is.
Docker does not help either. Of course I can check the container logs, but I want to be able to alert on an increase, i.e. detect when a log stream is sending more logs than normal, since that would alert me to cost increases as well as to errors.
I know I can monitor log errors with other log-centralization tools or even with CloudWatch, but I need CloudWatch to tell me which container is sending the highest volume of logs.
There used to be a metric for this that was deprecated, and I cannot find any documentation about the metric or solution that replaced it.
That metric was "storedBytes", which since its deprecation always reports 0 at the log-stream level.
Thank you for any help you can provide, and I hope this question helps others achieve their goals too.
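One workaround, assuming the log group can be queried with CloudWatch Logs Insights: group events by `@logStream` and approximate each stream's volume with `strlen(@message)`. A query fragment along these lines (the 20-stream limit is just an illustrative choice) surfaces the noisiest streams:

```
stats sum(strlen(@message)) as approxBytes, count(*) as events by @logStream
| sort approxBytes desc
| limit 20
```

This measures message length rather than true stored bytes, but it is usually enough to tell which container's stream dominates.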
You can build a CloudWatch dashboard for this, but only for log groups, not for individual log streams.
I'm trying to send data generated by a Google Home mesh network to a Splunk instance. I'd especially like to capture which devices are connected to which points throughout the day. This information is available in the app, but there does not seem to be a way to stream it to a centralized logging platform. Is there a way I'm not seeing?
There are two methods of ingesting Google Cloud data supported by Splunk:
--Push-based method: data is sent to a Splunk HTTP Event Collector (HEC) through a Pub/Sub to Splunk Dataflow job.
--Pull-based method: data is fetched from the Google Cloud APIs through the Splunk Add-on for Google Cloud Platform.
It is generally recommended that you use the push-based method to ingest Google Cloud data into Splunk. The pull-based method is recommended only in certain circumstances:
--Your Splunk deployment does not offer a Splunk HEC endpoint.
--Your log volume is low.
--You want to pull Cloud Monitoring metrics, Cloud Storage objects, or low-volume logs.
--You are already managing one or more Splunk heavy forwarders or are using a hosted Inputs Data Manager for Splunk Cloud.
More information on setting this up and working through this problem can be found at the following link:
https://cloud.google.com/architecture/exporting-stackdriver-logging-for-splunk
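To make the push-based path concrete, here is a minimal stdlib-only sketch of its final hop: wrapping an event in the envelope Splunk HEC expects and POSTing it to the collector endpoint. In practice the Pub/Sub-to-Splunk Dataflow template does this for you; the URL, token, `source`, and `sourcetype` values below are illustrative assumptions, not required names.

```python
import json
import urllib.request

def build_hec_payload(event: dict, source: str = "gcp-pubsub",
                      sourcetype: str = "google:cloud:log") -> dict:
    """Wrap one event in the envelope Splunk HEC expects.

    The source/sourcetype values are placeholders for illustration.
    """
    return {"event": event, "source": source, "sourcetype": sourcetype}

def send_to_hec(payload: dict, hec_url: str, token: str) -> None:
    """POST one payload to a (hypothetical) Splunk HEC endpoint."""
    req = urllib.request.Request(
        hec_url,  # e.g. https://splunk.example.com:8088/services/collector/event
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises urllib.error.HTTPError on failure

payload = build_hec_payload({"logName": "projects/demo/logs/syslog"})
```

The Dataflow job batches and retries on top of this, which is one reason the push-based method scales better than polling the APIs.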
I'm interested in using AWS CloudWatch Logs Insights to create a dashboard for my application. I have a Lambda function whose invocation count I have measured, and I would like to graph it and include it on the dashboard. My data looks like this:
I've added the data above to my dashboard, but I want it to appear as a graph visualization instead. I've googled, read the AWS docs, and attempted to recreate the data sets using dashboard metrics, but I've been unable to make this work so far. Does anyone know of a way to do this?
According to an AWS Support rep I spoke with, there is currently no way to do this.
According to this blog post, it is possible to add CloudWatch Logs Insights queries to a dashboard. However, you can only visualize them as line, stacked-area, pie, and bar charts.
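For reference, an Insights query can be embedded in a dashboard body as a widget of type "log". A sketch of such a dashboard body follows; the log group name, region, and 5-minute binning are assumptions to adapt:

```json
{
  "widgets": [
    {
      "type": "log",
      "x": 0, "y": 0, "width": 12, "height": 6,
      "properties": {
        "region": "us-east-1",
        "title": "Lambda invocations",
        "query": "SOURCE '/aws/lambda/my-function' | stats count(*) by bin(5m)",
        "view": "timeSeries"
      }
    }
  ]
}
```

The "view" property selects among the supported visualizations (e.g. "timeSeries", "bar", "pie", "table").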
I'm trying to build a data-collection web endpoint. The use case is similar to the Google Analytics collect API: I want to add this endpoint (a GET method) to all pages on the website and, on page load, collect page info through it.
I'm thinking of doing this with Google Cloud services such as Cloud Endpoints and BigQuery (for storing the data). I don't want to host it on any dedicated servers; otherwise I will end up doing a lot of work managing and monitoring the service.
Please suggest how I can achieve this with Google Cloud services, or point me in the right direction if my idea is wrong.
I suggest focusing first on deciding where you want your code to run. There are several GCP options that don't require dedicated servers:
Google App Engine
Cloud Functions/Firebase Functions
Cloud Run (new!)
Look here to see which support Cloud Endpoints.
All of these products can support running code that takes the data from the request and sends it to the BigQuery API.
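As a minimal sketch of that idea: a handler on any of these platforms would parse the GA-style collect URL and turn it into a row for BigQuery. The parameter names (`dp`, `dt`, `cid`) mirror Analytics conventions, but the row schema here is invented for illustration; the BigQuery insert itself is shown only as a comment since it needs credentials.

```python
from urllib.parse import parse_qs, urlparse
from datetime import datetime, timezone

def row_from_collect_url(url: str) -> dict:
    """Turn a GA-style collect URL into a row dict ready for BigQuery's
    insert_rows_json. Field names are illustrative, not a required schema."""
    params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    return {
        "page": params.get("dp"),       # document path
        "title": params.get("dt"),      # document title
        "client_id": params.get("cid"),
        "ts": datetime.now(timezone.utc).isoformat(),
    }

# Inside a Cloud Function / Cloud Run handler you would then do roughly:
#   from google.cloud import bigquery
#   bigquery.Client().insert_rows_json("mydataset.pageviews", [row])

row = row_from_collect_url("https://example.com/collect?dp=%2Fhome&dt=Home&cid=42")
```

Streaming inserts keep the endpoint stateless, so any of the serverless options above can scale it without you managing servers.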
There are various ways of achieving what you want. David's answer is absolutely valid, but I would like to introduce Stackdriver Custom Metrics to the discussion.
Custom metrics are similar to regular Stackdriver Monitoring metrics, but you create your own time series (Stackdriver lingo, described here) to keep track of whatever you want, and clients can send in their data through an API.
You could achieve the same thing with a compute solution (Google Cloud Functions, for example), a database (Google Cloud Bigtable, for example), and your own logic, but Custom Metrics is an already-built solution that includes dashboards and alerting policies while being more managed.
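To illustrate what writing such a custom metric looks like, here is a sketch that builds the JSON body for the Monitoring API's `projects.timeSeries.create` method. The metric type under `custom.googleapis.com/` and its `page` label are our own invention; sending the body requires an authenticated POST or the client library, so that part is omitted.

```python
from datetime import datetime, timezone

def pageview_time_series(page: str, count: int) -> dict:
    """Build a timeSeries.create request body for one custom-metric point.

    The metric type and label names are hypothetical examples.
    """
    return {
        "timeSeries": [{
            "metric": {
                "type": "custom.googleapis.com/website/pageviews",
                "labels": {"page": page},
            },
            "resource": {"type": "global", "labels": {}},
            "points": [{
                "interval": {
                    "endTime": datetime.now(timezone.utc).isoformat(),
                },
                # int64 values are sent as strings in the REST API
                "value": {"int64Value": str(count)},
            }],
        }]
    }

body = pageview_time_series("/home", 3)
```

Once points like this are flowing in, the dashboards and alerting policies mentioned above work on them with no extra infrastructure.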
How can I send the data in IIS logs to Amazon CloudWatch Logs so that I can monitor the performance of my website?
One of the things I am trying to monitor is the average request size of my web requests. I know that IIS logs contain data about the size of each request (BytesRecv, BytesSent), and I can have CloudWatch Logs read my IIS log files, but what I cannot figure out is a way to tell CloudWatch Logs that BytesRecv and BytesSent should be treated as two datapoints.
I don't think the CloudWatch Logs service has that capability. When it ingests logs like IIS's, you can create simple filters to match something, like 404 errors, and then create datapoints counting those errors over a given time period. However, I haven't found a way to extract numeric data from logs directly in CloudWatch.
I believe the solution to this problem would be to use Amazon Kinesis to get the log files out of CloudWatch, process them with EMR to produce those datapoints, and then put that information into S3. A lot easier said than done, I know. I think the toughest part would be writing your EMR logic and then putting the result into some kind of consolidated format to write to S3. I'd recommend asking for help around that area.
Another option would be to have Amazon Kinesis drop the log files in S3, then trigger an AWS Lambda function when those log files are uploaded. The Lambda function could parse the log files, extract the information you need, put it into some format such as JSON or XML, and write that to S3. The hard part here is writing the Lambda function. This link describes how to use Lambda to parse CloudTrail logs written to S3, so you could probably follow a lot of that logic to do this.
http://docs.aws.amazon.com/lambda/latest/dg/wt-cloudtrail-events-adminuser.html
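A sketch of the parsing step such a Lambda function would need, assuming W3C-format IIS logs: the `#Fields:` directive names the columns, so the parser can map each request line to a dict and pull out the byte counts. The sample fields and values below are illustrative; the `put_metric_data` step is only sketched in a comment because it needs AWS credentials.

```python
def parse_iis_lines(text: str) -> list:
    """Parse W3C-format IIS log text into one dict per request line,
    using the #Fields directive to name the columns."""
    fields, rows = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names follow the directive
        elif line and not line.startswith("#"):
            rows.append(dict(zip(fields, line.split())))
    return rows

sample = (
    "#Fields: date time cs-method cs-uri-stem sc-status sc-bytes cs-bytes\n"
    "2015-01-01 00:00:01 GET /index.html 200 5120 312\n"
)
rows = parse_iis_lines(sample)

# The Lambda handler could then publish sc-bytes / cs-bytes as custom
# metric datapoints, roughly:
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="IIS",
#       MetricData=[{"MetricName": "BytesSent",
#                    "Value": float(rows[0]["sc-bytes"])}])
```

With the values published as custom metrics, averaging request size becomes a standard CloudWatch statistic rather than a log-processing job.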
If you can get this info into your IIS logs, you can ship them to CloudWatch Logs.
You can send the logs via the EC2Config service or the SSM Agent; more details are documented in this post.
Then you can apply existing filters to your log group, or create a custom filter to extract the fields you want from the logs, giving you a custom log metric based on log filters. e.g.
[serverip, method, uri, query, port, dash, clientip, useragent, status, zero1, zero2, millis]
or some more specific filter.
So you can now use either metric filters, as mentioned above, or Logs Insights queries to create dashboards.
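As a sketch of the metric-filter approach: a filter over space-delimited IIS lines can publish one of the bracketed fields as the metric value. The log group name and the trailing `bytessent` field below are assumptions; adjust the bracket list to match your actual IIS `#Fields` layout.

```
aws logs put-metric-filter \
  --log-group-name "IIS-Logs" \
  --filter-name "BytesSent" \
  --filter-pattern "[serverip, method, uri, query, port, dash, clientip, useragent, status, zero1, zero2, millis, bytessent]" \
  --metric-transformations metricName=BytesSent,metricNamespace=IIS,metricValue='$bytessent'
```

Using `$bytessent` as the metric value (rather than a constant count) is what turns the log field into a numeric datapoint you can average and alarm on.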