I send metrics from CloudWatch to Datadog via Kinesis Firehose.
When I send multiple values of the same metric within the same second, Datadog always averages them, even when I use a rollup-sum function.
Example
I send three values for the same metric in quick succession to CloudWatch:
aws cloudwatch put-metric-data --namespace example --metric-name test3 --value 1
aws cloudwatch put-metric-data --namespace example --metric-name test3 --value 0
aws cloudwatch put-metric-data --namespace example --metric-name test3 --value 0
In Datadog the value appears as 0.33, i.e. Datadog performed an average.
Even with rollup(sum, 300) the value is still 0.33.
What's going on? How can I force Datadog to perform a sum instead of an average?
I have jobs running on Databricks job clusters, and I want to send metrics to CloudWatch.
I set up the CloudWatch agent following this guide.
But the issue is that I can't build useful metric dashboards and alarms, because I always have the InstanceId dimension, and the InstanceId is different on every job run.
If you check the link above, you will find an init script; the part of the JSON that configures the CloudWatch agent is:
{
    ...
    "append_dimensions": {
        "InstanceId": "${aws:InstanceId}"
    }
}
The documentation says that if I remove append_dimensions, then the hostname becomes a dimension, and again... the hostname is a different IP address every time, so that's not very useful.
Does anyone have experience with Databricks on AWS and monitoring/alerting with CloudWatch? If so, how did you resolve this issue?
I would like to set a dimension that is specific to each executor and stays the same on every run.
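For illustration, this is the kind of configuration I have in mind, if I read the agent's configuration reference correctly that each collected metric type can carry its own append_dimensions with arbitrary key/value pairs. The namespace, the JobName key and its value are hypothetical; the init script would have to fill them in, and the top-level append_dimensions block is dropped so InstanceId is not attached:

{
    "metrics": {
        "namespace": "Databricks/Jobs",
        "metrics_collected": {
            "cpu": {
                "measurement": ["cpu_usage_user", "cpu_usage_system"],
                "append_dimensions": {
                    "JobName": "my-etl-job"
                }
            }
        }
    }
}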
I want to check the total number of keys in a Redis Cluster.
Is there any direct command available to get this, or do I have to check with the INFO command on each instance/node?
There is no direct way.
You can do the following with the CLI, though:
redis-cli --cluster call one-cluster-node-ip-address:the-port DBSIZE
And then sum the results.
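If you want to script that summing step, here is a minimal sketch with the redis-py client; the node addresses are placeholders for your cluster's primary nodes:

import redis  # pip install redis

# Placeholder addresses of the cluster's primary nodes; replace with your own.
NODES = [("10.0.0.1", 6379), ("10.0.0.2", 6379), ("10.0.0.3", 6379)]

# DBSIZE is reported per node, so ask every primary and add the results up.
total_keys = sum(
    redis.Redis(host=host, port=port).dbsize()
    for host, port in NODES
)
print(total_keys)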
Alternatively, there's RedisGears with which you can do the following to get the same result:
redis> RG.PYEXECUTE "GB().count().run()"
I have a query that writes the query result to a CSV file:
hive -e 'select * from transactions limit 50'>abc.csv
So the result is stored in abc.csv, which is available only on that GCP instance.
But I need to export it into a GCS bucket so that later I can dump it into BigQuery.
I tried something like this but it didn't work:
hive -e 'select * from transactions limit 50'>gs://my-bucket/abc.csv
So, how can I store my Hive query result in a GCS bucket?
You can write the Hive query result to your instance, then use the gsutil command to move it to your bucket.
gsutil mv abc.csv gs://my-bucket/abc.csv
If you do not have gsutil installed on your instance, follow the steps provided here: Install gsutil | Cloud Storage
To find out more about using storage buckets with instances, you can refer to the google docs: Connecting to Cloud Storage buckets
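If you'd rather do the copy from code instead of the gsutil CLI, here is a minimal sketch using the google-cloud-storage Python client (assuming it is installed and authenticated on the instance); the bucket and file names are the ones from the example above:

from google.cloud import storage  # pip install google-cloud-storage

# Upload the locally written query result to the bucket from the example.
client = storage.Client()
client.bucket("my-bucket").blob("abc.csv").upload_from_filename("abc.csv")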
An alternative would be to mount your Cloud Storage bucket within your instance, allowing you to write the Hive query result directly to your bucket.
To do this, you will need to make use of Cloud Storage FUSE, you can follow the steps here to install it: Cloud Storage FUSE | Cloud Storage
You can also use the query below:
insert overwrite directory 'gs://bucket-name/file_name/'
row format delimited fields terminated by ','
stored as textfile
select * from <db_name>.<table_name> limit 10;
The above query will put the result into the specified bucket location in a file whose format will be CSV.
Suppose a system generates a batch of data every hour on HDFS or AWS S3, like this:
s3://..../20170901-00/ # generated at 0am
s3://..../20170901-01/ # 1am
...
and I need to pipe these batches of data into Kafka once they're generated.
My idea is to set up a Spark Streaming job with a moderate batch interval (say, half an hour), so that at each interval it tries to read from S3, and if the data is there, reads it and writes it to Kafka.
Is this doable? I also don't know how to read from S3 or HDFS in a Spark Streaming job. How would I do that?
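For concreteness, this is roughly the pipeline shape I imagine, assuming a Structured Streaming file source can simply be pointed at the bucket prefix (I'm not sure that's the right approach); the paths, topic, and broker address below are placeholders, and the spark-sql-kafka package would need to be on the classpath:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-batches-to-kafka").getOrCreate()

# File source: new objects under the prefix (e.g. the hourly subdirectories)
# are picked up as they appear. The "text" format yields one string column
# named "value", which is exactly what the Kafka sink expects.
batches = spark.readStream.text("s3a://my-bucket/batches/*")

query = (
    batches.writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")          # placeholder broker
    .option("topic", "hourly-batches")                           # placeholder topic
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
    .trigger(processingTime="30 minutes")                        # the half-hour interval
    .start()
)
query.awaitTermination()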
I have a set of CloudWatch logs in JSON format that contain a username field. How can I write a CloudWatch metric query that counts the number of unique users per month?
Now you can count unique field values using the count_distinct instruction inside CloudWatch Insights queries.
Example:
fields userId, @timestamp
| stats count_distinct(userId)
More info on CloudWatch Insights: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html
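If you need to run this outside the console, the same query can be submitted through the Logs API; here's a minimal boto3 sketch, with a hypothetical log group name and roughly the last month as the time window:

import time
import boto3

logs = boto3.client("logs")

# Hypothetical log group; the query string is the one shown above.
start = logs.start_query(
    logGroupName="/my/app/log-group",
    startTime=int(time.time()) - 30 * 24 * 3600,  # roughly one month back
    endTime=int(time.time()),
    queryString="fields userId, @timestamp | stats count_distinct(userId)",
)

# Poll until the query finishes, then read the single aggregated row.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] not in ("Scheduled", "Running"):
        break
    time.sleep(1)

print(result["results"])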
You can now do this! Using CloudWatch Insights.
API: https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_StartQuery.html
I am working on a similar problem and my query for this API looks something like:
fields @timestamp, @message
| filter @message like /User ID/
| parse @message "User ID: *" as @userId
| stats count(*) by @userId
This gets the user IDs. Right now it returns a list of them along with a count for each one. Getting the total number of unique users can be done either after getting the response, or probably by tweaking the query further.
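If you go the "count it after getting the response" route and fetch the rows through the GetQueryResults API, the tally is a small helper like the sketch below; the @userId field name comes from the parse statement above:

def count_unique_users(query_results):
    """Count distinct @userId values in a GetQueryResults response."""
    users = {
        field["value"]
        for row in query_results["results"]  # each row is a list of {field, value} dicts
        for field in row
        if field["field"] == "@userId"
    }
    return len(users)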
You can easily play with queries using the CloudWatch Insights page in the AWS Console.
I think you can achieve that with the following query:
Log statement being parsed: "Trying to login user: abc ....."
fields @timestamp, @message
| filter @message like /Trying to login user/
| parse @message "Trying to login user: *" as user
| sort @timestamp desc
| stats count(*) as loginCount by user | sort loginCount desc
This will print a table like this:
#    user     loginCount
1    user1    10
2    user2    15
...
I don't think you can.
Amazon CloudWatch Logs can scan log files for a specific string (e.g. "Out of memory"). When it encounters this string, it will increment a metric. You can then create an alarm for "When the number of 'Out of memory' errors exceeds 10 over a 15-minute period".
However, you are seeking to count unique users, which does not translate well into this method.
You could instead use Amazon Athena, which can run SQL queries against data stored in Amazon S3. For examples, see:
Analyzing Data in S3 using Amazon Athena
Using Athena to Query S3 Server Access Logs
Amazon Athena – Interactive SQL Queries for Data in Amazon S3
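To give a feel for what that could look like once the logs are in S3 and exposed as an Athena table, here is a rough boto3 sketch; the database, table and column names and the output location are all hypothetical:

import boto3

athena = boto3.client("athena")

# Hypothetical table "app_logs" with a "username" column and an ISO-8601 "ts" column.
query = """
SELECT date_trunc('month', from_iso8601_timestamp(ts)) AS month,
       COUNT(DISTINCT username) AS unique_users
FROM app_logs
GROUP BY 1
ORDER BY 1
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "my_logs_db"},                  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},  # hypothetical bucket
)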