I want to verify whether any Redis SET calls were made on a specific day.
The INFO commandstats output gives me metrics, but I'm not sure what time window they cover.
That is, what are the start and end times between which these metrics were collected?
127.0.0.1:6379> info commandstats
# Commandstats
cmdstat_command:calls=15,usec=5154,usec_per_call=343.60
cmdstat_randomkey:calls=28,usec=515,usec_per_call=18.39
cmdstat_config:calls=584575,usec=30325793,usec_per_call=51.88
cmdstat_mset:calls=14372,usec=134336973,usec_per_call=9347.13
cmdstat_slowlog:calls=1169146,usec=4763189,usec_per_call=4.07
cmdstat_bgsave:calls=1,usec=46854,usec_per_call=46854.00
cmdstat_scan:calls=4,usec=26,usec_per_call=6.50
cmdstat_get:calls=8224808736,usec=7259627651,usec_per_call=0.88
cmdstat_latency:calls=584573,usec=629736,usec_per_call=1.08
cmdstat_dbsize:calls=1,usec=1,usec_per_call=1.00
cmdstat_set:calls=90774923,usec=174928586,usec_per_call=1.93
cmdstat_monitor:calls=2,usec=0,usec_per_call=0.00
cmdstat_info:calls=584577,usec=29221468,usec_per_call=49.99
cmdstat_mget:calls=13448,usec=59812180,usec_per_call=4447.66
cmdstat_ttl:calls=1,usec=1,usec_per_call=1.00
I want to see how many SET calls were made on September 14th 2021.
Is it possible to get this metric?
Redis version:
~$ redis-server -v
Redis server v=4.0.9
You cannot get this metric from Redis itself; you need other tooling.
Collect the INFO commandstats metrics on a timer, e.g. every 20 seconds.
Then take the difference between two snapshots from different times: the 2021-09-14 totals can be calculated from the snapshots taken at 2021-09-14 00:00:00 and 2021-09-15 00:00:00.
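A minimal sketch of that approach in Python with the redis-py client (host, port, and the sleep-based scheduling are assumptions; a real setup would run the snapshots from cron or a scheduler and persist them somewhere):

import time
import redis  # redis-py client

r = redis.Redis(host="localhost", port=6379)  # assumed connection details

def set_calls():
    # INFO commandstats is parsed by redis-py into nested dicts,
    # e.g. {"cmdstat_set": {"calls": ..., "usec": ..., "usec_per_call": ...}, ...}
    return r.info("commandstats").get("cmdstat_set", {}).get("calls", 0)

# Snapshot the counter at 2021-09-14 00:00:00 and again at 2021-09-15 00:00:00;
# the difference is the number of SET calls made on September 14th.
# Note the counters reset if the server restarts or CONFIG RESETSTAT is issued.
calls_at_start_of_day = set_calls()
time.sleep(24 * 60 * 60)  # placeholder for "wait until the next midnight"
calls_at_end_of_day = set_calls()
print("SET calls on 2021-09-14:", calls_at_end_of_day - calls_at_start_of_day)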
I am writing some software that will be pushing data to Victoria Metrics, as below:
curl -d 'foo{bar="baz"} 30' -X POST 'http://[Victoria]/insert/0/prometheus/api/v1/import/prometheus'
I noticed that if I push a single metric like this, it shows up not as a single data point but repeatedly, as if it were being scraped every 15 seconds, until I either push a new value for that metric or 5 minutes pass.
What setting/mechanism is causing this 5-minute repeat period?
Pushing data with a timestamp does not change this: the metric is still repeated for 5 minutes after that timestamp, or until a new value arrives.
I don't necessarily need to alter this behavior, just trying to understand why it's happening.
How do you query the database?
I guess this behaviour is due to the range query concept and ephemeral datapoints; check this out:
https://docs.victoriametrics.com/keyConcepts.html#range-query
The interval between datapoints depends on the step parameter, which is 5 minutes when omitted.
If you want to receive only the real datapoints, go via export functions.
https://docs.victoriametrics.com/#how-to-export-time-series
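For instance, a rough sketch of pulling only the stored raw samples through the export endpoint using Python (the address is a placeholder, and the series selector matches the metric pushed in the question):

import requests

# /api/v1/export returns the raw stored samples (one JSON line per series),
# without the gap-filling that /api/v1/query applies
resp = requests.get(
    "http://<victoria-metrics-addr>/api/v1/export",
    params={"match[]": 'foo{bar="baz"}'},
)
for line in resp.iter_lines():
    print(line.decode())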
VictoriaMetrics, as a TSDB, generates ephemeral datapoints to fill gaps: each one takes the value of the closest raw sample to the left of the requested timestamp.
So if you make the instant request:
curl "http://<victoria-metrics-addr>/api/v1/query?query=foo_bar&time=2022-05-10T10:03:00.000Z"
The time range in which VictoriaMetrics will try to locate a missing data sample is 5m by default and can be overridden via the step parameter.
step - optional max lookback window for searching for raw samples when executing the query. If step is skipped, then it is set to 5m (5 minutes) by default.
GET | POST /api/v1/query?query=...&time=...&step=...
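As an illustration, here is the same instant query as above but with the lookbehind window narrowed, sketched in Python (the 30s value is arbitrary and the address is a placeholder):

import requests

# Same instant query as the curl example above, but raw samples older than 30 seconds
# before the requested time are no longer used to fill the gap.
resp = requests.get(
    "http://<victoria-metrics-addr>/api/v1/query",
    params={
        "query": "foo_bar",
        "time": "2022-05-10T10:03:00.000Z",
        "step": "30s",
    },
)
print(resp.json())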
You can read more in the key concepts section of the documentation linked above; there you will also find information about range queries and other TSDB concepts.
I am currently in the midst of a POC where I plan to store some IoT data in Redis.
Here's my question:
I would like to monitor the data sent by multiple IoT devices and raise an alarm if a device fails to report telemetry within a certain time threshold.
For Example:
Device 1: booting at 09:00 am, expected turnaround time 2 min
After 2 min, 01 sec:
Device 1 has failed to report back within the given time.
Is there a way to query Redis so that it returns the entries that have passed a certain time threshold?
Any references will be appreciated, thanks!
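One common pattern for this kind of check (a sketch only, not from this thread; the key and device names are made up) is to store each device's last-seen timestamp in a sorted set and periodically query for members whose score is older than the allowed threshold:

import time
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed connection details

def record_heartbeat(device_id):
    # called whenever a device reports telemetry; the score is the last-seen time
    r.zadd("device:last_seen", {device_id: time.time()})

def overdue_devices(threshold_seconds):
    # members whose last report is older than the cutoff have gone silent
    cutoff = time.time() - threshold_seconds
    return r.zrangebyscore("device:last_seen", "-inf", cutoff)

record_heartbeat("device-1")
# ... later, from a periodic checker:
print(overdue_devices(threshold_seconds=120))  # 2-minute turnaround for device 1

Keyspace notifications on expiring keys are another option, but the sorted-set approach keeps the query side simple.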
I have 200,000 FASTA sequences. I am using GATK to call variants and have created a wildcard for every sequence. Now I would like to submit 200,000 jobs using Snakemake. Will this cause a problem for the cluster? Is there a way to submit the jobs in sets of 10-20?
First off, it might take some time to calculate the DAG, although I have been told that DAG calculation has recently been greatly improved. Anyway, it might be wise to split the work up into batches.
Most clusters won't allow you to submit more than X jobs at the same time, usually in the range of 100-1000. I believe the documentation is not fully correct here, but when using --cluster, the --jobs argument controls how many jobs are submitted at the same time, so by using snakemake --jobs 20 --cluster "myclustercommand" you should be able to control this. Note that this controls the number of submitted jobs, not the number of actively running jobs. It might be that all your jobs end up sitting in the queue, so it is probably best to check with your cluster administrator what the maximum number of submitted jobs is and stay as close to that number as possible.
We are using ElastiCache for Redis, and are confused by its Evictions metric.
I'm curious what the unit of the evicted_keys metric from Redis INFO is. The ElastiCache docs say it is a count: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/CacheMetrics.Redis.html, but for our application we have observed that the "Evictions" metric (which is derived from evicted_keys) fluctuates up and down, indicating it is not a cumulative count. I would expect a count never to decrease, since we cannot "un-evict" a key. I'm wondering if evicted_keys is actually a rate (e.g. evictions/sec), which would explain why it can fluctuate.
Thank you in advance for any responses!
From INFO command:
evicted_keys: Number of evicted keys due to maxmemory limit
To learn more about evictions see Using Redis as an LRU cache - Eviction policies
This counter is zero when the server starts, and it is only reset if you issue the CONFIG RESETSTAT command. However, on ElastiCache, this command is not available.
That said, ElastiCache derives the metric from this value, by calculating the difference between data-points.
Redis evicted_keys:    0   5   12   18   22  ...
CloudWatch Evictions:  0   5    7    6    4  ...
This is the usual pattern in CloudWatch metrics. This allows you to use SUM if you want the cumulative value, but also to detect rate changes or spikes easily.
Suppose, for example, that you want to alarm if evictions exceed 10,000 over a one-minute period. If ElastiCache stored the cumulative value from Redis directly as a metric, this would be hard to accomplish.
Also, by emitting the metric as only the keys evicted during the period, you are protected from the data distortion caused by a server reset or a value overflow: while the Redis INFO value would go back to zero, on ElastiCache you still get the value for the period and you can still compute a running sum over any interval.
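A small sketch of that derivation in Python (the reset handling is my assumption about how one would typically guard against the counter dropping back to zero, not AWS's published implementation):

samples = [0, 5, 12, 18, 22]  # cumulative evicted_keys, read once per period as in the table above

def period_delta(previous, current):
    # if the cumulative counter went backwards, the server restarted (or the value
    # overflowed), so everything counted since the reset belongs to this period
    return current if current < previous else current - previous

deltas = [period_delta(p, c) for p, c in zip(samples, samples[1:])]
print(deltas)       # [5, 7, 6, 4] -> what ElastiCache would publish as Evictions
print(sum(deltas))  # 22           -> SUM recovers the cumulative total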
We are using Redis as a queue, which handles on average about ~3k requests per second. But when we check instantaneous_ops_per_sec, the value is consistently higher than expected by about 20%, in this case reporting ~4k ops per second.
To verify this, I have taken a dump of MONITOR for about 10 seconds and checked the number of incoming commands.
grep "1489722862." monitor_output | wc -l
Where 1489722862 is the timestamp. This count also matches what is being produced to the queue and what is being consumed from it.
This is a master-slave redis cluster setup.
Does instantaneous_ops_per_sec also account for the slave reads? If not, what else would explain why this count is significantly higher?
The instantaneous_ops_per_sec metric is calculated as the mean of the recent samples that the server took. The number of recent samples is hardcoded as 16 by STATS_METRIC_SAMPLES in server.h.
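For illustration only, here is a small Python sketch of that kind of averaging (it mimics the idea of averaging the last 16 samples, not the actual C implementation or its sampling interval):

from collections import deque

STATS_METRIC_SAMPLES = 16  # the sample count mentioned above

class OpsMeter:
    def __init__(self):
        self.samples = deque(maxlen=STATS_METRIC_SAMPLES)  # only the most recent samples are kept
        self.last_total = 0

    def sample(self, total_commands_processed):
        # record how many commands were processed since the previous sample
        self.samples.append(total_commands_processed - self.last_total)
        self.last_total = total_commands_processed

    def instantaneous_ops_per_sec(self):
        # mean of the retained samples, analogous to the INFO field
        return sum(self.samples) / len(self.samples) if self.samples else 0

meter = OpsMeter()
for total in (3000, 6100, 9050, 12020):  # made-up cumulative command counts, one per second
    meter.sample(total)
print(meter.instantaneous_ops_per_sec())  # 3005.0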