iptables gets huge because of fail2ban

fail2ban is filling my iptables rules even though it releases banned IPs after a certain time. It seems that attacks on my server are very frequent, so the iptables rule set is getting huge. Is there any issue if iptables contains 5000 entries or more? Thank you.

To solve this issue, I reduced the bantime of the recidive jail from the previous value (1 week) to 1 day. Now iptables entries are recycled every day.
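For reference, the change amounts to something like this in the [recidive] section of jail.local (a sketch; values other than bantime are typical defaults and may differ on your setup, times in seconds):

[recidive]
enabled  = true
# previously 604800 (1 week); 86400 (1 day) lets iptables entries recycle daily
bantime  = 86400
findtime = 86400
maxretry = 5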

Related

Storing time intervals efficiently in redis

I am trying to track server uptimes using Redis.
The approach I have chosen is as follows:
Server xyz will keep sending my service a ping every 30 seconds, indicating that it was alive and working for the last 30 seconds.
My service will store a list of all time intervals during which the server was active. This is done by storing a list of {startTime, endTime} pairs in Redis, with the server's name (xyz) as the key.
Depending on a user query, I will use this list to generate server uptime metrics, such as % downtime between times (T1, T2).
Example:
Assume that the current time is T.
At T+30, the server sends a ping:
xyz:["{start:T end:T+30}"]
At T+60, the server sends another ping:
xyz:["{start:T end:T+30}", "{start:T+30 end:T+60}"]
and so on for all pings.
This works fine, but over a large time period the list accumulates a lot of elements. To avoid this, on each ping I currently pop the last element of the list and check whether it can be merged with the latest time interval. If it can, I coalesce them and push a single time interval onto the list; if not, I push two intervals (this step is sketched in code below).
With this, after the second ping the list becomes: xyz:["{start:T end:T+60}"]
Some problems I see with this approach:
The merging is done in my service, not in Redis.
If my service is distributed, the list ordering might get corrupted by multiple readers and writers.
Is there a more efficient/elegant way to handle this, e.g. by handling the merging of time intervals in Redis itself?
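For illustration, the pop-merge-push step described above might look like this with redis-py (the key layout, the comma encoding, and the 30-second window are assumptions made for the sketch):

import redis

r = redis.Redis()  # assumes a local Redis instance

def record_ping(server: str, now: float, window: float = 30.0) -> None:
    # The ping covers the last `window` seconds.
    start, end = now - window, now
    last = r.rpop(server)                      # newest stored interval, if any
    if last is not None:
        prev_start, prev_end = (float(x) for x in last.decode().split(","))
        if prev_end >= start:                  # touching or overlapping: coalesce
            start = prev_start
        else:                                  # gap: keep the old interval as-is
            r.rpush(server, last)
    r.rpush(server, f"{start},{end}")          # push the (possibly merged) interval

The pop-compare-push sequence above is exactly the read-modify-write that breaks with multiple writers. One common way to make it atomic is to move that sequence into a small Lua script and run it with EVAL, so Redis executes the whole step as a single operation.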

Which meaning of "live" is used in TTL (Time To Live)?

https://en.wikipedia.org/wiki/Time_to_live
Live as in: The show will go live on air this evening.
or Live as in: I want to live in Paris.
For years I thought it was the first definition, but it just occurred to me that it makes more sense as the second.
It's the second. It's the amount of time that said packet has left to live, or alternatively the amount of time left until it dies, as opposed to the amount of time until it goes live.
For IP & DNS it's the second definition. For example, for IP it indicates the number of hops the packet has left to live before it dies. Each "hop" reduces the TTL by 1 until it reaches 0 (and the packet is dropped) or the packet reaches its destination.
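As a toy illustration of that hop-count behaviour (plain Python, not real networking code):

def forward(ttl: int, hops: int) -> str:
    """Toy model: each of `hops` routers decrements the TTL; drop at zero."""
    for hop in range(1, hops + 1):
        ttl -= 1                     # each router decrements TTL by 1
        if ttl == 0:
            return f"dropped at hop {hop} (ICMP Time Exceeded)"
    return "delivered"

print(forward(ttl=3, hops=5))   # dropped at hop 3 (ICMP Time Exceeded)
print(forward(ttl=8, hops=5))   # delivered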

Trying to understand Redis ping latency test vs ping command latency test

I am trying to understand latency vs the maximum number of requests that can be served per second.
As I understand it, RTT is the time taken for a message to reach its destination plus the time for the acknowledgement to get back to the source. So I assumed that the maximum number of requests a server can serve per second cannot exceed the number of average round trips that fit into one second. My local ping test shows:
> ping 127.0.0.1
rtt min/avg/max/mdev = 0.089/0.098/0.120/0.012 ms
On average it takes 0.098 ms just for the network round trip, which means about 10 pings/ms. So I assumed that, issuing requests sequentially, a client can execute at most about 10,000 req/sec. It turns out I am wrong: the redis-benchmark tool shows something different.
> redis-benchmark -t set -c 1 -h 127.0.0.1
====== SET ======
100000 requests completed in 2.53 seconds
1 parallel clients
3 bytes payload
keep alive: 1
100.00% <= 1 milliseconds
39588.28 requests per second
A single client is able to execute ~39 req/ms, while I was expecting a maximum of 10 req/ms.
Can anyone help me see where I went wrong or what I misunderstood?
Commands can be pipelined even when using a single logical client thread, meaning: you can send lots of requests before the first response comes back. Responses always come back in request order (unless you're using pub/sub), so a pipelining client simply needs to keep a queue of sent messages that have not yet seen responses, and pair responses to requests as they arrive.
So: you aren't strictly bound by latency, although that remains a useful number. The raw throughput number (bound by bandwidth and server capacity) is also meaningful, since it is often the case that you want to issue multiple commands.
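To see the pipelining effect concretely, here is a sketch using the redis-py client (assumes a Redis instance on localhost; absolute numbers will vary by machine):

import time
import redis

r = redis.Redis()

# Sequential: one network round trip per command, so throughput ~ 1/RTT.
start = time.perf_counter()
for i in range(10_000):
    r.set(f"key:{i}", "x")
sequential = time.perf_counter() - start

# Pipelined: many commands per round trip; responses come back in order.
start = time.perf_counter()
pipe = r.pipeline(transaction=False)
for i in range(10_000):
    pipe.set(f"key:{i}", "x")
pipe.execute()
pipelined = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, pipelined: {pipelined:.2f}s")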

Redis instantaneous_ops_per_sec higher than actual throughput

We are using Redis as a queue, which handles on average about ~3k rps. But when we check instantaneous_ops_per_sec, this value consistently reports about 20% higher than expected; in this case it reports ~4k ops per sec.
To verify this, I have taken a dump of MONITOR for about 10 seconds and checked the number of incoming commands.
grep "1489722862." monitor_output | wc -l
where 1489722862 is the timestamp. This count also matches what is being produced into the queue and what is being consumed from it.
This is a master-slave redis cluster setup.
Does instantaneous_ops_per_sec also account for the slave reads? If not, what is the other reason for which this count is significantly higher?
The instantaneous_ops_per_sec metric is calculated as the mean of the recent samples that the server took. The number of recent samples is hardcoded as 16 by STATS_METRIC_SAMPLES in server.h.
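In other words, the metric is a rolling mean over the 16 most recent per-interval throughput samples. A toy model of that calculation (the one-second sampling interval and the numbers are illustrative, not Redis internals):

from collections import deque

samples = deque(maxlen=16)   # STATS_METRIC_SAMPLES is 16 in server.h

def on_sample(ops_in_interval: int, interval_seconds: float = 1.0) -> float:
    """Record one throughput sample and return the rolling mean."""
    samples.append(ops_in_interval / interval_seconds)
    return sum(samples) / len(samples)

# Feed ten samples of ~3000 ops each:
for _ in range(10):
    reported = on_sample(3000)
print(round(reported))   # 3000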

Find minimal perfect hash function with gperf

I found gperf to be suitable for my project and am now looking for a way to minimize the size of the generated table. As the switches -i and -j influence the length of the table deterministically, I wrote a small script that iterates over those values to find the minimal table length. The script stores the -i and -j values of the current minimum table, as well as the values currently being tried, so that when it is terminated it can continue its search later.
Now I have seen that there is a switch -m which, according to its description, does exactly what my little script does. I guess using this switch is a lot faster than invoking gperf once per iteration. But before replacing my gperf calls I need to know two things, which I couldn't find in the gperf help:
Which values of -i and -j are tried if I use the -m switch?
How do I know which values of -i and -j are actually used, i.e. which values lead to the minimal table length found by the current gperf call?
Which values of -i and -j are tried if I use the -m switch?
You can find this info in the source code, lines 1507..1515.
How do I know which values of -i and -j are actually used, i.e. which values lead to the minimal table length found by the current gperf call?
You don't need to know: these values just describe the starting point of gperf's internal path through the search space.
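For reference, a brute-force search of the kind the question describes might look like this (a sketch: the option ranges, the input file name keywords.gperf, and parsing MAX_HASH_VALUE as a proxy for table length are all assumptions, not gperf documentation):

import itertools
import re
import subprocess

best = None
# gperf requires an odd jump value (or 0 for random), hence the step of 2.
for i, j in itertools.product(range(0, 21), range(1, 22, 2)):
    out = subprocess.run(
        ["gperf", "-i", str(i), "-j", str(j), "keywords.gperf"],
        capture_output=True, text=True, check=True,
    ).stdout
    m = re.search(r"MAX_HASH_VALUE\s*=\s*(\d+)", out)
    if m is None:
        continue
    size = int(m.group(1))
    if best is None or size < best[0]:
        best = (size, i, j)

print("minimum table: MAX_HASH_VALUE=%d with -i %d -j %d" % best)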