Abnormally High Memory Utilization of Redis Cluster

There is a 3-node Redis cluster running in a Kubernetes cluster. As of yesterday, total memory usage was creeping up continuously (around 14.9 GB at peak, with 462274 keys).
Due to network instability the master switched from redis-0 to redis-2, and memory utilization dropped to 5.4 GB while the key count is 470994 (redis-2 is still the master).
Why is the memory utilization so different when the key count is even higher than before, now that a different node is the master?
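For anyone reproducing these numbers, a minimal sketch of the commands that yield the figures quoted above (assuming kubectl access to the pods, and that the pod names below match the StatefulSet):
# Memory breakdown, key count, and current role for one pod
kubectl exec redis-2 -- redis-cli info memory | grep -E 'used_memory_human|used_memory_rss_human|used_memory_peak_human|mem_fragmentation_ratio'
kubectl exec redis-2 -- redis-cli dbsize
kubectl exec redis-2 -- redis-cli info replication | grep -E 'role|connected_slaves'
# Confirm that no memory cap or eviction policy is configured
kubectl exec redis-2 -- redis-cli config get maxmemory
kubectl exec redis-2 -- redis-cli config get maxmemory-policy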
redis.conf
maxmemory and eviction policy are not set
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile "/var/run/redis_6379.pid"
loglevel notice
logfile ""
databases 5
always-show-logo yes
save 900 1
save 300 100
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
rdb-del-sync-files no
dir "/data"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-listpack-entries 512
hash-max-listpack-value 64
list-max-listpack-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-listpack-entries 128
zset-max-listpack-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4kb
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
redis-0 logs
1:M 07 Jun 2022 22:59:34.691 * Background saving terminated with success
1:M 07 Jun 2022 23:06:50.290 * 100 changes in 300 seconds. Saving...
1:M 07 Jun 2022 23:06:50.684 * Background saving started by pid 21
21:C 07 Jun 2022 23:08:03.628 * DB saved on disk
21:C 07 Jun 2022 23:08:03.887 * Fork CoW for RDB: current 2 MB, peak 2 MB, average 2 MB
1:M 07 Jun 2022 23:08:04.300 * Background saving terminated with success
1:M 07 Jun 2022 23:17:11.139 # Connection with replica client id #95 lost.
1:M 07 Jun 2022 23:17:21.439 # Connection with replica 10.233.93.179:6379 lost.
1:S 07 Jun 2022 23:17:28.397 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1:S 07 Jun 2022 23:17:28.397 * Connecting to MASTER 10.233.77.57:6379
1:S 07 Jun 2022 23:17:28.397 * MASTER <-> REPLICA sync started
1:S 07 Jun 2022 23:17:28.397 * REPLICAOF 10.233.77.57:6379 enabled (user request from 'id=382 addr=10.233.122.249:35692 laddr=10.233.122.248:6379 fd=8 name=sentinel-62b0b9cd-cmd age=10 idle=0 flags=x db=0 sub=0 psub=0 multi=4 qbuf=200 qbuf-free=20274 argv-mem=4 multi-mem=180 rbs=1024 rbp=1024 obl=45 oll=0 omem=0 tot-mem=22480 events=r cmd=exec user=default redir=-1 resp=2')
1:S 07 Jun 2022 23:17:28.402 # CONFIG REWRITE executed with success.
1:S 07 Jun 2022 23:17:28.405 * Non blocking connect for SYNC fired the event.
1:S 07 Jun 2022 23:17:28.407 * Master replied to PING, replication can continue...
1:S 07 Jun 2022 23:17:28.408 * Trying a partial resynchronization (request 4e8714d0f9b9b6e03d6c6e77175d8e4e0bc4cc0e:51477133).
1:S 07 Jun 2022 23:17:28.409 * Full resync from master: 50740cce7a6e8a8f6336237eee4a3bd9f749ee86:51440060
1:S 07 Jun 2022 23:17:49.782 * MASTER <-> REPLICA sync: receiving 2095263762 bytes from master to disk
1:S 07 Jun 2022 23:18:03.086 * Discarding previously cached master state.
1:S 07 Jun 2022 23:18:03.086 * MASTER <-> REPLICA sync: Flushing old data
1:S 07 Jun 2022 23:18:10.830 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 07 Jun 2022 23:18:10.862 * Loading RDB produced by version 7.0.0
1:S 07 Jun 2022 23:18:10.862 * RDB age 49 seconds
1:S 07 Jun 2022 23:18:10.863 * RDB memory usage when created 3148.61 Mb
1:S 07 Jun 2022 23:18:28.698 * Done loading RDB, keys loaded: 462376, keys expired: 0.
1:S 07 Jun 2022 23:18:28.700 * MASTER <-> REPLICA sync: Finished with success
1:S 07 Jun 2022 23:18:28.790 * Background append only file rewriting started by pid 23
23:C 07 Jun 2022 23:18:52.057 * SYNC append only file rewrite performed
23:C 07 Jun 2022 23:18:52.138 * Fork CoW for AOF rewrite: current 2 MB, peak 2 MB, average 2 MB
1:S 07 Jun 2022 23:18:52.279 * Background AOF rewrite terminated with success
1:S 07 Jun 2022 23:18:52.291 * Removing the history file appendonly.aof.16.incr.aof in the background
1:S 07 Jun 2022 23:18:52.294 * Removing the history file appendonly.aof.16.base.rdb in the background
1:S 07 Jun 2022 23:18:52.304 * Background AOF rewrite finished successfully
1:S 07 Jun 2022 23:22:02.953 * 100 changes in 300 seconds. Saving...
1:S 07 Jun 2022 23:22:03.051 * Background saving started by pid 24
24:C 07 Jun 2022 23:22:26.493 * DB saved on disk
24:C 07 Jun 2022 23:22:26.574 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 1 MB
1:S 07 Jun 2022 23:22:26.732 * Background saving terminated with success
1:S 07 Jun 2022 23:31:20.700 * 100 changes in 300 seconds. Saving...
redis-2 logs
1:S 07 Jun 2022 23:04:06.345 * Background saving started by pid 22
22:C 07 Jun 2022 23:05:18.079 * DB saved on disk
22:C 07 Jun 2022 23:05:18.351 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 1 MB
1:S 07 Jun 2022 23:05:18.750 * Background saving terminated with success
1:M 07 Jun 2022 23:17:08.397 # Connection with master lost.
1:M 07 Jun 2022 23:17:08.397 * Caching the disconnected master state.
1:M 07 Jun 2022 23:17:08.397 * Discarding previously cached master state.
1:M 07 Jun 2022 23:17:08.397 # Setting secondary replication ID to 4e8714d0f9b9b6e03d6c6e77175d8e4e0bc4cc0e, valid up to offset: 51407274. New replication ID is 50740cce7a6e8a8f6336237eee4a3bd9f749ee86
1:M 07 Jun 2022 23:17:08.397 * MASTER MODE enabled (user request from 'id=33 addr=10.233.83.162:54926 laddr=10.233.77.57:6379 fd=22 name=sentinel-5a1cf691-cmd age=4008 idle=0 flags=x db=0 sub=0 psub=0 multi=4 qbuf=188 qbuf-free=20286 argv-mem=4 multi-mem=169 rbs=2048 rbp=1024 obl=45 oll=0 omem=0 tot-mem=23493 events=r cmd=exec user=default redir=-1 resp=2')
1:M 07 Jun 2022 23:17:08.403 # CONFIG REWRITE executed with success.
1:M 07 Jun 2022 23:17:21.526 * Replica 10.233.93.179:6379 asks for synchronization
1:M 07 Jun 2022 23:17:21.527 * Partial resynchronization not accepted: Requested offset for second ID was 51475879, but I can reply up to 51407274
1:M 07 Jun 2022 23:17:21.527 * Starting BGSAVE for SYNC with target: disk
1:M 07 Jun 2022 23:17:21.694 * Background saving started by pid 23
1:M 07 Jun 2022 23:17:28.408 * Replica 10.233.122.248:6379 asks for synchronization
1:M 07 Jun 2022 23:17:28.408 * Partial resynchronization not accepted: Requested offset for second ID was 51477133, but I can reply up to 51407274
1:M 07 Jun 2022 23:17:28.409 * Waiting for end of BGSAVE for SYNC
23:C 07 Jun 2022 23:17:49.291 * DB saved on disk
23:C 07 Jun 2022 23:17:49.394 * Fork CoW for RDB: current 99 MB, peak 212 MB, average 136 MB
1:M 07 Jun 2022 23:17:49.536 * Background saving terminated with success
1:M 07 Jun 2022 23:18:03.074 * Synchronization with replica 10.233.93.179:6379 succeeded
1:M 07 Jun 2022 23:18:03.087 * Synchronization with replica 10.233.122.248:6379 succeeded
1:M 07 Jun 2022 23:31:13.208 * 100 changes in 300 seconds. Saving...
1:M 07 Jun 2022 23:31:13.313 * Background saving started by pid 24
24:C 07 Jun 2022 23:31:40.058 * DB saved on disk
24:C 07 Jun 2022 23:31:40.173 * Fork CoW for RDB: current 2 MB, peak 2 MB, average 1 MB
1:M 07 Jun 2022 23:31:40.383 * Background saving terminated with success
latest redis-cli info
# Server
redis_version:7.0.0
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:e7d3349b21c83e26
redis_mode:standalone
os:Linux 4.18.0-240.el8.x86_64 x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
atomicvar_api:c11-builtin
gcc_version:10.3.1
process_id:1
process_supervised:no
run_id:dd0da5308fcb09bebabeca08f93c8172aa10bafb
tcp_port:6379
server_time_usec:1654704006009467
uptime_in_seconds:89655
uptime_in_days:1
hz:10
configured_hz:10
lru_clock:10536837
executable:/data/redis-server
config_file:/etc/redis/redis.conf
io_threads_active:0
# Clients
connected_clients:79
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20480
client_recent_max_output_buffer:20504
blocked_clients:0
tracking_clients:0
clients_in_timeout_table:0
# Memory
used_memory:5472656120
used_memory_human:5.10G
used_memory_rss:5627080704
used_memory_rss_human:5.24G
used_memory_peak:15911668368
used_memory_peak_human:14.82G
used_memory_peak_perc:34.39%
used_memory_overhead:25379484
used_memory_startup:858264
used_memory_dataset:5447276636
used_memory_dataset_perc:99.55%
allocator_allocated:5473133208
allocator_active:5474054144
allocator_resident:5629345792
total_system_memory:68112744448
total_system_memory_human:63.43G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:64512
used_memory_vm_total_human:63.00K
used_memory_functions:184
used_memory_scripts:184
used_memory_scripts_human:184B
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:920936
allocator_rss_ratio:1.03
allocator_rss_bytes:155291648
rss_overhead_ratio:1.00
rss_overhead_bytes:-2265088
mem_fragmentation_ratio:1.03
mem_fragmentation_bytes:154464496
mem_not_counted_for_evict:34088
mem_replication_backlog:1048580
mem_total_replication_buffers:1086712
mem_clients_slaves:38136
mem_clients_normal:399784
mem_cluster_links:0
mem_aof_buffer:112
mem_allocator:jemalloc-5.2.1
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
# Persistence
loading:0
async_loading:0
current_cow_peak:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:731
rdb_bgsave_in_progress:0
rdb_last_save_time:1654703970
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:32
rdb_current_bgsave_time_sec:-1
rdb_saves:259
rdb_last_cow_size:5439488
rdb_last_load_keys_expired:0
rdb_last_load_keys_loaded:462299
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:71
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_rewrites:1
aof_rewrites_consecutive_failures:0
aof_last_write_status:ok
aof_last_cow_size:10428416
module_fork_in_progress:0
module_fork_last_cow_size:0
aof_current_size:8870237917
aof_base_size:6729679707
aof_pending_rewrite:0
aof_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:6084
total_commands_processed:5994894
instantaneous_ops_per_sec:36
total_net_input_bytes:9159174280
total_net_output_bytes:10728134634
instantaneous_input_kbps:14.51
instantaneous_output_kbps:47.50
rejected_connections:0
sync_full:2
sync_partial_ok:0
sync_partial_err:2
expired_keys:15
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:12542
evicted_keys:0
evicted_clients:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:3786429
keyspace_misses:36072
pubsub_channels:3
pubsub_patterns:1
latest_fork_usec:142931
total_forks:260
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:3228
dump_payload_sanitizations:0
total_reads_processed:4293751
total_writes_processed:7534562
io_threaded_reads_processed:0
io_threaded_writes_processed:0
reply_buffer_shrinks:123558
reply_buffer_expands:163328
# Replication
role:master
connected_slaves:2
slave0:ip=10.233.93.179,port=6379,state=online,offset=2168514425,lag=1
slave1:ip=10.233.122.248,port=6379,state=online,offset=2168514425,lag=1
master_failover_state:no-failover
master_replid:50740cce7a6e8a8f6336237eee4a3bd9f749ee86
master_replid2:4e8714d0f9b9b6e03d6c6e77175d8e4e0bc4cc0e
master_repl_offset:2168514425
second_repl_offset:51407274
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2167448036
repl_backlog_histlen:1066390
# CPU
used_cpu_sys:597.455126
used_cpu_user:953.227128
used_cpu_sys_children:1520.911712
used_cpu_user_children:6254.560210
used_cpu_sys_main_thread:581.954406
used_cpu_user_main_thread:946.707373
# Modules
# Errorstats
errorstat_ERR:count=3012
errorstat_LOADING:count=216
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=471001,expires=2,avg_ttl=207531

Related

How to get the latest values day wise from a timeseries table?

I want to get the latest value of each SIZE_TYPE per day, ordered by TIMESTAMP. So only one value of each SIZE_TYPE may be present for a given day, and that must be the latest value for the day.
How do I get the desired output? I'm using PostgreSQL here.
Input
|TIMESTAMP |SIZE_TYPE|SIZE|
|----------------------------------------|---------|----|
|1595833641356 [Mon Jul 27 2020 07:07:21]|0 |541 |
|1595833641356 [Mon Jul 27 2020 07:07:21]|1 |743 |
|1595833641356 [Mon Jul 27 2020 07:07:21]|2 |912 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|1 |714 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|2 |987 |
|1595963241356 [Tue Jul 28 2020 19:07:21]|0 |498 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|2 |974 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|0 |512 |
*Note: the TIMESTAMP values are in UNIX time (milliseconds). I have given the date-time string for reference.*
Output
|TIMESTAMP |SIZE_TYPE|SIZE|
|----------------------------------------|---------|----|
|1595833641356 [Mon Jul 27 2020 07:07:21]|0 |541 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|1 |714 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|2 |987 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|2 |974 |
|1595963241356 [Tue Jul 28 2020 19:07:21]|0 |498 |
Explanation
For July 27, the latest values are:
0: 541 (no other entries for the day)
1: 714
2: 987
For July 28, the latest values are:
0: 498
1: nothing (ignore)
2: 974 (no other entries for the day)
You can use distinct on:
select distinct on (floor(timestamp / (24 * 60 * 60 * 1000)), size_type) t.*
from input t
order by floor(timestamp / (24 * 60 * 60 * 1000)), size_type,
         timestamp desc;
The arithmetic is just to extract the day from the timestamp.
Here is a db<>fiddle.
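As a quick sanity check of the day-bucket arithmetic (a plain-shell sketch; 86400000 = 24 * 60 * 60 * 1000 ms per day, so the buckets are UTC days):
# Both Jul 27 timestamps fall in the same bucket; Jul 28 lands in the next one
echo $(( 1595833641356 / 86400000 ))   # 18470 (Mon Jul 27)
echo $(( 1595876841356 / 86400000 ))   # 18470 (Mon Jul 27, later that day)
echo $(( 1595920041356 / 86400000 ))   # 18471 (Tue Jul 28)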

Apache restarts by itself every midnight, causing an apscheduler task to not complete

Hi, I have a Pyramid WSGI webapp served using Apache. The webapp has an hourly job that must run at the 0th minute to fetch time-sensitive data and write it to my MySQL database. I've noticed that sometimes (not every time) the data is not written to the database for the midnight 00:00:00 run of the task. Looking at the logs, it seems Apache is restarted shortly after every midnight, which might cause the problem.
After searching through Stack Overflow, it seems logrotate might be the culprit for the restart. However, I also note that logrotate is called by crontab, which defaults to 6:25 AM, so I have no idea why the restart happens at midnight instead. (My Ubuntu server does NOT have anacron installed.)
Here are the Apache log files for the last few days:
[Tue May 11 00:00:35.534821 2021] [mpm_event:notice] [pid 72273:tid 140034084613184] AH00489: Apache/2.4.41 (Ubuntu) mod_wsgi/4.6.8 Python/3.8 configured -- resuming normal operations
[Tue May 11 00:00:35.534867 2021] [core:notice] [pid 72273:tid 140034084613184] AH00094: Command line: '/usr/sbin/apache2'
.
.
.
[Wed May 12 00:00:00.029412 2021] [wsgi:error] [pid 72660:tid 140033624434432] 2021-05-12 00:00:00,029 INFO [apscheduler.executors.default:123][ThreadPoolExecutor-0_0] Running job "XYZ (trigger: cron[minute='0'], next run at: 2021-05-12 01:00:00 HKT)" (scheduled at 2021-05-12 00:00:00+08:00)
[Wed May 12 00:00:00.621944 2021] [mpm_event:notice] [pid 72273:tid 140034084613184] AH00493: SIGUSR1 received. Doing graceful restart
[Wed May 12 00:00:03.614647 2021] [wsgi:error] [pid 72660:tid 140033624434432] 2021-05-12 00:00:03,614 INFO [apscheduler.executors.default:144][ThreadPoolExecutor-0_0] Job "XYZ (trigger: cron[minute='0'], next run at: 2021-05-12 01:00:00 HKT)" executed successfully
Interestingly, in the log above my apscheduler job still completed (the database was written successfully) and printed to the log after "Doing graceful restart" and before a new log file was created (contents shown below):
[Wed May 12 00:00:03.641095 2021] [mpm_event:notice] [pid 72273:tid 140034084613184] AH00489: Apache/2.4.41 (Ubuntu) mod_wsgi/4.6.8 Python/3.8 configured -- resuming normal operations
[Wed May 12 00:00:03.641146 2021] [core:notice] [pid 72273:tid 140034084613184] AH00094: Command line: '/usr/sbin/apache2'
.
.
.
[Thu May 13 00:00:00.032261 2021] [wsgi:error] [pid 95013:tid 140083656877824] 2021-05-13 00:00:00,032 INFO [apscheduler.executors.default:123][ThreadPoolExecutor-0_0] Running job "XYZ (trigger: cron[minute='0'], next run at: 2021-05-13 01:00:00 HKT)" (scheduled at 2021-05-13 00:00:00+08:00)
[Thu May 13 00:00:03.764471 2021] [wsgi:error] [pid 95013:tid 140083656877824] 2021-05-13 00:00:03,764 INFO [apscheduler.executors.default:144][ThreadPoolExecutor-0_0] Job "XYZ (trigger: cron[minute='0'], next run at: 2021-05-13 01:00:00 HKT)" executed successfully
[Thu May 13 00:00:34.829438 2021] [mpm_event:notice] [pid 95012:tid 140084121332800] AH00493: SIGUSR1 received. Doing graceful restart
In the log file above, my apscheduler job completed before the restart, so the database was written properly as well.
[Thu May 13 00:00:35.588354 2021] [mpm_event:notice] [pid 95012:tid 140084121332800] AH00489: Apache/2.4.41 (Ubuntu) mod_wsgi/4.6.8 Python/3.8 configured -- resuming normal operations
[Thu May 13 00:00:35.588433 2021] [core:notice] [pid 95012:tid 140084121332800] AH00094: Command line: '/usr/sbin/apache2'
.
.
.
[Fri May 14 00:00:00.020559 2021] [wsgi:error] [pid 2120:tid 140241617286912] 2021-05-14 00:00:00,020 INFO [apscheduler.executors.default:123][ThreadPoolExecutor-0_0] Running job "XYZ (trigger: cron[minute='0'], next run at: 2021-05-14 01:00:00 HKT)" (scheduled at 2021-05-14 00:00:00+08:00)
[Fri May 14 00:00:00.558072 2021] [mpm_event:notice] [pid 2119:tid 140242151496768] AH00493: SIGUSR1 received. Doing graceful restart
For the midnight that just passed, the job didn't complete and the database was not written to. There is also no accompanying INFO [apscheduler.executors.default:144][ThreadPoolExecutor-0_0] Job "XYZ (trigger: cron[minute='0'], next run at: xxxxxxx)" executed successfully line in either of the logs before and after midnight, since the job was terminated abruptly before completing.
[Fri May 14 00:00:03.588691 2021] [mpm_event:notice] [pid 2119:tid 140242151496768] AH00489: Apache/2.4.41 (Ubuntu) mod_wsgi/4.6.8 Python/3.8 configured -- resuming normal operations
[Fri May 14 00:00:03.588744 2021] [core:notice] [pid 2119:tid 140242151496768] AH00094: Command line: '/usr/sbin/apache2'
.
.
.
(the day hasn't ended yet)
Here is my crontab file, which I believe is standard and states that daily jobs should run at 6:25 AM, not midnight.
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
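To see exactly which scripts the 6:25 AM entry would execute, run-parts can list them without running anything (a small sketch; --test only prints the names):
run-parts --test /etc/cron.daily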
Again, my Ubuntu server does NOT have anacron installed:
ubuntu@xxx:~$ anacron --help
Command 'anacron' not found, but can be installed with:
sudo apt install anacron
Both logrotate and apache2 have cron.daily tasks:
ubuntu@xxx:~$ ls -ln /etc/cron.daily/
total 40
-rwxr-xr-x 1 0 0 539 Apr 14 2020 apache2
-rwxr-xr-x 1 0 0 376 Dec 5 2019 apport
-rwxr-xr-x 1 0 0 1478 Apr 9 2020 apt-compat
-rwxr-xr-x 1 0 0 355 Dec 29 2017 bsdmainutils
-rwxr-xr-x 1 0 0 1187 Sep 6 2019 dpkg
-rwxr-xr-x 1 0 0 377 Jan 21 2019 logrotate
-rwxr-xr-x 1 0 0 1123 Feb 26 2020 man-db
-rwxr-xr-x 1 0 0 4574 Jul 18 2019 popularity-contest
-rwxr-xr-x 1 0 0 214 Dec 7 23:35 update-notifier-common
vi /etc/cron.daily/logrotate
#!/bin/sh
# skip in favour of systemd timer
if [ -d /run/systemd/system ]; then
    exit 0
fi
# this cronjob persists removals (but not purges)
if [ ! -x /usr/sbin/logrotate ]; then
    exit 0
fi
/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE
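Note the "skip in favour of systemd timer" guard at the top: on a systemd-based Ubuntu this cron script exits immediately, and logrotate is driven by the logrotate.timer systemd unit instead, whose stock OnCalendar=daily schedule fires at 00:00. A quick check (a sketch, assuming systemd and the packaged timer unit):
systemctl list-timers logrotate.timer   # shows the next/last trigger times
systemctl cat logrotate.timer           # look for OnCalendar=daily, i.e. midnight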
vi /etc/cron.daily/apache2
#!/bin/sh
# run htcacheclean if set to 'cron' mode
set -e
set -u
type htcacheclean > /dev/null 2>&1 || exit 0
[ -e /etc/default/apache-htcacheclean ] || exit 0
# edit /etc/default/apache-htcacheclean to change this
HTCACHECLEAN_MODE=daemon
HTCACHECLEAN_RUN=auto
HTCACHECLEAN_SIZE=300M
HTCACHECLEAN_PATH=/var/cache/apache2/mod_cache_disk
HTCACHECLEAN_OPTIONS=""
. /etc/default/apache-htcacheclean
[ "$HTCACHECLEAN_MODE" = "cron" ] || exit 0
htcacheclean ${HTCACHECLEAN_OPTIONS} \
    -p${HTCACHECLEAN_PATH} \
    -l${HTCACHECLEAN_SIZE}
/etc/logrotate.conf is just a standard file
# see "man logrotate" for details
# rotate log files weekly
weekly
# use the adm group by default, since this is the owning group
# of /var/log/syslog.
su root adm
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
#dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may be also be configured here.
apache2 is included under logrotate.d:
ubuntu@xxx:~$ ls -ln /etc/logrotate.d
total 52
-rw-r--r-- 1 0 0 120 Sep 6 2019 alternatives
-rw-r--r-- 1 0 0 442 Apr 14 2020 apache2
-rw-r--r-- 1 0 0 126 Dec 5 2019 apport
-rw-r--r-- 1 0 0 173 Apr 9 2020 apt
-rw-r--r-- 1 0 0 91 Nov 2 2020 bootlog
-rw-r--r-- 1 0 0 130 Jan 21 2019 btmp
-rw-r--r-- 1 0 0 112 Sep 6 2019 dpkg
-rw-r--r-- 1 0 0 845 Nov 7 2019 mysql-server
-rw-r--r-- 1 0 0 501 Mar 7 2019 rsyslog
-rw-r--r-- 1 0 0 119 Mar 31 2020 ubuntu-advantage-tools
-rw-r--r-- 1 0 0 178 Jan 22 2020 ufw
-rw-r--r-- 1 0 0 235 Jul 21 2020 unattended-upgrades
-rw-r--r-- 1 0 0 145 Feb 19 2018 wtmp
vi /etc/logrotate.d/apache2
/var/log/apache2/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 640 root adm
    sharedscripts
    postrotate
        if invoke-rc.d apache2 status > /dev/null 2>&1; then \
            invoke-rc.d apache2 reload > /dev/null 2>&1; \
        fi;
    endscript
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi; \
    endscript
}
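The postrotate script above reloads Apache after rotating its logs, and a graceful restart is exactly what SIGUSR1 triggers, which lines up with the AH00493 lines in the logs. To confirm logrotate really ran at midnight, its state file records the last rotation per log file (a sketch; the path is the Debian/Ubuntu default and may differ):
grep apache2 /var/lib/logrotate/status   # last rotation date/time per Apache log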
I just want to know why my Apache gets restarted at midnight when, according to crontab, it should happen at 6:25 AM, and how to change the time to avoid conflicting with my 0th-minute hourly job. Thanks!
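If the systemd timer does turn out to be the trigger, one way to move it off midnight (a sketch, assuming systemd; 06:25 mirrors the cron default above) is a drop-in override:
sudo systemctl edit logrotate.timer
# in the editor, add:
#   [Timer]
#   OnCalendar=
#   OnCalendar=*-*-* 06:25:00
systemctl list-timers logrotate.timer   # verify the new next-trigger time
The empty OnCalendar= line clears the stock daily schedule before the new one is set.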

Should diff return nothing after rsync?

I just created a backup with
rsync -av --delete projects/ /Volumes/daten/projects/
Afterwards I ran
diff -r projects/ /Volumes/daten/projects/
just to check that everything is fine, expecting no output from diff. However, diff found a lot of differences. Does that mean rsync did not sync my data correctly?
Update: When rerunning rsync, everything seems fine; there is nothing left for rsync to do:
$ rsync -av --delete projects/ /Volumes/daten/projects/
building file list ... done
sent 470414 bytes received 20 bytes 188173.60 bytes/sec
total size is 295619054 speedup is 628.40
However, diff keeps generating output, as if there are lots of differing files. Here is a small excerpt:
$ diff -r projects/ /Volumes/daten/projects/
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_dep/debug/package_dependencies/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_dep/debug/package_dependencies/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:41 CST 2018
---
> #Sun Nov 04 19:34:13 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_1/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_1/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:41 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_2/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_2/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:41 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_3/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_3/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_4/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_4/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:52:00 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_5/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_5/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_6/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_6/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_7/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_7/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_8/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_8/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_9/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/ir_slices/debug/package_slice_9/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:42 CST 2018
---
> #Sun Nov 04 19:34:14 CET 2018
diff -r projects/CatClicker/app/build/intermediates/incremental/packageInstantRunResourcesDebug/tmp/debug/dex-renamer-state.txt /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental/packageInstantRunResourcesDebug/tmp/debug/dex-renamer-state.txt
1c1
< #Sat Dec 08 16:17:34 CST 2018
---
> #Sun Nov 04 19:33:52 CET 2018
Binary files projects/CatClicker/app/build/intermediates/incremental-classes/debug/instant-run-bootstrap.jar and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-classes/debug/instant-run-bootstrap.jar differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/compat/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/compat/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/coreui/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/coreui/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/coreutils/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/coreutils/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/fragment/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/fragment/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/graphics/drawable/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/graphics/drawable/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/graphics/drawable/animated/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/graphics/drawable/animated/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/mediacompat/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/mediacompat/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/v4/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/v4/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/v7/appcompat/R$anim.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/v7/appcompat/R$anim.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/v7/appcompat/R$color.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/android/support/v7/appcompat/R$color.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$attr.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$attr.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$bool.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$bool.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$dimen.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$dimen.class differ
Binary files projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$drawable.class and /Volumes/daten/projects/CatClicker/app/build/intermediates/incremental-verifier/debug/cyberdynesoftware/catclicker/R$drawable.class differ
It seems that a lot of files are not seen as "different" by rsync. By default, rsync only checks the file sizes and last-modification timestamps. Please try rsync with the -c/--checksum option to turn on checksumming and run diff again.
Excerpt from the rsync man page:
-c, --checksum
This changes the way rsync checks if the files have been changed and are in need of a transfer. Without this option,
rsync uses a "quick check" that (by default) checks if each file's size and time of last modification match between the
sender and receiver. This option changes this to compare a 128-bit checksum for each file that has a matching size.
Generating the checksums means that both sides will expend a lot of disk I/O reading all the data in the files in the
transfer (and this is prior to any reading that will be done to transfer changed files), so this can slow things down
significantly.
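For example, re-running the same backup with checksumming enabled and then diffing again (a sketch; as the man page warns, -c reads every file on both sides, so expect it to be slow on large trees):
rsync -avc --delete projects/ /Volumes/daten/projects/
diff -r projects/ /Volumes/daten/projects/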

Duplicity incremental backup taking too long

I have duplicity running an incremental daily backup to S3. About 37 GiB.
On the first month or so, it went ok. It used to finish in about 1 hour. Then it started taking too long to complete the task. Right now, as I type, it is still running the daily backup that started 7 hours ago.
I'm running two commands, first to back up and then to clean up:
duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
The logs
Temp has 54774476800 available, backup will use approx 907857100.
So the temp has enough space, good. Then it starts with this...
Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]
This continues for each day up to today, taking long minutes for each file. And it continues with this...
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov 7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov 9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)
After a long time...
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08
Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov 7 09:03:03 2016
Chain end time: Mon Nov 7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set: Time: Num volumes:
Full Mon Nov 7 09:03:03 2016 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov 9 18:09:07 2016
Chain end time: Wed Nov 9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set: Time: Num volumes:
Full Wed Nov 9 18:09:07 2016 11
-------------------------
Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set: Time: Num volumes:
Full Thu Nov 10 09:56:24 2016 11
Incremental Fri Nov 11 10:34:56 2016 1
Incremental Sat Nov 12 09:59:47 2016 1
Incremental Sun Nov 13 09:57:15 2016 1
Incremental Mon Nov 14 09:48:31 2016 1
[...]
After listing all chains:
Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.
This was only the backup part. It takes hours doing all of this, yet only 10 minutes to upload the 37 GiB to S3.
ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)
Then comes the cleanup, which gives me this:
Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov 7 09:03:03 2016
Wed Nov 9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan 9 10:04:51 2017
Rerun command with --force option to actually delete.
I found the problem. Because of an issue, I followed this answer, and added this code to my script:
rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*
This was supposed to be a one-time thing, because of a random bug duplicity had, but the answer didn't mention that. So every day the script was removing the cache just after syncing it, and on the next day duplicity had to download the whole thing again.
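So the fix was simply to drop those two rm lines from the daily script, leaving only the backup and cleanup steps (a sketch of the corrected script, using the same LOCAL.SOURCE / S3.DEST placeholders as above):
#!/bin/sh
# Daily backup: full every month, incremental otherwise
duplicity --full-if-older-than 1M --volsize 666 --verbosity 8 LOCAL.SOURCE S3.DEST
# Prune chains older than two months (--force actually deletes, per the prompt above)
duplicity remove-older-than 2M --force S3.DEST
# Note: no 'rm -rf ~/.cache/duplicity/*' here; wiping the cache daily forces
# duplicity to re-download all signature/manifest files on every run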

ZAP Attack Proxy history request IDs are not consecutive

I've used ZAP to intercept traffic.
It works nicely and I have a history of my REQUEST-RESPONSE pairs like this:
ID    Req. Timestamp                 Method  etc.
...
1955  Tue Apr 05 15:42:47 CEST 2016  GET     https://...
1971  Tue Apr 05 15:42:49 CEST 2016  GET     https://...
1984  Tue Apr 05 15:43:30 CEST 2016  GET     https://...
1998  Tue Apr 05 15:43:31 CEST 2016  GET     https://...
...
How come the IDs are not consecutive?
We have a FAQ for that :) https://github.com/zaproxy/zaproxy/wiki/FAQhistoryIdsMissing
Simon (ZAP Project Lead)