I have enabled both RDB and AOF persistence via save 1 1 and appendonly yes. This configuration creates both RDB and AOF files at the prescribed locations. However, during a restart of Redis the following is noticed:
If appendonly yes, then the RDB file is not read, regardless of whether the AOF file exists or not
If appendonly no, then the RDB file is read
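For reference, the relevant persistence lines of my redis.conf look roughly like this (the dir matches where the files end up; appendfilename and dbfilename are simply left at their defaults):

save 1 1
appendonly yes
dir /persistent/redis
appendfilename "appendonly.aof"
dbfilename dump.rdb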
I've tested the above by setting appendonly yes and running rm /persistent/redis/appendonly.aof; systemctl restart redis. The log file shows:
Aug 13 11:11:06 saltspring-zynqmp redis-server[16292]: 16292:M 13 Aug 11:11:06.199 # Redis is now ready to exit, bye bye...
Aug 13 11:11:06 saltspring-zynqmp redis[16292]: DB saved on disk
Aug 13 11:11:06 saltspring-zynqmp redis[16292]: Removing the pid file.
Aug 13 11:11:06 saltspring-zynqmp redis[16292]: Redis is now ready to exit, bye bye...
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: redis.service: Succeeded.
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: Stopped redis.service.
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: Starting redis.service...
Aug 13 11:11:06 saltspring-zynqmp redis-check-aof[16354]: Cannot open file: /persistent/redis/appendonly.aof
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.232 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.233 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=16355, just started
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=16355, just started
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.234 # Configuration loaded
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: Configuration loaded
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.234 * supervised by systemd, will signal readiness
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: supervised by systemd, will signal readiness
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: Started redis.service.
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.239 * Increased maximum number of open files to 10032 (it was originally set to 1024).
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Increased maximum number of open files to 10032 (it was originally set to 1024).
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.241 * Running mode=standalone, port=6379.
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Running mode=standalone, port=6379.
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.242 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.242 # Server initialized
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Server initialized
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.242 * Ready to accept connections
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Ready to accept connections
Notice that the expected message
...
Aug 13 11:26:53 saltspring-zynqmp redis[16616]: DB loaded from disk: 0.000 seconds
Aug 13 11:26:53 saltspring-zynqmp redis[16616]: Ready to accept connections
is missing. To get the RDB file read, appendonly must be set to no.
Any thoughts?
Cheers,
Related
I am running Redis server version 6.0.6 on Ubuntu 20.04. The process is run by the "redis" user.
Sometimes the Redis process crashes and gets restarted on its own, and when this happens a lot of data cached in Redis becomes unavailable. This happens every few days/weeks. I can see the following messages in the logs - saving was working fine until 02:32:43 and suddenly failed at 02:34:15:
133121:C 23 Jun 2021 02:27:54.383 * RDB: 22 MB of memory used by copy-on-write
105798:M 23 Jun 2021 02:27:54.511 * Background saving terminated with success
105798:M 23 Jun 2021 02:29:46.279 * 10000 changes in 60 seconds. Saving...
105798:M 23 Jun 2021 02:29:46.354 * Background saving started by pid 133125
133125:C 23 Jun 2021 02:30:16.363 * DB saved on disk
133125:C 23 Jun 2021 02:30:16.464 * RDB: 18 MB of memory used by copy-on-write
105798:M 23 Jun 2021 02:30:16.583 * Background saving terminated with success
105798:M 23 Jun 2021 02:32:14.138 * 10000 changes in 60 seconds. Saving...
105798:M 23 Jun 2021 02:32:14.222 * Background saving started by pid 133131
133131:C 23 Jun 2021 02:32:42.924 * DB saved on disk
133131:C 23 Jun 2021 02:32:42.988 * RDB: 22 MB of memory used by copy-on-write
105798:M 23 Jun 2021 02:32:43.123 * Background saving terminated with success
105798:M 23 Jun 2021 02:34:14.958 * DB saved on disk
105798:M 23 Jun 2021 02:34:15.705 # Failed opening the RDB file systemdd (in server root dir /etc/cron.d) for saving: Permission denied
=== REDIS BUG REPORT START: Cut & paste starting from here ===
105798:M 23 Jun 2021 02:34:15.705 # Redis 6.0.6 crashed by signal: 11
105798:M 23 Jun 2021 02:34:15.705 # Crashed running the instruction at: 0x55f2e7e35099
105798:M 23 Jun 2021 02:34:15.705 # Accessing address: 0x149968
105798:M 23 Jun 2021 02:34:15.705 # Failed assertion: <no assertion failed> (<no file>:0)
------ STACK TRACE ------
EIP:
/usr/bin/redis-server 172.16.106.88:6379(je_malloc_usable_size+0x89)[0x55f2e7e35099]
Backtrace:
/usr/bin/redis-server 172.16.106.88:6379(logStackTrace+0x4f)[0x55f2e7db2bcf]
/usr/bin/redis-server 172.16.106.88:6379(sigsegvHandler+0xb5)[0x55f2e7db33d5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7fb934c173c0]
/usr/bin/redis-server 172.16.106.88:6379(je_malloc_usable_size+0x89)[0x55f2e7e35099]
/usr/bin/redis-server 172.16.106.88:6379(+0x50b79)[0x55f2e7d72b79]
/usr/bin/redis-server 172.16.106.88:6379(rdbSave+0x2ba)[0x55f2e7d9345a]
/usr/bin/redis-server 172.16.106.88:6379(saveCommand+0x67)[0x55f2e7d94ab7]
/usr/bin/redis-server 172.16.106.88:6379(call+0xb1)[0x55f2e7d6a8b1]
/usr/bin/redis-server 172.16.106.88:6379(processCommand+0x4a6)[0x55f2e7d6b446]
/usr/bin/redis-server 172.16.106.88:6379(processCommandAndResetClient+0x14)[0x55f2e7d799e4]
/usr/bin/redis-server 172.16.106.88:6379(processInputBuffer+0x18f)[0x55f2e7d7e39f]
/usr/bin/redis-server 172.16.106.88:6379(+0xe10ac)[0x55f2e7e030ac]
/usr/bin/redis-server 172.16.106.88:6379(aeProcessEvents+0x303)[0x55f2e7d63b83]
/usr/bin/redis-server 172.16.106.88:6379(aeMain+0x1d)[0x55f2e7d63ebd]
/usr/bin/redis-server 172.16.106.88:6379(main+0x4e5)[0x55f2e7d603d5]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fb934a370b3]
/usr/bin/redis-server 172.16.106.88:6379(_start+0x2e)[0x55f2e7d606ae]
The service restarts on its own, the Redis server works fine for a few days/weeks, and then crashes again with the same error!
I have checked several posts on SO, but none of them resolve my issue, since:
a) The instance where the Redis server is running is in a private network (public access is disabled).
b) The DB file name and dir have not been corrupted, as observed from the "config get dbfilename" and "config get dir" commands. They show the default values.
c) The permissions are correct (/var/lib/redis is owned by redis with 755 permissions and /var/lib/redis/dump.rdb is owned by redis with 660 permissions).
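Concretely, these are roughly the checks behind (b) and (c), with the paths assumed from the stock Ubuntu package:

redis-cli config get dbfilename    # returns "dump.rdb" (default)
redis-cli config get dir           # returns "/var/lib/redis" (default)
ls -ld /var/lib/redis              # drwxr-xr-x ... redis redis
ls -l /var/lib/redis/dump.rdb      # -rw-rw---- ... redis redis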
Can anyone help me identify the root cause of this issue please?
I am trying to use SSH to connect to a Google Cloud Compute Engine instance (macOS Catalina):
gcloud beta compute ssh --zone "us-west1-b" "mac-vm" --project "mac-vm-282201"
and get the error:
ssh: connect to host 34.105.11.187 port 22: Operation timed out
ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255].
and I also tried:
ssh -i ~/.ssh/mac-vm-key asd61404@34.105.11.187
and also get the error:
ssh: connect to host 34.105.11.187 port 22: Operation timed out
so I found this command to diagnose it:
gcloud compute ssh --zone "us-west1-b" "mac-vm" --project "mac-vm-282201" --ssh-flag="-vvv"
which returns:
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug2: resolve_canonicalize: hostname 34.105.11.187 is address
debug2: ssh_connect_direct
debug1: Connecting to 34.105.11.187 [34.105.11.187] port 22.
I don't know how I can fix this issue.
Thanks in advance!
Here is my recent serial console output:
Jul 4 02:28:39 mac-vm google_network_daemon[684]: For info, please visit https://www.isc.org/software/dhcp/
Jul 4 02:28:39 mac-vm dhclient[684]:
Jul 4 02:28:39 mac-vm dhclient[684]: Listening on Socket/ens4
[ 19.458355] google_network_daemon[684]: Listening on Socket/ens4
Jul 4 02:28:39 mac-vm google_network_daemon[684]: Listening on Socket/ens4
Jul 4 02:28:39 mac-vm dhclient[684]: Sending on Socket/ens4
[ 19.458697] google_network_daemon[684]: Sending on Socket/ens4
Jul 4 02:28:39 mac-vm google_network_daemon[684]: Sending on Socket/ens4
Jul 4 02:28:39 mac-vm systemd[1]: Finished Wait until snapd is fully seeded.
Jul 4 02:28:39 mac-vm systemd[1]: Starting Apply the settings specified in cloud-config...
Jul 4 02:28:39 mac-vm systemd[1]: Condition check resulted in Auto import assertions from block devices being skipped.
Jul 4 02:28:39 mac-vm systemd[1]: Reached target Multi-User System.
Jul 4 02:28:39 mac-vm systemd[1]: Reached target Graphical Interface.
Jul 4 02:28:39 mac-vm systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jul 4 02:28:39 mac-vm systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Jul 4 02:28:39 mac-vm systemd[1]: Finished Update UTMP about System Runlevel Changes.
[ 20.216129] cloud-init[718]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:config' at Sat, 04 Jul 2020 02:28:39 +0000. Up 20.11 seconds.
Jul 4 02:28:39 mac-vm cloud-init[718]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:config' at Sat, 04 Jul 2020 02:28:39 +0000. Up 20.11 seconds.
Jul 4 02:28:39 mac-vm systemd[1]: Finished Apply the settings specified in cloud-config.
Jul 4 02:28:39 mac-vm systemd[1]: Starting Execute cloud user/final scripts...
Jul 4 02:28:41 mac-vm google-clock-skew: INFO Synced system time with hardware clock.
[ 20.886105] cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:final' at Sat, 04 Jul 2020 02:28:41 +0000. Up 20.76 seconds.
[ 20.886430] cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 finished at Sat, 04 Jul 2020 02:28:41 +0000. Datasource DataSourceGCE. Up 20.87 seconds
Jul 4 02:28:41 mac-vm cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:final' at Sat, 04 Jul 2020 02:28:41 +0000. Up 20.76 seconds.
Jul 4 02:28:41 mac-vm cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 finished at Sat, 04 Jul 2020 02:28:41 +0000. Datasource DataSourceGCE. Up 20.87 seconds
Jul 4 02:28:41 mac-vm systemd[1]: Finished Execute cloud user/final scripts.
Jul 4 02:28:41 mac-vm systemd[1]: Reached target Cloud-init target.
Jul 4 02:28:41 mac-vm systemd[1]: Starting Google Compute Engine Startup Scripts...
Jul 4 02:28:41 mac-vm startup-script: INFO Starting startup scripts.
Jul 4 02:28:41 mac-vm startup-script: INFO Found startup-script in metadata.
Jul 4 02:28:42 mac-vm startup-script: INFO startup-script: sudo: ufw: command not found
Jul 4 02:28:42 mac-vm startup-script: INFO startup-script: Return code 1.
Jul 4 02:28:42 mac-vm startup-script: INFO Finished running startup scripts.
Jul 4 02:28:42 mac-vm systemd[1]: google-startup-scripts.service: Succeeded.
Jul 4 02:28:42 mac-vm systemd[1]: Finished Google Compute Engine Startup Scripts.
Jul 4 02:28:42 mac-vm systemd[1]: Startup finished in 1.396s (kernel) + 20.065s (userspace) = 21.461s.
Jul 4 02:29:06 mac-vm systemd[1]: systemd-hostnamed.service: Succeeded.
Jul 4 02:43:32 mac-vm systemd[1]: Starting Cleanup of Temporary Directories...
Jul 4 02:43:32 mac-vm systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Jul 4 02:43:32 mac-vm systemd[1]: Finished Cleanup of Temporary Directories.
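Since the connection times out before any SSH banner appears, I wonder whether a VPC firewall rule for port 22 is missing; this is the kind of check I would run next (just a sketch, nothing applied yet, and the rule name below is only an example):

gcloud compute firewall-rules list --project "mac-vm-282201"
# if no rule allows tcp:22, a rule along these lines would be needed:
gcloud compute firewall-rules create allow-ssh --project "mac-vm-282201" --allow tcp:22 --direction INGRESS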
On my Ubuntu machine, the Redis server was running fine and then suddenly stopped. After I started it again, it automatically stopped after a few minutes. So I started it again, and so on. Why is this happening?
Here are the logs when I start redis:
21479:C 29 Apr 21:59:10.986 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
21479:C 29 Apr 21:59:10.987 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=21479, just started
21479:C 29 Apr 21:59:10.987 # Configuration loaded
21480:M 29 Apr 21:59:10.990 * Increased maximum number of open files to 10032 (it was originally set to 1024).
21480:M 29 Apr 21:59:10.991 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
21480:M 29 Apr 21:59:10.992 # Server initialized
21480:M 29 Apr 21:59:14.588 * DB loaded from disk: 3.596 seconds
21480:M 29 Apr 21:59:14.591 * Ready to accept connections
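I haven't found the reason for the stops yet; for completeness, these are the places I have been looking (paths and unit name assume the stock Ubuntu redis-server package):

journalctl -u redis-server --since "today" --no-pager
sudo tail -n 100 /var/log/redis/redis-server.log
dmesg | grep -iE "out of memory|killed process"   # check whether the kernel killed the process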
I'm trying to configure 3 Redis instances and 6 Sentinels (3 of them running on the Redis hosts and the rest on different hosts). But when I install the redis-sentinel package, put my configuration under /etc/redis/sentinel.conf, and restart the service using systemctl restart redis-sentinel, I get this error:
Job for redis-sentinel.service failed because a timeout was exceeded.
See "systemctl status redis-sentinel.service" and "journalctl -xe" for details.
Here is the output of journalctl -u redis-sentinel:
Jan 01 08:07:07 redis1 systemd[1]: Starting Advanced key-value store...
Jan 01 08:07:07 redis1 redis-sentinel[16269]: 16269:X 01 Jan 2020 08:07:07.263 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Jan 01 08:07:07 redis1 redis-sentinel[16269]: 16269:X 01 Jan 2020 08:07:07.263 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=16269, just started
Jan 01 08:07:07 redis1 redis-sentinel[16269]: 16269:X 01 Jan 2020 08:07:07.263 # Configuration loaded
Jan 01 08:07:07 redis1 systemd[1]: redis-sentinel.service: Can't open PID file /var/run/sentinel/redis-sentinel.pid (yet?) after start: No such file or directory
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Start operation timed out. Terminating.
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Failed with result 'timeout'.
Jan 01 08:08:37 redis1 systemd[1]: Failed to start Advanced key-value store.
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Service hold-off time over, scheduling restart.
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Scheduled restart job, restart counter is at 5.
Jan 01 08:08:37 redis1 systemd[1]: Stopped Advanced key-value store.
Jan 01 08:08:37 redis1 systemd[1]: Starting Advanced key-value store...
Jan 01 08:08:37 redis1 redis-sentinel[16307]: 16307:X 01 Jan 2020 08:08:37.738 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Jan 01 08:08:37 redis1 redis-sentinel[16307]: 16307:X 01 Jan 2020 08:08:37.739 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=16307, just started
Jan 01 08:08:37 redis1 redis-sentinel[16307]: 16307:X 01 Jan 2020 08:08:37.739 # Configuration loaded
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Can't open PID file /var/run/sentinel/redis-sentinel.pid (yet?) after start: No such file or directory
and my sentinel.conf file:
port 26379
daemonize yes
sentinel myid 851994c7364e2138e03ee1cd346fbdc4f1404e4c
sentinel deny-scripts-reconfig yes
sentinel monitor mymaster 172.28.128.11 6379 2
sentinel down-after-milliseconds mymaster 5000
# Generated by CONFIG REWRITE
dir "/"
protected-mode no
sentinel failover-timeout mymaster 60000
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel current-epoch 0
If you are trying to run your Redis servers on a Debian-based distribution, add the following to your Redis configurations:
pidfile /var/run/redis/redis-sentinel.pid to /etc/redis/sentinel.conf
pidfile /var/run/redis/redis-server.pid to /etc/redis/redis.conf
What's the output in the sentinel log file?
I had a similar issue where Sentinel received a lot of SIGTERMs.
In that case, make sure that if you use the daemonize yes setting, the systemd unit file uses Type=forking.
Also make sure that the location of the PID file specified in the Sentinel config matches the location specified in the systemd unit file.
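A minimal sketch of what that alignment could look like, using the PID path systemd complained about above (adjust paths to your distro's packaging; /usr/bin/redis-sentinel is the Debian location):

# /etc/redis/sentinel.conf
daemonize yes
pidfile /var/run/sentinel/redis-sentinel.pid

# /etc/systemd/system/redis-sentinel.service (relevant lines only)
[Service]
Type=forking
PIDFile=/var/run/sentinel/redis-sentinel.pid
# RuntimeDirectory creates /run/sentinel owned by the service user at start
RuntimeDirectory=sentinel
ExecStart=/usr/bin/redis-sentinel /etc/redis/sentinel.conf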
If you face the error below in journalctl or systemctl logs,
Jun 26 10:13:02 x systemd[1]: redis-server.service: Failed with result 'exit-code'.
Jun 26 10:13:02 x systemd[1]: redis-server.service: Scheduled restart job, restart counter is at 5.
Jun 26 10:13:02 x systemd[1]: Stopped Advanced key-value store.
Jun 26 10:13:02 x systemd[1]: redis-server.service: Start request repeated too quickly.
Jun 26 10:13:02 x systemd[1]: redis-server.service: Failed with result 'exit-code'.
Jun 26 10:13:02 x systemd[1]: Failed to start Advanced key-value store.
Then check /var/log/redis/redis-server.log for more information.
In most cases the issue is mentioned there.
For example, if a dump.rdb file is placed in /var/lib/redis, the issue might be with the database count or the Redis version.
In another scenario, disabled IPv6 might be the issue.
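On a host with IPv6 disabled, the stock bind line is a common reason the server exits right at startup; here is a hedged example of the change (assuming the default Debian/Ubuntu config):

# /etc/redis/redis.conf
# default shipped line:
# bind 127.0.0.1 ::1
# if IPv6 is disabled on the host, drop the ::1 address so the listener can be created:
bind 127.0.0.1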
I have a Raspberry Pi 3B+/Raspbian running my NodeJS (node-red) backend application. My Raspberry is hosting a frontend application (VueJS) as well. I also have a 7" display connected. The purpose of the system is to display a map of 433 MHz electrical switches in my home.
If I, for example, click on a switch on the display, the system should turn the lamp on/off and indicate the current state. This has been working flawlessly for months!
A picture of the display. A JavaScript clock in the lower right corner.
For a few weeks now, I have been facing really strange behaviour:
Sometime between 06:30 and 06:33 every day, something (??) happens and the browser seems to become non-responsive on my 7" display. One strange thing is that I am still able to move the cursor when touching the display. Nothing obviously happens when I click on a button, BUT!, since I start my Chromium instance like this: chromium-browser --disable-gpu --remote-debugging-port=9222 --remote-debugging-address=10.0.0.4 --user-data-dir=remote-profile --kiosk http://localhost/kommandoran2.0/#/ (in /etc/xdg/lxsession/LXDE-pi/autostart), I am able to remote debug. I can see that the correct JavaScripts are invoked when I click on buttons (in the real world, my switches turn on and off). The problem is that the GUI seems to be semi-frozen, at least the GUI in Chrome/kiosk mode. The GUI is not updating itself in Chrome.
This is the inspector from a Chrome instance on a Windows computer in my network when my Pi has been "frozen".
OK, there are some JavaScript errors, but they indicate other things.
Since I am the "developer", I am very sure that I have nothing in either the backend (node-red) or the frontend (VueJS) that should be able to cause this behavior!
Here is some example output from journalctl on my Raspberry:
pi#raspberrypi:~ $ journalctl --since "2019-08-13 06:00:00"
Aug 13 6:09:01 raspberrypi CRON[20587]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:09:01 raspberrypi CRON[20592]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclea
Aug 13 6:09:01 raspberrypi systemd[1]: Starting Clean php session files...
Aug 13 6:09:01 raspberrypi CRON[20587]: pam_unix(cron:session): session closed for user root
Aug 13 6:09:01 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 13 6:09:01 raspberrypi systemd[1]: Started Clean php session files.
Aug 13 6:17:01 raspberrypi CRON[24891]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:17:01 raspberrypi CRON[24895]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 13 6:17:01 raspberrypi CRON[24891]: pam_unix(cron:session): session closed for user root
Aug 13 6:25:01 raspberrypi CRON[29156]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:25:01 raspberrypi CRON[29160]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Aug 13 6:25:02 raspberrypi CRON[29156]: pam_unix(cron:session): session closed for user root
Aug 13 6:30:02 raspberrypi rngd[320]: stats: bits received from HRNG source: 260064
Aug 13 6:30:02 raspberrypi rngd[320]: stats: bits sent to kernel pool: 213824
Aug 13 6:30:02 raspberrypi rngd[320]: stats: entropy added to kernel pool: 213824
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2 successes: 13
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2 failures: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Monobit: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Poker: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Runs: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Long run: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: HRNG source speed: (min=422.800; avg=940.174; max=1173.753)Kibits/s
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS tests speed: (min=5.320; avg=9.536; max=16.542)Mibits/s
Aug 13 6:30:02 raspberrypi rngd[320]: stats: Lowest ready-buffers level: 2
Aug 13 6:30:02 raspberrypi rngd[320]: stats: Entropy starvations: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us
******* 06:32 FREEZE
Aug 13 6:34:19 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 13 6:34:23 raspberrypi systemd[1]: apt-daily-upgrade.service: Succeeded.
Aug 13 6:34:23 raspberrypi systemd[1]: Started Daily apt upgrade and clean activities.
Aug 13 6:39:01 raspberrypi CRON[4436]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:39:01 raspberrypi CRON[4442]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean
Aug 13 6:39:01 raspberrypi systemd[1]: Starting Clean php session files...
...
pi#raspberrypi:~ $ journalctl --since "2019-08-14 06:00:00"
Aug 14 6:09:01 raspberrypi CRON[6668]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:09:02 raspberrypi systemd[1]: Starting Clean php session files...
Aug 14 6:09:02 raspberrypi CRON[6674]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 14 6:09:02 raspberrypi CRON[6668]: pam_unix(cron:session): session closed for user root
Aug 14 6:09:02 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 14 6:09:02 raspberrypi systemd[1]: Started Clean php session files.
Aug 14 6:14:36 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 14 6:14:40 raspberrypi systemd[1]: apt-daily-upgrade.service: Succeeded.
Aug 14 6:14:40 raspberrypi systemd[1]: Started Daily apt upgrade and clean activities.
Aug 14 6:17:01 raspberrypi CRON[11005]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:17:01 raspberrypi CRON[11009]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 14 6:17:01 raspberrypi CRON[11005]: pam_unix(cron:session): session closed for user root
Aug 14 6:25:01 raspberrypi CRON[15276]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:25:01 raspberrypi CRON[15281]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
Aug 14 6:25:02 raspberrypi CRON[15276]: pam_unix(cron:session): session closed for user root
******* 06:32 FREEZE
Aug 14 6:39:01 raspberrypi CRON[22772]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:39:01 raspberrypi CRON[22777]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 14 6:39:01 raspberrypi systemd[1]: Starting Clean php session files...
Aug 14 6:39:01 raspberrypi CRON[22772]: pam_unix(cron:session): session closed for user root
Aug 14 6:39:01 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 14 6:39:01 raspberrypi systemd[1]: Started Clean php session files.
...
******* NOT FREEZING Aug 15
pi#raspberrypi:~ $ journalctl --since "2019-08-16 06:00:00"
Aug 16 6:09:01 raspberrypi CRON[13098]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:09:01 raspberrypi CRON[13102]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 16 6:09:01 raspberrypi CRON[13098]: pam_unix(cron:session): session closed for user root
Aug 16 6:09:03 raspberrypi systemd[1]: Starting Clean php session files...
Aug 16 6:09:04 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 16 6:09:04 raspberrypi systemd[1]: Started Clean php session files.
Aug 16 6:17:01 raspberrypi CRON[21638]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:17:01 raspberrypi CRON[21643]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 16 6:17:01 raspberrypi CRON[21638]: pam_unix(cron:session): session closed for user root
******* 06:31 FREEZE
Aug 16 6:25:01 raspberrypi CRON[30176]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:25:01 raspberrypi CRON[30182]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Aug 16 6:25:02 raspberrypi CRON[30176]: pam_unix(cron:session): session closed for user root
Aug 16 6:39:01 raspberrypi CRON[12819]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:39:01 raspberrypi CRON[12823]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 16 6:39:01 raspberrypi CRON[12819]: pam_unix(cron:session): session closed for user root
Aug 16 6:39:03 raspberrypi systemd[1]: Starting Clean php session files...
Aug 16 6:39:04 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 16 6:39:04 raspberrypi systemd[1]: Started Clean php session files.
Aug 16 6:41:03 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 16 6:41:06 raspberrypi systemd[1]: apt-daily-upgrade.service: Succeeded.
...
I have no problems with power to my Raspberry. I have tried to reinstall the system on a new fresh SD-card. I upgraded from stretch to buster. The problem remains...
This is driving me nuts! I can access my Raspberry Pi via XRDP. Neither the display nor Chromium is completely dead. What is causing the Chrome GUI to stop updating? Why is this happening around 06:30 every morning??
There might be some scheduled cron process, like the apt repository refresh, or some other scheduled maintenance in the default Raspbian configuration (locate database update?). A scheduled process could eat up some of the CPU resources, leaving Chrome less render time.
Have you tried logging the CPU usage in the background? There are some good suggestions here: https://askubuntu.com/questions/22021/how-to-log-cpu-load
This might help you figure out if something else is happening at the same time on your system.
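A minimal sketch of such background logging via cron (the log path is just an example):

# crontab -e: once a minute, record the load and the top CPU consumers
* * * * * { date; uptime; ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10; echo; } >> /home/pi/cpu-load.log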
Unrelated to the main issue, you could also use the Chrome debugger to inspect the render times of your web app and make sure you're not wasteful when it comes to rendering the DOM and canvas. If your page uses a meaningful amount of CPU time to render, it can make sense that background processes stall it, and optimizing it could help lessen the effect the other processes have on it. Again, I'm not suggesting this is the case, but it doesn't hurt to check.