Chromium semi-freezing in a predictable way for very unclear reasons - vue.js

I have a Raspberry Pi 3B+ (Raspbian) running my Node.js (node-red) backend application. The Raspberry Pi is hosting the frontend application (Vue.js) as well, and a 7" display is connected to it. The purpose of the system is to display a map of the 433 MHz electrical switches in my home.
If I, for example, click on a switch on the display, the system should turn the lamp on/off and indicate the current state. This has been working flawlessly for months!
A picture of the display; a JavaScript clock is in the lower right corner.
For a few weeks now, I have been facing really strange behaviour:
Sometime between 06:30 and 06:33 AM every day, something (??) happens and the browser becomes non-responsive on my 7" display. One strange thing is that I am still able to move the cursor when touching the display. Nothing obviously happens when I click on a button, BUT!, since I start my Chromium instance like this (in /etc/xdg/lxsession/LXDE-pi/autostart):
chromium-browser --disable-gpu --remote-debugging-port=9222 --remote-debugging-address=10.0.0.4 --user-data-dir=remote-profile --kiosk http://localhost/kommandoran2.0/#/
I am able to remote debug. I can see that the correct JavaScript is invoked when I click on buttons (in the real world, my switches turn on and off). The problem is that the GUI seems to be semi-frozen, at least in the Chrome kiosk: the GUI is not updating itself.
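For reference, the remote-debugging endpoint can be queried from any machine on the network via the DevTools HTTP interface (address and port taken from the autostart line above); if it still answers while the display is frozen, the renderer is alive but not painting:
curl http://10.0.0.4:9222/json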
This is the inspector from a Chrome instance on a Windows computer on my network while my Pi was "frozen":
OK, there are some JavaScript errors, but they indicate other things.
Since I am the "developer", I am quite sure there is nothing in either the backend (node-red) or the frontend (Vue.js) that could cause this behaviour!
Here is some example output from journalctl on my Raspberry Pi:
pi@raspberrypi:~ $ journalctl --since "2019-08-13 06:00:00"
Aug 13 6:09:01 raspberrypi CRON[20587]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:09:01 raspberrypi CRON[20592]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclea
Aug 13 6:09:01 raspberrypi systemd[1]: Starting Clean php session files...
Aug 13 6:09:01 raspberrypi CRON[20587]: pam_unix(cron:session): session closed for user root
Aug 13 6:09:01 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 13 6:09:01 raspberrypi systemd[1]: Started Clean php session files.
Aug 13 6:17:01 raspberrypi CRON[24891]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:17:01 raspberrypi CRON[24895]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 13 6:17:01 raspberrypi CRON[24891]: pam_unix(cron:session): session closed for user root
Aug 13 6:25:01 raspberrypi CRON[29156]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:25:01 raspberrypi CRON[29160]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Aug 13 6:25:02 raspberrypi CRON[29156]: pam_unix(cron:session): session closed for user root
Aug 13 6:30:02 raspberrypi rngd[320]: stats: bits received from HRNG source: 260064
Aug 13 6:30:02 raspberrypi rngd[320]: stats: bits sent to kernel pool: 213824
Aug 13 6:30:02 raspberrypi rngd[320]: stats: entropy added to kernel pool: 213824
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2 successes: 13
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2 failures: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Monobit: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Poker: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Runs: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Long run: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: HRNG source speed: (min=422.800; avg=940.174; max=1173.753)Kibits/s
Aug 13 6:30:02 raspberrypi rngd[320]: stats: FIPS tests speed: (min=5.320; avg=9.536; max=16.542)Mibits/s
Aug 13 6:30:02 raspberrypi rngd[320]: stats: Lowest ready-buffers level: 2
Aug 13 6:30:02 raspberrypi rngd[320]: stats: Entropy starvations: 0
Aug 13 6:30:02 raspberrypi rngd[320]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us
******* 06:32 FREEZE
Aug 13 6:34:19 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 13 6:34:23 raspberrypi systemd[1]: apt-daily-upgrade.service: Succeeded.
Aug 13 6:34:23 raspberrypi systemd[1]: Started Daily apt upgrade and clean activities.
Aug 13 6:39:01 raspberrypi CRON[4436]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 13 6:39:01 raspberrypi CRON[4442]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean
Aug 13 6:39:01 raspberrypi systemd[1]: Starting Clean php session files...
...
pi@raspberrypi:~ $ journalctl --since "2019-08-14 06:00:00"
Aug 14 6:09:01 raspberrypi CRON[6668]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:09:02 raspberrypi systemd[1]: Starting Clean php session files...
Aug 14 6:09:02 raspberrypi CRON[6674]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 14 6:09:02 raspberrypi CRON[6668]: pam_unix(cron:session): session closed for user root
Aug 14 6:09:02 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 14 6:09:02 raspberrypi systemd[1]: Started Clean php session files.
Aug 14 6:14:36 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 14 6:14:40 raspberrypi systemd[1]: apt-daily-upgrade.service: Succeeded.
Aug 14 6:14:40 raspberrypi systemd[1]: Started Daily apt upgrade and clean activities.
Aug 14 6:17:01 raspberrypi CRON[11005]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:17:01 raspberrypi CRON[11009]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 14 6:17:01 raspberrypi CRON[11005]: pam_unix(cron:session): session closed for user root
Aug 14 6:25:01 raspberrypi CRON[15276]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:25:01 raspberrypi CRON[15281]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
Aug 14 6:25:02 raspberrypi CRON[15276]: pam_unix(cron:session): session closed for user root
******* 06:32 FREEZE
Aug 14 6:39:01 raspberrypi CRON[22772]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 14 6:39:01 raspberrypi CRON[22777]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 14 6:39:01 raspberrypi systemd[1]: Starting Clean php session files...
Aug 14 6:39:01 raspberrypi CRON[22772]: pam_unix(cron:session): session closed for user root
Aug 14 6:39:01 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 14 6:39:01 raspberrypi systemd[1]: Started Clean php session files.
...
******* NOT FREEZING Aug 15
pi@raspberrypi:~ $ journalctl --since "2019-08-16 06:00:00"
Aug 16 6:09:01 raspberrypi CRON[13098]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:09:01 raspberrypi CRON[13102]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 16 6:09:01 raspberrypi CRON[13098]: pam_unix(cron:session): session closed for user root
Aug 16 6:09:03 raspberrypi systemd[1]: Starting Clean php session files...
Aug 16 6:09:04 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 16 6:09:04 raspberrypi systemd[1]: Started Clean php session files.
Aug 16 6:17:01 raspberrypi CRON[21638]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:17:01 raspberrypi CRON[21643]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 16 6:17:01 raspberrypi CRON[21638]: pam_unix(cron:session): session closed for user root
******* 06:31 FREEZE
Aug 16 6:25:01 raspberrypi CRON[30176]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:25:01 raspberrypi CRON[30182]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Aug 16 6:25:02 raspberrypi CRON[30176]: pam_unix(cron:session): session closed for user root
Aug 16 6:39:01 raspberrypi CRON[12819]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 16 6:39:01 raspberrypi CRON[12823]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]
Aug 16 6:39:01 raspberrypi CRON[12819]: pam_unix(cron:session): session closed for user root
Aug 16 6:39:03 raspberrypi systemd[1]: Starting Clean php session files...
Aug 16 6:39:04 raspberrypi systemd[1]: phpsessionclean.service: Succeeded.
Aug 16 6:39:04 raspberrypi systemd[1]: Started Clean php session files.
Aug 16 6:41:03 raspberrypi systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 16 6:41:06 raspberrypi systemd[1]: apt-daily-upgrade.service: Succeeded.
...
I have no problems with power to my Raspberry Pi. I have tried reinstalling the system on a fresh new SD card, and I upgraded from Stretch to Buster. The problem remains...
This is driving me nuts! I can access my Raspberry Pi via XRDP, and neither the display nor Chromium is completely dead. What is causing the Chrome GUI to stop updating? Why is this happening every morning around 06:30 AM??

There might be some scheduled cron process, like the apt repository refresh, or some other scheduled maintenance in the default Raspbian configuration (locate database update?). A scheduled process could eat up CPU resources, leaving Chrome less render time.
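Notably, the journal excerpts above show apt-daily-upgrade.service running within minutes of each freeze; on Debian/Raspbian its timer typically fires at 06:00 plus a randomized delay of up to an hour, which fits the 06:1x-06:4x runs in the logs. A quick check with standard systemd commands:
systemctl list-timers --all             # when each timer last ran and next fires
systemctl cat apt-daily-upgrade.timer   # shows OnCalendar and RandomizedDelaySec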
Have you tried logging the CPU usage in the background? There are some good suggestions here: https://askubuntu.com/questions/22021/how-to-log-cpu-load
This might help you figure out whether something else is happening on your system at the same time.
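A minimal sketch (the log path and interval are just examples): an /etc/cron.d entry that snapshots the top CPU consumers every minute, so you can see afterwards what was running at 06:3x:
# /etc/cron.d/cpu-watch -- timestamped snapshot of the busiest processes
* * * * * root (date; top -bn1 | head -n 15) >> /var/log/cpu-watch.log 2>&1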
Unrelated to the main issue, you could also use the Chrome debugger to inspect the render times of your web app and make sure you're not being wasteful when rendering the DOM and canvas. If your page uses a meaningful amount of CPU time to render, it would make sense that background processes stall it, and optimizing it could lessen the effect those other processes have on it. Again, I'm not suggesting this is the case, but it doesn't hurt to check.

Related

Redis 4.0.14 AOF and RDB Restore

I have enabled both RDB and AOF backup via save 1 1 and appendonly yes. This configuration creates both RDB and AOF files at the prescribed locations. However, during a restart of Redis, the following is noticed:
If appendonly yes, then the RDB file is not read, regardless of whether the AOF file exists or not
If appendonly no, then the RDB file is read
I've tested the above by setting appendonly yes and running rm /persistent/redis/appendonly.aof; systemctl restart redis. The log file shows
Aug 13 11:11:06 saltspring-zynqmp redis-server[16292]: 16292:M 13 Aug 11:11:06.199 # Redis is now ready to exit, bye bye...
Aug 13 11:11:06 saltspring-zynqmp redis[16292]: DB saved on disk
Aug 13 11:11:06 saltspring-zynqmp redis[16292]: Removing the pid file.
Aug 13 11:11:06 saltspring-zynqmp redis[16292]: Redis is now ready to exit, bye bye...
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: redis.service: Succeeded.
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: Stopped redis.service.
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: Starting redis.service...
Aug 13 11:11:06 saltspring-zynqmp redis-check-aof[16354]: Cannot open file: /persistent/redis/appendonly.aof
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.232 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.233 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=16355, just started
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=16355, just started
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.234 # Configuration loaded
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: Configuration loaded
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:C 13 Aug 11:11:06.234 * supervised by systemd, will signal readiness
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: supervised by systemd, will signal readiness
Aug 13 11:11:06 saltspring-zynqmp systemd[1]: Started redis.service.
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.239 * Increased maximum number of open files to 10032 (it was originally set to 1024).
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Increased maximum number of open files to 10032 (it was originally set to 1024).
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.241 * Running mode=standalone, port=6379.
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Running mode=standalone, port=6379.
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.242 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.242 # Server initialized
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Server initialized
Aug 13 11:11:06 saltspring-zynqmp redis-server[16355]: 16355:M 13 Aug 11:11:06.242 * Ready to accept connections
Aug 13 11:11:06 saltspring-zynqmp redis[16355]: Ready to accept connections
Notice that the expected message
...
Aug 13 11:26:53 saltspring-zynqmp redis[16616]: DB loaded from disk: 0.000 seconds
Aug 13 11:26:53 saltspring-zynqmp redis[16616]: Ready to accept connections
is missing. To get the RDB file read, appendonly must be set to no.
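For what it's worth, this matches Redis's documented startup rule: when appendonly is yes, only the AOF is consulted, and a missing AOF means starting with an empty dataset. A minimal sketch of the usual way to seed an AOF from existing RDB data (assuming redis-cli can reach the instance):
# 1. With appendonly no in redis.conf, restart so the RDB file is loaded:
systemctl restart redis
# 2. Enable AOF at runtime; Redis writes a fresh AOF from the loaded dataset:
redis-cli CONFIG SET appendonly yes
# 3. Persist the setting back into redis.conf:
redis-cli CONFIG REWRITE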
Any thoughts?
Cheers,

Can't start Varnish cache on port 80 Ubuntu 16.04

Please help me make friends with my Ubuntu 16.04 Apache2 web server!
After installing, Varnish cache started normally. But after putting it on port 80, Varnish can't start.
Creating /etc/systemd/system/varnish.service.d/customexec.conf:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s default,256m
Then
systemctl daemon-reload
service varnish start
Varnish does not start:
varnish.service - Varnish Cache, a high-performance HTTP accelerator
Loaded: loaded (/lib/systemd/system/varnish.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/varnish.service.d
└─customexec.conf
Active: failed (Result: exit-code) since Thu 2020-07-23 09:41:12 MSK; 21s ago
Process: 5886 ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s default,256m (code=exited,
Main PID: 20786 (code=exited, status=0/SUCCESS)
Jul 23 09:41:12 mj33 systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
Jul 23 09:41:12 mj33 varnishd[5886]: Error: Cannot open -S file (/etc/varnish/secret): No such file or directory
Jul 23 09:41:12 mj33 varnishd[5886]: (-? gives usage)
Jul 23 09:41:12 mj33 systemd[1]: varnish.service: Control process exited, code=exited status=255
Jul 23 09:41:12 mj33 systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
Jul 23 09:41:12 mj33 systemd[1]: varnish.service: Unit entered failed state.
Jul 23 09:41:12 mj33 systemd[1]: varnish.service: Failed with result 'exit-code'.
I tried creating the secret file and editing /etc/systemd/system/varnish.service.d/customexec.conf:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
Varnish started the first time, but after a stop/start it does not start again:
● varnish.service - Varnish Cache, a high-performance HTTP accelerator
Loaded: loaded (/lib/systemd/system/varnish.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/varnish.service.d
└─customexec.conf
Active: failed (Result: exit-code) since Fri 2020-07-24 10:52:41 MSK; 8s ago
Process: 9974 ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m (code=exited, status=2
Main PID: 8395 (code=exited, status=0/SUCCESS)
Jul 24 10:52:40 mj33 systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
Jul 24 10:52:41 mj33 varnishd[9974]: Debug: Version: varnish-6.0.6 revision 29a1a8243dbef3d973aec28dc90403188c1dc8e7
Jul 24 10:52:41 mj33 varnishd[9974]: Debug: Platform: Linux,4.4.0-135-generic,x86_64,-junix,-smalloc,-sdefault,-hcritbit
Jul 24 10:52:41 mj33 varnishd[9974]: Empty secret-file "/etc/varnish/secret"
Jul 24 10:52:41 mj33 varnishd[9976]: Version: varnish-6.0.6 revision 29a1a8243dbef3d973aec28dc90403188c1dc8e7
Jul 24 10:52:41 mj33 varnishd[9976]: Platform: Linux,4.4.0-135-generic,x86_64,-junix,-smalloc,-sdefault,-hcritbit
Jul 24 10:52:41 mj33 systemd[1]: varnish.service: Control process exited, code=exited status=255
Jul 24 10:52:41 mj33 systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
Jul 24 10:52:41 mj33 systemd[1]: varnish.service: Unit entered failed state.
Jul 24 10:52:41 mj33 systemd[1]: varnish.service: Failed with result 'exit-code'.
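The "Empty secret-file" line above points at the immediate problem: varnishd's -S option needs a file with non-empty content. A minimal sketch (a random UUID is the conventional content):
sudo sh -c 'uuidgen > /etc/varnish/secret'
sudo chmod 600 /etc/varnish/secret
sudo systemctl restart varnish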
The problem was:
Debug: Child (20551) Started
Error: Child (20551) Acceptor start failed:
Listen failed on socket ':80': Address already in use
Debug: Stopping Child
Info: Child (20551) ended
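So the real blocker is that something (almost certainly Apache) is still bound to port 80. A minimal sketch of the usual fix, assuming Apache is the occupant; note that any <VirtualHost *:80> entries need the same port change:
sudo ss -tlnp | grep :80                                  # confirm who holds port 80
sudo sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
sudo systemctl restart apache2
sudo systemctl start varnish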

(gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255]

Trying to use ssh to connect to a Google Cloud Compute Engine instance (macOS Catalina):
gcloud beta compute ssh --zone "us-west1-b" "mac-vm" --project "mac-vm-282201"
and I get this error:
ssh: connect to host 34.105.11.187 port 22: Operation timed out
ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255].
and I try
ssh -i ~/.ssh/mac-vm-key asd61404@34.105.11.187
and also get the error:
ssh: connect to host 34.105.11.187 port 22: Operation timed out
So I found this command to diagnose it:
gcloud compute ssh --zone "us-west1-b" "mac-vm" --project "mac-vm-282201" --ssh-flag="-vvv"
which returns:
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug2: resolve_canonicalize: hostname 34.105.11.187 is address
debug2: ssh_connect_direct
debug1: Connecting to 34.105.11.187 [34.105.11.187] port 22.
I don't know how I can fix this issue.
Thanks in advance!
Here is my recent serial console output:
Jul 4 02:28:39 mac-vm google_network_daemon[684]: For info, please visit https://www.isc.org/software/dhcp/
Jul 4 02:28:39 mac-vm dhclient[684]:
Jul 4 02:28:39 mac-vm dhclient[684]: Listening on Socket/ens4
[ 19.458355] google_network_daemon[684]: Listening on Socket/ens4
Jul 4 02:28:39 mac-vm google_network_daemon[684]: Listening on Socket/ens4
Jul 4 02:28:39 mac-vm dhclient[684]: Sending on Socket/ens4
[ 19.458697] google_network_daemon[684]: Sending on Socket/ens4
Jul 4 02:28:39 mac-vm google_network_daemon[684]: Sending on Socket/ens4
Jul 4 02:28:39 mac-vm systemd[1]: Finished Wait until snapd is fully seeded.
Jul 4 02:28:39 mac-vm systemd[1]: Starting Apply the settings specified in cloud-config...
Jul 4 02:28:39 mac-vm systemd[1]: Condition check resulted in Auto import assertions from block devices being skipped.
Jul 4 02:28:39 mac-vm systemd[1]: Reached target Multi-User System.
Jul 4 02:28:39 mac-vm systemd[1]: Reached target Graphical Interface.
Jul 4 02:28:39 mac-vm systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jul 4 02:28:39 mac-vm systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Jul 4 02:28:39 mac-vm systemd[1]: Finished Update UTMP about System Runlevel Changes.
[ 20.216129] cloud-init[718]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:config' at Sat, 04 Jul 2020 02:28:39 +0000. Up 20.11 seconds.
Jul 4 02:28:39 mac-vm cloud-init[718]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:config' at Sat, 04 Jul 2020 02:28:39 +0000. Up 20.11 seconds.
Jul 4 02:28:39 mac-vm systemd[1]: Finished Apply the settings specified in cloud-config.
Jul 4 02:28:39 mac-vm systemd[1]: Starting Execute cloud user/final scripts...
Jul 4 02:28:41 mac-vm google-clock-skew: INFO Synced system time with hardware clock.
[ 20.886105] cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:final' at Sat, 04 Jul 2020 02:28:41 +0000. Up 20.76 seconds.
[ 20.886430] cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 finished at Sat, 04 Jul 2020 02:28:41 +0000. Datasource DataSourceGCE. Up 20.87 seconds
Jul 4 02:28:41 mac-vm cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 running 'modules:final' at Sat, 04 Jul 2020 02:28:41 +0000. Up 20.76 seconds.
Jul 4 02:28:41 mac-vm cloud-init[725]: Cloud-init v. 20.1-10-g71af48df-0ubuntu5 finished at Sat, 04 Jul 2020 02:28:41 +0000. Datasource DataSourceGCE. Up 20.87 seconds
Jul 4 02:28:41 mac-vm systemd[1]: Finished Execute cloud user/final scripts.
Jul 4 02:28:41 mac-vm systemd[1]: Reached target Cloud-init target.
Jul 4 02:28:41 mac-vm systemd[1]: Starting Google Compute Engine Startup Scripts...
Jul 4 02:28:41 mac-vm startup-script: INFO Starting startup scripts.
Jul 4 02:28:41 mac-vm startup-script: INFO Found startup-script in metadata.
Jul 4 02:28:42 mac-vm startup-script: INFO startup-script: sudo: ufw: command not found
Jul 4 02:28:42 mac-vm startup-script: INFO startup-script: Return code 1.
Jul 4 02:28:42 mac-vm startup-script: INFO Finished running startup scripts.
Jul 4 02:28:42 mac-vm systemd[1]: google-startup-scripts.service: Succeeded.
Jul 4 02:28:42 mac-vm systemd[1]: Finished Google Compute Engine Startup Scripts.
Jul 4 02:28:42 mac-vm systemd[1]: Startup finished in 1.396s (kernel) + 20.065s (userspace) = 21.461s.
Jul 4 02:29:06 mac-vm systemd[1]: systemd-hostnamed.service: Succeeded.
Jul 4 02:43:32 mac-vm systemd[1]: Starting Cleanup of Temporary Directories...
Jul 4 02:43:32 mac-vm systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Jul 4 02:43:32 mac-vm systemd[1]: Finished Cleanup of Temporary Directories.
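One clue in the serial console is the startup-script line "ufw: command not found": whatever firewall setup the startup script intended never ran. Since the connection times out rather than being refused, the GCP-level firewall is worth checking too; a minimal sketch with standard gcloud commands (the rule name is just an example):
gcloud compute firewall-rules list --project mac-vm-282201
gcloud compute firewall-rules create allow-ssh --project mac-vm-282201 \
    --allow tcp:22 --direction INGRESS --source-ranges 0.0.0.0/0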

apache2.service: Failed to run 'start' task: No such file or directory

I can't start my Apache server on Debian 9.
I tried reinstalling:
sudo apt-get autoremove --purge apache2 && sudo apt-get install apache2
but no change...
Job for apache2.service failed because of unavailable resources or another system error.
See "systemctl status apache2.service" and "journalctl -xe" for details.
invoke-rc.d: initscript apache2, action "restart" failed.
systemctl status apache2.service
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: failed (Result: resources)
journalctl -xeu apache2.service
(I set the log level to debug mode)
Sep 05 11:45:44 systemd[1]: apache2.service: Failed with result 'resources'.
Sep 05 11:50:26 systemd[1]: apache2.service: Changed dead -> failed
Sep 05 11:50:27 systemd[1]: apache2.service: Trying to enqueue job apache2.service/stop/replace
Sep 05 11:50:27 systemd[1]: apache2.service: Installed new job apache2.service/stop as 1415
Sep 05 11:50:27 systemd[1]: apache2.service: Enqueued job apache2.service/stop as 1415
Sep 05 11:50:27 systemd[1]: apache2.service: Job apache2.service/stop finished, result=done
Sep 05 11:50:27 systemd[1]: apache2.service: Changed dead -> failed
Sep 05 11:50:30 systemd[1]: apache2.service: Failed to run 'start' task: No such file or directory
Sep 05 11:50:30 systemd[1]: Failed to start The Apache HTTP Server.
-- Subject: Unit apache2.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit apache2.service has failed.
--
-- The result is failed.
Sep 05 11:50:30 systemd[1]: apache2.service: Failed with result 'resources'.
What's wrong?
Maybe this is a problem with the service's 'tmp' directory. I had a similar error with systemd-resolved.service, and the reason was a missing '/var/tmp' directory after a system migration. Check which temp directory the service is using and create it if necessary.
Also, if systemd is newly running and there is leftover junk in /var/tmp/, you might have to clear that junk out and try running the service again.
In my case it turned out to be this (without apache2 running at the time):
root@www:/var/tmp # ls -al
total 32
drwxrwxrwt 8 root root 4096 Dec 15 12:48 .
drwxr-xr-x 14 root root 4096 Jul 8 21:43 ..
drwx------ 2 root root 4096 Dec 15 12:48 systemd-private-1dcdfe608b6c41f387936225d86126c7-apache2.service-L0KeaS
drwx------ 2 root root 4096 Dec 8 03:09 systemd-private-39294ac7bf4b44198d87d45660dcbac2-phpsessionclean.service-4ShLZm
drwx------ 2 root root 4096 Dec 15 04:00 systemd-private-451ad0c3bfe6435891a80a6c714a222b-apache2.service-YQyZes
drwx------ 2 root root 4096 Dec 15 07:09 systemd-private-451ad0c3bfe6435891a80a6c714a222b-phpsessionclean.service-5L25TU
drwx------ 3 root root 4096 Dec 15 03:53 systemd-private-68bc1493e8804c968af642a2319c4e79-apache2.service-RY1iLF
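If /var/tmp is missing or polluted like this, a minimal sketch of the cleanup described above (run while apache2 is stopped; 1777 is the sticky, world-writable mode systemd expects):
sudo mkdir -p /var/tmp
sudo chmod 1777 /var/tmp
sudo rm -rf /var/tmp/systemd-private-*
sudo systemctl restart apache2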

Debian Wheezy to Jessie Apache start/Cloudflare issue

I ran apt-get dist-upgrade from Debian Wheezy to Jessie and kept my LAMP stack config files, and now I'm getting the following error when attempting to start Apache. (I'm totally stumped here. Why would Apache/the Cloudflare module not load?)
root@county:~# /etc/init.d/apache2 restart
[....] Restarting apache2 (via systemctl): apache2.serviceJob for apache2.service failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details.
failed!
root@county:~# systemctl status apache2.service -l
● apache2.service - LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2)
Active: failed (Result: exit-code) since Thu 2015-07-16 20:58:34 EDT; 21s ago
Process: 2166 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)
Jul 16 20:58:34 county systemd[1]: Starting LSB: Apache2 web server...
Jul 16 20:58:34 county apache2[2166]: Starting web server: apache2 failed!
Jul 16 20:58:34 county apache2[2166]: The apache2 configtest failed. ... (warning).
Jul 16 20:58:34 county apache2[2166]: Output of config test was:
Jul 16 20:58:34 county apache2[2166]: apache2: Syntax error on line 244 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/cloudflare.load: Cannot load /usr/lib/apache2/modules/mod_cloudflare.so into server: /usr/lib/apache2/modules/mod_cloudflare.so: undefined symbol: ap_log_rerror
Jul 16 20:58:34 county apache2[2166]: Action 'configtest' failed.
Jul 16 20:58:34 county apache2[2166]: The Apache error log may have more information.
Jul 16 20:58:34 county systemd[1]: apache2.service: control process exited, code=exited status=1
Jul 16 20:58:34 county systemd[1]: Failed to start LSB: Apache2 web server.
Jul 16 20:58:34 county systemd[1]: Unit apache2.service entered failed state.
root@county:~# journalctl -xn
-- Logs begin at Thu 2015-07-16 20:51:26 EDT, end at Thu 2015-07-16 20:59:02 EDT. --
Jul 16 20:59:01 county CRON[2208]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:01 county CRON[2206]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:01 county CRON[2209]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:01 county CRON[2211]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:01 county CRON[2203]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:01 county CRON[2201]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:01 county CRON[2207]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:02 county CRON[2202]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:02 county CRON[2200]: pam_unix(cron:session): session closed for user root
Jul 16 20:59:02 county CRON[2204]: pam_unix(cron:session): session closed for user root
Cloudflare.load contains the following:
LoadModule cloudflare_module /usr/lib/apache2/modules/mod_cloudflare.so
mod_cloudflare.so is from https://github.com/cloudflare/mod_cloudflare
Update: the issue goes far beyond Cloudflare.
Jul 16 22:27:41 county apache2[1674]: apache2: Syntax error on line 265 of /etc/apache2/apache2.conf: Could not open configuration...irectory
Jul 16 22:27:41 county apache2[1674]: Action 'configtest' failed.
Jul 16 22:27:41 county apache2[1674]: apache2: Syntax error on line 265 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/conf.d/: No such file or directory
Jul 16 22:29:18 county apache2[1963]: AH00526: Syntax error on line 89 of /etc/apache2/apache2.conf:
Jul 16 22:29:18 county apache2[1963]: Invalid command 'LockFile', perhaps misspelled or defined by a module not included in the server configuration
My apache2.conf (I can't even get the version number [it's the latest version] after upgrading from Wheezy to Jessie, because I get an error message about this config):
http://paste.debian.net/283153/
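Both of the later errors are classic Apache 2.2 -> 2.4 migration breakage (Jessie ships Apache 2.4, while the kept Wheezy-era apache2.conf still uses 2.2 directives). A minimal sketch of the edits implied by the two log lines above:
# In /etc/apache2/apache2.conf:
# 'LockFile ...' was removed in 2.4; delete it, or replace it with:
Mutex file:${APACHE_LOCK_DIR} default
# 'Include conf.d/' no longer resolves; 2.4 reads conf-enabled/ instead:
IncludeOptional conf-enabled/*.conf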
Did you modify /etc/apt/sources.list.d/cloudflare-main.list, replacing the word wheezy with jessie?
If you did not change this file, you will get the package built for the old stable rather than the current stable Debian version.
So, put jessie instead of wheezy in /etc/apt/sources.list.d/cloudflare-main.list, run
apt-get update
and then
apt-get upgrade
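As a one-liner (keeping a backup of the original list file):
sudo sed -i.bak 's/wheezy/jessie/g' /etc/apt/sources.list.d/cloudflare-main.list
sudo apt-get update && sudo apt-get upgrade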