nova-compute service state is down - rabbitmq

I have an all-in-one setup with my controller and compute services running on the same node. All my nova and other dependent services are up and running. However, when I try to launch an instance, the nova-compute service goes down, and because of this the instance is stuck in the spawning state.
[root@localhost nova(keystone_admin)]# nova service-list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| 6  | nova-cert        | localhost.localdomain | internal | enabled | up    | 2016-11-04T07:24:32.000000 | -               |
| 7  | nova-consoleauth | localhost.localdomain | internal | enabled | up    | 2016-11-04T07:24:32.000000 | -               |
| 8  | nova-scheduler   | localhost.localdomain | internal | enabled | up    | 2016-11-04T07:24:33.000000 | -               |
| 9  | nova-conductor   | localhost.localdomain | internal | enabled | up    | 2016-11-04T07:24:33.000000 | -               |
| 11 | nova-compute     | localhost.localdomain | nova     | enabled | down  | 2016-11-04T06:43:03.000000 | -               |
| 12 | nova-console     | localhost.localdomain | internal | enabled | up    | 2016-11-04T07:24:32.000000 | -               |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
====
[root@localhost nova(keystone_admin)]# systemctl status openstack-nova-compute.service -l
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-11-04 12:08:54 IST; 49min ago
 Main PID: 37586 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─37586 /usr/bin/python2 /usr/bin/nova-compute

Nov 04 12:08:46 localhost.localdomain systemd[1]: Starting OpenStack Nova Compute Server...
Nov 04 12:08:53 localhost.localdomain nova-compute[37586]: Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.
Nov 04 12:08:53 localhost.localdomain nova-compute[37586]: Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
Nov 04 12:08:54 localhost.localdomain systemd[1]: Started OpenStack Nova Compute Server.
========
The status for the nova-compute process is perfectly fine.
My rabbitmq service is also running.
FYI,
[root@localhost nova(keystone_admin)]# systemctl status rabbitmq-server
● rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/rabbitmq-server.service.d
           └─limits.conf
   Active: active (running) since Thu 2016-11-03 12:32:08 IST; 24h ago
 Main PID: 1835 (beam.smp)
   CGroup: /system.slice/rabbitmq-server.service
           ├─1835 /usr/lib64/erlang/erts-5.10.4/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq --...
           ├─1964 /usr/lib64/erlang/erts-5.10.4/bin/epmd -daemon
           ├─5873 inet_gethost 4
           └─5875 inet_gethost 4

Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: {user,<<"guest">>,
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: [administrator],
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: rabbit_auth_backend_internal,...},
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: <<"/">>,
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: [{<<...>>,...},{...}],
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: <0.14812.0>,<0.14816.0>]}},
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: {restart_type,intrinsic},
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: {shutdown,4294967295},
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: {child_type,worker}]}]}}
Nov 04 12:13:12 localhost.localdomain rabbitmq-server[1835]: function_clause
=======
[root@localhost nova(keystone_admin)]# netstat -anp | grep 5672 | grep 37586
tcp        0      0 10.1.10.22:55628        10.1.10.22:5672         ESTABLISHED 37586/python2
tcp        0      0 10.1.10.22:56204        10.1.10.22:5672         ESTABLISHED 37586/python2
tcp        0      0 10.1.10.22:56959        10.1.10.22:5672         ESTABLISHED 37586/python2
=====
37586 is the nova-compute process id.
I have checked the logs for nova-compute, nova-api and nova-conductor and there are no errors.
I have checked the nova-scheduler logs, and there are errors stating that connections to rabbitmq and the database service were refused.
2016-11-03 12:24:50.930 2092 ERROR nova.servicegroup.drivers.db DBConnectionError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '10.1.10.22' ([Errno 111] ECONNREFUSED)")
2016-11-03 12:24:53.811 2092 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 10.1.10.22:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 16 seconds.
=======
Can someone suggest what I should do to handle this?
Since everything is on the same node, why are these services not reachable?

If nova-compute shows as down, there are two possible reasons:
a. nova-compute is actually down
b. it cannot communicate with rabbit, or nova-conductor cannot communicate with rabbit.
As far as I can see in your logs, you have an issue with rabbit: "10.1.10.22:5672 is unreachable". Check that rabbit is listening on this IP/port, and that you can connect to rabbit from the compute host. I usually use nc 10.1.10.22 5672 to see whether a connection can be established.
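If nc is not installed, the same reachability check can be scripted with bash alone; a minimal sketch (the host/port are taken from your log output, swap in your own):

```shell
# Minimal TCP reachability probe using bash's /dev/tcp (no nc required).
check_port() {
  # exit 0 if a TCP connection to $1:$2 can be opened within 2 seconds
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

if check_port 10.1.10.22 5672; then
  echo "rabbit port reachable"
else
  echo "rabbit port unreachable"
fi
```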
Check if nova settings for rabbit are correct. Example of correct settings:
[DEFAULT]
rpc_backend=rabbit
rabbit_host=rabbitmq-ip-here
rabbit_port=5672
rabbit_hosts=$rabbit_host:$rabbit_port
rabbit_use_ssl=false
rabbit_userid=guest
rabbit_password=guest
rabbit_login_method=AMQPLAIN
rabbit_virtual_host=/compute
Check logs in the /var/log/nova/*.log
Enable debug=true in the [DEFAULT] section of nova.conf
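The errors you already pasted are exactly what to filter for; a sketch of the filter, demonstrated here on a sample line copied from the error above (in practice you would point it at /var/log/nova/*.log):

```shell
# Filter for connection errors; the sample line stands in for real log content.
sample='2016-11-03 12:24:53.811 2092 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 10.1.10.22:5672 is unreachable: [Errno 111] ECONNREFUSED.'
echo "$sample" | grep -E 'ECONNREFUSED|unreachable|DBConnectionError'

# Against the real logs it would be:
#   grep -hE 'ECONNREFUSED|unreachable|DBConnectionError' /var/log/nova/*.log | tail -n 20
```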

Related

Ubuntu server: Apache status show old failure information, and is running fine

I am a bit puzzled about this... Apache status is showing information from 12 days ago. It's running fine and the website is working. Isn't the status command supposed to show the current state of Apache?
# systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2022-01-05 05:19:24 CET; 1 weeks 5 days ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 514 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Jan 05 05:19:24 serverX systemd[1]: Starting The Apache HTTP Server...
Jan 05 05:19:24 serverX apachectl[532]: AH00112: Warning: DocumentRoot [/some/dir] does not exist
Jan 05 05:19:24 serverX apachectl[532]: (99)Cannot assign requested address: AH00072: make_sock: could not bind to address 10.42.24.6:80
Jan 05 05:19:24 serverX apachectl[532]: no listening sockets available, shutting down
Jan 05 05:19:24 serverX apachectl[532]: AH00015: Unable to open logs
Jan 05 05:19:24 serverX apachectl[514]: Action 'start' failed.
Jan 05 05:19:24 serverX apachectl[514]: The Apache error log may have more information.
Jan 05 05:19:24 serverX systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
Jan 05 05:19:24 serverX systemd[1]: apache2.service: Failed with result 'exit-code'.
Jan 05 05:19:24 serverX systemd[1]: Failed to start The Apache HTTP Server.
OS: Ubuntu 20
netstat shows that port 80 is actually listening on 10.42.24.6
# netstat -tulpn | grep apache
tcp 0 0 10.42.24.6:80 0.0.0.0:* LISTEN 1557/apache2
tcp 0 0 10.42.24.6:443 0.0.0.0:* LISTEN 1557/apache2
Also Apache is logging requests to the access logs
I noticed this problem because I am using systemctl status apache2 to check Apache's status in a new shell script I wrote... but the way this command works makes it useless for that.
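For what it's worth, systemctl status reports systemd's last-known view of the unit, which goes stale when the daemon is (re)started outside systemd; that seems to be what happened here, and restarting the unit via systemctl should bring the two back in sync. For a monitoring script, checking for the process itself may be more robust than parsing status text; a minimal sketch:

```shell
# Check for a live apache2 process rather than systemd's cached status text.
if pgrep -x apache2 >/dev/null 2>&1; then
  echo "apache2 process present"
else
  echo "no apache2 process found"
fi
```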

Cannot restart redis-sentinel unit

I'm trying to configure 3 Redis instances and 6 Sentinels (3 of them running on the Redis hosts and the rest on different hosts). But when I install the redis-sentinel package, put my configuration under /etc/redis/sentinel.conf, and restart the service using systemctl restart redis-sentinel, I get this error:
Job for redis-sentinel.service failed because a timeout was exceeded.
See "systemctl status redis-sentinel.service" and "journalctl -xe" for details.
Here is the output of journalctl -u redis-sentinel:
Jan 01 08:07:07 redis1 systemd[1]: Starting Advanced key-value store...
Jan 01 08:07:07 redis1 redis-sentinel[16269]: 16269:X 01 Jan 2020 08:07:07.263 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Jan 01 08:07:07 redis1 redis-sentinel[16269]: 16269:X 01 Jan 2020 08:07:07.263 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=16269, just started
Jan 01 08:07:07 redis1 redis-sentinel[16269]: 16269:X 01 Jan 2020 08:07:07.263 # Configuration loaded
Jan 01 08:07:07 redis1 systemd[1]: redis-sentinel.service: Can't open PID file /var/run/sentinel/redis-sentinel.pid (yet?) after start: No such file or directory
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Start operation timed out. Terminating.
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Failed with result 'timeout'.
Jan 01 08:08:37 redis1 systemd[1]: Failed to start Advanced key-value store.
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Service hold-off time over, scheduling restart.
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Scheduled restart job, restart counter is at 5.
Jan 01 08:08:37 redis1 systemd[1]: Stopped Advanced key-value store.
Jan 01 08:08:37 redis1 systemd[1]: Starting Advanced key-value store...
Jan 01 08:08:37 redis1 redis-sentinel[16307]: 16307:X 01 Jan 2020 08:08:37.738 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Jan 01 08:08:37 redis1 redis-sentinel[16307]: 16307:X 01 Jan 2020 08:08:37.739 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=16307, just started
Jan 01 08:08:37 redis1 redis-sentinel[16307]: 16307:X 01 Jan 2020 08:08:37.739 # Configuration loaded
Jan 01 08:08:37 redis1 systemd[1]: redis-sentinel.service: Can't open PID file /var/run/sentinel/redis-sentinel.pid (yet?) after start: No such file or directory
and my sentinel.conf file:
port 26379
daemonize yes
sentinel myid 851994c7364e2138e03ee1cd346fbdc4f1404e4c
sentinel deny-scripts-reconfig yes
sentinel monitor mymaster 172.28.128.11 6379 2
sentinel down-after-milliseconds mymaster 5000
# Generated by CONFIG REWRITE
dir "/"
protected-mode no
sentinel failover-timeout mymaster 60000
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel current-epoch 0
If you are trying to run your Redis servers on a Debian-based distribution, add the following to your Redis configurations:
pidfile /var/run/redis/redis-sentinel.pid to /etc/redis/sentinel.conf
pidfile /var/run/redis/redis-server.pid to /etc/redis/redis.conf
What's the output in the sentinel log file?
I had a similar issue where Sentinel received a lot of SIGTERMs.
In that case you need to make sure that, if you use the daemonize yes setting, the systemd unit file uses Type=forking.
Also make sure that the location of the PID file specified in the sentinel config matches the location specified in the systemd unit file.
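That cross-check can be scripted; a sketch using throwaway sample files for illustration (the real paths would be /etc/redis/sentinel.conf and the redis-sentinel systemd unit file):

```shell
# Extract the pidfile path from each config and compare them.
# Sample files stand in for the real sentinel config and systemd unit.
cat > /tmp/sentinel.conf <<'EOF'
daemonize yes
pidfile /run/sentinel/redis-sentinel.pid
EOF
cat > /tmp/redis-sentinel.service <<'EOF'
[Service]
Type=forking
PIDFile=/run/sentinel/redis-sentinel.pid
EOF

conf_pid=$(awk '$1 == "pidfile" {print $2}' /tmp/sentinel.conf)
unit_pid=$(sed -n 's/^PIDFile=//p' /tmp/redis-sentinel.service)

if [ "$conf_pid" = "$unit_pid" ]; then
  echo "pidfile paths match: $conf_pid"
else
  echo "MISMATCH: conf=$conf_pid unit=$unit_pid"
fi
```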
If you see the error below in your journalctl or systemctl logs:
Jun 26 10:13:02 x systemd[1]: redis-server.service: Failed with result 'exit-code'.
Jun 26 10:13:02 x systemd[1]: redis-server.service: Scheduled restart job, restart counter is at 5.
Jun 26 10:13:02 x systemd[1]: Stopped Advanced key-value store.
Jun 26 10:13:02 x systemd[1]: redis-server.service: Start request repeated too quickly.
Jun 26 10:13:02 x systemd[1]: redis-server.service: Failed with result 'exit-code'.
Jun 26 10:13:02 x systemd[1]: Failed to start Advanced key-value store.
Then check /var/log/redis/redis-server.log for more information; in most cases the issue is mentioned there.
For example, if a dump.rdb file is placed in /var/lib/redis, the issue might be with the database count or the Redis version; in another scenario, disabled IPv6 might be the issue.

How to make Orion Context Broker work with HTTPS notifications?

I want to enable https for notifications. The Orion Context Broker version 1.7.0 is installed in Ubuntu 16.04. To start, the following command is being used:
sudo /etc/init.d/contextBroker start -logAppend -https -key /path/to/orion.key -cert /path/to/orion.crt
The output is:
[ ok ] Starting contextBroker (via systemctl): contextBroker.service.
The status is:
sudo systemctl status contextBroker.service
contextBroker.service - LSB: Example initscript
Loaded: loaded (/etc/init.d/contextBroker; bad; vendor preset: enabled)
Active: active (exited) since Tue 2017-04-04 12:56:13 BRT; 14s ago
Docs: man:systemd-sysv-generator(8)
Process: 8312 ExecStart=/etc/init.d/contextBroker start (code=exited, status=0/SUCCESS)
Apr 04 12:56:13 fiware-ubuntu systemd[1]: Starting LSB: Example initscript...
Apr 04 12:56:13 fiware-ubuntu contextBroker[8312]: contextBroker
Apr 04 12:56:13 fiware-ubuntu contextBroker[8312]: /path/bin/contextBroker
Apr 04 12:56:13 fiware-ubuntu systemd[1]: Started LSB: Example initscript.
Another approach is running Orion as:
sudo /path/bin/contextBroker -logLevel DEBUG -localIp x.y.z.t -https -key /path/to/orion.key -cert /path/to/orion.crt
The log follows:
time=2017-04-04T18:37:58.881Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1705]:main | msg=Orion Context Broker is running
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[205]:mongoConnect | msg=Successful connection to database
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[681]:setWriteConcern | msg=Database Operation Successful (setWriteConcern: 1)
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[724]:getWriteConcern | msg=Database Operation Successful (getWriteConcern)
time=2017-04-04T18:37:58.888Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[626]:runCollectionCommand | msg=Database Operation Successful (command: { buildinfo: 1 })
...
time=2017-04-04T18:37:58.897Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=rest.cpp[1720]:restStart | msg=Fatal Error (error starting REST interface)
It is not working...
If you run Orion as a service (as recommended), command line parameters have to be configured in the /etc/sysconfig/contextBroker file. The file is explained in this piece of documentation.
Note the BROKER_EXTRA_OPS variable at the end of the file. It is used to include CLI parameters that are not covered by any other option, such as the HTTPS-related ones you are using. Thus, it should be a matter of setting BROKER_EXTRA_OPS in this way:
BROKER_EXTRA_OPS="-logAppend -https -key /path/to/orion.key -cert /path/to/orion.crt"
Then start the service using:
sudo /etc/init.d/contextBroker start
(Note that no parameter is added after 'start')
You can check that Orion is running with the right parameters using ps ax | grep contextBroker.
Finally, regarding the error Fatal Error (error starting REST interface): it appears when Orion, for some reason, is not able to start the listening server for the REST API. Typically this is because some other process (maybe a forgotten instance of Orion) is already listening on the same port. Use sudo netstat -ntpld | grep 1026 to find out which process is listening on that port (assuming that 1026 is the port on which you are trying to run Orion, of course).

The Apache James Server App service was launched, but failed to start

I am configuring the Apache James Server, but it fails to start.
Please help; thanks in advance.
Steps followed:
C:\xampp\htdocs\apache-james-3.0-beta4\bin>j
wrapper | Apache James Server App installed
C:\xampp\htdocs\apache-james-3.0-beta4\bin>j
wrapper | Starting the Apache James Server
wrapper | Waiting to start...
wrapper | The Apache James Server App servi
Press any key to continue . . .
C:\xampp\htdocs\apache-james-3.0-beta4\bin>
james-server.log
INFO 12:12:51,775 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#2ba11895: startup date [Sat Apr 23 12:12:51 IST 2016]; root of context hierarchy
INFO 12:13:12,034 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#5a8d9005: startup date [Sat Apr 23 12:13:12 IST 2016]; root of context hierarchy
INFO 12:14:16,123 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#2ba11895: startup date [Sat Apr 23 12:14:16 IST 2016]; root of context hierarchy
INFO 12:17:01,778 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#356cdec9: startup date [Sat Apr 23 12:17:01 IST 2016]; root of context hierarchy
INFO 12:19:37,360 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#2ba11895: startup date [Sat Apr 23 12:19:37 IST 2016]; root of context hierarchy
INFO 12:25:51,556 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#17c68925: startup date [Sat Apr 23 12:25:51 IST 2016]; root of context hierarchy
INFO 12:26:17,151 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#17c68925: startup date [Sat Apr 23 12:26:17 IST 2016]; root of context hierarchy
INFO 12:26:33,036 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#17c68925: startup date [Sat Apr 23 12:26:33 IST 2016]; root of context hierarchy
INFO 12:30:06,836 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#2ba11895: startup date [Sat Apr 23 12:30:06 IST 2016]; root of context hierarchy
INFO 12:32:47,145 | org.apache.james.container.spring.context.JamesServerApplicationContext | Refreshing org.apache.james.container.spring.context.JamesServerApplicationContext#356cdec9: startup date [Sat Apr 23 12:32:47 IST 2016]; root of context hierarchy

Redis in docker starts/restarts multiple times?

I am trying to run redis in docker container by using docker-compose:
docker-compose.yml:
redis:
  image: redis:3.0.4
  command:
$ docker-compose up
output:
Starting test_redis_1...
Attaching to test_redis_1
redis_1 | 1:C 06 Oct 15:16:13.265 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | _._
redis_1 | _.-``__ ''-._
redis_1 | _.-`` `. `_. ''-._ Redis 3.0.4 (00000000/0) 64 bit
redis_1 | .-`` .-```. ```\/ _.,_ ''-._
redis_1 | ( ' , .-` | `, ) Running in standalone mode
redis_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
redis_1 | | `-._ `._ / _.-' | PID: 1
redis_1 | `-._ `-._ `-./ _.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' | http://redis.io
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' |
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | `-._ `-.__.-' _.-'
redis_1 | `-._ _.-'
redis_1 | `-.__.-'
redis_1 |
redis_1 | 1:M 06 Oct 15:16:13.268 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 06 Oct 15:16:13.268 # Server started, Redis version 3.0.4
redis_1 | 1:M 06 Oct 15:16:13.268 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 06 Oct 15:16:13.268 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 06 Oct 15:16:13.268 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 06 Oct 15:16:13.268 * The server is now ready to accept connections on port 6379
redis_1 | 1:signal-handler (1444144583) Received SIGTERM scheduling shutdown...
redis_1 | 1:M 06 Oct 15:16:23.761 # User requested shutdown...
redis_1 | 1:M 06 Oct 15:16:23.761 * Saving the final RDB snapshot before exiting.
redis_1 | 1:M 06 Oct 15:16:23.770 * DB saved on disk
redis_1 | 1:M 06 Oct 15:16:23.770 # Redis is now ready to exit, bye bye...
redis_1 | 1:C 06 Oct 15:16:32.194 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | _._
redis_1 | _.-``__ ''-._
redis_1 | _.-`` `. `_. ''-._ Redis 3.0.4 (00000000/0) 64 bit
redis_1 | .-`` .-```. ```\/ _.,_ ''-._
redis_1 | ( ' , .-` | `, ) Running in standalone mode
redis_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
redis_1 | | `-._ `._ / _.-' | PID: 1
redis_1 | `-._ `-._ `-./ _.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' | http://redis.io
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' |
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | `-._ `-.__.-' _.-'
redis_1 | `-._ _.-'
redis_1 | `-.__.-'
redis_1 |
redis_1 | 1:M 06 Oct 15:16:32.195 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 06 Oct 15:16:32.195 # Server started, Redis version 3.0.4
redis_1 | 1:M 06 Oct 15:16:32.195 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 06 Oct 15:16:32.195 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 06 Oct 15:16:32.195 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 06 Oct 15:16:32.195 * The server is now ready to accept connections on port 6379
redis_1 | 1:signal-handler (1444144597) Received SIGTERM scheduling shutdown...
redis_1 | 1:M 06 Oct 15:16:37.141 # User requested shutdown...
redis_1 | 1:M 06 Oct 15:16:37.141 * Saving the final RDB snapshot before exiting.
redis_1 | 1:M 06 Oct 15:16:37.144 * DB saved on disk
redis_1 | 1:M 06 Oct 15:16:37.144 # Redis is now ready to exit, bye bye...
redis_1 | 1:C 06 Oct 15:17:19.085 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | _._
redis_1 | _.-``__ ''-._
redis_1 | _.-`` `. `_. ''-._ Redis 3.0.4 (00000000/0) 64 bit
redis_1 | .-`` .-```. ```\/ _.,_ ''-._
redis_1 | ( ' , .-` | `, ) Running in standalone mode
redis_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
redis_1 | | `-._ `._ / _.-' | PID: 1
redis_1 | `-._ `-._ `-./ _.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' | http://redis.io
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' |
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | `-._ `-.__.-' _.-'
redis_1 | `-._ _.-'
redis_1 | `-.__.-'
redis_1 |
redis_1 | 1:M 06 Oct 15:17:19.086 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 06 Oct 15:17:19.086 # Server started, Redis version 3.0.4
redis_1 | 1:M 06 Oct 15:17:19.086 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 06 Oct 15:17:19.086 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 06 Oct 15:17:19.086 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 06 Oct 15:17:19.086 * The server is now ready to accept connections on port 6379
redis_1 | 1:signal-handler (1444144647) Received SIGTERM scheduling shutdown...
redis_1 | 1:M 06 Oct 15:17:27.247 # User requested shutdown...
redis_1 | 1:M 06 Oct 15:17:27.247 * Saving the final RDB snapshot before exiting.
redis_1 | 1:M 06 Oct 15:17:27.256 * DB saved on disk
redis_1 | 1:M 06 Oct 15:17:27.256 # Redis is now ready to exit, bye bye...
For some reason redis starts multiple times, sometimes just once (this is random), and in the logs there are many lines like:
redis_1 | 1:signal-handler (1444144597) Received SIGTERM scheduling shutdown...
env:
docker-compose version: 1.4.0
Docker version 1.8.0, build 0d03096
docker-machine version 0.4.0 (9d0dc7a)
edit: It happens only when using docker-compose. When running redis with docker run redis:3.0.4 it works fine.
A similar issue has been discussed here: https://github.com/docker/compose/issues/2148
Adding --force-recreate to the docker-compose up command seems to prevent it.