FusionAuth Setup Wizard issue - fusionauth

I have attempted to install FusionAuth by following the Installing the Package guide. I went through the steps with no issues until the last step, adding the administrator account: when I provided all the required information, I got this message: "A request to the search index has failed. This error is unexpected. Contact Support."
Here is the log file "fusionauth-search.log":
[Sep 30, 2019 11:54:30.674 AM][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] high disk watermark [90%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 25gb[5.2%], shards will be relocated away from this node
[Sep 30, 2019 11:54:30.699 AM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] rerouting shards: [high disk watermark exceeded on one or more nodes]
[Sep 30, 2019 11:55:00.707 AM][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] high disk watermark [90%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 25gb[5.2%], shards will be relocated away from this node
[Sep 30, 2019 12:17:31.739 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 57.4gb[12%], replicas will not be assigned to this node
[Sep 30, 2019 12:31:02.531 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 57.4gb[12%], replicas will not be assigned to this node
[Sep 30, 2019 12:58:33.948 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 62.1gb[13%], replicas will not be assigned to this node
[Sep 30, 2019 12:59:03.982 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 62.5gb[13.1%], replicas will not be assigned to this node
[Sep 30, 2019 12:59:34.016 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 63.2gb[13.2%], replicas will not be assigned to this node
[Sep 30, 2019 1:00:04.027 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 63.3gb[13.3%], replicas will not be assigned to this node
[Sep 30, 2019 1:00:34.039 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 63.4gb[13.3%], replicas will not be assigned to this node
[Sep 30, 2019 1:01:04.047 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 63.6gb[13.3%], replicas will not be assigned to this node
[Sep 30, 2019 1:01:34.056 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 63.7gb[13.4%], replicas will not be assigned to this node
[Sep 30, 2019 1:02:04.067 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 63.9gb[13.4%], replicas will not be assigned to this node
[Sep 30, 2019 1:02:34.074 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 64.1gb[13.4%], replicas will not be assigned to this node
[Sep 30, 2019 1:03:04.083 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 64.2gb[13.5%], replicas will not be assigned to this node
[Sep 30, 2019 1:03:34.120 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 64.4gb[13.5%], replicas will not be assigned to this node
[Sep 30, 2019 1:04:04.131 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 64.6gb[13.5%], replicas will not be assigned to this node
[Sep 30, 2019 1:04:34.141 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 65.2gb[13.7%], replicas will not be assigned to this node
[Sep 30, 2019 1:05:04.175 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 65.3gb[13.7%], replicas will not be assigned to this node
[Sep 30, 2019 1:05:34.184 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 65.5gb[13.7%], replicas will not be assigned to this node
[Sep 30, 2019 1:06:04.194 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 65.6gb[13.7%], replicas will not be assigned to this node
[Sep 30, 2019 1:06:34.224 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 65.7gb[13.8%], replicas will not be assigned to this node
[Sep 30, 2019 1:07:04.234 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 66.4gb[13.9%], replicas will not be assigned to this node
[Sep 30, 2019 1:07:34.245 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 67.4gb[14.1%], replicas will not be assigned to this node
[Sep 30, 2019 1:08:04.254 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 68gb[14.3%], replicas will not be assigned to this node
[Sep 30, 2019 1:08:34.262 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 69gb[14.5%], replicas will not be assigned to this node
[Sep 30, 2019 1:09:04.272 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 67.4gb[14.1%], replicas will not be assigned to this node
[Sep 30, 2019 1:09:34.281 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] low disk watermark [85%] exceeded on [ZVTSlmh1STaZOSQAo6UiQg][ZVTSlmh][C:\FusionAuthDir\fusionauth\fusionauth-search\elasticsearch....\data\search\esv6\nodes\0] free: 68.3gb[14.3%], replicas will not be assigned to this node
[Sep 30, 2019 1:10:04.314 PM][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ZVTSlmh] rerouting shards: [one or more nodes has gone under the high or low watermark]
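For context, these messages come from Elasticsearch's disk threshold monitor: once the 90% high watermark is exceeded, the node refuses new shard allocations and index writes can start failing, which would explain the setup wizard's "request to the search index has failed" error. Freeing disk space back below the watermarks is the real fix, but the thresholds can also be relaxed temporarily. A minimal sketch, assuming the bundled search engine is reachable on FusionAuth's default search port 9021 (the port and the percentages here are illustrative, not from the original post):
# Check cluster health and current disk usage as Elasticsearch sees it
curl "http://localhost:9021/_cluster/health?pretty"
# Temporarily relax the low/high watermarks (transient: resets on restart)
curl -X PUT "http://localhost:9021/_cluster/settings" -H "Content-Type: application/json" -d "{\"transient\":{\"cluster.routing.allocation.disk.watermark.low\":\"93%\",\"cluster.routing.allocation.disk.watermark.high\":\"97%\"}}"
After freeing space or relaxing the watermarks, restarting the fusionauth-search service and retrying the wizard should let the admin account be created.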
Thank you in advance,

Related

Redis OOM issue

We are using Redis 6.0.0. After an OOM, I see the log below. What is "RDB memory usage"?
3489 MB is very close to the max memory that we have. Does it indicate that we are storing a lot of data in Redis, or is it just being caused by RDB overhead?
1666:M 01 Jun 2022 19:23:32.268 # Server initialized
1666:M 01 Jun 2022 19:23:32.270 * Loading RDB produced by version 6.0.6
1666:M 01 Jun 2022 19:23:32.270 * RDB age 339 seconds
1666:M 01 Jun 2022 19:23:32.270 * RDB memory usage when created **3489.20 Mb**
Can we rule out fragmentation, given that the RDB memory usage itself indicated 3489 MB?
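As far as I know, the "RDB memory usage when created" field records Redis's logical used_memory at the moment the snapshot was written, so it reflects dataset size plus allocator bookkeeping rather than RSS fragmentation. Fragmentation can be checked directly instead of inferred. A sketch, run against the affected instance:
# mem_fragmentation_ratio is roughly used_memory_rss / used_memory;
# values well above 1.5 point to fragmentation, values near 1 rule it out
redis-cli INFO memory | grep -E "used_memory_human|used_memory_rss_human|mem_fragmentation_ratio|maxmemory_human"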

RabbitMQ web management console not working

I have RabbitMQ v3.8.3. I have followed the other posts and explicitly enabled the web management console like this:
rabbitmq-plugins enable rabbitmq_management
But when I browse to localhost:15672 or localhost:5672, the web console does not open; it says "localhost refused to connect". The server starts fine (it shows "completed with 0 plugins" on startup).
Here is the dump from "rabbitmqctl status":
Status of node rabbit@localhost ...
Runtime
OS PID: 4676
OS: Windows
Uptime (seconds): 1635
RabbitMQ version: 3.8.3
Node name: rabbit@localhost
Erlang configuration: Erlang/OTP 22 [erts-10.7] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:64]
Erlang processes: 526 used, 1048576 limit
Scheduler run queue: 1
Cluster heartbeat timeout (net_ticktime): 60
Plugins
Enabled plugin file: D:/volpay-infra/RabbitMQ/enabled_plugins
Enabled plugins:
Data directory
Node data directory: d:/volpay-infra/RabbitMQ/db/rabbit@localhost-mnesia
Config files
Log file(s)
* D:/volpay-infra/RabbitMQ/log/rabbit@localhost.log
* D:/volpay-infra/RabbitMQ/log/rabbit@localhost_upgrade.log
Alarms
(none)
Memory
Calculation strategy: rss
Memory high watermark setting: 0.4 of available memory, computed to: 6.8201 gb
allocated_unused: 0.0358 gb (28.57 %)
other_proc: 0.0321 gb (25.59 %)
code: 0.0272 gb (21.7 %)
other_system: 0.0151 gb (12.09 %)
queue_procs: 0.0042 gb (3.32 %)
binary: 0.0036 gb (2.84 %)
other_ets: 0.0029 gb (2.34 %)
atom: 0.0014 gb (1.15 %)
connection_channels: 0.0009 gb (0.73 %)
connection_other: 0.0009 gb (0.68 %)
connection_writers: 0.0006 gb (0.44 %)
metrics: 0.0002 gb (0.18 %)
mnesia: 0.0002 gb (0.15 %)
connection_readers: 0.0002 gb (0.13 %)
quorum_ets: 0.0 gb (0.04 %)
msg_index: 0.0 gb (0.03 %)
plugins: 0.0 gb (0.01 %)
mgmt_db: 0.0 gb (0.0 %)
queue_slave_procs: 0.0 gb (0.0 %)
quorum_queue_procs: 0.0 gb (0.0 %)
reserved_unallocated: 0.0 gb (0.0 %)
File Descriptors
Total: 31, limit: 65439
Sockets: 5, limit: 58893
Free Disk Space
Low free disk space watermark: 0.05 gb
Free disk space: 735.9345 gb
Totals
Connection count: 4
Queue count: 41
Virtual host count: 1
Listeners
Interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Interface: 0.0.0.0, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
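Note what the dump itself shows: the Enabled plugins list is empty and the only listeners are on 25672 (clustering) and 5672 (AMQP), so the management plugin, which serves HTTP on 15672, never actually loaded; that matches "completed with 0 plugins" at startup. (Also, 5672 is the AMQP protocol port, not HTTP, so a browser will never connect to it.) A hedged sequence to try from an elevated prompt in the sbin directory, assuming the standard Windows service scripts; the idea is to make sure the enable command writes to the same enabled_plugins file the node reads (D:/volpay-infra/RabbitMQ/enabled_plugins above):
rem Inspect the plugins file the node is configured to read;
rem after a successful enable it should contain: [rabbitmq_management].
type D:\volpay-infra\RabbitMQ\enabled_plugins
rem Re-enable the plugin, then restart the Windows service so it loads
rabbitmq-plugins enable rabbitmq_management
rabbitmq-service.bat stop
rabbitmq-service.bat start
If the file is correct but the plugin still does not load, a RABBITMQ_BASE mismatch between the CLI tools and the service is a common cause on Windows.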

Redis service crashes with "Failed opening the RDB file systemdd (in server root dir /etc/cron.d) for saving: Permission denied"

I am running Redis server version 6.0.6 on Ubuntu 20.04. The process is run by the "redis" user.
Sometimes the Redis process crashes and gets restarted on its own, and when this happens a lot of the data cached in Redis becomes unavailable. This happens every few days/weeks. I can see the following messages in the logs; saving was working fine until 2:32:43 and suddenly failed at 2:34:15:
133121:C 23 Jun 2021 02:27:54.383 * RDB: 22 MB of memory used by copy-on-write
105798:M 23 Jun 2021 02:27:54.511 * Background saving terminated with success
105798:M 23 Jun 2021 02:29:46.279 * 10000 changes in 60 seconds. Saving...
105798:M 23 Jun 2021 02:29:46.354 * Background saving started by pid 133125
133125:C 23 Jun 2021 02:30:16.363 * DB saved on disk
133125:C 23 Jun 2021 02:30:16.464 * RDB: 18 MB of memory used by copy-on-write
105798:M 23 Jun 2021 02:30:16.583 * Background saving terminated with success
105798:M 23 Jun 2021 02:32:14.138 * 10000 changes in 60 seconds. Saving...
105798:M 23 Jun 2021 02:32:14.222 * Background saving started by pid 133131
133131:C 23 Jun 2021 02:32:42.924 * DB saved on disk
133131:C 23 Jun 2021 02:32:42.988 * RDB: 22 MB of memory used by copy-on-write
105798:M 23 Jun 2021 02:32:43.123 * Background saving terminated with success
105798:M 23 Jun 2021 02:34:14.958 * DB saved on disk
105798:M 23 Jun 2021 02:34:15.705 # Failed opening the RDB file systemdd (in server root dir /etc/cron.d) for saving: Permission denied
=== REDIS BUG REPORT START: Cut & paste starting from here ===
105798:M 23 Jun 2021 02:34:15.705 # Redis 6.0.6 crashed by signal: 11
105798:M 23 Jun 2021 02:34:15.705 # Crashed running the instruction at: 0x55f2e7e35099
105798:M 23 Jun 2021 02:34:15.705 # Accessing address: 0x149968
105798:M 23 Jun 2021 02:34:15.705 # Failed assertion: <no assertion failed> (<no file>:0)
------ STACK TRACE ------
EIP:
/usr/bin/redis-server 172.16.106.88:6379(je_malloc_usable_size+0x89)[0x55f2e7e35099]
Backtrace:
/usr/bin/redis-server 172.16.106.88:6379(logStackTrace+0x4f)[0x55f2e7db2bcf]
/usr/bin/redis-server 172.16.106.88:6379(sigsegvHandler+0xb5)[0x55f2e7db33d5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7fb934c173c0]
/usr/bin/redis-server 172.16.106.88:6379(je_malloc_usable_size+0x89)[0x55f2e7e35099]
/usr/bin/redis-server 172.16.106.88:6379(+0x50b79)[0x55f2e7d72b79]
/usr/bin/redis-server 172.16.106.88:6379(rdbSave+0x2ba)[0x55f2e7d9345a]
/usr/bin/redis-server 172.16.106.88:6379(saveCommand+0x67)[0x55f2e7d94ab7]
/usr/bin/redis-server 172.16.106.88:6379(call+0xb1)[0x55f2e7d6a8b1]
/usr/bin/redis-server 172.16.106.88:6379(processCommand+0x4a6)[0x55f2e7d6b446]
/usr/bin/redis-server 172.16.106.88:6379(processCommandAndResetClient+0x14)[0x55f2e7d799e4]
/usr/bin/redis-server 172.16.106.88:6379(processInputBuffer+0x18f)[0x55f2e7d7e39f]
/usr/bin/redis-server 172.16.106.88:6379(+0xe10ac)[0x55f2e7e030ac]
/usr/bin/redis-server 172.16.106.88:6379(aeProcessEvents+0x303)[0x55f2e7d63b83]
/usr/bin/redis-server 172.16.106.88:6379(aeMain+0x1d)[0x55f2e7d63ebd]
/usr/bin/redis-server 172.16.106.88:6379(main+0x4e5)[0x55f2e7d603d5]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fb934a370b3]
/usr/bin/redis-server 172.16.106.88:6379(_start+0x2e)[0x55f2e7d606ae]
The service restarts on its own, the Redis server works fine for a few days/weeks, and then it crashes again with the same error!
I have checked several posts on SO, but none of them resolve my issue, since:
a) The instance where the Redis server is running is in a private network (public access is disabled).
b) The DB file name and dir have not been corrupted, as observed from the "config get dbfilename" and "config get dir" commands. They show the default values.
c) The permissions of the directories are correct (/var/lib/redis is owned by redis with 755 permissions, and /var/lib/redis/dump.rdb is owned by redis with 660 permissions).
Can anyone help me identify the root cause of this issue please?
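One detail worth pinning down, since the failing log line names a file ("systemdd") and a directory ("/etc/cron.d") that differ from the defaults reported in (b): dbfilename and dir can be changed at runtime with CONFIG SET without touching redis.conf, so checking them only after a restart can miss a change made while the server was up. A sketch of how to watch the live values (MONITOR is expensive; run it only briefly while diagnosing):
# Snapshot the live values; these can differ from redis.conf if altered via CONFIG SET
redis-cli CONFIG GET dbfilename
redis-cli CONFIG GET dir
# Watch for CONFIG commands arriving over the wire
redis-cli MONITOR | grep -i config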

Redis timeout with almost no data in the database, using the .NET client

I received this error:
StackExchange.Redis.RedisTimeoutException: Timeout performing GET (5000ms),
next: GET RetryCount, inst: 3, qu: 0, qs: 1, aw: False, rs: ReadAsync, ws: Idle, in: 7, in-pipe: 0, out-pipe: 0,
serverEndpoint: redis:6379, mc: 1/1/0, mgr: 10 of 10 available, clientName: 18745af38fec,
IOCP: (Busy=0,Free=1000,Min=1,Max=1000),
WORKER: (Busy=6,Free=32761,Min=1,Max=32767), v: 2.1.58.34321
(Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
We can see that there is only a single message in the queue (qs=1) and that there are only 7 bytes waiting to be read (in=7). Redis is used by 2 processes and holds settings for the system and stores logs.
It was a re-install, so no logs had been written yet and the database holds maybe 2-3 KB of data :)
This is the only output from Redis:
1:C 12 Sep 2020 15:20:49.293 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 12 Sep 2020 15:20:49.293 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 12 Sep 2020 15:20:49.293 # Configuration loaded
1:M 12 Sep 2020 15:20:49.296 * Running mode=standalone, port=6379.
1:M 12 Sep 2020 15:20:49.296 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 12 Sep 2020 15:20:49.296 # Server initialized
1:M 12 Sep 2020 15:20:49.296 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 12 Sep 2020 15:20:49.296 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
1:M 12 Sep 2020 15:20:49.305 * DB loaded from append only file: 0.000 seconds
1:M 12 Sep 2020 15:20:49.305 * Ready to accept connections
so it looks like nothing went wrong on that side.
The 2 processes accessing it are in Docker containers, as is Redis, all on a single AWS instance with plenty of RAM and disk available.
This is also a one-time event; it has never happened before with the same config.
I'm not very experienced with Redis; is there anything in the error message that would look suspicious?
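Nothing in the Redis output suggests a server-side fault, but the startup log does flag two kernel settings that Redis itself warns can cause latency and failed background saves. The commands below are just the log's own recommendations, to be run as root on the Docker host rather than inside the container:
# From the overcommit_memory warning in the log
sysctl vm.overcommit_memory=1
# From the TCP backlog warning (511 is Redis's configured backlog)
sysctl net.core.somaxconn=511
# From the Transparent Huge Pages warning
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
That said, with qs=1 and WORKER Min=1 in the error, the linked Timeouts article's advice about raising the client's minimum thread pool size is probably the more promising lead.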

Aerospike DB always starts in COLD mode

It's stated here that Aerospike should try to start in warm mode, meaning it reuses the same memory region holding the keys. Instead, every time the database is restarted, all keys are loaded back from the SSD drive, which can take tens of minutes if not hours. What I see in the log is the following:
Oct 12 2015 03:24:11 GMT: INFO (config): (cfg.c::3234) Node id bb9e10daab0c902
Oct 12 2015 03:24:11 GMT: INFO (namespace): (namespace_cold.c::101) ns organic **beginning COLD start**
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3607) opened device /dev/xvdb: usable size 322122547200, io-min-size 512
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3681) shadow device /dev/xvdc is compatible with main device
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::1107) /dev/xvdb has 307200 wblocks of size 1048576
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3141) device /dev/xvdb: reading device to load index
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3146) In TID 104520: Using arena #150 for loading data for namespace "organic"
Oct 12 2015 03:24:13 GMT: INFO (drv_ssd): (drv_ssd.c::3942) {organic} loaded 962647 records, 0 subrecords, /dev/xvdb 0%
What could be the reason that Aerospike fails to perform fast restart?
Thanks!
You are using the Community Edition of the software. Warm start is not supported in it; it is available only in the Enterprise Edition.
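A quick way to confirm which edition is running (a sketch; the log path below is a common default and may differ on your install):
# The server binary reports its edition and build
asd --version
# The startup banner in the log also names the edition
grep -i edition /var/log/aerospike/aerospike.log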