I have installed Redis using the Rosetta terminal, but when I run redis-server I get the error below. I am on the new MacBook Pro (2020) with Apple Silicon.
redis-server
42116:C 21 Nov 2020 20:07:12.620 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
42116:C 21 Nov 2020 20:07:12.620 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=42116, just started
42116:C 21 Nov 2020 20:07:12.620 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
42116:M 21 Nov 2020 20:07:12.620 * Increased maximum number of open files to 10032 (it was originally set to 2560).
=== REDIS BUG REPORT START: Cut & paste starting from here ===
42116:M 21 Nov 2020 20:07:12.622 # Redis 6.0.9 crashed by signal: 11, si_code: 2
42116:M 21 Nov 2020 20:07:12.622 # Crashed running the instruction at: 0x7fff20371430
42116:M 21 Nov 2020 20:07:12.622 # Accessing address: 0x3046d2000
42116:M 21 Nov 2020 20:07:12.622 # Killed by PID: 0, UID: 0
42116:M 21 Nov 2020 20:07:12.622 # Failed assertion: <no assertion failed> (<no file>:0)
------ STACK TRACE ------
EIP:
0   libsystem_platform.dylib      0x00007fff20371430 _platform_memset$VARIANT$Rosetta + 108
Backtrace:
0   redis-server                  0x00000001000e4bb7 logStackTrace + 110
1   redis-server                  0x00000001000e4fd5 sigsegvHandler + 271
2   libsystem_platform.dylib      0x00007fff2036ed7d _sigtramp + 29
3   libsystem_malloc.dylib        0x00007fff201547aa tiny_free_no_lock + 1116
4   redis-server                  0x00000001001350c3 luaD_call + 97
5   ???                           0x0000000032aaaba2 0x0 + 850045858
------ INFO OUTPUT ------
# Server
redis_version:6.0.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:ec508acaad782189
redis_mode:standalone
os:Darwin 20.1.0 x86_64
arch_bits:64
multiplexing_api:kqueue
atomicvar_api:atomic-builtin
gcc_version:4.2.1
process_id:42116
run_id:3456c4d545624d4cbf42d4b85695b8f4cb6ce250
tcp_port:6379
uptime_in_seconds:0
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:12150112
executable:/Users/leonardo/Dropbox/dev/redis/redis-stable/redis-server
config_file:
io_threads_active:0
# Clients
connected_clients:0
client_recent_max_input_buffer:0
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
clients_in_timeout_table:0
# Memory
used_memory:1019360
used_memory_human:995.47K
used_memory_rss:0
used_memory_rss_human:0B
used_memory_peak:1019360
used_memory_peak_human:995.47K
used_memory_peak_perc:inf%
used_memory_overhead:0
used_memory_startup:0
used_memory_dataset:1019360
used_memory_dataset_perc:100.00%
allocator_allocated:0
allocator_active:0
allocator_resident:0
total_system_memory:8589934592
total_system_memory_human:8.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:nan
allocator_frag_bytes:0
allocator_rss_ratio:nan
allocator_rss_bytes:0
rss_overhead_ratio:nan
rss_overhead_bytes:0
mem_fragmentation_ratio:nan
mem_fragmentation_bytes:0
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:0
mem_aof_buffer:0
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1605985632
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
# Stats
total_connections_received:0
total_commands_processed:0
instantaneous_ops_per_sec:0
total_net_input_bytes:0
total_net_output_bytes:0
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_reads_processed:0
total_writes_processed:0
io_threaded_reads_processed:0
io_threaded_writes_processed:0
# Replication
role:master
connected_slaves:0
master_replid:b00cc4f1203a9a29b81236248b7ebc68c567f4ad
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:0.004632
used_cpu_user:0.007445
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000
# Modules
# Commandstats
# Cluster
cluster_enabled:0
# Keyspace
------ CLIENT LIST OUTPUT ------
------ REGISTERS ------
42116:M 21 Nov 2020 20:07:12.623 #
RAX:00000003046d1c80 RBX:0000000000000013
RCX:00000003046d2000 RDX:00007f9b90d338ae
RDI:00000003046d1c18 RSI:0000000000000000
RBP:00000003046d1a40 RSP:00000003046d1858
R8 :0000000000000000 R9 :00000003046d1910
R10:00000001001507b3 R11:ffffffffffffffff
R12:00000003046d1ae0 R13:00000000000000ff
R14:0000000100151127 R15:0000000100181740
RIP:00007fff20371430 EFL:0000000000000202
CS :000000000000002b FS:0000000000000000 GS:0000000000000000
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1867) -> 0000000108c36a00
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1866) -> 0000000000000006
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1865) -> 0000000000000000
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1864) -> 0000000000002800
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1863) -> 0000000000000000
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1862) -> 00007fff20152020
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1861) -> 000000010015fbca
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1860) -> 00000001000f34d6
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d185f) -> 00000003046d19d0
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d185e) -> 00007f9e85400000
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d185d) -> 0000000100152d38
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d185c) -> 00000000000018eb
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d185b) -> 000000010014dbcd
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d185a) -> 00000003046d1c18
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1859) -> 00007f9e85407da0
42116:M 21 Nov 2020 20:07:12.623 # (00000003046d1858) -> 0000000100103ebb
------ MODULES INFO OUTPUT ------
------ DUMPING CODE AROUND EIP ------
Symbol: _platform_memset$VARIANT$Rosetta (base: 0x7fff203713c4)
Module: /usr/lib/system/libsystem_platform.dylib (base 0x7fff2036b000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=0x7fff203713c4 -D -b binary -m i386:x86-64 /tmp/dump.bin
------
42116:M 21 Nov 2020 20:07:12.623 # dump of function (hexdump of 236 bytes): 81e6ff00000048b90101010101010101480faff14889f94883fa400f82360100004881fa008000000f82a00000000faef0480fc337480fc37708480fc37710480fc37718480fc37720480fc37728480fc37730480fc37738488d4f404883e1c04801fa488d41404829c27631480fc331480fc37108480fc37110480fc37118480fc37120480fc37128480fc37130480fc371384883c1404883ea4077cf4801d1480fc331480fc37108480fc37110480fc37118480fc37120480fc37128480fc37130480fc371380faef84889f8c3488937488977084889771048897718488977204889772848897730488977
=== REDIS BUG REPORT END. Make sure to include from START to END. ===
Please report the crash by opening an issue on github:
http://github.com/redis/redis/issues
Suspect RAM error? Use redis-server --test-memory to verify it.
zsh: segmentation fault redis-server
Running out of memory can cause the Redis service to crash: during peak load, Redis may require more memory than is currently allocated to it.
To check the current configuration and memory usage, run the following command from the command line; it reports the role, peak memory used, maxmemory, evicted keys, and Redis uptime in days:
redis-cli -p REDIS_PORT -h REDIS_HOST info | egrep --color "(role|used_memory_peak|maxmemory|evicted_keys|uptime_in_days)"
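If maxmemory turns out to be the limit, it can be raised at runtime; a minimal sketch (the 256mb value is illustrative, and REDIS_PORT/REDIS_HOST are the same placeholders as above):
redis-cli -p REDIS_PORT -h REDIS_HOST config set maxmemory 256mb
redis-cli -p REDIS_PORT -h REDIS_HOST config get maxmemory
To make the change survive a restart, set maxmemory in redis.conf as well.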
[UPDATE]: Per the latest activity on that Redis GitHub issue, a fix has been merged.
You can either build it locally from their latest master or wait for the next public release (the current version is 6.0.9), which will likely include the fix.
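If you would rather not wait, a minimal sketch of a local build from master (this assumes git and the Xcode command-line tools are installed):
git clone https://github.com/redis/redis.git
cd redis
make
src/redis-server --version   # confirm the freshly built binary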
I believe the Redis team is still working on support here:
https://github.com/redis/redis/issues/8062
Per that link, you might be able to run Redis under sudo.
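A minimal sketch of that workaround, if you want to try it:
sudo redis-server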
rpm -qa | grep kexec
kexec-tools-2.0.15-13.el7.x86_64
cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet
dmesg | grep -i crash
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet
[ 0.000000] Reserving 161MB of memory at 688MB for crashkernel (System RAM: 16383MB)
[ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet
[ 1.033253] crash memory driver: version 1.1
grep -v ^# /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
systemctl status kdump
● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2022-08-26 07:34:03 CST; 1h 34min ago
  Process: 996 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
 Main PID: 996 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/kdump.service
Aug 26 07:34:01 c-1 systemd[1]: Starting Crash recovery kernel arming...
Aug 26 07:34:03 c-1 kdumpctl[996]: kexec: loaded kdump kernel
Aug 26 07:34:03 c-1 kdumpctl[996]: Starting kdump: [OK]
Aug 26 07:34:03 c-1 systemd[1]: Started Crash recovery kernel arming.
kdumpctl status
Kdump is operational
Everything looks fine, but when I run echo c > /proc/sysrq-trigger to test the function, it doesn't work. Please help, thanks!
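For context, two sysfs files report whether the crash kernel is actually armed; these paths are standard on CentOS 7 kernels:
cat /sys/kernel/kexec_crash_loaded   # 1 means the kdump kernel is loaded
cat /sys/kernel/kexec_crash_size     # bytes of RAM reserved for it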
I got a .ovpn file for work a couple of weeks ago, and at first everything worked correctly; it ran with no problem. When I tried to use it again, I got this error:
<user>#<name>:~$ sudo openvpn ~/<file_name>.ovpn
Options error: In [CMD-LINE]:1: Error opening configuration file:
/home/<name>/<file_name>.ovpn
Use --help for more information.
So I tried openvpn --config <file_name>.ovpn and got this:
<user>#<name>:~$ openvpn --config <file_name>.ovpn
Tue Feb 2 11:11:08 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Sep 5 2019
Tue Feb 2 11:11:08 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Tue Feb 2 11:11:08 2021 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
Tue Feb 2 11:11:08 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]65.175.70.209:1194
Tue Feb 2 11:11:08 2021 UDP link local: (not bound)
Tue Feb 2 11:11:08 2021 UDP link remote: [AF_INET]65.175.70.209:1194
Tue Feb 2 11:11:09 2021 [server] Peer Connection Initiated with [AF_INET]65.175.70.209:1194
Tue Feb 2 11:11:10 2021 ERROR: Cannot ioctl TUNSETIFF tun: Operation not permitted (errno=1)
Tue Feb 2 11:11:10 2021 Exiting due to fatal error
What can I do to fix this? Thanks in advance.
I was able to fix it by running the command with sudo again and adding --auth-retry interact (creating the tun interface requires root, which is why the unprivileged run above failed on TUNSETIFF):
<user>#<name>:~$ sudo openvpn --config <file_name>.ovpn --auth-retry interact
Wed Feb 3 10:22:13 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Sep 5 2019
Wed Feb 3 10:22:13 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Wed Feb 3 10:22:13 2021 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
Wed Feb 3 10:22:13 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]65.175.70.209:1194
Wed Feb 3 10:22:13 2021 UDP link local: (not bound)
Wed Feb 3 10:22:13 2021 UDP link remote: [AF_INET]65.175.70.209:1194
Wed Feb 3 10:22:14 2021 [server] Peer Connection Initiated with [AF_INET]65.175.70.209:1194
Wed Feb 3 10:22:15 2021 TUN/TAP device tun0 opened
Wed Feb 3 10:22:15 2021 /sbin/ip link set dev tun0 up mtu 1500
Wed Feb 3 10:22:16 2021 /sbin/ip addr add dev tun0 local 172.16.100.185 peer 172.16.100.186
Wed Feb 3 10:22:16 2021 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Wed Feb 3 10:22:16 2021 Initialization Sequence Completed
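As a side note, the last warning in that log points at its own fix: --auth-nocache keeps OpenVPN from caching the password in memory. A sketch of the combined command (both are documented OpenVPN 2.4 options, though I have not re-tested this exact variant):
<user>#<name>:~$ sudo openvpn --config <file_name>.ovpn --auth-retry interact --auth-nocache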
I am trying to install RabbitMQ and am facing the following issue. For the installation, I am following this blog.
Reading package lists... Done
Building dependency tree
Reading state information... Done
rabbitmq-server is already the newest version (3.8.7-1).
The following packages were automatically installed and are no longer required:
erlang-diameter erlang-edoc erlang-erl-docgen erlang-eunit erlang-ic erlang-inviso erlang-nox erlang-odbc erlang-percept erlang-ssh libodbc1 libsctp1
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up rabbitmq-server (3.8.7-1) ...
Job for rabbitmq-server.service failed because the control process exited with error code.
See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
● rabbitmq-server.service - RabbitMQ broker
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2020-08-31 17:20:23 IST; 4ms ago
Process: 6118 ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server (code=exited, status=1/FAILURE)
Main PID: 6118 (code=exited, status=1/FAILURE)
Aug 31 17:20:23 mahesh-Latitude-3500 systemd[1]: Failed to start RabbitMQ broker.
dpkg: error processing package rabbitmq-server (--configure):
installed rabbitmq-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
I have removed all the RabbitMQ packages and reinstalled them many times, but nothing worked.
I tried to fix the issue with the following commands, as suggested by other blogs:
$ sudo apt-get update --fix-missing
$ sudo dpkg --configure -a
$ sudo apt-get install -f
$ sudo apt-get install rabbitmq-server -y --fix-missing
Logs after running systemctl status rabbitmq-server.service
rabbitmq-server.service - RabbitMQ broker
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2020-08-31 17:43:11 IST; 1s ago
Process: 1878 ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server (code=exited, status=1/FAILURE)
Main PID: 1878 (code=exited, status=1/FAILURE)
And logs after running journalctl -xe
-- Unit rabbitmq-server.service has begun starting up.
Aug 31 17:44:17 mahesh-Latitude-3500 rabbitmq-server[3761]: Configuring logger redirection
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: ## ## RabbitMQ 3.8.7
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: ## ##
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: ########## Copyright (c) 2007-2020 VMware, Inc. or its affiliates.
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: ###### ##
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Doc guides: https://rabbitmq.com/documentation.html
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Support: https://rabbitmq.com/contact.html
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Tutorials: https://rabbitmq.com/getstarted.html
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Monitoring: https://rabbitmq.com/monitoring.html
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Logs: /var/log/rabbitmq/rabbit#mahesh-Latitude-3500.log
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: /var/log/rabbitmq/rabbit#mahesh-Latitude-3500_upgrade.log
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Config file(s): (none)
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Starting broker...
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: BOOT FAILED
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: ===========
Aug 31 17:44:19 mahesh-Latitude-3500 rabbitmq-server[3761]: Error during startup: {error,{could_not_start_listener,"::",5672,eaddrinuse}}
Aug 31 17:44:20 mahesh-Latitude-3500 rabbitmq-server[3761]: {"init terminating in do_boot",{error,{could_not_start_listener,"::",5672,eaddrinuse}}}
Aug 31 17:44:20 mahesh-Latitude-3500 rabbitmq-server[3761]: init terminating in do_boot ({error,{could_not_start_listener,::,5672,eaddrinuse}})
Aug 31 17:44:20 mahesh-Latitude-3500 rabbitmq-server[3761]: [1B blob data]
Aug 31 17:44:20 mahesh-Latitude-3500 rabbitmq-server[3761]: Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
Aug 31 17:44:20 mahesh-Latitude-3500 systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
Aug 31 17:44:20 mahesh-Latitude-3500 systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
Aug 31 17:44:20 mahesh-Latitude-3500 systemd[1]: Failed to start RabbitMQ broker.
-- Subject: Unit rabbitmq-server.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit rabbitmq-server.service has failed.
--
-- The result is RESULT.
Aug 31 17:44:30 mahesh-Latitude-3500 systemd[1]: rabbitmq-server.service: Service hold-off time over, scheduling restart.
Aug 31 17:44:30 mahesh-Latitude-3500 systemd[1]: rabbitmq-server.service: Scheduled restart job, restart counter is at 179.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit rabbitmq-server.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Aug 31 17:44:30 mahesh-Latitude-3500 systemd[1]: Stopped RabbitMQ broker.
-- Subject: Unit rabbitmq-server.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit rabbitmq-server.service has finished shutting down.
Aug 31 17:44:30 mahesh-Latitude-3500 systemd[1]: Starting RabbitMQ broker...
-- Subject: Unit rabbitmq-server.service has begun start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit rabbitmq-server.service has begun starting up.
Nothing has worked. Please help.
Thanks in advance.
The problem is most likely that the port RabbitMQ is trying to use (5672) is already in use by another process.
This is shown in the output of journalctl -xe:
... : Error during startup: {error,{could_not_start_listener,"::",5672,eaddrinuse}}
You can see all the ports currently in use on the machine with:
$ lsof -i -P -n
Or, to check specifically for that port:
$ lsof -i -P -n | grep 5672
So you could either kill the process using port 5672 or change the port RabbitMQ uses; a sketch of both options follows.
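A short sketch of both options (the PID placeholder comes from the lsof output; the config key is valid for RabbitMQ 3.7+, and the file may need to be created):
$ sudo lsof -i :5672   # identify the PID bound to the port
$ sudo kill <PID>      # stop that process
Or move RabbitMQ to a free port by adding a line to /etc/rabbitmq/rabbitmq.conf:
listeners.tcp.default = 5673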
Fedora 4.10.8-200.fc25.i686+PAE
dnf is crashing with 'segmentation fault (core dumped)'.
I have tried to run 'dnf clean all' without success.
When running 'dnf upgrade', this is logged in dnf.log:
Apr 30 20:17:21 INFO --- logging initialized ---
Apr 30 20:17:21 DDEBUG timer: config: 7 ms
Apr 30 20:17:21 DEBUG cachedir: /var/cache/dnf
Apr 30 20:17:21 DEBUG Loaded plugins: reposync, Query, noroot, needs-restarting, protected_packages, builddep, playground, config-manager, copr, download, system-upgrade, debuginfo-install, generate_completion_cache
Apr 30 20:17:21 DEBUG DNF version: 1.1.10
Apr 30 20:17:21 DDEBUG Command: dnf upgrade
Apr 30 20:17:21 DDEBUG Installroot: /
Apr 30 20:17:21 DDEBUG Releasever: 25
Apr 30 20:17:21 DDEBUG Base command: upgrade
Apr 30 20:17:21 DDEBUG Extra commands: []
Apr 30 20:17:51 DDEBUG repo: downloading from remote: updates, _Handle: metalnk: https://mirrors.fedoraproject.org/metalink?repo=updates-released-f25&arch=i386, mlist: None, urls [].
This is logged in 'messages':
Apr 30 20:17:51 emil2 audit: ANOM_ABEND auid=0 uid=0 gid=0 ses=10 pid=23817 comm="dnf" exe="/usr/libexec/system-python" sig=11
Apr 30 20:17:51 emil2 kernel: dnf[23817]: segfault at 24 ip b64a9c81 sp bfe10cc0 error 4 in libssl3.so[b6496000+49000]
Apr 30 20:17:51 emil2 abrt-hook-ccpp: Process 23817 (system-python) of user 0 killed by SIGSEGV - dumping core
Apr 30 20:17:52 emil2 abrt-server: Deleting problem directory ccpp-2017-04-30-20:17:51-23817 (dup of ccpp-2017-04-28-22:02:01-6627)
Apr 30 20:17:52 emil2 dbus-daemon[721]: [system] Activating service name='org.freedesktop.problems' requested by ':1.699' (uid=0 pid=23827 comm="/usr/bin/python3 /usr/bin/abrt-action-notify -d /v") (using servicehelper)
Apr 30 20:17:52 emil2 dbus-daemon[721]: [system] Successfully activated service 'org.freedesktop.problems'
What can I do to troubleshoot?
I got the same problem after upgrading:
libdb-5.3.28-16 to libdb-5.3.28-24
libdb-utils-5.3.28-16 to libdb-utils-5.3.28-24
Rebuilding the RPM database fixed it:
% rm /var/lib/rpm/__db*
% rpm --rebuilddb
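If you want a safety net around that, copy the database aside first and sanity-check afterwards (paths are the Fedora defaults):
% cp -a /var/lib/rpm /var/lib/rpm.bak   # backup before removing the __db* files
% rpm -qa | wc -l                       # after the rebuild: should print a package count without crashing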
I'm totally new to MongoDB. I'm trying to install Locomotive CMS on my server, which is cool, but I've always used SQL/MySQL, so Mongo is new territory for me.
I installed all the needed MongoDB packages, but when I run sudo service mongod start I get an error code. When I look in the logs for the error, here is the output:
Fri Mar 21 18:13:47.186 [initandlisten] MongoDB starting : pid=5053 port=27017 dbpath=/var/lib/mongo 64-bit host=vagrant-centos64.vagrantup.com
Fri Mar 21 18:13:47.186 [initandlisten] db version v2.4.9
Fri Mar 21 18:13:47.186 [initandlisten] git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c
Fri Mar 21 18:13:47.186 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
Fri Mar 21 18:13:47.186 [initandlisten] allocator: tcmalloc
Fri Mar 21 18:13:47.186 [initandlisten] options: { config: "/etc/mongod.conf", dbpath: "/var/lib/mongo", fork: "true", logappend: "true", logpath: "/var/log/mongo/mongod.log", pidfilepath: "/var/run/mo$
Fri Mar 21 18:13:47.192 [initandlisten] journal dir=/var/lib/mongo/journal
Fri Mar 21 18:13:47.192 [initandlisten] recover : no journal files present, no recovery needed
Fri Mar 21 18:13:47.192 [initandlisten]
Fri Mar 21 18:13:47.192 [initandlisten] ERROR: Insufficient free space for journal files
Fri Mar 21 18:13:47.192 [initandlisten] Please make at least 3379MB available in /var/lib/mongo/journal or use --smallfiles
Fri Mar 21 18:13:47.192 [initandlisten]
Fri Mar 21 18:13:47.193 [initandlisten] exception in initAndListen: 15926 Insufficient free space for journals, terminating
Fri Mar 21 18:13:47.193 dbexit:
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: going to close listening sockets...
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: going to flush diaglog...
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: going to close sockets...
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: waiting for fs preallocator...
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: lock for final commit...
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: final commit...
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: closing all files...
Fri Mar 21 18:13:47.193 [initandlisten] closeAllFiles() finished
Fri Mar 21 18:13:47.193 [initandlisten] journalCleanup...
Fri Mar 21 18:13:47.193 [initandlisten] removeJournalFiles
Fri Mar 21 18:13:47.193 [initandlisten] shutdown: removing fs lock...
Fri Mar 21 18:13:47.193 dbexit: really exiting now
Also, when I run sudo service mongod status, the output is mongod is stopped, so I know it's not running.
From the log, it looks like the error has something to do with insufficient space, but my server has 15 GB free and I'm running sudo, so I know it's not a permission error. How can I allocate more space? Or better yet, what should I allocate more space to?
Any help is appreciated.
Add smallfiles = true to /etc/mongod.conf (the config file shown in your startup log).
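For reference, a minimal sketch of what the file might then contain; the other options are copied from the startup log above, and your actual file will differ:
dbpath = /var/lib/mongo
logpath = /var/log/mongo/mongod.log
logappend = true
fork = true
smallfiles = true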
Now try to start the service again; this should fix the issue.
Setting smallfiles to true tells MongoDB to use a smaller default data file size: it reduces the initial size of data files and caps them at 512 megabytes, and it also reduces the size of each journal file from 1 gigabyte to 128 megabytes.