I use the command below to add a user to RabbitMQ:
rabbitmqctl add_user heartbeat alive
I get the following error:
Adding user "heartbeat" ...
Error:
{:undef, [{:crypto, :hash, [:sha256, <<94, 223, 167, 31, 97, 108, 105, 118, 101>>], []}, {:rabbit_password, :hash, 2, [file: 'src/rabbit_password.erl', line: 34]}, {:rabbit_auth_backend_internal, :add_user_sans_validation, 3, [file: 'src/rabbit_auth_backend_internal.erl', line: 252]}, {:rpc, :"-handle_call_call/6-fun-0-", 5, [file: 'rpc.erl', line: 197]}]}
What am I doing wrong here? I am not able to understand the error.
The RabbitMQ server version is 3.7.7.
This was resolved on macOS using the command below:
brew install openssl
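If you want to confirm the fix, note that the undef error above means crypto:hash/2 could not run on the broker's Erlang node. A quick check (a sketch, assuming rabbitmqctl can reach the running node) is to evaluate the same call directly:
rabbitmqctl eval 'crypto:hash(sha256, <<"test">>).'
Or test the same thing in a standalone Erlang shell on that machine:
erl -noshell -eval 'io:format("~p~n", [crypto:hash(sha256, <<"ok">>)]), halt().'
If either command still errors out, the crypto application still cannot find a usable OpenSSL.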
I have a Redis server that I am trying to connect to from Mule 4 applications.
My objectives are:
Connect to Redis using a Mule 4 app -> Success
Connect to Redis using RedisInsight to visualise the data -> Problem
When connecting using RedisInsight I do the following:
Launch the RedisInsight tool. It starts at http://localhost:8001/
Click on "I already have a database"
Click on "Connect to a Redis Database"
Here I provide the host, port, a name (which, as per the documentation, can be anything, say redis_test) and the password.
I get the error message: "Something went wrong adding the database. Please try again"
Interestingly, when connecting from Mule, I only need to provide the host, port and password and it works.
Please help. Thanks in advance.
From the RedisInsight logs:
ERROR 2021-02-09 14:14:20,123 django.request Internal Server Error: /api/instance/
Traceback (most recent call last):
File "django\core\handlers\exception.py", line 34, in inner
File "django\core\handlers\base.py", line 115, in _get_response
File "django\core\handlers\base.py", line 113, in _get_response
File "django\views\decorators\csrf.py", line 54, in wrapped_view
File "django\views\generic\base.py", line 71, in view
File "rest_framework\views.py", line 495, in dispatch
File "rest_framework\views.py", line 455, in handle_exception
File "rest_framework\views.py", line 492, in dispatch
File "redisinsight\core\views\instance.py", line 208, in post
File "redisinsight\core\views\instance.py", line 147, in _save_redis_instance
File "redisinsight\core\services\database\_routines.py", line 80, in _wrapped_add_db_func
File "redisinsight\core\services\database\_routines.py", line 765, in add_redis_database
File "redisinsight\core\services\database\_routines.py", line 809, in add_standalone_db
File "redisinsight\core\services\database\_routines.py", line 576, in _add_standalone_db
File "redisinsight\core\services\database\_routines.py", line 190, in _assert_db_type
File "redisinsight\core\services\database\_routines.py", line 175, in _probe_db_type
File "redis\client.py", line 1281, in info
File "redis\client.py", line 878, in execute_command
File "redis\client.py", line 892, in parse_response
File "redis\connection.py", line 752, in read_response
redis.exceptions.ResponseError: unknown command `INFO`, with args beginning with:
It looks like the INFO command is disabled on your Redis server. RedisInsight needs basic commands like INFO and PING to be enabled.
To re-enable the INFO command:
Edit the Redis config file:
sudo nano /etc/redis/redis.conf
Search for the line that renames the INFO command, something like:
rename-command INFO ""
Comment out that line and restart Redis:
systemctl restart redis
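To confirm the commands are available again before retrying RedisInsight, you can run them yourself with redis-cli (the host, port and password below are placeholders for your own values):
redis-cli -h 127.0.0.1 -p 6379 -a yourpassword PING
redis-cli -h 127.0.0.1 -p 6379 -a yourpassword INFO server
If PING returns PONG and INFO prints the server section instead of an "unknown command" error, RedisInsight should be able to add the database.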
Attempting to restart an Ambari-managed cluster and getting errors related to the Timeline Service V2.0 Reader service starting:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 108, in <module>
ApplicationTimelineReader().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 51, in start
hbase(action='start')
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/hbase_service.py", line 80, in hbase
createTables()
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/hbase_service.py", line 147, in createTables
logoutput=True)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 308, in _call
raise ExecuteTimeoutException(err_msg)
resource_management.core.exceptions.ExecuteTimeoutException: Execution of 'ambari-sudo.sh su yarn-ats -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/texlive/2016/bin/x86_64-linux:/usr/lib64/qt-3.3/bin:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/maven/bin:/root/bin:/opt/maven/bin:/opt/maven/bin:/var/lib/ambari-agent'"'"' ; sleep 10;export HBASE_CLASSPATH_PREFIX=/usr/hdp/3.0.0.0-1634/hadoop-yarn/timelineservice/*; /usr/hdp/3.0.0.0-1634/hbase/bin/hbase --config /usr/hdp/3.0.0.0-1634/hadoop/conf/embedded-yarn-ats-hbase org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator -Dhbase.client.retries.number=35 -create -s'' was killed due timeout after 300 seconds
I have not changed any configs or installed anything new before this restart attempt; I simply stopped the cluster services and attempted to restart them. I am not sure what this error message means. Any debugging tips or fixes?
I found the solution on another community post:
Navigate to the host where the Timeline Service Reader is installed and install the HBase Client on that host.
Here is how I installed the HBase Client via the Ambari UI:
In the Ambari UI, go to Hosts, then click the host you want to install the HBase Client component on.
In the list of components, you will have the option to add more.
From here I installed the HBase Client.
Then I stopped and restarted the cluster via the Ambari UI (I got a notification about stale configs, though I'm not sure if this was my problem all along or if installing the HBase Client raised the stale configs alert).
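If you prefer not to click through the UI, the same component can also be added through the Ambari REST API. This is only a sketch; the credentials, server address, CLUSTER and HOSTNAME values are placeholders you would replace with your own:
curl -u admin:admin -H "X-Requested-By: ambari" -X POST http://ambari-server:8080/api/v1/clusters/CLUSTER/hosts/HOSTNAME/host_components/HBASE_CLIENT
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://ambari-server:8080/api/v1/clusters/CLUSTER/hosts/HOSTNAME/host_components/HBASE_CLIENT
The first call registers the HBASE_CLIENT component on the host and the second asks Ambari to install it; after that, restart the affected services as above.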
I have the following problem:
Elixir
[centos#ip-172-172-3-49 helix]$ env MIX_ENV=prod mix release
Could not find Hex, which is needed to build dependency :plug_cowboy
Shall I install Hex? (if running non-interactively, use "mix local.hex --force") [Yn] Y
18:12:32.462 [info] ['TLS', 32, 'client', 58, 32, 73, 110, 32, 115, 116, 97, 116, 101, 32, 'hello', 32, 'received SERVER ALERT: Fatal - Handshake Failure', 10]
** (Mix) httpc request failed with: {:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet6, [:inet6], :enetunreach}, {:inet, [:inet], {:tls_alert, 'handshake failure'}}]}
Could not install Hex because Mix could not download metadata at https://repo.hex.pm/installs/hex-1.x.csv.
I guess the root cause is this issue with Erlang:
Erlang/OTP 21 [erts-10.2.1] [source] [64-bit] [smp:36:36] [ds:36:36:10] [async-threads:1] [hipe]
1> ssl:start().
ok
2> Sock = fun() -> {ok, S} = gen_tcp:connect("google.com", 443, []), S end.
#Fun<erl_eval.20.128620087>
3> ssl:connect(Sock(), []).
=INFO REPORT==== 28-Dec-2018::18:10:30.019612 ===
TLS client: In state hello received SERVER ALERT: Fatal - Handshake Failure
{error,{tls_alert,"handshake failure"}}
Is there a workaround for this yet? The OS is CentOS 7.
Update 1
On macOS it works:
Erlang/OTP 21 [erts-10.2] [source] [64-bit] [smp:6:6] [ds:6:6:10] [async-threads:1] [hipe] [dtrace]
Eshell V10.2 (abort with ^G)
1> ssl:start().
ok
2> Sock = fun() -> {ok, S} = gen_tcp:connect("google.com", 443, []), S end.
#Fun<erl_eval.20.128620087>
3> ssl:connect(Sock(), []).
{ok,{sslsocket,{gen_tcp,#Port<0.6>,tls_connection,undefined},
[<0.100.0>,<0.99.0>]}}
4>
It turns out that the Erlang RPM I was using does not support the new SSL in Erlang properly.
https://github.com/rabbitmq/erlang-rpm/releases/download/v21.2.1/erlang-21.2.1-1.el7.centos.x86_64.rpm
Using a different version from Erlang Solutions works fine:
https://packages.erlang-solutions.com/erlang/esl-erlang/FLAVOUR_1_general/esl-erlang_21.2-1~centos~7_amd64.rpm
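For completeness, here is a rough sketch of how you might check and swap the package on CentOS 7 (the package name erlang in the remove step is an assumption based on the rabbitmq/erlang-rpm build above; verify it with the first command):
rpm -qa | grep -i erlang
sudo yum remove erlang
sudo yum install https://packages.erlang-solutions.com/erlang/esl-erlang/FLAVOUR_1_general/esl-erlang_21.2-1~centos~7_amd64.rpm
After reinstalling, repeat the ssl:connect test from the question to confirm the handshake succeeds.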
ssl:connect/2 is described here: http://erlang.org/doc/man/ssl.html#connect-2.
Connecting is as easy as:
ssl:start().
ssl:connect("www.google.com", 443, []).
I have created a Hadoop cluster using Apache Ambari 2.1.0 with 3 datanodes.
Now, when I try to add another datanode to the existing cluster, it throws this error:
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'hadoop_2_3_*'' returned 1. No Presto metadata available for base
Delta RPMs reduced 3.6 M of updates to 798 k (78% saved)
Here is my web UI console log:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 153, in
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 34, in install
self.install_packages(env, params.exclude_packages)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 376, in install_packages
Package(name)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in init
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/init.py", line 45, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum
-d 0 -e 0 -y install 'hadoop_2_3_*'' returned 1. No Presto metadata available for base Delta RPMs reduced 3.6 M of updates to 798 k (78%
saved)
Error downloading packages:
hadoop_2_3_4_0_3485-yarn-proxyserver-2.7.1.2.3.4.0-3485.el6.x86_64:
[Errno 256] No more mirrors to try.
This looks like there are two issues with yum and your repositories.
First I see the message:
No Presto metadata available for base Delta RPMs reduced 3.6 M of
updates to 798 k (78% saved)
Try running the following command on the host that you are trying to add as a datanode to fix the first issue:
sudo yum clean all
Then see if you can perform this command successfully:
sudo yum -v install 'hadoop_2_3_*'
If you get to the prompt that asks whether you want to install (y/n), then it was successful; choose the no option and retry the add-datanode action from Ambari. If you get an error or some failure, take a look at the verbose output to troubleshoot the problem further.
Using 64-bit RHEL 6, receiving this error from Yum:
[root /]# yum install [package_name]
---Start Error---
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in <module>
yummain.user_main(sys.argv[1:], exit_code=True)
File "/usr/share/yum-cli/yummain.py", line 288, in user_main
errcode = main(args)
File "/usr/share/yum-cli/yummain.py", line 140, in main
result, resultmsgs = base.doCommands()
File "/usr/share/yum-cli/cli.py", line 436, in doCommands
self._getTs(needTsRemove)
File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 99, in _getTs
self._getTsInfo(remove_only)
File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 110, in _getTsInfo
pkgSack = self.pkgSack
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 887, in <lambda>
pkgSack = property(fget=lambda self: self._getSacks(),
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 669, in _getSacks
self.repos.populateSack(which=repos)
File "/usr/lib/python2.6/site-packages/yum/repos.py", line 308, in populateSack
sack.populate(repo, mdtype, callback, cacheonly)
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 187, in populate
dobj = repo_cache_function(xml, csum)
File "/usr/lib64/python2.6/site-packages/sqlitecachec.py", line 46, in getPrimary
self.repoid))
TypeError: Parsing primary.xml error: Start tag expected, '<' not found
---End Error---
This just started today. It was working fine a couple of days ago, and I haven't installed anything on this system since the last use.
I have already rebuilt Python 2.6 and Yum 3.4.3 and still get the same errors as above. Any ideas?
Clear the repo cache and rebuild it:
yum clean all
yum update
Run this:
sudo su
export LD_LIBRARY_PATH=/usr/lib64:/usr/local/lib
yum clean all
yum update yum
I think this fixes it. It worked for me.