Unable to authenticate cluster user: HORNETQ.CLUSTER.ADMIN.USER - jboss7.x

I am getting the following error in WildFly 8.2.0.Final domain mode. The error occurs when I start the second node:
ERROR [org.hornetq.core.server] (default I/O-1) HQ224018: Failed to create session: HornetQClusterSecurityException[errorType=CLUSTER_SECURITY_EXCEPTION message=HQ119099: Unable to authenticate cluster user: HORNETQ.CLUSTER.ADMIN.USER]

This is fixed by adding ${jboss.messaging.cluster.password:#password#} to the messaging subsystem in standalone.xml, as sketched below.
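A minimal sketch of where the property goes, assuming the HornetQ messaging subsystem of WildFly 8.x (the namespace version may differ between releases); in domain mode the same <cluster-password> element lives in the messaging profile of domain.xml, and #password# stands for your own cluster password:
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
    <hornetq-server>
        <cluster-password>${jboss.messaging.cluster.password:#password#}</cluster-password>
        <!-- ... rest of the hornetq-server configuration ... -->
    </hornetq-server>
</subsystem>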

Wirecloud and IDM server hiccup

I linked WireCloud and the IdM recently. When I log in to WireCloud and land on my WireCloud instance, I get the following error:
Sorry, but the requested page is unavailable due to a server hiccup.
Our engineers have been notified, so check back later.
My IdM configuration is:
URL
http://151.80.41.166:50002
Callback URL
http://151.80.41.166:50002/complete/fiware/
I can't get more error info than this:
Exception Type: AuthStateMissing
Exception Value: Session value state missing.
Exception Location: /usr/local/lib/python2.7/site-packages/social_core/backends/oauth.py in validate_state, line 90
Python Executable: /usr/local/bin/python
Python Version: 2.7.14
Python Path:
['/opt/wirecloud_instance',
'/usr/local/lib/python27.zip',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/usr/local/lib/python2.7/lib-old',
'/usr/local/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/site-packages']
The problem was that the IdM and WireCloud were running on the same machine and were using the same cookie. I added the following lines to WireCloud's settings.py:
# Give WireCloud its own session and CSRF cookie names so they no longer
# collide with the IdM's cookies on the same host.
SESSION_COOKIE_NAME = "wcsessionid"
CSRF_COOKIE_NAME = "wccsrftoken"

DataStax Lifecycle Manager Failing to Install New Cluster

We are using Lifecycle Manager to distribute DSE to a new Cassandra cluster. The install fails no matter what we try. The log shows the following (IP addresses edited out):
Meld failed on name="node1_name" ssh-management-address="node1_ip" node-id="37292a99-f324-4967-803c-d9c50ab0b87b" job-id="d413b5ba-ba87-4de4-bac7-2efac334d2d4"
stdout="503 Server Error: Connect failed for url: http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status event-resource=http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status
2017-07-10 15:53:43,925 - opsc-meld - ERROR - 503 Server Error: Connect failed for url: http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status event-resource=http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status "
stderr=""

Puppet agent is not running successfully after updating ssl certs

I am running Puppet 3.7. My certs are expiring, so I have updated them (after creating a backup so I can get back to the original state, and that part is fine). After updating the certs on the puppetmaster, on the agent, and on PuppetDB (following the linked guides for each), I am unable to run the puppet agent successfully on a client box. It gives me the following error:
root@ip-10-181-36:/var/lib/puppet# sudo puppet agent -t
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
(at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in 'issue_deprecation_warning')
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /node/ip-10-181-36 [find] authenticated at :39
Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /catalog/ip-10-181-36 [find] authenticated at :1
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Error 403 on SERVER: Forbidden request: newer-generic-host(127.0.0.1) access to /report/ip-10-181-36 [save] authenticated at :91
I am stuck at this point, and neither googling nor reading the docs and logs is helping. Does anyone have any ideas?

How to connect spark application to secure HBase with Kerberos

I'm trying to connect a Spark application to HBase with Kerberos enabled. The Spark version is 1.5.0, CDH 5.5.2, and it is executed in YARN cluster mode.
When HBaseContext is initialized, it throws this error:
ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I have tried to do the authentication in the code, adding:
UserGroupInformation.setConfiguration(config)
UserGroupInformation.loginUserFromKeytab(principalName, keytabFilename)
I distribute the keytab file with --files option in spark-submit.
Now, the error is:
java.io.IOException: Login failure for usercomp@COMPANY.CORP from keytab krb5.usercomp.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
...
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:856)
Is this the way to connect to Kerberized HBase from a Spark app?
Please check the example configuration below in case you are missing anything, such as hadoop.security.authentication:
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "list of ip's")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "masterIP:60000")
conf.set("hadoop.security.authentication", "kerberos")
Alternatively, try putting your hbase-site.xml directly in the SPARK_CONF directory of your edge node (usually something like /etc/spark/conf or /etc/spark2/conf).
You can also use UserGroupInformation.loginUserFromKeytabAndReturnUGI and run the HBase calls inside ugi.doAs, as sketched below, or you could add your HBase classpath to SPARK_DIST_CLASSPATH.
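A minimal sketch of the loginUserFromKeytabAndReturnUGI / doAs approach, assuming the principal and keytab names from the question and that the keytab was shipped with --files (so it sits in the container's working directory); adapt the names to your environment:
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}
import org.apache.hadoop.security.UserGroupInformation

val principal = "usercomp@COMPANY.CORP"   // principal from the question
val keytab = "krb5.usercomp.keytab"       // distributed via --files, so relative to the working dir

val conf = HBaseConfiguration.create()
conf.set("hadoop.security.authentication", "kerberos")

// Log in from the keytab and get a UGI back instead of changing the global login user.
UserGroupInformation.setConfiguration(conf)
val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)

// Create the HBase connection as the freshly logged-in user.
val connection = ugi.doAs(new PrivilegedExceptionAction[Connection] {
  override def run(): Connection = ConnectionFactory.createConnection(conf)
})
The doAs block matters because the connection is then created under the UGI returned by the keytab login rather than under whatever ticket-less user the executor process runs as.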

RabbitMQ Management : webmachine error: path="/api/overview"

After I log in to the RabbitMQ management UI, I get the following error:
Got response code 500 with body
Internal Server Error
The server encountered an error while processing this request:
{error,{error,{badmatch,{error,nxdomain}},
[{rabbit_nodes,cluster_name_default,0},
{rabbit_nodes,cluster_name,0},
{rabbit_mgmt_wm_overview,to_json,2},
{webmachine_resource,resource_call,3},
{webmachine_resource,do,3},
{webmachine_decision_core,resource_call,1},
{webmachine_decision_core,decision,1},
{webmachine_decision_core,handle_request,2}]}}
I see the following error in the log file in /var/log/rabbitmq:
=ERROR REPORT==== 31-Oct-2014::06:20:40 ===
webmachine error: path="/api/overview"
{error,{error,{badmatch,{error,nxdomain}},
[{rabbit_nodes,cluster_name_default,0},
{rabbit_nodes,cluster_name,0},
{rabbit_mgmt_wm_overview,to_json,2},
{webmachine_resource,resource_call,3},
{webmachine_resource,do,3},
{webmachine_decision_core,resource_call,1},
{webmachine_decision_core,decision,1},
{webmachine_decision_core,handle_request,2}]}}
The workers are able to connect to the broker and are receiving messages, and the New Relic plugin for RabbitMQ seems to be working fine. However, I am unable to log in through the management plugin. Any pointers in this regard would be helpful.
I had updated the hostname of the system, and that was causing the issue. See the link below:
https://groups.google.com/forum/#!msg/rabbitmq-users/9P-BAwGVHJU/fwOpZPJywwYJ
I added a 127.0.0.1 entry for the hostname in /etc/hosts (see the example entry after the diagnostics below). That solved the management plugin problem, but rabbitmqctl still showed the following error; restarting RabbitMQ solved the rabbitmqctl problem as well:
Listing queues ...
Error: unable to connect to node 'rabbit@<hostname>': nodedown
DIAGNOSTICS
===========
attempted to contact: ['rabbit@<hostname>']
rabbit@<hostname>:
* connected to epmd (port 4369) on <hostname>
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* suggestion: hostname mismatch?
* suggestion: is the cookie set correctly?
current node details:
- node name: <nodename>
- home dir: <homedir>
- cookie hash: <cookiehash>
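For reference, a minimal /etc/hosts entry of the kind described above, using a hypothetical hostname my-rabbit-host (substitute the machine's actual hostname):
127.0.0.1   localhost
127.0.0.1   my-rabbit-host
This lets the node's hostname resolve locally again, which is what the {error,nxdomain} in the management plugin's stack trace and the Erlang distribution failure seen by rabbitmqctl were both complaining about.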