Red Hat 7 with JBoss A-MQ 6.1: management console is not accessible

We're having the same problem as the OP of JBoss ActiveMQ on Red Hat - Unable to access AMQ Web Console; however, that post doesn't indicate how the problem was solved.
We're trying to evaluate JBoss A-MQ, and have installed RHEL 7 with A-MQ 6.1 according to their installation guide. Everything works locally, e.g. the sample producer/consumer tests succeed in the Verifying the Installation step.
However, we cannot access the management console, even after configuring the remote user. We did have to add a JAVA_OPTS entry to setenv to override the default IPv6 sockets, and netstat shows that the service is now listening on a tcp socket instead of tcp6:
File /opt/jboss-a-mq-6.1.0.redhat-379/bin/setenv now contains:
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
export JAVA_OPTS
# netstat -paunt | grep 8181
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN 10698/java
We can ping the box from other systems; however, attempting to open a telnet session on port 8181 times out. The management console URLs we've tried are:
http://172.16.100.110:8181/hawtio
http://172.16.100.110:8181
but neither works. What are we missing?
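For reference, here is a check that would separate a broken listener from a blocked network path (curl with -I just fetches the response headers):

# On the A-MQ host itself; getting headers back means the listener is fine:
curl -I http://localhost:8181/hawtio
# From a remote machine; a timeout here while the local check passes
# points at the network or a host firewall rather than at A-MQ:
curl -I http://172.16.100.110:8181/hawtio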
Here are the entries from amq.log containing hawt, which suggest that hawtio is starting up correctly:
$ grep hawt amq.log
2014-09-09 11:32:35,778 | INFO | FelixStartLevel | HttpServiceFactoryImpl | .internal.HttpServiceFactoryImpl 35 | 98 - org.ops4j.pax.web.pax-web-runtime - 3.0.6 | Binding bundle: [io.hawt.hawtio-web [146]] to http service
2014-09-09 11:32:35,860 | INFO | pool-10-thread-1 | ConfigManager | io.hawt.system.ConfigManager 32 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Configuration will be discovered via system properties
2014-09-09 11:32:35,863 | INFO | pool-10-thread-1 | JmxTreeWatcher | io.hawt.jmx.JmxTreeWatcher 63 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Welcome to hawtio 1.2-redhat-379 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2014-09-09 11:32:35,865 | INFO | pool-10-thread-1 | UploadManager | io.hawt.jmx.UploadManager 40 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Using file upload directory: /opt/jboss-a-mq-6.1.0.redhat-379/data/tmp/uploads
2014-09-09 11:32:35,868 | INFO | pool-10-thread-1 | AuthenticationFilter | io.hawt.web.AuthenticationFilter 84 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | Starting hawtio authentication filter, JAAS realm: "karaf" authorized role: "admin" role principal classes: ""
2014-09-09 11:32:35,965 | INFO | FelixStartLevel | HttpServiceFactoryImpl | .internal.HttpServiceFactoryImpl 35 | 98 - org.ops4j.pax.web.pax-web-runtime - 3.0.6 | Binding bundle: [io.hawt.hawtio-karaf-terminal [148]] to http service
2014-09-09 11:32:35,987 | INFO | pool-10-thread-2 | ConfigManager | io.hawt.system.ConfigManager 32 | 148 - io.hawt.hawtio-karaf-terminal - 1.2.0.redhat-379 | Configuration will be discovered via system properties
2014-09-09 11:32:35,988 | INFO | pool-10-thread-2 | AuthenticationFilter | io.hawt.web.AuthenticationFilter 84 | 148 - io.hawt.hawtio-karaf-terminal - 1.2.0.redhat-379 | Starting hawtio authentication filter, JAAS realm: "karaf" authorized role: "admin" role principal classes: ""
2014-09-09 11:32:35,996 | WARN | FelixStartLevel | ConfigFacade | io.hawt.config.ConfigFacade 23 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | No ConfigFacade constructed yet so using default configuration for now
2014-09-09 11:32:36,015 | INFO | pool-10-thread-2 | LoginServlet | io.hawt.web.LoginServlet 55 | 148 - io.hawt.hawtio-karaf-terminal - 1.2.0.redhat-379 | hawtio login is using default HttpSession timeout
2014-09-09 11:32:36,128 | INFO | pool-10-thread-1 | hawtio-web - 146} | lipse.jetty.util.log.JavaUtilLog 75 | 90 - org.eclipse.jetty.aggregate.jetty-all-server - 8.1.14.v20131031 | jolokia-agent: No access restrictor found at classpath:/jolokia-access.xml, access to all MBeans is allowed
2014-09-09 11:32:36,191 | INFO | pool-10-thread-1 | LoginServlet | io.hawt.web.LoginServlet 55 | 146 - io.hawt.hawtio-web - 1.2.0.redhat-379 | hawtio login is using default HttpSession timeout
2014-09-09 11:32:39,227 | INFO | de startup timer | MavenIndexerFacade | maven.indexer.MavenIndexerFacade 98 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | Storing maven index files in local directory: /opt/jboss-a-mq-6.1.0.redhat-379/data/mavenIndexer
2014-09-09 11:32:39,621 | INFO | de startup timer | MavenIndexerFacade | maven.indexer.MavenIndexerFacade 148 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | Updating the maven indices. This may take a while, please be patient...
2014-09-09 11:32:39,852 | INFO | de startup timer | MavenIndexerFacade | maven.indexer.MavenIndexerFacade 185 | 149 - io.hawt.hawtio-maven-indexer - 1.2.0.redhat-379 | Completed updating 2 maven indices.

OMG, it was the firewall on the local server.
After spending the day scouring the web for ideas, trying to figure out what was wrong with my configuration, in desperation I finally tried disabling the firewall to see what would happen. So I shut it down with the command:
sudo service firewalld stop
and suddenly I could access the management console! I tried the default URL, and it worked:
http://172.16.100.110:8181
d'oh!
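If you'd rather not leave the firewall down permanently, opening just the console port should work too; this uses the standard firewalld tooling on RHEL 7:

# Open the management console port permanently, then reload the rules:
sudo firewall-cmd --permanent --add-port=8181/tcp
sudo firewall-cmd --reload
# Re-enable the firewall if you stopped it earlier:
sudo systemctl start firewalld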

Related

java.io.IOException: Failed to bind error on ActiveMQ service start

I am configuring a new install of ActiveMQ 5.15.10 on a RHEL 7.7 AWS instance. When the process starts I get this error:
[ec2-user@ip-***-***-***-*** activemq]$ more data/activemq.log
2019-10-30 17:36:45,784 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@4667ae56: startup date [Wed Oct 30 17:36:45 UTC 2019]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-10-30 17:36:46,568 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb] | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:46,628 | INFO | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2019-10-30 17:36:46,652 | INFO | PListStore:[/opt/activemq/data/***-***-***-***/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2019-10-30 17:36:46,821 | INFO | Apache ActiveMQ 5.15.10 (***-***-***-***, ID:ip-***-***-***-***.ec2.internal-36686-1572457006663-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:46,849 | INFO | Listening for connections at: tcp://ip-***-***-***-***.ec2.internal:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-10-30 17:36:46,851 | INFO | Connector openwire started | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:46,855 | INFO | Listening for connections at: amqp://ip-***-***-***-***.ec2.internal:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-10-30 17:36:46,856 | INFO | Connector amqp started | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:46,859 | INFO | Listening for connections at: stomp://ip-***-***-***-***.ec2.internal:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-10-30 17:36:46,861 | INFO | Connector stomp started | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:46,864 | INFO | Listening for connections at: mqtt://ip-***-***-***-***.ec2.internal:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-10-30 17:36:46,865 | INFO | Connector mqtt started | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:46,870 | INFO | Starting Jetty server | org.apache.activemq.transport.WebTransportServerSupport | main
2019-10-30 17:36:46,951 | INFO | Creating Jetty connector | org.apache.activemq.transport.WebTransportServerSupport | main
2019-10-30 17:36:47,043 | WARN | ServletContext@o.e.j.s.ServletContextHandler@7b420819{/,null,STARTING} has uncovered http methods for path: / | org.eclipse.jetty.security.SecurityHandler | main
2019-10-30 17:36:47,091 | INFO | Listening for connections at ws://ip-***-***-***-***.ec2.internal:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.ws.WSTransportServer | main
2019-10-30 17:36:47,092 | INFO | Connector ws started | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:47,093 | INFO | Apache ActiveMQ 5.15.10 (***-***-***-***, ID:ip-***-***-***-***.ec2.internal-36686-1572457006663-0:1) started | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:47,094 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:47,095 | WARN | Store limit is 102400 mb (current store usage is 0 mb). The data directory: /opt/activemq/data/kahadb only has 47858 mb of usable space. - resetting to maximum available disk space: 47858 mb | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:47,096 | WARN | Temporary Store limit is 51200 mb (current store usage is 0 mb). The data directory: /opt/activemq/data only has 47858 mb of usable space. - resetting to maximum available disk space: 47858 mb | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:47,726 | INFO | ActiveMQ WebConsole available at http://***-***-***-***:8161/ | org.apache.activemq.web.WebConsoleStarter | main
2019-10-30 17:36:47,726 | INFO | ActiveMQ Jolokia REST API available at http://***-***-***-***:8161/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2019-10-30 17:36:48,001 | WARN | Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'invokeStart' defined in class path resource [jetty.xml]: Invocation of init method failed; nested exception is java.io.IOException: Failed to bind to /***-***-***-***:8161 | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-10-30 17:36:48,013 | INFO | Apache ActiveMQ 5.15.10 (***-***-***-***, ID:ip-***-***-***-***.ec2.internal-36686-1572457006663-0:1) is shutting down | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:48,014 | INFO | Connector openwire stopped | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:48,015 | INFO | Connector amqp stopped | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:48,016 | INFO | Connector stomp stopped | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:48,017 | INFO | Connector mqtt stopped | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:48,021 | INFO | Connector ws stopped | org.apache.activemq.broker.TransportConnector | main
2019-10-30 17:36:48,024 | INFO | PListStore:[/opt/activemq/data/***-***-***-***/tmp_storage] stopped | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2019-10-30 17:36:48,025 | INFO | Stopping async queue tasks | org.apache.activemq.store.kahadb.KahaDBStore | main
2019-10-30 17:36:48,025 | INFO | Stopping async topic tasks | org.apache.activemq.store.kahadb.KahaDBStore | main
2019-10-30 17:36:48,026 | INFO | Stopped KahaDB | org.apache.activemq.store.kahadb.KahaDBStore | main
2019-10-30 17:36:48,046 | INFO | Apache ActiveMQ 5.15.10 (***-***-***-***, ID:ip-***-***-***-***.ec2.internal-36686-1572457006663-0:1) uptime 1.495 seconds | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:48,047 | INFO | Apache ActiveMQ 5.15.10 (***-***-***-***, ID:ip-***-***-***-***.ec2.internal-36686-1572457006663-0:1) is shutdown | org.apache.activemq.broker.BrokerService | main
2019-10-30 17:36:48,048 | INFO | Closing org.apache.activemq.xbean.XBeanBrokerFactory$1@4667ae56: startup date [Wed Oct 30 17:36:45 UTC 2019]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-10-30 17:36:48,049 | ERROR | Failed to load: class path resource [activemq.xml], reason: Error creating bean with name 'invokeStart' defined in class path resource [jetty.xml]: Invocation of init method failed; nested exception is java.io.IOException: Failed to bind to /***-***-***-***:8161 | org.apache.activemq.xbean.XBeanBrokerFactory | main
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'invokeStart' defined in class path resource [jetty.xml]: Invocation of init method failed; nested exception is java.io.IOException: Failed to bind to /3.231.235.30:8161
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1630)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:737)[spring-beans-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)[spring-context-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:542)[spring-context-4.3.24.RELEASE.jar:4.3.24.RELEASE]
at org.apache.xbean.spring.context.ResourceXmlApplicationContext.<init>(ResourceXmlApplicationContext.java:64)[xbean-spring-4.14.jar:4.14]
at org.apache.xbean.spring.context.ResourceXmlApplicationContext.<init>(ResourceXmlApplicationContext.java:52)[xbean-spring-4.14.jar:4.14]
at org.apache.activemq.xbean.XBeanBrokerFactory$1.<init>(XBeanBrokerFactory.java:104)[activemq-spring-5.15.10.jar:5.15.10]
at org.apache.activemq.xbean.XBeanBrokerFactory.createApplicationContext(XBeanBrokerFactory.java:104)[activemq-spring-5.15.10.jar:5.15.10]
at org.apache.activemq.xbean.XBeanBrokerFactory.createBroker(XBeanBrokerFactory.java:67)[activemq-spring-5.15.10.jar:5.15.10]
at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:71)[activemq-broker-5.15.10.jar:5.15.10]
at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:54)[activemq-broker-5.15.10.jar:5.15.10]
at org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:87)[activemq-console-5.15.10.jar:5.15.10]
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.10.jar:5.15.10]
at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)[activemq-console-5.15.10.jar:5.15.10]
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.10.jar:5.15.10]
at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)[activemq-console-5.15.10.jar:5.15.10]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_191]
at org.apache.activemq.console.Main.runTaskClass(Main.java:262)[activemq.jar:5.15.10]
at org.apache.activemq.console.Main.main(Main.java:115)[activemq.jar:5.15.10]
The web console is logged as available, and then the broker apparently tries to bind the web console port again and fails. This is a fresh instance with no other services or Java processes running.
Please advise.
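Two checks that should narrow this down: whether anything already owns port 8161, and whether the address Jetty is told to bind in conf/jetty.xml is actually assigned to the instance (on EC2 a public/elastic IP is NAT-ed and never appears on the network interface, so telling Jetty to bind it would fail in exactly this way):

# Is anything already listening on the web console port?
sudo ss -ntlp | grep 8161
# Which addresses does this instance actually own? An EC2 public/elastic IP
# will not be listed; jetty.xml has to bind the private IP or 0.0.0.0.
ip addr show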

Karaf w/ LDAP Auth

I'm trying to set up Karaf (4.0.9) to authenticate/authorize users via LDAP/Active Directory.
I've copied the following ldap-module.xml to the deploy directory per https://karaf.apache.org/manual/latest/#_available_realm_and_login_modules:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">
  <jaas:config name="karaf" rank="1">
    <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule" flags="sufficient">
      initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory
      connection.username=cn=ldapsearch,cn=users,dc=eng,dc=net
      connection.password=****
      connection.protocol=
      connection.url=ldap://server:389
      user.base.dn=dc=eng,dc=net
      user.filter=(samaccountname=%u)
      user.search.subtree=true
      user.debug=true
      role.base.dn=dc=eng,dc=net
      role.name.attribute=cn
      role.filter=(member=%fqdn)
      role.search.subtree=true
      role.mapping=jtAdmins=admin,user,operator
      authentication=simple
      debug=true
    </jaas:module>
  </jaas:config>
</blueprint>
I see the logs, and I can see the LDAP login module registered on the realm, so I'm confident the config is at least being used:
karaf@root(jaas)> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
When I try to ssh in, I get the following logs (truncated), and I can see the LDAP traffic in Wireshark:
2017-07-31 16:50:39,229 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Get the user DN.
2017-07-31 16:50:39,238 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Looking for the user in LDAP with
2017-07-31 16:50:39,238 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | base DN: dc=eng,dc=net
2017-07-31 16:50:39,238 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | filter: (samaccountname=jtAdmin)
2017-07-31 16:50:39,244 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Found the user DN.
2017-07-31 16:50:39,245 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Bind user (authentication).
2017-07-31 16:50:39,245 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Set the security principal for CN=jtAdmin,CN=Users,dc=eng,dc=net
2017-07-31 16:50:39,245 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Binding the user.
2017-07-31 16:50:39,254 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | User jtAdmin successfully bound.
2017-07-31 16:50:39,256 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Looking for the user roles in LDAP with
2017-07-31 16:50:39,256 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | base DN: dc=eng,dc=net
2017-07-31 16:50:39,256 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | filter: (member=CN=jtAdmin,CN=Users,DC=eng,DC=net)
2017-07-31 16:50:39,359 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | User jtAdmin is a member of role Domain Computers
2017-07-31 16:50:39,359 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Parse role mapping jtAdmin=admin,user,operator
2017-07-31 16:50:39,359 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Parse role mapping jtAdmin=admin,user,operator
2017-07-31 16:50:39,359 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | User jtAdmin is a member of role Domain Controllers
...
2017-07-31 16:50:39,364 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Parse role mapping jtAdmins=admin,user,operator
2017-07-31 16:50:39,364 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | Parse role mapping jtAdmins=admin,user,operator
2017-07-31 16:50:39,364 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | LDAP role jtAdmins is mapped to Karaf role admin
2017-07-31 16:50:39,364 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | LDAP role jtAdmins is mapped to Karaf role user
2017-07-31 16:50:39,365 | DEBUG | 8]-nio2-thread-9 | LDAPLoginModule | 35 - org.apache.karaf.jaas.modules - 4.0.9 | LDAP role jtAdmins is mapped to Karaf role operator
I can see LDAP authenticate my user, but it seems I don't have permission to log in. I thought role.mapping would map my LDAP/AD group membership to Karaf roles, but that doesn't seem to grant me access. The webconsole also attempts to allow access, but ultimately fails.
What config am I missing to map the LDAP/AD user roles so that my user can use the Karaf ssh/console? Do I need another login module? And how might I do this dynamically (not using a hard-coded role.mapping in the ldap-module.xml bundle)?
Ideally, I'd also like to be able to give LDAP or local users access simultaneously, but I realize that might not be possible.
As luck would have it, I managed to track down the root cause. Thanks to the folks on the karaf IRC channel who let me think out loud.
Ultimately, I believe the root cause is this exception:
javax.naming.PartialResultException: Unprocessed Continuation Reference(s); remaining name ...
I only actually see this exception in the webconsole handler, and NOT in the ssh/shell handler (but ssh doesn't work either, so...)
The exception is coming from LDAPCache.java (namingEnumeration.hasMore(), ~line 259) and ultimately from
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2914)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2888)
This exception is propagated to the caller. Since I can't very well change the JVM, I'm borrowing a suggested solution of adding an ignorePartialNameResult config option for this exception. I don't quite understand why there is a partial result, but I saw one comment implying that the error was due to role.base.dn being at the same level as user.base.dn, which is true in my case. After catching the exception and returning the roles collected so far, I am able to log in successfully with LDAP users.
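Another workaround I've seen suggested for Active Directory (instead of patching the exception handling) is to query the global catalog port, which serves the whole forest and so avoids the continuation referrals that raise PartialResultException; the one-line change in ldap-module.xml would be (3268 is AD's default global catalog port):

connection.url=ldap://server:3268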

How to make Orion Context Broker work with HTTPS notifications?

I want to enable HTTPS for notifications. Orion Context Broker version 1.7.0 is installed on Ubuntu 16.04. To start it, the following command is used:
sudo /etc/init.d/contextBroker start -logAppend -https -key /path/to/orion.key -cert /path/to/orion.crt
The output is:
[ ok ] Starting contextBroker (via systemctl): contextBroker.service.
The status is:
sudo systemctl status contextBroker.service
contextBroker.service - LSB: Example initscript
Loaded: loaded (/etc/init.d/contextBroker; bad; vendor preset: enabled)
Active: active (exited) since Tue 2017-04-04 12:56:13 BRT; 14s ago
Docs: man:systemd-sysv-generator(8)
Process: 8312 ExecStart=/etc/init.d/contextBroker start (code=exited, status=0/SUCCESS)
Apr 04 12:56:13 fiware-ubuntu systemd[1]: Starting LSB: Example initscript...
Apr 04 12:56:13 fiware-ubuntu contextBroker[8312]: contextBroker
Apr 04 12:56:13 fiware-ubuntu contextBroker[8312]: /path/bin/contextBroker
Apr 04 12:56:13 fiware-ubuntu systemd[1]: Started LSB: Example initscript.
Another approach is running Orion as:
sudo /path/bin/contextBroker -logLevel DEBUG -localIp x.y.z.t -https -key /path/to/orion.key -cert /path/to/orion.crt
The log follows:
time=2017-04-04T18:37:58.881Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1705]:main | msg=Orion Context Broker is running
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[205]:mongoConnect | msg=Successful connection to database
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[681]:setWriteConcern | msg=Database Operation Successful (setWriteConcern: 1)
time=2017-04-04T18:37:58.887Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[724]:getWriteConcern | msg=Database Operation Successful (getWriteConcern)
time=2017-04-04T18:37:58.888Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=connectionOperations.cpp[626]:runCollectionCommand | msg=Database Operation Successful (command: { buildinfo: 1 })
...
time=2017-04-04T18:37:58.897Z | lvl=FATAL | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=rest.cpp[1720]:restStart | msg=Fatal Error (error starting REST interface)
It is not working...
If you run Orion as a service (as recommended), then command line parameters have to be configured in the /etc/sysconfig/contextBroker file. The file is explained in this piece of documentation.
Note the BROKER_EXTRA_OPS variable at the end of the file. It is used to include CLI parameters that are not set by any other option, such as the HTTPS-related ones you are using. Thus, it should be a matter of setting BROKER_EXTRA_OPS this way:
BROKER_EXTRA_OPS="-logAppend -https -key /path/to/orion.key -cert /path/to/orion.crt"
Then start the service using:
sudo /etc/init.d/contextBroker start
(Note that no parameter is added after 'start')
You can check that Orion is running with the right parameters using ps ax | grep contextBroker.
Finally, regarding the error Fatal Error (error starting REST interface): it appears when Orion, for some reason, is not able to start the listening server for the REST API. Typically this is because some other process (maybe a forgotten instance of Orion) is already listening on the same port. Use sudo netstat -ntpld | grep 1026 to find out which other process could be listening on that port (assuming that 1026 is the port on which you are trying to run Orion, of course).
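As an aside, if you just need a key/certificate pair to test the HTTPS setup above, a self-signed pair can be generated with openssl (paths and the CN are placeholders):

# 2048-bit private key plus a self-signed certificate valid for one year:
openssl genrsa -out /path/to/orion.key 2048
openssl req -new -x509 -key /path/to/orion.key -out /path/to/orion.crt \
    -days 365 -subj "/CN=orion.example.com"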

Master/slave using Oracle

We're using the so-called JDBC Master/Slave architecture with an Oracle DB. We have 2 nodes, and each one has one broker started. We start Broker1 (on node1) and it becomes the MASTER, obtaining the lock over the tables. Then we start Broker2 on node2, and this one starts as the SLAVE. We can see in the slave broker's log that it tries to obtain the lock every 10 seconds, but it fails:
2013-06-12 19:32:38,714 | INFO | Default failed to acquire lease. Sleeping for 10000 milli(s) before trying again... | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
2013-06-12 19:32:48,720 | INFO | Default Lease held by Default till Wed Jun 12 19:32:57 UTC 2013 | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
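For reference, this retry interval comes from the lease locker we have configured on the JDBC persistence adapter in activemq.xml; a minimal sketch of that part (the Oracle dataSource bean id here is a placeholder):

<persistenceAdapter>
  <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#oracle-ds"
                          lockKeepAlivePeriod="5000">
    <locker>
      <!-- the slave retries the lease every lockAcquireSleepInterval ms -->
      <lease-database-locker lockAcquireSleepInterval="10000"/>
    </locker>
  </jdbcPersistenceAdapter>
</persistenceAdapter>

The master holds the lease only as long as it keeps renewing it every lockKeepAlivePeriod; if renewal stalls (a DB connection loss, a long GC pause, or clock skew between the hosts and the DB server), the slave can legitimately acquire the lease.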
Everything works fine, and then at some point we see in the SLAVE's log that it suddenly becomes the MASTER:
2013-06-13 00:38:11,262 | INFO | Default Lease held by Default till Thu Jun 13 00:38:17 UTC 2013 | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
2013-06-13 00:38:11,262 | INFO | Default failed to acquire lease. Sleeping for 10000 milli(s) before trying again... | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
...
2013-06-13 00:38:21,314 | INFO | Default, becoming the master on dataSource: org.apache.commons.dbcp.BasicDataSource@9c6a99d | org.apache.activemq.store.jdbc.LeaseDatabaseLocker | main
2013-06-13 00:38:21,576 | INFO | Apache ActiveMQ 5.8.0 (Default, ID:corerec3-49774-1371083901328-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:21,692 | WARN | Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1616/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]. Will restart management to re-create jmx connector, trying to remedy this issue. | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2013-06-13 00:38:21,700 | INFO | Listening for connections at: tcp://corerec3:61617?transport.closeAsync=false | org.apache.activemq.transport.TransportServerThreadSupport | main
2013-06-13 00:38:21,700 | INFO | Connector openwire Started | org.apache.activemq.broker.TransportConnector | main
2013-06-13 00:38:21,701 | INFO | Apache ActiveMQ 5.8.0 (Default, ID:corerec3-49774-1371083901328-0:1) started | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:21,701 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:21,701 | ERROR | Memory Usage for the Broker (512 mb) is more than the maximum available for the JVM: 245 mb | org.apache.activemq.broker.BrokerService | main
2013-06-13 00:38:22,157 | INFO | Web console type: embedded | org.apache.activemq.web.WebConsoleStarter | main
2013-06-13 00:38:22,292 | INFO | ActiveMQ WebConsole initialized. | org.apache.activemq.web.WebConsoleStarter | main
2013-06-13 00:38:22,353 | INFO | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
while the MASTER's log shows no change from what it usually outputs...
So it seems that the slave somehow obtains the lock (due to, for example, a connection loss between the master and the DB), but if we don't restart the brokers we start losing messages...
The problem is that in the producers' log we can see that they successfully send messages to QueueX, but we don't see the consumers taking them from the queue...
If we go to the DB and query the ACTIVEMQ_MSGS table, we see that the messages are unprocessed.
It looks as if the broker the producers are connected to holds the lock and inserts the messages into the DB, while the broker the consumers are connected to doesn't hold the lock and can't read the tables...
I don't know if all this makes much sense, but I surely hope someone might shed some light upon this one...
I didn't want to saturate the post with configuration details, but if you need specifics like the failover config, IPs, ports, etc., I will post them...

Why does ActiveMQ restart automatically, and how do I prevent it?

We've been using AMQ 5.5.1 in production for several months. Occasionally, we observe that the broker decides to refresh itself with no outside trigger. When this happens, our queue senders fail until the broker is back online some 10 minutes later. I cannot find any information or settings that would cause this behavior or let me control it.
Is this normal for the broker to recycle on its own like this? If so, what things would cause it?
2012-12-11 11:02:11,603 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@f6ac0b: startup date [Tue Dec 11 11:02:11 EST 2012]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | WrapperSimpleAppMain
2012-12-11 11:02:13,806 | WARN | destroyApplicationContextOnStop parameter is deprecated, please use shutdown hooks instead | org.apache.activemq.xbean.XBeanBrokerService | WrapperSimpleAppMain
2012-12-11 11:02:13,821 | INFO | PListStore:D:\Tools\ActiveMQ\apache-activemq-5.5.1\bin\win32\..\..\data\localhost\tmp_storage started | org.apache.activemq.store.kahadb.plist.PListStore | WrapperSimpleAppMain
2012-12-11 11:02:13,868 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[D:\Tools\ActiveMQ\apache-activemq-5.5.1\bin\win32\..\..\data\kahadb] | org.apache.activemq.broker.BrokerService | WrapperSimpleAppMain
2012-12-11 11:02:16,618 | INFO | KahaDB is version 3 | org.apache.activemq.store.kahadb.MessageDatabase | WrapperSimpleAppMain
2012-12-11 11:02:16,697 | INFO | Recovering from the journal ... | org.apache.activemq.store.kahadb.MessageDatabase | WrapperSimpleAppMain
I found that the wrapper exe process was forcing the restart.
I was able to see in the wrapper.log (Windows service) that the process was being restarted because the JVM was not responding. So this is not an issue with the broker auto-restarting per se; it was an issue with the broker JVM somehow becoming hung (a separate problem).
Here are the wrapper log entries for those interested:
ERROR | wrapper | 2012/12/11 11:01:58 | JVM appears hung: Timed out waiting for signal from JVM.
ERROR | wrapper | 2012/12/11 11:01:58 | JVM did not exit on request, terminated
STATUS | wrapper | 2012/12/11 11:02:04 | Launching a JVM...
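If the JVM hang itself can't be fixed right away, the wrapper's behavior is tunable in wrapper.conf; the following assumes the stock Tanuki Java Service Wrapper that the ActiveMQ Windows service uses (values are illustrative):

# Give a slow or GC-bound JVM longer to answer the wrapper's liveness pings
# (the default timeout is 30 seconds):
wrapper.ping.timeout=300
# Or tell the wrapper never to restart the JVM on its own:
wrapper.disable_restarts=TRUE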