I have a stateless service that declares labels for a Traefik health check, but in the Traefik dashboard I don't see the labels picked up, and the health check is not running against the service.
I have tried changing the Traefik version from 1.7.6 to 1.7.9 and reordering the labels to see if any of them are picked up, with no luck.
The service configuration looks like this:
<ServiceTypes>
<!-- This is the name of your ServiceType.
The UseImplicitHost attribute indicates this is a guest service. -->
<StatelessServiceType ServiceTypeName="MyServiceType" UseImplicitHost="true">
<Extensions>
<Extension Name="Traefik">
<Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
<Label Key="traefik.enable">true</Label>
<Label Key="traefik.frontend.rule">Host:foobar.com</Label>
<Label key="traefik.backend.healthcheck.interval">1s</Label>
<Label key="traefik.backend.healthcheck.path">/healthpath</Label>
<Label key="traefik.backend.healthcheck.hostname">foobar.com</Label>
</Labels>
</Extension>
</Extensions>
</StatelessServiceType>
</ServiceTypes>
The Traefik toml file includes this:
# Enable custom health check options.
[healthcheck]
# Set the default health check interval.
#
# Optional
# Default: "30s"
#
interval = "30s"
I would expect the labels to be picked up and shown in the dashboard, and the service to receive health-check traffic on the specified path.
I've seen many posts about achieving Tomcat session replication in Docker Swarm using Traefik, but I just don't want to add another component. I use httpd as a frontend for Tomcat. Tomcat will be deployed as a service with 4 replicas and scaled when required; httpd runs as a Docker service, and both are on the same network. My question is: is there any way to achieve Tomcat session replication in Docker Swarm in this case? TIA.
As long as all Tomcat Docker containers are reachable, you can achieve this with the cluster setup below.
The setup below is for Tomcat 7, but it should work the same way for other versions.
For Apache Tomcat 7, uncomment the <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> tag and update the cluster details to something like the following:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564" frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto" port="4000" autoBind="100"
selectorTimeout="5000" maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" />
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
</Cluster>
Also, we need to add a workers.properties file under the apache/conf/ path.
Please note that this configuration has to be present on all the Tomcat servers that will run in the cluster.
You can read more in detail here.
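As a rough illustration (not taken from the linked article), a mod_jk workers.properties for two Tomcat nodes might look like the sketch below; the worker names, hostnames, and ports are placeholders you would adapt to your containers:

```properties
# Hypothetical mod_jk workers file -- adjust hosts/ports to your setup
worker.list=lb

worker.tomcat1.type=ajp13
worker.tomcat1.host=tomcat1
worker.tomcat1.port=8009

worker.tomcat2.type=ajp13
worker.tomcat2.host=tomcat2
worker.tomcat2.port=8009

# Load balancer with sticky sessions, so a session stays on the node that created it
worker.lb.type=lb
worker.lb.balance_workers=tomcat1,tomcat2
worker.lb.sticky_session=true
```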
You can do this without a frontend, BUT you're going to have issues, because if the ingress routes to a different node, that node has to request the session from another node.
The plain Tomcat image is not sufficient for clustering; it needs some customization of /usr/local/tomcat/conf/server.xml to configure the cluster to use the DNSMembershipProvider:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership
className="org.apache.catalina.tribes.membership.cloud.CloudMembershipService"
membershipProviderClassName="org.apache.catalina.tribes.membership.cloud.DNSMembershipProvider"
/>
</Channel>
</Cluster>
DNSMembershipProvider uses the environment variable DNS_MEMBERSHIP_SERVICE_NAME to look up the server so it should match the service name.
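The custom image referenced by `build: ./tomcat` below can be quite small; here is a sketch of what the build context might contain (the base image tag is an assumption, not taken from the repo):

```dockerfile
# Minimal sketch: official Tomcat base image plus a server.xml that contains
# the <Cluster> element shown above (CloudMembershipService / DNSMembershipProvider).
FROM tomcat:9-jdk11
COPY server.xml /usr/local/tomcat/conf/server.xml
```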
The following docker-compose.yml shows how to deploy it without any other component.
version: "3.8"
services:
  tomcat:
    build: ./tomcat
    image: sample-tomcat
    environment:
      - DNS_MEMBERSHIP_SERVICE_NAME=tomcat
    ports:
      - 50000:80
    deploy:
      replicas: 6
      update_config:
        order: start-first
Proper way
As I stated earlier, the above approach satisfies the OP's requirement of not adding an additional component, but it will have significant performance issues. Instead you need a proxy that can route to the servers with the following features:
session stickiness
Docker Swarm service label discovery
health checks to reduce potential user impact by detecting beforehand that a server is down.
Traefik handles all these requirements quite nicely, even though Traefik itself does not support clustering, because its ACME storage does not allow concurrent access without going to their EE version.
The docker-compose.yml would be as follows.
version: "3.8"
services:
  proxy:
    image: traefik:2.6
    command:
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.app.address=:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 50000:80
  tomcat:
    build: ./tomcat
    image: sample-tomcat
    environment:
      - DNS_MEMBERSHIP_SERVICE_NAME=tomcat
    deploy:
      replicas: 6
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.app.entrypoints=app"
        - "traefik.http.routers.app.rule=Host(`localhost`)"
        - "traefik.http.services.app-service.loadBalancer.sticky.cookie=true"
        - "traefik.http.services.app-service.loadBalancer.sticky.cookie.httponly=true"
        - "traefik.http.services.app-service.loadbalancer.server.port=8080"
        - "traefik.http.services.app-service.loadbalancer.healthcheck.path=/"
      update_config:
        order: start-first
      endpoint_mode: dnsrr
One property of note: endpoint_mode: dnsrr is not available when ports are published, as they were in the earlier example. It enables better replication of session data within the Tomcat cluster, as each node gets an individual address the others can resolve.
An example implementation of this is in my GitHub repo: https://github.com/trajano/tomcat-docker-swarm
I'm running some tests logging to an Elasticsearch instance using NLog in our API. The Elasticsearch instance runs inside Docker. If the API is executed under IIS Express, I can log to Elasticsearch without a problem and can see the "logstash" index created, but if I run the API inside a Docker container, the logs never reach Elasticsearch and the index is never created.
My NLog config:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
autoReload="true"
throwConfigExceptions="true"
internalLogLevel="info"
internalLogFile="c:\temp\internal-nlog-AspNetCore3.txt">
<extensions>
<add assembly="NLog.Targets.ElasticSearch"/>
</extensions>
<targets>
<target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000">
<target xsi:type="ElasticSearch"/>
</target>
</targets>
<rules>
<logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
<logger name="Microsoft.*" maxlevel="Info" final="true" />
</rules>
</nlog>
And in my appsettings.json:
"ElasticsearchUrl": "http://192.168.0.9:9200",
Perhaps I'm missing something or I'm not understanding the interaction between the containers.
(1) Your question doesn't provide any details about the configuration of the two containers (one running your app, one running Elasticsearch).
I have an example that logs to Elasticsearch, configured with Kibana to view the results. It uses a different logger provider (Essential.LoggerProvider.Elasticsearch), but it has a docker-compose file that shows the connection between Elasticsearch and Kibana: https://github.com/sgryphon/essential-logging/tree/master/examples/HelloElasticsearch
# Docker Compose file for E-K stack
# Run with:
# docker-compose up -d
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.1
    ...
    networks:
      - elastic-network
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:7.6.1
    ...
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - elastic-network
networks:
  elastic-network:
    driver: bridge
The relevant parts show setting up a network bridge between the two Docker containers, and then the connection between them.
While "http://192.168.0.9:9200" might be the correct connection from outside (your IIS) into Elasticsearch, you have to check whether that is how your API's Docker container sees the Elasticsearch machine; e.g. in the example above, Kibana sees Elasticsearch as "http://elasticsearch:9200".
You would need to update the question with details of your Docker configuration, e.g. the command line you run to start the containers, or a docker-compose file, to work out why they can't see each other.
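For instance (purely illustrative, with a hypothetical api service name), the API container would join the same network and address Elasticsearch by service name instead of by host IP:

```yaml
services:
  api:
    build: .            # your API image (hypothetical)
    environment:
      # inside the Compose network, use the service name, not a host IP
      ElasticsearchUrl: "http://elasticsearch:9200"
    networks:
      - elastic-network
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.1
    networks:
      - elastic-network
networks:
  elastic-network:
    driver: bridge
```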
(2) You might want to check that it really is working from IIS, as it seems unusual that NLog would create a "logstash-" index... normally Logstash would create that index, and NLog should create its own; e.g. log4net creates a "log-" index, Essential.LoggerProvider.Elasticsearch uses "dotnet-", etc.
Disclaimer: I am the author of Essential.LoggerProvider.Elasticsearch
I'm trying to install Openfire 4.0.2. My problem is that after restarting Openfire and opening the Admin Console, I always see the Setup page. What should I do to fix it?
This is my openfire.xml file
<?xml version="1.0" encoding="UTF-8"?>
<!--
This file stores bootstrap properties needed by Openfire.
Property names must be in the format: "prop.name.is.blah=value"
That will be stored as:
<prop>
<name>
<is>
<blah>value</blah>
</is>
</name>
</prop>
Most properties are stored in the Openfire database. A
property viewer and editor is included in the admin console.
-->
<!-- root element, all properties must be under this element -->
<jive>
<adminConsole>
<!-- Disable either port by setting the value to -1 -->
<port>7090</port>
<securePort>7091</securePort>
</adminConsole>
<locale>en</locale>
<!-- Network settings. By default, Openfire will bind to all network interfaces.
Alternatively, you can specify a specific network interfaces that the server
will listen on. For example, 127.0.0.1. This setting is generally only useful
on multi-homed servers. -->
<!--
<network>
<interface>127.0.0.1</interface>
</network>
-->
<!-- SPDY Protocol is npn.
(note: npn does not work with Java 8)
add -Xbootclasspath/p:/OPENFIRE_HOME/lib/npn-boot.jar to .vmoptions file -->
<!--
<spdy>
<protocol>npn</protocol>
</spdy>
-->
<!-- XEP-0198 properties -->
<stream>
<management>
<!-- Whether stream management is offered to clients by server. -->
<active>true</active>
<!-- Number of stanzas sent to client before a stream management
acknowledgement request is made. -->
<requestFrequency>5</requestFrequency>
</management>
</stream>
</jive>
Thank you.
In a file like this, the database section is missing, so you probably never actually finished the setup.
However, there are two flags you must add:
in openfire.xml, <setup>true</setup> as a child of the <jive> tag,
and in the ofProperty table of the database:
INSERT INTO OFPROPERTY (NAME,PROPVALUE) VALUES ('setup','true');
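For example, the openfire.xml from the question would gain the flag like this (only the relevant part shown):

```xml
<jive>
  <!-- tells Openfire the setup wizard has completed -->
  <setup>true</setup>
  <adminConsole>
    <port>7090</port>
    <securePort>7091</securePort>
  </adminConsole>
  ...
</jive>
```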
The answer is: you have to uninstall Openfire, then delete the Openfire folder located in C:/ProgramFiles(x86)/, and reinstall Openfire.
The setup procedure of Openfire will, if it runs successfully, modify the content of the openfire.xml file. The most typical reason for this to fail is a file permission problem. Make sure that the user that is executing Openfire is allowed to read & write all files under the Openfire home folder.
The ActiveMQ service is triggered via a cookbook, but it does not run:
activemq/attributes/default.rb
default['activemq']['version'] = '5.11.0'
src_filename="apache-activemq-#{node['activemq']['version']}-bin.tar.gz"
src_filepath = "#{Chef::Config['file_cache_path']}/#{src_filename}"
default['activemq']['src_filepath'] = "#{src_filepath}"
default['activemq']['tar_filepath'] = "http://xxx-xx-xx-xxx-xx:8091/3rdparty/activemq/#{src_filename}"
default['activemq']['dir'] = "/usr/local/apache-activemq-#{node['activemq']['version']}"
default['activemq']['wrapper']['max_memory'] = '1024'
default['activemq']['wrapper']['useDedicatedTaskRunner'] = 'true'
default['activemq']['zooKeeper']['address']="xxx.xx.xx.xx:2181"
default['activemq']['zooKeeper']['hostname']="xxx.xx.xx.xx"
activemq/recipes/default.rb
include_recipe "lgjava"
activemq_home= node['activemq']['dir']
remote_file "#{node['activemq']['src_filepath']}" do
mode 0755
source node['activemq']['tar_filepath']
action :create
notifies :create, "directory[apache-activemq-#{node['activemq']['version']}]", :immediately
notifies :run, "execute[untar-activemq]", :immediately
end
directory "apache-activemq-#{node['activemq']['version']}" do
path "#{activemq_home}"
mode 0755
recursive true
end
execute "untar-activemq" do
cwd Chef::Config[:file_cache_path]
command <<-EOF
tar xvzf apache-activemq-#{node['activemq']['version']}-bin.tar.gz -C #{node['activemq']['dir'] } --strip 1
EOF
action :run
end
file "#{activemq_home}/bin/activemq" do
owner 'root'
group 'root'
mode '0755'
end
arch = node['kernel']['machine'] == 'x86_64' ? 'x86-64' : 'x86-32'
link '/etc/init.d/activemq' do
to "#{activemq_home}/bin/linux-#{arch}/activemq"
end
template "jetty-realm.properties" do
source "jetty-realm.properties.erb"
mode "0755"
path "#{activemq_home}/conf/jetty-realm.properties"
action :create
notifies :restart, 'service[activemq]'
end
template "activemq.xml" do
source "activemq.xml.erb"
mode "0755"
path "#{activemq_home}/conf/activemq.xml"
action :create
notifies :restart, 'service[activemq]'
end
service 'activemq' do
supports :restart => true, :status => true
action [:enable, :start]
end
# symlink so the default wrapper.conf can find the native wrapper library
link "#{activemq_home}/bin/linux" do
to "#{activemq_home}/bin/linux-#{arch}"
end
# symlink the wrapper's pidfile location into /var/run
link '/var/run/activemq.pid' do
to "#{activemq_home}/bin/linux/ActiveMQ.pid"
not_if 'test -f /var/run/activemq.pid'
end
template "#{activemq_home}/bin/linux/wrapper.conf" do
source 'wrapper.conf.erb'
mode '0644'
notifies :restart, 'service[activemq]'
end
activemq/templates/default/activemq.xml.erb
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<!--
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter> -->
<persistenceAdapter>
<replicatedLevelDB directory="activemq-data"
replicas="2"
bind="tcp://0.0.0.0:0"
zkAddress=<%=node['activemq']['zooKeeper']['address']%>
zkPath="/activemq/leveldb-stores"
hostname=<%=node['activemq']['zooKeeper']['hostname']%> />
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
activemq/templates/default/wrapper.conf.erb
#********************************************************************
# Wrapper Properties
#********************************************************************
#wrapper.debug=TRUE
set.default.ACTIVEMQ_HOME=../..
set.default.ACTIVEMQ_BASE=../..
set.default.ACTIVEMQ_CONF=%ACTIVEMQ_BASE%/conf
set.default.ACTIVEMQ_DATA=%ACTIVEMQ_BASE%/data
wrapper.working.dir=.
# Java Application
wrapper.java.command=java
# Java Main class. This class must implement the WrapperListener interface
# or guarantee that the WrapperManager class is initialized. Helper
# classes are provided to do this for you. See the Integration section
# of the documentation for details.
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=%ACTIVEMQ_HOME%/bin/wrapper.jar
wrapper.java.classpath.2=%ACTIVEMQ_HOME%/bin/activemq.jar
# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=%ACTIVEMQ_HOME%/bin/linux-x86-64/
# Java Additional Parameters
# note that n is the parameter number starting from 1.
wrapper.java.additional.1=-Dactivemq.home=%ACTIVEMQ_HOME%
wrapper.java.additional.2=-Dactivemq.base=%ACTIVEMQ_BASE%
wrapper.java.additional.3=-Djavax.net.ssl.keyStorePassword=password
wrapper.java.additional.4=-Djavax.net.ssl.trustStorePassword=password
wrapper.java.additional.5=-Djavax.net.ssl.keyStore=%ACTIVEMQ_CONF%/broker.ks
wrapper.java.additional.6=-Djavax.net.ssl.trustStore=%ACTIVEMQ_CONF%/broker.ts
wrapper.java.additional.7=-Dcom.sun.management.jmxremote
wrapper.java.additional.8=-Dorg.apache.activemq.UseDedicatedTaskRunner=<%= node['activemq']['wrapper']['useDedicatedTaskRunner'] %>
wrapper.java.additional.9=-Djava.util.logging.config.file=logging.properties
wrapper.java.additional.10=-Dactivemq.conf=%ACTIVEMQ_CONF%
wrapper.java.additional.11=-Dactivemq.data=%ACTIVEMQ_DATA%
wrapper.java.additional.12=-Djava.security.auth.login.config=%ACTIVEMQ_CONF%/login.config
# Uncomment to enable jmx
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.port=1616
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.authenticate=false
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.ssl=false
# Uncomment to enable YourKit profiling
#wrapper.java.additional.n=-Xrunyjpagent
# Uncomment to enable remote debugging
#wrapper.java.additional.n=-Xdebug -Xnoagent -Djava.compiler=NONE
#wrapper.java.additional.n=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=3
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=<%= node['activemq']['wrapper']['max_memory'] %>
# Application parameters. Add parameters as needed starting from 1
wrapper.app.parameter.1=org.apache.activemq.console.Main
wrapper.app.parameter.2=start
#********************************************************************
# Wrapper Logging Properties
#********************************************************************
# Format of output for the console. (See docs for formats)
wrapper.console.format=PM
# Log Level for console output. (See docs for log levels)
wrapper.console.loglevel=INFO
# Log file to use for wrapper output logging.
wrapper.logfile=%ACTIVEMQ_DATA%/wrapper.log
# Format of output for the log file. (See docs for formats)
wrapper.logfile.format=LPTM
# Log Level for log file output. (See docs for log levels)
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
wrapper.logfile.maxsize=0
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
wrapper.logfile.maxfiles=0
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
#********************************************************************
# Wrapper Windows Properties
#********************************************************************
# Title to use when running as a console
wrapper.console.title=ActiveMQ
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.ntservice.name=ActiveMQ
# Display name of the service
wrapper.ntservice.displayname=ActiveMQ
# Description of the service
wrapper.ntservice.description=ActiveMQ Broker
# Service dependencies. Add dependencies as needed starting from 1
wrapper.ntservice.dependency.1=
# Mode in which the service is installed. AUTO_START or DEMAND_START
wrapper.ntservice.starttype=AUTO_START
# Allow the service to interact with the desktop.
wrapper.ntservice.interactive=false
Executing chef-client I get the following:
Recipe: activemq::default
* remote_file[/var/chef/cache/apache-activemq-5.11.0-bin.tar.gz] action create (up to date)
* directory[apache-activemq-5.11.0] action create (up to date)
* execute[untar-activemq] action run
- execute tar xvzf apache-activemq-5.11.0-bin.tar.gz -C /usr/local/apache-activemq-5.11.0 --strip 1
* file[/usr/local/apache-activemq-5.11.0/bin/activemq] action create (up to date)
* link[/etc/init.d/activemq] action create (up to date)
* template[jetty-realm.properties] action create
- update content in file /usr/local/apache-activemq-5.11.0/conf/jetty-realm.properties from 827a97 to 96c9a9
--- /usr/local/apache-activemq-5.11.0/conf/jetty-realm.properties 2015-01-30 13:13:51.000000000 +0000
+++ /tmp/chef-rendered-template20150211-9315-ul0l7k 2015-02-11 07:03:37.711054777 +0000
@@ -5,9 +5,9 @@
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
-##
+##
## http://www.apache.org/licenses/LICENSE-2.0
-##
+##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -17,6 +17,6 @@
# Defines users that can access the web (console, demo, etc.)
# username: *****
-admin: *****, *****
-user: ****, *****
+admin: *****, ****
+user: *****, ****
- change mode from '0644' to '0755'
- restore selinux security context
* template[activemq.xml] action create
- update content in file /usr/local/apache-activemq-5.11.0/conf/activemq.xml from ca8528 to 219698
--- /usr/local/apache-activemq-5.11.0/conf/activemq.xml 2015-01-30 13:13:51.000000000 +0000
+++ /tmp/chef-rendered-template20150211-9315-8syvv7 2015-02-11 07:03:37.736055052 +0000
@@ -78,11 +78,20 @@
http://activemq.apache.org/persistence.html
-->
+ <!--
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
- </persistenceAdapter>
+ </persistenceAdapter> -->
+ <persistenceAdapter>
+ <replicatedLevelDB directory="activemq-data"
+ replicas="2"
+ bind="tcp://x.x.x.x:x"
+ zkAddress=xxxxxxx
+ zkPath=xxxxxx
+ hostname=xxxxxx />
+ </persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
@@ -133,5 +142,4 @@
<import resource="jetty.xml"/>
</beans>
-<!-- END SNIPPET: example -->
- change mode from '0644' to '0755'
- restore selinux security context
* service[activemq] action enable (up to date)
* service[activemq] action start (up to date)
* link[/usr/local/apache-activemq-5.11.0/bin/linux] action create (up to date)
* link[/var/run/activemq.pid] action create (up to date)
* template[/usr/local/apache-activemq-5.11.0/bin/linux/wrapper.conf] action create (up to date)
Recipe: lgtomcat::default
* service[tomcat7] action restart
- restart service service[tomcat7]
Recipe: activemq::default
* service[activemq] action restart
- restart service service[activemq]
But checking for an ActiveMQ process shows that none was started:
ps -eaf| grep activemq
root 9867 9301 0 07:07 pts/1 00:00:00 grep --color=auto activemq
By following https://activemq.apache.org/rest.html, I'm able to push messages via the REST API (e.g. curl -u admin:admin -d "body=message" http://localhost:8161/api/message/TEST?type=queue works, and I can see the message in the admin console). However, I'd like to use HTTPS. I found https://activemq.apache.org/http-and-https-transports-reference.html and http://troyjsd.blogspot.co.uk/2013/06/activemq-https.html but couldn't make it work. Based on these two outdated/incomplete links:
I added to conf/activemq.xml
Imported self-signed certificate into JDK keystore (per http://troyjsd.blogspot.co.uk/2013/06/activemq-https.html)
Copied xstream and httpclient jars from lib/optional to lib/ (both under ActiveMQ directory, obviously)
So,
How can I set up ActiveMQ so that it can be used with an HTTPS REST endpoint?
Assuming I did step 1 correctly, how can I test it (with a curl command similar to the one above)?
I use ActiveMQ 5.9.1 and Mac OS 10.9.4
Uncomment the following section of conf/jetty.xml.
<!--
Enable this connector if you wish to use https with web console
-->
<!--
<bean id="SecureConnector" class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
<property name="port" value="8162" />
<property name="keystore" value="file:${activemq.conf}/broker.ks" />
<property name="password" value="password" />
</bean>
-->
Jetty powers not only the WebConsole, but all HTTP stuff in ActiveMQ.
It should work out of the box for testing, but you probably want to roll your own keystore/certificate for real use.
You can use curl as before, on port 8162 with HTTPS, provided you supply the "insecure" flag -k.
Otherwise, you need to create a trust store in PEM format and supply it; see this SO answer for details. Curl accepts the argument --cacert <filename.pem> with your certificate or the issuing CA in it.
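A sketch of that flow, where a throwaway self-signed certificate stands in for the broker's real one (in practice you would export the certificate from conf/broker.ks instead), and the host, port, and credentials match the defaults used above:

```shell
# Create a throwaway self-signed certificate as a stand-in for the broker's cert.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout broker.key -out broker.pem -days 1 -subj "/CN=localhost"

# With the certificate (or its issuing CA) in PEM form, curl can verify the connection:
#   curl --cacert broker.pem -u admin:admin -d "body=message" \
#        "https://localhost:8162/api/message/TEST?type=queue"

# Or, for quick testing with a self-signed certificate, skip verification:
#   curl -k -u admin:admin -d "body=message" \
#        "https://localhost:8162/api/message/TEST?type=queue"
```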