client nxlog file is not getting created on nxlog server - nxlog

I have a Windows nxlog client and an Ubuntu nxlog server.
However, my Windows nxlog client's logs are not getting populated on the nxlog server.
Checking the wire via tshark, I do see that my client is sending logs to the nxlog server, but they are not being written to a file.
The relevant part of the nxlog server configuration file is shown below.
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally under
## /usr/share/doc/nxlog-ce/ and is also available online at
## http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
########################################
# Global directives #
########################################
User nxlog
Group nxlog
LogFile /var/log/nxlog/nxlog.log
LogLevel INFO
########################################
# Modules #
########################################
<Extension _syslog>
Module xm_syslog
</Extension>
<Extension _exec>
Module xm_exec
</Extension>
<Extension fileop>
Module xm_fileop
</Extension>
<Extension multiline>
Module xm_multiline
HeaderLine /^/
EndLine ""
</Extension>
<Input in1>
Module im_udp
Host 0.0.0.0
Port 514
Exec parse_syslog();
</Input>
<Input in2>
Module im_tcp
Host 0.0.0.0
Port 514
</Input>
<Processor norepeat>
Module pm_norepeat
CheckFields Hostname, SourceName, Message
</Processor>
<Output fileout1>
Module om_file
#Exec if $Message =~ /Deny/ drop();
Exec if $Message =~ /Teardown/ drop();
Exec if $Message =~ /(TRACE|DEBUG)/ drop();
Exec if $Message =~ /::ffff:xxxxx/ drop();
Exec if $Message =~ /ControlPortal.cgi/ drop();
File "/var/log/nxlog/" + $Hostname + ".log"
Exec $filename = "/var/log/nxlog/" + $Hostname + ".log";
Exec if (file_size($filename) > 1M) file_cycle($filename,2);
</Output>
When I check the messages on the server with tshark, I do get input from the source, i.e. 172.16.130.67:
Capturing on 'eth1'
1 0.000000 172.16.130.67 -> 172.16.128.6 Syslog 319 USER.NOTICE: 1 2016-12-23T11:14:16.179155+00:00xxx-PROD1 ABC - - [NXLOG#14506 EventReceivedTime="2016-12-23 11:14:16" SourceModuleName="ems" SourceModuleType="im_file"] 2016-12-23 11:14:15,719 DEBUG http-apr-80-exec-4 com.sfnt.ems.web.DOSPreventionFilter - Reached DOSPreventionFilter
1 2 0.000271 172.16.130.67 -> 172.16.128.6 Syslog 324 USER.NOTICE: 1 2016-12-23T11:14:16.179155+00:00 xxx-PROD1 abc - - [NXLOG#14506 EventReceivedTime="2016-12-23 11:14:16" SourceModuleName="ems" SourceModuleType="im_file"] 2016-12-23 11:14:15,720 INFO http-apr-80-exec-4 com.sfnt.ems.web.AuthenticationFilter - BY PASSING URI : /xxx/login.html
However, the log file is not getting created in /var/log/nxlog/.
I have restarted the rsyslog and nxlog services.
What else could be done here to troubleshoot?
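For reference, these are the checks I can run on the server side (paths and the nxlog user taken from the config above):
# nxlog logs its own errors here, so permission or config problems should show up first
tail -n 50 /var/log/nxlog/nxlog.log
# the output file is written as the 'nxlog' user (User/Group directives above),
# so that user must be able to create files in the target directory
ls -ld /var/log/nxlog
sudo -u nxlog touch /var/log/nxlog/test-write
# verify the configuration syntax without restarting the daemon
sudo nxlog -v
# also confirm that the full config has a <Route> connecting in1/in2 (and norepeat) to fileout1,
# since only part of the file is shown above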

Related

Configure LDAP with PGAdmin

I'm trying to configure LDAP with pgAdmin.
I have pgAdmin running locally on a cluster, I'm using Apache Directory Studio as a local LDAP server with the default connection, and I've created one user.
The logs from Apache Directory Studio are:
#!SEARCH REQUEST (462) OK
#!CONNECTION ldap://0.0.0.0:10389
#!DATE 2021-03-12T09:33:38.565
# LDAP URL : ldap://0.0.0.0:10389/uid=admin,ou=system?*??(objectClass=*)
# command line : ldapsearch -H ldap://0.0.0.0:10389 -x -D "uid=admin,ou=system" -W -b "uid=admin,ou=system" -s base -a always "(objectClass=*)" "*"
# baseObject : uid=admin,ou=system
# scope : baseObject (0)
# derefAliases : derefAlways (3)
# sizeLimit : 0
# timeLimit : 0
# typesOnly : False
# filter : (objectClass=*)
# attributes : *
#!SEARCH RESULT DONE (462) OK
#!CONNECTION ldap://0.0.0.0:10389
#!DATE 2021-03-12T09:33:38.566
# numEntries : 1
In my pgAdmin config_local.py file I have the following:
AUTHENTICATION_SOURCES = ['ldap','internal']
LDAP_SERVER_URI = 'ldap://<my-ip-address>:10389'
LDAP_USERNAME_ATTRIBUTE = 'uid'
LDAP_BASE_DN = 'uid=admin,ou=system'
LDAP_SEARCH_BASE_DN = 'uid=admin,ou=system'
When I try to log into pgAdmin with admin or the created user I get the following error:
ldap3.core.exceptions.LDAPBindError: automatic bind not successful - invalidCredentials
I think I'm getting the base DN wrong, or Apache Directory Studio isn't configured properly. Grateful for any help.
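If it helps, this is how I have been testing binds from the command line (assumption on my part: pgAdmin builds the bind DN as <LDAP_USERNAME_ATTRIBUTE>=<login>,<LDAP_BASE_DN>, so logging in as admin would bind as uid=admin,uid=admin,ou=system):
# the bind pgAdmin would presumably attempt for the login name "admin"
ldapsearch -H ldap://<my-ip-address>:10389 -x -D "uid=admin,uid=admin,ou=system" -W -b "uid=admin,ou=system" -s base "(objectClass=*)"
# the bind that is known to work, taken from the Directory Studio log above
ldapsearch -H ldap://<my-ip-address>:10389 -x -D "uid=admin,ou=system" -W -b "uid=admin,ou=system" -s base "(objectClass=*)"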

sysmon to nxlog logs nothing to file nor tcp

I've been trying to set up Windows host logging with Sysmon, and this is successful:
logging occurs in the Windows event log Microsoft-Windows-Sysmon/Operational.
Step two is to get nxlog to read it and send it to a remote syslog server, but nothing happens. For troubleshooting I am also trying to log to a local file; nothing shows up there either.
Here is my nxlog config file:
#
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
LogLevel DEBUG
<Extension _syslog>
Module xm_syslog
</Extension>
<Input eventlog>
Module im_msvistalog
<QueryXML>
<QueryList>
<Query Id="0">
<Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
</Query>
</QueryList>
</QueryXML>
</Input>
<Output syslog>
Module om_tcp
Host 192.168.0.61
Port 514
Exec to_syslog_bsd();
</Output>
<Output file>
Module om_file
File 'C:\test\sysmon.json'
Exec to_json();
</Output>
<Route 1>
Path eventlog => syslog
</Route>
<Route 2>
Path eventlog => file
</Route>
All the log says is:
2017-10-31 21:59:21 INFO nxlog-ce-2.9.1716 started
2017-10-31 21:59:21 INFO connecting to 192.168.0.61:514
But there is no log file and no logging over TCP.
I guess your syslog server does not accept the TCP connection, which blocks the whole pipeline due to flow control, including the other route that writes to the local file.
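If you want the file route to keep working even while the syslog destination is unreachable, one option (a sketch, assuming the FlowControl module directive available in NXLog CE) is to disable flow control on the input, so a blocked output drops events instead of stalling both routes:
<Input eventlog>
    Module        im_msvistalog
    # with flow control off, a blocked om_tcp output no longer suspends this input
    FlowControl   FALSE
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
The trade-off is that events for the TCP output can be lost while 192.168.0.61:514 is not accepting connections, so the first thing to verify is still that the receiver actually listens on TCP port 514.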

How to write a .ksh file for production

I have written the .ksh file below to execute the main class from an EAR on a WebSphere server, using PuTTY.
#!/bin/ksh
############################################################################
#
#
#
# Written By: xyz
#
# Sep 23, 2017
#
#
#
#
#
############################################################################
print " Process started"
java -cp \
"/appl/was/profiles/node/installedApps/cab/abcEar.ear/abcWeb.war/WEB-INF/lib/*" \
com.batch.FirtTool
print "Process End"
I am able to execute the main class, but the database connection is not happening properly. I am using Hibernate's JNDI lookup for the connection.
If I remove JNDI and use the data source settings directly, then my program works properly.
But I need JNDI in my application.
Can you please help with this?
The Hibernate configuration file looks like this:
<!-- Database connection settings -->
<property
name="connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
<property
name="hibernate.connection.datasource">java:comp/env/jdbc/xyz</property>
With JNDI it's not working, but if we use the direct properties below, then it works:
<property name="connection.url">xyz</property>
<property name="connection.username">xyz</property>
<property name="connection.password">xyz</property>

I'm having trouble authenticating over AD to Windows machines from my Ansible host. 'Server not found in Kerberos Database' on Ubuntu 16.10

I'm having trouble authenticating over AD to Windows machines from my Ansible host. I have a valid Kerberos ticket:
klist
Credentials cache: FILE:/tmp/krb5cc_1000
Principal: ansible@SOMEDOMAIN.LOCAL
Issued Expires Principal
Mar 10 09:15:27 2017 Mar 10 19:15:24 2017 krbtgt/SOMEDOMAIN.LOCAL@SOMEDOMAIN.LOCAL
My Kerberos config looks fine to me:
cat /etc/krb5.conf
[libdefaults]
default_realm = SOMEDOMAIN.LOCAL
# dns_lookup_realm = true
# dns_lookup_kdc = true
# ticket_lifetime = 24h
# renew_lifetime = 7d
# forwardable = true
# The following krb5.conf variables are only for MIT Kerberos.
# kdc_timesync = 1
# forwardable = true
# proxiable = true
# The following encryption type specification will be used by MIT Kerberos
# if uncommented. In general, the defaults in the MIT Kerberos code are
# correct and overriding these specifications only serves to disable new
# encryption types as they are added, creating interoperability problems.
#
# The only time when you might need to uncomment these lines and change
# the enctypes is if you have local software that will break on ticket
# caches containing ticket encryption types it doesn't know about (such as
# old versions of Sun Java).
# default_tgs_enctypes = des3-hmac-sha1
# default_tkt_enctypes = des3-hmac-sha1
# permitted_enctypes = des3-hmac-sha1
# The following libdefaults parameters are only for Heimdal Kerberos.
# v4_instance_resolve = false
# v4_name_convert = {
# host = {
# rcmd = host
# ftp = ftp
# }
# plain = {
# something = something-else
# }
# }
# fcc-mit-ticketflags = true
[realms]
SOMEDOMAIN.LOCAL = {
kdc = prosperitydc1.somedomain.local
kdc = prosperitydc2.somedomain.local
default_domain = somedomain.local
admin_server = somedomain.local
}
[domain_realm]
.somedomain.local = SOMEDOMAIN.LOCAL
somedomain.local = SOMEDOMAIN.LOCAL
When running a test command, ansible windows -m win_ping -vvvvv, I get
'Server not found in Kerberos database'.
ansible windows -m win_ping -vvvvv
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/windows/win_ping.ps1
<kerberostest.somedomain.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@SOMEDOMAIN.LOCAL on PORT 5986 TO kerberostest.somedomain.local
<kerberostest.somedomain.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.somedomain.local:5986/wsman
<kerberostest.somedomain.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 154, in _winrm_connect
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 132, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/protocol.py", line 207, in send_message
return self.transport.send_message(message)
File "/home/prosperity/.local/lib/python2.7/site-packages/winrm/transport.py", line 181, in send_message
prepared_request = self.session.prepare_request(request)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/sessions.py", line 407, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 306, in prepare
self.prepare_auth(auth, url)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests/models.py", line 543, in prepare_auth
r = auth(self)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 308, in __call__
auth_header = self.generate_request_header(None, host, is_preemptive=True)
File "/home/prosperity/.local/lib/python2.7/site-packages/requests_kerberos/kerberos_.py", line 148, in generate_request_header
raise KerberosExchangeError("%s failed: %s" % (kerb_stage, str(error.args)))
KerberosExchangeError: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
kerberostest.somedomain.local | UNREACHABLE! => {
"changed": false,
"msg": "kerberos: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))",
"unreachable": true
}
I am able to ssh to the target machine:
ssh -v1 kerberostest.somedomain.local -p 5986
OpenSSH_7.3p1 Ubuntu-1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to kerberostest.somedomain.local [10.10.20.84] port 5986.
debug1: Connection established.
I can also ping all hosts by their hostnames. I'm at a loss :(
Here is the Ansible hosts file:
sudo cat /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or ip addresses
# - A hostname/ip can be a member of multiple groups
# Ex 1: Ungrouped hosts, specify before any group headers.
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
## db-[99:101]-node.example.com
[monitoring-servers]
#nagios
10.10.20.75 ansible_connection=ssh ansible_user=nagios
[windows]
#fileserver.somedomain.local#this machine isnt joined to the domain yet.
kerberostest.SOMEDOMAIN.LOCAL
[windows:vars]
#the following works for windows local account authentication
#ansible_ssh_user = prosperity
#ansible_ssh_pass = *********
#ansible_connection = winrm
#ansible_ssh_port = 5986
#ansible_winrm_server_cert_validation = ignore
#vars needed to authenticate on the windows domain using kerberos
ansible_user = ansible@SOMEDOMAIN.LOCAL
ansible_connection = winrm
ansible_winrm_scheme = https
ansible_winrm_transport = kerberos
ansible_winrm_server_cert_validation = ignore
I also tried joining the domain with realmd, which succeeded, but running the ansible command produced the same result.
This looks like a case of a missing SPN.
Here's the relevant error snippet:
<kerberostest.prosperityerp.local> ESTABLISH WINRM CONNECTION FOR USER: ansible@PROSPERITYERP.LOCAL on PORT 5986 TO kerberostest.prosperityerp.local
<kerberostest.prosperityerp.local> WINRM CONNECT: transport=kerberos endpoint=https://kerberostest.prosperityerp.local:5986/wsman
<kerberostest.prosperityerp.local> WINRM CONNECTION ERROR: authGSSClientStep() failed: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server not found in Kerberos database', -1765328377))
And that is based on something I noticed in your Ansible configuration file:
[windows]
#fileserver.prosperityerp.local#this machine isnt joined to the domain yet.
kerberostest.PROSPERITYERP.LOCAL
I think the "this machine isnt joined to the domain yet" line in that file is a good indicator that the SPN HTTP/kerberostest.prosperityerp.local does not exist in Active Directory, which would be causing the "server not found" message. You can SSH to kerberostest.prosperityerp.local, probably because it exists in DNS or in a hosts file on the client machine, but unless and until the SPN HTTP/kerberostest.prosperityerp.local is created in Active Directory, you will continue to get that error message. Adding that SPN properly at this point would be a whole other topic of discussion.
You could use a command like this to test if you have that SPN defined:
setspn -Q HTTP/kerberostest.prosperityerp.local
SPNs exist to tell a Kerberos client where to find the service instance for that service on the network.
Also run:
nslookup kerberostest.prosperityerp.local
on at least two client machines to make sure the FQDN of the IP host where the Kerberized service is running exists in DNS. DNS is a requirement for Kerberos to run properly in a network.
Finally, you could use Wireshark on the client for further analysis; use the filter kerberos to show only Kerberos traffic.
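For example, from the Ansible host itself something like this would capture just the ticket exchanges (tshark assumed to be installed; replace eth0 with your interface):
sudo tshark -i eth0 -f "port 88" -Y kerberos
A KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN response from the KDC is the on-the-wire form of "Server not found in Kerberos database", and the request just before it shows which service principal was actually asked for.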
In my case, the Server not found in Kerberos database error was a result of the target Windows machine's DNS name not being mapped to the right realm, as hinted at in this line from a Microsoft TechNet article:
The error “Server not found in Kerberos database” is common and can be misleading because it often appears when the service principal is not missing. The error can be caused by domain/realm mapping problems or it can be the result of a DNS problem where the service principal name is not being built correctly. Server logs and network traces can be used to determine what service principal is actually being requested.
I had a playbook whoami.yaml:
- hosts: windows-machine.mydomain.com
  tasks:
    - name: Run 'whoami' command
      win_command: whoami
Hosts file:
[windows]
windows-machine.mydomain.com
[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_user=user@FOO.BAR.MYDOMAIN.COM
ansible_password=<password>
ansible_port=5985
Since the DNS name was windows-machine.mydomain.com but the AD realm was FOO.BAR.MYDOMAIN.COM, I had to fix the mapping in my /etc/krb5.conf file on my Ansible host:
INCORRECT
This won't work for our case since this mapping rule won't apply to windows-machine.mydomain.com:
[domain_realm]
foo.bar.mydomain.com = FOO.BAR.MYDOMAIN.COM
CORRECT
This will correctly map windows-machine.mydomain.com to realm FOO.BAR.MYDOMAIN.COM:
[domain_realm]
.mydomain.com = FOO.BAR.MYDOMAIN.COM
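One way to check the mapping without running Ansible at all (assuming the MIT Kerberos client tools, e.g. the krb5-user package, and a ticket obtained with kinit):
kinit user@FOO.BAR.MYDOMAIN.COM
# kvno asks the KDC for a service ticket for the mapped principal; if the
# domain_realm mapping and the HTTP SPN are right, it prints a key version
# number instead of "Server not found in Kerberos database"
kvno HTTP/windows-machine.mydomain.com
klist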

ActiveMQ cookbook triggers the activemq service, but the service is not started

The ActiveMQ service is triggered via the cookbook, but it does not run:
activemq/attributes/default.rb
default['activemq']['version'] = '5.11.0'
src_filename="apache-activemq-#{node['activemq']['version']}-bin.tar.gz"
src_filepath = "#{Chef::Config['file_cache_path']}/#{src_filename}"
default['activemq']['src_filepath'] = "#{src_filepath}"
default['activemq']['tar_filepath'] = "http://xxx-xx-xx-xxx-xx:8091/3rdparty/activemq/#{src_filename}"
default['activemq']['dir'] = "/usr/local/apache-activemq-#{node['activemq']['version']}"
default['activemq']['wrapper']['max_memory'] = '1024'
default['activemq']['wrapper']['useDedicatedTaskRunner'] = 'true'
default['activemq']['zooKeeper']['address']="xxx.xx.xx.xx:2181"
default['activemq']['zooKeeper']['hostname']="xxx.xx.xx.xx"
activemq/recipes/default.rb
include_recipe "lgjava"
activemq_home= node['activemq']['dir']
remote_file "#{node['activemq']['src_filepath']}" do
mode 0755
source node['activemq']['tar_filepath']
action :create
notifies :create, "directory[apache-activemq-#{node['activemq']['version']}]", :immediately
notifies :run, "execute[untar-activemq]", :immediately
end
directory "apache-activemq-#{node['activemq']['version']}" do
path "#{activemq_home}"
mode 0755
recursive true
end
execute "untar-activemq" do
cwd Chef::Config[:file_cache_path]
command <<-EOF
tar xvzf apache-activemq-#{node['activemq']['version']}-bin.tar.gz -C #{node['activemq']['dir'] } --strip 1
EOF
action :run
end
file "#{activemq_home}/bin/activemq" do
owner 'root'
group 'root'
mode '0755'
end
arch = node['kernel']['machine'] == 'x86_64' ? 'x86-64' : 'x86-32'
link '/etc/init.d/activemq' do
to "#{activemq_home}/bin/linux-#{arch}/activemq"
end
template "jetty-realm.properties" do
source "jetty-realm.properties.erb"
mode "0755"
path "#{activemq_home}/conf/jetty-realm.properties"
action :create
notifies :restart, 'service[activemq]'
end
template "activemq.xml" do
source "activemq.xml.erb"
mode "0755"
path "#{activemq_home}/conf/activemq.xml"
action :create
notifies :restart, 'service[activemq]'
end
service 'activemq' do
supports :restart => true, :status => true
action [:enable, :start]
end
# symlink so the default wrapper.conf can find the native wrapper library
link "#{activemq_home}/bin/linux" do
to "#{activemq_home}/bin/linux-#{arch}"
end
# symlink the wrapper's pidfile location into /var/run
link '/var/run/activemq.pid' do
to "#{activemq_home}/bin/linux/ActiveMQ.pid"
not_if 'test -f /var/run/activemq.pid'
end
template "#{activemq_home}/bin/linux/wrapper.conf" do
source 'wrapper.conf.erb'
mode '0644'
notifies :restart, 'service[activemq]'
end
activemq/templates/default/activemq.xml.erb
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<!--
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter> -->
<persistenceAdapter>
<replicatedLevelDB directory="activemq-data"
replicas="2"
bind="tcp://0.0.0.0:0"
zkAddress=<%=node['activemq']['zooKeeper']['address']%>
zkPath="/activemq/leveldb-stores"
hostname=<%=node['activemq']['zooKeeper']['hostname']%> />
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
activemq/templates/default/wrapper.conf.erb
#********************************************************************
# Wrapper Properties
#********************************************************************
#wrapper.debug=TRUE
set.default.ACTIVEMQ_HOME=../..
set.default.ACTIVEMQ_BASE=../..
set.default.ACTIVEMQ_CONF=%ACTIVEMQ_BASE%/conf
set.default.ACTIVEMQ_DATA=%ACTIVEMQ_BASE%/data
wrapper.working.dir=.
# Java Application
wrapper.java.command=java
# Java Main class. This class must implement the WrapperListener interface
# or guarantee that the WrapperManager class is initialized. Helper
# classes are provided to do this for you. See the Integration section
# of the documentation for details.
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=%ACTIVEMQ_HOME%/bin/wrapper.jar
wrapper.java.classpath.2=%ACTIVEMQ_HOME%/bin/activemq.jar
# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=%ACTIVEMQ_HOME%/bin/linux-x86-64/
# Java Additional Parameters
# note that n is the parameter number starting from 1.
wrapper.java.additional.1=-Dactivemq.home=%ACTIVEMQ_HOME%
wrapper.java.additional.2=-Dactivemq.base=%ACTIVEMQ_BASE%
wrapper.java.additional.3=-Djavax.net.ssl.keyStorePassword=password
wrapper.java.additional.4=-Djavax.net.ssl.trustStorePassword=password
wrapper.java.additional.5=-Djavax.net.ssl.keyStore=%ACTIVEMQ_CONF%/broker.ks
wrapper.java.additional.6=-Djavax.net.ssl.trustStore=%ACTIVEMQ_CONF%/broker.ts
wrapper.java.additional.7=-Dcom.sun.management.jmxremote
wrapper.java.additional.8=-Dorg.apache.activemq.UseDedicatedTaskRunner=<%= node['activemq']['wrapper']['useDedicatedTaskRunner'] %>
wrapper.java.additional.9=-Djava.util.logging.config.file=logging.properties
wrapper.java.additional.10=-Dactivemq.conf=%ACTIVEMQ_CONF%
wrapper.java.additional.11=-Dactivemq.data=%ACTIVEMQ_DATA%
wrapper.java.additional.12=-Djava.security.auth.login.config=%ACTIVEMQ_CONF%/login.config
# Uncomment to enable jmx
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.port=1616
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.authenticate=false
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.ssl=false
# Uncomment to enable YourKit profiling
#wrapper.java.additional.n=-Xrunyjpagent
# Uncomment to enable remote debugging
#wrapper.java.additional.n=-Xdebug -Xnoagent -Djava.compiler=NONE
#wrapper.java.additional.n=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=3
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=<%= node['activemq']['wrapper']['max_memory'] %>
# Application parameters. Add parameters as needed starting from 1
wrapper.app.parameter.1=org.apache.activemq.console.Main
wrapper.app.parameter.2=start
#********************************************************************
# Wrapper Logging Properties
#********************************************************************
# Format of output for the console. (See docs for formats)
wrapper.console.format=PM
# Log Level for console output. (See docs for log levels)
wrapper.console.loglevel=INFO
# Log file to use for wrapper output logging.
wrapper.logfile=%ACTIVEMQ_DATA%/wrapper.log
# Format of output for the log file. (See docs for formats)
wrapper.logfile.format=LPTM
# Log Level for log file output. (See docs for log levels)
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
wrapper.logfile.maxsize=0
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
wrapper.logfile.maxfiles=0
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
#********************************************************************
# Wrapper Windows Properties
#********************************************************************
# Title to use when running as a console
wrapper.console.title=ActiveMQ
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.ntservice.name=ActiveMQ
# Display name of the service
wrapper.ntservice.displayname=ActiveMQ
# Description of the service
wrapper.ntservice.description=ActiveMQ Broker
# Service dependencies. Add dependencies as needed starting from 1
wrapper.ntservice.dependency.1=
# Mode in which the service is installed. AUTO_START or DEMAND_START
wrapper.ntservice.starttype=AUTO_START
# Allow the service to interact with the desktop.
wrapper.ntservice.interactive=false
Executing chef-client I get the following:
Recipe: activemq::default
* remote_file[/var/chef/cache/apache-activemq-5.11.0-bin.tar.gz] action create (up to date)
* directory[apache-activemq-5.11.0] action create (up to date)
* execute[untar-activemq] action run
- execute tar xvzf apache-activemq-5.11.0-bin.tar.gz -C /usr/local/apache-activemq-5.11.0 --strip 1
* file[/usr/local/apache-activemq-5.11.0/bin/activemq] action create (up to date)
* link[/etc/init.d/activemq] action create (up to date)
* template[jetty-realm.properties] action create
- update content in file /usr/local/apache-activemq-5.11.0/conf/jetty-realm.properties from 827a97 to 96c9a9
--- /usr/local/apache-activemq-5.11.0/conf/jetty-realm.properties 2015-01-30 13:13:51.000000000 +0000
+++ /tmp/chef-rendered-template20150211-9315-ul0l7k 2015-02-11 07:03:37.711054777 +0000
@@ -5,9 +5,9 @@
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
-##
+##
## http://www.apache.org/licenses/LICENSE-2.0
-##
+##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -17,6 +17,6 @@
# Defines users that can access the web (console, demo, etc.)
# username: *****
-admin: *****, *****
-user: ****, *****
+admin: *****, ****
+user: *****, ****
- change mode from '0644' to '0755'
- restore selinux security context
* template[activemq.xml] action create
- update content in file /usr/local/apache-activemq-5.11.0/conf/activemq.xml from ca8528 to 219698
--- /usr/local/apache-activemq-5.11.0/conf/activemq.xml 2015-01-30 13:13:51.000000000 +0000
+++ /tmp/chef-rendered-template20150211-9315-8syvv7 2015-02-11 07:03:37.736055052 +0000
@@ -78,11 +78,20 @@
http://activemq.apache.org/persistence.html
-->
+ <!--
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
- </persistenceAdapter>
+ </persistenceAdapter> -->
+ <persistenceAdapter>
+ <replicatedLevelDB directory="activemq-data"
+ replicas="2"
+ bind="tcp://x.x.x.x:x"
+ zkAddress=xxxxxxx
+ zkPath=xxxxxx
+ hostname=xxxxxx />
+ </persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
@@ -133,5 +142,4 @@
<import resource="jetty.xml"/>
</beans>
-<!-- END SNIPPET: example -->
- change mode from '0644' to '0755'
- restore selinux security context
* service[activemq] action enable (up to date)
* service[activemq] action start (up to date)
* link[/usr/local/apache-activemq-5.11.0/bin/linux] action create (up to date)
* link[/var/run/activemq.pid] action create (up to date)
* template[/usr/local/apache-activemq-5.11.0/bin/linux/wrapper.conf] action create (up to date)
Recipe: lgtomcat::default
* service[tomcat7] action restart
- restart service service[tomcat7]
Recipe: activemq::default
* service[activemq] action restart
- restart service service[activemq]
But when checking for an activemq process, there is none running:
ps -eaf| grep activemq
root 9867 9301 0 07:07 pts/1 00:00:00 grep --color=auto activemq
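The next things I am checking (paths follow the attributes and wrapper.conf above, where ACTIVEMQ_DATA resolves to the data directory under the install; corrections welcome):
AMQ_HOME=/usr/local/apache-activemq-5.11.0
# the Tanuki wrapper writes its own log, which usually contains the reason the JVM exited right after "start"
tail -n 100 "$AMQ_HOME/data/wrapper.log"
# the broker log, if the JVM got far enough to parse activemq.xml
tail -n 100 "$AMQ_HOME/data/activemq.log"
# running the init script in the foreground shows wrapper/broker errors directly
sudo /etc/init.d/activemq console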