Apache JKManager Activation Status is not updating

I changed the Activation Status of the JKManager node1 worker from Active to Deactivated, but after I access the application URL and log in, the status in JKManager changes back to Active. I couldn't find any errors in the Apache logs. Is there any other configuration required?
My setup uses Apache/2.2.15 (Win32) with mod_jk/1.2.26 and JBoss Application Server 6. Below is the configured worker.properties file:
worker.list=workerlist,jkstatus
# Set properties for node1
worker.node1.type=ajp13
worker.node1.host=xxxx
worker.node1.port=xx
worker.node1.lbfactor=4
# Set properties for node2
worker.node2.type=ajp13
worker.node2.host=xxxx
worker.node2.port=xx
worker.node2.lbfactor=4
# Set properties for workerlist(lb)
worker.workerlist.type=lb
worker.workerlist.balance_workers=node1,node2
worker.workerlist.sticky_session=1
worker.jkstatus.type=status
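For the status worker to be reachable at all, mod_jk also needs mount points in the Apache configuration. A minimal sketch (the URL paths and the /jkmanager location are illustrative assumptions, not taken from the original setup):

```apache
# httpd.conf (or the relevant vhost) -- illustrative paths
JkWorkersFile conf/worker.properties
JkMount /myapp/*    workerlist
JkMount /jkmanager/* jkstatus
```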

The issue was that the JBoss Application Server (server.xml) and the Apache server (vhost) were configured with the same port. We changed the JBoss port in server.xml, which resolved the issue. Thanks.
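To illustrate the fix: each server needs its own listen port. A sketch with illustrative port values (the actual colliding port from the original setup is not given), showing the JBoss 6 (Tomcat-based) connectors kept clear of whatever port Apache listens on:

```xml
<!-- JBoss 6 server.xml: HTTP and AJP connectors on ports Apache does not use -->
<Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"/>
<Connector protocol="AJP/1.3"  port="8009" address="${jboss.bind.address}"/>
```

Apache then keeps its own Listen port (e.g. 80) in the vhost and forwards to the AJP port via mod_jk.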


Connect Apache Ranger audit logs to Solr in SolrCloud mode with SSL

I have three nodes with Solr and ZooKeeper, with TLS/SSL enabled: ZooKeeper listens only on its securePort and Solr serves HTTPS.
Now I want to connect Solr to Apache Ranger for audit logs, where I am setting:
ranger.audit.solr.urls = https://HOST1:8983/solr/ranger_audits
and
ranger_admin_solr_zookeepers = HOST1:2281,HOST2:2281,HOST3:2281
Apache Ranger itself is not in SSL mode and listens only on HTTP.
For Solr, I have successfully created the ranger_audits configset and a collection with the same name.
The ZooKeeper election is also successful: I have 1 leader and 2 followers.
So everything works as expected except the Apache Ranger audit communication.
The version of the Apache Ranger is 2.0.
ZooKeeper version - 3.6.3
Solr version - 8.11.1
With the current settings I get the following exception when opening the audit tab in the Ranger UI:
2022-03-22 06:54:08,189 [http-bio-6080-exec-2] INFO org.apache.ranger.common.RESTErrorUtil (RESTErrorUtil.java:326) - Operation error. response=VXResponse={org.apache.ranger.view.VXResponse#7ef95c52statusCode={1} msgDesc={Error running solr query, please check solr configs. java.util.concurrent.TimeoutException: Could not connect to ZooKeeper HOST1:2281,HOST2:2281,HOST3:2281 within 15000 ms} messageList={[VXMessage={org.apache.ranger.view.VXMessage#3bd495a3name={ERROR_SYSTEM} rbKey={xa.error.system} message={System Error. Please try later.} objectId={null} fieldName={null} }]} }
javax.ws.rs.WebApplicationException
UPDATE:
The solution is to provide a jaas.conf file and two Java system properties, which fixed the problem:
-Dzookeeper.client.secure=true
-Djava.security.auth.login.config=/etc/ranger/admin/conf/jaas.conf
A sample jaas.conf:
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="admin-pass";
};
Please note that this is not a complete solution: the connection from Ranger to the ZooKeepers over TLS is still problematic.
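For reference, a sketch of where those system properties can go. The exact hook differs between Ranger installs; this assumes a start script that picks up JAVA_OPTS (the file name below is an assumption, not from the original post):

```shell
# Assumed hook: a small env script sourced before the Ranger admin start script
# runs (on many installs ranger-admin-services.sh picks up JAVA_OPTS from env
# scripts in the admin conf directory -- adjust to your packaging).
export JAVA_OPTS="$JAVA_OPTS -Dzookeeper.client.secure=true -Djava.security.auth.login.config=/etc/ranger/admin/conf/jaas.conf"
```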

Unable to get the cluster and node details in Web Console agent for Apache Ignite

I am trying to get the node and cluster details in the Apache Ignite Web Console. Below are the steps I have performed:
1. Downloaded the Apache Ignite Web Console.
2. My application runs the Ignite node as a cache layer (Ignite node started OK (id=ac87a66c)).
3. Ignite is running with the discovery URL 192.168.0.102:47500.
4. I ran the batch file web-console-agent.bat, but it is not able to connect to the cluster, and hence neither can the web console:
[2020-05-26T18:05:33,245][INFO ][main][AgentLauncher] Starting Apache GridGain Web Console Agent...
[2020-05-26T18:05:33,415][INFO ][main][AgentLauncher]
[2020-05-26T18:05:33,416][INFO ][main][AgentLauncher] Web Console Agent configuration :
[2020-05-26T18:05:33,535][INFO ][main][AgentLauncher] User's security tokens : ********************************af05
[2020-05-26T18:05:33,539][INFO ][main][AgentLauncher] URI to Ignite node REST server : http://localhost:8080
[2020-05-26T18:05:33,540][INFO ][main][AgentLauncher] URI to GridGain Web Console : https://console.gridgain.com
[2020-05-26T18:05:33,548][INFO ][main][AgentLauncher] Path to properties file : default.properties
[2020-05-26T18:05:33,548][INFO ][main][AgentLauncher] Path to JDBC drivers folder : C:\pluralsight\gridgain-web-console-agent-2020.03.01\jdbc-drivers
[2020-05-26T18:05:33,557][INFO ][main][AgentLauncher] Demo mode : enabled
[2020-05-26T18:05:33,560][INFO ][main][AgentLauncher]
[2020-05-26T18:05:33,621][INFO ][main][WebSocketRouter] Starting Web Console Agent...
[2020-05-26T18:05:33,635][INFO ][Connect thread][WebSocketRouter] Connecting to server: wss://console.gridgain.com
[2020-05-26T18:05:35,996][INFO ][http-client-16][WebSocketRouter] Successfully completes handshake with server
[2020-05-26T18:05:40,035][WARN ][pool-2-thread-1][ClusterHandler] Failed to connect to cluster.
[2020-05-26T18:05:40,036][WARN ][pool-2-thread-1][ClusterHandler] Check that '--node-uri' configured correctly.
[2020-05-26T18:05:40,039][WARN ][pool-2-thread-1][ClusterHandler] Ensure that cluster nodes have [ignite-rest-http] module in classpath (was copied from libs/optional to libs folder).
[2020-05-26T18:05:40,045][INFO ][pool-2-thread-1][ClustersWatcher] Failed to establish connection to node
Please let me know which steps I am missing.
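The agent's own warnings above point at the usual checklist: the node must expose the HTTP REST API (the ignite-rest-http module) and the agent must be pointed at it with --node-uri. A hedged command sketch for Windows (paths assume a stock Ignite binary distribution and that IGNITE_HOME is set; the REST port 8080 is the default the agent log shows):

```batch
:: 1) Make the REST module available to the node -- the agent talks to the
::    cluster over Ignite's HTTP REST API:
xcopy /E /I "%IGNITE_HOME%\libs\optional\ignite-rest-http" "%IGNITE_HOME%\libs\ignite-rest-http"

:: 2) Restart the Ignite node, then point the agent at the node's REST URI:
web-console-agent.bat --node-uri http://192.168.0.102:8080
```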

(111)Connection refused - Apache Reverse Proxy and Tomcat 8.5.51 - Docker Compose

This works with Tomcat 8.5.50. However, with Tomcat 8.5.51, Apache cannot connect via AJP with the following error:
[Tue Mar 10 20:15:31.378937 2020] [proxy:error] [pid 42:tid 139841308157696] (111)Connection refused: AH00957: AJP: attempt to connect to 172.28.0.5:8009 (tomcatserver) failed
[Tue Mar 10 20:15:31.379336 2020] [proxy_ajp:error] [pid 42:tid 139841308157696] [client 192.168.0.1:58054] AH00896: failed to make connection to backend: tomcatserver
Apache is at version 2.4.38:
Server version: Apache/2.4.38 (Debian)
Server built: 2019-10-15T19:53:42
The AJP connector in the server.xml has secretRequired="false". Everything is set up via Docker Compose.
The configuration for secretRequired isn't the only thing that changed:
From https://tomcat.apache.org/migration-85.html#Upgrading_8.5.x:
In 8.5.51 onwards, the default listen address of the AJP Connector was changed to the loopback address rather than all addresses.
In 8.5.51 onwards, the requiredSecret attribute of the AJP Connector was deprecated and replaced by the secret attribute.
In 8.5.51 onwards, the secretRequired attribute was added to the AJP Connector. If set to true, the default, the AJP Connector will not start unless a secret has been specified.
In 8.5.51 onwards, the allowedRequestAttributesPattern attribute was added to the AJP Connector. Requests with unrecognised attributes will now be blocked with a 403.
Reference: AJP connector.
On top of that, the stock server.xml ships with the AJP connector commented out, so it won't be active unless explicitly enabled.
Try adding allowedRequestAttributesPattern=".*" to the connector definition.
Proceeding from where Olaf left off, follow these steps:
(1) You may omit the address attribute, but note that from 8.5.51 onwards the connector then listens on the loopback address only; in a Docker Compose setup where Apache connects from another container, you will likely need address="0.0.0.0".
(2) Change the secretRequired attribute to secretRequired="true", or equivalently leave it out (the default value is true).
(3) Add a secret attribute to both the workers.properties file and the server.xml file. You may choose whatever secret you want, on condition that the values in both files match exactly.
(4) For the time being, add to the AJP connector the attribute allowedRequestAttributesPattern=".*", as T Cervenka suggests.
You should then end up with something like:
workers.properties
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
worker.worker1.secret=F45A93BF-3AA7-4CB4-E49A-DB34573E4A25
server.xml
<Connector port="8009" protocol="AJP/1.3" maxThreads="500" secret="F45A93BF-3AA7-4CB4-E49A-DB34573E4A25" allowedRequestAttributesPattern=".*" />
The value of allowedRequestAttributesPattern must be a regular expression; it is matched against the request attributes passed from the reverse proxy to the AJP connector. Its default value (when you omit the attribute) is null, which is known to break requests. If in doubt, use the regex wildcard ".*", as above. See the Tomcat docs for details: https://tomcat.apache.org/tomcat-8.5-doc/config/ajp.html.
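Note that the error log above comes from mod_proxy_ajp rather than mod_jk, and workers.properties only applies to mod_jk. For mod_proxy_ajp the secret is passed on the ProxyPass line instead; a sketch (the /myapp path is illustrative, and the secret parameter requires httpd 2.4.42 or later, so on 2.4.38 as in the question, keeping secretRequired="false" on the Tomcat side remains the workaround):

```apache
# Apache vhost (mod_proxy + mod_proxy_ajp), httpd >= 2.4.42
ProxyPass        "/myapp" "ajp://tomcatserver:8009/myapp" secret=F45A93BF-3AA7-4CB4-E49A-DB34573E4A25
ProxyPassReverse "/myapp" "ajp://tomcatserver:8009/myapp"
```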

java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out

I am trying to run a JBoss instance in domain mode. While doing that, I get the following issue:
[Host Controller] 12:45:56,535 WARN [org.jboss.as.host.controller] (Controller Boot Thread) JBAS010900: Could not connect to remote domain controller at remote://nnn.nn.nn.88:9999 -- java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out
I ran two JBoss instances in domain mode, configured as follows.
First JBoss instance:
./domain.sh -b nnn.nn.nn.88 -Djboss.bind.address.management=nnn.nn.nn.88
Second JBoss instance:
./domain.sh -b nnn.nn.nn.89 -Djboss.domain.master.address=nnn.nn.nn.88 --host-config=host-slave.xml
The host.xml configuration on nnn.nn.nn.88 is as follows:
<domain-controller>
<local/>
</domain-controller>
The host-slave.xml configuration on nnn.nn.nn.89 is as follows:
<domain-controller>
<remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
I am able to telnet to port 9999 on host nnn.nn.nn.88 from .89, since I configured the public and management interfaces to bind to non-loopback addresses. Could the problem be that <domain-controller> contains <local/>?
Please help me solve this issue. The JDK version is JDK 7 Update 80, on EAP 6.3.
In the HC's host.xml (or, if you start it with --host-config=host-slave.xml, in that particular file), the connection to the DC has to be configured under the <domain-controller> node.
jboss.domain.master.address should be the Domain Controller address, nnn.nn.nn.88:
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
As per the solution article from Red Hat:
https://access.redhat.com/solutions/218053#
I ran the following commands with the same configuration I had when posting this question, and it succeeded.
DC->
./domain.sh -b my-host-ip1 -bmanagement my-host-ip1
HC->
./domain.sh -Djboss.domain.master.address=my-host-ip1 -b my-host-ip2 -bmanagement my-host-ip2
Does this way of configuring give clustering capability to the DC and HCs? I raised the same question to Red Hat on that solution article; I expect the answer is yes:
https://access.redhat.com/solutions/218053#comment-975683
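For completeness, the -b / -bmanagement flags above work by filling in the interface definitions in each host's XML. The stock EAP 6 stanza they populate looks like this (the loopback values shown are the defaults those flags override):

```xml
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>
```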

Run OHS for a WebLogic cluster

I am using WebLogic 12.1.3.
I installed the WebLogic cluster in the following layout:
cluster name: cs1, cluster address 172.30.35.23:7003,172.30.35.23:7004
I have 2 managed servers: 172.30.35.23:7003, 172.30.35.23:7004
and a machine MCH1 (both managed servers are added to this machine).
My console address is 172.30.35.23:7001/console.
I have installed OHS on another server: IP 172.30.35.13, port 7777.
===============================================================
My configuration in OHS (mod_wl_ohs.conf) is :
LoadModule weblogic_module /u01/app/product/fmw/ohs/modules/mod_wl_ohs.so
<IfModule weblogic_module>
<Location /console>
WLSRequest On
WebLogicHost 172.30.35.23
WeblogicPort 7001
</Location>
</IfModule>
<IfModule weblogic_module>
<Location /Hello>
WLSRequest On
WebLogicCluster 172.30.35.23:7003,172.30.35.23:7004
</Location>
</IfModule>
=============================================================
my /etc/hosts on weblogic server is :
127.0.0.1 localhost.localdomain
172.30.35.23 weblogic2 weblogic2.localdomain
my /etc/hosts on OHS server is :
127.0.0.1 localhost.localdomain
172.30.35.13 OHS OHS.localdomain
==============================================================
I deployed Hello.war to the cluster.
My test results:
172.30.35.23:7001/console is ok
172.30.35.13:7777/console is ok
172.30.35.23:7003/Hello is ok
172.30.35.23:7004/Hello is ok
but I don't get any answer on 172.30.35.13:7777/Hello.
Why? Does it mean OHS is not working with the cluster?
ohs1.log shows:
[oracle@OHS logs]$ cat ohs1.log
[2015-10-05T18:21:50.6939+03:30] [OHS] [ERROR:32] [OHS-9999] [mod_weblogic.c] [client_id: 172.30.35.200] [host_id: OHS] [host_addr: 172.30.35.13] [tid: 140599950821120] [user: oracle] [ecid: 0058LmbHfxLDg^wawDedMG0005rH000006] [rid: 0] [VirtualHost: main] <0058LmbHfxLDg^wawDedMG0005rH000006> weblogic: parseServerList: 172.30.35.23:7102 apr_socket_connect error [111] Connection refused
[2015-10-05T18:21:50.6971+03:30] [OHS] [ERROR:32] [OHS-9999] [mod_weblogic.c] [client_id: 172.30.35.200] [host_id: OHS] [host_addr: 172.30.35.13] [tid: 140599950821120] [user: oracle] [ecid: 0058LmbHfxLDg^wawDedMG0005rH000006] [rid: 0] [VirtualHost: main] <0058LmbHfxLDg^wawDedMG0005rH000006> weblogic: parseJVMID: could not resolve hostname '-1407311080'. Returning NULL from parseJVMID
Thanks
You need to make WebLogic Server accept OHS requests.
If the version of the Oracle WebLogic Server instances in the back end is 10.3.4 (or a later release), you must set the WebLogic Plug-In Enabled parameter. (The WebLogic Proxy Plug-In provides features identical to those of the plug-in for Apache HTTP Server.)
1. Log in to the Oracle WebLogic Server administration console.
2. In the Domain Structure pane, expand the Environment node.
- If the server instances to which you want to proxy requests from Oracle HTTP Server are in a cluster, select Clusters.
- Otherwise, select Servers.
3. Select the server or cluster to which you want to proxy requests from Oracle HTTP Server. The Configuration: General tab is displayed.
4. Scroll down to the Advanced section, expand it, and select the WebLogic Plug-In Enabled checkbox (or change the value from default to yes if there is no checkbox).
5. Click Save.
6. If you selected Servers in step 2, repeat steps 3 to 5 for the other servers to which you want to proxy requests from Oracle HTTP Server.
For the change to take effect, you must restart the server instances.
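The console steps above can also be scripted with WLST. A sketch, to be run inside the WLST shell that ships with WebLogic (not standalone Python); the admin URL and cluster name cs1 come from the question, while the credentials are placeholders:

```python
# WLST (Jython) -- enable "WebLogic Plug-In Enabled" on the cluster
connect('weblogic', '<admin-password>', 't3://172.30.35.23:7001')
edit()
startEdit()
cd('/Clusters/cs1')           # for individual servers, cd('/Servers/<name>') instead
cmo.setWeblogicPluginEnabled(true)   # 'true' is predefined in the WLST shell
save()
activate(block='true')
disconnect()
```

Restart the managed servers afterwards, as with the console-based change.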