JBAS010153: Node identifier property is set to the default value. Please make sure it is unique - jboss7.x

I am getting the following WARN message when I start my host, which is a Host Controller (HC) attached to the Domain Controller (DC):
[Server:server-two] 14:06:13,822 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 33) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
And my host-slave.xml has the following config:
<server-identities>
    <!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
    <secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
</server-identities>
I suspect this config could be the reason, though maybe I've misunderstood: I couldn't find a node identifier property anywhere, and this default secret value is the only default I could see that might be behind the WARN message.
However, I never told the HC to look up host-slave.xml; the command I ran to start my HC is:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
nnn.nn.nn.88 is my DC
Otherwise, please advise what the cause of the WARN message is.
Also, please let me know the implications of this WARN message, and advise on the configuration required to resolve it and head off any consequences it could lead to.

I'm new to WildFly, and noticed this warning when I started it standalone from Eclipse (I'm doing the following tutorial: https://wwu-pi.github.io/tutorials/lectures/eai/020_tutorial_jboss_project.html).
The fix was to add a node-identifier to the core-environment in the transactions subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
<core-environment node-identifier="meindertwillemhoving">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
This is in file [wildfly]\standalone\configuration\standalone.xml.
This is the same answer as https://developer.jboss.org/message/880136#880136

According to WFLY-10541, if you are using WildFly 14.0.0 or newer, you can pass the following to the startup script to set the transaction node identifier:
-Djboss.tx.node.id=<some-unique-id>
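For example, with a standalone server (node1 is just a placeholder; keep each server's id unique and at most 23 bytes):
./bin/standalone.sh -Djboss.tx.node.id=node1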

Setting the node identifier to a unique value is only required for proper handling of XA transactions.
You can set it as follows in your XML configuration:
<subsystem xmlns="urn:jboss:domain:transactions:6.0">
<core-environment node-identifier="${jboss.tx.node.id}">
It needs to be a unique value up to 23 bytes long.
More about this here: http://www.mastertheboss.com/jboss-server/jboss-configuration/configuring-transactions-jta-using-jboss-as7-wildfly
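If you prefer not to edit the XML by hand, the same attribute can be set through jboss-cli (a sketch, assuming a running local server; node1 is a placeholder):
/subsystem=transactions:write-attribute(name=node-identifier, value=node1)
:reload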

Building on #kaptan's answer, I added the following to the bottom of bin/standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.tx.node.id=`hostname -f`"
This way I don't have to remember to add "-Djboss.tx.node.id=" when starting WildFly by hand.
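On Windows, the equivalent line (an untested sketch) would go in bin/standalone.conf.bat; %COMPUTERNAME% is at most 15 characters, so it stays under the 23-byte limit:
set "JAVA_OPTS=%JAVA_OPTS% -Djboss.tx.node.id=%COMPUTERNAME%"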

For this issue, <server-identities> is not the problem. In fact, it shouldn't be touched at all.
When JBoss is started in domain mode by domain.sh, by default there are three servers: server-one, server-two, and server-three. When you run one more HC attached to the DC, the default servers that are set to auto-start will clash when the HC is started against the DC by the following command:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
or by having the host configuration at the HC (the default host.xml, unless we choose a different one):
<domain-controller>
    <remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
To solve this, we need to set auto-start to false on the default servers and create a new server group. To that group we add the DC-created server and the HC-created server, choosing the same appropriate profile (either full-ha or full) for both servers across the DC and HC (a jboss-cli sketch follows below).
Then, when we start the group, after configuring the required heap size (including permgen space), both the DC and HC come up, and in the DC you can see both of your created servers started in the new server group.
DC- Domain Controller
HC- Host Controller
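As a rough sketch of those steps in jboss-cli (all host, server, and group names here are hypothetical; pick the profile and socket bindings that match your setup):
/server-group=my-server-group:add(profile=full, socket-binding-group=full-sockets)
/host=master/server-config=dc-server:add(group=my-server-group)
/host=slave/server-config=hc-server:add(group=my-server-group)
/host=slave/server-config=server-one:write-attribute(name=auto-start, value=false)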
To deploy, you need to upload the .ear or web archive in the Application Console; you cannot place it in the deployments folder with a .dodeploy file as you would in standalone mode.
When uploading the next version of the same .ear, use the Replace option instead of the Remove & Add option in the upload process.
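Alternatively, in domain mode the CLI can deploy straight to a server group (a sketch; the path and group name are placeholders):
deploy /path/to/app.ear --server-groups=my-server-group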

How to fix etcd cluster "error "tls: first record does not look like a TLS handshake""

I created a three-node etcd cluster; configuration and startup went fine, but when I check /var/log/messages, it shows:
etcd: rejected connection from "172.17.0.3:43192" (error "tls: first record does not look like a TLS handshake", ServerName "")
How can I fix it?
I have checked the health of etcd:
member 48b0dff99d5c867e is healthy: got healthy result from https://172.17.0.9:2379
member 646dab89331aabab is healthy: got healthy result from https://172.17.0.8:2379
member b45603216bfac234 is healthy: got healthy result from https://172.17.0.10:2379
That looks OK, but /var/log/messages keeps showing this error:
Jan 12 20:08:57 master etcd: rejected connection from "172.17.0.3:43160" (error "tls: first record does not look like a TLS handshake", ServerName "")
Jan 12 20:08:57 master etcd: rejected connection from "172.17.0.3:43162" (error "tls: oversized record received with length 21536", ServerName "")
I got this message on the etcd peer communication when switching from http to https for peer communication. Apparently etcd persists peer information that overrides the command-line options, so it continued to use http for peer communication in spite of those options.
In the end, since this was a test cluster, I nuked /var/lib/etcd and the new CLI configuration took hold.
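A minimal sketch of that reset, assuming a systemd-managed etcd and the default data directory (this destroys all cluster data, so it is only acceptable on a test cluster):
systemctl stop etcd
rm -rf /var/lib/etcd/*    # wipes the persisted peer info (and ALL data)
systemctl start etcd      # the unit must now pass the https peer URLs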
I don't have a complete solution for your issue, but I've found a couple of links that might help with further investigation. Read them carefully, try the solutions, and I hope you will resolve the problem.
Github question #9917: check the ETCDCTL_API variable, and especially make sure --endpoints is configured with https (see the example after the quoted comment below).
Runtime reconfiguration: try to reconfigure your etcd by updating/removing/adding etcd members.
nginx ingress: check your nginx ingress annotations in case you are using nginx.
google groups TLS handshake topic: check this topic, especially the comments related to the VAULT_ADDR variable. I will copy-paste the last comment from the thread here:
We were able to get everything to work, after understanding the permission issues.
You asked: "Please confirm if you are seeing server error messages before initializing Vault". Upon further examination, I did determine that the errors were not happening before initializing the Vault.
The problem ended up not being related to VAULT_ADDR, and we used the value: "http://127.0.0.1:8200".
I have the setup operation scripted, and it appears that not everything was being run at the proper permissions. At first I was running the scripts using the "sudo" command, which resulted in the failures. I discovered that the permissions for the certificate key were restricted and the file could not be accessed by my user. There may have been other permission issues as well. But once I switched user to root, and ran the script, everything behaved correctly.
Thanks
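For the ETCDCTL_API/https point from the first link above, a health check might look like this (the certificate paths are assumptions; point them at your actual cert files):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.17.0.9:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/client.crt \
  --key=/etc/etcd/pki/client.key \
  endpoint health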

Wildfly Server local server debug panel shows error "http connector is not enabled for server profile"

I set up a local JBoss/WildFly server launch configuration in IntelliJ IDEA. When I attempt to start the server, the configuration panel pops up and shows the following error:
Error: HTTP connector is not enabled for server profile
I could not find anything in the IDEA help about what this means and how to fix it. The server is a Keycloak distro, but that is just plain WildFly 10 with an extra subsystem.
Has anyone seen this before and knows how to fix the error?
I can't reproduce this with a fresh installation of Keycloak 3.2.1 from here.
IDEA looks for the two following XPaths when searching for HTTP connector settings:
"/ns:server/ns:profile/*[local-name()='subsystem']/*[local-name()='server']/*[local-name()='http-listener'][@*[local-name()='socket-binding' and .='http']]",
"/ns:server/ns:profile/*[local-name()='subsystem']/*[local-name()='connector'][@*[local-name()='socket-binding' and .='http']]"};
For me, with the fresh Keycloak distribution, the first XPath matches the following markup:
<subsystem xmlns="urn:jboss:domain:undertow:3.0">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https"/>
Please check your configuration around this place.
If this does not help, please attach your standalone.xml or at least the relevant part of it.
In my case, changing the 'JRE' from 'Default' to an explicit one (even though it was the same version shown in parentheses next to the default) solved the problem.

Is there a way to dynamically define and register new Dgraphs in Endeca

As far as my knowledge of Endeca goes, any time you want to add a new Dgraph definition to your Endeca configuration, you have to run initializeServices.sh to push the updated configuration to EAC.
I was wondering if there is any way I can do that without running initializeServices.sh (since it does a lot more than just update the list of Dgraphs registered in EAC, and I want to avoid that).
I found that the command ./runcommand.sh --update-definition lets you make configuration changes to a Dgraph that has already been registered with EAC, but if I add a new Dgraph to the config and run the command, it fails with the error below:
[11.17.16 16:00:07] INFO: Setting definition for host 'MDEXLiveHost2'.
[11.17.16 16:00:07] SEVERE: Caught an exception while checking provisioning
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host setDefinition - Caught exception while setting host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
I can't find any detailed logs for this error anywhere in the PlatformServices logs to debug further.
I could, however, see in the request log that /eac/ProvisioningService returned HTTP 500, which leads me to believe that the script is trying to find the current configuration of MDEXLiveHost2 and is unable to find it.
EDITED TO ADD: configuration for:
New host:
<host id="MDEXLiveHost2" hostName="${mdexLive.host2}" port="${mdexLive.eac.port}" useSsl="false" />
New Dgraph:
<dgraph id="DgraphLive2" host-id="MDEXLiveHost2" port="${dgraphLive1.port}"
post-startup-script="LiveDgraphPostStartup">
<properties>
<property name="restartGroup" value="A" />
<property name="updateGroup" value="a" />
<property name="DgraphContentGroup" value="Live" />
</properties>
<log-dir>./logs/dgraphs/DgraphLive</log-dir>
<input-dir>./data/dgraphs/DgraphLive/dgraph_input</input-dir>
<update-dir>./data/dgraphs/DgraphLive/dgraph_input/updates</update-dir>
</dgraph>
EDITED TO ADD: errors after manually adding the host using eaccmd.sh.
Host definition file:
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
The host is added successfully (validated via describe-app)
$./eaccmd.sh describe-app --app myapp | grep MDEXLiveHost2
<host host-name="172.18.0.7" port="9999" host-id="MDEXLiveHost2" useSsl="false">
But, running any command I get this error:
[11.18.16 11:00:58] INFO: Updating provisioning for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] INFO: Host name of host 'MDEXLiveHost2' has changed from 172.18.0.7 to 172.18.0.7 . Components on this host will be re-provisioned.
[11.18.16 11:00:58] INFO: Updating definition for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] SEVERE: Caught an exception while checking provisioning.
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host updateEacDefinition - Caught exception while updating host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
If only this error could be made more verbose, that might give some help.
You do not have to run initializeServices.sh for every configuration change you make. When you execute the other scripts in the control folder, they first check whether there are any configuration changes and apply them.
As far as the error is concerned, I suspect you either didn't specify MDEXLiveHost2 in your LiveDGraphCluster.xml, or the host you did specify is not reachable. Verify your configuration.
Lastly, your approach of dynamically adding more Dgraphs into the cluster is not standard practice. When you configure your environment, you should run a load test using ENEPerf to simulate the load, and then create as many Dgraphs and hosts as required. If you do add hosts and Dgraphs dynamically, you also need to make sure they are added, dynamically, to your load-balancer configuration as well.
My first guess was that MDEX host 2 didn't have Platform Services/MDEX installed and Platform Services running, but it may simply be that the port you specified is incorrect.
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
Is your EAC port 9999 rather than 8888 (the out-of-the-box value)? If it is 9999 on your ITL server, make sure it is also set to 9999 on your new Dgraph server.

Liferay 6.2 clustering issue with multicast

I am trying to cluster Ehcache and Lucene with a Liferay 6.2 EE SP2 bundle on 2 servers with multicast enabled. We have Apache httpd servers fronting the Tomcat servers using a reverse proxy. A valid 6.2 license is deployed on both nodes.
We use the following properties in portal-ext.properties:
cluster.link.enabled=true
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
# Since we are using SSL on the frontend
web.server.protocol=https
# set this to any server that is visible to both the nodes
cluster.link.autodetect.address=dbserverip:dbport
#ports and ips we know work in our environment for multicast
multicast.group.address["cluster-link-control"]=ip
multicast.group.port["cluster-link-control"]=port1
multicast.group.address["cluster-link-udp"]=ip
multicast.group.port["cluster-link-udp"]=port2
multicast.group.address["cluster-link-mping"]=ip
multicast.group.port["cluster-link-mping"]=port3
multicast.group.address["hibernate"]=ip
multicast.group.port["hibernate"]=port4
multicast.group.address["multi-vm"]=ip
multicast.group.port["multi-vm"]=port5
We are running into issues with the Ehcache and Lucene clustering not working. The following tests fail:
Moving a portlet on node 1 does not show up on node 2.
There are no errors except for a startup error with Lucene:
14:19:35,771 ERROR [CLUSTER_EXECUTOR_CALLBACK_THREAD_POOL-1][LuceneHelperImpl:1186] Unable to load index for company 10157
com.liferay.portal.kernel.exception.SystemException: java.net.ConnectException: Connection refused
    at com.liferay.portal.search.lucene.LuceneHelperImpl.getLoadIndexesInputStreamFromCluster(LuceneHelperImpl.java:488)
    at com.liferay.portal.search.lucene.LuceneHelperImpl$LoadIndexClusterResponseCallback.callback(LuceneHelperImpl.java:1176)
    at com.liferay.portal.cluster.ClusterExecutorImpl$ClusterResponseCallbackJob.run(ClusterExecutorImpl.java:614)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
We verified that the JGroups multicast works outside of Liferay by running the following commands with a downloaded copy of jgroups.jar, substituting in the 5 multicast IPs and ports.
Testing with JGROUPS
1) McastReceiver -
java -cp ./jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
ex. java -cp jgroups-final.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
2) McastSender -
java -cp ./jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
ex. java -cp jgroups-final.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
From there, typing things into the McastSender will result in the Receiver printing it out.
Thanks!
After a lot of troubleshooting, and with help from various folks on my team and at Liferay support, we switched to unicast and it worked a lot better.
Here is what we did:
Extracted jgroups.jar from tomcat home/webapps/ROOT/WEB-INF/lib and saved it locally.
Unzipped jgroups.jar and extracted and saved the tcp.xml it contains.
As a baseline test, changed the TCPPING section in tcp.xml and saved:
<TCPPING timeout="3000"
    initial_hosts="${jgroups.tcpping.initial_hosts:servername1[7800],servername2[7800]}"
    port_range="1"
    num_initial_members="10"/>
Copy tcp.xml to the Liferay home on both nodes.
Change portal-ext.properties to remove the multicast properties and add the following lines:
cluster.link.channel.properties.control=${liferay.home}/tcp.xml
cluster.link.channel.properties.transport.0=${liferay.home}/tcp.xml
Start node 1.
Start node 2.
Check the logs.
Do the cluster cache test:
Moving a portlet on node 1 shows up on node 2.
Under Control Panel -> License Manager, both nodes show up with valid licenses.
Searching for a user on node 2 after adding the user on node 1 (Control Panel -> Users and Organizations).
All of the above tests worked.
So we shut down the servers and changed tcp.xml to use JDBC rather than TCPPING, so we don't have to specify node names manually.
Steps for the JDBC config:
Create the table in the Liferay database manually:
CREATE TABLE JGROUPSPING (own_addr varchar(200) not null, cluster_name varchar(200) not null, ping_data blob default null, primary key (own_addr, cluster_name))
Change tcp.xml to remove the TCPPING section and add the following:
<JDBC_PING datasource_jndi_name="java:comp/env/jdbc/LiferayPool"
    initialize_sql="" />
Save and push the file manually to both nodes.
Start the servers and repeat the tests above.
It should work seamlessly.
It was invaluable to have debug logging turned on for JGroups, as mentioned in the following post:
https://bitsofinfo.wordpress.com/2014/05/21/clustering-liferay-globally-across-data-centers-gslb-with-jgroups-and-relay2/
Here is the tomcat home/webapps/ROOT/WEB-INF/classes/META-INF/portal-log4j-ext.xml file I used to triage various clustering issues at boot:
<?xml version="1.0"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<category name="com.liferay.portal.cluster">
<priority value="TRACE" />
</category>
<category name="com.liferay.portal.license">
<priority value="TRACE" />
</category>
We also found that the Lucene cluster replication startup errors were fixed in a fix pack, and we are getting a patch for it:
https://issues.liferay.com/browse/LPS-51714
https://issues.liferay.com/browse/LPS-51428
We added the following portal instance properties so Lucene replication works better between the 2 nodes:
# the port the app servers listen on, e.g. 8080
portal.instance.http.port=8080
portal.instance.protocol=http
Hope this helps someone.
Update
The Lucene index loading issue in a cluster was resolved by a Liferay 6.2 EE patch from support for the LPS issues mentioned above.

remove server header tomcat

I am able to change the value of org.apache.coyote.http11.Http11Protocol.SERVER to anything else, so the HTTP response header contains something like:
Server: Apache
instead of the default:
Server: Apache-Coyote/1.1
Using an empty value for org.apache.coyote.http11.Http11Protocol.SERVER does not remove the Server header.
How can I remove the Server header from my responses?
You can modify your Tomcat server.xml and add a "server" attribute set to whatever you want. The server attribute should be set on any HTTP or SSL connectors that you have running. For example, below is a sample HTTP connector configuration from an example server.xml file:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" enableLookups="false" xpoweredby="false" server="Web"/>
Short answer: you can't remove the header, but you can modify it (see the other answers).
The Server header is defined in the RFC. Taken from http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.38:
14.38 Server
The Server response-header field contains information about the software used by the origin server to handle the request. The field can contain multiple product tokens (section 3.8) and comments identifying the server and any significant subproducts. The product tokens are listed in order of their significance for identifying the application.
If the response is being forwarded through a proxy, the proxy application MUST NOT modify the Server response-header. Instead, it SHOULD include a Via field (as described in section 14.45).
Note: Revealing the specific software version of the server might allow the server machine to become more vulnerable to attacks against software that is known to contain security holes. Server implementors are encouraged to make this field a configurable option.
It should be possible since Tomcat 5.5. Check out this discussion: https://mail-archives.apache.org/mod_mbox/tomcat-users/200508.mbox/%3C42FBE8AA.1060401#joedog.org%3E
and this link:
https://tomcat.apache.org/tomcat-4.1-doc/config/coyote.html
Accordingly, the following should set the server header to TEST; an empty value should make it empty.
<Connector className="org.apache.coyote.tomcat4.CoyoteConnector" port="8180" inProcessors="5" maxProcessors="75" enableLookups="true" acceptCount="10" debug="0" connectionTimeout="20000" useURIValidationHack="false" server="TEST"/>
Security-wise, setting the Server header to Apache should be good enough in most cases: from that alone it isn't possible to infer the OS, the exact version, or which modules (and module versions) are running.
If you are using embedded Tomcat, you can try the code below:
import org.apache.catalina.startup.Tomcat;

final Tomcat server = new Tomcat();
// disable the X-Powered-By header and blank out the Server header value
server.getConnector().setXpoweredBy(false);
server.getConnector().setAttribute("server", "");
For a web application, set the Server header from code. This worked for me in a Java Spring Boot project:
response.setHeader("Server", "none");
Try setting it from code like this if your application is deployed in Tomcat.
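A minimal sketch of doing this globally with a servlet filter (the class name is made up; assumes the javax.servlet 3.x API):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

// Overrides the Server header on every response before it is committed.
@WebFilter("/*")
public class ServerHeaderFilter implements Filter {
    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        ((HttpServletResponse) res).setHeader("Server", "none");
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {}
}
Note that if the Connector's server attribute is also set, Tomcat uses that value in preference to anything the application sets.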