How to write a production .ksh file to run a main class from an EAR (WebSphere JVM)

I have written the .ksh file below to execute the main class from an EAR deployed on a WebSphere server, running it over PuTTY.
#!/bin/ksh
############################################################################
#
#
#
# Written By: xyz
#
# Sep 23, 2017
#
#
#
#
#
############################################################################
print " Process started"
java -cp \
"/appl/was/profiles/node/installedApps/cab/abcEar.ear/abcWeb.war/WEB-INF/lib/*" \
com.batch.FirtTool
print "Process End"
I am able to execute the main class, but the database connection is not being established. I am using a Hibernate JNDI data source for the connection.
If I remove the JNDI lookup and use the connection properties directly instead, my program works properly.
But I need JNDI in my application.
Can you please help with this?
My Hibernate configuration file looks like this:
<!-- Database connection settings -->
<property name="connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
<property name="hibernate.connection.datasource">java:comp/env/jdbc/xyz</property>
With JNDI it is not working, but if we use the direct properties below instead, then it works:
<property name="connection.url">xyz</property>
<property name="connection.username">xyz</property>
<property name="connection.password">xyz</property>

Related

Building a working 3-node Ignite cluster on AWS

Did anyone successfully launch a three-node Ignite cluster on AWS without any issues? If so, can someone help with a step-by-step process or a workaround?
The Ignite documentation is hard to follow and gives far too little information, with no screenshots; it only explains running a Docker instance on a single EC2 machine, but I need at least a three-node Ignite cluster on AWS or EMR.
I did try this blog (https://aws.amazon.com/blogs/big-data/real-time-in-memory-oltp-and-analytics-with-apache-ignite-on-aws/) with the CloudFormation template (https://github.com/aws-samples/aws-big-data-blog/blob/master/aws-blog-real-time-in-memory-oltp-and-analytics-with-apache-ignite/cloudformation/configignite.json). Below is the configureIgnite.sh script that the CloudFormation template refers to, but the Ignite setup is failing with a property syntax error in the default-config.xml file.
#!/bin/bash
#
# This is a modified version of the file stored at s3://publicbucketbabupe/ignitelibrary/configureIgnite.sh
# which changes the config to use the instance provided credentials rather than requiring access/secret to be passed in
#
# Parameters are
# 1 - Cache Name
# 2 - Number of replicas
# 3 - S3 Bucket Name
#
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the \"License\"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an \"AS IS\" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<beans xmlns=\"http://www.springframework.org/schema/beans\"
xmlns:util=\"http://www.springframework.org/schema/util\"
xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"
xsi:schemaLocation=\"
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd\">
<!--
Alter configuration below as needed.
-->
<bean id=\"grid.cfg\" class=\"org.apache.ignite.configuration.IgniteConfiguration\">
<property name=\"cacheConfiguration\">
<list>
<bean class=\"org.apache.ignite.configuration.CacheConfiguration\">
<property name=\"name\" value=\"$1\"/>" > /tmp/igniteconfig.xml
echo " <property name=\"cacheMode\" value=\"PARTITIONED\"/>
<property name=\"atomicWriteOrderMode\" value=\"PRIMARY\"/>
<property name=\"writeSynchronizationMode\" value=\"PRIMARY_SYNC\"/>" >> /tmp/igniteconfig.xml
availfreeMemory=$(cat /proc/meminfo|grep MemTotal|awk '{print $2}')
memoryOverhead=$((availfreeMemory/1024/1024/10))
availfreeMemoryinGB=$((availfreeMemory/1024/1024 - memoryOverhead))
if [[ $availfreeMemoryinGB -gt 8 ]]; then
offheapmemoryinGB=$((availfreeMemoryinGB-8))
echo " <property name=\"memoryMode\" value=\"ONHEAP_TIERED\" />
<property name=\"offHeapMaxMemory\" value=\"#{$offheapmemoryinGB * 1024L * 1024L * 1024L}\" />" >> /tmp/igniteconfig.xml
echo "8g" > /tmp/heapsize.log
else
echo "${availfreeMemoryinGB}g" > /tmp/heapsize.log
fi
echo " <property name=\"evictionPolicy\">
<bean class=\"org.apache.ignite.cache.eviction.lru.LruEvictionPolicy\">
<property name=\"maxSize\" value=\"100000000\"/>
</bean>
</property>" >> /tmp/igniteconfig.xml
echo " <property name=\"swapEnabled\" value=\"false\"/>" >> /tmp/igniteconfig.xml
echo " <property name=\"atomicityMode\" value=\"ATOMIC\" />" >> /tmp/igniteconfig.xml
echo " <property name=\"backups\" value=\"$2\" />" >> /tmp/igniteconfig.xml
echo " </bean>" >> /tmp/igniteconfig.xml
echo " </list>" >> /tmp/igniteconfig.xml
echo "</property>" >> /tmp/igniteconfig.xml
echo "<property name=\"discoverySpi\">
<bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">
<property name=\"ipFinder\">
<bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder\">
<property name=\"awsCredentials\" ref=\"aws.creds\"/>
<property name=\"bucketName\" value=\"$3\"/>
</bean>
</property>
</bean>
</property>
<property name=\"communicationSpi\">
<bean class=\"org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi\">
<property name=\"slowClientQueueLimit\" value=\"1000\"/>
</bean>
</property>
</bean>
<!-- AWS credentials. Provide your access key ID and secret access key. -->
<bean id="aws.creds" class="com.amazonaws.auth.BasicAWSCredentials">
<constructor-arg value="" />
<constructor-arg value="" />
</bean>
</beans>" >> /tmp/igniteconfig.xml
I also tried the steps mentioned in this guide (https://www.gridgain.com/docs/8.7.6//installation-guide/manual-install-on-ec2), but I am getting a Spring XML error.
The content of my aws-static-ip-finder.xml file is below:
<bean class="org.apache.ignite.configuration.IgniteConfiguration" >
<!-- other properties -->
<!-- Explicitly configure TCP discovery SPI to provide a list of nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>172.31.81.211</value>
<value>172.31.82.21</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML application context [springUrl=file:/home/ec2-user/aws-static-ip-finder_2.xml, err=Line 1 in XML document from URL [file:/home/ec2-user/aws-static-ip-finder_2.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 67; cvc-elt.1: Cannot find the declaration of element 'bean'.]
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1052)
at org.apache.ignite.Ignition.start(Ignition.java:350)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML application context [springUrl=file:/home/ec2-user/aws-static-ip-finder_2.xml, err=Line 1 in XML document from URL [file:/home/ec2-user/aws-static-ip-finder_2.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 67; cvc-elt.1: Cannot find the declaration of element 'bean'.]
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:391)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:103)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:97)
at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:750)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:951)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:860)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:730)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:699)
at org.apache.ignite.Ignition.start(Ignition.java:347)
... 1 more
Caused by: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 1 in XML document from URL [file:/home/ec2-user/aws-static-ip-finder_2.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 67; cvc-elt.1: Cannot find the declaration of element 'bean'.
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:378)
... 9 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 67; cvc-elt.1: Cannot find the declaration of element 'bean'.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:134)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:396)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:284)
at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1901)
at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:741)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:374)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:613)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3132)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:852)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:76)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadDocument(XmlBeanDefinitionReader.java:429)
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:391)
... 12 more
Failed to start grid: Failed to instantiate Spring XML application context [springUrl=file:/home/ec2-user/aws-static-ip-finder_2.xml, err=Line 1 in XML document from URL [file:/home/ec2-user/aws-static-ip-finder_2.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 67; cvc-elt.1: Cannot find the declaration of element 'bean'.]
Thanks
Sri
Can you please share what you have tried to make the cluster formation work? And which version of Ignite are you on?
Basically you must create Ignite configurations with matching IP Finders in Discovery SPI, check out this documentation:
https://apacheignite-mix.readme.io/docs/amazon-aws
You can also go with a simple static IP finder that contains the public IP addresses of the EC2 instances:
https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder
To do this, open the list of your EC2 instances and check the "IPv4 Public IP" field.
Another thing you should consider is Security Group:
https://docs.aws.amazon.com/en_us/vpc/latest/userguide/VPC_SecurityGroups.html#CreatingSecurityGroups
Make sure the following TCP ports are open bidirectionally (I've included defaults):
Discovery: 47500-47600 (port range from static IP finder)
Communication: 47100-47200
Thin client connection port: 10800
REST (optional): 8080
Make sure these connections are also allowed outbound; each node must be able to receive messages from the other EC2 instances.
Here is detailed documentation for GridGain:
https://www.gridgain.com/docs/latest/installation-guide/manual-install-on-ec2
Just replace GridGain with Ignite to complete the deployment.
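If you prefer to wire this up in code rather than Spring XML, a minimal sketch of the static IP finder configuration might look like the following; the two addresses are the ones from your aws-static-ip-finder.xml, the 47500..47509 port range matches the discovery defaults, and only ignite-core is assumed on the classpath:
import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticIpFinderNode {

    public static void main(String[] args) {
        // List every node's private IP together with the discovery port range.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
                "172.31.81.211:47500..47509",
                "172.31.82.21:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        // Start a server node; run the same configuration on each EC2 instance.
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Nodes in topology: " + ignite.cluster().nodes().size());
    }
}
Side note: the XML in the question is the declarative equivalent of this sketch, but Spring only accepts it when it is wrapped in a <beans> root element with the schema declarations (as in the CloudFormation script above); without that wrapper the parser reports exactly the "Cannot find the declaration of element 'bean'" error shown.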
I got it working (embedded Ignite) using TcpDiscoveryAlbIpFinder with a target group.
Added ingress security group rules to allow ports 47500-47600 (discovery), 47100-47200 (communication) and 10800 (thin client).
Added an AWS policy to allow load balancer access:
{
"Sid": "AllowLoadBalancer",
"Effect": "Allow",
"Action": [
"elasticloadbalancing:Describe*",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer"
],
"Resource": "arn:aws:elasticloadbalancing:*:*:loadbalancer/*"
}
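For reference, a rough sketch of how the ALB-based finder can be wired up programmatically, assuming the ignite-aws module and the AWS SDK are on the classpath; the region and target group ARN are placeholders, and the class/setter names are from memory, so double-check them against your Ignite version:
import com.amazonaws.auth.InstanceProfileCredentialsProvider;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.elb.TcpDiscoveryAlbIpFinder;

public class AlbFinderNode {

    public static void main(String[] args) {
        // Assumption: the EC2 instances run with an instance profile that carries
        // the elasticloadbalancing permissions from the policy above.
        TcpDiscoveryAlbIpFinder ipFinder = new TcpDiscoveryAlbIpFinder();
        ipFinder.setRegion("us-east-1"); // placeholder region
        ipFinder.setTargetGrpARN("arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/ignite/PLACEHOLDER");
        ipFinder.setCredentialsProvider(InstanceProfileCredentialsProvider.getInstance());

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignition.start(cfg);
    }
}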
See #antkr's answer

SharedRDD code for Ignite works on a single-server setup but fails with an exception when an additional server is added

I have 2 server nodes running collocated with Spark workers. I am using a shared Ignite RDD to save my DataFrame. My code works fine when only one server node is started; if I start both server nodes, the code fails with:
Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=null, state=STOPPING]
The DiscoverySpi is configured as below:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">-->
<property name="shared" value="true"/>
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>v-in-spark-01:47500..47509</value>
<value>v-in-spark-02:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
I know this exception generally means the Ignite instance was either not started or already stopped when the operation was attempted, but I don't think that is the case here: with a single server node it works fine, and I am not explicitly closing the Ignite instance in my program.
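For reference, here is a minimal standalone sketch of the situation that message describes (an illustration only, not the actual Spark job): once the node has been stopped, any further cache operation on it fails with the same IllegalStateException.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class GridStoppedRepro {

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();               // node with default configuration
        ignite.getOrCreateCache("cache1").put(1, "a");  // works while the node is running

        Ignition.stop(true);                            // node moves to STOPPING/STOPPED

        // Throws java.lang.IllegalStateException:
        // "Grid is in invalid state to perform this operation..."
        ignite.getOrCreateCache("cache1").put(2, "b");
    }
}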
Also, in my code flow I perform operations in a transaction, which works, so the flow is:
Create cache1: works fine
Create cache2: works fine
Put value in cache1: works fine
igniteRDD.saveValues on cache2: this step fails with the above-mentioned exception
Use this link for the complete error trace.
The "Caused by" part is also pasted below:
Caused by: java.lang.IllegalStateException: Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=null, state=STOPPING]
at org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:190)
at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:90)
at org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3151)
at org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2739)
at org.apache.ignite.spark.impl.IgniteAbstractRDD.ensureCache(IgniteAbstractRDD.scala:39)
at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:164)
at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:161)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
... 3 more
It looks like the node embedded in the executor process is stopped for some reason while you are still trying to run the job. To my knowledge the only way for this to happen is to stop the executor process. Can this be the case? Is there anything in the log except the trace?
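If it helps to confirm that, one option is to register a lifecycle bean on the IgniteConfiguration used by the executor-embedded nodes and log the stop events, so the executor logs show exactly when and where the node goes down. A minimal sketch, assuming you can adjust the configuration the workers use:
import org.apache.ignite.IgniteException;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;

public class StopLoggingLifecycleBean implements LifecycleBean {

    @Override
    public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
        // BEFORE_NODE_STOP / AFTER_NODE_STOP fire when the embedded node shuts down,
        // for example because the Spark executor process is going away.
        if (evt == LifecycleEventType.BEFORE_NODE_STOP || evt == LifecycleEventType.AFTER_NODE_STOP)
            System.err.println("Ignite lifecycle event: " + evt + " in " + Thread.currentThread().getName());
    }

    // Attach the bean to the configuration used for the embedded nodes.
    public static IgniteConfiguration withStopLogging(IgniteConfiguration cfg) {
        cfg.setLifecycleBeans(new StopLoggingLifecycleBean());
        return cfg;
    }
}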

JMX doesn't seem to be working with ActiveMQ

I'm trying to use JMX with ActiveMQ for monitoring. So far I've been using this and this as references, but I'm unable to connect to JMX remotely, and I don't see any mention of a JMX URL in the ActiveMQ logs. Is there another way to make sure JMX is working? Is it supposed to be indicated in the ActiveMQ logs?
PS: I'm using JDK 1.7 and ActiveMQ 5.14.2.
Thanks in advance!
EDIT
I set useJmx="true" in my activemq.xml file:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="primary" useJmx="true" dataDirectory="${activemq.data}">
I tried two approaches:
FIRST
I tried changing the managementContext from createConnector="false" to:
<managementContext>
<managementContext createConnector="true" connectorPort="1099"/>
</managementContext>
With the first approach the port is open, ActiveMQ runs fine, and the JMX URL gets reported in the logs, although I cannot connect to it remotely; but I'm assuming it's working.
SECOND
I reverted the changes I made to the managementContext and tried setting:
ACTIVEMQ_SUNJMX_START="-Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_BASE}/jmx.password -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_BASE}/jmx.access"
in bin/activemq script and I set a username in conf/jmx.access file as:
admin readwrite
And also have set a password in conf/jmx.password:
admin activemq
Now ActiveMQ is not running at all, but it will run if I set authenticate=false and delete the jmx.access and jmx.password configuration in the bin/activemq file. However, I need a username and password for security reasons.
I found this post, which has the exact same issue as mine. Any ideas?
Password authentication for remote monitoring is enabled by default. To disable it, set the following system property when you start the JVM:
-Dcom.sun.management.jmxremote.authenticate=false, like you did in your second test, but you also need to add the system property -Dcom.sun.management.jmxremote.
Try adding these JVM parameters to the env file and update the host IP:
-Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=X.X.X.X
UPDATE
So, to summarize, I think the FIRST approach you tried is the best. To make it work, these are the steps:
Revert all JMX env file changes, like this:
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.port=1099 "
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_CONF}/jmx.password"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_CONF}/jmx.access"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=false"
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote"
<broker useJmx="true" ...
<managementContext>
<managementContext createConnector="true" connectorPort="1099" />
</managementContext>
verify that in AMQ logs you have
INFO | JMX consoles can connect to
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi |
org.apache.activemq.broker.jmx.ManagementContext | JMX connector
NOTE: assuming that 10.10.10.16 is the IP of the AMQ host.
Try to connect with jconsole from a machine other than the AMQ host, using the URL "service:jmx:rmi:///jndi/rmi://10.10.10.16:1099/jmxrmi" without user/password.
If you cannot connect, try this:
<managementContext>
<managementContext createConnector="true" connectorPort="1099" connectorHost="10.10.10.16" />
</managementContext>
verify that in AMQ logs you have
INFO | JMX consoles can connect to
service:jmx:rmi:///jndi/rmi://10.10.10.16:1099/jmxrmi |
org.apache.activemq.broker.jmx.ManagementContext | JMX connector
Retry connecting as in step 4.
At this step you should normally be able to connect with jconsole.
If you want to add security and authorization, use this:
<managementContext>
<managementContext createConnector="true" connectorPort="1099" connectorHost="10.10.10.16" >
<property xmlns="http://www.springframework.org/schema/beans" name="environment">
<map xmlns="http://www.springframework.org/schema/beans">
<entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.password.file"
value="${activemq.conf}/jmx.password"/>
<entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.access.file"
value="${activemq.conf}/jmx.access"/>
</map>
</property>
</managementContext>
</managementContext>
Please try these steps, let me know at which one you fail to connect, and provide the error message from jconsole.
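To double-check the connector outside of jconsole, you can also hit it programmatically with the standard JMX remote API. A minimal sketch, using the 10.10.10.16:1099 URL from the steps above and the admin/activemq credentials from conf/jmx.password (drop the credentials map if authentication is disabled):
import java.util.HashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AmqJmxCheck {

    public static void main(String[] args) throws Exception {
        // Same URL that ActiveMQ prints in its log ("JMX consoles can connect to ...").
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://10.10.10.16:1099/jmxrmi");

        // Only needed while com.sun.management.jmxremote.authenticate is enabled.
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "activemq" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Listing the broker MBeans proves both the connector and the broker are up.
            System.out.println(mbs.queryNames(new ObjectName("org.apache.activemq:*"), null));
        } finally {
            connector.close();
        }
    }
}
If this connects from the AMQ host but not from a remote machine, the problem is network-level (RMI hostname or firewall/security group), not the JMX configuration itself.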
A couple troubleshooting steps:
Start jconsole or visualvm on the same system and connect using the "pid" attach method. Browse the MBeans and confirm org.apache.activemq beans are present
Run netstat -na and confirm ports 1099 (and 44444) are in LISTEN
Look at logs and confirm you do not have any "java.net.BindException: Address already in use.." messages that indicate a port conflict with an already running Java process.
Edit bin/env to configure JMX (this disables requiring SSL, sets the port to 1099, and disables requiring a username and password):
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.port=1099 "
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=false "
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote "
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_CONF}/jmx.password"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_CONF}/jmx.access"
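As a side note on the first bullet (the "pid" attach method), you can also list the locally attachable JVMs programmatically through the Attach API to find the broker's pid; a small sketch, assuming JDK 7 with tools.jar on the classpath:
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

public class ListLocalJvms {

    public static void main(String[] args) {
        // Prints every locally attachable JVM; the broker shows up with activemq.jar
        // or the wrapper in its display name, and that pid is what jconsole/visualvm attach to.
        for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
            System.out.println(vmd.id() + "\t" + vmd.displayName());
        }
    }
}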

Is there a way to dynamically define and register new Dgraphs in Endeca

As far as my knowledge of Endeca goes, any time you want to add a new dgraph definition in your Endeca configuration, you have to run initializeServices.sh to set the updated configuration on EAC.
I was wondering if there is any way I can do that without running initializeServices.sh (since it does a lot more than just update the list of Dgraphs registered in EAC, and I want to avoid that).
I found that the command ./runcommand.sh --update-definition allows you to make configuration changes to a Dgraph that has already been registered with EAC, but if I add a new Dgraph to the config and run the command, it fails with the error below:
[11.17.16 16:00:07] INFO: Setting definition for host 'MDEXLiveHost2'.
[11.17.16 16:00:07] SEVERE: Caught an exception while checking provisioning
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host setDefinition - Caught exception while setting host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
I can't find any detailed logs for this error anywhere in the PlatformServices logs to debug further.
I could, however, see in the request log that /eac/ProvisioningService returned an HTTP code of 500, which leads me to believe that the script is trying to find the current configuration of MDEXLiveHost2 and is unable to find it.
EDITED TO ADD configuration for:
New host:
<host id="MDEXLiveHost2" hostName="${mdexLive.host2}" port="${mdexLive.eac.port}" useSsl="false" />
New Dgraph:
<dgraph id="DgraphLive2" host-id="MDEXLiveHost2" port="${dgraphLive1.port}"
post-startup-script="LiveDgraphPostStartup">
<properties>
<property name="restartGroup" value="A" />
<property name="updateGroup" value="a" />
<property name="DgraphContentGroup" value="Live" />
</properties>
<log-dir>./logs/dgraphs/DgraphLive</log-dir>
<input-dir>./data/dgraphs/DgraphLive/dgraph_input</input-dir>
<update-dir>./data/dgraphs/DgraphLive/dgraph_input/updates</update-dir>
</dgraph>
EDITED TO ADD errors after manually adding the host using eaccmd.sh
Host definition file:
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
The host is added successfully (validated via describe-app)
$./eaccmd.sh describe-app --app myapp | grep MDEXLiveHost2
<host host-name="172.18.0.7" port="9999" host-id="MDEXLiveHost2" useSsl="false">
But when running any command, I get this error:
[11.18.16 11:00:58] INFO: Updating provisioning for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] INFO: Host name of host 'MDEXLiveHost2' has changed from 172.18.0.7 to 172.18.0.7 . Components on this host will be re-provisioned.
[11.18.16 11:00:58] INFO: Updating definition for host 'MDEXLiveHost2'.
[11.18.16 11:00:58] SEVERE: Caught an exception while checking provisioning.
Caused by com.endeca.soleng.eac.toolkit.exception.EacCommunicationException
com.endeca.soleng.eac.toolkit.host.Host updateEacDefinition - Caught exception while updating host definition.
Caused by com.endeca.eac.client.ProvisioningFault
sun.reflect.NativeConstructorAccessorImpl newInstance0 - null
If only this error could be made more verbose, that might give some help.
You do not have to run initializeServices.sh for every configuration change you make. When you execute other scripts in the control folder, they first check if there are any configuration changes and apply these changes.
As far as the error is concerned, I suspect you either didn't specify the MDEXLiveHost2 in your LiveDGraphCluster.xml or the host that you did specify is not reachable. Verify your configuration.
Lastly, your approach of dynamically adding more Dgraphs to the cluster is not standard practice. When you configure your environment, you should do a load test using ENEPerf to simulate the load and then create as many Dgraphs and hosts as required. If you are adding more hosts and Dgraphs dynamically, you also need to ensure that you add them, dynamically, to your load balancer configuration as well.
My first guess was that maybe the MDEX host 2 didn't have Platform Services/MDEX installed and Platform Services running, but it may be that the port you specified is incorrect.
<host host-id="MDEXLiveHost2" host-name="172.18.0.7" port="9999" useSsl="false"/>
Is your EAC port 9999 and not 8888 (the out-of-the-box value)? If it is 9999 on your ITL server, you want to make sure that it is also set to 9999 on your new Dgraph server.

ActiveMQ cookbook triggers the activemq service, but the service is not started

The ActiveMQ service is triggered via the cookbook, but it does not run:
activemq/attributes/default.rb
default['activemq']['version'] = '5.11.0'
src_filename="apache-activemq-#{node['activemq']['version']}-bin.tar.gz"
src_filepath = "#{Chef::Config['file_cache_path']}/#{src_filename}"
default['activemq']['src_filepath'] = "#{src_filepath}"
default['activemq']['tar_filepath'] = "http://xxx-xx-xx-xxx-xx:8091/3rdparty/activemq/#{src_filename}"
default['activemq']['dir'] = "/usr/local/apache-activemq-#{node['activemq']['version']}"
default['activemq']['wrapper']['max_memory'] = '1024'
default['activemq']['wrapper']['useDedicatedTaskRunner'] = 'true'
default['activemq']['zooKeeper']['address']="xxx.xx.xx.xx:2181"
default['activemq']['zooKeeper']['hostname']="xxx.xx.xx.xx"
activemq/recipes/default.rb
include_recipe "lgjava"
activemq_home= node['activemq']['dir']
remote_file "#{node['activemq']['src_filepath']}" do
mode 0755
source node['activemq']['tar_filepath']
action :create
notifies :create, "directory[apache-activemq-#{node['activemq']['version']}]", :immediately
notifies :run, "execute[untar-activemq]", :immediately
end
directory "apache-activemq-#{node['activemq']['version']}" do
path "#{activemq_home}"
mode 0755
recursive true
end
execute "untar-activemq" do
cwd Chef::Config[:file_cache_path]
command <<-EOF
tar xvzf apache-activemq-#{node['activemq']['version']}-bin.tar.gz -C #{node['activemq']['dir'] } --strip 1
EOF
action :run
end
file "#{activemq_home}/bin/activemq" do
owner 'root'
group 'root'
mode '0755'
end
arch = node['kernel']['machine'] == 'x86_64' ? 'x86-64' : 'x86-32'
link '/etc/init.d/activemq' do
to "#{activemq_home}/bin/linux-#{arch}/activemq"
end
template "jetty-realm.properties" do
source "jetty-realm.properties.erb"
mode "0755"
path "#{activemq_home}/conf/jetty-realm.properties"
action :create
notifies :restart, 'service[activemq]'
end
template "activemq.xml" do
source "activemq.xml.erb"
mode "0755"
path "#{activemq_home}/conf/activemq.xml"
action :create
notifies :restart, 'service[activemq]'
end
service 'activemq' do
supports :restart => true, :status => true
action [:enable, :start]
end
# symlink so the default wrapper.conf can find the native wrapper library
link "#{activemq_home}/bin/linux" do
to "#{activemq_home}/bin/linux-#{arch}"
end
# symlink the wrapper's pidfile location into /var/run
link '/var/run/activemq.pid' do
to "#{activemq_home}/bin/linux/ActiveMQ.pid"
not_if 'test -f /var/run/activemq.pid'
end
template "#{activemq_home}/bin/linux/wrapper.conf" do
source 'wrapper.conf.erb'
mode '0644'
notifies :restart, 'service[activemq]'
end
activemq/templates/default/activemq.xml.erb
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<!--
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter> -->
<persistenceAdapter>
<replicatedLevelDB directory="activemq-data"
replicas="2"
bind="tcp://0.0.0.0:0"
zkAddress=<%=node['activemq']['zooKeeper']['address']%>
zkPath="/activemq/leveldb-stores"
hostname=<%=node['activemq']['zooKeeper']['hostname']%> />
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
activemq/templates/default/wrapper.conf.erb
#********************************************************************
# Wrapper Properties
#********************************************************************
#wrapper.debug=TRUE
set.default.ACTIVEMQ_HOME=../..
set.default.ACTIVEMQ_BASE=../..
set.default.ACTIVEMQ_CONF=%ACTIVEMQ_BASE%/conf
set.default.ACTIVEMQ_DATA=%ACTIVEMQ_BASE%/data
wrapper.working.dir=.
# Java Application
wrapper.java.command=java
# Java Main class. This class must implement the WrapperListener interface
# or guarantee that the WrapperManager class is initialized. Helper
# classes are provided to do this for you. See the Integration section
# of the documentation for details.
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
# Java Classpath (include wrapper.jar) Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=%ACTIVEMQ_HOME%/bin/wrapper.jar
wrapper.java.classpath.2=%ACTIVEMQ_HOME%/bin/activemq.jar
# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=%ACTIVEMQ_HOME%/bin/linux-x86-64/
# Java Additional Parameters
# note that n is the parameter number starting from 1.
wrapper.java.additional.1=-Dactivemq.home=%ACTIVEMQ_HOME%
wrapper.java.additional.2=-Dactivemq.base=%ACTIVEMQ_BASE%
wrapper.java.additional.3=-Djavax.net.ssl.keyStorePassword=password
wrapper.java.additional.4=-Djavax.net.ssl.trustStorePassword=password
wrapper.java.additional.5=-Djavax.net.ssl.keyStore=%ACTIVEMQ_CONF%/broker.ks
wrapper.java.additional.6=-Djavax.net.ssl.trustStore=%ACTIVEMQ_CONF%/broker.ts
wrapper.java.additional.7=-Dcom.sun.management.jmxremote
wrapper.java.additional.8=-Dorg.apache.activemq.UseDedicatedTaskRunner=<%= node['activemq']['wrapper']['useDedicatedTaskRunner'] %>
wrapper.java.additional.9=-Djava.util.logging.config.file=logging.properties
wrapper.java.additional.10=-Dactivemq.conf=%ACTIVEMQ_CONF%
wrapper.java.additional.11=-Dactivemq.data=%ACTIVEMQ_DATA%
wrapper.java.additional.12=-Djava.security.auth.login.config=%ACTIVEMQ_CONF%/login.config
# Uncomment to enable jmx
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.port=1616
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.authenticate=false
#wrapper.java.additional.n=-Dcom.sun.management.jmxremote.ssl=false
# Uncomment to enable YourKit profiling
#wrapper.java.additional.n=-Xrunyjpagent
# Uncomment to enable remote debugging
#wrapper.java.additional.n=-Xdebug -Xnoagent -Djava.compiler=NONE
#wrapper.java.additional.n=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
# Initial Java Heap Size (in MB)
#wrapper.java.initmemory=3
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=<%= node['activemq']['wrapper']['max_memory'] %>
# Application parameters. Add parameters as needed starting from 1
wrapper.app.parameter.1=org.apache.activemq.console.Main
wrapper.app.parameter.2=start
#********************************************************************
# Wrapper Logging Properties
#********************************************************************
# Format of output for the console. (See docs for formats)
wrapper.console.format=PM
# Log Level for console output. (See docs for log levels)
wrapper.console.loglevel=INFO
# Log file to use for wrapper output logging.
wrapper.logfile=%ACTIVEMQ_DATA%/wrapper.log
# Format of output for the log file. (See docs for formats)
wrapper.logfile.format=LPTM
# Log Level for log file output. (See docs for log levels)
wrapper.logfile.loglevel=INFO
# Maximum size that the log file will be allowed to grow to before
# the log is rolled. Size is specified in bytes. The default value
# of 0, disables log rolling. May abbreviate with the 'k' (kb) or
# 'm' (mb) suffix. For example: 10m = 10 megabytes.
wrapper.logfile.maxsize=0
# Maximum number of rolled log files which will be allowed before old
# files are deleted. The default value of 0 implies no limit.
wrapper.logfile.maxfiles=0
# Log Level for sys/event log output. (See docs for log levels)
wrapper.syslog.loglevel=NONE
#********************************************************************
# Wrapper Windows Properties
#********************************************************************
# Title to use when running as a console
wrapper.console.title=ActiveMQ
#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
# using this configuration file has been installed as a service.
# Please uninstall the service before modifying this section. The
# service can then be reinstalled.
# Name of the service
wrapper.ntservice.name=ActiveMQ
# Display name of the service
wrapper.ntservice.displayname=ActiveMQ
# Description of the service
wrapper.ntservice.description=ActiveMQ Broker
# Service dependencies. Add dependencies as needed starting from 1
wrapper.ntservice.dependency.1=
# Mode in which the service is installed. AUTO_START or DEMAND_START
wrapper.ntservice.starttype=AUTO_START
# Allow the service to interact with the desktop.
wrapper.ntservice.interactive=false
Executing chef-client, I get the following:
Recipe: activemq::default
* remote_file[/var/chef/cache/apache-activemq-5.11.0-bin.tar.gz] action create (up to date)
* directory[apache-activemq-5.11.0] action create (up to date)
* execute[untar-activemq] action run
- execute tar xvzf apache-activemq-5.11.0-bin.tar.gz -C /usr/local/apache-activemq-5.11.0 --strip 1
* file[/usr/local/apache-activemq-5.11.0/bin/activemq] action create (up to date)
* link[/etc/init.d/activemq] action create (up to date)
* template[jetty-realm.properties] action create
- update content in file /usr/local/apache-activemq-5.11.0/conf/jetty-realm.properties from 827a97 to 96c9a9
--- /usr/local/apache-activemq-5.11.0/conf/jetty-realm.properties 2015-01-30 13:13:51.000000000 +0000
+++ /tmp/chef-rendered-template20150211-9315-ul0l7k 2015-02-11 07:03:37.711054777 +0000
@@ -5,9 +5,9 @@
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
-##
+##
## http://www.apache.org/licenses/LICENSE-2.0
-##
+##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -17,6 +17,6 @@
# Defines users that can access the web (console, demo, etc.)
# username: *****
-admin: *****, *****
-user: ****, *****
+admin: *****, ****
+user: *****, ****
- change mode from '0644' to '0755'
- restore selinux security context
* template[activemq.xml] action create
- update content in file /usr/local/apache-activemq-5.11.0/conf/activemq.xml from ca8528 to 219698
--- /usr/local/apache-activemq-5.11.0/conf/activemq.xml 2015-01-30 13:13:51.000000000 +0000
+++ /tmp/chef-rendered-template20150211-9315-8syvv7 2015-02-11 07:03:37.736055052 +0000
@@ -78,11 +78,20 @@
http://activemq.apache.org/persistence.html
-->
+ <!--
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
- </persistenceAdapter>
+ </persistenceAdapter> -->
+ <persistenceAdapter>
+ <replicatedLevelDB directory="activemq-data"
+ replicas="2"
+ bind="tcp://x.x.x.x:x"
+ zkAddress=xxxxxxx
+ zkPath=xxxxxx
+ hostname=xxxxxx />
+ </persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
@@ -133,5 +142,4 @@
<import resource="jetty.xml"/>
</beans>
-<!-- END SNIPPET: example -->
- change mode from '0644' to '0755'
- restore selinux security context
* service[activemq] action enable (up to date)
* service[activemq] action start (up to date)
* link[/usr/local/apache-activemq-5.11.0/bin/linux] action create (up to date)
* link[/var/run/activemq.pid] action create (up to date)
* template[/usr/local/apache-activemq-5.11.0/bin/linux/wrapper.conf] action create (up to date)
Recipe: lgtomcat::default
* service[tomcat7] action restart
- restart service service[tomcat7]
Recipe: activemq::default
* service[activemq] action restart
- restart service service[activemq]
But on checking for an ActiveMQ process, there is none running:
ps -eaf| grep activemq
root 9867 9301 0 07:07 pts/1 00:00:00 grep --color=auto activemq