I'm experimenting with Apache Ignite 1.6 and Ignite 2.1.
I was able to create a cluster and work with Ignite 1.6 using ZooKeeper-based discovery.
I tried to create an Ignite 2.1 cluster with ZooKeeper-based discovery on the same nodes, but it fails with the error below.
I had killed all the nodes in the Ignite 1.6 cluster first.
Following is the error message:
>>> +---------------------------------------------------------------------------------+
>>> Ignite ver. 2.1.0#20170720-sha1:a6ca5c8a97e9a4c9d73d40ce76d1504c14ba1940 stopped OK
>>> +---------------------------------------------------------------------------------+
>>> Grid uptime: 00:00:06:777
class org.apache.ignite.IgniteException: Failed to start manager: GridManagerAdapter [enabled=true, name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:957)
at org.apache.ignite.Ignition.start(Ignition.java:350)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start manager: GridManagerAdapter [enabled=true, name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1775)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:977)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:994)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:880)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:779)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:618)
at org.apache.ignite.Ignition.start(Ignition.java:347)
... 1 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [], reconCnt=10, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:837)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1770)
... 11 more
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node and remote node have different version numbers (node will not join, Ignite does not support rolling updates, so versions must be exactly the same) [locBuildVer=1.6.0, rmtBuildVer=2.1.0, locNodeAddrs=[node1/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /<<ip-addr1>>], rmtNodeAddrs=[node1/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /<<ip-addr1>>], locNodeId=d560e617-67b1-4900-9f78-beb181e65f23, rmtNodeId=fa1ede5a-1f95-4884-830f-99cd1c61fe63]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1759)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:358)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1834)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 13 more
Failed to start grid: Failed to start manager: GridManagerAdapter [enabled=true, name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
Following is my config file.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!--
Alter configuration below as needed.
-->
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
<property name="zkConnectionString" value=“<<zkhost>>:2181"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
Now I am unable to start either cluster; both fail with the same error message.
How can I resolve this error?
Issue resolved.
There were some background processes running on the node with Ignite 1.6.
(The Visor service was also running even though I had exited from the client.)
I found all the processes with the command below and killed them:
ps -ef | grep "ignite"
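To stop everything in one go, something like the following should work on Linux (a sketch; the bracketed pattern keeps grep from matching itself, and you may need sudo or a stronger signal):
# list the PIDs of all Ignite-related processes and terminate them
ps -ef | grep "[i]gnite" | awk '{print $2}' | xargs -r kill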
Related
I use Ansible to install Ignite, just replacing the artifact zip. The error log from the startup script is as follows; it's a strange ClassNotFoundException for ZkDiscoveryNodeFailEventData (which occurs in version 2.7).
Caused by: java.lang.ClassNotFoundException: org.apache.ignite.spi.discovery.zk.internal.ZkDiscoveryNodeFailEventData
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:9064)
at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:9002)
at org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.resolveClass(JdkMarshallerObjectInputStream.java:59)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at java.util.TreeMap.buildFromSorted(TreeMap.java:2568)
at java.util.TreeMap.buildFromSorted(TreeMap.java:2551)
at java.util.TreeMap.buildFromSorted(TreeMap.java:2508)
at java.util.TreeMap.readObject(TreeMap.java:2454)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2176)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:124)
... 11 more
Failed to start grid: Failed to start manager: GridManagerAdapter [enabled=true, name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
Note! You may use 'USER_LIBS' environment variable to specify your classpath.
tail: /opt/rdx/log/ignite/ignite_console.out: file truncated
I have solved this problem: just change zkRootPath from /apacheIgnite to /otherPath, or connect to the ZooKeeper shell and rmr /apacheIgnite.
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi">
<property name="zkConnectionString" value="ip1:2181,ip2:2181,ip3:2181"/>
<property name="zkRootPath" value="/apacheIgnite"/>
<property name="sessionTimeout" value="400000"/>
</bean>
</property>
Besides, you need to delete the data for the paths in the following properties (see the sketch after them):
<property name="storagePath" value="/xxx/persistent" />
<property name="walPath" value="/xxx/wal_store"/>
<property name="walArchivePath" value="/xxx/wal_archive"/>
I had a Maven JSF/JPA web application connected to MySQL 5.x, developed using NetBeans 12. The application was running fine until I updated MySQL from 5.x to 8.x. Since that update, I cannot configure the database to connect to the JSF application. The connection to MySQL 8.x works within NetBeans, but not when deploying the application.
The current configuration includes EclipseLink 2.7.7, MySQL 8.0.23, and GlassFish 5 (5.0.1) / Payara 5 (5.2021.1). It is not possible to make a successful connection to MySQL. I also failed to establish a connection inside the JDBC Connection Pool of the GlassFish and Payara admin consoles. Can someone please direct me to a source where MySQL 8 is linked to Payara or GlassFish?
The error displayed in the Payara admin console is as follows.
An error has occurred Ping Connection Pool failed for pooConnection.
Connection could not be allocated because: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. Please check the server.log for more details.
The log file contains the following.
[javax.enterprise.resource.resourceadapter.com.sun.enterprise.connectors.service] [tid: _ThreadID=161 _ThreadName=admin-thread-pool::admin-listener(3)] [timeMillis: 1613549343463] [levelValue: 900] [[
RAR8054: Exception while creating an unpooled [test] connection for pool [ pooConnection ], Connection could not be allocated because: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.]]
[2021-02-17T13:39:03.472+0530] [Payara 5.2021.1] [SEVERE] [] [org.glassfish.admingui] [tid: _ThreadID=139 _ThreadName=admin-thread-pool::admin-listener(1)] [timeMillis: 1613549343472] [levelValue: 1000] [[
RestResponse.getResponse() gives FAILURE. endpoint = 'http://localhost:4848/management/domain/resources/ping-connection-pool.json'; attrs = '{id=pooConnection}']]
In order to connect to Payara Server, the only effective way I found was creating a Connection Pool directly in the Domain Admin Console.
It may seem like a long way now, but after completing it, it will become second nature:
Download the MySQL 8 Java Connector available at https://dev.mysql.com/downloads/connector/j/
and unzip it to any folder.
It will extract to something like:
mysql-connector-java-8.0.23 2/mysql-connector-java-8.0.23.jar
1) Make sure you have Payara Server up and running:
cd PATH_TO_PAYARA/bin
2) Start/Restart it
./asadmin start-domain
Note: This will start domain1 by default
3) Install the MySQL8 Connector
./asadmin add-library PATH_TO_MYSQL_CONNECTOR.jar
4) (required) Restart Payara
./asadmin restart-domain
5) Access Admin Console at http://localhost:4848/common/index.jsf
6) In the sidebar navigate to "JDBC" -> "JDBC Connection Pools" menu
7) From there, click "New" to add new Connection Pool
You can have as many pools as you want, so if you have tried this before, you can keep your existing pools.
8) In: New JDBC Connection Pool (Step 1 of 2)
Pool Name: MySQL8Pool (or whatever you want)
Resource Type: javax.sql.DataSource
Database Driver Vendor: MySQL8
Click "Next"
9) Scroll down to "Additional Properties"
Select all and "Delete Properties"
10) Add 6 Properties with the keys/values as:
DatabaseName YOUR_DB_NAME
User YOUR_DB_USER
Password YOUR_DB_PASSWORD
ServerName localhost
PortNumber 3306
UseSSL false
Click "Save"
11) In the sidebar navigate to "JDBC" -> "JDBC Resources" menu
12) From there, click "New" to add new JDBC Resource
And fill:
JNDI Name: jdbc/MySQL8App
PoolName: MySQL8Pool
Click "Save"
From now on I assume you are using Maven.
13) In your pom.xml, make sure you have Eclipse Persistence.
In your <dependencies> tag:
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.core</artifactId>
<version>2.7.7</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.asm</artifactId>
<version>2.7.7</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.antlr</artifactId>
<version>2.7.7</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.jpa</artifactId>
<version>2.7.7</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.jpa.jpql</artifactId>
<version>2.7.7</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.moxy</artifactId>
<version>2.7.7</version>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>javax.persistence</artifactId>
<version>2.2.1</version>
</dependency>
14) Create a persistence unit.
In your persistence.xml file:
<persistence-unit name="mysql8PU" transaction-type="JTA">
<jta-data-source>jdbc/MySQL8App</jta-data-source>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<shared-cache-mode>NONE</shared-cache-mode>
<!-- Optional: set the value to "drop-and-create" or "create", or remove this block entirely.
<properties>
    <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
</properties>
-->
</persistence-unit>
Note that jta-data-source points to jdbc/MySQL8App; from now on it can be used anywhere in your code, so that after the build Payara knows what to inject.
PersistenceService.java
package com.your.app.services;

import javax.enterprise.context.ApplicationScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@ApplicationScoped
public class PersistenceService {

    // the container injects an EntityManager bound to the "mysql8PU" unit
    @PersistenceContext(unitName = "mysql8PU")
    EntityManager entityManager;
}
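Any CDI bean can then use it; for example (a hypothetical repository, placed in the same package so the entityManager field is visible, querying a made-up User entity):
package com.your.app.services;

import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class UserRepository {

    @Inject
    PersistenceService persistenceService;

    // JPQL query against a hypothetical User entity
    public List<?> findAllUsers() {
        return persistenceService.entityManager
                .createQuery("SELECT u FROM User u")
                .getResultList();
    }
}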
Now rebuild and rerun your project and everything should be fine!
Configure your connection in just four steps:
Copy Mysql JDBC driver JAR to $PAYARA_HOME/glassfish/domains/$YOUR_DOMAIN/lib/ e. g.
cp mysql-connector-java-8.0.22.jar /opt/payara5/glassfish/domains/domain1/lib/
Create an XML resource descriptor file and name it glassfish-resources.xml. Specify the appropriate params:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Resource Definitions//EN" "http://glassfish.org/dtds/glassfish-resources_1_5.dtd">
<resources>
<jdbc-connection-pool connection-creation-retry-interval-in-seconds="30" connection-validation-method="auto-commit" datasource-classname="com.mysql.cj.jdbc.MysqlDataSource" wrap-jdbc-objects="false" res-type="javax.sql.DataSource" name="mysql_mydb_rootPool" is-connection-validation-required="true" connection-creation-retry-attempts="10" validate-atmost-once-period-in-seconds="60">
<property name="User" value="root"/>
<property name="Password" value="secret"/>
<property name="URL" value="jdbc:mysql://localhost:3306/voyager?zeroDateTimeBehavior=convertToNull&serverTimezone=UTC&useSSL=false"/>
<property name="driverClass" value="com.mysql.cj.jdbc.Driver"/>
<property name="characterEncoding" value="utf-8"/>
</jdbc-connection-pool>
<jdbc-resource enabled="true" jndi-name="jdbc/mydb" object-type="user" pool-name="mysql_mydb_rootPool"/>
</resources>
Add resources to Payara
$PAYARA_HOME/bin/asadmin add-resources glassfish-resources.xml
Restart your domain
$PAYARA_HOME/bin/asadmin restart-domain
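Optionally, check that the pool actually reaches MySQL (using the pool name from the descriptor above):
$PAYARA_HOME/bin/asadmin ping-connection-pool mysql_mydb_rootPool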
I'm getting a ClassNotFoundException for org/apache/avro/ipc/ByteBufferOutputStream when I run Apache Nutch with HSQLDB, although I have all the Avro-related jar files under lib:
avro-1.7.6.jar
avro-compiler-1.7.6.jar
avro-ipc-1.7.6.jar
avro-mapred-1.7.6.jar
This is what I did:
Got HSQLDB up and running
[root@elephant hsqldb]# sudo java -cp /home/hsqldb/hsqldb-2.3.3/hsqldb/lib/hsqldb.jar org.hsqldb.server.Server --props /home/hsqldb/hsqldb-2.3.3/hsqldb/conf/server.properties
[Server@372f7a8d]: [Thread[main,5,main]]: checkRunning(false) entered
[Server@372f7a8d]: [Thread[main,5,main]]: checkRunning(false) exited
[Server@372f7a8d]: Startup sequence initiated from main() method
[Server@372f7a8d]: Loaded properties from [/home/hsqldb/hsqldb-2.3.3/hsqldb/conf/server.properties]
[Server@372f7a8d]: Initiating startup sequence...
[Server@372f7a8d]: Server socket opened successfully in 28 ms.
[Server@372f7a8d]: Database [index=0, id=0, db=file:/home/hsqldb/hsqldb-2.3.3/hsqldb/data/nutch, alias=nutchdb] opened sucessfully in 1406 ms.
[Server@372f7a8d]: Startup sequence completed in 1438 ms.
[Server@372f7a8d]: 2015-12-26 18:30:13.841 HSQLDB server 2.3.3 is online on port 9001
[Server@372f7a8d]: To close normally, connect and execute SHUTDOWN SQL
[Server@372f7a8d]: From command line, use [Ctrl]+[C] to abort abruptly
Configured ivy/ivy.xml:
Uncommented the lines below in ivy.xml:
<dependency org="org.apache.gora" name="gora-core" rev="0.5" conf="*->default"/>
and
<dependency org="org.apache.gora" name="gora-sql" rev="0.1.1-incubating"
conf="*->default" />
Uncommented the lines below in conf/gora.properties:
###############################
# Default SqlStore properties #
###############################
gora.sqlstore.jdbc.driver=org.hsqldb.jdbc.JDBCDriver
gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://localhost/nutchdb
gora.sqlstore.jdbc.user=sa
gora.sqlstore.jdbc.password=
Ran ant build
ant runtime
Added configuration for nutch-site.xml
[root@elephant conf]# cat nutch-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.sql.store.SqlStore</value>
</property>
<property>
<name>http.agent.name</name>
<value>NutchCrawler</value>
</property>
<property>
<name>http.robots.agents</name>
<value>NutchCrawler,*</value>
</property>
</configuration>
Created seed.txt under urls folder
Executed Nutch by injecting the URLs:
[root@elephant local]# bin/nutch inject urls/
InjectorJob: starting at 2015-12-26 19:11:24
InjectorJob: Injecting urlDir: urls
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/avro/ipc/ByteBufferOutputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:259)
at org.apache.nutch.storage.StorageUtils.getDataStoreClass(StorageUtils.java:93)
at org.apache.nutch.storage.StorageUtils.createWebStore(StorageUtils.java:77)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:218)
at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:252)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:275)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:284)
Caused by: java.lang.ClassNotFoundException: org.apache.avro.ipc.ByteBufferOutputStream
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 9 more
Gora-sql is not supported. Due to some license issues (if I am not wrong), it was disabled around Gora 0.2.
So I suggest you use another storage backend, for example HBase.
How to get HBase up and running fast: read the answer at https://stackoverflow.com/a/39837926/582789
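For illustration, a sketch of what the switch to HBase might look like in Nutch 2.x (the gora-hbase revision must match your Nutch release, so treat these values as assumptions). In ivy/ivy.xml, enable the HBase backend instead of gora-sql:
<dependency org="org.apache.gora" name="gora-hbase" rev="0.5" conf="*->default" />
And in conf/nutch-site.xml, point the storage class at HBase:
<property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.hbase.store.HBaseStore</value>
</property>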
I'm using Hadoop 2.5.1 with HBase 0.98.11 on Ubuntu 14.04.
I could run it in pseudo-distributed mode. Now I want to run it in distributed mode. I followed the instructions from various sites and end up with a runtime error: "Error: org/apache/hadoop/hbase/HBaseConfiguration" (while there is no error when I compile the code).
After trying things, I found that if I comment out mapreduce.framework.name in mapred-site.xml, and also the stuff in yarn-site, I am able to run Hadoop successfully.
But I think it is then running single-node (I have no idea, just guessing by comparing the running time to what I got in pseudo-distributed mode, and there is no MR in the slave node's jps when running the job on the master).
Here are some of my conf:
hdfs-site
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<!-- <property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>-->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>false</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
mapred-site
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
<!--<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>-->
yarn-site
<!-- Site specific YARN configuration properties -->
<!--<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>10.1.1.177:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>10.1.1.177:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>10.1.1.177:8031</value>
</property>-->
Thank you so much for any help.
UPDATE: I tried making some changes to yarn-site by adding yarn.application.classpath like this:
https://dl-web.dropbox.com/get/Public/yarn.png?_subject_uid=51053996&w=AABeDJfRp_D31RiVHqBWn0r9naQR_lFVJXIlwvCwjdhCAQ
The error changed to an exit-code failure:
https://dl-web.dropbox.com/get/Public/exitcode.jpg?_subject_uid=51053996&w=AAAQ-bYoRSrQV3yFq36vEDPnAB9aIHnyOQfnvt2cUHn5IQ
UPDATE 2: In the syslog of the application logs, it says:
2015-04-24 20:34:59,164 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1429792550440_0035_000002
2015-04-24 20:34:59,589 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-04-24 20:34:59,610 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-04-24 20:34:59,616 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoSuchMethodError: org.apache.hadoop.http.HttpConfig.setPolicy(Lorg/apache/hadoop/http/HttpConfig$Policy;)V
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1364)
2015-04-24 20:34:59,621 INFO [Thread-1] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster received a signal. Signaling RMCommunicator and JobHistoryEventHandler.
Any suggestions, please?
I guess that you didn't set up your Hadoop cluster correctly. Please follow these steps:
Hadoop Configuration:
step 1 : edit hadoop-env.sh as follows:
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
step 2 : Now create a directory and set the required ownerships and permissions
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
step 3 : edit core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
step 4 : edit mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
step 5 : edit hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hduser/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hduser/hadoop/hadoopdata/hdfs/datanode</value>
</property>
step 6 : edit yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
finally format your hdfs (You need to do this the first time you set up a Hadoop cluster)
$ /usr/local/hadoop/bin/hadoop namenode -format
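Then start HDFS and YARN (a sketch, assuming the same installation path as above):
$ /usr/local/hadoop/sbin/start-dfs.sh
$ /usr/local/hadoop/sbin/start-yarn.sh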
Hbase Configuration:
edit your hbase-site.xml:
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/hbase/zookeeper</value>
</property>
Hope this helps you
After sticking with the problem for more than 3 days (maybe it was my misunderstanding of the concept), I was able to fix the problem by adding HADOOP_CLASSPATH into yarn-env (like what I did when setting up pseudo-distributed mode in hadoop-env), roughly as sketched below.
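A sketch of what that might look like (using `hbase classpath` is my assumption; it prints the jars HBase needs and requires the hbase command to be on the PATH):
# in yarn-env.sh: make the HBase jars visible to YARN containers
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$(hbase classpath)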
I don't know the details. But, yeah, I hope this may help someone in the future.
Cheers.
I was using Spark on YARN and was getting the same error. Actually, the Spark jar had an internal dependency on hadoop-client and hadoop-mapreduce-client-* jars pointing to the older 2.2.0 versions. So I included these entries in my POM with the Hadoop version that I was running and did a clean build, as sketched below.
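Something along these lines (a sketch; the versions should match your cluster, 2.5.1 in the question above, and the mapreduce-client artifact shown is one example of the hadoop-mapreduce-client-* family):
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.5.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.5.1</version>
</dependency>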
This resolved the issue for me. Hope this helps someone.
I am trying to use Hibernate Search 4.1.1.Final + Lucene 3.5 with the Hibernate 4.1.3 and Spring 3.1.1 frameworks (dependencies managed with Maven) to enable POJO-based text search. I followed the Hibernate Search documentation. Based on the doc, here is my Hibernate configuration:
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="mappingResources">
<list>
<value>com/mytutorial/User.hbm.xml</value>
</list>
</property>
<property name="hibernateProperties">
<props>
<!-- Other Hibernate-specific properties -->
<prop key="hibernate.search.default.directory_provider">filesystem</prop>
<prop key="hibernate.search.default.indexBase">C:/temp/</prop>
<prop key="hibernate.search.lucene_version">LUCENE_35</prop>
</props>
</property>
</bean>
I turned on DEBUG logging for Hibernate Search; at startup it shows:
[INFO,Version,pool-2-thread-1] HSEARCH000034: Hibernate Search 4.1.1.Final
[DEBUG,ConfigContext,pool-2-thread-1] Setting Lucene compatibility to Version LUCENE_35
[DEBUG,ConfigContext,pool-2-thread-1] Using default similarity implementation: org.apache.lucene.search.DefaultSimilarity
[WARN,DirectoryProviderHelper,pool-2-thread-1] HSEARCH000041: Index directory not found, creating: 'C:\temp\com.mytutorial.User'
After that, I get the following exception when I run my Tomcat server on a Windows 7 64-bit machine.
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in ServletContext resource [/WEB-INF/config/spring/application-context.xml]: Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: org.apache.lucene.store.FSDirectory.open(Ljava/io/File;Lorg/apache/lucene/store/LockFactory;)Lorg/apache/lucene/store/FSDirectory;
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322)
... 45 more
Caused by: java.lang.NoSuchMethodError: org.apache.lucene.store.FSDirectory.open(Ljava/io/File;Lorg/apache/lucene/store/LockFactory;)Lorg/apache/lucene/store/FSDirectory;
at org.hibernate.search.store.impl.DirectoryProviderHelper$FSDirectoryType.getDirectory(DirectoryProviderHelper.java:365)
at org.hibernate.search.store.impl.DirectoryProviderHelper.createFSIndex(DirectoryProviderHelper.java:137)
at org.hibernate.search.store.impl.FSDirectoryProvider.initialize(FSDirectoryProvider.java:70)
at org.hibernate.search.store.impl.DirectoryProviderFactory.createDirectoryProvider(DirectoryProviderFactory.java:84)
at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.createDirectoryProvider(DirectoryBasedIndexManager.java:216)
at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:89)
at org.hibernate.search.indexes.impl.IndexManagerHolder.createDirectoryManager(IndexManagerHolder.java:241)
at org.hibernate.search.indexes.impl.IndexManagerHolder.buildEntityIndexBinding(IndexManagerHolder.java:111)
at org.hibernate.search.spi.SearchFactoryBuilder.initDocumentBuilders(SearchFactoryBuilder.java:411)
at org.hibernate.search.spi.SearchFactoryBuilder.buildNewSearchFactory(SearchFactoryBuilder.java:221)
at org.hibernate.search.spi.SearchFactoryBuilder.buildSearchFactory(SearchFactoryBuilder.java:145)
at org.hibernate.search.event.impl.FullTextIndexEventListener.initialize(FullTextIndexEventListener.java:129)
at org.hibernate.search.hcore.impl.HibernateSearchIntegrator.integrate(HibernateSearchIntegrator.java:82)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:306)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1744)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1782)
at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:189)
at org.springframework.orm.hibernate4.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:350)
at org.springframework.orm.hibernate4.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:335)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
... 52 more
I still get the exception, even if I remove the "hibernate.search.lucene_version" property from the configuration.
The NoSuchMethodError means you have the wrong version of Lucene in your classpath.
You might also want to check for duplicate jars; a command-line tool such as Tattletale makes it easy to find class and library duplicates.
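One way to track this down with Maven (a sketch; Hibernate Search 4.1.x expects Lucene 3.5, so pin lucene-core accordingly):
# show where the conflicting lucene-core version comes from
mvn dependency:tree -Dincludes=org.apache.lucene
Then force the expected version in your pom.xml:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>3.5.0</version>
</dependency>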