I am trying to run an Oozie workflow containing Sqoop and Hive jobs, but when I run it the job status becomes PREP and after a while changes to START_RETRY, as shown in the picture.
My workflow.xml file is:
<workflow-app xmlns="uri:oozie:workflow:0.5" name="simpleWF">
<start to="sqoop-import-A"/>
<action name="sqoop-import-A">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>default</value>
</property>
</configuration>
<command>import --connect
jdbc:mysql://quickstart.cloudera/movie_lens --username
root --password cloudera --table ratings --target-dir
/user/cloudera/Movie_AY377330 -m 1</command>
</sqoop>
<ok to="sqoop-import-B"/>
<error to="fail"/>
</action>
<action name="sqoop-import-B">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>default</value>
</property>
</configuration>
<command>import --connect
jdbc:mysql://quickstart.cloudera/movie_lens --username root
--password cloudera --table movie --target-dir
/user/cloudera/Movie_AY3773301 -m 1</command>
</sqoop>
<ok to="sqoop-import-C"/>
<error to="fail"/>
</action>
<action name="sqoop-import-C">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>default</value>
</property>
</configuration>
<command>sqoop import --connect
jdbc:mysql://quickstart.cloudera/movie_lens --username
root --password cloudera --table tags --target-dir
/user/cloudera/Movie_AY3773302 -m 1</command>
</sqoop>
<ok to="hive-node"/>
<error to="fail"/>
</action>
<action name="hive-node">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<job-xml>hive-site.xml</job-xml>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<script>hive1.hql</script>
</hive>
<ok to="hive-node-2"/>
<error to="fail"/>
</action>
<action name="hive-node-2">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<job-xml>hive-site.xml</job-xml>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<script>hive2.hql</script>
<param>OUTPUT_PATH1=${outputPath1}</param>
</hive>
<ok to="hive-node-3"/>
<error to="fail"/>
</action>
<action name="hive-node-3">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<job-xml>hive-site.xml</job-xml>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<script>hive3.hql</script>
<param>OUTPUT_PATH2=${outputPath2}</param>
</hive>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
and my job.properties file is:
nameNode=hdfs://localhost:8020
jobTracker=localhost:50300
queueName=default
exampleRoot=example
folder= test
outputPath1=
outputPath2=
oozie.use.system.libpath=true
oozie.libpath=/user/oozie/share/lib
oozie.wf.application.path=${nameNode}/user/${user.name}/${exampleRoot}
I have found a similar question, "Oozie job stuck at START action in PREP state", and tried to solve the issue that way, but I have no idea how to do it.
I know that something is wrong with my namenode or jobtracker address.
I can access my namenode in the browser at localhost:50070.
After a while the job is suspended with this error:
JA009: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "quickstart.cloudera/127.0.0.1"; destination host is: "localhost":8088;
Thanks in advance.
Related
I tried running Infinispan on two machines and persisted the index data on one machine. When I try to run on both machines simultaneously, with indexing and persisting (cache-store) into the database, I get the following exception:
Caused by: java.io.FileNotFoundException: Error loading metadata for index file: segments_2j|M|Course
at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:134)
at org.infinispan.lucene.impl.DirectoryLuceneV4.openInput(DirectoryLuceneV4.java:101)
at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:113)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:341)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:57)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:923)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:53)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:67)
at org.hibernate.search.indexes.impl.SharingBufferReaderProvider.readerFactory(SharingBufferReaderProvider.java:131)
at org.hibernate.search.indexes.impl.SharingBufferReaderProvider$PerDirectoryLatestReader.<init>(SharingBufferReaderProvider.java:206)
at org.hibernate.search.indexes.impl.SharingBufferReaderProvider.createReader(SharingBufferReaderProvider.java:108)
... 24 more
My Infinispan config file is:
infinispan-config.xml
<?xml version="1.0" encoding="UTF-8"?> <infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:7.2 http://www.infinispan.org/schemas/infinispan-config-7.2.xsd
urn:infinispan:config:store:jdbc:7.2 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-7.2.xsd"
xmlns="urn:infinispan:config:7.2"
xmlns:jdbc="urn:infinispan:config:store:jdbc:7.2">
<!-- *************************** -->
<!-- System-wide global settings -->
<!-- *************************** -->
<jgroups>
<!-- Note that the JGroups transport uses sensible defaults if no configuration
property is defined. See the JGroupsTransport javadocs for more flags.
jgroups-udp.xml is the default stack bundled in the Infinispan core jar: integration
and tuning are tested by Infinispan. -->
<stack-file name="default-jgroups-tcp" path="my-jgroupstcp.xml"/>
</jgroups>
<cache-container name="HibernateSearch" default-cache="default" statistics="false" shutdown-hook="DONT_REGISTER">
<transport stack="default-jgroups-tcp"/>
<!-- Duplicate domains are allowed so that multiple deployments with default configuration
of Hibernate Search applications work - if possible it would be better to use JNDI to share
the CacheManager across applications -->
<jmx duplicate-domains="true"/>
<!-- *************************************** -->
<!-- Cache to store Lucene's file metadata -->
<!-- *************************************** -->
<replicated-cache name="LuceneIndexesMetadata" mode="ASYNC" async-marshalling="true">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE" />
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence passivation="false">
<jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false">
<write-behind />
<property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property>
<jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/hsearch"
driver="com.mysql.jdbc.Driver" username="my-username"
password="my-password"></jdbc:connection-pool>
<jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE">
<jdbc:id-column name="ID" type="VARCHAR(255)"/>
<jdbc:data-column name="DATA" type="MEDIUMBLOB"/>
<jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/>
</jdbc:string-keyed-table>
</jdbc:string-keyed-jdbc-store>
</persistence>
<indexing index="ALL"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</replicated-cache>
<!-- **************************** -->
<!-- Cache to store Lucene data -->
<!-- **************************** -->
<distributed-cache name="LuceneIndexesData" mode="ASYNC" async-marshalling="true">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence passivation="false">
<jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false">
<write-behind />
<property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property>
<jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/hsearch"
driver="com.mysql.jdbc.Driver" username="my-username"
password="my-password"></jdbc:connection-pool>
<jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE">
<jdbc:id-column name="ID" type="VARCHAR(255)"/>
<jdbc:data-column name="DATA" type="MEDIUMBLOB"/>
<jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/>
</jdbc:string-keyed-table>
</jdbc:string-keyed-jdbc-store>
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</distributed-cache>
<!-- ***************************** -->
<!-- Cache to store Lucene locks -->
<!-- ***************************** -->
<replicated-cache name="LuceneIndexesLocking" mode="ASYNC" async-marshalling="true">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</replicated-cache>
</cache-container>
</infinispan>
My JGroups configuration file is:
my-jgroup.xml
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.6.xsd">
<TCP bind_addr="${jgroups.tcp.address:192.168.1.48}"
bind_port="${jgroups.tcp.port:7800}"
enable_diagnostics="false"
thread_naming_pattern="pl"
send_buf_size="640k"
sock_conn_timeout="300"
thread_pool.min_threads="${jgroups.thread_pool.min_threads:2}"
thread_pool.max_threads="${jgroups.thread_pool.max_threads:30}"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="false"
internal_thread_pool.min_threads="${jgroups.internal_thread_pool.min_threads:5}"
internal_thread_pool.max_threads="${jgroups.internal_thread_pool.max_threads:20}"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="500"
oob_thread_pool.min_threads="${jgroups.oob_thread_pool.min_threads:20}"
oob_thread_pool.max_threads="${jgroups.oob_thread_pool.max_threads:200}"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
/>
<MPING bind_addr="${jgroups.tcp.address:192.168.1.48}"
mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43366}"
ip_ttl="${jgroups.udp.ip_ttl:2}"
/>
<MERGE3 min_interval="10000"
max_interval="30000"
/>
<FD_SOCK />
<FD_ALL timeout="60000"
interval="15000"
timeout_check_interval="5000"
/>
<VERIFY_SUSPECT timeout="5000" />
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="100"
resend_last_seqno="true"
/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="100"
conn_expiry_timeout="0"
/>
<pbcast.STABLE stability_delay="500"
desired_avg_gossip="5000"
max_bytes="1M"
/>
<pbcast.GMS print_local_addr="false"
join_timeout="15000"
/>
<MFC max_credits="2m"
min_threshold="0.40"
/>
<FRAG2 />
</config>
My persistence.xml is:
<?xml version="1.0" encoding="UTF-8" ?>
<persistence-unit name="IC" transaction-type="RESOURCE_LOCAL">
<class>com.csgsol.model.Course</class>
<properties>
<property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
<property name="javax.persistence.jdbc.url" value="jdbc:mysql://192.168.1.99:3306/sampleDb"/>
<property name="javax.persistence.jdbc.user" value="my-username"/>
<property name="javax.persistence.jdbc.password" value="my-password"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="hibernate.search.default.directory_provider" value="infinispan"/>
<property name="hibernate.search.default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager"/>
<property name="hibernate.search.default.indexmanager" value="near-real-time"/>
<property name="hibernate.search.default.exclusive_index_use" value="false"/>
<property name="hibernate.search.default.worker.execution" value = "async"/>
<property name="hibernate.search.lucene_version" value="LUCENE_4_10_4"/>
<property name="hibernate.search.infinispan.configuration_resourcename" value="infinispan-config.xml"/>
</properties>
</persistence-unit>
</persistence>
The versions I am using are:
Infinispan - 7.2.0.Final
Lucene - 4.10.4
You should not use mode="ASYNC" for the index caches. Since Infinispan 8.2.x (https://issues.jboss.org/browse/ISPN-4065) this configuration is forbidden.
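As an illustration only (a sketch; keep your existing locking, persistence and state-transfer settings), the three index caches would be declared with synchronous replication, for example:
<replicated-cache name="LuceneIndexesMetadata" mode="SYNC">
    <!-- locking / persistence / indexing as in your current config -->
</replicated-cache>
<distributed-cache name="LuceneIndexesData" mode="SYNC">
    <!-- ... -->
</distributed-cache>
<replicated-cache name="LuceneIndexesLocking" mode="SYNC">
    <!-- ... -->
</replicated-cache>
The async-marshalling attribute only applies to asynchronous replication, so it is dropped in this sketch.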
I have three persistence units configured in my persistence.xml:
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
version="2.1">
<persistence-unit name="MyPersistenceUnit" transaction-type="JTA">
<jta-data-source>java:jboss/datasources/myDS</jta-data-source>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect" />
<property name="hibernate.connection.useUnicode" value="true" />
<property name="hibernate.connection.characterEncoding" value="UTF-8" />
<property name="hibernate.connection.charSet" value="UTF-8" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.hbm2ddl.auto" value="validate" />
</properties>
</persistence-unit>
<persistence-unit name="MyPersistenceTestUnit" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect" />
<property name="hibernate.connection.useUnicode" value="true" />
<property name="hibernate.connection.characterEncoding" value="UTF-8" />
<property name="hibernate.connection.charSet" value="UTF-8" />
<property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/mydb" />
<property name="hibernate.connection.username" value="myuser" />
<property name="hibernate.connection.password" value="mypass" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.hbm2ddl.auto" value="validate" />
</properties>
</persistence-unit>
<persistence-unit name="MyLoggingUnit" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect" />
<property name="hibernate.connection.useUnicode" value="true" />
<property name="hibernate.connection.characterEncoding" value="UTF-8" />
<property name="hibernate.connection.charSet" value="UTF-8" />
<property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/mydb" />
<property name="hibernate.connection.username" value="myuser" />
<property name="hibernate.connection.password" value="mypass" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.hbm2ddl.auto" value="validate" />
</properties>
</persistence-unit>
</persistence>
The first is the 'default' unit; it's used by the standard entity manager. The second is needed for running JUnit tests outside of a Java EE environment. The third one is used for logging, so that it is independent of JTA transactions.
This works. But when I start WildFly, I get errors like these (although I don't use H2 in my application):
21:32:33,748 ERROR [org.hibernate.tool.hbm2ddl.SchemaValidator] (ServerService Thread Pool -- 51) HHH000319: Could not get database metadata: org.h2.jdbc.JdbcSQLException: Table "PG_CLASS" not found; SQL statement:
select relname from pg_class where relkind='S' [42102-173]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:331)
It only happens when the application is deployed.
The idea was to disable or remove the ExampleDS via the admin console or in standalone.xml, but when I do that, the server starts up with errors:
21:36:20,992 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) JBAS014613: Operation ("deploy") failed - address: ([("deployment" => "com.zonacroft.mycuisine.webserver-TRUNK.war")]) - failure description: {"JBAS014771: Services with missing/unavailable dependencies" => [
"jboss.persistenceunit.\"com.mydomain.myapp.webserver-TRUNK.war#MyLoggingUnit\".__FIRST_PHASE__ is missing [jboss.data-source.java:jboss/datasources/ExampleDS]",
"jboss.persistenceunit.\"com.mydomain.myapp.webserver-TRUNK.war#MyPersistenceTestUnit\".__FIRST_PHASE__ is missing [jboss.data-source.java:jboss/datasources/ExampleDS]"
What on earth is going on here? Why does this persistence.xml need the ExampleDS? Are the RESOURCE_LOCAL persistence units perhaps configured wrong? (But if so, why do they work?) So what's the problem here?
[UPDATE]
I figured out that I don't get startup errors (with ExampleDS enabled) when I remove the hibernate.hbm2ddl.auto properties from the RESOURCE_LOCAL persistence units. But it annoys me that I have to leave the ExampleDS of WildFly enabled; otherwise WildFly will not start with the application, as described above. Why is this so? What's wrong here?
You only define a datasource for the first persistence unit:
<jta-data-source>java:jboss/datasources/myDS</jta-data-source>
That means WildFly uses the default datasource exampleDS for your other persistence units.
This causes the exception, since the configured PostgreSQL dialect does not work for the default H2 datasource.
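One way around it (a sketch, not tested against your deployment) is to stop WildFly from binding those units to any datasource and give Hibernate a complete connection configuration instead, e.g. for the test unit:
<persistence-unit name="MyPersistenceTestUnit" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <properties>
        <!-- assumption: tells the WildFly JPA subsystem not to manage this unit -->
        <property name="jboss.as.jpa.managed" value="false" />
        <!-- the driver class was missing; a RESOURCE_LOCAL unit needs it to open its own connections -->
        <property name="hibernate.connection.driver_class" value="org.postgresql.Driver" />
        <property name="hibernate.connection.url" value="jdbc:postgresql://localhost:5432/mydb" />
        <property name="hibernate.connection.username" value="myuser" />
        <property name="hibernate.connection.password" value="mypass" />
        <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect" />
    </properties>
</persistence-unit>
Alternatively, give each unit an explicit <non-jta-data-source> so nothing falls back to ExampleDS.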
I am configuring an HSQLDB database inside the application folder; the persistence.xml follows below:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="crmUnity" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<class>br.com.crm.model.entities.Cliente</class>
<class>br.com.crm.model.entities.Contato</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="hibernate.format_sql" value="false" />
<property name="hibernate.show_sql" value="false" />
<property name="hibernate.archive.autodetection" value="class" />
<property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver" />
<property name="hibernate.connection.url" value="jdbc:hsqldb:file://database/crm;shutdown=true" />
<property name="hibernate.connection.user" value="sa" />
<property name="hibernate.flushMode" value="FLUSH_AUTO" />
</properties>
</persistence-unit>
</persistence>
While starting, JBoss 7.1.3 shows this error in the log:
18:40:10,477 ERROR [org.hibernate.tool.hbm2ddl.SchemaUpdate] (ServerService Thread Pool -- 170) HHH000319: Could not get database metadata: java.sql.SQLException: No suitable driver found for jdbc:hsqldb:file://database/crm;shutdown=true
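My suspicion (not verified) is that either the HSQLDB jar is not visible to the deployment, or the double slash after file: breaks the path. A variant I intend to try, with the hsqldb jar packaged in WEB-INF/lib, uses a plain relative path:
<property name="hibernate.connection.url" value="jdbc:hsqldb:file:database/crm;shutdown=true" />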
Can anybody help me configure the JDBC URL to resolve this error?
Thanks, Tarcísio.
I have just upgraded to v5.0.0-beta11 of IzPack.
I have updated my configuration so the compiler works.
However, running the output jar file throws the following error:
Could not find or load main class
com.izforge.izpack.installer.bootstrap.Installer
Any suggestions on what I may have missed?
<?xml version="1.0" encoding="iso-8859-1" standalone="yes" ?>
<installation version="5.0"
xmlns:izpack="http://izpack.org/schema/installation"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://izpack.org/schema/installation http://izpack.org/schema/5.0/izpack-installation-5.0.xsd">
<info>
<appname>appname (2013-09-03)</appname>
<appversion>20130903_1115</appversion>
<url>http://google.com</url>
<authors>
<author name="steven" email="sbirdranch#yahoo.com"/>
</authors>
<uninstaller write="no"/>
</info>
<run-privileged condition="izpack.windowsinstall.vista|izpack.windowsinstall.7"/>
<guiprefs width="700" height="550" resizable="no">
<modifier key="useButtonIcons" value="no"/>
<modifier key="useLabelIcons" value="no"/>
<modifier key="labelGap" value="2"/>
<modifier key="layoutAnchor" value="NORTHWEST"/>
<modifier key="useHeadingPanel" value="yes"/>
<modifier key="headingImageOnLeft" value="no"/>
<modifier key="headingLineCount" value="1"/>
<modifier key="headingFontSize" value="1.5"/>
<modifier key="headingBackgroundColor" value="0x00ffffff"/>
</guiprefs>
<locale>
<langpack iso3="eng"/>
</locale>
<resources>
<res id="HTMLLicencePanel.licence" src="./packager/pack_license.htm"/>
<res id="HTMLHelloPanel.hello" src="./packager/pack_welcome.htm"/>
<res id="Installer.image" src="./packager/pb_wizSplash.png"/>
<res id="Heading.image" src="./packager/tdkc_gradient.png"/>
<res id="userInputSpec.xml" src="./packager/userInputSpec.xml"/>
<res id="TargetPanel.dir" src="./packager/installDir.txt"/>
</resources>
<variables>
<variable name="ShowCreateDirectoryMessage" value="false"/>
</variables>
<conditions>
<condition type="variable" id="checkBox">
<name>ackVar</name>
<value>on</value>
</condition>
<condition type="or" id="isCheckedCondition">
<condition type="ref" refid="checkBox"/>
</condition>
</conditions>
<panels>
<panel classname="HTMLHelloPanel" id="helloPanel"/>
<panel classname="HTMLLicencePanel" id="licPanel"/>
<panel classname="UserInputPanel" id="ackSBIRPanel">
</panel>
<panel classname="TargetPanel" id="targetPanel"/>
<panel classname="InstallPanel" id="installPanel"/>
</panels>
<packs>
<pack name="UDK Core" required="yes">
<description>Core files </description>
<file src="c:\BUILD_AREA\Build_Tester/tempZip/Build_Tester_20130903_1115_X-win32.win32.x86_64/Build_Tester_Archive_Prefix" targetdir="$INSTALL_PATH"/>
</pack>
</packs>
</installation>
Could not find or load main class com.izforge.izpack.installer.bootstrap.Installer
I'd first check build.xml to see whether the izpack-installer-5.0.0-beta11.jar is included in the classpath or not.
<project default="install">
<path id="build.classpath">
<fileset dir="${DEFAULT_IZPACK_HOME}">
<include name="lib/*.jar" />
</fileset>
</path>
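For comparison, a minimal sketch of how the task is usually wired up (the install file and output jar names below are placeholders, adjust them to your project):
<project default="install">
    <path id="build.classpath">
        <fileset dir="${DEFAULT_IZPACK_HOME}">
            <include name="lib/*.jar" />
        </fileset>
    </path>
    <!-- the IzPack 5 Ant task lives in the izpack-ant jar, which must end up on this classpath -->
    <taskdef name="izpack" classname="com.izforge.izpack.ant.IzPackTask" classpathref="build.classpath" />
    <target name="install">
        <izpack input="install.xml" output="installer.jar" installerType="standard" basedir="." izPackDir="${DEFAULT_IZPACK_HOME}" />
    </target>
</project>
If the generated installer jar still cannot load com.izforge.izpack.installer.bootstrap.Installer, it is worth checking which compiler and installer jars actually ended up in ${DEFAULT_IZPACK_HOME}/lib.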
I am trying to do the following:
execute some database scripts into HSQLDB during the mvn test phase
use that database for test purposes
I was able to configure Maven so that each time the test phase runs, all scripts are executed successfully, but (of course there is a BUT) all my tests fail.
My configuration:
pom.xml
<build>
<plugins>
<!-- Plugin maven for sql -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>sql-maven-plugin</artifactId>
<dependencies>
<!-- Dependency to jdbc driver -->
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>${hsql-version}</version>
</dependency>
</dependencies>
<configuration>
<source>1.5</source>
<target>1.5</target>
<encoding>UTF-8</encoding>
<driver>org.hsqldb.jdbcDriver</driver>
<url>jdbc:hsqldb:mem:sweetdev_skill_db;shutdown=false</url>
<settingsKey>hsql-db-test</settingsKey>
<!--all executions are ignored if -Dmaven.test.skip=true-->
<skip>${maven.test.skip}</skip>
</configuration>
<executions>
<!-- Create integration test data before running the tests -->
<execution>
<id>create-integration-test-data</id>
<phase>process-test-resources</phase>
<inherited>true</inherited>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<url>jdbc:hsqldb:mem:db;shutdown=false</url>
<autocommit>true</autocommit>
<orderFile>ascending</orderFile>
<fileset>
<basedir>${basedir}/src/test/resources/sql</basedir>
<includes>
<include>create.sql</include>
<include>insert.sql</include>
</includes>
</fileset>
</configuration>
</execution>
<!-- Drop data after executing tests -->
<execution>
<id>drop-db-after-test</id>
<phase>post-integration-test</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<orderFile>ascending</orderFile>
<fileset>
<basedir>${basedir}/src/test/resources/sql</basedir>
<includes>
<include>drop.sql</include>
</includes>
</fileset>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
spring config for test :
<bean id="datasource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.hsqldb.jdbcDriver">
</property>
<property name="url" value="jdbc:hsqldb:mem:db"></property>
<property name="username" value="sa">
</property>
<property name="password" value="">
</property>
</bean>
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="datasource" />
<!-- use testingSetup pu -->
<property name="persistenceUnitName" value="testingSetup" />
<property name="persistenceXmlLocation" value="persistence.xml" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true" />
<property name="databasePlatform" value="org.hibernate.dialect.HSQLDialect" />
<property name="generateDdl" value="true" />
</bean>
</property>
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
Output :
[INFO] ------------------------------------------------------------------------
[INFO] [clean:clean]
[INFO] Deleting directory D:\dev\projects\project-data\target
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [compiler:compile]
[INFO] Compiling 62 source files to D:\dev\projects\project-data\target\classes
[INFO] [resources:testResources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [sql:execute {execution: create-integration-test-data}]
[INFO] Executing file: D:\dev\projects\project-data\src\test\resources\sql\create.sql
[INFO] Executing file: D:\dev\projects\project-data\src\test\resources\sql\insert.sql
[INFO] 230 of 230 SQL statements executed successfully
[INFO] [compiler:testCompile]
[INFO] Compiling 12 source files to D:\dev\projects\project-data\target\test-classes
[INFO] [surefire:test]
[INFO] Surefire report directory: D:\dev\projects\project-data\target\surefire-reports
But then all my tests fail. I need your help to figure out why it doesn't work and to find a solution. Thank you.
#AndrewLogvinov
Here is one of the test outputs:
-------------------------------------------------------------------------------
Test set: com.ideo.sweetdevskill.data.impl.TestAcquisitionDAOImpl
-------------------------------------------------------------------------------
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.17 sec <<< FAILURE!
testGetListAcquisitionByUser(com.ideo.sweetdevskill.data.impl.TestAcquisitionDAOImpl) Time elapsed: 0.148 sec <<< ERROR!
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.get(ArrayList.java:322)
at com.ideo.sweetdevskill.data.impl.TestAcquisitionDAOImpl.testGetListAcquisitionByUser(TestAcquisitionDAOImpl.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:82)
at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:240)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:180)
at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:338)
at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:997)
Thanks !
Compare these lines in your config. The database names are different, therefore two different databases are being used.
<url>jdbc:hsqldb:mem:sweetdev_skill_db;shutdown=false</url>
<property name="url" value="jdbc:hsqldb:mem:db"></property>
I promised to post the solution, so here I am.
Let's start with the Spring config:
<?xml version="1.0" encoding="UTF-8"?>
<!--beans followed by all xml schemas here -->
<!-- HSQL datasource for test purpose-->
<bean id="datasource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.hsqldb.jdbcDriver" />
<property name="url" value="jdbc:hsqldb:file:db" />
<property name="username" value="sa" />
<property name="password" value="" />
</bean>
<!-- ============================ ENTITY MANAGER ================================= -->
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="datasource" />
<!-- use testingSetup pu -->
<property name="persistenceUnitName" value="testingSetup" />
<property name="persistenceXmlLocation" value="persistence.xml" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="false" />
<property name="databasePlatform" value="org.hibernate.dialect.HSQLDialect" />
<property name="generateDdl" value="false" />
</bean>
</property>
</bean>
<!-- all other things you may need-->
</beans>
The persistence.xml has the same structure as in the application.
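For completeness, a minimal sketch of what that file could look like for the test setup (everything here is a placeholder except the unit name, which matches the Spring config above):
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="testingSetup" transaction-type="RESOURCE_LOCAL">
        <!-- list your entity classes here, e.g. <class>com.example.MyEntity</class> -->
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
    </persistence-unit>
</persistence>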
Inside the pom.xml of the project (I only include the relevant part here):
<!-- Plugin maven for sql -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>sql-maven-plugin</artifactId>
<dependencies>
<!-- Dependency to jdbc driver -->
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>${hsql-version}</version>
</dependency>
</dependencies>
<configuration>
<source>1.5</source>
<target>1.5</target>
<encoding>UTF-8</encoding>
<driver>org.hsqldb.jdbcDriver</driver>
<url>jdbc:hsqldb:file:${basedir}/db;shutdown=true</url>
<autocommit>true</autocommit>
<settingsKey>hsql-db-test</settingsKey>
<!--all executions are ignored if -DskipTests=true-->
<skip>${skipTests}</skip>
</configuration>
<executions>
<!-- Create test data before running the tests -->
<execution>
<id>create-test-compile-data</id>
<phase>process-test-sources</phase>
<inherited>true</inherited>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<url>jdbc:hsqldb:file:${basedir}/db;shutdown=true</url>
<driver>org.hsqldb.jdbcDriver</driver>
<orderFile>ascending</orderFile>
<detail>true</detail>
<fileset>
<basedir>${basedir}/src/test/resources/sql</basedir>
<includes>
<include>script-create.sql</include>
<include>script-insert.sql</include>
</includes>
</fileset>
<autocommit>true</autocommit>
</configuration>
</execution>
<!-- Drop test data after running the tests: same pattern as the drop-db-after-test execution above -->
</executions>
</plugin>
Thanks for your help.