I'm working with the DataNucleus tutorial application for JDO, specifically this one.
Regardless of which "inheritance strategy" I try, the table layout is the same. I would like two tables, one for PRODUCT and one for BOOK, but with the configuration below I only get the PRODUCT table, with columns for both class Product and class Book.
<class name="Product" identity-type="sequence">
<inheritance strategy="complete-table"/>
<field name="name">
<column name="PRODUCT_NAME" length="100" jdbc-type="VARCHAR"/>
</field>
<field name="description">
<column length="255" jdbc-type="VARCHAR"/>
</field>
</class>
<class name="Book" identity-type="sequence">
<field name="author">
<column length="40" jdbc-type="VARCHAR"/>
</field>
<field name="isbn">
<column length="20" jdbc-type="CHAR"/>
</field>
<field name="publisher">
<column length="40" jdbc-type="VARCHAR"/>
</field>
</class>
The directory structure is exactly as in the tutorial, as is the build.xml. I have tried generating the schema via both the Ant task and the command line.
I use the sequence of commands:
ant clean
ant compile
ant enhance
ant createschema
The schema is generated, but not as the DataNucleus documentation suggests it should be with the inheritance strategy "complete-table".
My target database is PostgreSQL 8.4 running on Ubuntu 10.04 if that matters.
Anyone else run into this issue and found a solution?
To answer my own question:
In the DataNucleus tutorial download, the build.xml file has a "createschema" target like:
<target name="createschema">
...
<schematool ...>
<fileset dir="${basedir}/target/classes">
<include name="**/*.class"/>
</fileset>
...
</schematool>
</target>
It should be changed to include all .jdo files as shown below:
<target name="createschema">
...
<schematool ...>
<fileset dir="${basedir}/target/classes">
<include name="**/*.class"/>
<include name="**/*.jdo"/>
</fileset>
...
</schematool>
</target>
In addition, the package-hsql.orm file needs to be renamed to package-hsql.jdo and its header changed to:
<?xml version="1.0"?>
<!DOCTYPE jdo PUBLIC
"-//Sun Microsystems, Inc.//DTD Java Data Objects ORM Metadata 2.0//EN"
"http://java.sun.com/dtd/orm_2_0.dtd">
<jdo>
...
</jdo>
Notice that the DOCTYPE and root element were changed: the root element was "orm" and is now "jdo".
Once I made these changes the schema generation tool followed the "inheritance strategy" directive.
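To make the shape of the corrected file concrete, here is roughly what a jdo-rooted metadata file carrying the complete-table declaration looks like; the package name is a placeholder for wherever the tutorial classes actually live, and the field/column elements are the ones already shown in the question:
<!-- XML declaration and DOCTYPE as shown above -->
<jdo>
<package name="your.tutorial.package"> <!-- placeholder package name -->
<class name="Product" identity-type="sequence">
<inheritance strategy="complete-table"/>
<!-- name/description fields and columns as in the question -->
</class>
<class name="Book" identity-type="sequence">
<!-- author/isbn/publisher fields and columns as in the question -->
</class>
</package>
</jdo>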
For my custom application, I had a similar issue, and it worked fine after making the changes in the header of the jdo file. I am using version 3.2.9.
Related
I have 10 test cases in a TestNG suite, and I need to execute a single/individual test case from Jenkins. Can anyone help me out with how to execute a single test case from Jenkins? I tried the Maven Surefire command mvn -Dtest=TestCircle test, included the test command in the goals and executed it, but it's not working. Please refer to the data below and correct me on how to set the goals in Jenkins to execute a single test case.
"mvn clean compile install -DPICK_CONFIGURATION_FROM="JENKINS" -DEXECUTION_MODE="Remote" -Dtest=TestCircle#EAConsoleLandingPageTest test -DPLATFORM="WEB" -DOPERATING_SYSTEM="WINDOWS" -DBROWSER="chrome" -DCOMPUTERNAME="Test" -DTEST_ENVIRONMENT="RUNFC" -DPLAN_ID="" -DTEST_TYPE="Smoke" -DGRID_URL="http://localhost:4444/wd/hub""
Below is the sample test suite:
<classes>
<class
name="com.ea.automation.tests.EAConsoleLandingPageTest">
<methods>
<include name="clickOnGetStartedBtn" />
<include name="selectProjectFromLandingPage" />
<include name="clickOnEAConsoleIcon" />
<include name="verifyRequestProjectAccess" />
<include name="verifyProjectAccess" />
<include name="verifyHomePageRedierecting" />
</methods>
</class>
<class
name="com.ea.automation.tests.EAConsoleSelfServiceTest">
<methods>
<include name="clickOnCreateNewproject" />
<include name="verifyProjectUserAcess" />
<include name="switchProjects" />
<include name="clickOnProjectSettings" />
</methods>
</class>
<class
name="com.ea.automation.tests.EAConsoleNotificationAppTest">
<methods>
<include name="selectProject" />
<include name="clickOnNotification" />
<include name="checkAllWdgetNotifications" />
</methods>
</class>
<class
name="com.ea.automation.tests.EAConsoleSupportAppTest">
<methods>
<include name="clickOnSupportApp" />
<include name="checkCreateTicket" />
<include name="clickOnCloseErrorMessageModel" />
<include name="clickOnSupportFromApp" />
<include name="clickOnSupportEmail" />
</methods>
</class>
<class
name="com.ea.automation.tests.EAConsoleSDKDownloadsAppTest">
<methods>
<include name="selectProject" />
<include name="clickOnSDKDownloads" />
</methods>
</class>
</classes>
The approach I am suggesting is not really preferable, because the sole purpose of running Jenkins is to run a suite of test cases rather than individual ones. But you could implement your idea in the following way.
Add three string parameters to the Jenkins job: the first points to the TestNG XML file name, the second to the class name, and the third to the test name.
Create another TestNG XML file in your project repo and name it small.xml.
So the first build parameter should be small.xml.
Jenkins has the capability to execute a shell script before the Maven build starts.
So we are going to fetch the arguments that we passed as Jenkins build parameters inside that shell script, modify the small.xml file, and modify the pom.xml file to point to small.xml.
The sed command in a shell script will help to replace the existing text in pom.xml and small.xml; you can look up examples of this on the internet.
I implemented this approach in one of the projects I worked on, to pick different suite files based on the Jenkins build parameters, and it works: at runtime, based on the Jenkins build parameters you pass, the corresponding tests will be run by Jenkins.
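As a rough sketch of the token idea (the suite, test, and token names below are made up, not anything Jenkins or TestNG defines; pick whatever markers you like for sed to replace), small.xml could look like this:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<!-- small.xml: a single-class, single-method suite. The Jenkins shell step rewrites the two placeholder tokens from the build parameters (e.g. with sed) before Maven runs. -->
<suite name="single-test-suite">
<test name="single-test">
<classes>
<class name="CLASS_NAME_TOKEN">
<methods>
<include name="TEST_METHOD_TOKEN"/>
</methods>
</class>
</classes>
</test>
</suite>
The pom.xml then only needs its Surefire suiteXmlFiles entry (or whatever mechanism you use to pick the suite file) pointed at small.xml.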
I am trying to index web pages AND pdf documents from a website. I am using Nutch 1.9.
I downloaded the nutch-custom-search plugin from https://github.com/BayanGroup/nutch-custom-search. The plugin is awesome and indeed lets me match selected divs to Solr fields.
The problem I am having is that my site also contains numerous PDF files. I can see that they are fetched but never parsed; there are no PDFs when I query Solr, just web pages. I am trying to use Tika to parse the PDFs (I hope I have the right idea).
If, on Cygwin, I run parsechecker (see below), it seems to parse OK:
$ bin/nutch parsechecker -dumptext -forceAs application/pdf http://www.immunisationscotland.org.uk/uploads/documents/18304-Tuberculosis.pdf
I am not too sure what to do next (see below for my config)
extractor.xml
<config xmlns="http://bayan.ir" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://bayan.ir http://raw.github.com/BayanGroup/nutch-custom-search/master/zal.extractor/src/main/resources/extractors.xsd" omitNonMatching="true">
<fields>
<field name="pageTitleChris" />
<field name="contentChris" />
</fields>
<documents>
<document url="^.*\.(?!pdf$)[^.]+$" engine="css">
<extract-to field="pageTitleChris">
<text>
<expr value="head > title" />
</text>
</extract-to>
<extract-to field="contentChris">
<text>
<expr value="#primary-content" />
</text>
</extract-to>
</document>
</documents>
</config>
Inside my parse-plugins.xml I added:
<mimeType name="application/pdf">
<plugin id="parse-tika" />
</mimeType>
nutch-site.xml
<property>
<name>plugin.includes</name>
<value>protocol-http|urlfilter-regex|parse-(html|tika|text)|extractor|index-(basic|anchor)|query-(basic|site|url)|indexer-solr|response-(json|xml)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
</property>
<property>
<name>http.content.limit</name>
<value>65536666</value>
<description></description>
</property>
<property>
<name>extractor.file</name>
<value>extractor.xml</value>
</property>
Help would be much appreciated,
Thanks
Chris
I think the problem relates to omitNonMatching="true" in your extractor.xml file.
omitNonMatching="true" means "don't index pages that don't match any extract-to rule in extractor.xml". The default value is false.
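If you want the Tika-parsed PDFs to be indexed even though they match none of your extract-to rules, the smallest change is to flip that flag on the opening config element of extractor.xml (everything else in the file stays as you have it):
<!-- only omitNonMatching changes; the fields and documents sections stay as in the question -->
<config xmlns="http://bayan.ir" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://bayan.ir http://raw.github.com/BayanGroup/nutch-custom-search/master/zal.extractor/src/main/resources/extractors.xsd" omitNonMatching="false">
<!-- fields and documents as above -->
</config>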
I have had a NAnt/NAntContrib build running for a while on one machine:
(MS Windows Server 2003 Standard 32-bit SP2)
And I now need to run the same build script on a newer machine:
(Windows Server Standard 2008)
I have gotten NAnt and NAnt.Config installed and working on the new machine.
I am using NAnt.Core.MailLogger on the original machine, configured as such:
<property name="MailLogger.mailhost" value="mail.server.com" />
<property name="MailLogger.from" value="autobuild#hostredacted.com" />
<property name="MailLogger.failure.notify" value="true" />
<property name="MailLogger.success.notify" value="true" />
<property name="MailLogger.failure.to" value="team#hostredacted.com" />
<property name="MailLogger.success.to" value="team#hostredacted.com" />
<property name="MailLogger.failure.subject" value="AUTOBUILD: Failure on TEST" />
<property name="MailLogger.success.subject" value="AUTOBUILD: Success on TEST" />
<property name="MailLogger.failure.attachments" value="MailLogger.failure.files" />
<property name="MailLogger.success.attachments" value="MailLogger.success.files" />
<fileset id="MailLogger.failure.files">
<include name="build.log" />
</fileset>
<fileset id="MailLogger.success.files">
<include name="build.log" />
</fileset>
I run a very simple test .build file, to test mail functionality:
<target name="test_mail_pass">
<echo message="Test Success:
run by ${environment::get-user-name()}"/>
</target>
<target name="test_mail_fail">
<echo message="Test Fail:
run by ${environment::get-user-name()}"/>
<fail message="Some Failure occurred." />
</target>
The above works on the original machine, and seems to work on the new machine, except for the fact that no mail is sent.
There is no message in the console that indicates that anything went wrong (ignoring the obvious use of the <fail> task).
I don't even know where to begin figuring out what is wrong here, or how to troubleshoot this problem.
Any assistance would be greatly appreciated.
I have solved this problem with much Googling.
One of two things solved it, though I don't know which:
My batch file needed to have the following command-line option:
-logger:NAnt.Core.MailLogger
The file referred to in the following filesets:
<fileset id="MailLogger.failure.files">
<include name="build.log" />
</fileset>
<fileset id="MailLogger.success.files">
<include name="build.log" />
</fileset>
needs to actually exist.
One post I read (I have lost the link) described a problem where, if the files to attach do not exist, the mail simply does not get sent.
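A simple safeguard, assuming build.log lives in the project base directory and is the file your logger writes to (just a sketch, not something from the original build), is to touch the file at the start of the build so the attachment always exists:
<!-- create an empty build.log up front so MailLogger always has something to attach -->
<touch file="${project::get-base-directory()}/build.log" />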
How can I find/set the Assembly Name for Web Site created in Visual Web Developer?
Actually, I am trying out NHibernate with a Visual Web Developer Web Site. The .hbm.xml mapping file contains an attribute called assembly, where we need to specify the assembly name of the project containing the entity class.
Where can I find the assembly name for the Web Site?
If an assembly name is not available for a Web Site, how can I overcome this situation?
Update: If I omit the assembly attribute, I get an error telling me that the assembly attribute is missing.
Product.hbm.xml:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="NHibernateSol.Domain" assembly="NHibernateSol1">
<class name="Product" >
<id name="Id">
<!-- generator class="guid" / -->
</id>
<property name="Name" />
<property name="Category" />
<property name="Discontinued" />
</class>
</hibernate-mapping>
System.Reflection.Assembly.GetAssembly(typeof(A_Class_Name_Here)).GetName().Name;
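That line returns, at runtime, the name of the assembly the class was compiled into. If that name is usable in your setup, NHibernate also accepts an assembly-qualified type name directly in the class element, so you can drop the separate assembly attribute; "YourWebSiteAssembly" below is only a placeholder for whatever name the line above reports:
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
<!-- "YourWebSiteAssembly" is a placeholder, not a real assembly name -->
<class name="NHibernateSol.Domain.Product, YourWebSiteAssembly">
<id name="Id" />
<property name="Name" />
</class>
</hibernate-mapping>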
We currently run a specific SQL script as part of our Ant deploy process.
What we'd like to do is change this so we run all SQL scripts in a given directory. We can't figure out how to get this directory listing in Ant and iterate through the list and run each SQL script. Does anyone know how to do this?
Note: we currently run the sql file by using the Ant exec task that runs "call sqlplus ${id}/${pw}@${db.instance} @${file}"
I would recommend using the Ant SQL task. You can then point it at a directory of scripts with a nested fileset, like the following:
<sql
driver="org.database.jdbcDriver"
url="jdbc:database-url"
userid="sa"
password="pass">
<path>
<fileset dir=".">
<include name="data*.sql"/>
</fileset>
</path>
</sql>
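One extra thing you may need, depending on your environment: the JDBC driver jar has to be visible to the task. The sql task accepts a nested classpath for that; the jar name and location below are just an example, not something from your project:
<!-- goes inside the <sql> task above; adjust the driver jar path to your environment -->
<classpath>
<pathelement location="lib/jdbc-driver.jar"/>
</classpath>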
I am doing basically the same thing on an Oracle database, but using sqlplus and the apply task; this allows the scripts to contain DDL statements.
It would obviously only work where you have a command-line program to use.
I am using Ant 1.8.2 and ant-contrib.
I have defined a few macros for this (see below) and then just call them like:
<compile_sql connectstring="${db.connect_string}" >
<filelist dir="${db_proc.dir}" files="specific_file.sql" />
<fileset dir="${db_proc.dir}" includes="wildcard*.pks" />
<fileset dir="${db_proc.dir}" includes="wildcard*.pkb" />
</compile_sql>
The macros are as follows:
<macrodef name="compile_sql">
<attribute name="connectstring" />
<attribute name="dirtostart" default=""/>
<attribute name="arg1" default=""/>
<element name="sqllist" implicit="true" description="filesetlist of sql to run"/>
<sequential>
<check_missing_files>
<sqllist/>
</check_missing_files>
<apply executable="${sqlplus.exe}" failonerror="true" verbose="true" skipemptyfilesets="true" ignoremissing="false" dir="#{dirtostart}">
<arg value="-L"/>
<arg value="#{connectstring}"/>
<srcfile prefix="#" />
<sqllist/>
<arg value="#{arg1}"/>
<redirector>
<globmapper id="sqlout.mapper"
from="*"
to="*.out"/>
</redirector>
</apply>
</sequential>
</macrodef>
and
<macrodef name="check_missing_files">
<element name="checkfilelist" implicit="true" description="filelist of files to check for existance"/>
<sequential>
<restrict id="missing.files" xmlns:rsel="antlib:org.apache.tools.ant.types.resources.selectors">
<resources>
<checkfilelist/>
</resources>
<rsel:not>
<rsel:exists/>
</rsel:not>
</restrict>
<fail message="These files are missing: ${ant.refid:missing.files}" >
<condition >
<length string="${ant.refid:missing.files}" when="greater" length="0" />
</condition>
</fail>
</sequential>
</macrodef>