Error when running Hive 0.9.0: Exception in thread "main" java.lang.NoSuchFieldError: type - hive

Really sorry for the stupid question, but I am struggling to find an answer. I am trying to start up Hive on my 3-node Hadoop cluster. HDFS runs OK, as do Pig and HBase, but for the life of me I cannot get Hive to run properly.
This is the classpath output:
:/home/hduser/hive-0.9.0/conf:/home/hduser/hive-0.9.0/lib/antlr-runtime-3.0.1.jar:/home/hduser/hive-0.9.0/lib/commons-cli-1.2.jar:/home/hduser/hive-0.9.0/lib/commons-codec-1.3.jar:/home/hduser/hive-0.9.0/lib/commons-collections-3.2.1.jar:/home/hduser/hive-0.9.0/lib/commons-dbcp-1.4.jar:/home/hduser/hive-0.9.0/lib/commons-lang-2.4.jar:/home/hduser/hive-0.9.0/lib/commons-logging-1.0.4.jar:/home/hduser/hive-0.9.0/lib/commons-logging-api-1.0.4.jar:/home/hduser/hive-0.9.0/lib/commons-pool-1.5.4.jar:/home/hduser/hive-0.9.0/lib/datanucleus-connectionpool-2.0.3.jar:/home/hduser/hive-0.9.0/lib/datanucleus-core-2.0.3.jar:/home/hduser/hive-0.9.0/lib/datanucleus-enhancer-2.0.3.jar:/home/hduser/hive-0.9.0/lib/datanucleus-rdbms-2.0.3.jar:/home/hduser/hive-0.9.0/lib/derby-10.4.2.0.jar:/home/hduser/hive-0.9.0/lib/guava-r09.jar:/home/hduser/hive-0.9.0/lib/hadoop-0.20.2-core.jar:/home/hduser/hive-0.9.0/lib/hbase-0.92.0.jar:/home/hduser/hive-0.9.0/lib/hbase-0.92.0-tests.jar:/home/hduser/hive-0.9.0/lib/hive-builtins-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-cli-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-common-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-contrib-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive_contrib.jar:/home/hduser/hive-0.9.0/lib/hive-exec-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-hbase-handler-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-hwi-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-jdbc-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-metastore-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-pdk-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-serde-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-service-0.9.0.jar:/home/hduser/hive-0.9.0/lib/hive-shims-0.9.0.jar:/home/hduser/hive-0.9.0/lib/jackson-core-asl-1.8.8.jar:/home/hduser/hive-0.9.0/lib/jackson-jaxrs-1.8.8.jar:/home/hduser/hive-0.9.0/lib/jackson-mapper-asl-1.8.8.jar:/home/hduser/hive-0.9.0/lib/jackson-xc-1.8.8.jar:/home/hduser/hive-0.9.0/lib/JavaEWAH-0.3.2.jar:/home/hduser/hive-0.9.0/lib/jdo2-api-2.3-ec.jar:/home/hduser/hive-0.9.0/lib/jline-0.9.94.jar:/home/hduser/hive-0.9.0/lib/json-20090211.jar:/home/hduser/hive-0.9.0/lib/libfb303-0.7.0.jar:/home/hduser/hive-0.9.0/lib/libfb303.jar:/home/hduser/hive-0.9.0/lib/libthrift-0.7.0.jar:/home/hduser/hive-0.9.0/lib/libthrift.jar:/home/hduser/hive-0.9.0/lib/log4j-1.2.16.jar:/home/hduser/hive-0.9.0/lib/slf4j-api-1.6.1.jar:/home/hduser/hive-0.9.0/lib/slf4j-log4j12-1.6.1.jar:/home/hduser/hive-0.9.0/lib/stringtemplate-3.1-b1.jar:/home/hduser/hive-0.9.0/lib/zookeeper-3.4.3.jar:
Logging initialized using configuration in file:/home/hduser/hive-0.9.0/conf/hive-log4j.properties
Hive history file=/tmp/hduser/hive_job_log_hduser_201212181716_326152902.txt
Then, from the Hive command line, I run this:
hive> CREATE TABLE pokes (foo INT, bar STRING);
However, I get the following error:
Exception in thread "main" java.lang.NoSuchFieldError: type
at org.apache.hadoop.hive.ql.parse.HiveLexer.mKW_CREATE(HiveLexer.java:1602)
at org.apache.hadoop.hive.ql.parse.HiveLexer.mTokens(HiveLexer.java:6380)
at org.antlr.runtime.Lexer.nextToken(Lexer.java:89)
at org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:133)
at org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:127)
at org.antlr.runtime.CommonTokenStream.setup(CommonTokenStream.java:132)
at org.antlr.runtime.CommonTokenStream.LT(CommonTokenStream.java:91)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:547)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:438)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:416)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Make sure you don't have any other antlr-*.jar on your classpath except the one in the HIVE_HOME/lib folder. If it still doesn't work, download the latest version from the ANTLR website, put it into the HIVE_HOME/lib folder, and give it a try.
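For example, a quick way to spot a stray copy is to search the Hive and Hadoop installs for ANTLR jars. This is only a sketch: the Hive path is taken from the classpath above, but the Hadoop lib path is an assumption, so adjust both to your layout.
# list every antlr jar visible to Hive and Hadoop; only the one in HIVE_HOME/lib should exist
find /home/hduser/hive-0.9.0/lib /home/hduser/hadoop/lib -name 'antlr*.jar' 2>/dev/null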
HTH

Just remove all the files with the jackson* prefix from Hadoop's lib directory and copy the newer jackson* versions over from Hive:
rm /opt/hadoop/hadoop/lib/jackson*
cp /opt/hive/hive/lib/jackson* /opt/hadoop/hadoop/lib
I did it this way, and it worked perfectly fine!
I hope it helps.

Related

Array in output schema caused exception

I am following this WordCount example using the Google BigQuery-Hadoop connector:
https://developers.google.com/hadoop/writing-with-bigquery-connector#completecode
The example works fine as it is.
To test an array in the output schema, I have altered just one line in the code by adding an array object definition to the output schema:
String outputTableSchema = "[{'name': 'Word','type': 'STRING'},{'name': 'Number','type': 'INTEGER'},{'name':'Persons','mode':'REPEATED','type':'RECORD','fields':[{'name': 'name','type': 'STRING'},{'name': 'age','type': 'INTEGER'}]}]";
Now when I run the WordCount example, it gives this exception:
java.lang.IllegalStateException
at com.google.gson.JsonArray.getAsString(JsonArray.java:133)
at com.google.cloud.hadoop.io.bigquery.BigQueryUtils.getSchemaFromString(BigQueryUtils.java:97)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getRecordWriter(BigQueryOutputFormat.java:121)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:568)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Does anyone know what the issue is?
Thank you
This is actually a bug in the current version of the BigQuery connector, which prevents it from supporting inner records with more than one field.
We have a fix internally and it's slated to go out with the next release (0.4.3), which may still be a couple of weeks out; if you'd like to help try out a staging build, feel free to reach out to gcp-hadoop-contact#google.com and we can provide instructions.
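In the meantime, judging only from the bug description above (inner records with more than one field are the problem), a possible, unconfirmed workaround sketch is to trim the repeated inner RECORD down to a single field until 0.4.3 ships, e.g.:
// hypothetical workaround sketch, not a confirmed fix: the inner RECORD is limited to one field
String outputTableSchema = "[{'name': 'Word','type': 'STRING'},{'name': 'Number','type': 'INTEGER'},{'name':'Persons','mode':'REPEATED','type':'RECORD','fields':[{'name': 'name','type': 'STRING'}]}]";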

PriviledgedActionException: able to populate HBase via Hive, but unable to query HBase via Hive

I'm using the current Cloudera Quick Start VM. I've created a Hive table with some data. Then I created an external table using the HBase storage handler for Hive, and I was able to populate the HBase table. However, while querying the Hive/HBase table, I got the following error (NullPointerException):
14/04/16 01:18:51 ERROR security.UserGroupInformation: PriviledgedActionException as:hbase (auth:SIMPLE) cause:BeeswaxException(message:java.io.IOException: java.lang.NullPointerException, log_context:3ecc8100-e8f8-40a0-916b-00fa5a9b6b11, handle:QueryHandle(id:3ecc8100-e8f8-40a0-916b-00fa5a9b6b11, log_context:3ecc8100-e8f8-40a0-916b-00fa5a9b6b11), SQLState: )
14/04/16 01:18:51 ERROR beeswax.BeeswaxServiceImpl: Caught BeeswaxException
BeeswaxException(message:java.io.IOException: java.lang.NullPointerException, log_context:3ecc8100-e8f8-40a0-916b-00fa5a9b6b11, handle:QueryHandle(id:3ecc8100-e8f8-40a0-916b-00fa5a9b6b11, log_context:3ecc8100-e8f8-40a0-916b-00fa5a9b6b11), SQLState: )
at com.cloudera.beeswax.BeeswaxServiceImpl$RunningQueryState.fetch(BeeswaxServiceImpl.java:545)
at com.cloudera.beeswax.BeeswaxServiceImpl$5.run(BeeswaxServiceImpl.java:986)
at com.cloudera.beeswax.BeeswaxServiceImpl$5.run(BeeswaxServiceImpl.java:981)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at com.cloudera.beeswax.BeeswaxServiceImpl.doWithState(BeeswaxServiceImpl.java:772)
at com.cloudera.beeswax.BeeswaxServiceImpl.fetch(BeeswaxServiceImpl.java:980)
at com.cloudera.beeswax.api.BeeswaxService$Processor$fetch.getResult(BeeswaxService.java:987)
at com.cloudera.beeswax.api.BeeswaxService$Processor$fetch.getResult(BeeswaxService.java:971)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
I included the Guava, ZooKeeper, HBase, and hive-hbase-handler JARs, following the instructions in this tutorial: http://www.n10k.com/blog/hbase-via-hive-pt2/
Again, I am using the current Cloudera Quick Start VM. The JobTracker and TaskTracker logs, as well as the Beeswax logs, tell me nothing.
Do you have any idea what I am doing wrong?
I am thankful for any advice!
Best regards, Lena
This is the solution:
Nullpointer exception in HBase MapReduce
The logs were misleading (for me). HBase or Hive was not able to resolve the NameNode.
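Since the underlying problem was name resolution, a minimal, hypothetical sanity check along those lines (the NameNode hostname is whatever fs.default.name / fs.defaultFS in core-site.xml points at, which is not shown here) would be:
# confirm the VM's own FQDN resolves, then run the same check for the NameNode host from core-site.xml
hostname -f
getent hosts $(hostname -f)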

Create backup of HBase data on S3 and then restore it

I have an HBase cluster running on Amazon EC2 nodes, and I want to create a backup of my HBase table, so I came up with this tool. I was able to create a backup of the table dummy on S3 using the following command:
java com.bizosys.oneline.maintenance.HBaseBackup mode=backup.full backup.folder=s3://mybucket/ tables=dummy
But when I tried to restore the same data to another table (model), it failed with the following:
13/10/24 10:52:52 WARN mapred.FileOutputCommitter: Output path is null in cleanup
13/10/24 10:52:52 WARN mapred.LocalJobRunner: job_local_0002
java.lang.NullPointerException
at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveBlock(Jets3tFileSystemStore.java:209)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy5.retrieveBlock(Unknown Source)
at org.apache.hadoop.fs.s3.S3InputStream.blockSeekTo(S3InputStream.java:160)
at org.apache.hadoop.fs.s3.S3InputStream.read(S3InputStream.java:119)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:50)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:522)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
13/10/24 10:52:53 INFO mapred.JobClient: Job complete: job_local_0002
13/10/24 10:52:53 INFO mapred.JobClient: Counters: 0
Error in Job completetion Params
tablename inputputdir
model s3://mybucket/Wed_Oct_23_19_45_49_IST_2013/model
Access Failure to s3://mybucket/Wed_Oct_23_19_45_49_IST_2013/model , tries=1
The restore command I used was:
java com.bizosys.oneline.maintenance.HBaseBackup mode=restore backup.folder=s3://mybucket/Wed_Oct_23_19_45_49_IST_2013 tables="model"
FYI, please don't suggest installing HBase and doing the backup on EMR. I know about that option, but for certain reasons I am not using it.

How to load a delimited file with empty fields in Pig

I am using the command below to load the file. When I try to DUMP or ILLUSTRATE the loaded data, it fails with the error below. I have checked the sanity of the data: each line contains the correct number of delimiters, but when a field is empty the next delimiter immediately follows. I tried loading the single sample line below, and it does not work.
hs_2_inr = LOAD 'hs_2_inr.dat' USING PigStorage('^') as ( year:chararray, country:chararray, s_no:chararray, hs_8:chararray, hs_8_desc:chararray, prevyr_inr:chararray, curyr_inr:chararray, growth:chararray, dummy:chararray);
Here is the sample data:
1997^BOTSWANA^1.^10063001^*RICE PARBOILED^^2.43^^
Below is the exception:
2013-06-30 21:02:23,015 [main] ERROR org.apache.pig.pen.AugmentBaseDataVisitor - No (valid) input data found!
java.lang.RuntimeException: No (valid) input data found!
at org.apache.pig.pen.AugmentBaseDataVisitor.visit(AugmentBaseDataVisitor.java:583)
at org.apache.pig.newplan.logical.relational.LOLoad.accept(LOLoad.java:229)
at org.apache.pig.pen.util.PreOrderDepthFirstWalker.depthFirst(PreOrderDepthFirstWalker.java:82)
at org.apache.pig.pen.util.PreOrderDepthFirstWalker.depthFirst(PreOrderDepthFirstWalker.java:84)
at org.apache.pig.pen.util.PreOrderDepthFirstWalker.walk(PreOrderDepthFirstWalker.java:66)
at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
at org.apache.pig.pen.ExampleGenerator.getExamples(ExampleGenerator.java:180)
at org.apache.pig.PigServer.getExamples(PigServer.java:1180)
at org.apache.pig.tools.grunt.GruntParser.processIllustrate(GruntParser.java:739)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Illustrate(PigScriptParser.java:626)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:323)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:538)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
2013-06-30 21:02:23,016 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2997: Encountered IOException. Exception
So how do I load a file with empty fields in Pig?
Your code works fine. As you mentioned in your comment, ILLUSTRATE was your problem. Per the docs, ILLUSTRATE wasn't being maintained for a while. Do not rely on it. You shouldn't need it in any non-diagnostic code anyway. Use DESCRIBE instead.
In newer Pig versions, the warning on ILLUSTRATE seems to have gone away, so it may be safe again, but I'd still rely more heavily on DESCRIBE to avoid a source of potential issues. In Pig 0.10, which I'm on, ILLUSTRATE still gave me the same error you received.
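As a minimal sketch using the relation from the question (same file and aliases as above), you can sanity-check the load with DESCRIBE and DUMP instead of ILLUSTRATE; the empty fields simply load as empty/null chararray values:
hs_2_inr = LOAD 'hs_2_inr.dat' USING PigStorage('^') AS (year:chararray, country:chararray, s_no:chararray, hs_8:chararray, hs_8_desc:chararray, prevyr_inr:chararray, curyr_inr:chararray, growth:chararray, dummy:chararray);
-- print the schema instead of asking ILLUSTRATE for sample rows
DESCRIBE hs_2_inr;
-- inspect the actual records; empty fields show up as empty/null values
DUMP hs_2_inr;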

Exporting an HSQLDB to XML using DBUnit results in null pointer errors

I'm trying to export the entire contents of my database, an HSQLDB, to XML using DBUnit, and I'm getting null pointer errors that I can't understand. I'm following the example in the FAQ:
IDatabaseConnection xmlConnection = new DatabaseConnection(conn);
IDataSet allTables = xmlConnection.createDataSet();
XmlDataSet.write(allTables, new FileOutputStream(DATABASE_PATH + ".xml"));
The null pointer error occurs on the last line. conn and DATABASE_PATH aren't null as they're both checked for that and used later in the program without a problem (exporting the database into CSV using OpenCSV, which works perfectly and exactly as expected).
The stacktrace is as follows:
org.dbunit.dataset.DataSetException: java.sql.SQLException: java.lang.NullPointerException java.lang.NullPointerException
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:243)
at org.dbunit.database.DatabaseDataSet.getTableNames(DatabaseDataSet.java:272)
at org.dbunit.database.DatabaseDataSet.createIterator(DatabaseDataSet.java:258)
at org.dbunit.dataset.AbstractDataSet.iterator(AbstractDataSet.java:189)
at org.dbunit.dataset.stream.DataSetProducerAdapter.<init>(DataSetProducerAdapter.java:63)
at org.dbunit.dataset.xml.XmlDataSetWriter.write(XmlDataSetWriter.java:128)
at org.dbunit.dataset.xml.XmlDataSet.write(XmlDataSet.java:104)
at org.dbunit.dataset.xml.XmlDataSet.write(XmlDataSet.java:91)
at pms.DatabaseExporter.exportToXML(DatabaseExporter.java:181)
at pms.DatabaseExporter.main(DatabaseExporter.java:301)
Caused by: java.sql.SQLException: java.lang.NullPointerException java.lang.NullPointerException
at org.hsqldb.jdbc.Util.sqlException(Util.java:224)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1830)
at org.hsqldb.jdbc.JDBCStatement.executeQuery(JDBCStatement.java:181)
at org.hsqldb.jdbc.JDBCDatabaseMetaData.execute(JDBCDatabaseMetaData.java:6150)
at org.hsqldb.jdbc.JDBCDatabaseMetaData.getTables(JDBCDatabaseMetaData.java:3170)
at org.dbunit.database.DefaultMetadataHandler.getTables(DefaultMetadataHandler.java:137)
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:199)
... 9 more
Caused by: org.hsqldb.HsqlException: java.lang.NullPointerException
at org.hsqldb.error.Error.error(Error.java:108)
at org.hsqldb.result.Result.newErrorResult(Result.java:1069)
at org.hsqldb.StatementDMQL.execute(StatementDMQL.java:192)
at org.hsqldb.Session.executeCompiledStatement(Session.java:1315)
at org.hsqldb.Session.executeDirectStatement(Session.java:1206)
at org.hsqldb.Session.execute(Session.java:990)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1822)
... 14 more
Caused by: java.lang.NullPointerException
at org.hsqldb.types.CharacterType.compare(CharacterType.java:418)
at org.hsqldb.index.IndexAVL.compareRowForInsertOrDelete(IndexAVL.java:617)
at org.hsqldb.index.IndexAVLMemory.insert(IndexAVLMemory.java:214)
at org.hsqldb.persist.RowStoreAVL.indexRow(RowStoreAVL.java:171)
at org.hsqldb.persist.RowStoreAVLHybridExtended.indexRow(RowStoreAVLHybridExtended.java:99)
at org.hsqldb.Table.insertSys(Table.java:2625)
at org.hsqldb.dbinfo.DatabaseInformationMain.SYSTEM_TABLES(DatabaseInformationMain.java:2353)
at org.hsqldb.dbinfo.DatabaseInformationMain.generateTable(DatabaseInformationMain.java:348)
at org.hsqldb.dbinfo.DatabaseInformationFull.generateTable(DatabaseInformationFull.java:379)
at org.hsqldb.dbinfo.DatabaseInformationMain.setStore(DatabaseInformationMain.java:507)
at org.hsqldb.persist.PersistentStoreCollectionSession.getStore(PersistentStoreCollectionSession.java:138)
at org.hsqldb.Table.getRowStore(Table.java:2817)
at org.hsqldb.RangeVariable$RangeIteratorMain.<init>(RangeVariable.java:939)
at org.hsqldb.RangeVariable$RangeIteratorMain.<init>(RangeVariable.java:917)
at org.hsqldb.RangeVariable.getIterator(RangeVariable.java:770)
at org.hsqldb.QuerySpecification.buildResult(QuerySpecification.java:1293)
at org.hsqldb.QuerySpecification.getSingleResult(QuerySpecification.java:1245)
at org.hsqldb.QuerySpecification.getResult(QuerySpecification.java:1235)
at org.hsqldb.StatementQuery.getResult(StatementQuery.java:66)
at org.hsqldb.StatementDMQL.execute(StatementDMQL.java:190)
... 18 more
I've googled and couldn't find anything relating to this kind of error during export. I'm not that experienced with SQL or JDBC, so I'm hoping there's enough information in the stack trace for someone more knowledgeable to tell me what's going wrong. If there's some other library that would be better for my needs, I have no problem switching; the only thing I need right now is export/import with XML, so I'm not using DBUnit for anything else. Anyway, if anybody can tell me what's going wrong, or whether I ought to be using something else, I'd really appreciate it.
This is a bug in system table creation in that particular version of HSQLDB, which was spotted and corrected recently. You can try the updated HSQLDB jar from http://hsqldb.org/support/