In H2 (client/server mode), during high load, a 'Lob not found' exception occurs even though the blob is available

See the trace below:
Caused by: java.io.IOException: org.h2.message.DbException: General error: "java.lang.RuntimeException: Lob not found: 12603" [50000-181]
at org.h2.message.DbException.convertToIOException(DbException.java:364)
at org.h2.store.LobStorageRemoteInputStream.read(LobStorageRemoteInputStream.java:73)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2265)
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2278)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2749)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:779)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:279)
at org.apache.ode.scheduler.simple.JdbcDelegate.dequeueImmediate(JdbcDelegate.java:214)
... 12 more
Caused by: org.h2.message.DbException: General error: "java.lang.RuntimeException: Lob not found: 12603" [50000-181]
at org.h2.message.DbException.convert(DbException.java:283)
at org.h2.engine.SessionRemote.done(SessionRemote.java:629)
at org.h2.engine.SessionRemote.readLob(SessionRemote.java:778)
at org.h2.store.LobStorageRemoteInputStream.read(LobStorageRemoteInputStream.java:71)
... 21 more
Caused by: org.h2.jdbc.JdbcSQLException: General error: "java.lang.RuntimeException: Lob not found: 12603" [50000-181]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.server.TcpServerThread.sendError(TcpServerThread.java:221)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:161)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: Lob not found: 12603
at org.h2.message.DbException.throwInternalError(DbException.java:242)
at org.h2.store.LobStorageMap.getInputStream(LobStorageMap.java:236)
at org.h2.server.TcpServerThread.process(TcpServerThread.java:454)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:159)
... 1 more
The blob exists; I have verified that. The exception no longer occurred after executing SET MAX_LENGTH_INPLACE_LOB 2048 (the default value was 128). I assume that with this property set to a higher value, blobs smaller than 2048 bytes are stored inline in the column, which prevents the exception. But can anyone explain why the exception occurs under heavy load with the default value? If blobs are stored separately, why does H2 fail to retrieve them (assuming this is the cause of the exception)?
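For reference, here is roughly how I apply the workaround over JDBC; a minimal sketch with placeholder connection details (URL, user, password), not the application's actual code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RaiseInplaceLobLimit {
    public static void main(String[] args) throws Exception {
        // Placeholder H2 server URL and credentials; adjust to your setup.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:9092/~/testdb", "sa", "");
             Statement st = conn.createStatement()) {
            // LOBs up to 2048 bytes are then stored inline in the row,
            // avoiding the remote LOB lookup that fails under load.
            st.execute("SET MAX_LENGTH_INPLACE_LOB 2048");
        }
    }
}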

I had the same issue.
After some searching I found the h2-lob-issue GitHub page, which points to this problem.
So if you upgrade to 1.4.183, this issue is fixed.
By the way, which version of H2 are you using?


How to find a failing batch element in a WebSphere Commerce error

When I run buildindex in my WebSphere application, I get this error in the buildindex log:
[2021/05/10 15:41:57:590 GMT] I Data import pre-processing completed in 0.389 seconds for table TI_CAT_EXTENDED_41060.
[2021/05/10 15:41:57:591 GMT] I /opt/IBM/WebSphere/CommerceServer80/instances/auth/search/pre-processConfig/MC_41060/DB2/wc-dataimport-preprocess-catentry-metainf.xml
[2021/05/10 15:41:57:591 GMT] I
Table name: TI_X_CATENT_META_INF_410600
Fetch size: 500
Batch size: 500
[2021/05/10 15:41:58:048 GMT] I Error for batch element #415: DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null, DRIVER=4.19.77
[2021/05/10 15:41:58:048 GMT] I SQL: SELECT CATENTRY_ID, TITLE, TITLE_KEYWORDS, SHORT_DESC, SHORT_DESC_KEYWORDS, LONG_DESC, LONG_DESC_KEYWORDS, LOCALE FROM X_CATENT_META_INF WHERE STORE_ID = 41006
[2021/05/10 15:41:58:087 GMT] I
The program exiting with exit code: 1.
Data import pre-processing was unsuccessful. An unrecoverable error has occurred.
[2021/05/10 15:41:58:091 GMT] E com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain:handleExecutionException Exception message: CWFDIH0002: An SQL exception was caught. The following error occurred: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null., stack trace: com.ibm.commerce.foundation.dataimport.exception.DataImportSystemException: CWFDIH0002: An SQL exception was caught. The following error occurred: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null.
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.processDataConfig(DataImportPreProcessorMain.java:1515)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.execute(DataImportPreProcessorMain.java:1331)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.main(DataImportPreProcessorMain.java:534)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
at java.lang.reflect.Method.invoke(Method.java:620)
at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:280)
Caused by: com.ibm.db2.jcc.am.BatchUpdateException: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null
at com.ibm.db2.jcc.am.b4.a(b4.java:475)
at com.ibm.db2.jcc.am.Agent.endBatchedReadChain(Agent.java:414)
at com.ibm.db2.jcc.am.ki.a(ki.java:5342)
at com.ibm.db2.jcc.am.ki.c(ki.java:4929)
at com.ibm.db2.jcc.am.ki.executeBatch(ki.java:3045)
at com.ibm.commerce.foundation.dataimport.preprocess.AbstractDataPreProcessor.populateTable(AbstractDataPreProcessor.java:373)
at com.ibm.commerce.foundation.dataimport.preprocess.StaticAttributeDataPreProcessor.process(StaticAttributeDataPreProcessor.java:461)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.processDataConfig(DataImportPreProcessorMain.java:1482)
... 7 more
The exception seems clear, but I can't identify what element #415 in the batch is. Even the log doesn't help, because it doesn't point to another, more detailed log. Do you have any suggestions for finding it?
Thanks to the comment from user #mao, I followed this link:
The failing table first must be identified. Enable more detailed tracing for di-preprocess:
Navigate to:
WC_installdir/instances/instance_name/xml/config/dataimport
and open the logging.properties file. Find all instances of INFO and change them to FINEST. Optionally increase the size of the log file and the number of historical log files while editing this file.
Thanks to this suggestion, I re-ran the buildindex process and found that Solr was incorrectly grouping fields from the original table, producing a field too long for the destination column and causing the error.
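For anyone who wants the per-element errors programmatically, the hint about getNextException() in the message corresponds to the following plain-JDBC pattern; a rough sketch that only applies where you control the executeBatch() call yourself (WebSphere Commerce runs the batch internally), with hypothetical class and method names:

import java.sql.BatchUpdateException;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchErrorDump {
    static void runBatch(PreparedStatement ps) throws SQLException {
        try {
            ps.executeBatch();
        } catch (BatchUpdateException bue) {
            // One status entry per batched element (Statement.EXECUTE_FAILED
            // marks the failing ones, e.g. element #415).
            int[] counts = bue.getUpdateCounts();
            System.err.println("Batch elements reported: " + counts.length);
            // Walk the chained exceptions to see each element's SQL error.
            SQLException next = bue.getNextException();
            int i = 0;
            while (next != null) {
                System.err.println("Chained error " + (++i)
                        + ": SQLCODE=" + next.getErrorCode()
                        + " SQLSTATE=" + next.getSQLState()
                        + " - " + next.getMessage());
                next = next.getNextException();
            }
        }
    }
}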

repast.simphony.ui.GUIScheduleRunner error message

I'm a new Repast user, learning to run the mesoFON model. I get this error message. What is the problem?
I'm using Eclipse IDE 2018-09.
FATAL [Thread-5] 11:34:18,767 repast.simphony.ui.GUIScheduleRunner -
RunTimeException when running the schedule
Current tick (1.0)
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at repast.simphony.engine.schedule.DynamicTargetAction.execute(DynamicTargetAction.java:72)
at repast.simphony.engine.schedule.DefaultAction.execute(DefaultAction.java:38)
at repast.simphony.engine.schedule.ScheduleGroup.executeList(ScheduleGroup.java:205)
at repast.simphony.engine.schedule.ScheduleGroup.execute(ScheduleGroup.java:231)
at repast.simphony.engine.schedule.Schedule.execute(Schedule.java:352)
at repast.simphony.ui.GUIScheduleRunner$ScheduleLoopRunnable.run(GUIScheduleRunner.java:52)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.reflect.InvocationTargetException
at meso_FON.application.Environment$$FastClassByCGLIB$$fd509841.invoke(<generated>)
at net.sf.cglib.reflect.FastMethod.invoke(FastMethod.java:53)
at repast.simphony.engine.schedule.DynamicTargetAction.execute(DynamicTargetAction.java:69)
... 6 more
Caused by: java.lang.IllegalArgumentException: Comparison method violates its general contract!
at java.base/java.util.TimSort.mergeLo(TimSort.java:781)
at java.base/java.util.TimSort.mergeAt(TimSort.java:518)
at java.base/java.util.TimSort.mergeCollapse(TimSort.java:448)
at java.base/java.util.TimSort.sort(TimSort.java:245)
at java.base/java.util.Arrays.sort(Arrays.java:1515)
at java.base/java.util.ArrayList.sort(ArrayList.java:1749)
at java.base/java.util.Collections.sort(Collections.java:177)
at org.khelekore.prtree.MinMaxNodeGetter.<init>(MinMaxNodeGetter.java:29)
at org.khelekore.prtree.LeafBuilder.getMM(LeafBuilder.java:69)
at org.khelekore.prtree.LeafBuilder.buildLeafs(LeafBuilder.java:34)
at org.khelekore.prtree.PRTree.load(PRTree.java:65)
at meso_FON.application.Environment.getPRTree(Environment.java:423)
at meso_FON.application.Environment.queryPRTree(Environment.java:234)
... 9 more
It appears that there is an issue with a mesoFON-specific method call. I'd suggest reaching out to the mesoFON model developers directly to see if they can help.
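The root cause in the trace, "Comparison method violates its general contract!", usually means a Comparator used while building the PRTree is not transitive or not symmetric. A rough illustration of the kind of comparator that can trigger it on large inputs, together with a contract-respecting equivalent (the class and data below are made up for illustration, not taken from the mesoFON code):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class ComparatorContractDemo {
    // Broken: never returns 0 and is not symmetric for equal keys, so
    // compare(a, b) and compare(b, a) can both return 1. On large inputs
    // TimSort may detect this and throw the exception seen above.
    static final Comparator<double[]> BROKEN =
            (a, b) -> a[0] < b[0] ? -1 : 1;

    // Correct: consistent, symmetric, and transitive for the same data.
    static final Comparator<double[]> CORRECT =
            Comparator.comparingDouble(p -> p[0]);

    public static void main(String[] args) {
        Random r = new Random(42);
        List<double[]> points = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            // Many duplicate keys make the violation easier to hit.
            points.add(new double[] { r.nextInt(10) });
        }
        points.sort(CORRECT);   // always safe
        // points.sort(BROKEN); // may throw IllegalArgumentException
    }
}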

OrientDB fails to synchronize Lucene index

I am running a large integration test suite using an embedded OrientDB server, with cleanup after every test. However, at some point the tests fail because some full-text indexes have been deleted while another thread is trying to access them. As a result I receive:
Exception in thread "Thread-11" java.lang.RuntimeException: java.io.FileNotFoundException: _2.fdt
at org.apache.lucene.search.ControlledRealTimeReopenThread.run(ControlledRealTimeReopenThread.java:247)
Caused by: java.io.FileNotFoundException: _2.fdt
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:261)
at org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:141)
at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:529)
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502)
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:506)
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:616)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:370)
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176)
at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253)
at org.apache.lucene.search.ControlledRealTimeReopenThread.run(ControlledRealTimeReopenThread.java:245)
Does anyone know how to fix this problem?

HSQLDB throws an 'Assert failed' exception and a file I/O error on the db.script.new file during CHECKPOINT

Our application is a Java-based desktop application that downloads binary data from a source, parses it, and adds it to an HSQLDB database. When downloading from the sources individually, the application works perfectly. But when doing the same from multiple sources simultaneously, with each source in its own thread, I get an error of
java.sql.SQLException: Assert failed: java.lang.ArrayIndexOutOfBoundsException: 23 in statement [CHECKPOINT]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source)
or sometimes,
java.sql.SQLException: Assert failed: java.lang.ArrayIndexOutOfBoundsException: 1016 in statement [CHECKPOINT]
followed by
java.sql.SQLException: File input/output error: C:\ProgramData\test\data\database\db.script.new in statement [CHECKPOINT]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source)
Java: 1.8;
HSQL version: 1.8.10
We are not in a position to migrate HSQLDB to the latest version, for various reasons.
HSQL Properties:
hsqldb.script_format=0
runtime.gc_interval=0
sql.enforce_strict_size=false
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
hsqldb.log_size=200
modified=yes
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0
Any help or hint will be appreciated.
This is a 7-year-old version, which is not ideal for multi-threaded usage.
The simple solution is to perform the database updates from a single thread. You can retrofit your multi-threaded application with a synchronized block, on a singleton object, around the code that performs the database update.
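A rough sketch of that retrofit; the class, table, and method names here are placeholders, not from your application:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class SerializedDbWriter {
    // Single shared monitor: every downloader thread funnels its database
    // update through this lock, so HSQLDB 1.8 only sees one writer at a time.
    private static final Object DB_LOCK = new Object();

    // Hypothetical insert; table and column names are placeholders.
    public static void insertParsedData(Connection conn, byte[] payload)
            throws SQLException {
        synchronized (DB_LOCK) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO parsed_data (content) VALUES (?)")) {
                ps.setBytes(1, payload);
                ps.executeUpdate();
            }
        }
    }
}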

Hive with Tez, No input paths specified in job

I had been using hadoop-0.20.x and hive-0.11.0. Regarding Hive queries: with that configuration everything was good and working fine.
Now we have upgraded to hadoop-2.6.x (Hadoop 2) and hive-0.14.x, and we are also using Apache Tez.
The problem is that Hadoop works as before, but Hive SQL queries don't.
The query below works fine on the older versions but throws errors on the upgraded versions:
QUERY : SELECT abc.property_name, xyz.date, xyz.time, xyz.value_as_number, xyz.value_units FROM dbname.xyz JOIN dbname.abc ON (xyz.id = abc.src_id) WHERE xyz.person_id=138312;
EXCEPTION:
INFO : Session is already open
INFO : Tez session was closed. Reopening...
INFO : Session re-established.
INFO :
INFO : Status: Running (Executing on YARN cluster with App id application_1435524970199_0035)
INFO : Map 1: -/- Map 2: -/-
ERROR : Status: Failed
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1435524970199_0035_1_00, diagnostics=[Vertex vertex_1435524970199_0035_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: concept initializer failed, vertex=vertex_1435524970199_0035_1_00 [Map 1], java.io.IOException: No input paths specified in job
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getInputPaths(HiveInputFormat.java:318)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:328)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:130)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
]
ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1435524970199_0035_1_01, diagnostics=[Vertex vertex_1435524970199_0035_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: observation initializer failed, vertex=vertex_1435524970199_0035_1_01 [Map 2], java.io.IOException: No input paths specified in job
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getInputPaths(HiveInputFormat.java:318)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:328)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:130)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
]
ERROR : DAG failed due to vertex failure. failedVertices:2 killedVertices:0
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=2)
The exception says no input paths are specified. I understand how to solve that in a Hadoop MapReduce program, but how do we do it with a Hive query? In any case, I don't think this is the same issue.
To narrow it down, I used both the Hive shell and the Beeline shell: the Hive shell returned the expected output, but Beeline returned the same exception as above.
The curious part is that a query on an individual table works fine, but when I try the JOIN, it throws the above exception.
I have gathered that Apache Tez has some impact on my query. Can someone suggest a solution or point me to a Tez reference, so I can read up and rewrite the query accordingly? Thanks.
It worked after disabling Apache Tez.
Looks like Apache Tez isn't stable yet.
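For reference, disabling Tez for a session normally means switching Hive's execution engine back to MapReduce, assuming the standard hive.execution.engine property (it can also be set permanently in hive-site.xml):
set hive.execution.engine=mr;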