Exception while reading from the Chronicle Map file - chronicle-map

I am using version 3.8.0 and I am getting the following exception. I am using ChronicleMap without much customization; I am currently prototyping to show that ChronicleMap is a viable option for sharing data between different JVM processes on the same box. I do not see any problem in the process that creates an instance and puts entries into the ChronicleMap, but when I try to use ChronicleMap mainly as a reader, I see this exception every time.
Exception in thread "main" java.lang.AssertionError: java.lang.IllegalArgumentException: No enum constant net.openhft.chronicle.hash.serialization.impl.StopBitSizeMarshaller.{}
at net.openhft.chronicle.core.util.ObjectUtils.convertTo0(ObjectUtils.java:142)
at net.openhft.chronicle.core.util.ObjectUtils.convertTo(ObjectUtils.java:130)
at net.openhft.chronicle.wire.ValueIn.object(ValueIn.java:440)
at net.openhft.chronicle.wire.TextWire$TextValueIn.objectWithInferredType(TextWire.java:2482)
at net.openhft.chronicle.wire.TextWire$TextValueIn.typedMarshallable(TextWire.java:2290)
at net.openhft.chronicle.hash.impl.VanillaChronicleHash.readMarshallableFields(VanillaChronicleHash.java:240)
at net.openhft.chronicle.map.VanillaChronicleMap.readMarshallableFields(VanillaChronicleMap.java:107)
at net.openhft.chronicle.hash.impl.VanillaChronicleHash.readMarshallable(VanillaChronicleHash.java:225)
at net.openhft.chronicle.wire.SerializationStrategies$1.readUsing(SerializationStrategies.java:22)
at net.openhft.chronicle.wire.TextWire$TextValueIn.marshallable(TextWire.java:2228)
at net.openhft.chronicle.wire.ValueIn.object(ValueIn.java:429)
at net.openhft.chronicle.wire.TextWire$TextValueIn.objectWithInferredType(TextWire.java:2482)
at net.openhft.chronicle.wire.TextWire$TextValueIn.typedMarshallable(TextWire.java:2290)
at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1598)
at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1444)
at net.openhft.chronicle.map.ChronicleMapBuilder.recoverPersistedTo(ChronicleMapBuilder.java:1416)
at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1410)
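For reference, the reader side is opened roughly like this (a minimal sketch, not my actual code; the file path, key/value types and sizing samples are assumptions):

import net.openhft.chronicle.map.ChronicleMap;
import java.io.File;

public class ReaderProcess {
    public static void main(String[] args) throws Exception {
        // Both the writer and the reader process point at the same persisted file.
        File shared = new File("/dev/shm/shared-map.dat"); // assumed path

        ChronicleMap<String, String> map = ChronicleMap
                .of(String.class, String.class)
                .averageKey("some-key")      // sample key/value used to size the map
                .averageValue("some-value")
                .entries(10_000)
                .createOrRecoverPersistedTo(shared); // this is where the exception is thrown

        System.out.println(map.get("some-key"));
        map.close();
    }
}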

The most likely reason is that the wrong version of the chronicle-wire dependency is being used. Chronicle Map 3.8.0 is proven to work with chronicle-bom:1.11.16, which specifies Chronicle Wire version 1.3.6, but not with any older or newer version of chronicle-bom or Chronicle Wire.
Update: the new Chronicle Map 3.9.0 version shouldn't have this issue regardless of the Chronicle Wire version used.

How to generate the HTML file using Plotly Kotlin

After running the sample code, I want to generate the HTML for the plot. plot.makeFile() throws the exception below, and even when I pass a custom path there are still errors generating the HTML file. I am using:
implementation("kscience.plotlykt:plotlykt-core:0.2.0")
Exception in thread "main" java.lang.NoSuchMethodError: java.nio.file.Path.of(Ljava/lang/String;[Ljava/lang/String;)Ljava/nio/file/Path;
at kscience.plotly.PlotlyHeadersKt$systemPlotlyHeader$1.invoke(plotlyHeaders.kt:39)
at kscience.plotly.PlotlyHeadersKt$systemPlotlyHeader$1.invoke(plotlyHeaders.kt)
at kscience.plotly.HtmlKt.toHTML(html.kt:34)
at kscience.plotly.FileExportKt.makeFile(fileExport.kt:53)
at kscience.plotly.FileExportKt.makeFile$default(fileExport.kt:49)
at UndefinedKt.main(Undefined.kt:30)
at UndefinedKt.main(Undefined.kt)
The method used here was introduced in Java 11: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/file/Path.html#of(java.lang.String,java.lang.String...)
You need to use JDK 11 or newer to run it. If I remember properly, newer versions of plotly.kt won't even work with anything below JDK 11, so just use a newer JDK. If for some reason you are not able to do so, please open an issue here: https://github.com/mipt-npm/plotly.kt/issues. We can roll back to 1.8 bytecode.
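For comparison, a minimal Java sketch (file names are made up): Path.of simply does not exist before Java 11, while Paths.get has been available since Java 7, which is why code compiled against JDK 11 fails with NoSuchMethodError when run on an older JRE.

import java.nio.file.Path;
import java.nio.file.Paths;

public class PathOfDemo {
    public static void main(String[] args) {
        // Path.of(String, String...) exists only since Java 11; bytecode compiled
        // against JDK 11 but run on an older JRE fails here with NoSuchMethodError,
        // which is the error plotlykt hits.
        Path p11 = Path.of("plots", "scatter.html");

        // Equivalent call that has existed since Java 7.
        Path p7 = Paths.get("plots", "scatter.html");

        System.out.println(p11.equals(p7)); // true
    }
}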

Command Strategy Class Not found in Spark 1.3

I am using Spark 1.3 and am able to create a SparkContext. When I try to access a Cassandra DB using CassandraSQLContext, I get the error below.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/SparkStrategies$CommandStrategy
at org.apache.spark.sql.cassandra.CassandraSQLContext$$anon$1.<init>(CassandraSQLContext.scala:67)
at org.apache.spark.sql.cassandra.CassandraSQLContext.<init>(CassandraSQLContext.scala:64)
at SparkSample$.main(SparkSample.scala:28)
at SparkSample.main(SparkSample.scala)
I use the spark-cassandra connector from DataStax. I am also not able to see any documentation related to SQL execution (https://spark.apache.org/docs/1.2.0/api/java/index.html?org.apache.spark.sql.execution) for the Spark 1.3 version. Any thoughts?
First, if you look at the Cassandra connector readme, you will see that they do not support 1.3 yet. I'm sure they would accept PRs, though :)
Now, to the crux of the matter: they are using package-private pieces, so they are prone to these types of breaking changes. If you look at SparkStrategies in the 1.2 branch, you will see CommandStrategy at the bottom. However, in SparkStrategies 1.3, the last object has become DDLStrategy, which does not even look the same at a glance. So they might have removed this altogether. Your best bet is to report this to the Cassandra connector project and wait for official support of 1.3.

how to debug SIGSEGV in jvm GCTaskThread

My application is experiencing crashes in production.
The crash dump indicates a SIGSEGV has occurred in GCTaskThread.
It uses JNI, so there might be some source of memory corruption, although I can't be sure.
How can I debug this problem? I thought of using -XX:OnError..., but I am not sure what would help me debug this.
Also, can some of you give a concrete example of how JNI code can crash the GC with a SIGSEGV?
EDIT:
OS:SUSE Linux Enterprise Server 10 (x86_64)
vm_info: Java HotSpot(TM) 64-Bit Server VM (11.0-b15) for linux-amd64 JRE (1.6.0_10-b33), built on Sep 26 2008 01:10:29 by "java_re" with gcc 3.2.2 (SuSE Linux)
EDIT:
The issue stopped occurring after we disabled hyper-threading. Any thoughts?
Errors in JNI code can occur in several ways:
The program crashes during execution of a native method (most common).
The program crashes some time after returning from the native method, often during GC (not so common; see the sketch below).
Bad JNI code causes deadlocks shortly after returning from a native method (occasional).
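A hedged illustration of that second case (class, method, and library names are made up; the actual bug lives in the C implementation): if native code writes past the end of a Java array it obtained from the JVM, the heap is corrupted silently, and the SIGSEGV typically surfaces later, when a GC thread walks the damaged region.

public class NativeFill {
    static {
        System.loadLibrary("nativefill"); // hypothetical JNI library
    }

    // Suppose the C implementation does roughly:
    //   jbyte* p = (*env)->GetPrimitiveArrayCritical(env, data, 0);
    //   memset(p, 0, 1024);   // BUG: writes far past data.length == 16
    //   (*env)->ReleasePrimitiveArrayCritical(env, data, p, 0);
    // The call itself returns normally, but the object headers following the
    // array are now garbage, and a later GC cycle (GCTaskThread) crashes with
    // SIGSEGV while scanning them.
    public native void fill(byte[] data);

    public static void main(String[] args) {
        new NativeFill().fill(new byte[16]);
        System.gc(); // the crash, if any, tends to appear around a later GC
    }
}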
If you think that you have a problem with the interaction between user-written native code and the JVM (that is, a JNI problem), you can run diagnostics that help you check the JNI transitions. To invoke these diagnostics, specify the -Xcheck:jni option when you start up the JVM.
The -Xcheck:jni option activates a set of wrapper functions around the JNI functions. The wrapper functions perform checks on the incoming parameters. These checks include:
Whether the call and the call that initialized JNI are on the same thread.
Whether the object parameters are valid objects.
Whether local or global references refer to valid objects.
Whether the type of a field matches the Get<Type>Field or Set<Type>Field call.
Whether static and nonstatic field IDs are valid.
Whether strings are valid and non-null.
Whether array elements are non-null.
The types on array elements.
Please read the following links:
http://publib.boulder.ibm.com/infocenter/javasdk/v5r0/index.jsp?topic=/com.ibm.java.doc.diagnostics.50/html/jni_debug.html
http://www.oracle.com/technetwork/java/javase/clopts-139448.html#gbmtq
Use valgrind. This sounds like memory corruption. The output will be verbose, but try to isolate the report to the JNI library if possible.
Since the faulty thread seems to be GCTaskThread, did you try enabling -verbose:gc and analyzing the output (preferably using a graphical tool like Samurai)? Are you able to isolate a specific lib after examining the hs_err file?
Also, can you please provide more information on what causes the issue and if it is easily reproducible?

'Requested bean is currently in creation' on a domain object

I'm trying to migrate from Grails 1.2.2 to 1.3.6 and got the following error when trying to access a page:
Error creating bean with name 'com.example.domain.UserAccount': Requested bean is currently in creation: Is there an unresolvable circular reference?
It seems that Grails tried to instantiate UserAccount as a Spring bean (probably to be able to inject some dependencies).
Are there some constraints in Grails 1.3.x that were not relevant in 1.2.x?
Thanks & Regards,
David.
The problem was coming from a property in the UserAccount class:
Program program = new Program(user:this)
The this reference was escaping from the object before its construction had finished.
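The same pattern in plain Java (a minimal sketch; the class shapes are made up) makes the problem easier to see: the field initializer runs as part of the constructor, so the half-constructed object is handed out before it is ready, and a container wiring the two objects reports a circular reference.

class Program {
    private final UserAccount user;

    Program(UserAccount user) {
        this.user = user; // receives a UserAccount that is still being constructed
    }
}

class UserAccount {
    // Field initializers execute inside UserAccount's constructor, so 'this'
    // escapes before construction finishes. When a container (here Spring,
    // while resolving the beans) tries to build UserAccount, it ends up
    // needing UserAccount again and flags the circular reference.
    private final Program program = new Program(this);
}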
Your best bet would be to read through the release notes for the versions between your old and new versions. If that does not shed any light, you might also consider doing incremental upgrades, one version at a time... that would be a big pain, but might be the more revealing way to upgrade.
Good luck.

How to make sure Solr/Lucene won't die with java.lang.OutOfMemoryError?

I'm really puzzled why it keeps dying with java.lang.OutOfMemoryError during indexing even though it has a few GBs of memory.
Is there a fundamental reason why it needs manual tweaking of config files / JVM parameters instead of just figuring out how much memory is available and limiting itself to that? No other program except Solr ever has this kind of problem.
Yes, I can keep tweaking the JVM heap size every time such crashes happen, but this is all so backwards.
Here's the stack trace of the latest such crash, in case it is relevant:
SEVERE: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3209)
at java.lang.String.<init>(String.java:216)
at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122)
at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:169)
at org.apache.lucene.search.FieldCacheImpl$StringIndexCache.createValue(FieldCacheImpl.java:701)
at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:208)
at org.apache.lucene.search.FieldCacheImpl.getStringIndex(FieldCacheImpl.java:676)
at org.apache.lucene.search.FieldComparator$StringOrdValComparator.setNextReader(FieldComparator.java:667)
at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:94)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:245)
at org.apache.lucene.search.Searcher.search(Searcher.java:171)
at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:988)
at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884)
at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:619)
Looking at the stack trace, it looks like you are performing a search and sorting by a field. If you need to sort by a field, Lucene internally needs to load all the values of all the terms in that field into memory. If the field contains a lot of data, then it is very possible that you will run out of memory.
I'm not certain there is a steadfast way to ensure you won't run into OutOfMemoryErrors with Lucene. The problem you are facing is related to the use of the FieldCache. From the Lucene API: "Maintains caches of term values." If your terms exceed the amount of memory allocated to the JVM, you'll get the exception.
The documents are being sorted "at org.apache.lucene.search.FieldComparator$StringOrdValComparator.setNextReader(FieldComparator.java:667)", which will take up as much memory as is needed to store the terms being sorted for the index.
You'll need to review projected size of the fields that are sortable and adjust the JVM settings accordingly.
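As a minimal sketch of the kind of query that triggers this (Lucene 3.x-era API; the index path and the "title" field name are assumptions), sorting by a string field forces the FieldCache to hold every distinct term of that field for the whole index, which is the allocation shown in FieldCacheImpl$StringIndexCache.createValue in the stack trace above:

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class SortedSearch {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher =
                new IndexSearcher(IndexReader.open(FSDirectory.open(new File("/path/to/index"))));

        // Sorting by a string field populates the heap-resident FieldCache
        // with all terms of that field across the entire index.
        Sort sort = new Sort(new SortField("title", SortField.STRING));
        TopDocs hits = searcher.search(new MatchAllDocsQuery(), 10, sort);
        System.out.println("total hits: " + hits.totalHits);
        searcher.close();
    }
}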
A wild guess: the documents you are indexing are very large.
Lucene by default only indexes the first 10,000 terms of a document to avoid OutOfMemory errors; you can overcome this limit, see setMaxFieldLength.
Also, you could call optimize() and close the IndexWriter as soon as you are done with processing (see the sketch below).
A definite way is to profile and find the bottleneck =]
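A rough sketch of those suggestions using the Lucene 3.0-era IndexWriter API (the index path, limit, and field content are made up, not taken from the question):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class BoundedIndexing {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/path/to/index")),
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.LIMITED); // default cap of 10,000 terms per field

        // Raise the per-field term limit explicitly if the default is too low.
        writer.setMaxFieldLength(50000);

        Document doc = new Document();
        doc.add(new Field("body", "some very large text ...", Field.Store.NO, Field.Index.ANALYZED));
        writer.addDocument(doc);

        writer.optimize(); // merge segments, as suggested above
        writer.close();    // release resources as soon as indexing is done
    }
}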
Are you using post.jar to index the data? This jar has a bug in Solr 1.2/1.3, I think (but I don't know the details). Our company has fixed this internally, and it should also be fixed in the latest trunk (Solr 1.4/1.5).
I was using this Java:
$ java -version
java version "1.6.0"
OpenJDK Runtime Environment (build 1.6.0-b09)
OpenJDK 64-Bit Server VM (build 1.6.0-b09, mixed mode)
Which was running out of heap space, but then I upgraded to this Java:
$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
And now it works fine, on a huge dataset, with lots of term facets.
For me it worked after restarting the Tomcat server.
Navigate to C:\Bitnami\solr-4.7.2-0\apache-solr\scripts
Open up serviceinstall.bat (with Notepad++ or another program)
Either add or update the following properties: ++JvmOptions=-Xms1024M ++JvmOptions=-Xmx1024M
From the command prompt in that window, run serviceinstall.bat REMOVE
Then run serviceinstall.bat INSTALL
Hope that helps!
An old question, but since I stumbled upon it:
The string FieldCache is a lot more compact from Lucene 4.0 onwards, so a lot more fits in memory.
The FieldCache is an in-memory structure, so it cannot by itself prevent an OOME.
For fields which need sorting or faceting, one should try DocValues to overcome this problem. DocValues work with numeric and non-analyzed string values, and I presume many sorting/faceting use cases will have one of these value types.
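For example, a field intended for sorting or faceting can carry a DocValues copy at index time (a hedged sketch against the Lucene 4.x+ API; the field names and values are made up):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.SortedDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.util.BytesRef;

public class DocValuesExample {
    public static Document build() {
        Document doc = new Document();
        // Regular inverted-index field, used for searching.
        doc.add(new StringField("category", "books", Field.Store.YES));
        // DocValues copies for sorting/faceting: stored column-wise on disk
        // instead of being rebuilt in the heap-resident FieldCache.
        doc.add(new SortedDocValuesField("category", new BytesRef("books")));
        doc.add(new NumericDocValuesField("price", 1999L));
        return doc;
    }
}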