Hybris Solr indexing throws NullPointerException

I am executing a full Solr indexing in Hybris, and I am getting the error below. Can anyone help?
Indexing failed.
16.03.2021 13:43:42: ERROR: Caught throwable null
java.lang.NullPointerException
at de.hybris.platform.solrfacetsearch.solr.impl.AbstractSolrSearchProvider.createSynonyms(AbstractSolrSearchProvider.java:382)
at de.hybris.platform.solrfacetsearch.solr.impl.AbstractSolrSearchProvider.createSynonymsForLanguages(AbstractSolrSearchProvider.java:363)
at de.hybris.platform.solrfacetsearch.solr.impl.SolrStandaloneSearchProvider.exportConfig(SolrStandaloneSearchProvider.java:266)
at de.hybris.platform.solrfacetsearch.indexer.listeners.IndexerOperationListener.afterPrepareContext(IndexerOperationListener.java:87)
at de.hybris.platform.solrfacetsearch.indexer.impl.DefaultIndexerContextFactory.executeAfterPrepareListeners(DefaultIndexerContextFactory.java:168)
at de.hybris.platform.solrfacetsearch.indexer.impl.DefaultIndexerContextFactory.prepareContext(DefaultIndexerContextFactory.java:97)
at de.hybris.platform.solrfacetsearch.indexer.strategies.impl.AbstractIndexerStrategy.doExecute(AbstractIndexerStrategy.java:147)
at de.hybris.platform.solrfacetsearch.indexer.strategies.impl.AbstractIndexerStrategy.execute(AbstractIndexerStrategy.java:116)
...
Thanks

It looks like some synonym configuration is not correct. You can try running the indexing in debug mode, or check the code; if there is a logging option there, you can enable it in the HAC to get more detail.
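As a hedged starting point, assuming the NPE in createSynonyms comes from a synonym item with a null field, a snippet like the following can list suspect rows from the HAC scripting console (Groovy accepts this Java-style syntax). The type and attribute names SolrSynonymConfig, synonymFrom, and synonymTo are assumptions taken from the solrfacetsearch extension's items.xml and should be verified against your hybris version:

import de.hybris.platform.core.Registry;
import de.hybris.platform.servicelayer.search.FlexibleSearchService;
import de.hybris.platform.servicelayer.search.SearchResult;

// Hedged sketch: list synonym entries with missing fields. SolrSynonymConfig,
// synonymFrom, and synonymTo are assumptions based on the solrfacetsearch
// items.xml; check them against your hybris version before relying on this.
FlexibleSearchService fss =
    (FlexibleSearchService) Registry.getApplicationContext().getBean("flexibleSearchService");
SearchResult<Object> result = fss.search(
    "SELECT {pk} FROM {SolrSynonymConfig} WHERE {synonymFrom} IS NULL OR {synonymTo} IS NULL");
for (Object row : result.getResult()) {
    System.out.println(row); // any hit here is a candidate for the null in createSynonyms
}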

Related

Presto - Unable to enable performance tuning

I'm getting the errors below after enabling performance tuning parameters.
The parameters used:
optimizer.join-reordering-strategy=AUTOMATIC
optimizer.join_distribution_type=AUTOMATIC
experimental.enable-dynamic-filtering=TRUE
I'm using Amazon EMR, Presto version: Presto CLI 0.267-amzn-1.
I'm adding these parameters to /etc/presto/conf/config.properties:
2022-07-11T11:02:36.728Z ERROR main com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
Configuration property 'optimizer.join_distribution_type' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:244)
1 error
com.google.inject.CreationException: Unable to create injector, see the following errors:
Configuration property 'optimizer.join_distribution_type' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:244)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:159)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:106)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.facebook.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:251)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:143)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:85)
2022-07-11T11:02:42.674Z INFO main com.facebook.airlift.log.Logging Disabling stderr output
Any idea how to fix this issue?
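For what it's worth, a "Configuration property ... was not used" error usually means the name is not a recognized config-file property. A minimal config.properties sketch, assuming the underscore spelling is the problem (in prestodb, join-distribution-type is the config-file property, while join_distribution_type is the session-property spelling; verify against the docs for your 0.267 build):

# config.properties - hedged sketch, assuming dashed config-property names
optimizer.join-reordering-strategy=AUTOMATIC
join-distribution-type=AUTOMATIC
experimental.enable-dynamic-filtering=true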

Crawling Jira with ManifoldCF and Solr - String index out of range

I am using ManifoldCF v2.7.1 and Solr v5.2.1, and I am trying to crawl Jira using the Jira connector. I am getting the following error in ManifoldCF:
Error: Repeated service interruptions - failure processing document:
Error from server at (servername:port/solr/jira): String index out of range: -11
Note: I removed my server and port info from the error message
One of the error logs from Solr is showing the following at the top of the stacktrace:
java.lang.StringIndexOutOfBoundsException: String index out of range: -11
at org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:144)
I don't know what is causing this error or how to fix it. Thanks in advance!
Turns out that there was a Jira issue with Java code written in its comments section. I think it wasn't being escaped properly by ManifoldCF. To work around this, I excluded that one problematic issue from future crawls.

ColdFusion ORM EntityToQuery NullPointerException

I am getting an error:
java.lang.NullPointerException
When I call:
<cfdump var="#posts#" top="2">
<cfset postsQuery = EntityToQuery(posts)>
Dumping posts shows an array of objects as it should, but for some reason EntityToQuery(posts) is breaking. The error message is not one of the normal ones that tell you what line it broke on, etc.; it's just the following struct:
message: [empty string]
StackTrace: java.lang.NullPointerException
TagContext: array[empty]
Type: java.lang.NullPointerException
Does anyone have any idea what could cause this? I think it's data related, but I don't know what to look for. It's only happening on one implementation of this code, not the others I'm working on.
Strangely, the errors stopped the next time the ColdFusion service was restarted.
So if anyone else has this problem, give restarting ColdFusion a try.

Exporting an HSQLDB to XML using DBUnit results in null pointer errors

I'm trying to export the entire contents of my database, an HSQLDB, to XML using DBUnit, and I'm getting null pointer errors that I can't understand. I'm following the example in the FAQ:
IDatabaseConnection xmlConnection = new DatabaseConnection(conn);
IDataSet allTables = xmlConnection.createDataSet();
XmlDataSet.write(allTables, new FileOutputStream(DATABASE_PATH + ".xml"));
The null pointer error occurs on the last line. conn and DATABASE_PATH aren't null: both are checked for that and are used later in the program without a problem (exporting the database to CSV using OpenCSV, which works exactly as expected).
The stacktrace is as follows:
org.dbunit.dataset.DataSetException: java.sql.SQLException: java.lang.NullPointerException java.lang.NullPointerException
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:243)
at org.dbunit.database.DatabaseDataSet.getTableNames(DatabaseDataSet.java:272)
at org.dbunit.database.DatabaseDataSet.createIterator(DatabaseDataSet.java:258)
at org.dbunit.dataset.AbstractDataSet.iterator(AbstractDataSet.java:189)
at org.dbunit.dataset.stream.DataSetProducerAdapter.<init>(DataSetProducerAdapter.java:63)
at org.dbunit.dataset.xml.XmlDataSetWriter.write(XmlDataSetWriter.java:128)
at org.dbunit.dataset.xml.XmlDataSet.write(XmlDataSet.java:104)
at org.dbunit.dataset.xml.XmlDataSet.write(XmlDataSet.java:91)
at pms.DatabaseExporter.exportToXML(DatabaseExporter.java:181)
at pms.DatabaseExporter.main(DatabaseExporter.java:301)
Caused by: java.sql.SQLException: java.lang.NullPointerException java.lang.NullPointerException
at org.hsqldb.jdbc.Util.sqlException(Util.java:224)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1830)
at org.hsqldb.jdbc.JDBCStatement.executeQuery(JDBCStatement.java:181)
at org.hsqldb.jdbc.JDBCDatabaseMetaData.execute(JDBCDatabaseMetaData.java:6150)
at org.hsqldb.jdbc.JDBCDatabaseMetaData.getTables(JDBCDatabaseMetaData.java:3170)
at org.dbunit.database.DefaultMetadataHandler.getTables(DefaultMetadataHandler.java:137)
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:199)
... 9 more
Caused by: org.hsqldb.HsqlException: java.lang.NullPointerException
at org.hsqldb.error.Error.error(Error.java:108)
at org.hsqldb.result.Result.newErrorResult(Result.java:1069)
at org.hsqldb.StatementDMQL.execute(StatementDMQL.java:192)
at org.hsqldb.Session.executeCompiledStatement(Session.java:1315)
at org.hsqldb.Session.executeDirectStatement(Session.java:1206)
at org.hsqldb.Session.execute(Session.java:990)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1822)
... 14 more
Caused by: java.lang.NullPointerException
at org.hsqldb.types.CharacterType.compare(CharacterType.java:418)
at org.hsqldb.index.IndexAVL.compareRowForInsertOrDelete(IndexAVL.java:617)
at org.hsqldb.index.IndexAVLMemory.insert(IndexAVLMemory.java:214)
at org.hsqldb.persist.RowStoreAVL.indexRow(RowStoreAVL.java:171)
at org.hsqldb.persist.RowStoreAVLHybridExtended.indexRow(RowStoreAVLHybridExtended.java:99)
at org.hsqldb.Table.insertSys(Table.java:2625)
at org.hsqldb.dbinfo.DatabaseInformationMain.SYSTEM_TABLES(DatabaseInformationMain.java:2353)
at org.hsqldb.dbinfo.DatabaseInformationMain.generateTable(DatabaseInformationMain.java:348)
at org.hsqldb.dbinfo.DatabaseInformationFull.generateTable(DatabaseInformationFull.java:379)
at org.hsqldb.dbinfo.DatabaseInformationMain.setStore(DatabaseInformationMain.java:507)
at org.hsqldb.persist.PersistentStoreCollectionSession.getStore(PersistentStoreCollectionSession.java:138)
at org.hsqldb.Table.getRowStore(Table.java:2817)
at org.hsqldb.RangeVariable$RangeIteratorMain.<init>(RangeVariable.java:939)
at org.hsqldb.RangeVariable$RangeIteratorMain.<init>(RangeVariable.java:917)
at org.hsqldb.RangeVariable.getIterator(RangeVariable.java:770)
at org.hsqldb.QuerySpecification.buildResult(QuerySpecification.java:1293)
at org.hsqldb.QuerySpecification.getSingleResult(QuerySpecification.java:1245)
at org.hsqldb.QuerySpecification.getResult(QuerySpecification.java:1235)
at org.hsqldb.StatementQuery.getResult(StatementQuery.java:66)
at org.hsqldb.StatementDMQL.execute(StatementDMQL.java:190)
... 18 more
I've googled and couldn't find anything relating to this kind of error during export. I'm not that experienced with SQL or JDBC, so I'm hoping there's enough info in the stack trace for someone more knowledgeable to tell me what's going wrong. If there's some other library that would suit my needs better, I have no problem switching; the only thing I need right now is export/import with XML, so I'm not using DBUnit for anything else. If anybody can tell me what's going wrong, or whether I ought to be using something else, I'd really appreciate it.
This is an error in the particular version of HSQLDB's system table creation, which was spotted and corrected recently. You can try the updated HSQLDB jar from http://hsqldb.org/support/
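If you swap in the updated jar, it's worth confirming which HSQLDB build the application actually loads. A small check using standard JDBC metadata, reusing the same conn as in the question:

// Prints the product name and version of the database engine in use,
// so you can confirm the updated HSQLDB jar is the one being picked up.
java.sql.DatabaseMetaData meta = conn.getMetaData();
System.out.println(meta.getDatabaseProductName() + " " + meta.getDatabaseProductVersion());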

Lucene Search Error Stack

I am seeing the following error when trying to search using Lucene (version 1.4.3). Any ideas as to why I could be seeing this and how to fix it?
Caused by: java.io.IOException: read past EOF
at org.apache.lucene.store.InputStream.refill(InputStream.java:154)
at org.apache.lucene.store.InputStream.readByte(InputStream.java:43)
at org.apache.lucene.store.InputStream.readVInt(InputStream.java:83)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:195)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:55)
at org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:109)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:89)
at org.apache.lucene.index.IndexReader$1.doBody(IndexReader.java:118)
at org.apache.lucene.store.Lock$With.run(Lock.java:109)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:111)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:106)
at org.apache.lucene.search.IndexSearcher.<init>(IndexSearcher.java:43)
In this same environment I also see the following error:
Caused by: java.io.IOException: Lock obtain timed out:
Lock#/tmp/lucene-3ec31395c8e06a56e2939f1fdda16c67-write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:58)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:223)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:213)
The same code works in a test environment but not in production. I cannot identify any obvious differences between the two environments.
File permissions are wrong (the process needs write permission), or you are not able to access a locked file that the current process needs.
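If a stale write lock is the culprit (for example, one left behind by a crashed process), a minimal sketch against the Lucene 1.4.x API can check for and clear it. The index path is a placeholder, and you should only unlock when you are certain no other process is writing to the index:

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UnlockIndex {
    public static void main(String[] args) throws Exception {
        // "/path/to/index" is a placeholder; point it at the production index directory.
        Directory dir = FSDirectory.getDirectory("/path/to/index", false);
        if (IndexReader.isLocked(dir)) {
            // Removes the stale write.lock; safe only if no writer is actually running.
            IndexReader.unlock(dir);
        }
        dir.close();
    }
}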