I am seeing the following error when trying to search using Lucene (version 1.4.3). Any ideas why I might be seeing this and how to fix it?
Caused by: java.io.IOException: read past EOF
at org.apache.lucene.store.InputStream.refill(InputStream.java:154)
at org.apache.lucene.store.InputStream.readByte(InputStream.java:43)
at org.apache.lucene.store.InputStream.readVInt(InputStream.java:83)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:195)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:55)
at org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:109)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:89)
at org.apache.lucene.index.IndexReader$1.doBody(IndexReader.java:118)
at org.apache.lucene.store.Lock$With.run(Lock.java:109)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:111)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:106)
at org.apache.lucene.search.IndexSearcher.<init>(IndexSearcher.java:43)
In this same environment I also see the following error:
Caused by: java.io.IOException: Lock obtain timed out:
Lock#/tmp/lucene-3ec31395c8e06a56e2939f1fdda16c67-write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:58)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:223)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:213)
The same code works in a test environment but not in production; I cannot identify any obvious differences between the two environments.
Either the file permissions are wrong (the process needs write permission on the index and lock directories) or you are not able to access a lock file that the current process needs. The "read past EOF" suggests a truncated or otherwise corrupt index, for example one copied while a writer was still open; the "Lock obtain timed out" typically means a stale write lock was left behind in /tmp by a process that died without releasing it.
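If the lock error persists after every writer has stopped, the stale lock can be checked and cleared through Lucene's own API. A minimal sketch against the Lucene 1.4.x API, assuming the index path shown is replaced with yours, and noting that unlock() is only safe when no other process is actually using the index:

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class IndexLockCheck {
    public static void main(String[] args) throws Exception {
        File indexDir = new File("/path/to/index"); // placeholder: your index directory

        // Lucene 1.4 needs read/write access to the index directory, and it keeps
        // its lock files under java.io.tmpdir (hence the /tmp path in the error).
        System.out.println("index readable: " + indexDir.canRead());
        System.out.println("index writable: " + indexDir.canWrite());

        Directory dir = FSDirectory.getDirectory(indexDir, false);
        if (IndexReader.isLocked(dir)) {
            // Safe only if no live process holds the lock; a crashed writer
            // leaves it behind and it must be cleared manually.
            IndexReader.unlock(dir);
            System.out.println("cleared stale write lock");
        }
        dir.close();
    }
}

Since the code works in test but not in production, comparing permissions on both the index directory and java.io.tmpdir between the two environments would be a good first step, as Lucene 1.4 keeps its lock files under the temp directory by default.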
I was trying to delete old build history using a Groovy script. Earlier it was working fine, and now, without any changes, I am facing the issue below:
ERROR: Build step failed with exception
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method hudson.model.Item getName
at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:175)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:137)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall.callStatic(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
at Script1.deleteBuildHistory(Script1.groovy:71)
at Script1$deleteBuildHistory.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall.callStatic(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
at Script1.run(Script1.groovy:58)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.run(GroovySandbox.java:141)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:333)
at hudson.plugins.groovy.SystemGroovy.run(SystemGroovy.java:95)
at hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:59)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1798)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Build step 'Execute system Groovy script' marked build as failure
Finished: FAILURE
In my Groovy script I am using the API "hudson.model.Hudson.instance.getItem(envVar.get("JOB_NAME"))" to get the Jenkins job. It was working earlier, but now I am facing this issue and am not sure how to resolve it. Kindly provide inputs.
You are using a rather generic way to access data from an object, which might be exploitable somehow, so it is blacklisted (or rather, not whitelisted) in the Jenkins Groovy sandbox.
You have two options here:
Just add an exception using in-process script approval
Use a less generic and therefore safer syntax like env.JOB_NAME.
I would definitely go for the second option in your case, since it has no disadvantages and is simpler than your current code.
As for why it worked before: there might have been an approval which somehow got lost (that happened to me once), or the call you are using was removed from the whitelist in an update of the Script Security plugin. A sketch of the safer approach follows below.
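For the second option inside a freestyle job's "Execute system Groovy script" step, something along these lines should work. This is a minimal sketch: the build and listener bindings are provided by the Groovy plugin's system script step, though whether each call is whitelisted may still depend on your Script Security plugin version:

// Bound variables in an "Execute system Groovy script" step:
//   build    -> hudson.model.AbstractBuild
//   listener -> hudson.model.TaskListener
// JOB_NAME comes from the build environment, so no Item.getName() call is needed.
String jobName = build.getEnvironment(listener).get("JOB_NAME");
listener.getLogger().println("Cleaning build history for job: " + jobName);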
I have a Spark Streaming job running in DSE, using DSEFS for the checkpointing directory. I see this error in the debug log file. How can I resolve it?
ERROR [dsefs-netty-worker-5] 2017-12-01 05:23:02,679 DSE-FS RestServerHandler.scala:126 - [id: 0x9964e082, /<>:58874 :> 0.0.0.0/0.0.0.0:5598] Streaming data to remote end failed.
java.io.IOException: Block not found a3859f30-aa23-11e7-80b9-4b8bdaf197cd
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$33$1.apply(BlockService.scala:706) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$33$1.apply(BlockService.scala:703) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.10.6.jar:na]
at com.datastax.bdp.fs.exec.SameThreadExecutionContext$class.executeInSameThread(SameThreadExecutionContext.scala:24) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.exec.SameThreadExecutionContext$class.execute(SameThreadExecutionContext.scala:33) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.exec.SerialExecutionContextProvider$$anon$5$$anon$2.execute(SerialExecutionContextProvider.scala:24) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) [scala-library-2.10.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) ~[scala-library-2.10.6.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) ~[scala-library-2.10.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) ~[scala-library-2.10.6.jar:na]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$1$1.apply(BlockService.scala:60) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$1$1.apply(BlockService.scala:60) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.10.6.jar:na]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
This error means the DSEFS server failed to find the metadata of the data block in the dsefs.blocks Cassandra table. The ids of a file's blocks are stored in the dsefs.block_offsets table, and they reference blocks stored in dsefs.blocks. If a row exists in dsefs.block_offsets and points to a block id that is absent from dsefs.blocks, you get this error when reading the file. A query sketch for checking this follows below.
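You can check for such an orphaned reference directly in cqlsh using the block id from the log. This is only a sketch: the dsefs table layouts are internal and the id column name here is an assumption, so inspect the actual schema first with DESCRIBE TABLE dsefs.blocks:

-- "id" is a hypothetical column name; verify it against the real schema
SELECT * FROM dsefs.blocks WHERE id = a3859f30-aa23-11e7-80b9-4b8bdaf197cd;

If that returns no row while dsefs.block_offsets still references the id, you are looking at exactly the inconsistency described above.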
This error should not happen under normal circumstances, and it means the filesystem metadata somehow got into an inconsistent state. This may be a bug in the DSEFS implementation, the result of data loss caused by setting up the dsefs keyspace with an insufficient replication factor, or the result of a write operation that did not finish successfully and was applied only partially.
Please make sure you set the dsefs keyspace RF to at least 3 and run nodetool repair to avoid accidental data loss or unavailability of some DSEFS metadata, for example as sketched below.
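A minimal example of those two steps; the replication strategy and the data centre name DC1 are assumptions, so adjust both to your topology. In cqlsh:

ALTER KEYSPACE dsefs WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

Then from a shell on each node, repair just that keyspace:

nodetool repair dsefs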
If this doesn't help, please contact me directly or through DataStax technical support and provide more details, including logs from the time before the error and more context on what the job was doing when the failure occurred.
Sometimes I get this error when I try to connect to my database in B1if:
vBIU.errhdlg='' exceptionmsg='com.sap.b1i.xcellerator.XcelleratorException: XCE001 Nested exception:
com.sap.b1i.bizprocessor.BizProcException: BPE001 Nested exception:
com.sap.b1i.xcellerator.XcelleratorException: XCE001 Nested exception:
com.sap.b1i.xcellerator.XcelleratorException: XCE001 Nested exception: java.lang.RuntimeException: Connect to
Business One failed. (-119) Database server type not supported -b1Server=SRV-B1-HYPPROD,
company=SBO_HYPFR_PROD, licenseServer=HYP-B1-LIC:30000, dbType=7, dbUser=sa, userName=B1i-' msglogexcl='false'
handover2CentralSrv='' MessageLog='true' msglogdbop='insert'>
The question is: how do I fix that?
The error happens when B1i doesn't get a connection to the DB. That by itself should not be a problem, because B1i retries the process after a minute, and by then the connection should be there.
We fixed this by installing the 64-bit version, B1DIAPI.x64. I'm not sure why that worked.
Note that you need to update the SLD entries for your database and SBO-COMMON: the jcoPath will likely be different, so replace "Program Files (x86)" with just "Program Files".
I am running a large integration test suite against an embedded OrientDB server, with cleanup after every test. At some point the tests started failing because some full-text (Lucene) indexes had been deleted while another thread was trying to access them. As a result I received:
Exception in thread "Thread-11" java.lang.RuntimeException: java.io.FileNotFoundException: _2.fdt
at org.apache.lucene.search.ControlledRealTimeReopenThread.run(ControlledRealTimeReopenThread.java:247)
Caused by: java.io.FileNotFoundException: _2.fdt
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:261)
at org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:141)
at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:529)
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502)
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:506)
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:616)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:370)
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:170)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:118)
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58)
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176)
at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253)
at org.apache.lucene.search.ControlledRealTimeReopenThread.run(ControlledRealTimeReopenThread.java:245)
Does anyone know how to fix this problem?
I'm trying to export the entire contents of my database, an HSQLDB, to XML using DBUnit, and I'm getting null pointer errors that I can't understand. I'm following the example in the FAQ:
// Export every table reachable through the connection into a DbUnit XML file
IDatabaseConnection xmlConnection = new DatabaseConnection(conn);
IDataSet allTables = xmlConnection.createDataSet();
XmlDataSet.write(allTables, new FileOutputStream(DATABASE_PATH + ".xml"));
The null pointer error occurs on the last line. conn and DATABASE_PATH aren't null; both are checked for that and are used later in the program without a problem (exporting the database to CSV using OpenCSV, which works perfectly and exactly as expected).
The stacktrace is as follows:
org.dbunit.dataset.DataSetException: java.sql.SQLException: java.lang.NullPointerException java.lang.NullPointerException
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:243)
at org.dbunit.database.DatabaseDataSet.getTableNames(DatabaseDataSet.java:272)
at org.dbunit.database.DatabaseDataSet.createIterator(DatabaseDataSet.java:258)
at org.dbunit.dataset.AbstractDataSet.iterator(AbstractDataSet.java:189)
at org.dbunit.dataset.stream.DataSetProducerAdapter.<init>(DataSetProducerAdapter.java:63)
at org.dbunit.dataset.xml.XmlDataSetWriter.write(XmlDataSetWriter.java:128)
at org.dbunit.dataset.xml.XmlDataSet.write(XmlDataSet.java:104)
at org.dbunit.dataset.xml.XmlDataSet.write(XmlDataSet.java:91)
at pms.DatabaseExporter.exportToXML(DatabaseExporter.java:181)
at pms.DatabaseExporter.main(DatabaseExporter.java:301)
Caused by: java.sql.SQLException: java.lang.NullPointerException java.lang.NullPointerException
at org.hsqldb.jdbc.Util.sqlException(Util.java:224)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1830)
at org.hsqldb.jdbc.JDBCStatement.executeQuery(JDBCStatement.java:181)
at org.hsqldb.jdbc.JDBCDatabaseMetaData.execute(JDBCDatabaseMetaData.java:6150)
at org.hsqldb.jdbc.JDBCDatabaseMetaData.getTables(JDBCDatabaseMetaData.java:3170)
at org.dbunit.database.DefaultMetadataHandler.getTables(DefaultMetadataHandler.java:137)
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:199)
... 9 more
Caused by: org.hsqldb.HsqlException: java.lang.NullPointerException
at org.hsqldb.error.Error.error(Error.java:108)
at org.hsqldb.result.Result.newErrorResult(Result.java:1069)
at org.hsqldb.StatementDMQL.execute(StatementDMQL.java:192)
at org.hsqldb.Session.executeCompiledStatement(Session.java:1315)
at org.hsqldb.Session.executeDirectStatement(Session.java:1206)
at org.hsqldb.Session.execute(Session.java:990)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1822)
... 14 more
Caused by: java.lang.NullPointerException
at org.hsqldb.types.CharacterType.compare(CharacterType.java:418)
at org.hsqldb.index.IndexAVL.compareRowForInsertOrDelete(IndexAVL.java:617)
at org.hsqldb.index.IndexAVLMemory.insert(IndexAVLMemory.java:214)
at org.hsqldb.persist.RowStoreAVL.indexRow(RowStoreAVL.java:171)
at org.hsqldb.persist.RowStoreAVLHybridExtended.indexRow(RowStoreAVLHybridExtended.java:99)
at org.hsqldb.Table.insertSys(Table.java:2625)
at org.hsqldb.dbinfo.DatabaseInformationMain.SYSTEM_TABLES(DatabaseInformationMain.java:2353)
at org.hsqldb.dbinfo.DatabaseInformationMain.generateTable(DatabaseInformationMain.java:348)
at org.hsqldb.dbinfo.DatabaseInformationFull.generateTable(DatabaseInformationFull.java:379)
at org.hsqldb.dbinfo.DatabaseInformationMain.setStore(DatabaseInformationMain.java:507)
at org.hsqldb.persist.PersistentStoreCollectionSession.getStore(PersistentStoreCollectionSession.java:138)
at org.hsqldb.Table.getRowStore(Table.java:2817)
at org.hsqldb.RangeVariable$RangeIteratorMain.<init>(RangeVariable.java:939)
at org.hsqldb.RangeVariable$RangeIteratorMain.<init>(RangeVariable.java:917)
at org.hsqldb.RangeVariable.getIterator(RangeVariable.java:770)
at org.hsqldb.QuerySpecification.buildResult(QuerySpecification.java:1293)
at org.hsqldb.QuerySpecification.getSingleResult(QuerySpecification.java:1245)
at org.hsqldb.QuerySpecification.getResult(QuerySpecification.java:1235)
at org.hsqldb.StatementQuery.getResult(StatementQuery.java:66)
at org.hsqldb.StatementDMQL.execute(StatementDMQL.java:190)
... 18 more
I've googled and couldn't find anything relating to this kind of error during export. I'm not that experienced with SQL or JDBC, so I'm hoping there's enough info in the stack trace for someone more knowledgeable to tell me what's going wrong. If there's some other library that would better fit my needs, I have no problem switching; the only thing I need right now is export/import with XML, so I'm not using DBUnit for anything else. If anybody can tell me what's going wrong, or whether I ought to be using something else, I'd really appreciate it.
This is a bug in that particular version of HSQLDB's system-table creation code, which was spotted and corrected recently. You can try an updated HSQLDB jar from http://hsqldb.org/support/; a quick way to confirm which version you are actually running is sketched below.
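A minimal sketch for verifying the HSQLDB version on the classpath via standard JDBC metadata, before and after swapping the jar (the JDBC URL is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;

public class HsqldbVersionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL: point it at your database file
        Connection c = DriverManager.getConnection("jdbc:hsqldb:file:/path/to/db", "SA", "");
        // Prints the engine version so you can confirm the upgraded jar is the one being loaded
        System.out.println(c.getMetaData().getDatabaseProductVersion());
        c.close();
    }
}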