findbugs-maven-plugin throws OutOfMemoryError: how do I fix this?
java.lang.OutOfMemoryError: Java heap space
at org.apache.bcel.classfile.Constant.readConstant(Constant.java:142)
at org.apache.bcel.classfile.ConstantPool.<init>(ConstantPool.java:67)
at org.apache.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:225)
at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
at edu.umd.cs.findbugs.classfile.engine.bcel.JavaClassAnalysisEngine.analyze(JavaClassAnalysisEngine.java:55)
at edu.umd.cs.findbugs.classfile.engine.bcel.JavaClassAnalysisEngine.analyze(JavaClassAnalysisEngine.java:43)
at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getClassAnalysis(AnalysisCache.java:213)
at edu.umd.cs.findbugs.classfile.engine.bcel.ClassContextClassAnalysisEngine.analyze(ClassContextClassAnalysisEngine.java:46)
at edu.umd.cs.findbugs.classfile.engine.bcel.ClassContextClassAnalysisEngine.analyze(ClassContextClassAnalysisEngine.java:38)
at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getClassAnalysis(AnalysisCache.java:213)
at edu.umd.cs.findbugs.ba.AnalysisContext.isTooBig(AnalysisContext.java:385)
at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:949)
at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:230)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:912)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:756)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:778)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:758)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:170)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethod0(ScriptBytecodeAdapter.java:198)
at org.codehaus.mojo.findbugs.FindBugsMojo.executeReport(FindBugsMojo.groovy:792)
at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:101)
at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:66)
at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
On a Windows machine, the following command will increase the maximum amount of memory made available to Maven (heap and PermGen, respectively):
set MAVEN_OPTS=-Xmx1024m -XX:MaxPermSize=512m
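On Linux or macOS, the equivalent (standard shell syntax, same illustrative values) would be:
export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=512m"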
You can also pass a heap size to FindBugs itself with its -maxHeap option.
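Note that if the plugin forks a separate JVM for the analysis, MAVEN_OPTS may not reach that process; the plugin's maxHeap parameter (a size in megabytes) sizes it directly. A sketch using what I believe is the corresponding user property, with an illustrative value; check the documentation for your plugin version:
mvn findbugs:findbugs -Dfindbugs.maxHeap=1024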
I'm using Payara 5.184 and Java 1.8.241 on Linux (Ubuntu 19.10). I'm also using Liquibase for database schema change management. A field named myDataSource is being injected as follows:
@Resource(mappedName = "java:global/ds")
private DataSource myDataSource;
When the method createDataSource() from Liquibase is invoked, I noticed that the variable myDataSource is null, which makes me think the resource is not being injected properly. This error seems to happen only on Linux; my colleagues who run Windows have not had any problems so far, and we are using exactly the same versions of Payara/Java. Is there any specific step that needs to be done on Linux? Here is the stack trace from the Payara logs.
Exception during lifecycle processing
org.glassfish.deployment.common.DeploymentException: CDI deployment failure:Exception List with 2 exceptions:
Exception 0 :
org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke public void liquibase.integration.cdi.CDILiquibase.onStartup() on liquibase.integration.cdi.CDILiquibase@911a89a
at org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:85)
at org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:66)
at org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
at liquibase.integration.cdi.CDIBootstrap$1.create(CDIBootstrap.java:95)
at liquibase.integration.cdi.CDIBootstrap$1.create(CDIBootstrap.java:38)
at org.jboss.weld.contexts.AbstractContext.get(AbstractContext.java:96)
at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:102)
at org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:131)
at liquibase.integration.cdi.CDILiquibase$Proxy$_$$_WeldClientProxy.toString(Unknown Source)
at liquibase.integration.cdi.CDIBootstrap.afterDeploymentValidation(CDIBootstrap.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:95)
at org.jboss.weld.injection.MethodInvocationStrategy$SpecialParamPlusBeanManagerStrategy.invoke(MethodInvocationStrategy.java:144)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:330)
at org.jboss.weld.event.ExtensionObserverMethodImpl.sendEvent(ExtensionObserverMethodImpl.java:123)
at org.jboss.weld.event.ObserverMethodImpl.sendEvent(ObserverMethodImpl.java:308)
at org.jboss.weld.event.ObserverMethodImpl.notify(ObserverMethodImpl.java:286)
at javax.enterprise.inject.spi.ObserverMethod.notify(ObserverMethod.java:124)
at org.jboss.weld.util.Observers.notify(Observers.java:166)
at org.jboss.weld.event.ObserverNotifier.notifySyncObservers(ObserverNotifier.java:285)
at org.jboss.weld.event.ObserverNotifier.notify(ObserverNotifier.java:273)
at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:177)
at org.jboss.weld.event.ObserverNotifier.fireEvent(ObserverNotifier.java:171)
at org.jboss.weld.bootstrap.events.AbstractContainerEvent.fire(AbstractContainerEvent.java:53)
at org.jboss.weld.bootstrap.events.AbstractDeploymentContainerEvent.fire(AbstractDeploymentContainerEvent.java:35)
at org.jboss.weld.bootstrap.events.AfterDeploymentValidationImpl.fire(AfterDeploymentValidationImpl.java:28)
at org.jboss.weld.bootstrap.WeldStartup.validateBeans(WeldStartup.java:499)
at org.jboss.weld.bootstrap.WeldBootstrap.validateBeans(WeldBootstrap.java:93)
at org.glassfish.weld.WeldDeployer.processApplicationLoaded(WeldDeployer.java:517)
at org.glassfish.weld.WeldDeployer.event(WeldDeployer.java:428)
at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:131)
at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:333)
at com.sun.enterprise.v3.server.ApplicationLifecycle.prepare(ApplicationLifecycle.java:497)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:540)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:549)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:545)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:544)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:575)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:566)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1475)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1300(CommandRunnerImpl.java:111)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1857)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1733)
at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:564)
at com.sun.enterprise.v3.admin.AdminAdapter.onMissingResource(AdminAdapter.java:251)
at org.glassfish.grizzly.http.server.StaticHttpHandlerBase.service(StaticHttpHandlerBase.java:166)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:182)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:156)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:218)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:95)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:524)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:94)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:33)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:114)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:83)
... 74 more
Caused by: java.lang.NullPointerException
at liquibase.integration.cdi.CDILiquibase.performUpdate(CDILiquibase.java:126)
at liquibase.integration.cdi.CDILiquibase.onStartup(CDILiquibase.java:116)
... 79 more
EDIT: Added stack trace.
EDIT 2: Problem solved! It ended up being something not related to Liquibase at all. The queue size of the PayaraExecutorService was too small, which led me to think that some task submissions were being rejected. Apparently this is a known issue in Payara 5.184, and a related issue had already been opened on GitHub (https://github.com/payara/Payara/issues/3495). Once I increased the queue size to a large number, everything started working as expected.
I think you need to use the lookup attribute, rather than mappedName, to inject the resource; see the sketch below.
Compare this bug report: https://github.com/payara/Payara/issues/4413
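A minimal sketch of that change, assuming the JNDI name java:global/ds from the question; the owning class name is a hypothetical placeholder:

import javax.annotation.Resource;
import javax.sql.DataSource;

public class SchemaMigrationBean {

    // lookup resolves the portable JNDI name directly; mappedName is
    // vendor-specific and reportedly unreliable here (see the linked issue)
    @Resource(lookup = "java:global/ds")
    private DataSource myDataSource;
}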
Our application is a Java-based desktop application that downloads binary data from a source, parses it, and adds it to an HSQLDB database. When downloading from the sources individually, the application works perfectly. But when doing the same from multiple sources simultaneously, with each source in an individual thread, I get an error of
java.sql.SQLException: Assert failed: java.lang.ArrayIndexOutOfBoundsException: 23 in statement [CHECKPOINT]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source)
or sometimes,
java.sql.SQLException: Assert failed: java.lang.ArrayIndexOutOfBoundsException: 1016 in statement [CHECKPOINT]
followed by
java.sql.SQLException: File input/output error: C:\ProgramData\test\data\database\db.script.new in statement [CHECKPOINT]
at org.hsqldb.jdbc.Util.throwError(Unknown Source)
at org.hsqldb.jdbc.jdbcPreparedStatement.execute(Unknown Source)
Java: 1.8;
HSQL version: 1.8.10
We are not in a position to migrate HSQLDB to the latest version, for various reasons.
HSQL Properties:
hsqldb.script_format=0
runtime.gc_interval=0
sql.enforce_strict_size=false
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
hsqldb.log_size=200
modified=yes
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0
Any help or hint will be appreciated.
This is a seven-year-old version that is not ideal for multi-threaded usage.
The simple solution is to perform the database updates from a single thread. You can retrofit your multi-threaded application with a synchronized block, over a singleton lock object, around the code that performs the database update.
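A minimal sketch of that idea in Java; the class, method, and table names (BinaryDataLoader, insertParsedRecord, parsed_data) are hypothetical placeholders for your own code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BinaryDataLoader {

    // single shared lock object; every thread funnels its DB work through it
    private static final Object DB_LOCK = new Object();

    void insertParsedRecord(Connection conn, byte[] payload) throws SQLException {
        // HSQLDB 1.8 sees only one writer at a time; downloading and parsing
        // can still run in parallel outside this block
        synchronized (DB_LOCK) {
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO parsed_data (payload) VALUES (?)")) {
                ps.setBytes(1, payload);
                ps.executeUpdate();
            }
        }
    }
}

Only the JDBC work is serialized through DB_LOCK, so the download threads keep their concurrency benefit.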
We are running Hadoop on GCE with HDFS as the default file system and data input/output from/to GCS.
Hadoop version: 1.2.1
Connector version: com.google.cloud.bigdataoss:gcs-connector:1.3.0-hadoop1
Observed behavior: the JobTracker accumulates threads in a waiting state, leading to an OOM:
2015-02-06 14:15:51,206 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.initialize(AbstractGoogleAsyncWriteChannel.java:318)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.create(GoogleCloudStorageImpl.java:275)
at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.create(CacheSupplementedGoogleCloudStorage.java:145)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.createInternal(GoogleCloudStorageFileSystem.java:184)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:168)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:77)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:655)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1860)
at org.apache.hadoop.mapred.JobInProgress$3.run(JobInProgress.java:709)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:706)
at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3890)
at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
After looking through the JT logs I found these warnings:
2015-02-06 14:30:17,442 WARN org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from primary datanode xx.xxx.xxx.xxx:50010
java.io.IOException: Call to /xx.xxx.xxx.xxx:50020 failed on local exception: java.io.IOException: Couldn't set up IO streams
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
at org.apache.hadoop.ipc.Client.call(Client.java:1118)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
at org.apache.hadoop.hdfs.DFSClient.createClientDatanodeProtocolProxy(DFSClient.java:201)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3317)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2783)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2987)
Caused by: java.io.IOException: Couldn't set up IO streams
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
at org.apache.hadoop.ipc.Client.call(Client.java:1093)
... 9 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:635)
... 12 more
This appears to be similar to the Hadoop bug reported here: https://issues.apache.org/jira/browse/MAPREDUCE-5606
I tried the proposed solution of disabling the saving of job logs into the output path, and it solved the problem, at the expense of missing logs :)
I also ran jstack on the JobTracker, and it showed hundreds of WAITING or TIMED_WAITING threads such as:
"pool-52-thread-1" prio=10 tid=0x00007feaec581000 nid=0x524f in Object.wait() [0x00007fead39b3000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000074d86ba60> (a java.io.PipedInputStream)
at java.io.PipedInputStream.read(PipedInputStream.java:327)
- locked <0x000000074d86ba60> (a java.io.PipedInputStream)
at java.io.PipedInputStream.read(PipedInputStream.java:378)
- locked <0x000000074d86ba60> (a java.io.PipedInputStream)
at com.google.api.client.util.ByteStreams.read(ByteStreams.java:181)
at com.google.api.client.googleapis.media.MediaHttpUploader.setContentAndHeadersOnCurrentRequest(MediaHttpUploader.java:629)
at com.google.api.client.googleapis.media.MediaHttpUploader.resumableUpload(MediaHttpUploader.java:409)
at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:336)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.run(AbstractGoogleAsyncWriteChannel.java:354)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- <0x000000074d864918> (a java.util.concurrent.ThreadPoolExecutor$Worker)
It appears the JobTracker is having a hard time keeping up with communication to GCS via the GCS connector.
Please advise. Thank you.
At the moment, every open FSDataOutputStream in the GCS connector for Hadoop consumes a thread until it's closed, because a separate thread needs to run the "resumable" HttpRequests while the user of the OutputStream writes bytes intermittently. In most cases (such as in individual Hadoop tasks), there's only ever one long-lived output stream, and possibly a few shorter-lived ones for writing small metadata/marker files, etc.
In general, there are two possible causes for the OOM you're running into:
1. You have lots of queued-up jobs; every submitted job holds an unclosed OutputStream and thus consumes a "waiting" thread. However, since you mention you only need to queue up ~10 jobs, this shouldn't be the root cause.
2. Something is causing a "leak" of the PrintWriter objects originally created in logSubmitted and added to fileManager. Typically, terminal events (like logFinished) will correctly close() all the PrintWriters before removing them from the map via markCompleted, but in theory there may be bugs here or there which could cause one of the OutputStreams to leak without being close()'d. For example, while I haven't had a chance to verify this assertion, it seems that an IOException while trying to do something like logMetaInfo will "removeWriter" without closing it.
I've verified that, at least under normal circumstances, the OutputStreams seem to get closed correctly, and my sample JobTracker shows a clean jstack after having successfully run a lot of jobs.
TL;DR: There are some working theories as to why some resource may leak and ultimately prevent necessary threads from being created. In the meantime, you should consider changing hadoop.job.history.user.location to some HDFS location, as a way to preserve the job logs without placing them on GCS.
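A minimal sketch of that workaround against the Hadoop 1.x API; the HDFS path here is a hypothetical placeholder, and the same key can equally be set in mapred-site.xml or passed on the command line with -D:

import org.apache.hadoop.mapred.JobConf;

public class JobHistoryRelocation {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // keep per-job history files on HDFS so that no GCS output stream
        // (with its dedicated upload thread) stays open per submitted job
        conf.set("hadoop.job.history.user.location", "hdfs:///user/hadoop/job-history");
    }
}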
I was trying to insert more than 10k nodes into my local Neo4j server (Mac OS), but after some time it produces a runtime error.
Exception in thread "main" java.lang.RuntimeException: java.io.FileNotFoundException: /Users/shihabrahman/Development/projects/opt_neo4j_importar/neo4j/data/graph.db/index/lucene/node/146483049887488028/_0.nrm (Too many open files)
at org.neo4j.index.impl.lucene.LuceneDataSource.refreshSearcher(LuceneDataSource.java:516)
at org.neo4j.index.impl.lucene.LuceneDataSource.refreshSearcherIfNeeded(LuceneDataSource.java:635)
at org.neo4j.index.impl.lucene.LuceneDataSource.getIndexSearcher(LuceneDataSource.java:577)
at org.neo4j.index.impl.lucene.LuceneIndex.query(LuceneIndex.java:293)
at org.neo4j.index.impl.lucene.LuceneIndex.get(LuceneIndex.java:229)
at org.neo4j.graphdb.index.UniqueFactory.getOrCreateWithOutcome(UniqueFactory.java:230)
at org.neo4j.graphdb.index.UniqueFactory.getOrCreate(UniqueFactory.java:216)
at com.ws.dao.NodeDaoImpl.createUniqueNode(NodeDaoImpl.java:71)
at com.ws.dao.NodeDaoImpl.createNode(NodeDaoImpl.java:34)
at com.ws.App.main(App.java:21)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
I have already increased the limit on the number of open files via ulimit.
I have a feeling you changed your ulimit value, but not in the way that persists it on OS X.
Did you add values to /etc/launchd.conf?
limit maxfiles 40000 40000
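After adding that line and rebooting, the new limits can be verified before re-running the import (assuming a standard OS X shell):
launchctl limit maxfiles
ulimit -n
Running ulimit -n on its own only raises the current shell's soft limit, which is why the launchd-level setting is needed for a change that survives reboots.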
Sometimes when I run my auto-tests (Jenkins, TestNG, WebDriver, Selenium Grid) I see the following output:
Exception in thread "Thread-1" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:515)
at java.lang.StringBuffer.append(StringBuffer.java:306)
at java.io.BufferedReader.readLine(BufferedReader.java:345)
at java.io.BufferedReader.readLine(BufferedReader.java:362)
at org.codehaus.plexus.util.cli.StreamPumper.run(StreamPumper.java:129)
org.apache.maven.surefire.util.SurefireReflectionException: java.lang.reflect.InvocationTargetException; nested exception is java.lang.reflect.InvocationTargetException: null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:188)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:166)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:86)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:101)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
at java.lang.StringBuffer.append(StringBuffer.java:224)
at org.testng.reporters.TestHTMLReporter.generateTable(TestHTMLReporter.java:159)
at org.testng.reporters.TestHTMLReporter.generateLog(TestHTMLReporter.java:305)
at org.testng.reporters.TestHTMLReporter.onFinish(TestHTMLReporter.java:40)
at org.testng.TestRunner.fireEvent(TestRunner.java:1241)
at org.testng.TestRunner.afterRun(TestRunner.java:1040)
at org.testng.TestRunner.run(TestRunner.java:621)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:53)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:87)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1188)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1113)
at org.testng.TestNG.run(TestNG.java:1025)
at org.apache.maven.surefire.testng.TestNGExecutor.run(TestNGExecutor.java:70)
at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeMulti(TestNGDirectoryTestSuite.java:160)
at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute(TestNGDirectoryTestSuite.java:100)
at org.apache.maven.surefire.testng.TestNGProvider.invoke(TestNGProvider.java:115)
... 9 more
I can see that there is enough disk space (2.55 GB), and Windows Task Manager shows 1.39 GB in use.
How can I avoid this? Thanks.
I am not sure, but this blog post might help:
http://rationaleemotions.wordpress.com/2013/01/28/building-a-self-maintaining-grid-environment/
This issue seems to be solved in newer versions of TestNG. Check this thread out: https://github.com/cbeust/testng/issues/291
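Until an upgrade is possible, a stopgap that may also help is giving the forked Surefire JVM more heap, since the OutOfMemoryError above is thrown inside the forked test runner (TestHTMLReporter), not in Maven itself. A sketch using Surefire's argLine user property; the 1 GB value is an illustrative assumption to tune for your suite:
mvn test -DargLine="-Xmx1024m"
The MAVEN_OPTS approach from the first answer sizes the Maven JVM, not the forked one, so both may be needed.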