I have deployed Pentaho biserver-ce and data-integration (6.1.0.1.196) successfully. It loaded without any issues the first time.
I then deployed a report set that was already working locally; the deployment was done using the Pentaho API. After completing the deployment I see the following error. Please advise.
16:34:40,983 ERROR [FileHandler] Couldn't find or create CDE /widgets/sample.cdfde file
org.pentaho.platform.api.repository2.unified.UnifiedRepositoryAccessDeniedException: access denied while updating file with id "5117ec88-9572-43f2-9774-1a69a09c2b0e"
Reference number: 69b57b5c-c978-4292-bcb9-c1967ee04654
at org.pentaho.platform.repository2.unified.exception.AccessDeniedExceptionConverter.convertException(AccessDeniedExceptionConverter.java:31)
at org.pentaho.platform.repository2.unified.ExceptionLoggingDecorator.callLogThrow(ExceptionLoggingDecorator.java:506)
at org.pentaho.platform.repository2.unified.ExceptionLoggingDecorator.updateFile(ExceptionLoggingDecorator.java:433)
at pt.webdetails.cpf.repository.pentaho.unified.UnifiedRepositoryAccess.saveFile(UnifiedRepositoryAccess.java:182)
at pt.webdetails.cpf.repository.pentaho.unified.UnifiedRepositoryAccess.saveFile(UnifiedRepositoryAccess.java:163)
at pt.webdetails.cdf.dd.extapi.FileHandler$1.call(FileHandler.java:114)
at pt.webdetails.cdf.dd.extapi.FileHandler$1.call(FileHandler.java:110)
at org.pentaho.platform.engine.security.SecurityHelper.runAsSystem(SecurityHelper.java:396)
at pt.webdetails.cdf.dd.extapi.FileHandler.createBasicFileIfNotExists(FileHandler.java:110)
at pt.webdetails.cdf.dd.CdeEngine.saveAndClose(CdeEngine.java:135)
at pt.webdetails.cdf.dd.CdeEngine.ensureBasicDirs(CdeEngine.java:118)
at pt.webdetails.cdf.dd.CdeLifeCycleListener.ready(CdeLifeCycleListener.java:39)
at org.pentaho.platform.web.http.context.PentahoSystemReadyListener.contextInitialized(PentahoSystemReadyListener.java:52)
Starting the Pentaho server also takes a long time and shows the following error message:
16:03:48,064 INFO [PeriodicStatusLogger] Caution, the system is initializing. Do not shut down or restart the system at this time.
16:04:18,064 INFO [PeriodicStatusLogger] Caution, the system is initializing. Do not shut down or restart the system at this time.
16:04:29,865 ERROR [BlueprintContainerImpl] Unable to start blueprint container for bundle pdi-dataservice-server-plugin due to unresolved dependencies [(objectClass=org.pentaho.metaverse.api.ILineageClient)]
java.util.concurrent.TimeoutException
at org.apache.aries.blueprint.container.BlueprintContainerImpl$1.run(BlueprintContainerImpl.java:336)
at org.apache.aries.blueprint.utils.threading.impl.DiscardableRunnable.run(DiscardableRunnable.java:48)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I'm debugging a custom HiveProcessor that follows the official PutHiveStreaming processor, but writes to Hive 2.x instead of 3.x. The flow runs in a NiFi 1.7.1 cluster. Although this exception occurs, data is still written to Hive.
The exception is:
java.lang.NullPointerException: null
at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.getFilteredObjects(AuthorizationMetaStoreFilterHook.java:77)
at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.filterDatabases(AuthorizationMetaStoreFilterHook.java:54)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1147)
at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:471)
at sun.reflect.GeneratedMethodAccessor1641.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
at com.sun.proxy.$Proxy308.isOpen(Unknown Source)
at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:270)
at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.<init>(AbstractRecordWriter.java:95)
at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:82)
at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:60)
at org.apache.nifi.util.hive.HiveWriter.lambda$getRecordWriter$0(HiveWriter.java:91)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.nifi.util.hive.HiveWriter.getRecordWriter(HiveWriter.java:91)
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:75)
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
at org.apache.nifi.processors.hive.PutHive2Streaming.makeHiveWriter(PutHive2Streaming.java:1152)
at org.apache.nifi.processors.hive.PutHive2Streaming.getOrCreateWriter(PutHive2Streaming.java:1065)
at org.apache.nifi.processors.hive.PutHive2Streaming.access$1000(PutHive2Streaming.java:114)
at org.apache.nifi.processors.hive.PutHive2Streaming$1.lambda$process$2(PutHive2Streaming.java:858)
at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
at org.apache.nifi.processors.hive.PutHive2Streaming$1.process(PutHive2Streaming.java:855)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2211)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2179)
at org.apache.nifi.processors.hive.PutHive2Streaming.onTrigger(PutHive2Streaming.java:808)
at org.apache.nifi.processors.hive.PutHive2Streaming.lambda$onTrigger$4(PutHive2Streaming.java:672)
at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
at org.apache.nifi.processors.hive.PutHive2Streaming.onTrigger(PutHive2Streaming.java:672)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I would also like to reproduce the error. Would using TestRunners.newTestRunner(processor); be able to catch it? I am referring to the test case for Hive 3.x:
https://github.com/apache/nifi/blob/ea9b0db2f620526c8dd0db595cf8b44c3ef835be/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/test/java/org/apache/nifi/processors/hive/TestPutHiveStreaming.java
The other way is to run Hive 2.x and a NiFi container locally. But then I have to run docker cp to copy the NAR package built by Maven, and attach a remote JVM debugger from IntelliJ as this blog describes:
https://community.hortonworks.com/articles/106931/nifi-debugging-tutorial.html
Has someone done something similar, or is there an easier way to debug a custom processor?
This is a red-herring error: there is some issue on the Hive side where it can't get its own IP address or hostname, and it issues this error periodically as a result. However, I don't think it causes any real problems since, as you said, the data gets written to Hive.
Just for completeness: in Apache NiFi, PutHiveStreaming is built to work against Hive 1.2.x, not Hive 2.x. There are currently no Hive 2.x-specific processors, and we've never determined whether the Hive 1.2.x processors work against Hive 2.x.
For debugging, if you can run Hive in a container and expose the metastore port (9083 is the default, I believe), then you should be able to create an integration test using things like TestRunners and run NiFi locally from your IDE. This is how other integration tests are performed for external systems such as MongoDB or Elasticsearch.
There is a MiniHS2 class in the Hive test suite for integration testing, but it is not in a published artifact, so unfortunately we're left having to run tests against a real Hive instance.
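For example, here is a minimal sketch of such an integration test against the PutHive2Streaming processor from the question, assuming a metastore exposed on localhost:9083; the property names are assumptions that mirror PutHiveStreaming's descriptors, so adjust them to whatever your processor actually defines:

import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;

public class PutHive2StreamingIT {
    @Test
    public void writesOneJsonRecordToHive() {
        // PutHive2Streaming is the custom processor from the question.
        TestRunner runner = TestRunners.newTestRunner(new PutHive2Streaming());
        // Assumed property names, mirroring PutHiveStreaming's descriptors.
        runner.setProperty("hive-stream-metastore-uri", "thrift://localhost:9083");
        runner.setProperty("hive-stream-database-name", "default");
        runner.setProperty("hive-stream-table-name", "test_table");

        runner.enqueue("{\"name\": \"joe\", \"favorite_number\": 3}".getBytes());
        runner.run();

        // A successful write should route the flow file to the "success" relationship.
        runner.assertAllFlowFilesTransferred("success", 1);
    }
}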
The NPE no longer shows up after setting hcatalog.hive.client.cache.disabled to true.
Kafka Connect recommends this setting too. From the Kafka Connect docs (https://docs.confluent.io/3.0.0/connect/connect-hdfs/docs/hdfs_connector.html):
As connector tasks are long running, the connections to Hive metastore
are kept open until tasks are stopped. In the default Hive
configuration, reconnecting to Hive metastore creates a new
connection. When the number of tasks is large, it is possible that the
retries can cause the number of open connections to exceed the max
allowed connections in the operating system. Thus it is recommended to
set hcatalog.hive.client.cache.disabled to true in hive.xml.
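For reference, this is how the property would look in a hive-site.xml picked up by the processor's Hive Configuration Resources (where you keep that file is up to your setup):

<property>
  <name>hcatalog.hive.client.cache.disabled</name>
  <value>true</value>
</property>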
Note that when Max Concurrent Tasks of PutHiveStreaming is set to more than 1, the processor automatically forces this property to true.
The NiFi support article below also describes this resolution:
The NiFi PutHiveStreaming has a pool of connections and is therefore
multithreaded; setting hcatalog.hive.client.cache.disabled to true
would allow each connection to set its own Session without relying on
the cache.
Ref: https://community.hortonworks.com/content/supportkb/196628/hive-client-puthivestreaming-fails-against-partiti.html
I'm trying to connect the SonarQube Community Plugin (for IntelliJ IDEA) to a local SonarQube instance, but it looks like my user lacks the privileges to access the API. While trying to connect I get:
Cannot fetch SonarQube project and modules from SonarQube at mobilex.intra
org.sonarqube.ws.client.HttpException: Error 403 on https://qube.my.local/api/components/search?qualifiers=TRK : {"errors":[{"msg":"Insufficient privileges"}]}
at org.sonarqube.ws.client.BaseResponse.failIfNotSuccessful(BaseResponse.java:36)
at org.sonarqube.ws.client.BaseService.call(BaseService.java:55)
at org.sonarqube.ws.client.BaseService.call(BaseService.java:50)
at org.sonarqube.ws.client.component.ComponentsService.search(ComponentsService.java:58)
at org.intellij.sonar.sonarserver.SonarServer.getAllProjects(SonarServer.java:137)
at org.intellij.sonar.sonarserver.SonarServer.getAllProjectsAndModules(SonarServer.java:110)
at org.intellij.sonar.configuration.ResourcesSelectionConfigurable$DownloadResourcesRunnable.run(ResourcesSelectionConfigurable.java:120)
at com.intellij.openapi.progress.impl.CoreProgressManager$2.run(CoreProgressManager.java:247)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:750)
at com.intellij.openapi.progress.impl.CoreProgressManager$5.run(CoreProgressManager.java:434)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$1(CoreProgressManager.java:157)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:580)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:525)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:85)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:144)
at com.intellij.openapi.application.impl.ApplicationImpl.lambda$null$10(ApplicationImpl.java:565)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:305)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It works fine when using an administrative account, but I don't want that. What privilege is needed to allow a regular user to access the API, and how do I apply it?
I'm using Apache Ignite to cluster our web sessions. Occasionally one of our caches crashes, and I keep receiving the following exception messages in my Tomcat application log file:
2016-05-23 15:04:02,200 ERROR root/error 495 - Failed to update web session: null
java.lang.IllegalStateException: Cache has been closed or destroyed: session-cache
at org.apache.ignite.internal.processors.cache.GridCacheGateway.enter(GridCacheGateway.java:160)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.onEnter(IgniteCacheProxy.java:1923)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:855)
at org.apache.ignite.cache.websession.WebSessionFilter.doFilter0(WebSessionFilter.java:341)
at org.apache.ignite.cache.websession.WebSessionFilter.doFilter(WebSessionFilter.java:315)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
2016-05-23 15:04:06,497 ERROR root/error 495 - Failed to update web session: null
java.lang.IllegalStateException: Cache has been closed or destroyed: session-cache
...
It worries me because I cannot find any further error messages explaining the cause of the crash, not even in the IGNITE_HOME/work folder.
(By the way, this crash has happened before due to running out of memory, and I fixed it by increasing the application's JVM heap size.)
My questions are:
Where can I find the logs that explain the cause of the crash?
If no logs are available, do I need to register for certain events to receive a notification/explanation of the crash, and how?
When I receive such a notification of a cache crash, is there any way to restore the cache without having to restart the whole application?
Many thanks!
Update (2016-08-04)
As Valentin pointed out, by default Ignite inherits the application's logging settings if the node is embedded in the application. And indeed, Ignite writes its logs to the same application log file.
If I want to output Ignite logs to a separate log file, I can add the following configuration to log4j.properties:
log4j.appender.IGNITE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.IGNITE.File=${IGNITE_HOME}/work/logs/${tomcat.hostname}/ignite.log
log4j.appender.IGNITE.DatePattern='.'yyyy-MM-dd
log4j.appender.IGNITE.Threshold=DEBUG
log4j.appender.IGNITE.layout=org.apache.log4j.PatternLayout
log4j.appender.IGNITE.layout.ConversionPattern=[%d{ABSOLUTE}][%-5p][%t][%c{1}] %m%n
log4j.logger.org.apache.ignite=INFO, IGNITE
But there is still one thing I do not quite understand: the configuration above outputs the Ignite logging information both to ignite.log, as specified, and to the application's log file. @Valentin, do you know why? Is there any way to output the Ignite logs to ignite.log only, and not to the application's log file at all?
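I suspect this is log4j appender additivity: the org.apache.ignite logger writes to the IGNITE appender and then propagates the same events up to the root logger's appenders. If that is the case (an assumption based on standard log4j behavior), disabling additivity for the Ignite logger should keep its messages out of the application log:

log4j.additivity.org.apache.ignite=false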
If a node is embedded in the application, it will by default inherit its logging settings. So most likely Ignite writes the log to the same file the application writes to.
Note that if you're using Log4J, you should include the ignite-log4j dependency in your project.
As for restoring the data, you should not lose any data if only one node in the cluster fails and you have at least one backup. To add backups, use the CacheConfiguration.backups configuration property, as in the sketch below.
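A minimal sketch of this configuration, assuming the session-cache name from your logs (the class name and value types are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SessionCacheNode {
    public static void main(String[] args) {
        // One backup copy per partition, so the cache survives the loss of a single node.
        CacheConfiguration<String, byte[]> sessionCache = new CacheConfiguration<>("session-cache");
        sessionCache.setCacheMode(CacheMode.PARTITIONED); // backups apply to partitioned caches
        sessionCache.setBackups(1);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(sessionCache);

        Ignite ignite = Ignition.start(cfg);
    }
}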
I don't know if this will solve your specific problem, but I found logs in IGNITE_HOME/work/logs.
I'm using HornetQ on JBoss AS 7. Our standalone Java application interacts with a queue and sends messages. We had the entire environment set up and it was working pretty smoothly. Then, one fine day, our standalone application failed with the exception below:
Caused by: java.io.NotSerializableException: org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy
The detailed exception stack trace is below:
javax.naming.NamingException: Failed to lookup [Root exception is java.io.NotSerializableException: org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy]
at org.jboss.naming.remote.client.ClientUtil.namingException(ClientUtil.java:36)
at org.jboss.naming.remote.protocol.v1.Protocol$1.execute(Protocol.java:104)
at org.jboss.naming.remote.protocol.v1.RemoteNamingStoreV1.lookup(RemoteNamingStoreV1.java:79)
at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:79)
at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:83)
at javax.naming.InitialContext.lookup(Unknown Source)
at com.infosys.lbs.publishing.LocationProcessor.postMessageInQueue(LocationProcessor.java:377)
at com.infosys.lbs.publishing.LocationProcessor.process(LocationProcessor.java:69)
at com.infosys.lbs.publishing.main.Publisher.main(Publisher.java:34)
Caused by: java.io.NotSerializableException: org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:891)
at org.jboss.marshalling.river.RiverMarshaller.doWriteFields(RiverMarshaller.java:1063)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:1019)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:885)
at org.jboss.marshalling.river.RiverMarshaller.doWriteFields(RiverMarshaller.java:1063)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:1019)
at org.jboss.marshalling.river.RiverMarshaller.doWriteSerializableObject(RiverMarshaller.java:998)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:885)
at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)
at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:119)
at org.jboss.naming.remote.protocol.v1.Protocol$1$2.write(Protocol.java:138)
at org.jboss.naming.remote.protocol.v1.WriteUtil.write(WriteUtil.java:61)
at org.jboss.naming.remote.protocol.v1.Protocol$1.handleServerMessage(Protocol.java:128)
at org.jboss.naming.remote.protocol.v1.RemoteNamingServerV1$MessageReciever$1.run(RemoteNamingServerV1.java:73)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: an exception which occurred:
in field loadBalancingPolicy
in field serverLocator
in object org.hornetq.jms.client.HornetQJMSConnectionFactory#ea074d
The exception was happening when the app was trying to look up the connection factory:
QueueConnectionFactory qcf = (QueueConnectionFactory)context.lookup("jms/RemoteConnectionFactory");
Below are the steps we took to resolve the issue.
There was almost zero help for resolving this issue; a web search on this exception returned next to nothing. However, this particular thread on the JBoss AS Dev site sparked a thought: RemoteConnectionFactory is not found when looking up in a remote client
The scenario mentioned in that thread was not the same as ours. (In our app this is the first and only lookup happening.) Still, it got me thinking about a possible connection factory initialization issue. While there was nothing I could do to debug or pinpoint it, I thought that if I could reinitialize the factory, it might help.
So I tried the lookup with java:jboss/exported/jms/RemoteConnectionFactory. As expected, it failed with a NamingException. Hoping that this naming syntax (using java:/) had triggered a reinitialization, I tried the lookup again with jms/RemoteConnectionFactory. And bingo, it worked!
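For reference, the retry looked roughly like this (a sketch; the provider URL and context properties are illustrative, not our exact setup):

import java.util.Properties;
import javax.jms.QueueConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class FactoryLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        props.put(Context.PROVIDER_URL, "remote://localhost:4447"); // illustrative host/port
        Context context = new InitialContext(props);

        QueueConnectionFactory qcf;
        try {
            // First attempt with the exported name: fails with a NamingException...
            qcf = (QueueConnectionFactory) context.lookup("java:jboss/exported/jms/RemoteConnectionFactory");
        } catch (NamingException e) {
            // ...but after that failed attempt, the plain name worked again.
            qcf = (QueueConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
        }
    }
}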
Unfortunately, we still don't know why it happened, or whether it is just a one-off case! Documenting it here just in case some poor soul hits this issue.
I was stopping the Liferay portal, but a few seconds later I also stopped the database (db2 quiesce, which means the connections are closed), and apparently Liferay did not shut down correctly.
After that, I restarted the database and Liferay, but the portal no longer works. It shows this message in the browser:
HTTP Status 500 -
type Exception report
message
description The server encountered an internal error () that prevented it from fulfilling this request.
exception
javax.servlet.ServletException: Servlet execution threw an exception
com.liferay.portal.kernel.servlet.filters.invoker.InvokerFilterChain.doFilter(InvokerFilterChain.java:72)
...
root cause
java.lang.NoSuchMethodError: com.liferay.portal.util.PortalUtil.getCDNHostHttp()Ljava/lang/String;
com.liferay.portal.events.ServicePreActionExt.servicePre(ServicePreActionExt.java:937)
After looking in the logs, I found the following messages (they are edited):
SEVERE: Error waiting for multi-thread deployment of directories to completehostConfig.deployWar=Deploying web application archive {0}
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1000)
WARN [DefaultConnectionTester:203] SQL State '08001' of Exception which occurred during a Connection test (fallback DatabaseMetaData test) implies that the database is invalid, and the pool should refill itself with fresh Connections.
com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2030][11211][3.63.75] A communication error occurred during operations on the connection's underlying socket, socket input stream, or socket output stream. Error location: Reply.fill() - insufficient data (-1). Message: Insufficient data. ERRORCODE=-4499, SQLSTATE=08001
at com.ibm.db2.jcc.am.fd.a(fd.java:321)
WARN [DefaultConnectionTester:136] SQL State '08001' of Exception tested by statusOnException() implies that the database is invalid, and the pool should refill itself with fresh Connections.
WARN [C3P0PooledConnectionPool:708] A ConnectionTest has failed, reporting that all previously acquired Connections are likely invalid. The pool will be reset.
WARN [NewPooledConnection:486] [c3p0] A PooledConnection that has already signalled a Connection error is still in use!
WARN [NewPooledConnection:487] [c3p0] Another error has occurred [ com.ibm.db2.jcc.am.SqlNonTransientConnectionException: [jcc][t4][10335][10366][3.63.75] Invalid operation: Connection is closed. ERRORCODE=-4470, SQLSTATE=08003 ] which will not be reported to listeners!
com.ibm.db2.jcc.am.SqlNonTransientConnectionException: [jcc][t4][10335][10366][3.63.75] Invalid operation: Connection is closed. ERRORCODE=-4470, SQLSTATE=08003
WARN [BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#4fad5112 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
com.ibm.db2.jcc.am.SqlNonTransientConnectionException: DB2 SQL Error: SQLCODE=-20157, SQLSTATE=08004, SQLERRMC=FUT5MAN;QUIESCE DATABASE;;, DRIVER=3.63.75
ERROR [PortalJobStore:109] MisfireHandler: Error handling misfires: Unexpected runtime exception: null
org.quartz.JobPersistenceException: Unexpected runtime exception: null [See nested exception: java.lang.reflect.UndeclaredThrowableException]
Caused by: java.lang.reflect.UndeclaredThrowableException
at $Proxy279.prepareStatement(Unknown Source)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.countMisfiredTriggersInState(StdJDBCDelegate.java:413)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
Now I see that it is almost impossible to start the current Liferay installation. However, I have the database (I made a full backup) and Lucene's data directory. How can I recreate a Liferay installation from these two things? I would like to recover some of this data in a new installation, but I do not know how.
This is not the best solution, but I installed Liferay with a new database. Once it was configured, I changed the database configuration to point to the original database (see the sketch below).
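For illustration, these are the portal-ext.properties entries I mean for a DB2 data source; the host, port, database name, and credentials here are placeholders:

jdbc.default.driverClassName=com.ibm.db2.jcc.DB2Driver
jdbc.default.url=jdbc:db2://localhost:50000/DBNAME
jdbc.default.username=liferay
jdbc.default.password=changeme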
It was probably a problem with the ROOT deployment, but this is very weird.
I was able to recover all the data from Lucene and the database.
The database is still quiesced and the Liferay user doesn't have the QUIESCE_CONNECT privilege.
Unquiesce the database and restart Liferay.
As the DB2 instance owner (on Windows, any administrator):
db2 connect to DBNAME
db2 unquiesce database
db2 connect reset