Cuba-Platform: sample-user-registration-master HSQLDB connection error: OutOfMemoryError: Java heap space

I'd like to use the user registration demo. I just cloned the project, compiled it, and started it, straight out of the box. The following error occurred:
HSQLDB connection error
java.sql.SQLTransientConnectionException: connection exception: connection failure: java.lang.OutOfMemoryError: Java heap space
connection exception: connection failure: java.lang.OutOfMemoryError: Java heap space

It looks like this exception happens when port 9010, which HSQLDB uses by default here, is already occupied by another application (don't ask how an occupied port ends up as an OutOfMemoryError; no idea).
As a workaround, you can change the HSQLDB connection port in the main data store settings.
More discussion here:
https://www.cuba-platform.com/discuss/t/hsqldb-connection-error-outofmemory/13294/7
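If you want to confirm the port clash before changing anything, one quick check is to try binding the port yourself from a throwaway program. A minimal sketch (the class name is made up; 9010 is just the port mentioned above):

import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    public static void main(String[] args) {
        int port = 9010; // port the sample's HSQLDB server tries to use
        try (ServerSocket socket = new ServerSocket(port)) {
            System.out.println("Port " + port + " is free.");
        } catch (IOException e) {
            System.out.println("Port " + port + " is already in use: " + e.getMessage());
        }
    }
}

If the port is reported as already in use, either stop the other application or move HSQLDB to a free port as described above.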

Related

JMX connection to Gemfire over SSL

I have used GFSH to start the locator as shown below:
start locator --name=gemfire_locator --security-properties-file="../config/gfsecurity.properties" --J=-Dgemfire.ssl-enabled-components=all --mcast-port=0 --J=-Dgemfire.jmx-manager-ssl=true
I also started the server:
start server --name=server1 --security-properties-file="../config/gfsecurity.properties" --J=-Dgemfire.ssl-enabled-components=all --mcast-port=0 --J=-Dgemfire.jmx-manager-ssl=true
I am trying to connect to Gemfire as a ClientCache, which works perfectly fine over SSL. But when I connect as a JMX client, I get the error below, both from Java code and from JConsole.
Error:
Exception in thread "main" java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: non-JRMP server at remote endpoint]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369)
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270)
at SamplePlugin.main(SamplePlugin.java:101)
Am I missing any other configuration?
Here is my JAVA_TOOL_OPTIONS:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=true
-Djava.rmi.server.hostname=myhostname
You will also need to add the geode-core jar to the classpath for jvisualvm; use the --cp:a option. I would suggest just using geode-dependencies.jar, as that will pull in everything you might need.
The reason this is required is explained a bit in the comments for ContextAwareSSLRMIClientSocketFactory. Basically, when RMI uses SSL, the necessary RMIClientSocketFactory is exported from the server to the client for use there. In general this would simply be SslRMIClientSocketFactory, but in our case there is a custom socket factory, so the client (jvisualvm in this case) needs to have access to it.
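For reference, here is roughly what the Java side of a JMX-over-SSL connection can look like. This is only a generic sketch: the host, port and service URL are assumptions, it uses the standard SslRMIClientSocketFactory rather than Geode's custom factory, and it assumes the truststore is already configured through the usual javax.net.ssl system properties.

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.rmi.ssl.SslRMIClientSocketFactory;

public class JmxSslClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX manager address; replace host and port with your own.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://myhostname:1099/jmxrmi");

        Map<String, Object> env = new HashMap<>();
        // Tell the RMI registry lookup to use an SSL client socket factory.
        env.put("com.sun.jndi.rmi.factory.socket", new SslRMIClientSocketFactory());

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + connection.getMBeanCount());
        }
    }
}

With Geode's custom socket factory on the classpath (via geode-dependencies.jar), the factory exported by the server is the one that actually gets used on the client side.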

EofException when doing a deployment using the Tooltwist Controller

I'm deploying a ToolTwist application to a production server using FIP, and I'm getting this error during the transfer phase.
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
and in the fipserver console
org.eclipse.jetty.io.EofException
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:892)
at org.eclipse.jetty.http.AbstractGenerator.blockForOutput(AbstractGenerator.java:486)
at org.eclipse.jetty.http.AbstractGenerator.flush(AbstractGenerator.java:424)
at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:78)
at org.eclipse.jetty.server.HttpConnection$Output.flush(HttpConnection.java:1094)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:159)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:98)
at tooltwist.fip.jetty.GetFileListServlet.doGet(GetFileListServlet.java:82)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:94)
What should be the solution for this?
This error occurs in the first stage of the FIP file transfer, where the fipserver creates an index of the existing files on the destination server. This is done in GetFileListServlet.doGet(), which can be seen in the stack trace. It is also indicated on the client side by the messages...
Indexing source...
Indexing destination...
ERROR: java.net.SocketTimeoutException Read timed out
Exception: tooltwist.fip.FipException: java.net.SocketTimeoutException: Read timed out
This indexing process involves creating a hash for each file on the destination server, which the fip client then compares with the hashes of the files on the source machine. It does this to determine which files are different and so need to be installed.
A read timeout occurs when the client waits too long for the FIP server to finish indexing the files on the destination machine. Indexing is normally a fairly quick process, but it does involve reading all the files beneath the destination directory (e.g. in ~/server). If monstrously huge files exist within that destination directory, the scan will take a proportionately long time to complete. If that time is too long, the client times out and drops the connection; the server then sees that the connection was dropped and stops indexing.
The most common cause of this error is excessively large log files in ~/server/tomcat/logs. If you clean those up, the problem should go away.
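To make the indexing step concrete: it essentially amounts to walking the destination directory and hashing every regular file, which is why a directory full of huge log files makes it slow. The sketch below is not FIP's actual implementation (the class name and the choice of SHA-1 are assumptions); it just shows that every byte of every file has to be read.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.stream.Stream;

public class DestinationIndexer {
    public static void main(String[] args) throws Exception {
        Path root = Paths.get(args.length > 0 ? args[0] : "server");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile).forEach(DestinationIndexer::printHash);
        }
    }

    private static void printHash(Path file) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            byte[] buffer = new byte[8192];
            try (InputStream in = Files.newInputStream(file)) {
                int read;
                while ((read = in.read(buffer)) != -1) {
                    digest.update(buffer, 0, read); // every byte contributes to the hash
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex + "  " + file);
        } catch (Exception e) {
            System.err.println("Could not hash " + file + ": " + e.getMessage());
        }
    }
}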

Spring XD on YARN

I am getting the error below while trying to install Spring XD on YARN.
Error executing a spring application; nested exception is org.springframework.yarn.YarnSystemException:
Call From c01dfobi43.vcac.dc1.dsghost.net/100.98.226.45 to c01dfobi41.vcac.dc1.dsghost.net:8032 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused;
nested exception is java.net.ConnectException:
Call From c01dfobi43.vcac.dc1.dsghost.net/100.98.226.45 to c01dfobi41.vcac.dc1.dsghost.net:8032 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I am not sure where I am making a mistake. Also, do we need to install Spring XD on YARN on all nodes?
It would be great if you could share any documentation that covers the YARN setup explicitly.
I am going to assume that c01dfobi41.vcac.dc1.dsghost.net:8032 is a ResourceManager host. Based on your comment that YARN applications do run, I am also going to assume you have more than one ResourceManager. In that case, what may be happening (and I see this all the time) is that your YARN client attempts to contact the ResourceManager by looking it up in yarn-site.xml: it picks the first one and gets ConnectionRefused because the standby ResourceManager does not listen on its RPC port, then it moves on to the next one and succeeds. If this is the case, it is not a fatal error and can be ignored.
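If you want to verify which ResourceManager is actually listening, a simple probe of the RPC port from the client machine can help. A minimal sketch (pass the ResourceManager hostnames from your yarn-site.xml as arguments; 8032 is the port from the error message):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RmProbe {
    public static void main(String[] args) {
        int rmPort = 8032; // ResourceManager RPC port seen in the error message
        for (String host : args) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, rmPort), 5000);
                System.out.println(host + ":" + rmPort + " accepted the connection (likely the active RM)");
            } catch (IOException e) {
                System.out.println(host + ":" + rmPort + " refused or unreachable (possibly a standby RM): " + e.getMessage());
            }
        }
    }
}

If exactly one of the configured ResourceManagers accepts the connection, the ConnectionRefused messages for the others match the benign failover behaviour described above.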

Stopping Liferay while the database was shutting down caused a crash

I was stopping the Liferay portal, but a few seconds later I also stopped the database (db2 quiesce, which closes the connections), and apparently Liferay did not stop its execution correctly.
After that, I restarted the database and Liferay, but the portal does not work now. It shows this message in the browser:
HTTP Status 500 -
type Exception report
message
description The server encountered an internal error () that prevented it from fulfilling this request.
exception
javax.servlet.ServletException: Servlet execution threw an exception
com.liferay.portal.kernel.servlet.filters.invoker.InvokerFilterChain.doFilter(InvokerFilterChain.java:72)
...
root cause
java.lang.NoSuchMethodError: com.liferay.portal.util.PortalUtil.getCDNHostHttp()Ljava/lang/String;
com.liferay.portal.events.ServicePreActionExt.servicePre(ServicePreActionExt.java:937)
After looking in the logs, I found the following messages (they are edited):
SEVERE: Error waiting for multi-thread deployment of directories to completehostConfig.deployWar=Deploying web application archive {0}
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1000)
WARN [DefaultConnectionTester:203] SQL State '08001' of Exception which occurred during a Connection test (fallback DatabaseMetaData test) implies that the database is invalid, and the pool should refill itself with fresh Connections.
com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2030][11211][3.63.75] A communication error occurred during operations on the connection's underlying socket, socket input stream, or socket output stream. Error location: Reply.fill() - insufficient data (-1). Message: Insufficient data. ERRORCODE=-4499, SQLSTATE=08001
at com.ibm.db2.jcc.am.fd.a(fd.java:321)
WARN [DefaultConnectionTester:136] SQL State '08001' of Exception tested by statusOnException() implies that the database is invalid, and the pool should refill itself with fresh Connections.
WARN [C3P0PooledConnectionPool:708] A ConnectionTest has failed, reporting that all previously acquired Connections are likely invalid. The pool will be reset.
WARN [NewPooledConnection:486] [c3p0] A PooledConnection that has already signalled a Connection error is still in use!
WARN [NewPooledConnection:487] [c3p0] Another error has occurred [ com.ibm.db2.jcc.am.SqlNonTransientConnectionException: [jcc][t4][10335][10366][3.63.75] Invalid operation: Connection is closed. ERRORCODE=-4470, SQLSTATE=08003 ] which will not be reported to listeners!
com.ibm.db2.jcc.am.SqlNonTransientConnectionException: [jcc][t4][10335][10366][3.63.75] Invalid operation: Connection is closed. ERRORCODE=-4470, SQLSTATE=08003
WARN [BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#4fad5112 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
com.ibm.db2.jcc.am.SqlNonTransientConnectionException: DB2 SQL Error: SQLCODE=-20157, SQLSTATE=08004, SQLERRMC=FUT5MAN;QUIESCE DATABASE;;, DRIVER=3.63.75
ERROR [PortalJobStore:109] MisfireHandler: Error handling misfires: Unexpected runtime exception: null
org.quartz.JobPersistenceException: Unexpected runtime exception: null [See nested exception: java.lang.reflect.UndeclaredThrowableException]
Caused by: java.lang.reflect.UndeclaredThrowableException
at $Proxy279.prepareStatement(Unknown Source)
at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.countMisfiredTriggersInState(StdJDBCDelegate.java:413)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
Now I see that it is almost impossible to start the current Liferay installation. However, I have the database (I made a full backup) and Lucene's data directory. How can I recreate a Liferay installation from these two things? I would like to recover some of this data in a new installation, but I do not know how.
This is not the best solution, but I installed Liferay with a new database. Once it was configured, I changed the database configuration to point to the original database.
Probably, it was a problem with the ROOT deployment, but this is very weird.
I could recover all the data from the Lucene and the database.
The database is still quiesced and the Liferay user doesn't have the QUIESCE_CONNECT privilege.
Unquiesce the database and restart Liferay.
Using DB2 instance owner (if you're on Windows, any administrator):
db2 connect to DBNAME
db2 unquiesce database
db2 connect reset
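Once the database is unquiesced, you can verify that ordinary connections work again before starting Liferay. A minimal JDBC sketch, assuming the DB2 JCC driver is on the classpath and using placeholder host, port, database name and credentials:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Db2ConnectionCheck {
    public static void main(String[] args) {
        // Placeholder connection details; adjust to your environment.
        String url = "jdbc:db2://localhost:50000/DBNAME";
        try (Connection connection = DriverManager.getConnection(url, "liferay", "password")) {
            System.out.println("Connected: " + connection.getMetaData().getDatabaseProductVersion());
        } catch (SQLException e) {
            // SQLCODE -20157 / SQLSTATE 08004 here would mean the database is still quiesced.
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}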

SEVERE: CouchDBQuery error. java.net.SocketException: Too many open files

When I try to connect to CouchDB I get this error. Can someone tell me why this is happening? Do I have to assign null to the HttpClient and GetMethod in the method that calls CouchDB?
SEVERE: CouchDBQuery error
java.net.SocketException: Too many open files
at java.net.Socket.createImpl(Socket.java:397)
at java.net.Socket.<init>(Socket.java:371)
at java.net.Socket.<init>(Socket.java:249)
at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
The exception means there are not enough file handles available to open sockets on your machine. How to check this on Linux and Windows is described here.
With HttpClient, it's recommended to use one of the available connection managers to ensure connections are properly shut down.
See Section 2.8 of this guide on how to use the HttpClient connection manager.
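As an illustration, with Commons HttpClient 3.x (the version shown in the stack trace) this roughly means sharing one client backed by a connection manager and always releasing each method when you are done. A minimal sketch with a hypothetical CouchDB URL:

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class CouchDbQuery {
    // Share one client/connection manager for the whole application instead of
    // creating a new HttpClient (and new sockets) for every query.
    private static final HttpClient CLIENT =
            new HttpClient(new MultiThreadedHttpConnectionManager());

    public static String get(String url) throws Exception {
        GetMethod method = new GetMethod(url);
        try {
            CLIENT.executeMethod(method);
            return method.getResponseBodyAsString();
        } finally {
            // Always release the connection back to the manager; leaked
            // connections are what eventually exhaust the file handles.
            method.releaseConnection();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(get("http://localhost:5984/mydb/_all_docs"));
    }
}

Simply assigning null to the HttpClient and GetMethod references does not release the underlying sockets; releasing the connection (or shutting down the connection manager) is what does.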