WebLogic Server PermGen errors

When I created a datasource and tried to attach it to the target server, I got the following errors. Where do I need to increase the space in WebLogic 10.3.6? Any help is highly appreciated.
The console encountered the following error:
weblogic.application.WrappedDeploymentException: PermGen space at
java.lang.ClassLoader.defineClass1(Native Method) at
java.lang.ClassLoader.defineClass(ClassLoader.java:791) at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at
java.net.URLClassLoader.defineClass(URLClassLoader.java:449) at
java.net.URLClassLoader.access$100(URLClassLoader.java:71) at
java.net.URLClassLoader$1.run(URLClassLoader.java:361) at
java.net.URLClassLoader$1.run(URLClassLoader.java:355) at
java.security.AccessController.doPrivileged(Native Method) at
java.net.URLClassLoader.findClass(URLClassLoader.java:354) at
java.lang.ClassLoader.loadClass(ClassLoader.java:423) at
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at
java.lang.ClassLoader.loadClass(ClassLoader.java:356) at
oracle.jdbc.driver.T4CTTIdcb.fillupAccessors(T4CTTIdcb.java:399) at
oracle.jdbc.driver.T4CTTIdcb.receiveCommon(T4CTTIdcb.java:208) at
oracle.jdbc.driver.T4CTTIdcb.receive(T4CTTIdcb.java:146) at
oracle.jdbc.driver.T4C8Oall.readDCB(T4C8Oall.java:844) at
oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:358) at
oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192) at
oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531) at

PermGen Space Out-of-Memory Error when Using the Sun JDK
Bug: 8589284
Added: 06-May-2011
Platform: All
When the Sun JDK is used as the JVM for the SOA managed server, Oracle recommends that the following memory settings be used. If proper memory settings are not used, repeated operations on task detail applications (human workflow) can result in PermGen space out-of-memory errors.
To do so:
For UNIX operating systems, open the $DOMAIN_HOME/bin/setSOADomainEnv.sh file.
For Windows operating systems, open the DOMAIN_HOME\bin\setSOADomainEnv.cmd file.
Then increase the following values:
PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=256m -XX:MaxPermSize=512m"
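For orientation, a minimal sketch of how the edited section of setSOADomainEnv.sh might look; the DEFAULT_MEM_ARGS line is an assumption based on the stock script and may not exist in every version:
# setSOADomainEnv.sh -- append the PermGen flags to the existing memory arguments
DEFAULT_MEM_ARGS="${DEFAULT_MEM_ARGS} -XX:PermSize=256m -XX:MaxPermSize=512m"
PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=256m -XX:MaxPermSize=512m"
Restart the managed server afterwards so the new arguments take effect.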

I had a similar problem when running WLS as a Windows service, even after setting MEM_ARGS properly in my service creation script. I finally resolved it by updating a few Windows registry entries:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\wlsvc yerdomain_yerserver\Parameters]
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet002\services\wlsvc yerdomain_yerserver\Parameters]
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\wlsvc yerdomain_yerserver\Parameters]
I'm not sure why there were three entries, but after updating them all and restarting the service, everything was back to normal.
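For reference, the JVM arguments for a WebLogic Windows service usually live in a CmdLine value under each of those Parameters keys; the value name and the commands below are a sketch under that assumption, so verify them against your own registry before changing anything:
rem Inspect the current command line for the service (value name assumed to be CmdLine)
reg query "HKLM\SYSTEM\CurrentControlSet\services\wlsvc yerdomain_yerserver\Parameters" /v CmdLine
rem Write it back with the enlarged PermGen settings appended, then restart the service
reg add "HKLM\SYSTEM\CurrentControlSet\services\wlsvc yerdomain_yerserver\Parameters" /v CmdLine /t REG_SZ /d "<existing arguments> -XX:PermSize=256m -XX:MaxPermSize=512m" /f
net stop "wlsvc yerdomain_yerserver"
net start "wlsvc yerdomain_yerserver"
Repeat the reg add for each ControlSet key that exists, as in the list above.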

You should do what #better_user_mkstemp said, but I also can't help noticing that you have a serious problem with your DataSource, an Oracle DataSource to be specific. Try going to
http://your_server:7001/console/dashboard and start monitoring your Data Sources to see which one this is, then try to tune it using this documentation.

In my case the solution was to edit the DOMAIN\bin\setDomainEnv.cmd file. The following modifications were needed before the server would start as intended:
The -Xms and -Xmx values were increased.
The -XX:PermSize and -XX:MaxPermSize values were increased too.
And lastly, but probably most importantly,
the if "%JAVA_VENDOR%"=="Sun" ( conditions were changed to if "%JAVA_VENDOR%"=="Oracle" ( so that my JVM was properly recognized.
Before this last modification the memory changes were only partly reflected in the initialized JVM, which meant that the parameters regarding the PermGen space were simply ignored.
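As a rough sketch of the kind of edit meant here (the variable name and values are illustrative only; edit whichever memory lines your own setDomainEnv.cmd already defines):
rem setDomainEnv.cmd -- illustrative sketch; match the variable names in your file
if "%JAVA_VENDOR%"=="Oracle" (
	set MEM_ARGS=-Xms1024m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=512m
)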

Related

Tomcat server SQL exception

I have an app that runs on the Tomcat server. I use IntelliJ on my machine and run it from there when I do tests.
I have run it many times and everything worked, but suddenly the server stopped coming up properly, and I see the following in the log:
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1: BasicResourcePool$AcquireTask: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#1ab64513 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: Unexpected exception encountered during query.
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2664)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2568)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1557)
at com.mysql.jdbc.ConnectionImpl.loadServerVariables(ConnectionImpl.java:3868)
at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3407)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2384)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2153)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:792)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source)
I have no clue what happened, since I did not change anything related to JDBC or SQL. I tried upgrading the Kotlin version and reverted it right away, but I don't know whether that has anything to do with the exception, or how I can solve it.
Checking the connection to the database from the IntelliJ DB pane succeeds.
I would be thankful if someone has a clue about what could be going wrong.

Processing stuck in step WriteSuccessfulRecords ... when using template Pub/Sub -> BigQuery

I keep getting this issue when I use a predefined template, and I am not sure what the cause could be.
As for the SDK: Apache Beam SDK for Java 2.10.0.
Processing stuck in step WriteSuccessfulRecords/StreamingInserts/StreamingWriteTables/StreamingWrite for at least 05m00s without outputting or completing in state finish
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:765)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:829)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.flushRows(StreamingWriteFn.java:130)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.finishBundle(StreamingWriteFn.java:102)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn$DoFnInvoker.invokeFinishBundle(Unknown Source)
It seems the existing template requires a schema for the Dataflow job to work. After adding the schema, the issue went away.
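If the destination table does not exist yet, one way to give it a schema up front is to create it with the bq CLI before launching the template; the dataset, table, and field names below are placeholders, not values from the original question:
# Create the destination BigQuery table with an explicit schema (all names are placeholders)
bq mk --table my_dataset.my_table id:STRING,payload:STRING,event_ts:TIMESTAMP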

JaCoCo agent configuration in WebLogic | ClassCastException on server startup

I am trying to configure the JaCoCo agent in WebLogic 11g. After adding the parameters below:
-XXaggressive -Xmx8192m -Xms8192m -Xgc:pausetime -Xgc:gencon -XXnosystemgc -Duser.home=/scratch/app/product/fmw/XXXinstall/XXX/config -Dsun.net.client.defaultConnectTimeout=10000
-javaagent:/scratch/app/product/fmw/user_projects/domains/host_domain/lib/jacocoagent.jar=destfile=/scratch/app/product/fmw/user_projects/domains/host_domain/tmp/host_jacoco.exec,output=tcpserver,address=,includes=com.*
the WebLogic server does not come up properly, producing the logs below.
Error
Caused by: java.lang.ClassCastException: [Z
at XXX.app.AbstractApplication.fetchAllOverriddenServices(AbstractApplication.java:1000)
at XXX.app.AbstractApplication.checkAccess(AbstractApplication.java:930)
at XXX.app.sms.service.provider.AccessibleResourceApplicationService.initializeRequestedResource(AccessibleResourceApplicationService.java:1011)
at XXX.app.sms.service.provider.AccessibleResourceLoader.initializeSingleton(AccessibleResourceLoader.java:187)
at XXX.app.sms.service.provider.AccessibleResourceLoader.loadResources(AccessibleResourceLoader.java:232)
at XXX.app.adapter.impl.sms.AccessibleResourceLoaderAdapter.loadResources(AccessibleResourceLoaderAdapter.java:49)
at XXX.app.bootstrap.BootstrapInitializer.initializeSecuritySingletons(BootstrapInitializer.java:292)
at XXX.channel.branch.bootstrap.BootstrapServlet.init(BootstrapServlet.java:46)
Note: after removing the parameter
-javaagent:/scratch/app/product/fmw/user_projects/domains/host_domain/lib/jacocoagent.jar=destfile=/scratch/app/product/fmw/user_projects/domains/host_domain/tmp/host_jacoco.exec,output=tcpserver,address=,includes=com.*
the server comes up properly.
As was answered in https://github.com/jacoco/jacoco/issues/567:
WebLogic server without applications starts perfectly with JaCoCo.
Most likely you have an issue in your code:
To collect execution data JaCoCo adds members to the classes. One of those members is a field of type boolean[] ([Z in bytecode notation). These members are marked as synthetic. Your application and its libraries must ignore synthetic members. If they are not ignored, then change your application to do so, or exclude classes from instrumentation using the agent parameters includes and excludes.
Run your application under a debugger, put a breakpoint at XXX.app.AbstractApplication.fetchAllOverriddenServices(AbstractApplication.java:1000) and investigate why there is a wrong cast and/or which classes to exclude. Or start excluding packages of your application one by one.
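For example, the agent line from the question could be narrowed with the excludes parameter; the package pattern below is a hypothetical placeholder, the rest is taken from the original options:
-javaagent:/scratch/app/product/fmw/user_projects/domains/host_domain/lib/jacocoagent.jar=destfile=/scratch/app/product/fmw/user_projects/domains/host_domain/tmp/host_jacoco.exec,output=tcpserver,address=,includes=com.*,excludes=com.example.app.*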

Unable to open Excel (.xlsx) file in Pentaho Spoon

I have an Excel sheet (.xlsx format), but when I try opening it using "ExcelInput", I get
Unable to open dialog for this step.
java.lang.OutOfMemoryError: GC overhead limit exceeded
I have enabled "Excel 2007 XLSX (Apache POI)" in the Content settings as well.
java.lang.OutOfMemoryError: GC overhead limit exceeded
This error occurs when the process is running out of memory. It means that garbage collection (GC) has been trying to free memory but is unable to do so.
Check this article for more.
The possible solution is to increase the memory available to the application, Kettle in this case. You can do so by editing the "kitchen.sh / pan.sh" or "kitchen.bat / pan.bat" file located inside "../pentaho/design-tools/data-integration". Increase the JAVAMAXMEM number to a larger value, for example 1024.
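A sketch of the kind of change meant here; the exact line differs between PDI versions, so look for wherever your script sets JAVAMAXMEM:
# kitchen.sh / pan.sh -- raise the maximum heap, value in MB (1024 is just an example)
JAVAMAXMEM="1024"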
Hoping this might help you in reading the Excel file :)
If you use the Spoon client (i.e. the default application used when running Pentaho Data Integration - PDI), you can change the parameters in Spoon.bat (if you use Windows) or Spoon.sh (if you use Unix). The Java memory parameters are Xms and Xmx. You will find them in a statement like the following:
if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms1024m" "-Xmx2048m" "-XX:MaxPermSize=256m"
After changing the values, you should restart Spoon.
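For example, after raising the limits the same statement might read as follows (the numbers are only examples to adapt to your machine):
if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xms2048m" "-Xmx4096m" "-XX:MaxPermSize=512m"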

IntelliJ IDEA 14: What is Capture Memory Snapshot?

I tried to search for this but couldn't find exactly what I am looking for,
so could someone please provide an explanation of IDEA 14's Capture Memory Snapshot?
It was added in version 14 to make reporting easier in case of memory trouble.
Snippet from How to report IntelliJ IDEA performance problems:
In case of memory related issues (memory usage goes high, garbage is not collected, etc) please use the Memory snapshot button in the menu near the CPU snapshot button. If it's not possible to get the snapshot because of the application crashing with OutOfMemory errors, please add the
-XX:+HeapDumpOnOutOfMemoryError
option to the IntelliJ IDEA JVM options. On the next OOM error the .hprof dump will be produced and saved by the JVM (usually in the application working directory, which is IDEA_HOME\bin).
Upload this dump to our FTP as described above in the CPU snapshot section.
Please note that a memory snapshot may contain sensitive source code from your project.
If you are uploading to a public service, use some password protection or encryption. The JetBrains FTP server is write-only, so you don't need to protect files uploaded there.
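As a hedged sketch, the option goes into the IDE's VM options file, one option per line (the exact file name depends on the platform and launcher; the heap dump path line is optional and its path is a placeholder):
-Xmx2048m
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/idea-oom.hprof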
Additional link:
Reporting performance problems