How to set the JVM -Xmx parameter for SQL Workbench? (increase available memory)

I'm running SQL Workbench/J on macOS and keep running out of memory on a heavy query.
I only need to fetch this data once in a blue moon and would like to increase the memory available to the application.
The exact error I get is an out-of-memory error from the JVM.
I found a workaround when launching from the terminal, which is to use the command below, but I would like a setting in the SQL Workbench application itself:
java -Xmx4g -jar sqlworkbench.jar

Disclaimer: I am not a Mac user.
The memory setting used by the macOS launcher is stored in the file Info.plist, which should be inside the Contents sub-folder of the SQL Workbench/J application bundle (the .app folder).
There is already an entry with -Xmx2048m present that you can change.
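For illustration, the relevant part of Contents/Info.plist might look roughly like the sketch below. The exact key names depend on the launcher stub the application ships with, so treat this as an assumption and simply search the file for the existing -Xmx2048m string:

<key>JVMOptions</key>
<array>
    <!-- raise the maximum heap, e.g. from 2 GB to 4 GB -->
    <string>-Xmx4096m</string>
</array>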

Related

ovftool command to override the memory size and CPU count described in an OVA file during VM deployment

I have an OVA file (MyOvafile.ova) which contains MemorySize=16GB and CPU count=4.
I have installed ovftool on a VMware ESXi server.
I am using the following command to deploy the VM:
/vmfs/volumes/DataStore1/vmware-ovftool/ovftool --memorySize:15360 --name=Test_VM -dm=thin -ds=DataStore1 /vmfs/volumes/DataStore1/OVA_V5.1_BSI-8/MyOvafile.ova
The problem I am facing:
Although I pass a MemorySize of 15360 MB, after deployment the VM still has the same values as defined in the OVA file (MyOvafile.ova, i.e. 16 GB).
My question:
How can I change the MemorySize and CPU count values through the ovftool command?
Apparently this is a bug in ovftool (and in its documentation as well).
CPU and memory cannot be overridden by ovftool's corresponding parameters.
However, there is a workaround: modify the VM's VMX file and then reconfigure the VM.
1) Get the VMX file location (the path ending with .vmx):
vim-cmd vmsvc/getallvms
Vmid   Name      File                               Guest OS       Version   Annotation
72     Test_vm   [datastore2] VM_name/VM_name.vmx   rhel6_64Guest  vmx-08
2) Modify the .vmx file (for example, using sed or awk) to change the CPU and memory entries, as sketched below.
3) Reconfigure the VM from the .vmx file:
vim-cmd vmsvc/reload <VM_ID>
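For example, a minimal sketch of steps 2 and 3, assuming the VMX file uses the standard memsize and numvcpus keys and lives at the path reported by getallvms (the VM ID 72 also comes from that output; if your busybox sed lacks -i, redirect to a temporary file instead):

# back up the configuration first
cp /vmfs/volumes/datastore2/VM_name/VM_name.vmx /vmfs/volumes/datastore2/VM_name/VM_name.vmx.bak
# set memory to 15360 MB and the vCPU count to 4
sed -i 's/^memsize = .*/memsize = "15360"/' /vmfs/volumes/datastore2/VM_name/VM_name.vmx
sed -i 's/^numvcpus = .*/numvcpus = "4"/' /vmfs/volumes/datastore2/VM_name/VM_name.vmx
# tell ESXi to re-read the file
vim-cmd vmsvc/reload 72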
Issue reported in VMware community: https://communities.vmware.com/message/2698710#2698710

Error initializing JVM

I am getting a "Could not reserve enough space for object heap" error when I try to start the hybris server.
I have set:
wrapper.java.additional.1=-Xmx1G
wrapper.java.additional.2=-XX:MaxPermSize=1024M
My machine is 64-bit Windows with 8 GB of RAM.
I faced the same problem once. The problem in my case was that too many other applications were running on my system.
So go to the Task Manager and check the available memory.
Close some applications and try running it again.
Also, if you are using Eclipse, then in your eclipse.ini file (next to the Eclipse executable), replace -Xmx256m with -Xmx1024m (or -Xmx512m).
This is not compulsory, but in certain cases it works.
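As an illustration, the tail of eclipse.ini might look like this after the change (the surrounding lines vary between Eclipse versions; only the -Xmx line is the point here):

-vmargs
-Xms256m
-Xmx1024m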
If you are using some extension, then:
Open the YOURPATH/config/local.properties file.
Add the following entry:
build.parallel=true
Save the file.
(Where the machine has multiple cores, this tells hybris to utilize them by building in parallel, and in certain cases this too works.)
I too faced the same problem. I followed the steps below to increase the max heap size.
Add the following content to local.properties:
tomcat.generaloptions=-Xmx4G -ea -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dorg.tanukisoftware.wrapper.WrapperManager.mbean=true -Djava.endorsed.dirs="%CATALINA_HOME%/lib/endorsed" -Dcatalina.base=%CATALINA_BASE% -Dcatalina.home=%CATALINA_HOME% -Dfile.encoding=UTF-8 -Dlog4j.configuration=log4j_init_tomcat.properties -Djava.util.logging.config.file=jdk_logging.properties -Djava.io.tmpdir="${HYBRIS_TEMP_DIR}"
ant clean all
start hybrisserver
Reference
https://launchpad.support.sap.com/#/notes/0002437669

Strange apache behaviour when launching an external binary called by a perl script

I am currently setting up a web service powered by Apache and running on CentOS 6.4.
This service uses Perl CGI scripts (cgi-bin) that launch, in particular, external homemade compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a kernel segfault) when called by my Perl scripts.
If I restart the httpd service manually (at the command line: service httpd restart), the issue is totally fixed.
I examined the Apache and system logs and nothing suspicious can be found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried to change the launch order of httpd (S85httpd by default) to any other position, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the apache/perl/fortran-binary ensemble was not aware of it (causing my binary to crash each time it was called).
By contrast, when I manually restarted Apache at the shell prompt, the stack size limit was correctly inherited (.bashrc with 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, the new stack size limit is now passed directly from my Perl script to my binary.
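A minimal sketch of that workaround in the CGI script (the binary path is a placeholder; RLIM_INFINITY mirrors 'ulimit -s unlimited'):

#!/usr/bin/perl
use strict;
use warnings;
use BSD::Resource;

# Raise the stack limit before launching the Fortran binary;
# the child process inherits the new limit.
setrlimit(RLIMIT_STACK, RLIM_INFINITY, RLIM_INFINITY)
    or die "setrlimit failed: $!";

system('/path/to/fortran_binary') == 0
    or die "fortran binary failed: $?";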
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.
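A sketch of this idea, applied here by restarting httpd a couple of minutes after boot rather than the binary itself, since the question established that a late restart of httpd fixes the crash (the two-minute delay is arbitrary, and the at daemon must be running):

# in /etc/rc.local: restart httpd after boot, once limits are in place
echo "/sbin/service httpd restart" | at now + 2 minutes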

Find the command-line options that the JVM was started with at run time (1.6.0_12)

Full story:
I am trying to start up an instance of Hudson with a larger memory allocation, and I'm currently using scripts owned by root that I can't modify directly to pass arguments. However, the script passes the $JAVA_ARGS variable when starting up the service. I have exported the required parameters in JAVA_ARGS, but the application still appears to be bound by the old memory restrictions.
Question:
Is there a way to find out which command-line parameters were used to start up the instance? More specifically, I'm looking for the values that were passed (if any) to -Xmx and -Xms.
java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)
After some searching I came upon a pretty simple solution (which I'm a little embarrassed to have missed for so long). On Linux, you can see the command line of any running process with ps, as long as you pass the correct flags. I just ran ps -fHu hudson and was able to see the full command-line call to java, including the parameters that were passed in.
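As an alternative (assuming a JDK rather than just a JRE is installed), the JDK's jps tool lists the same information without grepping through ps output:

# list running JVMs with their main class and the flags passed to the JVM
jps -lvm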
Since you can export $JAVA_ARGS, maybe you can override $PATH to trick the script into running another program instead of the JVM, for example a program that simply writes its arguments somewhere.

Debugging Solaris OS crash

I have access to a remote Solaris terminal which crashes occasionally, and I have to ask someone with physical access to boot the machine up, which it does successfully. I would like to know which tools/files should I look at to find out the cause of the crash so that I can make the necessary configuration changes and avoid it in the future.
What tools you can use will depend on what version of Solaris you are running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
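For example, a quick way to see the most recent entries (tail is standard on Solaris):

# show the last 50 system messages; look for entries just before the crash
tail -50 /var/adm/messages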
Next, you can look for a system core file. If a core file was created, it will be in /var/crash/hostname, where "hostname" is the name of the machine.
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent pair.
> ::status
This should give you a panic message; cut and paste it into Google and see what you can find.
For more core file analysis read this:
http://cuddletech.com/blog/pivot/entry.php?id=965