Install and run Tomcat on a BurstNET VPS - jvm

What is the minimum memory requirement to start JVM?
I have the cheapest BurstNET VPS (512 MB of memory) and installed Java.
When I type java, it says:
$java
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
When I run top, there is still around 400 MB of free memory.
Support tells me the only solution is to buy more memory, but I doubt that suggestion. In case it is caused by insufficient memory, I tried to create a swap file; however, that is not possible because my VPS runs on OpenVZ, which does not allow swap files. http://writereadspread.blogspot.com/2010/08/swap-on-vpsopenvs.html
I would very much appreciate it if you could answer any of the following questions:
What is the cause of the issue?
What is the minimum memory requirement for installing JRE and JDK?
If you are running Java apps on a VPS, how much memory do you have, and which host are you using?

To run Tomcat on the VPS (a consolidated sketch follows the list):
* if OpenJDK is installed, uninstall it
* install the Sun Java Linux package
* install Tomcat
* run export JAVA_OPTS="-XX:MaxPermSize=64m -Xms16M -Xmx64m"
* start Tomcat: ./startup.sh
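
As a consolidated sketch of the steps above (assuming a shell on the VPS and that Tomcat is unpacked under /opt/tomcat, which is only an example path):

# first verify that the JVM itself can start with a small heap
java -Xms16m -Xmx64m -version

# cap Tomcat's JVM memory so it fits inside a 512 MB VPS
export JAVA_OPTS="-XX:MaxPermSize=64m -Xms16M -Xmx64m"

# start Tomcat from its bin directory (adjust the path to your installation)
cd /opt/tomcat/bin
./startup.sh

# confirm the heap options were picked up by the running process
ps aux | grep [j]ava

If even the plain java -version check fails with "Could not reserve enough space for object heap", then explicitly passing small -Xms/-Xmx values as shown is the thing to try before touching Tomcat at all.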

Related

testcontainers-python hanging while showing "waiting to be ready...", then fails

I'm running my unit testing code for neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB of memory, so the code itself shouldn't be faulty. My suspicion is that it has something to do with my configuration, so that it now consumes too much memory.
I've checked the official website's documentation, but it doesn't mention a memory problem. I wonder if someone has encountered a similar problem? How can I fix this?
Disclaimer: I am a maintainer of tc-java, so I have only some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine, while it fails on an extremely limited machine. 1 GB of RAM is not much; I would expect it is generally not enough to start a Neo4j Docker container successfully without memory swapping. Swapping would make the startup and interactions very slow, hence the startup timeout triggers.
For further debugging, you can run the Neo4j container directly using the Docker CLI in your environment and see how it behaves.
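For example, a minimal way to do that (the container name and password below are placeholders, not values from the question):

# run the same image testcontainers pulls, with explicit credentials
docker run --rm --name neo4j-test \
  -e NEO4J_AUTH=neo4j/test-password \
  -p 7474:7474 -p 7687:7687 \
  neo4j:latest

# in a second shell, watch memory use and startup progress
docker stats neo4j-test
docker logs -f neo4j-test

If the container takes minutes to become ready, or gets OOM-killed, that points to the 1 GB host being the limiting factor.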

Selenium crashing in Docker due to Browsing context has been discarded

How do you run Selenium-based tests inside Docker?
I'm trying to get some Python+Selenium tests, which use Firefox and Geckodriver, to run under an Ubuntu 18 Docker image.
My docker-compose.yml file is simply:
version: "3.5"
services:
app_test:
build:
context: .
shm_size: '4gb'
mem_limit: 4096MB
dockerfile: Dockerfile.test
Unfortunately, most tests are failing with errors like:
selenium.common.exceptions.NoSuchWindowException: Message: Browsing context has been discarded
The few search results I can find mentioning this error suggest it's because of low memory. The server I'm running the tests on has 8GB of total memory, although I also tested on a machine with 32GB and received the same error.
I also added a call to print the output of top before each test, and it's showing virtually no memory usage, so I'm not sure what would be causing the test to crash due to insufficient memory.
Some articles suggested adding the shm_size and mem_limit lines, but those had no effect.
I've also tried different versions of Firefox, from the most recent version 71 down to the older ESR releases, to rule out a bug caused by incompatible versions of Firefox+Selenium+Geckodriver. I'm otherwise following this compatibility table.
What is causing this error and how do I fix it?
The root cause could be running out of memory, in particular the container's shared memory (/dev/shm).
To fix it, run the Docker container with --shm-size added.
Example:
--shm-size="2G"
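
For instance, running the test container directly with the Docker CLI (the image tag app_test below is just a stand-in for whatever Dockerfile.test builds):

# build the test image from the compose service's Dockerfile
docker build -f Dockerfile.test -t app_test .

# run it with an enlarged /dev/shm so Firefox has room for its shared memory
docker run --rm --shm-size="2g" app_test

Note that in the compose file above, shm_size sits under build:, where it only affects the image build; it is worth double-checking that the running container actually gets the larger /dev/shm (for example with df -h /dev/shm inside it).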

AttachNotSupportedException when trying to start a JFR recording

I'm receiving AttachNotSupportedException when trying to start a JFR recording.
It was working normally, until now.
jcmd 3658 JFR.start maxsize=100M filename=jfr_1.jfr dumponexit=true settings=profile
Output:
3658:
com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
What might be happening?
OS: Oracle Linux Server release 6.7
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
One of the probable reasons is that the /tmp/.java_pid1234 file has been deleted (where 1234 is the PID of the Java process).
Tools that depend on the Dynamic Attach Mechanism (jstack, jmap, jcmd, jinfo) communicate with the JVM through a UNIX domain socket created in /tmp.
This socket is created by the JVM lazily on the first attach attempt, or eagerly at JVM initialization if the -XX:+StartAttachListener flag is specified.
Once the file corresponding to the socket is deleted, tools cannot connect to the target process, and unfortunately there is no way to re-create the communication socket without restarting the JVM.
For the description of Dynamic Attach Mechanism see this answer.
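A quick way to check whether this is what happened (3658 stands in for the PID from the question):

# the attach socket lives in /tmp and is named after the target PID
ls -l /tmp/.java_pid3658

# if the file is gone, only a restart of the target JVM helps; to create the
# listener eagerly next time, start the application with this flag
# (your-app.jar is a placeholder)
java -XX:+StartAttachListener -jar your-app.jar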
From personal experience: this problem also occurs when the development environment sits on a different partition from the operating system, for example the operating system partition is ext4 while the development environment partition (where the JVM lives) is NTFS. The problem occurs because the file /tmp/.java_pid6024 (where 6024 is the PID of the Java process) cannot be created.
To work around it, add -XX:+StartAttachListener to the startup options of the JVM or application server.
Another possibility: your app is running under systemd with 'PrivateTmp=yes'. This prevents the /tmp/.java_pid1234 file from being found.
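You can check for this and, if acceptable, disable it with a drop-in (myapp.service is a placeholder for your unit name):

# see whether the unit runs with a private /tmp
systemctl show myapp.service -p PrivateTmp

# if it prints PrivateTmp=yes, add an override and restart the service
sudo systemctl edit myapp.service
# in the editor, add:
#   [Service]
#   PrivateTmp=no
sudo systemctl restart myapp.service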

Error initializing JVM

I am getting a "Could not reserve enough space for object heap" error when I try to start the hybris server.
I have set
wrapper.java.additional.1=-Xmx1G
wrapper.java.additional.2=-XX:MaxPermSize=1024M
My machine is 64-bit Windows with 8 GB of RAM.
I faced the same problem once. The problem in my case was that too many other applications were running on my system.
So go to Task Manager and check the available memory.
Close some applications and try running again.
Also, if you are using Eclipse, then in your eclipse.ini file (located beside the Eclipse executable), replace -Xmx256m with -Xmx1024m (or -Xmx512m).
This is not compulsory but in certain cases it works.
If you are using some extension, then:
Open the YOURPATH/config/local.properties file.
Add the following entry:
build.parallel=true
Save the file.
(When the machine has multiple cores, this tells hybris to build in parallel, and in certain cases this too works.)
I too faced the same problem. I followed the steps below to raise the max heap size.
Add the following content to local.properties
tomcat.generaloptions=-Xmx4G -ea -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dorg.tanukisoftware.wrapper.WrapperManager.mbean=true -Djava.endorsed.dirs="%CATALINA_HOME%/lib/endorsed" -Dcatalina.base=%CATALINA_BASE% -Dcatalina.home=%CATALINA_HOME% -Dfile.encoding=UTF-8 -Dlog4j.configuration=log4j_init_tomcat.properties -Djava.util.logging.config.file=jdk_logging.properties -Djava.io.tmpdir="${HYBRIS_TEMP_DIR}"
ant clean all
start hybrisserver
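Once the server is back up, you can verify that the new heap limit was actually applied (replace <pid> with the PID of the hybris/tomcat Java process; running jcmd without arguments lists the candidates):

# list running JVMs, then print the flags of the hybris/tomcat process
jcmd
jcmd <pid> VM.flags
# look for -XX:MaxHeapSize=... in the output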
Reference
https://launchpad.support.sap.com/#/notes/0002437669

Strange Apache behaviour when launching an external binary called by a Perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
This service uses Perl scripts (cgi-bin) which launch, in particular, external home-made compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a kernel segfault) when called by my Perl scripts.
If I manually restart the httpd service (at the command line: service httpd restart), the issue is completely fixed.
I examined apache/system logs and nothing suspicious can be found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried to change the launch order of httpd (S85httpd by default) to other positions, without success.
To summarize, my web service is only fully functional (with no external binary crashes) when httpd is launched at the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions that require an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the Apache/Perl/Fortran-binary ensemble was not aware of it, causing my binary to crash each time it was called.
On the contrary, when I manually restarted Apache from the shell prompt, the stack size limit was passed on correctly (my .bashrc contains 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. with setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, the new stack size limit is now passed directly from my Perl script to my binary.
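A sketch of an alternative (assuming the stock CentOS 6 init script, which sources /etc/sysconfig/httpd before starting Apache) would be to raise the limit for the whole httpd service instead of per script:

# a ulimit placed in /etc/sysconfig/httpd applies to Apache and everything it spawns
echo 'ulimit -S -s unlimited' >> /etc/sysconfig/httpd
service httpd restart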
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.