pgloader: Socket error in "connect": EINTR (Interrupted system call) and HEAP-EXHAUSTED-ERROR

I've been trying to migrate data from MySQL to Postgres using pgloader, and I've run into a HEAP-EXHAUSTED-ERROR and a socket error.
For the HEAP-EXHAUSTED-ERROR, I tried reducing the batch size and the number of workers, but that didn't help.
For EINTR (Interrupted system call), I'm not sure what the root cause is.

I tried building pgloader with Clozure CL (CCL) instead, and it seems to work; CCL may offer a garbage collector that handles this workload better (see the "Heap exhausted" issue).
You can also try the CCL-based Docker image:
docker pull dimitri/pgloader:ccl.latest
This approach fixed both issues for me.
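For reference, here is a minimal sketch of running the migration through that CCL image; the connection strings are placeholders, and the --with settings (which pgloader documents for tuning memory use, e.g. prefetch rows and workers) are illustrative values to adapt to your data:

docker run --rm dimitri/pgloader:ccl.latest \
  pgloader --with "prefetch rows = 1000" --with "workers = 2" \
           mysql://user:password@mysql-host/source_db \
           pgsql://user:password@pg-host/target_db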

Related

testcontainers-python hanging while showing "waiting to be ready...", then fails

I'm running my unit-test code for Neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB of memory, so the code itself shouldn't be at fault. My suspicion is that this has something to do with my configuration, and that the tests now consume too much memory.
I've checked the official website's documentation, but it doesn't mention memory problems. Has anyone encountered a similar problem? How can I fix this?
Disclaimer: I am a maintainer of tc-java, so I have only some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine but fails on an extremely limited one. 1 GB of RAM is not much; I would expect it is generally not enough to start a Neo4j Docker container without memory swapping. Swapping makes startup and interactions very slow, which is why the startup timeout triggers.
For further debugging, you can run the Neo4j container directly with the Docker CLI in your environment and see how it behaves.
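For example, a rough sketch of such a manual check (the ports and settings are just the official neo4j image's defaults; NEO4J_AUTH=none simply disables authentication for the test):

# Start Neo4j by hand and see how long it takes and whether it becomes ready
docker run --rm -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=none neo4j:latest
# In a second terminal, watch the container's memory use while it starts
docker stats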

Terminal process gets killed with code ELIFECYCLE errno: 137 when VS Code is open. Quitting VS Code resolves the issue?

I've only begun encountering this issue in the last two days.
When I attempt to build my Angular project, it gets to a certain point and fails with the errors below.
The only way I can get the build to succeed is to quit VS Code and rerun the exact same command, after which it builds without issue.
Any ideas what may be causing this?
137 is 128 + 9. In some situations—and I'm guessing that this is one of them—this indicates that the process died with a signal 9. Signal 9 is, on macOS (and multiple other OSes), SIGKILL. This signal is sent by the "out of memory" killer.
This also explains why exiting VSCode fixes things: VSCode is a memory hog. Exiting it returns the memory to the system.
To fix this more permanently, either reduce the memory needs of your build and/or of VSCode, or add more memory to your system.
See also What killed my process and why?
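If the build machine runs Linux, one way (not from the original answer, just a common check) to confirm that the OOM killer is responsible and to see how tight memory gets is:

# Look for kernel OOM-killer messages after a failed build (Linux only)
dmesg | grep -iE 'out of memory|killed process'
# Watch available memory while the build runs alongside VS Code
free -h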

Selenium crashing in Docker due to Browsing context has been discarded

How do you run Selenium based tests inside Docker?
I'm trying to get some Python+Selenium tests, which use Firefox and Geckodriver, to run under an Ubuntu 18 Docker image.
My docker-compose.yml file is simply:
version: "3.5"
services:
app_test:
build:
context: .
shm_size: '4gb'
mem_limit: 4096MB
dockerfile: Dockerfile.test
Unfortunately, most tests are failing with errors like:
selenium.common.exceptions.NoSuchWindowException: Message: Browsing context has been discarded
The few search results I can find mentioning this error suggest it's because of low memory. The server I'm running the tests on has 8GB of total memory, although I also tested on a machine with 32GB and received the same error.
I also added a call to print the output of top before each test, and it's showing virtually no memory usage, so I'm not sure what would be causing the test to crash due to insufficient memory.
Some articles suggested adding the shm_size and mem_limit lines, but those had no effect.
I've also tried different versions of Firefox, from the most recent version 71 down to the older ESR releases, to rule out a bug caused by incompatible versions of Firefox + Selenium + Geckodriver. I'm otherwise following this compatibility table.
What is causing this error and how do I fix it?
The root cause could be running out of memory, specifically the container's shared memory (/dev/shm).
To fix it, run the Docker container with a larger --shm-size.
Example:
--shm-size="2G"

Strange Apache behaviour when launching an external binary called by a Perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
This service uses Perl CGI scripts (cgi-bin) that launch, among other things, external home-made compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a segfault reported by the kernel) when called by my Perl scripts.
If I manually restart the httpd service (service httpd restart at the command line), the issue is completely fixed.
I examined the Apache and system logs and found nothing suspicious.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried changing the launch order of httpd (S85httpd by default) to other positions, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line after the server has fully booted up!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions and requires an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the apache/perl/fortran-binary chain was not aware of it, causing my binary to crash each time it was called.
By contrast, when I manually restarted Apache at the shell prompt, the stack size limit was correctly inherited (my .bashrc runs 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
This new stack size limit is now passed directly from my Perl script to my binary.
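An alternative that is not from the original post but targets the same root cause: on CentOS/RHEL 6 the httpd init script sources /etc/sysconfig/httpd before starting Apache, so a ulimit placed there should also apply when httpd is started at boot. A sketch, assuming the stock init script:

# /etc/sysconfig/httpd (sourced by /etc/init.d/httpd on CentOS 6 before Apache starts)
ulimit -S -s unlimited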
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.
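For instance, a minimal sketch of such a delayed start from /etc/rc.local (the delay and the restart command are illustrative; the atd service must be running):

# /etc/rc.local (excerpt): restart httpd a couple of minutes after boot
echo "/sbin/service httpd restart" | at now + 2 minutes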

Valgrind: How to force it to generate heap summary without terminating process?

When using Valgrind, I noticed that it only generates the Heap Summary when the process is terminating. Is there a way to force Valgrind to scan the memory and print leak reports when process is still running?
In addition to the VALGRIND_DO_LEAK_CHECK client request, you can also run Valgrind with --vgdb=yes to enable the embedded gdbserver, and then issue the 'monitor leak_check full reachable any' command at the (gdb) prompt.
This doesn't require modifying and rebuilding the target program, and it has other advantages: you can set breakpoints and perform leak checks at arbitrary points in the execution, not just at the ones where you've put in the client request.
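A rough sketch of that workflow (the program name is a placeholder; --vgdb-error=0 makes Valgrind wait for the debugger before running the program):

# Terminal 1: run the program under Memcheck with the embedded gdbserver enabled
valgrind --vgdb=yes --vgdb-error=0 ./myprog

# Terminal 2: attach gdb through vgdb and trigger a leak check at any point
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor leak_check full reachable any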
Use the VALGRIND_DO_LEAK_CHECK client request from valgrind/memcheck.h.