I'm running my unit test code for Neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB of memory, so the code itself shouldn't be faulty. My suspicion is that it has something to do with my configuration, so that it now consumes too much memory.
I've checked the official website's documentation, but it doesn't mention any memory problem. Has anyone encountered a similar problem? How can I fix this?
Disclaimer: I am a maintainer of tc-java, so I only have some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine, while it fails on an extremely limited one. 1 GB of RAM is not much; I would expect it is generally not enough to successfully start a Neo4j Docker container without memory swapping. Swapping would make the startup and interactions very slow, which is why the startup timeout triggers.
For further debugging, you can run the Neo4j container directly with the Docker CLI in your environment and see how it behaves.
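For example, something along these lines starts the same image outside of Testcontainers, capped at roughly the RAM your test machine has and with Neo4j's own heap/pagecache turned down (the exact values here are just guesses for the experiment, not recommendations):

# Start Neo4j manually with a memory cap similar to the test machine,
# and with reduced heap/pagecache so it has a chance to fit:
docker run --rm --name neo4j-debug \
  --memory=1g \
  -e NEO4J_AUTH=none \
  -e NEO4J_dbms_memory_heap_max__size=256m \
  -e NEO4J_dbms_memory_pagecache_size=128m \
  -p 7474:7474 -p 7687:7687 \
  neo4j:latest

# In a second terminal, watch what the container actually consumes during startup:
docker stats neo4j-debug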
I'm trying to build Superset locally using docker-compose.
After cloning the repository, I modify docker-compose.yml so that it builds images from local source code instead of pulling from Docker Hub. My modifications include:
In service db, change Postgres image version from image: postgres:14 to image: postgres:10 since the service cannot be built properly with Postgres 14.
In services superset, superset-init, superset-worker, superset-worker-beat and superset-tests-worker, change image: *superset-image to build: . so that Docker builds the application from local source code.
However, after running docker-compose build and then docker-compose up, I got a blank screen. I checked the logs and found that a lot of asset files are missing; for example, /static/assets/images/loading.gif is missing, which results in the blank screen.
What am I doing wrong, or what is missing from my configuration steps? Please help me.
I finally figured it out: the Superset frontend's web packages are installed inside the superset_node container at runtime rather than while the image is being built. That's why, even though superset_node is already built, we have to wait (in my case about 15-20 more minutes) for the packages to finish installing. Another point to note is that this installation takes up a lot of memory, so make sure you allocate enough RAM to Docker (in my case I allocated 16 GB).
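If you hit the same thing, one way to see whether the frontend bundle is still being built is to follow that container's logs (the service name below is the one from Superset's docker-compose.yml) and wait until webpack reports a successful compile:

docker-compose logs -f superset_node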
I have a small question regarding my geth node.
I have started the node on my machine with the following command:
geth --snapshot=false --mainnet --syncmode "full" --datadir=$HOME/.ethereum --port 30302 --http --http.addr localhost --http.port 8543 --ws --ws.port 8544 --ws.api personal,eth,net,web3 --http.api personal,eth,net,web3
Currently a full geth node is supposed to take up around 600 GB of storage on my disk. But after checking my used disk space (with du -h on Ubuntu), I spotted that the chaindata directory is far larger than that.
Can anyone explain to me why my full node is using 1.4 TB of disk space for chaindata? The node has been running for some time (around two weeks) and is fully synced. I am using Ubuntu 20.04.
Thanks in advance!
You set syncmode to "full" and disabled snapshot. This will get you an archive node which is much bigger than 600 GB. You can still get a full (but not archive) node by running with the default snapshot and syncmode settings.
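For illustration, the same command from the question with the default snapshot and syncmode settings simply drops those two flags (note that an existing archive-sized chaindata directory will not shrink by itself; you would typically resync from scratch):

geth --mainnet --datadir=$HOME/.ethereum --port 30302 \
  --http --http.addr localhost --http.port 8543 \
  --ws --ws.port 8544 \
  --ws.api personal,eth,net,web3 --http.api personal,eth,net,web3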
Airflow 1.10.12. I'm seeing this error in the UI:
Broken DAG: [/home/airflow/dags/something.py] The version of cryptography does not match the loaded shared object. This can happen if you have multiple copies of cryptography installed in your Python path. Please try creating a new virtual environment to resolve this issue. Loaded python version: 2.9.2, shared object version: b'2.9'
The DAGs compile on the machine with no errors, but these messages appear for almost all of them.
I have also recreated the virtualenv multiple times, but the error persists.
Anyone seen this before?
It turns out that a Celery host had a scheduler running that was inserting the errors into the database. Stopping the extra scheduler made the messages go away.
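In case it helps someone else: a quick way to check for a stray scheduler is to look for the process on each Celery/worker host. How you stop it depends on how it was started, so the systemd unit name below is just an example:

# Look for scheduler processes on each host (the [a] trick avoids matching grep itself):
ps aux | grep "[a]irflow scheduler"
# If one is running where it should not be, stop it, e.g.:
sudo systemctl stop airflow-scheduler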
How do you run Selenium-based tests inside Docker?
I'm trying to get some Python+Selenium tests, which use Firefox and Geckodriver, to run under an Ubuntu 18 Docker image.
My docker-compose.yml file is simply:
version: "3.5"
services:
app_test:
build:
context: .
shm_size: '4gb'
mem_limit: 4096MB
dockerfile: Dockerfile.test
Unfortunately, most tests are failing with errors like:
selenium.common.exceptions.NoSuchWindowException: Message: Browsing context has been discarded
The few search results I can find mentioning this error suggest it's because of low memory. The server I'm running the tests on has 8 GB of total memory, although I also tested on a machine with 32 GB and received the same error.
I also added a call to print the output of top before each test, and it's showing virtually no memory usage, so I'm not sure what would be causing the test to crash due to insufficient memory.
Some articles suggested adding the shm_size and mem_limit lines, but those had no effect.
I've also tried different versions of Firefox, from the most recent version 71 to the older ESR releases, to rule out a bug caused by incompatible versions of Firefox, Selenium, and Geckodriver. I'm otherwise following this compatibility table.
What is causing this error and how do I fix it?
The root cause could be the container running out of memory.
To fix it, run the Docker container with --shm-size set.
Example:
--shm-size="2G"
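In a full command that would look like this (the image name is just a placeholder for whatever image runs your tests):

# Firefox can crash with "Browsing context has been discarded" when /dev/shm is too small:
docker run --shm-size="2g" my-selenium-tests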
I notice that the file size of a *.wasm compiled from Rust is acceptable. However, a minimal HelloWorld compiled by AspNet/Blazor takes up almost 2.8 MB.
mono.wasm 1.75MB
mscorlib.dll 1.64MB
*.dll ....
If I understand correctly, mono.wasm is the VM that runs in the browser and executes the DLLs we write. Does that mean that, no matter what we do, we cannot make the files smaller than 1.75 MB? If not, is there a way to reduce the file size?
Yes, 2.8 MB is quite a large payload for a 'Hello World' application. However, Blazor is still very much an experimental technology which is not ready for production use yet. There are numerous reasons why the generated output is so large at the moment:
Your current application runs in an interpreted mode, where the mono.wasm file ships the CLR to your browser, allowing it to execute your DLL. A faster and more size-efficient approach would be to use ahead-of-time (AOT) compilation, as described in this article. This would allow the compiler to strip out any library functions that are not used, giving a highly optimised output.
The features of the WebAssembly runtime itself are quite limited; future versions will add garbage collection and various other capabilities that Blazor will be able to use directly. At the moment mono.wasm includes its own garbage collector.
The Blazor project itself has a number of open issues describing various optimisations which are being actively worked on. It already performs tree-shaking and various other optimisations, but this type of work takes time.
Currently (2021), a hello-world Blazor WASM application (Visual Studio project template) downloads over 17 MB of data. With gzip, this is reduced to 7 MB, which is still huge considering that no application code/logic is included yet!
But I found out that the linker does not seem to be active during debugging. If we publish the application in release mode (the -c Release switch), only the necessary files are loaded. This reduces the transfer size to 5.6 MB, or even 2.4 MB with gzip enabled. You can also see this in the size of the published folder:
$ dotnet publish --output publish_debug -c Debug
$ dotnet publish --output publish_release -c Release
$ du -hs publish_debug/
30M publish_debug/
$ du -hs publish_release/
11M publish_release/
It's still a noticeable amount of data. However, this information may help others who find this question after seeing the much larger 17/7 MB figures in debug mode.
Since the question is from 2018, it may also be interesting to mention that framework caching was improved in 3.2.0-preview2. This means the runtime and framework are stored in the browser cache after being fetched from the server the first time. Since this is handled by JavaScript, no further requests are made for those files once they are cached! The server might otherwise respond with 304 Not Modified, but even that is overhead which we no longer have.
This also means that those files only appear in the network tab on the first page load! If you want to measure the loading time without the cache, you have to delete the cache for that domain manually! Checking the 'Disable cache' checkbox in the browser's dev tools is not enough, since it seems that Blazor uses local storage via JS.