Docker Compose Node/Express container doesn't show error logs and restarts continuously - express

I am trying to get nginx and Node/Express working in containers using Docker Compose.
The problem is that the express container dies without showing any error log explaining why.
This is my docker-compose.yml:
version: '3'
services:
  proxy:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf
    restart: "unless-stopped"
  express:
    build:
      context: ./server
    container_name: express
    expose:
      - "3000"
    volumes:
      - ./source:/source
      - /source/node_modules
    restart: "unless-stopped"
This is my directory structure.
Into the source directory I moved all the files and directories that express-generator produced.
This is my Dockerfile:
FROM node:12
COPY package*.json /source/
WORKDIR /source
RUN npm install
CMD [ "node", "app.js" ]
This is my package.json
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "start": "node app.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "sjk5766",
  "license": "ISC",
  "dependencies": {
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "express": "~4.16.1",
    "http-errors": "~1.6.3",
    "jade": "~1.11.0",
    "morgan": "~1.9.1"
  }
}
When I ran docker ps after docker-compose up -d, the result was like below.
When I ran docker logs express, there was nothing to see.
I really want to know what the problem is.
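As a general debugging step (not part of the original question): even when docker logs shows nothing, the container's exit code usually tells you how the process died. A sketch, using the container and service names from the compose file above:

```shell
# Show how the container exited (0 = the process simply returned, no crash).
docker inspect --format '{{.State.ExitCode}}' express

# Run the same image in the foreground to see any crash output directly.
docker-compose run --rm express node app.js
```

An exit code of 0 with empty logs typically means the process ran to completion and returned, rather than crashing with an error.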

Assuming your Express application runs fine without Docker, you can change your Dockerfile as below:
FROM node:12
WORKDIR /source
COPY . .
RUN npm install
EXPOSE 3000
CMD [ "node", "app.js" ]
The COPY command will copy your local directory's code into the /source directory inside the image.
Try a docker-compose file like the one below:
version: '3'
services:
  proxy:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - test_bridge
    restart: "unless-stopped"
  express:
    build: ./server
    container_name: express
    ports:
      - "3000:3000"
    networks:
      - test_bridge
networks:
  test_bridge:
    driver: bridge
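After rebuilding with the changes above, a quick way to confirm the service stays up (a sketch; service and container names as defined in the compose file):

```shell
docker-compose up -d --build    # rebuild the image and recreate the containers
docker-compose ps               # the express service should stay "Up"
docker-compose logs -f express  # follow its logs; any crash output appears here
```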

Related

Connection error: docker compose: Loopback4 + Mongo

I run the project with:
clean volumes & containers, then docker compose up --build
I have tried creating different users for Mongo and different settings for the Dockerfile and for docker compose.
I assume the error is somewhere between the Loopback and MongoDB containers.
For watching changes I use tsc-watch; I don't know whether it causes any bugs here.
Configuration
package.json
{
  "name": "lympha-backend",
  "version": "0.0.1",
  "description": "lympha backend",
  "keywords": [
    "loopback-application",
    "loopback"
  ],
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "engines": {
    "node": "14 || 16 || 18 || 19"
  },
  "scripts": {
    "dev": "tsc-watch --target es2017 --outDir ./dist --onSuccess \"node .\"",
    "build": "lb-tsc",
    "build:watch": "lb-tsc --watch",
    "watch": "lb-tsc --watch",
    "lint": "yarn run eslint && yarn run prettier:check",
    "lint:fix": "yarn run eslint:fix && yarn run prettier:fix",
    "prettier:cli": "lb-prettier \"**/*.ts\" \"**/*.js\"",
    "prettier:check": "yarn run prettier:cli -l",
    "prettier:fix": "yarn run prettier:cli --write",
    "eslint": "lb-eslint --report-unused-disable-directives .",
    "eslint:fix": "yarn run eslint --fix",
    "pretest": "yarn run rebuild",
    "test": "lb-mocha --allow-console-logs \"dist/__tests__\"",
    "posttest": "yarn run lint",
    "test:dev": "lb-mocha --allow-console-logs dist/__tests__/**/*.js && yarn run posttest",
    "docker:build": "docker build -t lympha-backend .",
    "docker:run": "docker run -p 3000:3000 -d lympha-backend",
    "premigrate": "yarn run build",
    "migrate": "node ./dist/migrate",
    "preopenapi-spec": "yarn run build",
    "openapi-spec": "node ./dist/openapi-spec",
    "prestart": "yarn run rebuild",
    "start": "node -r source-map-support/register .",
    "clean": "lb-clean dist *.tsbuildinfo .eslintcache",
    "rebuild": "yarn run clean && yarn run build"
  },
  "repository": {
    "type": "git",
    "url": ""
  },
  "license": "",
  "files": [
    "README.md",
    "dist",
    "src",
    "!*/__tests__"
  ],
  "dependencies": {
    "@loopback/boot": "^5.0.7",
    "@loopback/core": "^4.0.7",
    "@loopback/repository": "^5.1.2",
    "@loopback/rest": "^12.0.7",
    "@loopback/rest-crud": "^0.15.6",
    "@loopback/rest-explorer": "^5.0.7",
    "@loopback/service-proxy": "^5.0.7",
    "loopback-connector-mongodb": "^5.2.3",
    "tsc-watch": "^6.0.0",
    "tslib": "^2.0.0"
  },
  "devDependencies": {
    "@loopback/build": "^9.0.7",
    "@loopback/eslint-config": "^13.0.7",
    "@loopback/testlab": "^5.0.7",
    "@types/node": "^14.18.36",
    "eslint": "^8.30.0",
    "source-map-support": "^0.5.21",
    "typescript": "~4.9.4"
  }
}
Docker compose
version: '3.9'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
      - MONGO_INITDB_DATABASE=admin
    restart: always
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - mongodb:/data/db
  backend:
    container_name: backend
    build:
      context: .
      dockerfile: ./lympha-backend/Dockerfile
    command: ["yarn", "dev"]
    ports:
      - 4000:3000
    environment:
      NAME: TEST_DEVELOPMENT
      PORT: 3000
      DB_NAME: lympha_db
      DB_USER: root
      DB_PASS: password
    restart: always
    volumes:
      - ./lympha-backend:/home/node/app
    depends_on:
      - mongodb
    links:
      - mongodb
volumes:
  backend:
  mongodb:
Dockerfile
# Check out https://hub.docker.com/_/node to select a new base image
FROM node:16-slim
# Set to a non-root built-in user `node`
USER node
# Create app directory (with user `node`)
RUN mkdir -p /home/node/app
RUN mkdir -p /home/node/app/dist
WORKDIR /home/node/app
RUN pwd
COPY --chown=node package*.json ./
# RUN npm install
RUN yarn
# Bundle app source code
COPY --chown=node . .
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
mongo-init.js
db.createUser({
  user: 'admin',
  pwd: 'password',
  roles: [
    { role: 'root', db: 'admin' },
  ]
});
db = db.getSiblingDB('lympha_db');
// MongoDB creates the database when you first store data in it
db.createCollection("lympha_db");
db.createUser({
  user: "lympha",
  pwd: "lympha",
  roles: [
    {
      role: "readWrite",
      db: "lympha_db"
    }
  ]
});
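To confirm the init script actually ran and created the users, they can be listed from the host (a sketch; it assumes the container name mongodb and the root credentials from the compose file above, and that the image ships mongosh):

```shell
docker exec mongodb mongosh -u root -p password --authenticationDatabase admin \
  --eval 'db.getSiblingDB("lympha_db").getUsers()'
```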
Mongo terminal
Look at the picture
Logs
Mongodb logs
Look at the picture
Backend logs
{
  name: 'mongodb',
  connector: 'mongodb',
  host: 'localhost',
  port: 27017,
  user: 'admin',
  password: 'password',
  database: 'admin'
}
Server is running at http://127.0.0.1:3000
Try http://127.0.0.1:3000/ping
Connection fails: MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
It will be retried for the next request.
/home/node/app/node_modules/mongodb/lib/utils.js:698
throw error;
^
MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
Emitted 'error' event on MongoDataSource instance at:
at MongoDataSource.postInit (/home/node/app/node_modules/loopback-datasource-juggler/lib/datasource.js:502:16)
at onError (/home/node/app/node_modules/loopback-connector-mongodb/lib/mongodb.js:325:21)
at /home/node/app/node_modules/loopback-connector-mongodb/lib/mongodb.js:333:9
at /home/node/app/node_modules/mongodb/lib/utils.js:695:9
at /home/node/app/node_modules/mongodb/lib/mongo_client.js:285:23
at connectCallback (/home/node/app/node_modules/mongodb/lib/operations/connect.js:367:5)
at /home/node/app/node_modules/mongodb/lib/operations/connect.js:554:14
at connectHandler (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:286:11)
at Object.callback (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:672:9)
at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:443:25)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
reason: TopologyDescription {
type: 'Single',
setName: null,
maxSetVersion: null,
maxElectionId: null,
servers: Map(1) {
'localhost:27017' => ServerDescription {
address: 'localhost:27017',
error: Error: connect ECONNREFUSED 127.0.0.1:27017
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
name: 'MongoNetworkError'
},
roundTripTime: -1,
lastUpdateTime: 7989795,
lastWriteDate: null,
opTime: null,
type: 'Unknown',
topologyVersion: undefined,
minWireVersion: 0,
maxWireVersion: 0,
hosts: [],
passives: [],
arbiters: [],
tags: []
}
},
stale: false,
compatible: true,
compatibilityError: null,
logicalSessionTimeoutMinutes: null,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
commonWireVersion: null
}
}
I can't connect LB4 and Mongo.
I can start from scratch to figure out what is going on. Ask for anything you need and I will provide it. Ping me, please.
When you start services with Compose, it sets up a single network for them (https://docs.docker.com/compose/networking/).
In short, each container can reach another by its service name. In your case that is mongodb, not localhost.
try to change:
- host: 'localhost',
+ host: 'mongodb',

GitLab CI: Unable to test socketIO endpoints when FF_NETWORK_PER_BUILD is 1

I use GitLab CI to perform E2E tests on Socket.IO endpoints, and it used to work correctly until I set FF_NETWORK_PER_BUILD to 1 in the .gitlab-ci.yml file.
No specific error is thrown. What is the problem?
This is how I connect to socket server in Jest test:
const address = app.listen().address(); // returns { address: '::', family: 'IPv6', port: 42073 }
const baseAddress = `http://${address.host}:${address.port}`;
socket = io(baseAddress, {
  transports: ['websocket'],
  auth: { token: response.body.token },
  forceNew: true,
});
.gitlab-ci.yml
image: node:16
stages:
  - dependencies
  - e2e
  - build
cache:
  paths:
    - node_modules
dependency_job:
  stage: dependencies
  script:
    - npm ci
test_e2e:
  stage: test
  variables:
    NODE_ENV: test
    PORT: 3000
    THROTTLE_TTL: 60000
    THROTTLE_LIMIT: 1
    SESSION_SECRET: somesecret
    DOMAIN: console.okhtapos.com
    POSTGRES_PASSWORD: password
    DATABASE_URL: "postgresql://postgres:password@postgres:5432/postgres?schema=public"
    REDIS_HOST: redis
    REDIS_PORT: 6379
    REDIS_VOLATILE_HOST: redis
    REDIS_VOLATILE_PORT: 6379
    SESSIONS_REDIS_PREFIX: "sess:"
    SESSIONS_SECRET: somesecret
    SESSIONS_NAME: omni-session
    GRPC_URL: localhost:5004
    OCTO_CENTRAL_GRPC_URL: localhost:5005
    CUSTOMERS_ATTRIBUTES_DATABASE_NAME: omnichannel-customer-attributes
    CUSTOMERS_ATTRIBUTES_MAX: 100
    CUSTOMERS_ATTRIBUTES_MAX_TOTAL_SIZE_IN_BYTES: 5000
    MONGODB_URL: mongodb://mongo:27017
    CUSTOMERS_ATTRIBUTES_MAX_QUARANTINE_DAYS: 7
    KAFKA_BROKERS: kafka:9092
    KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092,INTERNAL://localhost:9093"
    KAFKA_BROKER_ID: "1"
    KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
    KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:9092,INTERNAL://0.0.0.0:9093"
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    LABELS_MAX_QUARANTINE_DAYS: 7
    LABELS_MAX: 100
    DEPARTMENTS_MAX: 100
    AUTHENTICATION_GATEWAY_SECRET: someGatewaySecret
    CASSANDRA_CONTACT_POINTS: cassandra:9042
    CASSANDRA_LOCAL_DATA_CENTER: datacenter1
    CASSANDRA_KEYSPACE: omnichannel
    FF_NETWORK_PER_BUILD: 1
    ZOOKEEPER_CONNECT: zookeeper:2181
    BROKER_ID: 1
  services:
    - redis:latest
    - postgres:15beta3
    - mongo:latest
    - cassandra:latest
    - name: debezium/zookeeper:2.0.0.Beta1
      alias: zookeeper
    - name: debezium/kafka:2.0.0.Beta1
      alias: kafka
  script:
    - npx prisma generate
    - npx prisma migrate deploy
    - npm run test:e2e
build_job:
  stage: build
  script:
    - npm run build

Nuxt 3 + Vite & HMR: infinite reload & failed

On a fresh install of Nuxt 3 using Docker, I get this error in the console and an infinite reload of the page:
client.ts:28 WebSocket connection to 'wss://shop.store.local/_nuxt/' failed
client.ts:224 [vite] server connection lost. polling for restart...
Here is the configuration of my vite server (via nuxt.config.js):
vite: {
  server: {
    hmr: {
      host: 'shop.store.local',
      port: 443,
    }
  }
}
The docker-compose file describes the Traefik labels:
vuejs:
  labels:
    - "traefik.http.routers.front_store.rule=Host(`shop.store.local`)"
    - "traefik.http.routers.front_store.tls=true"
    - "traefik.http.services.front_store.loadbalancer.server.port=3000"
What I've tried too, in my package.json file:
"scripts": {
  "dev": "nuxi dev --host=0.0.0.0",
  "build": "nuxi build",
  "start": "node .output/server/index.mjs"
},
Any idea? I looked around the internet; other people have this problem, but I found no solution...
Expose ports for the Nuxt container:
ports:
  - 3000:3000
  - 24678:24678
Also edit your nuxt.config:
vite: {
  server: {
    host: "0.0.0.0",
    hmr: {
    },
  },
},
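Since the page is served over HTTPS on port 443 through Traefik, another variant worth trying (a sketch; protocol and clientPort are standard Vite HMR options, but these values are assumptions for this particular setup) is to tell the browser client explicitly how to dial back:

```javascript
// nuxt.config.js (sketch)
export default {
  vite: {
    server: {
      hmr: {
        protocol: 'wss',          // the page is https, so the HMR socket must be wss
        host: 'shop.store.local', // public hostname routed by Traefik
        clientPort: 443,          // port the browser connects to, not the server port
      },
    },
  },
};
```

This still requires Traefik to forward the /_nuxt/ WebSocket upgrade to the container's HMR port.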

Test runner node cannot see selenium-hub. Getting Error: ECONNREFUSED connect ECONNREFUSED 127.0.0.1:4444

In a nutshell, there is an additional container acting as a test runner which cannot reach the selenium-hub, so the tests fail.
This container was added to run the tests in the cloud using Cloud Build.
I created a docker-compose file as below:
version: "3"
services:
  selenium-hub:
    image: selenium/hub:4.0.0-rc-1-prerelease-20210804
    container_name: selenium-hub
    ports:
      - "4444:4444"
    expose:
      - 4444
  chrome:
    image: selenium/node-chrome:4.0.0-rc-1-prerelease-20210804
    shm_size: 2gb
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    ports:
      - "6900:5900"
  chrome_video:
    image: selenium/video:ffmpeg-4.3.1-20210804
    volumes:
      - /Users/videos:/videos
    depends_on:
      - chrome
    environment:
      - DISPLAY_CONTAINER_NAME=chrome
      - FILE_NAME=chrome_video.mp4
After the containers start successfully, when I run npm run test to run the Selenium JS tests, I get successful results and a video recording in the expected directory. But this should be automated as well: npm run test should be triggered somehow.
In our CI/CD process, a cloudbuild.yaml file was added for running on the cloud:
steps:
  - name: 'docker/compose:1.29.2'
    args: ['run', 'test']
  - name: 'docker/compose:1.29.2'
    args: ['stop']
timeout: 60s
Cloud Build should trigger the new container below, which was added to the docker-compose file as the test runner:
test:
  image: node:16-alpine
  entrypoint:
    - sh
    - -c
    - |-
      cd /test
      npm install
      sleep 3
      npm run test
  volumes:
    - .:/test
  depends_on:
    - selenium
  network_mode: host
However, with the test container the tests fail with the error below:
24 packages are looking for funding
run `npm fund` for details
2 moderate severity vulnerabilities
To address all issues, run:
npm audit fix
Run `npm audit` for details.
> js_mocha_selenium@1.0.0 test
> mocha test
Preliminary steps for End to End Tests
initalising the session...
1) Login
closing the session...
2) "after each" hook for "Login"
0 passing (108ms)
2 failing
1) Preliminary steps for End to End Tests
Login:
Error: ECONNREFUSED connect ECONNREFUSED 127.0.0.1:4444
at ClientRequest.<anonymous> (node_modules/selenium-webdriver/http/index.js:273:15)
at ClientRequest.emit (node:events:394:28)
at Socket.socketErrorListener (node:_http_client:447:9)
at Socket.emit (node:events:394:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
2) Preliminary steps for End to End Tests
"after each" hook for "Login":
Error: ECONNREFUSED connect ECONNREFUSED 127.0.0.1:4444
at ClientRequest.<anonymous> (node_modules/selenium-webdriver/http/index.js:273:15)
at ClientRequest.emit (node:events:394:28)
at Socket.socketErrorListener (node:_http_client:447:9)
at Socket.emit (node:events:394:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:2
Containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ca30366bc09 node:16-alpine "sh -c 'cd /test\nnpm…" About a minute ago Up About a minute e2e-tests_test_1
fdf43be1b4da selenium/video:ffmpeg-4.3.1-20210804 "/opt/bin/entry_poin…" 16 minutes ago Up About a minute 9000/tcp e2e-tests_chrome_video_1
92c023b15cb6 selenium/node-chrome:4.0.0-rc-1-prerelease-20210804 "/opt/bin/entry_poin…" 16 minutes ago Up About a minute 0.0.0.0:6900->5900/tcp, :::6900->5900/tcp e2e-tests_chrome_1
86002f3d1eb9 selenium/hub:4.0.0-rc-1-prerelease-20210804 "/opt/bin/entry_poin…" 16 minutes ago Up About a minute 4442-4443/tcp, 0.0.0.0:4444->4444/tcp, :::4444->4444/tcp selenium-hub
I can ping selenium-hub from the e2e-tests_test_1 container, but I cannot do the reverse (ping e2e-tests_test_1 from selenium-hub).
About current network:
>> % docker network inspect -v host
[
  {
    "Name": "host",
    "Id": "36e4060f18be618399692294d10cf6be3478c1bf5190ea035b002ca87c18276b",
    "Created": "2021-06-30T10:36:33.170635189Z",
    "Scope": "local",
    "Driver": "host",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": []
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {},
    "Options": {},
    "Labels": {}
  }
]
It seems the test node cannot reach 127.0.0.1:4444.
What should I do to solve this issue?
It would be good to hear alternative solutions as well.
Thanks in advance.
You need to wait for the Grid to be ready before running tests. I documented a few approaches for this on the project's README, please check https://github.com/seleniumhq/docker-selenium/#waiting-for-the-grid-to-be-ready
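One documented approach is polling the Grid's status endpoint before starting the tests; a sketch (it assumes curl and jq are available in the test container and that the hub is reachable as selenium-hub):

```shell
# Block until the Grid reports itself ready, then run the tests.
until curl -sSL http://selenium-hub:4444/wd/hub/status \
    | jq -e '.value.ready == true' > /dev/null; do
  echo 'Waiting for the Grid...'
  sleep 1
done
npm run test
```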

Selenium isn't able to reach a docker container with docker-compose run

I have the following docker-compose.yml, which starts a chrome-standalone container and a Node.js application:
version: '3.7'
networks:
  selenium:
services:
  selenium:
    image: selenium/standalone-chrome-debug:3
    networks:
      - selenium
    ports:
      - '4444:4444'
      - '5900:5900'
    volumes:
      - /dev/shm:/dev/shm
    user: '7777:7777'
  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    networks:
      - selenium
    env_file:
      - .env
    ports:
      - '8090:8090'
    volumes:
      - .:/home/node
    depends_on:
      - selenium
    command: >
      sh -c 'yarn install &&
      yarn dev'
I'm running the containers as follows:
docker-compose up -d selenium
docker-compose run --service-ports node sh
and starting the e2e from within the shell.
When running the e2e tests, selenium can be reached from the node container (through http://selenium:4444), but node isn't reachable from the selenium container.
I tested this by VNC'ing into the selenium container and pointing the browser to http://node:8090. (The node container is reachable on the host, however, through http://localhost:8090.)
I first thought that docker-compose run doesn't add the running container to the proper network; however, running docker network inspect test_app gives the following:
[
  {
    "Name": "test_app_selenium",
    "Id": "df6517cc7b6446d1712b30ee7482c83bb7c3a9d26caf1104921abd6bbe2caf68",
    "Created": "2019-06-30T16:08:50.724889157+02:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.31.0.0/16",
          "Gateway": "172.31.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": true,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "8a76298b237790c62f80ef612debb021549439286ce33e3e89d4ee2f84de3aec": {
        "Name": "test_app_node_run_78427bac2fd1",
        "EndpointID": "04310bc4e564f831e5d08a0e07891d323a5953fa936e099d20e5e384a6053da8",
        "MacAddress": "02:42:ac:1f:00:03",
        "IPv4Address": "172.31.0.3/16",
        "IPv6Address": ""
      },
      "ef087732aacf0d293a2cf956855a163a081fc3748ffdaa01c240bde452eee0fa": {
        "Name": "test_app_selenium_1",
        "EndpointID": "24a597e30a3b0b671c8b19fd61b9254bea9e5fcbd18693383d93d3df789ed895",
        "MacAddress": "02:42:ac:1f:00:02",
        "IPv4Address": "172.31.0.2/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {
      "com.docker.compose.network": "selenium",
      "com.docker.compose.project": "test_app",
      "com.docker.compose.version": "1.24.1"
    }
  }
]
This shows both containers running on the "selenium" network. I'm not sure, however, whether the node container is properly aliased on the network and whether this is the expected behaviour.
Am I missing some config here?
It seems docker-compose run names the container differently to evade the service namespace defined in docker-compose.yml, so http://node:8090 was not reachable.
I solved this by adding a --name flag as follows:
docker-compose run --service-ports --name node node sh
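To confirm the alias is now registered on the network, it can be resolved from the selenium container (a debugging sketch using the service names above):

```shell
# With the run container named "node", its DNS entry should resolve
# from the selenium service.
docker-compose exec selenium getent hosts node
```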
EDIT:
It took me a while to notice, but I was overcomplicating the implementation by a lot. The docker-compose.yml above can be simplified with host networking, which exposes all running containers on localhost and makes them reachable there on their specified ports. Since I don't need any encapsulation (it's meant for dev), the following docker-compose.yml sufficed:
version: '3.7'
services:
  selenium:
    image: selenium/standalone-chrome:3
    # NOTE: port definition is useless with network_mode: host
    network_mode: host
    user: '7777:7777'
  node:
    image: node_temp:latest
    build:
      context: .
      target: development
      args:
        UID: '${USER_UID}'
        GID: '${USER_GID}'
    network_mode: host
    env_file:
      - .env
    volumes:
      - .:/home/node
    command: >
      sh -c 'yarn install &&
      yarn dev'