nerdctl compose does not support expose - nerdctl

docker-compose.yaml:
mysql_db:
  build:
    context: .
    dockerfile: DockerfileMySQL
  container_name: mysql_db_${TAG:-latest}
  image: mysql_db:${TAG:-latest}
  environment:
    - MYSQL_ROOT_PASSWORD=test1
  expose:
    - "3306"
...
But when I run nerdctl compose up, it logs WARN[0000] Ignoring: service mysql_db: [Expose], the container does not expose the port, so other containers within the same network cannot access mysql_db.
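For reference, a minimal sketch of the setup described, with a hypothetical helper service (named app here purely for illustration) on the same compose network that tries to reach mysql_db on port 3306:

services:
  mysql_db:
    image: mysql_db:${TAG:-latest}
    environment:
      - MYSQL_ROOT_PASSWORD=test1
    expose:
      - "3306"
  app:
    # hypothetical client, used only to test connectivity to mysql_db:3306
    image: alpine:3.18
    depends_on:
      - mysql_db
    command: ["sh", "-c", "until nc -z mysql_db 3306; do sleep 1; done; echo mysql_db reachable"]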

Related

grpc_health_probe timeout with dotnet GRPC API

I'm running a dotnet gRPC API (recipients-api) and I want to use grpcui to test it. Both services are declared in a docker-compose file. I'm declaring grpcui as a dependent service of recipients-api, and since I need recipients-api to be fully available before grpcui can run, I'm also using a health probe to ensure it's alive. The problem is that the recipients-api entrypoint does not seem to run; it's as if the process were stuck waiting for the health check to be successful before hitting the container entrypoint, so it always throws a timeout. Any clues?
Here is my docker-compose file:
version: '3.4'
services:
  recipientservice-api:
    image: ${DOCKER_REGISTRY-}dataintegrationrecipientserviceapi
    container_name: recipientservice-api
    build:
      context: ..
      dockerfile: ../src/DataIntegration.RecipientService.Api/Dockerfile
    environment:
      ASPNETCORE_ENVIRONMENT: "Development"
      ASPNETCORE_URLS: "http://+:5000"
      MongoDb__ConnectionString: mongodb://wfuser:MyPassw0rd_@mongodb:27017/RecipientService?tls=false
      MongoDb__"Database": "RecipientService"
    expose:
      - "5000"
    ports:
      - "5000:5000"
    depends_on:
      - mongodb
    networks:
      - recipients-network
    healthcheck:
      test: ["CMD", "bin/grpc_health_probe", "-connect-timeout 10s", "-rpc-timeout 4s", "-addr=localhost:5000"]
      interval: 2s
      retries: 5
      start_period: 15s
      timeout: 10s
  grpcui:
    image: fullstorydev/grpcui
    container_name: grpcui
    depends_on:
      recipientservice-api:
        condition: service_healthy
    command:
      - -plaintext
      - -vvv
      - recipientservice-api:5000
    networks:
      - recipients-network
    ports:
      - "8080:8080"
  mongodb:
    image: mongo:5.0
    container_name: "mongodb"
    hostname: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: MyPassw0rd_123
      MONGO_INITDB_DATABASE: RecipientService
    volumes:
      - mongo-recipients:/var/opt/mongodb
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    ports:
      - "27017:27017"
    expose:
      - 27017
    networks:
      - recipients-network
networks:
  recipients-network:
    name: recipients-network
    driver: bridge
volumes:
  mongo-recipients:
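One detail worth checking in the healthcheck above: with an exec-form test array, each element is passed to the probe as a single argument, so a string like "-connect-timeout 10s" bundles the flag and its value together and is unlikely to parse as intended. A sketch of the same healthcheck using flag=value syntax, keeping the original values and binary path (an assumption, not a confirmed fix for the timeout):

    healthcheck:
      test: ["CMD", "bin/grpc_health_probe", "-connect-timeout=10s", "-rpc-timeout=4s", "-addr=localhost:5000"]
      interval: 2s
      retries: 5
      start_period: 15s
      timeout: 10s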

Unable to invoke another service with Dapr

I'm having major problems getting Dapr up and running with my microservices. Every time I try to invoke another service, it returns a 500 error with the message
client error: the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection
The services and dapr sidecars are currently running in docker-compose on our dev machines but will run in Kubernetes when it is deployed properly.
When I look at the logs for the dapr containers in Docker for Windows, I can see the application being discovered on port 443 and a few initialisation messages but nothing else ever gets logged after that, even when I make my invoke request.
I have a container called clients, in which I'm calling an API called test, and this in turn tries to call Microsoft's example weather forecast API in another container called simpleapi.
I'm using Swagger UI to call the APIs. The test API returns 200, but when I put a breakpoint on the invoke, I can see the response is 500.
If I call the weatherforecast API directly using Swagger UI, it returns a 200 with the expected payload.
I have the Dapr dashboard running in a container and it doesn't show any applications.
Docker-Compose.yml
version: '3.4'
services:
  clients:
    image: ${DOCKER_REGISTRY-}clients
    container_name: "Clients"
    build:
      context: .
      dockerfile: Clients/Dockerfile
    ports:
      - "50002:50002"
    depends_on:
      - placement
      - database
    networks:
      - platform
  clients-dapr:
    image: "daprio/daprd:edge"
    container_name: clients-dapr
    command: [
      "./daprd",
      "-app-id", "clients",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002"
    ]
    depends_on:
      - clients
    network_mode: "service:clients"
  simpleapi:
    image: ${DOCKER_REGISTRY-}simpleapi
    build:
      context: .
      dockerfile: SimpleAPI/Dockerfile
    ports:
      - "50003:50003"
    depends_on:
      - placement
    networks:
      - platform
  simpleapi-dapr:
    image: "daprio/daprd:edge"
    container_name: simpleapi-dapr
    command: [
      "./daprd",
      "-app-id", "simpleapi",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50003"
    ]
    depends_on:
      - simpleapi
    network_mode: "service:simpleapi"
  placement:
    image: "daprio/dapr"
    container_name: placement
    command: ["./placement", "-port", "50006"]
    ports:
      - "50006:50006"
    networks:
      - platform
  dashboard:
    image: "daprio/dashboard"
    container_name: dashboard
    ports:
      - "8080:8080"
    networks:
      - platform
networks:
  platform:
Test controller from the Clients API.
[Route("api/[controller]")]
[ApiController]
public class TestController : ControllerBase
{
    [HttpGet]
    public async Task<ActionResult> Get()
    {
        var httpClient = DaprClient.CreateInvokeHttpClient();
        var response = await httpClient.GetAsync("https://simpleapi/weatherforecast");
        return Ok();
    }
}
This is a major new project for my company and it's looking like we're going to have to abandon Dapr and implement everything ourselves if we can't get this working soon.
I'm hoping there's some glaringly obvious problem here.
It actually turned out to be quite simple.
I needed to tell Dapr to use SSL.
clients-dapr needed the -app-ssl parameter, so clients-dapr should have been as follows (simpleapi-dapr needs the same parameter added too; see the sketch after this block):
clients-dapr:
  image: "daprio/daprd:edge"
  container_name: clients-dapr
  command: [
    "./daprd",
    "-app-id", "clients",
    "-app-port", "443",
    "-app-ssl",
    "-placement-host-address", "placement:50006",
    "-dapr-grpc-port", "50002"
  ]
  depends_on:
    - clients
  network_mode: "service:clients"
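For completeness, a sketch of simpleapi-dapr with the same flag added, based on its original definition above:

simpleapi-dapr:
  image: "daprio/daprd:edge"
  container_name: simpleapi-dapr
  command: [
    "./daprd",
    "-app-id", "simpleapi",
    "-app-port", "443",
    "-app-ssl", # same fix as clients-dapr
    "-placement-host-address", "placement:50006",
    "-dapr-grpc-port", "50003"
  ]
  depends_on:
    - simpleapi
  network_mode: "service:simpleapi"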
You can also run your service without Docker on its specific port and check that Dapr works as expected; you can specify the HTTP port and gRPC port:
dapr run `
--app-id serviceName `
--app-port 5139 `
--dapr-http-port 3500 `
--dapr-grpc-port 50001 `
--components-path ./dapr-components
If that setup works, you can then set it up with Docker, as in the solution above.

Multiple domains with Traefik

I am new to Traefik but trying to migrate from jwilder/nginx-proxy and letsencrypt-companion to Traefik.
I have set up Traefik with this config file:
traefik.yml
entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: :443
api:
  dashboard: true
  insecure: true
certificatesResolvers:
  le:
    acme:
      email: username@gmail.com
      storage: acme.json
      httpChallenge:
        # used during the challenge
        entryPoint: web
providers:
  docker:
    endpoint: unix:///var/run/docker.sock
    exposedByDefault: false
docker-compose.yml
version: '3'
services:
  traefik:
    image: traefik:v2.2
    restart: always
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/disk1/traefik/traefik.yml:/traefik.yml
      - /data/disk1/traefik/acme.json:/acme.json
    container_name: traefik
When starting one container on domain #1
docker-compose.yml
version: "3"
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:7.6.2
volumes:
- /data/disk1/atlassian/application-data/confluence:/var/atlassian/application-data/confluence
ports:
- "8090:8090"
external_links:
- postgres:postgres
environment:
- CATALINA_CONNECTOR_PROXYNAME=confluence.tld
- CATALINA_CONNECTOR_PROXYPORT=443
- CATALINA_CONNECTOR_SCHEME=https
- CATALINA_CONNECTOR_SECURE=true
- VIRTUAL_HOST=confluence.tld
- VIRTUAL_NETWORK=web
- VIRTUAL_PORT=8090
- LETSENCRYPT_EMAIL=user#tld
- LETSENCRYPT_HOST=confluence.tld
labels:
- traefik.enable=true
- traefik.http.routers.confluence.rule=Host(`confluence.tld`)
- traefik.http.routers.confluence.tls=true
- traefik.http.routers.confluence.tls.certresolver=le
- traefik.http.routers.confluence.service=confluence
- traefik.http.services.confluence.loadbalancer.server.port=8090
networks:
- web
restart: always
networks:
web:
external:
name: web
It works perfectly.
NOTE: I have kept the environment variables for jwilder/nginx-proxy for the time being.
When launching another container with a different TLD, I can't get it working.
E.g.
docker-compose.yml
version: "3"
services:
confluence:
container_name: myapp
image: nginx:latest
volumes:
- /data/disk1/myapp/www/:/usr/share/nginx/html:ro
- /data/disk1/myapp/conf/nginx.conf:/etc/nginx/nginx.conf:ro
ports:
- "9999:80"
environment:
- VIRTUAL_HOST=www.tld2,tld2
- VIRTUAL_NETWORK=web
- VIRTUAL_PORT=9999
- LETSENCRYPT_EMAIL=user#tld2
- LETSENCRYPT_HOST=www.tld2,tld2
labels:
- traefik.enable=true
- traefik.http.routers.myapp.rule=Host(`tld2`) || Host(`www.tld2`)
- traefik.http.routers.myapp.tls=true
- traefik.http.routers.myapp.tls.certresolver=le
- traefik.http.routers.myapp.service=tld2
- traefik.http.services.myapp.loadbalancer.server.port=9999
networks:
- web
restart: always
networks:
web:
external:
name: web
It doesn't work but everything looks OK in Traefik dashboard.
Any ideas?
There is an error in the second docker-compose.yml:
You define the router named myapp to use a service named tld2:
traefik.http.routers.myapp.service=tld2
but your service is named myapp:
traefik.http.services.myapp.loadbalancer.server.port=9999
This should have generated an error in Traefik's log regarding an unresolvable service.
To fix this, configure your router myapp to use the service myapp:
traefik.http.routers.myapp.service=myapp
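Putting it together, a sketch of the corrected labels block for the second docker-compose.yml (only the router's service reference changes; everything else is kept from the original):

    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=Host(`tld2`) || Host(`www.tld2`)
      - traefik.http.routers.myapp.tls=true
      - traefik.http.routers.myapp.tls.certresolver=le
      - traefik.http.routers.myapp.service=myapp
      - traefik.http.services.myapp.loadbalancer.server.port=9999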

Traefik not set backend/frontend in swarm mode

I would like to use Traefik in a swarm cluster. Following this guide https://docs.traefik.io/user-guide/swarm-mode/#deploy-traefik, I've written this stack file:
traefik:
  image: traefik:alpine
  deploy:
    placement:
      constraints:
        - node.role == manager
  command: --api --docker --docker.watch --docker.swarmMode
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "80:80"
    - "8080:8080"
  labels:
    - "traefik.enable=false"
backend:
  image: registry.example.com/backend
  labels:
    - "traefik.backend=backend"
    - "traefik.backend.buffering.maxRequestBodyBytes=2147483648"
    - "traefik.backend.loadbalancer.sticky=true"
    - "traefik.frontend.rule=Host:backend.localhost"
    - "traefik.frontend.passHostHeader=true"
    - "traefik.port=80"
api:
  image: registry.example.com/api
  labels:
    - "traefik.backend=api"
    - "traefik.backend.buffering.maxRequestBodyBytes=2147483648"
    - "traefik.backend.loadbalancer.sticky=true"
    - "traefik.frontend.rule=Host:api.localhost"
    - "traefik.frontend.passHostHeader=true"
    - "traefik.port=80"
Traefik starts but nothing is configured, and I cannot understand where the error is.
You forgot the network part from the example.
You are missing both the network-related labels and the networks themselves:
deploy:
  labels:
    - "traefik.docker.network=traefik-network" # for both api and backend
...
networks:
  - "traefik-network" # for traefik, api and backend
...
networks:
  traefik-network: {} # you can also make it external
EDIT:
Also, on swarm, the labels should be set under the "deploy" section of your service, not on the service itself; see the sketch below.
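A sketch of the backend service from the stack file above with both changes applied (labels moved under deploy, the traefik.docker.network label added, and the service attached to traefik-network; everything else is assumed unchanged):

backend:
  image: registry.example.com/backend
  networks:
    - "traefik-network"
  deploy:
    labels:
      # in swarm mode Traefik reads labels from the deploy section
      - "traefik.docker.network=traefik-network"
      - "traefik.backend=backend"
      - "traefik.backend.buffering.maxRequestBodyBytes=2147483648"
      - "traefik.backend.loadbalancer.sticky=true"
      - "traefik.frontend.rule=Host:backend.localhost"
      - "traefik.frontend.passHostHeader=true"
      - "traefik.port=80"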

docker swarm custom containers with traefik

I'm trying to set up a Docker swarm using Traefik on DigitalOcean. I followed this tutorial and got it to work entirely until I added one of my custom-made containers. I am trying to add just one first (there are 14 in total), and they are all very similar: each is an Express app that serves as a RESTful API handling one resource per service. However, when trying to access that specific subdomain, I get a connection refused error.
Here's my docker-stack.yml file:
version: '3.6'
services:
  traefik:
    image: traefik:latest
    networks:
      - mynet
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
    ports:
      - "80:80"
      - "8080:8080"
    command: --api
  main:
    image: nginx
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=main"
        - "traefik.frontend.rule=Host:domain.com"
  two:
    image: jwilder/whoami
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=8000"
        - "traefik.backend=two"
        - "traefik.frontend.rule=Host:two.domain.com"
  three:
    image: emilevauge/whoami
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=three"
        - "traefik.frontend.rule=Host:three.domain.com"
  user-service:
    image: hollarves/users:latest
    env_file:
      - .env.user
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=users"
        - "traefik.frontend.rule=Host:users.domain.com"
networks:
  mynet:
    driver: overlay
As I said, going to two.domain.com and three.domain.com works fine, and the whoami containers respond with their info. However, I get a connection refused error when trying users.domain.com.
Note: domain.com is an actual domain I am using that is live, pointing to a DigitalOcean cluster; I'm just hiding it for privacy reasons.
The entrypoint for this users-service is:
if (process.env.NODE_ENV !== "production") {
  require("dotenv").load()
}

const express = require("express"),
  bodyParser = require("body-parser"),
  logger = require("morgan"),
  //helmet = require("helmet"),
  cors = require("cors"),
  // falls back to 80 unless the PORT env var is set
  PORT = parseInt(process.env.PORT, 10) || 80

const server = express(),
  routes = require("./server/routes")

//server.use(helmet())
server.use(cors())
server.use(logger("dev"))
server.use(bodyParser.json())
server.use("/", routes)

/*eslint no-console: ["error", { allow: ["log"] }] */
const serverObj = server.listen(PORT, () => { console.log("Server listening in PORT ", PORT) })

module.exports = serverObj
I can also confirm that this service is listening on PORT 80 as that's what it outputs when fetching logs from it using docker service logs test-stack_user-service:
test-stack_user-service.1.35p3lxzovphr@node-2 | > users-mueve@0.0.1 start /usr/src/app
test-stack_user-service.1.35p3lxzovphr@node-2 | > node server.js
test-stack_user-service.1.35p3lxzovphr@node-2 |
test-stack_user-service.1.35p3lxzovphr@node-2 | Server listening in PORT 80
Here is my traefik.toml config file just in case:
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
    address = ":80"

[retry]

[docker]
  endpoint = "unix:///var/run/docker.sock"
  exposedByDefault = true
  watch = true
  swarmmode = true
I can also see the containers in the traefik dashboard like I used to in my local environment.
I feel like I'm missing a very small detail that is preventing my service from working correctly. Any pointers will be extremely appreciated.
Thanks!