I am creating a data pipeline where I fetch data from BigQuery, either through the BigQuery operator or the Google Cloud client library, but I always get an error. The following is the DAG using the BigQuery operator:
from airflow import DAG
from datetime import datetime, timedelta
from airflow.operators.python_operator import PythonOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.contrib.operators.bigquery_check_operator import BigQueryCheckOperator
from read_val_send1 import read, validating_hit, track_google_analytics_event, add_gcp_connection

default_args = {
    "owner": "Airflow",
    "depends_on_past": False,
    "start_date": datetime(2021, 5, 9),
    "email": ["airflow@airflow.com"],
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 0,
    "retry_delay": timedelta(seconds=5),
}

dag = DAG("Automp", default_args=default_args, schedule_interval="@daily", catchup=False)

activateGCP = PythonOperator(
    task_id="add_gcp_connection_python",
    python_callable=add_gcp_connection,
    provide_context=True,
    dag=dag,
)

BQ_CONN_ID = "my_gcp_conn"
BQ_PROJECT = "pii-test"
BQ_DATASET = "some_Dataset"

t1 = BigQueryCheckOperator(
    task_id="bq_check",
    sql="""
    #standardSQL
    SELECT * FROM table""",
    use_legacy_sql=False,
    bigquery_conn_id=BQ_CONN_ID,
    dag=dag,
)

activateGCP >> t1
Error (from the attached screenshot):
Broken DAG: [/usr/local/airflow/dags/Automp.py] No module named 'httplib2'
I am also unable to install Python packages in Airflow via a requirements.txt file. The following is my compose file:
version: '2.1'
services:
  redis:
    image: 'redis:5.0.5'
    # command: redis-server --requirepass redispass
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
      # Uncomment these lines to persist data on the local filesystem.
      # - PGDATA=/var/lib/postgresql/data/pgdata
    # volumes:
    #   - ./pgdata:/var/lib/postgresql/data/pgdata
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
      - redis
    environment:
      - LOAD_EX=n
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      # - POSTGRES_USER=airflow
      # - POSTGRES_PASSWORD=airflow
      # - POSTGRES_DB=airflow
      # - REDIS_PASSWORD=redispass
    volumes:
      - ./dags:/usr/local/airflow/dags
      # Uncomment to include custom plugins
      # - ./plugins:/usr/local/airflow/plugins
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
  flower:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - redis
    environment:
      - EXECUTOR=Celery
      # - REDIS_PASSWORD=redispass
    ports:
      - "5555:5555"
    command: flower
  scheduler:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - webserver
    volumes:
      - ./dags:/usr/local/airflow/dags
      - ./requirements.txt:/requirements.txt
      # Uncomment to include custom plugins
      # - ./plugins:/usr/local/airflow/plugins
    environment:
      - LOAD_EX=n
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      # - POSTGRES_USER=airflow
      # - POSTGRES_PASSWORD=airflow
      # - POSTGRES_DB=airflow
      # - REDIS_PASSWORD=redispass
    command: scheduler
  worker:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - scheduler
    volumes:
      - ./dags:/usr/local/airflow/dags
      - ./requirements.txt:/requirements.txt
      # Uncomment to include custom plugins
      # - ./plugins:/usr/local/airflow/plugins
    environment:
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      # - POSTGRES_USER=airflow
      # - POSTGRES_PASSWORD=airflow
      # - POSTGRES_DB=airflow
      # - REDIS_PASSWORD=redispass
    command: worker
My folder structure looks like this (folder-structure screenshot omitted).
The image that you are using does not include the httplib2 package, which is probably needed by the imports coming from the read_val_send1 module.
Add the following line to your ./requirements.txt:
httplib2==0.19.1
Puckel's docker-airflow image ships an entrypoint.sh that runs pip install -r requirements.txt on startup, so this should be sufficient.
If something goes wrong, you can always inspect the container with docker logs or an interactive shell (docker exec -it <container> bash) to see what is failing.
I also recommend moving to the latest official docker-compose setup for Airflow for a smoother workflow.
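A quick way to confirm whether the package is actually visible to the interpreter that parses the DAGs is to probe for it from that same interpreter (a minimal sketch; run it with the container's Python, e.g. via docker exec):

```python
import importlib.util

def module_available(name: str) -> bool:
    # find_spec looks the module up on sys.path without importing it,
    # so it is safe even for packages with import-time side effects
    return importlib.util.find_spec(name) is not None

print(module_available("httplib2"))
```

If this prints False inside the scheduler container, the requirements.txt install did not run (or ran in a different environment).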
Related
I am trying to redirect from a specific port of a service to the plain domain name in Traefik.
This is my config in a YAML file (swarm mode). I am trying to automatically redirect from https://portainer.com:8443 to https://portainer.com.
I opened port 8443 for Traefik as well.
Redirecting a path works fine: https://portainer.com/example to https://portainer.com. How do I make that work with ports?
version: '3.8'
services:
  reverse-proxy:
    image: traefik:latest
    ports:
      - 80:80
      - 443:443
      - 8443:8443
    env_file:
      - ./.env
    deploy:
      placement:
        constraints: [node.role == manager]
      update_config:
        failure_action: rollback
      labels:
        # Enable traefik for the specific service
        - "traefik.enable=true"
        # global redirect to https
        - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
        - "traefik.http.routers.http-catchall.entrypoints=http"
        - "traefik.http.routers.http-catchall.middlewares=https-redirect"
        - "traefik.http.middlewares.https-redirect.redirectscheme.scheme=https"
        - "traefik.http.middlewares.https-redirect.redirectscheme.permanent=true"
        # Make Traefik use this domain in HTTPS
        - "traefik.http.routers.traefik-https.rule=Host(`traefik.com`)"
        # Allow connections to the traefik api for dashboard support
        - "traefik.http.routers.traefik-https.service=api@internal"
        - "traefik.http.services.traefik-svc.loadbalancer.server.port=9999"
        # Use the Let's Encrypt resolver
        - "traefik.http.routers.traefik-https.tls=true"
        - "traefik.http.routers.traefik-https.tls.certresolver=le"
        # Use the traefik_net network that is declared below
        - "traefik.docker.network=traefik_net"
        # Use basic auth for the traefik dashboard
        - "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_AUTH_USER_PASSWORD}"
        - "traefik.http.routers.traefik-https.middlewares=traefik-auth"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik-public-certificates:/certificates
    command:
      - --providers.docker
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.http.address=:80
      - --entrypoints.https.address=:443
      - --certificatesresolvers.le.acme.email=port@port.com
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=http
      - --accesslog
      - --log
      - --api
    networks:
      - traefik_net
  agent:
    image: portainer/agent:latest
    environment:
      # REQUIRED: Should be equal to the service name prefixed by "tasks." when
      # deployed inside an overlay network
      AGENT_CLUSTER_ADDR: tasks.agent
      # AGENT_PORT: 9001
      # LOG_LEVEL: debug
    env_file:
      - ./.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]
  portainer:
    image: portainer/portainer-ce:latest
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    volumes:
      - portainer_data:/data
    networks:
      - traefik_net
      - agent_network
    env_file:
      - ./.env
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.portainer.service=portainer"
        - "traefik.http.routers.portainer.rule=Host(`portainer.com`)"
        - "traefik.http.routers.portainer.entrypoints=https"
        - "traefik.http.services.portainer.loadbalancer.server.port=9000"
        - "traefik.http.routers.portainer.tls=true"
        - "traefik.http.routers.portainer.tls.certresolver=le"
        - "traefik.docker.network=traefik_net"
        - "traefik.http.middlewares.portainer-redirect.redirectregex.regex=^https?://portainer.com:8443"
        - "traefik.http.middlewares.portainer-redirect.redirectregex.replacement=https://portainer.com"
        - "traefik.http.middlewares.portainer-redirect.redirectregex.permanent=true"
        - "traefik.http.routers.portainer.middlewares=portainer-redirect"
You just need to add one more entrypoint and it will work:
The stack file stays the same except for two lines. First, declare an extra entrypoint for port 8443 in the reverse-proxy command section:

    command:
      - --providers.docker
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.http.address=:80
      - --entrypoints.https.address=:443
      - --entrypoints.https-new.address=:8443
      # ... rest of the command flags unchanged

Then attach the portainer router to the new entrypoint in its labels:

      - "traefik.http.routers.portainer.entrypoints=https,https-new"

Everything else in the file is unchanged from the question. With the router listening on both entrypoints, the portainer-redirect middleware rewrites requests arriving on :8443 to https://portainer.com.
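For intuition, the redirectregex middleware applies a regex substitution to the full request URL. Its effect can be sketched in Python (Python's re is close enough to Go's regexp for this pattern):

```python
import re

# Same regex and replacement as the portainer-redirect middleware labels
PATTERN = re.compile(r"^https?://portainer\.com:8443")

def redirect(url: str) -> str:
    # URLs that do not match the pattern pass through unchanged
    return PATTERN.sub("https://portainer.com", url)
```

So a request URL with the :8443 port is rewritten, while paths on the plain domain are left alone.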
I deployed Gitea + Drone + runners for my group. runner-docker works fine, but runner-ssh and runner-exec do not.
Deployed with docker-compose:
gitea
version: "3"
services:
  gitea:
    image: gitea/gitea:1.15.7
    # container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - DB_TYPE=mysql
      - DB_HOST=db:3306
      - DB_NAME=gitea
      - DB_USER=gitea
      - DB_PASSWD=xxxxxxx
    restart: always
    volumes:
      - ./gitea:/data
      - /home/git/.ssh/:/data/git/.ssh
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "33333:3000"
      - "22:22"
    depends_on:
      - db
  db:
    image: mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=xxxxxx
      - MYSQL_USER=gitea
      - MYSQL_PASSWORD=xxxxxx
      - MYSQL_DATABASE=gitea
    volumes:
      - ./db:/var/lib/mysql
drone
version: "3"
services:
  drone:
    image: drone/drone
    container_name: drone
    ports:
      - "8800:80"
      # - "44443:443"
    volumes:
      - ./drone:/data
    environment:
      - DRONE_GITEA_SERVER=https://git.ioiox.com
      - DRONE_GITEA_CLIENT_ID=xxxxxxxxx
      - DRONE_GITEA_CLIENT_SECRET=xxxxxxxxx
      - DRONE_RPC_SECRET=yyyyyyyyyyyyyy
      - DRONE_SERVER_HOST=drone.ioiox.com
      - DRONE_SERVER_PROTO=https
      - DRONE_GIT_ALWAYS_AUTH=true
      - DRONE_USER_CREATE=username:stilleshan,admin:true
    restart: always
runner-docker & runner-ssh
version: "3"
services:
  drone-runner-docker:
    image: drone/drone-runner-docker:1
    container_name: drone-runner-docker
    # ports:
    #   - "3000:3000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ=Asia/Shanghai
      - DRONE_RPC_PROTO=https
      - DRONE_RPC_HOST=drone.ioiox.com
      - DRONE_RPC_SECRET=yyyyyyyyyyyyyy
      - DRONE_RUNNER_CAPACITY=5
      - DRONE_RUNNER_NAME=runner-docker
    restart: always
  drone-runner-ssh:
    image: drone/drone-runner-ssh
    container_name: drone-runner-ssh
    # ports:
    #   - 3001:3000
    environment:
      - TZ=Asia/Shanghai
      - DRONE_RPC_PROTO=https
      - DRONE_RPC_HOST=drone.ioiox.com
      - DRONE_RPC_SECRET=yyyyyyyyyyyyyy
      - DRONE_RUNNER_CAPACITY=5
      - DRONE_RUNNER_NAME=runner-ssh
    restart: always
runner-exec
Installed on CentOS 7 following the drone.io documentation.
issue
runner-exec
When I push to a private repo to trigger the runner, something goes wrong at the git clone step; public repos work fine.
I tried setting DRONE_GIT_ALWAYS_AUTH=true and false, to no effect.
(error screenshot omitted)
runner-ssh
I use type: ssh in .drone.yml and can trigger workflows, but I get a "clone: skipped" error; when I disable the clone step I get "greeting: skipped" instead. It seems the whole workflow is not running.
(error screenshot omitted)
I am working on a software project and I am using Elasticsearch to support my search functionality. I have a server that I use for testing, and I am using docker-compose to create my cluster. Whenever I use my app's search bar (hosted via Apache on the same server as ES), I never get the responses back due to CORS restrictions. (I am getting around this with a Chrome extension, but I don't want users to have to install it to search.)
I have tried enabling the relevant http settings in the elasticsearch.yml file, but that doesn't seem to have done anything. I am using the Elasticsearch JavaScript module to make requests.
docker-compose:
version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    container_name: elasticsearch
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    privileged: true
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    volumes:
      - esdata1:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    container_name: elasticsearch2
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    privileged: true
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    volumes:
      - esdata2:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    container_name: elasticsearch3
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    privileged: true
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    volumes:
      - esdata3:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.5.0
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://elasticsearch:9200/
    ports:
      - 5601:5601
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
volumes:
  esdata1:
  esdata2:
  esdata3:
elasticsearch.yml:
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With,X-Auth-Token,Content-Type, Content-Length, Authorization"
How I connect to ES (in JavaScript):
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  host: 'myServersIP:9200'
});
So when I go to my server's webpage (www.mydomain.com) I get my application, and everything looks good. When I try to search, I hit the CORS issue and can't get any results back. I think this might be because I am connecting to my client via "myServersIP:9200" rather than some proxied subdomain, but I am not sure. Any ideas on what I could be doing wrong and how to fix this?
Try editing the elasticsearch.yml under C:\ProgramData\Elastic\Elasticsearch\config (in the Windows case).
Edit it and add the CORS configuration (responsibly, as "*" is quite dangerous).
Hope this helps.
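For context on why "*" is dangerous: http.cors.allow-origin accepts "*" (any origin), a literal origin, or a /regex/-delimited pattern. A simplified sketch of the matching logic (not Elasticsearch's actual implementation):

```python
import re

def origin_allowed(origin: str, allow_origin: str) -> bool:
    # "*" allows every origin; a /.../-delimited value is treated as a regex;
    # anything else must match the origin exactly (simplified model)
    if allow_origin == "*":
        return True
    if allow_origin.startswith("/") and allow_origin.endswith("/"):
        return re.fullmatch(allow_origin[1:-1], origin) is not None
    return origin == allow_origin
```

A regex like /https://.*\.mydomain\.com/ restricts browsers to your own subdomains instead of every site on the internet.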
I would like to use Traefik in a swarm cluster. Following this guide https://docs.traefik.io/user-guide/swarm-mode/#deploy-traefik I wrote this stack file:
traefik:
  image: traefik:alpine
  deploy:
    placement:
      constraints:
        - node.role == manager
  command: --api --docker --docker.watch --docker.swarmMode
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "80:80"
    - "8080:8080"
  labels:
    - "traefik.enable=false"
backend:
  image: registry.example.com/backend
  labels:
    - "traefik.backend=backend"
    - "traefik.backend.buffering.maxRequestBodyBytes=2147483648"
    - "traefik.backend.loadbalancer.sticky=true"
    - "traefik.frontend.rule=Host:backend.localhost"
    - "traefik.frontend.passHostHeader=true"
    - "traefik.port=80"
api:
  image: registry.example.com/api
  labels:
    - "traefik.backend=api"
    - "traefik.backend.buffering.maxRequestBodyBytes=2147483648"
    - "traefik.backend.loadbalancer.sticky=true"
    - "traefik.frontend.rule=Host:api.localhost"
    - "traefik.frontend.passHostHeader=true"
    - "traefik.port=80"
Traefik starts but nothing is configured, and I cannot figure out where the error is.
You forgot the network part from the example.
You are missing both the network-related labels and the networks themselves:

deploy:
  labels:
    - "traefik.docker.network=traefik-network" # for both api and backend
...
networks:
  - "traefik-network" # for traefik, api and backend
...
networks:
  traefik-network: {} # you can also make it external

EDIT:
also, on swarm, the labels should be set under the "deploy" section of your service, not on the service itself.
I'm trying to set up a docker swarm using Traefik on DigitalOcean. I followed this tutorial and everything works until I add one of my custom-made containers. I am trying to add just one first (there are 14 in total), and they are all very similar: Express apps that serve as RESTful APIs, each handling one resource per service. However, when trying to access that specific subdomain, I get a connection refused error.
Here's my docker-stack.yml file:
version: '3.6'
services:
  traefik:
    image: traefik:latest
    networks:
      - mynet
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
    ports:
      - "80:80"
      - "8080:8080"
    command: --api
  main:
    image: nginx
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=main"
        - "traefik.frontend.rule=Host:domain.com"
  two:
    image: jwilder/whoami
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=8000"
        - "traefik.backend=two"
        - "traefik.frontend.rule=Host:two.domain.com"
  three:
    image: emilevauge/whoami
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=three"
        - "traefik.frontend.rule=Host:three.domain.com"
  user-service:
    image: hollarves/users:latest
    env_file:
      - .env.user
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=users"
        - "traefik.frontend.rule=Host:users.domain.com"
networks:
  mynet:
    driver: overlay
As I said, going to two.domain.com and three.domain.com works fine, and the whoami containers respond with their info. However, I get a connection refused error when trying users.domain.com
Note: domain.com is an actual domain I am using that is live pointing to a digitalocean cluster, I'm just hiding it for privacy reasons.
The entrypoint for this users-service is:
if (process.env.NODE_ENV !== "production") {
  require("dotenv").load()
}

const express = require("express"),
  bodyParser = require("body-parser"),
  logger = require("morgan"),
  //helmet = require("helmet"),
  cors = require("cors"),
  PORT = parseInt(process.env.PORT, 10) || 80

const server = express(),
  routes = require("./server/routes")

//server.use(helmet())
server.use(cors())
server.use(logger("dev"))
server.use(bodyParser.json())
server.use("/", routes)

/*eslint no-console: ["error", { allow: ["log"] }] */
const serverObj = server.listen(PORT, () => { console.log("Server listening in PORT ", PORT) })

module.exports = serverObj
I can also confirm that this service is listening on PORT 80 as that's what it outputs when fetching logs from it using docker service logs test-stack_user-service:
test-stack_user-service.1.35p3lxzovphr@node-2 | > users-mueve@0.0.1 start /usr/src/app
test-stack_user-service.1.35p3lxzovphr@node-2 | > node server.js
test-stack_user-service.1.35p3lxzovphr@node-2 |
test-stack_user-service.1.35p3lxzovphr@node-2 | Server listening in PORT 80
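The entrypoint's port fallback, parseInt(process.env.PORT, 10) || 80, can be mirrored in Python to make the behavior explicit (a hypothetical helper, not part of the service):

```python
def resolve_port(env: dict, default: int = 80) -> int:
    # Mirrors `parseInt(process.env.PORT, 10) || 80`: a missing, empty,
    # non-numeric, or zero PORT all fall back to the default
    try:
        port = int(env.get("PORT", ""), 10)
    except ValueError:
        return default
    return port or default
```

Since the logs show the service really is on port 80, this confirms the traefik.port=80 label matches what the container listens on.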
Here is my traefik.toml config file just in case:
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[retry]
[docker]
endpoint="unix:///var/run/docker.sock"
exposedByDefault=true
watch=true
swarmmode=true
I can also see the containers in the traefik dashboard like I used to in my local environment.
I feel like I'm missing a very small detail that is preventing my service from working correctly. Any pointers will be extremely appreciated.
Thanks!