Running localstack with docker-compose - amazon-s3

I'm running a LocalStack setup with docker-compose.yaml, and below is the content of docker-compose.yaml:
version: "3.8"
services:
localstack:
image: localstack/localstack:latest
container_name: localstack_test
ports:
- "127.0.0.1:4566:4566"
- "127.0.0.1:4463-4499:4463-4499"
environment:
- SERVICES=s3
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
volumes:
- "./.localstack:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
Whenever we try to hit the URL localhost:4566, it returns a blank page in the browser, and the Docker logs are as below.
localstack | 2023-02-08T13:06:25.695 DEBUG --- [ MainThread] plugin.manager : plugin localstack.hooks.on_infra_ready:extensions_on_infra_ready is disabled
localstack | 2023-02-08T13:06:25.695 DEBUG --- [ MainThread] plugin.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_ready.initialize_health_info = <function initialize_health_info at 0x7fbfdcf48310>)
localstack | 2023-02-08T13:06:25.695 DEBUG --- [ MainThread] plugin.manager : plugin localstack.hooks.on_infra_ready:initialize_health_info is disabled
localstack | 2023-02-08T13:06:34.860 DEBUG --- [ asgi_gw_0] l.aws.handlers.service : no service set in context, skipping request parsing
localstack | 2023-02-08T13:06:34.864 INFO --- [ asgi_gw_0] localstack.request.http : GET / => 200
Can anyone please let me know what the issue is here? It was invoking S3 initially without any changes, and then restarting Docker caused this. Thanks in advance.
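For reference, the edge port does not serve a web UI at the root path, so a blank page with a "GET / => 200" in the logs is not by itself an error. One way to check whether S3 is actually up is the health endpoint plus the AWS CLI pointed at the edge port; the bucket name below is only an example, and the CLI is assumed to have dummy credentials and a region configured:
# health endpoint on recent LocalStack versions lists the state of each enabled service
curl http://localhost:4566/_localstack/health
# exercise S3 through the edge port
aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket
aws --endpoint-url=http://localhost:4566 s3 ls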

Related

How to set up Traefik as a reverse proxy for an ASP.NET Core app with Kestrel?

I started studying this a while ago, but I still haven't figured out how to configure the docker-compose file for a simple ASP.NET Core app behind a Traefik proxy.
I took a simple ASP.NET Core sample application from the Microsoft site, which after deployment is available at localhost:8443 over HTTPS, since I had previously issued a self-signed certificate (aspnetapp.pfx).
Then I deployed Traefik and configured the dashboard. I can see that Traefik picks up information about the aspnet_demo container, but nothing is reachable at the web app addresses (webapp.mydomen.com/ or localhost); at most I get ERR_TOO_MANY_REDIRECTS in the browser.
In the Traefik logs, when requesting webapp.mydomen.com, I get "RequestURI": "/".
What have I forgotten to specify?
I understand that aspnet_demo serves its content on port 443, so I tell Traefik where to look, but nothing works (see the label sketch after the compose file below).
Please help me understand this. Thank you.
My docker-compose file for the ASP.NET Core app looks like this:
version: "3.8"
services:
aspnet_demo:
image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
container_name: aspnet_sample
ports:
- 8080:80
- 8443:443
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=https://+:443;http://+:80
- ASPNETCORE_Kestrel__Certificates__Default__Password=password
- ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
volumes:
- ~/.aspnet/https:/https:ro
networks:
- traefik-reverse-proxy
labels:
- traefik.enable=true
- traefik.http.routers.aspnet.entrypoints=web
- traefik.http.routers.aspnet.rule=Host(`webapp`)
- traefik.http.routers.aspnet_secure.entrypoints=web-secure
- traefik.http.routers.aspnet_secure.rule=Host(`webapp.mydomen.com`)
- traefik.http.routers.aspnet_secure.tls=true
- traefik.http.services.aspnet.loadbalancer.server.port=443
networks:
traefik-reverse-proxy:
external: true
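Since the container only answers over HTTPS on 443 with a self-signed certificate, here is a hedged sketch (not a confirmed fix) of the kind of labels that would tell Traefik to speak HTTPS to the backend. The router and service names reuse the ones above; the insecureSkipVerify part belongs in the Traefik static config and is only needed because the backend certificate is self-signed:
    labels:
      # bind the secure router to a service that targets the container's HTTPS port
      - traefik.http.routers.aspnet_secure.service=aspnet
      - traefik.http.services.aspnet.loadbalancer.server.port=443
      - traefik.http.services.aspnet.loadbalancer.server.scheme=https

# in traefik.yml (static config), accept the backend's self-signed certificate:
serversTransport:
  insecureSkipVerify: true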
My docker-compose file for Traefik looks like this:
version: "3.8"
services:
traefik:
image: traefik:v2.9
ports:
- "80:80"
- "443:443"
- "8080:8080"
networks:
- traefik-reverse-proxy
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./configuration/dynamic.yaml:/traefik_conf/dynamic.yaml"
- "./configuration/traefik.yml:/traefik.yml:ro"
- "./cert/:/traefik_conf/cert/"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.entrypoints=web-secure"
- "traefik.http.routers.traefik.rule=Host(`traefiklocal.mydomen.com`)"
- "traefik.http.routers.traefik.tls=true"
- "traefik.http.routers.traefik.tls.certresolver=tls"
- "traefik.http.routers.traefik.middlewares=traefik-auth"
- "traefik.http.middlewares.traefik-auth.basicauth.users=unixhost:$$apr1$$vqyMX723$$6nZ1lC3/2JN6QJyeEhJB8/"
networks:
traefik-reverse-proxy:
external: true
My Traefik static config looks like this:
api:
  dashboard: true
  insecure: true
log:
  level: DEBUG
entryPoints:
  web:
    address: ":80"
    forwardedHeaders:
      insecure: true
    http:
      redirections:
        entryPoint:
          to: web-secure
  web-secure:
    address: ":443"
providers:
  docker:
    watch: true
    exposedbydefault: false
  file:
    directory: /traefik_conf/
    watch: true
    filename: dynamic.yaml
My Traefik dynamic config:
tls:
  certificates:
    # first certificate
    - certFile: "/traefik_conf/cert/pem_com_2022.pem"
      keyfile: "/traefik_conf/cert/star_com_2022.key"
    # second certificate
    - certFile: "/traefik_conf/cert/aspnetapp.pem"
      keyfile: "/traefik_conf/cert/aspnetapp.key"
      stores:
        - default

Unable to invoke another service with Dapr

I'm having major problems getting Dapr up and running with my microservices. Every time I try to invoke another service, it returns a 500 error with the message
client error: the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection
The services and dapr sidecars are currently running in docker-compose on our dev machines but will run in Kubernetes when it is deployed properly.
When I look at the logs for the dapr containers in Docker for Windows, I can see the application being discovered on port 443 and a few initialisation messages but nothing else ever gets logged after that, even when I make my invoke request.
I have a container called clients, in which I'm calling an API called test, and this in turn tries to call Microsoft's example weather forecast API in another container called simpleapi.
I'm using Swagger UI to call the APIs. The test API returns 200, but when I put a breakpoint on the invoke, I can see the response is 500.
If I call the weatherforecast API directly using Swagger UI, it returns a 200 with the expected payload.
I have the Dapr dashboard running in a container and it doesn't show any applications.
Docker-Compose.yml
version: '3.4'
services:
  clients:
    image: ${DOCKER_REGISTRY-}clients
    container_name: "Clients"
    build:
      context: .
      dockerfile: Clients/Dockerfile
    ports:
      - "50002:50002"
    depends_on:
      - placement
      - database
    networks:
      - platform
  clients-dapr:
    image: "daprio/daprd:edge"
    container_name: clients-dapr
    command: [
      "./daprd",
      "-app-id", "clients",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002"
    ]
    depends_on:
      - clients
    network_mode: "service:clients"
  simpleapi:
    image: ${DOCKER_REGISTRY-}simpleapi
    build:
      context: .
      dockerfile: SimpleAPI/Dockerfile
    ports:
      - "50003:50003"
    depends_on:
      - placement
    networks:
      - platform
  simpleapi-dapr:
    image: "daprio/daprd:edge"
    container_name: simpleapi-dapr
    command: [
      "./daprd",
      "-app-id", "simpleapi",
      "-app-port", "443",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50003"
    ]
    depends_on:
      - simpleapi
    network_mode: "service:simpleapi"
  placement:
    image: "daprio/dapr"
    container_name: placement
    command: ["./placement", "-port", "50006"]
    ports:
      - "50006:50006"
    networks:
      - platform
  dashboard:
    image: "daprio/dashboard"
    container_name: dashboard
    ports:
      - "8080:8080"
    networks:
      - platform
networks:
  platform:
Test controller from the Clients API.
[Route("api/[controller]")]
[ApiController]
public class TestController : ControllerBase
{
[HttpGet]
public async Task<ActionResult> Get()
{
var httpClient = DaprClient.CreateInvokeHttpClient();
var response = await httpClient.GetAsync("https://simpleapi/weatherforecast");
return Ok();
}
}
This is a major new project for my company and it's looking like we're going to have to abandon Dapr and implement everything ourselves if we can't get this working soon.
I'm hoping there's some glaringly obvious problem here.
It actually turned out to be quite simple.
I needed to tell Dapr to use SSL.
The clients-dapr service needed the -app-ssl parameter, so clients-dapr should have been as follows (simpleapi-dapr needs the same parameter added too):
clients-dapr:
  image: "daprio/daprd:edge"
  container_name: clients-dapr
  command: [
    "./daprd",
    "-app-id", "clients",
    "-app-port", "443",
    "-app-ssl",
    "-placement-host-address", "placement:50006",
    "-dapr-grpc-port", "50002"
  ]
  depends_on:
    - clients
  network_mode: "service:clients"
You can also run your service on its specific port without Docker and check that Dapr works as expected. You can specify the HTTP port and gRPC port:
dapr run `
--app-id serviceName `
--app-port 5139 `
--dapr-http-port 3500 `
--dapr-grpc-port 50001 `
--components-path ./dapr-components
If the above setup works, then you can set it up with Docker; check the solution above.
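As a quick sanity check when running it this way, the sidecar's HTTP API can be called directly; a minimal sketch, assuming the --dapr-http-port 3500 from the command above and an app-id of simpleapi:
# invoke the weatherforecast method on the simpleapi app through the local Dapr sidecar
curl http://localhost:3500/v1.0/invoke/simpleapi/method/weatherforecast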

Loki config with s3

I can't get Loki to connect to AWS S3 using docker-compose. Logs are visible in Grafana but the S3 bucket remains empty.
The s3 bucket is public and I have an IAM role attached to allow s3:FullAccess.
I updated loki to v2.0.0 and changed the period to 24h but it made no difference. There are no errors in the loki logs.
Here are the selected lines from docker logs (loki):
msg="Starting Loki" version="(version=master-4e661cd, branch=master, revision=4e661cde)"
caller=server.go:225 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
caller=worker.go:65 msg="no address specified, not starting worker"
msg="cleaning up mapped rules directory" path=/loki/tmprules
msg=initialising module=memberlist-kv
msg=initialising module=store
msg=initialising module=server
msg=initialising module=ring
msg="value is nil" key=collectors/ring index=1
msg=initialising module=ingester
msg="not loading tokens from file, tokens file path is empty"
msg="instance not found in ring, adding with no tokens" ring=ingester
msg="auto-joining cluster after timeout" ring=ingester
msg=initialising module=table-manager
msg=initialising module=distributor
msg=initialising module=ingester-querier
msg=initialising module=ruler
msg="ruler up and running"
msg="Loki started"
msg="synching tables" expected_tables=132
Here is my loki.config:
auth_enabled: false
server:
  http_listen_port: 3100
distributor:
  ring:
    kvstore:
      store: memberlist
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
schema_config:
  configs:
    - from: 2020-10-27
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    resync_interval: 5s
    shared_store: s3
  aws:
    s3: s3://AKIARE3@us-east-1/mydomain.com.docker.loki.logs
    s3forcepathstyle: true
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
Here is docker-compose.yaml
version: "3.8"
networks:
traefik:
external: true
volumes:
data:
services:
fluentd:
image: grafana/fluent-plugin-loki:master
command:
- "fluentd"
- "-v"
- "-p"
- "/fluentd/plugins"
environment:
LOKI_URL: http://loki:3100
LOKI_USERNAME:
LOKI_PASSWORD:
container_name: "fluentd"
restart: always
ports:
- '24224:24224'
networks:
- traefik
volumes:
- type: bind
source: ./config/fluent.conf
target: /fluentd/etc/fluent.conf
logging:
options:
tag: docker.monitoring
loki:
image: grafana/loki:master
container_name: "loki"
restart: always
networks:
- traefik
volumes:
- type: volume
source: data
target: /loki
ports:
- 3100
volumes:
- type: bind
source: ./config/s3.loki.conf
target: /loki/etc/loki.conf
depends_on:
- fluentd
I finally worked this out. It requires a compactor, but gives no warning about it. Best practice is to create an AWS S3 bucket without any public access. Next, create an IAM user with programmatic access only. Create an access policy that gives full access only to the bucket you created, and attach the policy to the user's permissions. You do not need to attach a policy to the bucket itself. If you have "/" in your URL, make sure you escape it with %2F, otherwise you will get an auth error. Note that this config is for Loki v2.0.0, which was released yesterday.
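To illustrate the escaping point: the storage URL has the form s3://<access-key>:<secret-key>@<region>/<bucket>, and a made-up secret key containing slashes would be encoded like this:
  aws:
    # hypothetical keys; a secret of "abc/def/ghi" must have its slashes escaped as %2F
    s3: s3://AKIAEXAMPLE:abc%2Fdef%2Fghi@us-east-1/mydomain.com.docker.loki.logs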
Here are my complete working docker-compose and Loki config files. I put them on an external network to enable Prometheus monitoring.
Here is my docker-compose.yaml:
version: "3.8"
networks:
appnet:
external: true
volumes:
loki_data:
services:
fluentd:
container_name: "fluentd"
image: grafana/fluent-plugin-loki:master
command:
- "fluentd"
- "-v"
- "-p"
- "/fluentd/plugins"
environment:
LOKI_URL: http://loki:3100
LOKI_USERNAME:
LOKI_PASSWORD:
restart: always
ports:
- '24224:24224'
networks:
- appnet
volumes:
- type: bind
source: ./config/fluent.conf
target: /fluentd/etc/fluent.conf
loki:
container_name: "loki"
image: grafana/loki:2.0.0
restart: always
networks:
- appnet
ports:
- 3100
volumes:
- type: volume
source: loki_data
target: /data
- type: bind
source: ./config/s3-loki-bolt-conf.yml
target: /etc/loki/local-config.yaml
command: -config.file=/etc/loki/local-config.yaml
depends_on:
- fluentd
Here is my Loki config in prometheus/config/s3-loki-bolt-conf.yml. You can name this anything you want, but keep the target file name as above, since it is the Loki default config file.
auth_enabled: false
ingester:
  chunk_idle_period: 3m
  chunk_block_size: 262144
  chunk_retain_period: 1m
  max_transfer_retries: 0
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: aws
schema_config:
  configs:
    - from: 2020-07-01
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: loki_index_
        period: 24h
server:
  http_listen_port: 3100
storage_config:
  aws:
    s3: s3://ACCESS_KEY:SECRET_ACCESS_KEY@us-west-1/mydomain.com.docker.loki.logs
  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: s3
    cache_location: /loki/boltdb-cache
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
For those who want to use boltdb-shipper and store the data in an S3-compatible object store (in my case Scaleway), using Helm and Loki 2.0.0:
Here is my values.yml:
loki:
  enabled: true
  config:
    auth_enabled: false
    ingester:
      chunk_idle_period: 3m
      chunk_block_size: 262144
      chunk_retain_period: 1m
      max_transfer_retries: 0
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: aws
    schema_config:
      configs:
        - from: 2020-11-13
          store: boltdb-shipper
          object_store: aws
          schema: v11
          index:
            prefix: loki_index_
            period: 24h
    server:
      http_listen_port: 3100
    storage_config:
      aws:
        s3: s3://<key>:<secret>@s3.fr-par.scw.cloud/<bucket-name>
        region: fr-par
        s3forcepathstyle: true
      boltdb_shipper:
        active_index_directory: /data/loki/index
        shared_store: s3
        cache_location: /data/loki/boltdb-cache
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: true
      retention_period: 720h
promtail:
  enabled: true
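A deployment sketch for these values, assuming they target the grafana/loki-stack chart (chart, release name, and namespace are my assumptions, not stated above):
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki grafana/loki-stack -f values.yml --namespace monitoring --create-namespace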

How to redirect to the dashboard from a URL?

I currently access the V2 dashboard through http://traefik.my.server:8080/dashboard/ (Traefik runs in a docker container and 8080 is exposed to the host).
I would like to change that so that the dashboard is available at http://traefik.my.server/dashboard
I tried to add the following labels to configure this behavior but I get a 404 when accessing http://traefik.my.server/dashboard
- traefik.http.routers.dashboard.rule=Host(`traefik.my.server:`) && Path(`/dashboard`)
- traefik.http.services.dashboard.loadbalancer.server.port=8080
- traefik.http.routers.dashboard.entryPoints=http
(the http entrypoint is port 80)
What is the correct way to set up such a redirection?
Recommended reading:
https://docs.traefik.io/v2.1/operations/dashboard/#secure-mode
https://blog.containo.us/traefik-2-0-docker-101-fc2893944b9d
https://github.com/containous/blog-posts/tree/master/2019_09_10-101_docker
FYI, it's not a redirection but routing.
https://community.containo.us/t/how-to-redirect-to-the-dashboard-from-a-url/4082/2
Following up on @ldez's help at https://community.containo.us/t/how-to-redirect-to-the-dashboard-from-a-url/4082, a working configuration is:
The docker-compose file:
services:
  traefik:
    container_name: traefik
    image: traefik
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    volumes:
      - /etc/docker/container-data/traefik:/etc/traefik
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
    labels:
      - traefik.http.routers.api.rule=Host(`traefik.mydomain.org`)
      - traefik.http.routers.api.service=api@internal
      - traefik.http.routers.api.middlewares=lan
      - traefik.http.middlewares.lan.ipwhitelist.sourcerange=192.168.10.0/24, 192.168.20.0/24
      - traefik.enable=true
version: "3"
Configuration file
global:
  sendAnonymousUsage: true
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
api:
  dashboard: true
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
    defaultRule: "Host(`{{ index .Labels \"com.docker.compose.service\" }}.mydomain.org`)"
log:
  level: INFO
  #level: DEBUG
certificatesResolvers:
  le:
    acme:
      email: le@mydomain.org
      storage: /etc/traefik/acme.json
      tlsChallenge: {}
      #caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
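With the api@internal router above, the dashboard is then served under /dashboard/ (the trailing slash matters) on the configured host. A quick check from a whitelisted address, reusing the hostname from the labels, might be:
# the dashboard UI lives under /dashboard/ and its data API under /api/ when routed via api@internal
curl -I http://traefik.mydomain.org/dashboard/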

docker swarm custom containers with traefik

I'm trying to set up a Docker swarm using Traefik on DigitalOcean. I followed this tutorial and got it to work entirely until I added one of my custom-made containers. I am trying to add just one first (there are 14 in total) and they are all very similar: all of them are Express apps that serve as RESTful APIs handling one resource per service. However, when trying to access that specific subdomain I get a connection refused error.
Here's my docker-stack.yml file:
version: '3.6'
services:
  traefik:
    image: traefik:latest
    networks:
      - mynet
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
    ports:
      - "80:80"
      - "8080:8080"
    command: --api
  main:
    image: nginx
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=main"
        - "traefik.frontend.rule=Host:domain.com"
  two:
    image: jwilder/whoami
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=8000"
        - "traefik.backend=two"
        - "traefik.frontend.rule=Host:two.domain.com"
  three:
    image: emilevauge/whoami
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=three"
        - "traefik.frontend.rule=Host:three.domain.com"
  user-service:
    image: hollarves/users:latest
    env_file:
      - .env.user
    networks:
      - mynet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend=users"
        - "traefik.frontend.rule=Host:users.domain.com"
networks:
  mynet:
    driver: overlay
As I said, going to two.domain.com and three.domain.com works fine, and the whoami containers respond with their info. However, I get a connection refused error when trying users.domain.com
Note: domain.com is an actual live domain pointing to a DigitalOcean cluster; I'm just hiding it for privacy reasons.
The entrypoint for this users-service is:
if (process.env.NODE_ENV !== "production") {
  require("dotenv").load()
}
const express = require("express"),
  bodyParser = require("body-parser"),
  logger = require("morgan"),
  //helmet = require("helmet"),
  cors = require("cors"),
  PORT = parseInt(process.env.PORT, 10) || 80
const server = express(),
  routes = require("./server/routes")
//server.use(helmet())
server.use(cors())
server.use(logger("dev"))
server.use(bodyParser.json())
server.use("/", routes)
/*eslint no-console: ["error", { allow: ["log"] }] */
const serverObj = server.listen(PORT, () => { console.log("Server listening in PORT ", PORT) })
module.exports = serverObj
I can also confirm that this service is listening on PORT 80 as that's what it outputs when fetching logs from it using docker service logs test-stack_user-service:
test-stack_user-service.1.35p3lxzovphr@node-2 | > users-mueve@0.0.1 start /usr/src/app
test-stack_user-service.1.35p3lxzovphr@node-2 | > node server.js
test-stack_user-service.1.35p3lxzovphr@node-2 |
test-stack_user-service.1.35p3lxzovphr@node-2 | Server listening in PORT 80
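For reference, one way to check whether the user-service port is reachable over the overlay network at all (independently of Traefik) is a throwaway curl container on the stack's network. This is only a sketch: it assumes the stack was deployed as test-stack (so the network is named test-stack_mynet) and that the network is attachable, which a stack-created overlay network is not by default.
# one-off container on the stack's overlay network, hitting the service by its short name
docker run --rm --network test-stack_mynet curlimages/curl -s -o /dev/null -w "%{http_code}\n" http://user-service:80/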
Here is my traefik.toml config file just in case:
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
    address = ":80"

[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
exposedByDefault = true
watch = true
swarmmode = true
I can also see the containers in the traefik dashboard like I used to in my local environment.
I feel like I'm missing a very small detail that is preventing my service from working correctly. Any pointers will be extremely appreciated.
Thanks!