I have a RabbitMQ container in Docker and another service that sends stream-type messages to it. Everything works as long as the service runs outside Docker, but if I build the service as a container, run it in Docker, and send stream messages, it always fails with "System.Net.Sockets.SocketException (111): Connection refused". Sending a classic-type message, however, succeeds.
rabbitmq:
  container_name: rabbitmq
  image: rabbitmq:3-management
  environment:
    RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_stream advertised_host localhost
    RABBITMQ_DEFAULT_USER: "admin"
    RABBITMQ_DEFAULT_PASS: "admin"
    RABBITMQ_DEFAULT_VHOST: "application"
  ports:
    - 5672:5672
    - 5552:5552
    - 15672:15672
  volumes:
    - ./conf/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./conf/rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins
  healthcheck:
    test: rabbitmq-diagnostics -q ping
    interval: 15s
    timeout: 15s
    retries: 5
  env_file:
    - ./.local.env
./conf/rabbitmq/rabbitmq.conf
stream.listeners.tcp.1 = 5552
stream.tcp_listen_options.backlog = 4096
stream.tcp_listen_options.recbuf = 131072
stream.tcp_listen_options.sndbuf = 131072
stream.tcp_listen_options.keepalive = true
stream.tcp_listen_options.nodelay = true
stream.tcp_listen_options.exit_on_close = true
stream.tcp_listen_options.send_timeout = 120
./conf/rabbitmq/enabled_plugins
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_stream,rabbitmq_stream_management].
The other service's configuration in Docker:
# RabbitMQ
Host = "host.docker.internal",
VirtualHost = "application",
Port = 5672,
StreamPort = 5552,
User = "admin",
Password = "admin",
UseSSL = false
I'm trying to get a Traefik Docker instance running on my Raspberry Pi 4 8GB. I have everything set up, but I can't get the Let's Encrypt certification working. (My domain registrar is Porkbun.)
Here's my docker-compose:
version: '3.4'
services:
  traefik:
    image: 'traefik:2.3'
    restart: 'unless-stopped'
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - './config_files/traefik.toml:/traefik.toml'
      - './config_files/traefik_dynamic.toml:/traefik_dynamic.toml'
      - './config_files/acme.json:/acme.json'
    networks:
      - pi
  whoami:
    image: 'traefik/whoami'
    restart: 'unless-stopped'
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.whoami.rule=PathPrefix(`/whoami{regex:$$|/.*}`)'
      - 'traefik.http.services.whoami.loadbalancer.server.port=80'
networks:
  pi:
    external: true
And here's my traefik.toml:
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
[entryPoints.websecure]
address = ":443"
[entryPoints.websecure.http.tls]
certResolver = "lets-encrypt"
[api]
dashboard = true
[certificatesResolvers.lets-encrypt.acme]
email = "lucien.astie#gmail.com"
storage = "acme.json"
[certificatesResolvers.lets-encrypt.acme.tlsChallenge]
[providers.docker]
watch = true
network = "web"
exposedByDefault = false
[providers.file]
filename = "traefik_dynamic.toml"
Lastly, my traefik_dynamic.toml:
[http.middlewares.simpleAuth.basicAuth]
users = [
"uberfluff:$apr1$qAWpnRq5$W94tcAy9JCKE6TN.Zy/Kp1"
]
[http.routers.api]
rule = "Host(`lulusworld.art`)"
entrypoints = ["web"]
middlewares = ["simpleAuth"]
service = "api#internal"
[http.routers.api.tls]
certResolver = "lets-encrypt"
But with all of this I get this error:
Unable to obtain ACME certificate for domains "lulusworld.art": unable to generate a certificate for the domains [lulusworld.art]: error: one or more domains had a problem:\n[lulusworld.art] acme: error: 400 :: urn:ietf:params:acme:error:dns :: no valid A records found for lulusworld.art; no valid AAAA records found for lulusworld.art, url: \n" routerName=api@file rule="Host(lulusworld.art)" providerName=lets-encrypt.acme
Here's what I did to try to fix this :
I made an A record (the record is working, but not the SSL)
According to the docs you need the DNS challenge for a wildcard certificate, but I can't get Porkbun working with the DNS challenge
If you have any idea how I could solve my problem it would be greatly appreciated.
I'm trying to push an image to my registry with the GitLab CI. I can log in without any problems (the before script). However, I get the following error on the push command:
error parsing HTTP 400 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body>\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
This is the config.toml of the gitlab-runner that is used:
[[runners]]
  name = "e736f9d48a40"
  url = "https://gitlab.domain.com/"
  token = "token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
This is the relevant part of the .gitlab-ci.yml:
image: docker

services:
  - docker:dind

variables:
  BACKEND_PROJECT: "test"
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

containerize:
  stage: containerize
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  only:
    - master
  script:
    - "cd backend/"
    - "docker build -t $CI_REGISTRY_IMAGE/api:latest ."
    - "docker push $CI_REGISTRY_IMAGE/api:latest"
The GitLab omnibus registry configuration
registry_external_url 'https://gitlab.domain.com:5050'
registry_nginx['enable'] = true
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.domain.com/privkey.pem"
registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.domain.com/fullchain.pem"
registry_nginx['port'] = 443
registry_nginx['redirect_http_to_https'] = true
### Settings used by Registry application
registry['enable'] = true
registry_nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "http",
  "X-Forwarded-Ssl" => "on"
}
Can someone help me with this problem?
Okay, the solution was quite simple. I only had to change the
"X-Forwarded-Proto" => "http",
to
"X-Forwarded-Proto" => "https",
I'm working on a project to build a front end for a private/secure docker registry. The way I'm doing this is to use docker-compose to create a network between the front end and the registry. My idea is to use express to serve my site and forward requests from the client to the registry via the docker network.
Locally, everything works perfectly....
However, in production the client doesn't get a response back from the registry. I can log in to the registry and access its API via Postman (for example, the catalog) at https://myregistry.net:5000/v2/_catalog. But the client just errors out.
When I go into the express server container and try to curl the endpoint I created to proxy requests, I get this:
curl -vvv http://localhost:3000/api/images
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3000 (#0)
> GET /api/images HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
and the error that's returned includes a _currentUrl of https://username:password@registry:5000/v2/_catalog
My docker-compose file looks like this:
version: '3'
services:
  registry:
    image: registry:2
    container_name: registry
    ports:
      # forward requests to registry.ucdev.net:5000 to 127.0.0.1:443 on the container
      - "5000:443"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_ADDR: 0.0.0.0:443
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
    volumes:
      - /etc/letsencrypt/live/registry.ucdev.net/fullchain.pem:/certs/fullchain.pem
      - /etc/letsencrypt/live/registry.ucdev.net/privkey.pem:/certs/privkey.pem
      - ./auth:/auth
    restart: always
  server:
    image: uc/express
    container_name: registry-server
    ports:
      - "3000:3000"
    volumes:
      - ./:/project
    environment:
      NODE_ENV: production
    restart: always
    entrypoint: ["npm", "run", "production"]
An example of my front-end request looks like this:
axios.get('http://localhost:3000/api/images')
  .then((response) => {
    const { data: { registry, repositories } } = response;
    this.setState((state, props) => {
      return { registry, repositories }
    })
  })
  .catch((err) => {
    console.log(`Axios error -> ${err}`)
    console.error(err)
  })
That request is sent to the express server and then forwarded to the registry like this:
app.get('/api/images', async (req, res) => {
  // scheme is either http or https depending on NODE_ENV
  // registry is the name of the container on the docker network
  await axios.get(`${scheme}://registry:5000/v2/_catalog`)
    .then((response) => {
      const { data } = response;
      data.registry = registry;
      res.json(data);
    })
    .catch((err) => {
      console.log('Axios error -> images ', err);
      return err;
    })
})
Any help you could offer would be great! Thanks!
In this particular case it was an issue related to the firewall the server was behind: requests coming from the Docker containers were being blocked. To solve this problem we had to explicitly set network_mode to bridge, which allowed requests from within the containers to behave correctly. The final docker-compose file looks like this:
version: '3'
services:
  registry:
    image: registry:2
    container_name: registry
    # setting network_mode here and on the server helps the express api calls work correctly on the myregistry.net server.
    # otherwise, the calls fail with 'network unreachable' due to the firewall.
    network_mode: bridge
    ports:
      # forward requests to myregistry.net:5000 to 127.0.0.1:443 on the container
      - "5000:443"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_ADDR: 0.0.0.0:443
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
    volumes:
      - /etc/letsencrypt/live/myregistry.net/fullchain.pem:/certs/fullchain.pem
      - /etc/letsencrypt/live/myregistry.net/privkey.pem:/certs/privkey.pem
      - ./auth:/auth
    restart: always
  server:
    image: uc/express
    container_name: registry-server
    network_mode: bridge
    ports:
      - "3000:3000"
    volumes:
      - ./:/project
    environment:
      NODE_ENV: production
    restart: always
    entrypoint: ["npm", "run", "production"]
RabbitMQ connection from the console app:
var factory = new ConnectionFactory()
{
    HostName = Environment.GetEnvironmentVariable("RabbitMq/Host"),
    UserName = Environment.GetEnvironmentVariable("RabbitMq/Username"),
    Password = Environment.GetEnvironmentVariable("RabbitMq/Password")
};

using (var connection = factory.CreateConnection()) // GETTING ERROR HERE
using (var channel = connection.CreateModel())
{
    channel.QueueDeclare(queue: "rss",
                         durable: fa...
I'm getting this error:
Unhandled Exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the
specified endpoints were reachable --->
RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException:
Connection refused 127.0.0.1:5672
My docker-compose.yml file:
version: '3'
services:
  message.api:
    image: message.api
    build:
      context: ./message_api
      dockerfile: Dockerfile
    container_name: message.api
    environment:
      - "RabbitMq/Host=rabbit"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    depends_on:
      - rabbit
  rabbit:
    image: rabbitmq:3.7.2-management
    hostname: rabbit
    ports:
      - "15672:15672"
      - "5672:5672"
  rsscomparator:
    image: rsscomparator
    build:
      context: ./rss_comparator_app
      dockerfile: Dockerfile
    container_name: rsscomparator
    environment:
      - "RabbitMq/Host=rabbit"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    depends_on:
      - rabbit
I'm using a .NET Core console app. When I run this app in Docker I get the error. I can reach the RabbitMQ management UI in the browser (http://192.168.99.100:15672), but the app cannot reach RabbitMQ.
You are trying to connect from your container app to your RabbitMQ container, and you are doing so with 127.0.0.1:5672 from inside the console app container. But that address points to localhost inside that container, not to localhost on your host.
You are deploying your services using the same docker-compose file without specifying any network settings, which means they are all deployed on the same Docker bridge network. This allows the containers to communicate with each other using their container or service names.
So try connecting to rabbit:5672 instead of 127.0.0.1:5672. The name will be resolved to the container IP (172.xx.xx.xx), giving you a private connection between your containers.
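For illustration, here is a minimal sketch of that change in the console app. The fallback values are an assumption for the sketch only (in case the RabbitMq/* environment variables are not set inside the container); the queue name "rss" comes from the code above.

using System;
using RabbitMQ.Client;

class Program
{
    static void Main()
    {
        // Use the compose service name ("rabbit"), not 127.0.0.1.
        // The fallbacks below are assumptions for this sketch only.
        var host = Environment.GetEnvironmentVariable("RabbitMq/Host") ?? "rabbit";
        var user = Environment.GetEnvironmentVariable("RabbitMq/Username") ?? "guest";
        var pass = Environment.GetEnvironmentVariable("RabbitMq/Password") ?? "guest";

        var factory = new ConnectionFactory
        {
            HostName = host, // resolved to the rabbit container on the compose bridge network
            UserName = user,
            Password = pass,
            Port = 5672
        };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "rss", durable: false, exclusive: false,
                                 autoDelete: false, arguments: null);
            Console.WriteLine("Connected to {0}", host);
        }
    }
}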
I created a simple .NET Core console application with Docker support. The following MassTransit code fails to connect to the RabbitMQ instance on the host machine, but a similar implementation using RabbitMQ.Client is able to connect to the host machine's RabbitMQ instance.
MassTransit throws:
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect failed: ctas@192.168.0.9:5672/ ---> RabbitMQ.Client.Exceptions.BrokerUnreachableException:
Host machine IP: 192.168.0.9
Using MassTransit:
string rabbitMqUri = "rabbitmq://192.168.0.9/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });

    cfg.ReceiveEndpoint(host, assetServiceQueue, e =>
    {
        e.Consumer<AddNewAssetReceivedConsumer>();
    });
});

bus.Start();
Console.WriteLine("Service Running.... Press enter to exit");
Console.ReadLine();
bus.Stop();
Using RabbitMQ.Client:
public static void Main()
{
    var factory = new ConnectionFactory();
    factory.UserName = "ctas";
    factory.Password = "ctas#123";
    factory.VirtualHost = "watcherindustry";
    factory.HostName = "192.168.0.9";

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "hello",
                             durable: false,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
        };

        channel.BasicConsume(queue: "hello",
                             autoAck: true,
                             consumer: consumer);

        Console.WriteLine(" Press [enter] to exit.");
        Console.ReadLine();
    }
}
Dockerfile:
FROM microsoft/dotnet:1.1-runtime
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "TestClient.dll"]
I created an example and was able to connect to my host, using the preview package from MassTransit.
Start RabbitMQ in Docker and expose the ports on the host:
docker run -d -p 5672:5672 -p 15672:15672 --hostname my-rabbit --name some-rabbit rabbitmq:3-management
Build and run the console app:
docker build -t dotnetapp .
docker run -d -e RABBITMQ_URI=rabbitmq://guest:guest@172.17.0.2:5672 --name some-dotnetapp dotnetapp
To verify you're receiving messages, run
docker logs some-dotnetapp --follow
You should see the following output:
Application is starting...
Connecting to rabbitmq://guest:guest@172.17.0.2:5672
Received: Hello, World [08/12/2017 04:35:53]
Received: Hello, World [08/12/2017 04:35:58]
Received: Hello, World [08/12/2017 04:36:03]
Received: Hello, World [08/12/2017 04:36:08]
Received: Hello, World [08/12/2017 04:36:13]
...
Notes:
172.17.0.2 was the my-rabbit container's IP address, but you can replace it with your machine's IP address (one way to look it up is sketched after these notes)
http://localhost:15672 is the RabbitMQ management console; log in with guest as both username and password.
Lastly, portainer.io is a very useful application for visually inspecting your local Docker environment.
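As a quick aside, one way to look up that container IP (a sketch using the some-rabbit container name from the docker run command above) is:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' some-rabbit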
Thanks for the response. I managed to resolve this issue. My findings are as follows.
To connect to a RabbitMQ instance in another Docker container, both containers have to be connected to the same network. To do this:
Create a network:
docker network create -d bridge my_bridge
Connect both the app and RabbitMQ containers to that network:
docker network connect my_bridge <container name>
For the MassTransit URI, use the RabbitMQ container's IP on that network or the container name (see the sketch below).
To connect to the host machine's RabbitMQ instance from an app in a Docker container, the MassTransit URI should include the machine name (I tried the IP; that did not work).
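Going back to the container-on-my_bridge case, here is a minimal sketch of the MassTransit host configuration with the container name in the URI, reusing the same API shape as the code in the question. The container name rabbitmq-server is a placeholder; the credentials, queue name, and consumer class come from the question's code.

using System;
using MassTransit;

// Placeholder container name on the my_bridge network; substitute your own.
string rabbitMqUri = "rabbitmq://rabbitmq-server/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });

    // AddNewAssetReceivedConsumer is the consumer class from the question.
    cfg.ReceiveEndpoint(host, assetServiceQueue, e =>
    {
        e.Consumer<AddNewAssetReceivedConsumer>();
    });
});

bus.Start();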
Try using the virtual host in the MassTransit configuration too; not sure why you decided to omit it.
var host = cfg.Host("192.168.0.9", "watcherindustry", hst =>
{
hst.Username(userName);
hst.Password(password);
});
Look at Alexey Zimarev's comment on your question: if your RabbitMQ runs in a container, then it should be in your docker-compose file, and you should use that entry in your endpoint definition to connect to RabbitMQ, because Docker creates an internal network and your source code stays agnostic of the actual addresses...
rabbitmq:
  container_name: "rabbitmq-yournode01"
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=yourusergoeshere
    - RABBITMQ_DEFAULT_PASS=yourpasswordgoeshere
    - RABBITMQ_DEFAULT_VHOST=vhost
  volumes:
    - rabbit-volume:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
In your app settings you should have something like:
"ConnectionString": "host=rabbitmq:5672;virtualHost=vhost;username=yourusergoeshere;password=yourpasswordgoeshere;timeout=0;prefetchcount=1",
And if you use EasyNetQ you could do:
_bus = RabbitHutch.CreateBus(_connectionString); // The one above
I hope it helps,
Juan