dotnetcore console app : rabbitmq with docker Connection refused 127.0.0.1:5672 - rabbitmq

rabbit connection from console app:
var factory = new ConnectionFactory()
{
    HostName = Environment.GetEnvironmentVariable("RabbitMq/Host"),
    UserName = Environment.GetEnvironmentVariable("RabbitMq/Username"),
    Password = Environment.GetEnvironmentVariable("RabbitMq/Password")
};
using (var connection = factory.CreateConnection()) // GETTING ERROR HERE
using (var channel = connection.CreateModel())
{
    channel.QueueDeclare(queue: "rss",
                         durable: fa...
I'm getting this error:
Unhandled Exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the
specified endpoints were reachable --->
RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException:
Connection refused 127.0.0.1:5672
my docker-compose.yml file:
version: '3'
services:
  message.api:
    image: message.api
    build:
      context: ./message_api
      dockerfile: Dockerfile
    container_name: message.api
    environment:
      - "RabbitMq/Host=rabbit"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    depends_on:
      - rabbit
  rabbit:
    image: rabbitmq:3.7.2-management
    hostname: rabbit
    ports:
      - "15672:15672"
      - "5672:5672"
  rsscomparator:
    image: rsscomparator
    build:
      context: ./rss_comparator_app
      dockerfile: Dockerfile
    container_name: rsscomparator
    environment:
      - "RabbitMq/Host=rabbit"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    depends_on:
      - rabbit
I'm using a dotnetcore console app. When I run this app in Docker I get the error above. I can reach the RabbitMQ management UI in my browser (http://192.168.99.100:15672), but the app cannot reach RabbitMQ.

You are trying to connect from your console app container to your RabbitMQ container, and you are doing it via 127.0.0.1:5672.
But inside the app container, 127.0.0.1 points to that container's own localhost, not to the localhost of your Docker host.
Since you deploy all services from the same docker-compose file without specifying any network settings, they are all attached to the same Docker bridge network. That lets the containers communicate with each other using their container or service names.
So try to connect to rabbit:5672 instead of 127.0.0.1:5672. The name rabbit will be resolved to the container's IP (172.xx.xx.xx), which means you'll get a private connection between your containers.
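If you want to make that shared network explicit, a compose sketch like the following pins both services to one bridge network (the network name `backend` is my own placeholder, not something from your file; by default compose already creates an equivalent shared network, so this is optional):

```yaml
# Hypothetical excerpt: explicitly place both services on one bridge network.
services:
  rabbit:
    image: rabbitmq:3.7.2-management
    hostname: rabbit
    networks:
      - backend
  message.api:
    image: message.api
    environment:
      - "RabbitMq/Host=rabbit"   # the service name resolves via Docker's embedded DNS
    networks:
      - backend
networks:
  backend:
    driver: bridge
```

Either way, the important part is that the app uses the service name `rabbit` as the host, never 127.0.0.1.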

Related

Cannot send stream messages to rabbitmq which in docker

I have a RabbitMQ container in Docker and another service that sends stream-type messages to it. It only works when the service runs outside Docker: if I build the service as a container, run it in Docker, and send stream messages, it always fails with "System.Net.Sockets.SocketException (111): Connection refused". Sending a classic-type message, however, succeeds.
rabbitmq:
  container_name: rabbitmq
  image: rabbitmq:3-management
  environment:
    RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_stream advertised_host localhost
    RABBITMQ_DEFAULT_USER: "admin"
    RABBITMQ_DEFAULT_PASS: "admin"
    RABBITMQ_DEFAULT_VHOST: "application"
  ports:
    - 5672:5672
    - 5552:5552
    - 15672:15672
  volumes:
    - ./conf/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./conf/rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins
  healthcheck:
    test: rabbitmq-diagnostics -q ping
    interval: 15s
    timeout: 15s
    retries: 5
  env_file:
    - ./.local.env
./conf/rabbitmq/rabbitmq.conf
stream.listeners.tcp.1 = 5552
stream.tcp_listen_options.backlog = 4096
stream.tcp_listen_options.recbuf = 131072
stream.tcp_listen_options.sndbuf = 131072
stream.tcp_listen_options.keepalive = true
stream.tcp_listen_options.nodelay = true
stream.tcp_listen_options.exit_on_close = true
stream.tcp_listen_options.send_timeout = 120
./conf/rabbitmq/enabled_plugins
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_stream,rabbitmq_stream_management].
The other service's configuration in Docker:
# RabbitMQ
Host = "host.docker.internal",
VirtualHost = "application",
Port= 5672,
StreamPort = 5552,
User= "admin",
Password = "admin",
UseSSL = false
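One hedged observation about the config above: `-rabbitmq_stream advertised_host localhost` makes the broker tell stream clients to reconnect to localhost, which from inside another container points at that container itself, not at RabbitMQ. A sketch of advertising the compose service name instead (the service name `rabbitmq` is taken from the compose file above):

```yaml
# Sketch: advertise a name that other containers can actually resolve.
environment:
  RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: "-rabbitmq_stream advertised_host rabbitmq"
```

The containerized client would then also use `rabbitmq` as its stream host rather than `host.docker.internal`.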

Cannot start the provider *file.Provider: field not found, node: entrypoint in Traefik configuration

I want to redirect requests to a non-dockerized web app running on another host using Traefik.
I am starting Traefik with docker-compose using the following yml:
version: "3.3"
services:
  reverse-proxy:
    image: traefik:v2.4
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.file=true"
      - "--providers.file.filename=/etc/traefik/rules.toml"
    ports:
      - "80:80"
      - "8050:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "./rules.toml:/etc/traefik/rules.toml"
    labels:
      - traefik.enable=false
And my rules.toml file is:
[entrypoints]
  [entrypoints.http]
    address = ":8080"

[providers]
  [providers.file]

[http]
  [http.routers]
    [http.routers.auth-router]
      rule = "Path(`/auth`)"
      service = "auth"
      entrypoint = ["http"]
  [http.services]
    [http.services.auth.loadbalancer]
      [[http.services.auth.loadbalancer.servers]]
        url = "http://myhost.com:8080/auth"
Whenever a user opens http://localhost:8080/auth, Traefik should redirect them to http://myhost.com:8080/auth; that is my requirement. But I'm getting the following error during Traefik startup:
Cannot start the provider *file.Provider: field not found, node: entrypoint
How can I resolve this issue?
The error makes it seem like it's a file provider issue, but I think it's just a typo on your part: it should be entryPoints (uppercase P) in your rules.toml file:
[entryPoints]
  [entryPoints.http]
    address = ":8080"

[providers]
  [providers.file]

[http]
  [http.routers]
    [http.routers.auth-router]
      rule = "Path(`/auth`)"
      service = "auth"
      entryPoints = ["http"]
  [http.services]
    [http.services.auth.loadbalancer]
      [[http.services.auth.loadbalancer.servers]]
        url = "http://myhost.com:8080/auth"
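For what it's worth, the dynamic part of that file can equally be written as YAML if you find the nesting easier to read (the filename `rules.yml` is my assumption; Traefik's file provider accepts either format):

```yaml
# Hypothetical rules.yml equivalent of the dynamic section above.
http:
  routers:
    auth-router:
      rule: "Path(`/auth`)"
      service: auth
      entryPoints:
        - http
  services:
    auth:
      loadBalancer:
        servers:
          - url: "http://myhost.com:8080/auth"
```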

Why does my proxied docker network request work locally but not in production?

I'm working on a project to build a front end for a private/secure docker registry. The way I'm doing this is to use docker-compose to create a network between the front end and the registry. My idea is to use express to serve my site and forward requests from the client to the registry via the docker network.
Locally, everything works perfectly....
However, in production the client doesn't get a response back from the registry. I can log in to the registry and access its API via Postman (for example, the catalog at https://myregistry.net:5000/v2/_catalog). But the client just errors out.
When I go into the express server container and try to curl the endpoint I created to proxy requests, I get this:
curl -vvv http://localhost:3000/api/images
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3000 (#0)
> GET /api/images HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
and the error that's returned includes a _currentUrl of https://username:password@registry:5000/v2/_catalog
my docker-compose file looks like this...
version: '3'
services:
  registry:
    image: registry:2
    container_name: registry
    ports:
      # forward requests to registry.ucdev.net:5000 to 127.0.0.1:443 on the container
      - "5000:443"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_ADDR: 0.0.0.0:443
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
    volumes:
      - /etc/letsencrypt/live/registry.ucdev.net/fullchain.pem:/certs/fullchain.pem
      - /etc/letsencrypt/live/registry.ucdev.net/privkey.pem:/certs/privkey.pem
      - ./auth:/auth
    restart: always
  server:
    image: uc/express
    container_name: registry-server
    ports:
      - "3000:3000"
    volumes:
      - ./:/project
    environment:
      NODE_ENV: production
    restart: always
    entrypoint: ["npm", "run", "production"]
an example of my front end request looks like this...
axios.get('http://localhost:3000/api/images')
  .then((response) => {
    const { data: { registry, repositories } } = response;
    this.setState((state, props) => {
      return { registry, repositories };
    });
  })
  .catch((err) => {
    console.log(`Axios error -> ${err}`);
    console.error(err);
  });
and that request is sent to the express server and then to the registry like this...
app.get('/api/images', async (req, res) => {
  // scheme is either http or https depending on NODE_ENV
  // registry is the name of the container on the docker network
  await axios.get(`${scheme}://registry:5000/v2/_catalog`)
    .then((response) => {
      const { data } = response;
      data.registry = registry;
      res.json(data);
    })
    .catch((err) => {
      console.log('Axios error -> images ', err);
      return err;
    });
});
any help you could offer would be great! thanks!
In this particular case it was an issue related to the firewall the server was behind: requests coming from the Docker containers were being blocked. To solve this problem we had to explicitly set network_mode to bridge, which allowed requests from within the containers to behave correctly. The final docker-compose file looks like this:
version: '3'
services:
  registry:
    image: registry:2
    container_name: registry
    # setting network_mode here and on the server helps the express api calls work correctly on the myregistry.net server.
    # otherwise, the calls fail with 'network unreachable' due to the firewall.
    network_mode: bridge
    ports:
      # forward requests to myregistry.net:5000 to 127.0.0.1:443 on the container
      - "5000:443"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_ADDR: 0.0.0.0:443
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
    volumes:
      - /etc/letsencrypt/live/myregistry.net/fullchain.pem:/certs/fullchain.pem
      - /etc/letsencrypt/live/myregistry.net/privkey.pem:/certs/privkey.pem
      - ./auth:/auth
    restart: always
  server:
    image: uc/express
    container_name: registry-server
    network_mode: bridge
    ports:
      - "3000:3000"
    volumes:
      - ./:/project
    environment:
      NODE_ENV: production
    restart: always
    entrypoint: ["npm", "run", "production"]

Masstransit cannot access host machine RabbitMQ from a docker container

I created a simple .NET Core console application with Docker support. The following MassTransit code fails to connect to the RabbitMQ instance on the host machine, but a similar implementation using RabbitMQ.Client is able to connect to it.
Masstransit throws
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect
failed: ctas@192.168.0.9:5672/ --->
RabbitMQ.Client.Exceptions.BrokerUnreachableException:
Host machine IP: 192.168.0.9
Using MassTransit:
string rabbitMqUri = "rabbitmq://192.168.0.9/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });
    cfg.ReceiveEndpoint(host, assetServiceQueue, e =>
    {
        e.Consumer<AddNewAssetReceivedConsumer>();
    });
});

bus.Start();
Console.WriteLine("Service Running.... Press enter to exit");
Console.ReadLine();
bus.Stop();
Using RabbitMQ Client
public static void Main()
{
    var factory = new ConnectionFactory();
    factory.UserName = "ctas";
    factory.Password = "ctas#123";
    factory.VirtualHost = "watcherindustry";
    factory.HostName = "192.168.0.9";

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "hello",
                             durable: false,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
        };
        channel.BasicConsume(queue: "hello",
                             autoAck: true,
                             consumer: consumer);

        Console.WriteLine(" Press [enter] to exit.");
        Console.ReadLine();
    }
}
Docker file
FROM microsoft/dotnet:1.1-runtime
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "TestClient.dll"]
I created an example, and was able to connect to my host, using the preview package from MassTransit.
Start rabbitmq in docker and expose the ports on the host:
docker run -d -p 5672:5672 -p 15672:15672 --hostname my-rabbit --name some-rabbit rabbitmq:3-management
Build and run the console app:
docker build -t dotnetapp .
docker run -d -e RABBITMQ_URI=rabbitmq://guest:guest@172.17.0.2:5672 --name some-dotnetapp dotnetapp
To verify you're receiving messages, run:
docker logs some-dotnetapp --follow
you should see the following output
Application is starting...
Connecting to rabbitmq://guest:guest@172.17.0.2:5672
Received: Hello, World [08/12/2017 04:35:53]
Received: Hello, World [08/12/2017 04:35:58]
Received: Hello, World [08/12/2017 04:36:03]
Received: Hello, World [08/12/2017 04:36:08]
Received: Hello, World [08/12/2017 04:36:13]
...
Notes:
- 172.17.0.2 was the my-rabbit container's IP address, but you can replace it with your machine's IP address.
- http://localhost:15672 is the RabbitMQ management console; log in with guest as both username and password.
- Lastly, portainer.io is a very useful application for visually inspecting your local Docker environment.
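As a side note, the RABBITMQ_URI value above is an ordinary URI (user info separated from the host by @), so if you are ever unsure whether the credentials/host split is right, you can sanity-check it with java.net.URI. This is a throwaway sketch; the address 172.17.0.2 is just the example value from above:

```java
import java.net.URI;

public class AmqpUriCheck {
    public static void main(String[] args) {
        // The URI from the docker run example above.
        URI uri = URI.create("rabbitmq://guest:guest@172.17.0.2:5672");
        System.out.println(uri.getScheme());   // rabbitmq
        System.out.println(uri.getUserInfo()); // guest:guest
        System.out.println(uri.getHost());     // 172.17.0.2
        System.out.println(uri.getPort());     // 5672
    }
}
```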
Thanks for the response. I managed to resolve this issue. My findings are as follows.
To connect to a RabbitMQ instance on another Docker container, both containers have to be moved/connected to the same network. To do this:
Create a network:
docker network create -d bridge my_bridge
Connect both the app and RabbitMQ containers to that network:
docker network connect my_bridge <container name>
For the MassTransit URI, use the RabbitMQ container's IP on that network, or the container name.
To connect to the host machine's RabbitMQ instance from an app in a Docker container, the MassTransit URI should include the machine name (I tried the IP; that did not work).
Try using the virtual host in your MassTransit configuration too; not sure why you decided to omit it:
var host = cfg.Host("192.168.0.9", "watcherindustry", hst =>
{
    hst.Username(userName);
    hst.Password(password);
});
Look at Alexey Zimarev's comment on your question. If your rabbit runs in a container, then it should be in your docker-compose file, and you should use that entry in your endpoint definition to connect to rabbit, because Docker creates an internal network that keeps your source code agnostic of concrete container addresses...
rabbitmq:
  container_name: "rabbitmq-yournode01"
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=yourusergoeshere
    - RABBITMQ_DEFAULT_PASS=yourpasswordgoeshere
    - RABBITMQ_DEFAULT_VHOST=vhost
  volumes:
    - rabbit-volume:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
In your app settings you should have something like:
"ConnectionString": "host=rabbitmq:5672;virtualHost=vhost;username=yourusergoeshere;password=yourpasswordgoeshere;timeout=0;prefetchcount=1",
And if you'd use EasyNetQ you could do:
_bus = RabbitHutch.CreateBus(_connectionString); // The one above
I hope it helps,
Juan

Unable to configure and run pithos.io using AWS Java SDK

I am trying to configure pithos.io on my server testmbr1.kabuter.com:8081.
Here is how I start pithos.io:
java -jar pithos-0.7.5-standalone.jar -f pithos.yaml
My pithos.yaml:
service:
  host: "0.0.0.0"
  port: 8081

logging:
  level: info
  console: true
  overrides:
    io.pithos: debug

options:
  service-uri: testmbr1.kabuter.com
  default-region: myregion

keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      master: true
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

bucketstore:
  default-region: myregion
  cluster: "45.33.37.148"
  keyspace: storage

regions:
  myregion:
    metastore:
      cluster: "45.33.37.148"
      keyspace: storage
    storage-classes:
      standard:
        cluster: "45.33.37.148"
        keyspace: storage
        max-chunk: "128k"
        max-block-chunk: 1024

cassandra:
  saved_caches_directory: "target/db/saved_caches"
  data_file_directories:
    - "target/db/data"
  commitlog_directory: "target/db/commitlog"
I am using the AWS Java SDK to connect. Below is my JUnit test:
@Test
public void testPithosIO() {
    try {
        ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("S3SignerType");
        EndpointConfiguration endpointConfiguration =
            new EndpointConfiguration("http://testmbr1.kabuter.com:8081", "myregion");
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("AKIAIOSFODNN7EXAMPLE",
            "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withRegion("myregion")
            .withClientConfiguration(config)
            .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
            .withEndpointConfiguration(endpointConfiguration)
            .build();
        s3Client.createBucket("mybucket1");
        System.out.println(s3Client.getRegionName());
        System.out.println(s3Client.listBuckets());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My problems are:
1) I was getting: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to mybucket1.testmbr1.kabuter.com:8081 [mybucket1.testmbr1.kabuter.com/198.105.254.130, mybucket1.testmbr1.kabuter.com/104.239.207.44] failed: connect timed out
This was fixed by adding a mybucket1.testmbr1 CNAME record pointing to testmbr1.kabuter.com.
2) While trying to create a bucket with s3Client.createBucket("mybucket1"), I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d98b7908-d11e-458a-be27-254b136f344a), S3 Extended Request ID: d98b7908-d11e-458a-be27-254b136f344a
How do I get this working? pithos.io seems to have limited documentation. Any pointers?
Since my endpoint was using a non-standard port (http://testmbr1.kabuter.com:8081), I had to include the port in the service-uri in pithos.yaml as well:
service-uri: testmbr1.kabuter.com:8081