How to run a NATS WebSocket server on localhost - SSL

I have a NATS server on localhost (or in local Docker) with WebSocket configuration. NATS requires TLS for WebSocket. I created a cert and key, but on connect it throws an error: TLS handshake error from 127.0.0.1:61732: remote error: tls: unknown certificate authority. What should I do?

For a test environment, you need to create a conf file like this:
# ws.conf
websocket {
    # Specify a host and port to listen for websocket connections
    #
    # listen: "host:port"

    # It can also be configured with individual parameters,
    # namely host and port.
    #
    # host: "hostname"
    port: 443

    # For test environments, you can disable the need for TLS
    # by explicitly setting this option to `true`
    #
    no_tls: true
}
I called it ws.conf here, but you can name it however you wish. Note that I explicitly set no_tls to true and didn't pass any certificate.
Docker way
After that you need to create a Dockerfile. Since the entrypoint of the nats image is the nats-server executable, we can do this:
FROM nats:alpine
COPY ws.conf ws.conf
CMD [ "-c", "ws.conf" ]
Note that in CMD I'm passing some params; let me explain:
-c is a shortcut for the --config option of nats-server
ws.conf is the config file we created
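To build and run the image directly (the tag nats-ws is just an example name):
docker build -t nats-ws .
docker run -p 443:443 -p 4222:4222 nats-ws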
Alternatively, you can create a docker-compose.yml to do it for you, something like this:
version: "3.8"
services:
nats-ws:
dockerfile: Dockerfile
ports:
- 443:443
- 4222:4222
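Then bring it up with (on older Docker installs the command is docker-compose up -d):
docker compose up -d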
Standalone way
After creating ws.conf, start nats-server with the same params as in the Docker way:
nats-server -c path/to/ws.conf
That's it. If everything is right, you will see messages like these in the NATS logs / stdout:
[1] 2021/04/25 18:29:52.854288 [INF] Starting nats-server
[1] 2021/04/25 18:29:52.854402 [INF] Version: 2.2.2
[1] 2021/04/25 18:29:52.854407 [INF] Git: [a5f3aab]
[1] 2021/04/25 18:29:52.854411 [INF] Name: NCXFZRJ6L2PYQM6DND5YRKZA5OMQBQTFGH7ATZULU3US7KBWQDU4RKE3
[1] 2021/04/25 18:29:52.854416 [INF] ID: NCXFZRJ6L2PYQM6DND5YRKZA5OMQBQTFGH7ATZULU3US7KBWQDU4RKE3
[1] 2021/04/25 18:29:52.854421 [INF] Using configuration file: ws.conf
[1] 2021/04/25 18:29:52.858602 [INF] Listening for websocket clients on ws://0.0.0.0:443
[1] 2021/04/25 18:29:52.858643 [WRN] Websocket not configured with TLS. DO NOT USE IN PRODUCTION!
[1] 2021/04/25 18:29:52.859209 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2021/04/25 18:29:52.859696 [INF] Server is ready
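As a quick sanity check, assuming you have a generic websocket client such as wscat installed, you can connect to the listener; since no_tls is set, a plain ws:// handshake should succeed (and you may see the server's INFO protocol message come back):
wscat -c ws://localhost:443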
Refs
nats flags
websocket config file

Related

Confluent REST proxy API SSL handshake fails

I have a Kafka cluster on Docker using Confluent images, and I am using docker-compose to build the containers.
When I try to run the container, it starts but can't communicate with any broker due to SSL handshake failed. I don't know if I'm missing some configuration:
[kafka-admin-client-thread | adminclient-1] ERROR org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -3 (/XXX:19092) failed authentication due to: SSL handshake failed
My Kafka brokers are configured as follows:
kafka1:
  image: confluentinc/cp-kafka:5.2.2
  container_name: kafka1
  ports:
    - "19092:19092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_ADVERTISED_LISTENERS: SSL://XXXX:19092
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker1.keystore.jks
    KAFKA_SSL_KEYSTORE_CREDENTIALS: broker1_keystore_creds
    KAFKA_SSL_KEY_CREDENTIALS: broker1_sslkey_creds
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker1.truststore.jks
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker1_truststore_creds
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
    KAFKA_SECURITY_PROTOCOL: SSL
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I am trying to bring up a Confluent REST Proxy API in another container using this configuration:
kafka-rest-proxy:
  image: confluentinc/cp-kafka-rest:5.2.2
  hostname: kafka-rest-proxy
  ports:
    - "18082:18082"
  environment:
    KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
    KAFKA_REST_ZOOKEEPER_CONNECT: XXX:12181,XXX:12181,XXX:12181
    KAFKA_REST_HOST_NAME: kafka-rest-proxy
    KAFKA_REST_BOOTSTRAP_SERVERS: SSL://XXX:19092,SSL://XXX:19092,SSL://XXX:19092
    KAFKA_REST_CLIENT_SECURITY_PROTOCOL: SSL
    KAFKA_REST_CLIENT_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.keystore.jks
    KAFKA_REST_CLIENT_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.truststore.jks
    KAFKA_REST_CLIENT_SSL_TRUSTSTORE_PASSWORD: XXX
    KAFKA_REST_CLIENT_SSL_KEY_PASSWORD: XXX
    KAFKA_REST_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.keystore.jks
    KAFKA_REST_SSL_KEYSTORE_PASSWORD: XXX
    KAFKA_REST_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.producer.truststore.jks
    KAFKA_REST_SSL_TRUSTSTORE_PASSWORD: XXX
  volumes:
    - ./../../secrets:/etc/kafka/secrets
I configured the SSL connection only with the truststore (I removed the keystore config completely) and used the OPTS environment variable:
docker run -d \
--name krp \
-p 8082:8082 \
...
-v /home/ubuntu/kafka-keys:/kafka-keys \
-e KAFKA_REST_CLIENT_OPTS="-Dssl.keystore.location=/kafka-keys/kafka.client.keystore.jks -Dssl.keystore.password=changeit -Dssl.truststore.location=/kafka-keys/kafka.client.truststore.jks" \
confluentinc/cp-kafka-rest:5.3.1
And the connection worked.
In my case (Kubernetes with Helm) I had to change "listeners":"http://0.0.0.0:8082" to "listeners":"https://0.0.0.0:8082".
I see the same mistake in your configuration:
KAFKA_REST_LISTENERS: "http://0.0.0.0:18082"
After that change, you will see at the end of the startup logs that it tries to load the keystore file.
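In other words, the listener line in the compose file above would become:
KAFKA_REST_LISTENERS: "https://0.0.0.0:18082"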

502 Proxy Error (docker + traefik + apache)

I'm trying to set up Traefik for SSL termination on my local development instance. Following this guide, I have the configuration below.
docker-compose.yml
version: '2.1'
services:
  mariadb:
    image: wodby/mariadb:10.2-3.0.2
    healthcheck:
      test: "/usr/bin/mysql --user=dummyuser --password=dummypasswd --execute \"SHOW DATABASES;\" | grep database"
      interval: 3s
      timeout: 1s
      retries: 5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: dummy
      MYSQL_DATABASE: database
    volumes:
      - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
      - mysql:/var/lib/mysql # I want to manage volumes manually.
  php:
    depends_on:
      mariadb:
        condition: service_healthy
    ports:
      - "25:25"
      - "587:587"
    environment:
      PHP_FPM_CLEAR_ENV: "no"
      DB_HOST: mariadb
      #DB_USER: dummy
      DB_PASSWORD: dummypasswd
      DB_NAME: database
      DB_DRIVER: mysql
      PHP_POST_MAX_SIZE: "256M"
      PHP_UPLOAD_MAX_FILESIZE: "256M"
      PHP_MAX_EXECUTION_TIME: 300
    volumes:
      - codebase:/var/www/html/
      - private:/var/www/html/private
  solr:
    image: mxr576/apachesolr-4.x-drupal-docker
    ports:
      - "8983:8983"
    labels:
      - 'traefik.backend=solr'
      - 'traefik.port=8983'
      # - 'traefik.frontend.rule=Host:192.168.33.10'
    volumes:
      - solr:/opt/solr/example/solr/collection1/data
    restart: always
  portainer:
    image: portainer/portainer
    command: --no-auth -H unix:///var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - 'traefik.backend=portainer'
      - 'traefik.port=9000'
    restart: always
  apache:
    image: wodby/php-apache:2.4-2.0.2
    # ports:
    #   - "80:80"
    depends_on:
      - php
    environment:
      APACHE_LOG_LEVEL: warn
      APACHE_BACKEND_HOST: php
      APACHE_SERVER_ROOT: /var/www/html/drupal
    volumes:
      - codebase:/var/www/html/
      - private:/var/www/html/private
    labels:
      - 'traefik.backend=apache'
      - 'traefik.docker.network=proxy'
      - "traefik.frontend.rule=Host:127.0.0.1"
      - "traefik.enable=true"
      - "traefik.port=80"
      - "traefik.default.protocol=http"
    restart: always
    networks:
      - proxy
  traefik:
    image: traefik
    command: -c /traefik.toml --web --docker --logLevel=INFO
    ports:
      - '80:80'
      - '443:443'
      - '8888:8080' # Dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /codebase/traefik.toml:/traefik.toml
      - /codebase/certs/cert.crt:/cert.crt
      - /codebase/certs/cert.key:/cert.key

volumes:
  solr:
    external: true
  mysql:
    external: true
  codebase:
    external: true
  private:
    external: true

networks:
  proxy:
    external: true
traefik.toml
logLevel = "DEBUG" # <---
defaultEntryPoints = ["https", "http"] # <---
[accessLog]
[traefikLog]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "/cert.crt"
keyFile = "/cert.key"
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedbydefault = false
When trying to verify the instance, I get a 502 Bad Gateway
curl -i -k https://127.0.0.1
HTTP/1.1 502 Bad Gateway
Content-Length: 392
Content-Type: text/html; charset=iso-8859-1
Date: Fri, 14 Sep 2018 16:34:36 GMT
Server: Apache/2.4.29 (Unix) LibreSSL/2.5.5
X-Content-Type-Options: nosniff
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>502 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
<p>The proxy server received an invalid
response from an upstream server.<br />
The proxy server could not handle the request <em>GET /index.php</em>.<p>
Reason: <strong>DNS lookup failure for: php</strong></p></p>
</body></html>
A reset of docker-compose and the Docker network didn't help.
I've checked the issue on their repo, and it seems like nobody got a definitive solution. Does anybody have an idea how to solve this?
Edit: updated with the full docker-compose file.
You are trying to connect to the php container from the apache service using service discovery, but the php container is not attached to the proxy network because you haven't declared a network for it. The same is the case with mariadb. So when you connect to apache/traefik, they look for the host php, which is not attached to the proxy network, and throw a 502.
Docker containers will not be connected to an external network unless you specify it. Hence, you have to specify the network as follows for all the services in order to make Docker service discovery work properly:
networks:
  - proxy
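For example, the php service from the compose file above would become (only the relevant keys shown):
php:
  depends_on:
    mariadb:
      condition: service_healthy
  networks:
    - proxy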
Bonus:
Since you have done port mapping, you can also use the public IP of your host machine followed by the port to connect to services, both from other Docker containers and from outside.
Example:
Let us assume your IP is 192.168.0.123. Then you can connect to php from any service in a Docker container, and even from outside Docker, as 192.168.0.123:25 and 192.168.0.123:587. This works because you have exposed container ports 25 and 587 by mapping them to host ports 25 and 587.
Some references:
Docker networking
Networking using the host network
Connect a container to a user-defined bridge
Networking with standalone containers
Service discovery
Networking in Compose (check "Specify custom networks" section)

Hyperledger Fabric-ca connection to an LDAP directory

We are implementing a Hyperledger Fabric solution. To do so, we set up a Fabric CA using the minimal configuration (we are still trying to figure out how things work) in a dedicated Docker container.
As we need to log in our users with an email/password pair, we set up an LDAP component. We chose OpenLDAP, using the osixia/openldap implementation in a separate Docker container.
We set the parameters in fabric-ca-server-config.yaml to connect Fabric CA to the LDAP server. At the start of both containers, the logs seem fine:
Successfully initialized LDAP client
When we carry on with the Fabric CA tutorial, we fail at the command:
fabric-ca-client enroll -u http://cn=admin,dc=example:admin#localhost:7054
The result is :
[INFO] 127.0.0.1:46244 POST /enroll 401 23 "Failed to get user: Failed to connect to LDAP server over TCP at localhost:389: LDAP Result Code 200 "": dial tcp 127.0.0.1:389: connect: connection refused"
The LDAP server is set up and functioning correctly: it responds when solicited via the CLI and via phpLDAPadmin (an LDAP browser) using the same credentials.
This is a bit of the fabric-ca-server-config.yaml:
ldap:
  enabled: true
  url: ldap://cn=admin,dc=example:admin#localhost:389/dc=example
  userfilter: (uid=%s)
  tls:
    enabled: false
    certfiles:
    client:
      certfile: noclientcert
      keyfile:
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name: example
          value: peer
Can anyone help?
Thanks for reading.
I see two issues here:
The first is related to Docker rather than Fabric CA. You have to set network_mode to host to remove the network isolation between the container and the Docker host; then your container will see the OpenLDAP server located on the Docker host.
Please look at this sample docker-compose.yaml file:
version: '2'
services:
  fabric-ca-server:
    image: hyperledger/fabric-ca:1.1.0
    container_name: fabric-ca-server
    ports:
      - "7054:7054"
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    volumes:
      - ./fabric-ca-server:/etc/hyperledger/fabric-ca-server
    command: sh -c 'fabric-ca-server start'
    network_mode: host
You can find more about Docker networking here: https://docs.docker.com/network/
Once the network issue is resolved, you also have to modify userfilter to match the admin prefix, so it should look like this: userfilter: (cn=%s). If userfilter is not fixed, you will get a message saying that admin cannot be found in LDAP.
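In other words, the ldap block from the question would change only in the filter line, something like:
ldap:
  enabled: true
  url: ldap://cn=admin,dc=example:admin#localhost:389/dc=example
  userfilter: (cn=%s)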
I am not using a local LDAP server; instead I am using the online one for a quick test:
http://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/
However, I am still getting the error as well.
My fabric-ca-server-config.yaml is
ldap:
  enabled: true
  url: ldap://cn=read-only-admin,dc=example,dc=com:password#ldap.forumsys.com:389/dc=example,dc=com
  tls:
    certfiles:
    client:
      certfile:
      keyfile:
  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name:
          value:
And I run it with:
fabric-ca-server start -c fabric-ca-server-config.yaml
I see the log line:
Successfully initialized LDAP client
Here is the screenshot for phpLDAPAdmin:
I am using the same script for testing:
$fabric-ca-client enroll -u http://cn=read-only-admin,dc=example,dc=com:password#localhost:7054
$fabric-ca-client enroll -u http://uid=tesla,dc=example,dc=com:password#localhost:7054
But it's still no good; I get something like:
POST /enroll 401 23 "Failed to get user: User 'uid=tesla,dc=example,dc=com' does not exist in LDAP directory"

Routing to different container in docker using zuul not working

I have 2 microservices (Spring Boot apps) running in different Docker containers, configured behind a Zuul API gateway, and routing to the other container is not working. Container 1 is running on port 8030 and container 2 is running on port 8030.
Below is the Zuul configuration in application.yml:
server:
  port: 8030

# TODO: figure out why I need this here and in bootstrap.yml
spring:
  application:
    name: zuul server

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  routes:
    zuultest:
      url: http://localhost:8080
      stripPrefix: false

ribbon:
  eureka:
    enabled: false
When accessing localhost:8030/zuultest/test I get this exception:
2016-09-19 09:10:14.597 INFO 1 --- [nio-8030-exec-3] hello.SimpleFilter : GET request to http://localhost:8030/zuultest/test
2016-09-19 09:10:14.600 WARN 1 --- [nio-8030-exec-3] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
Can I know why I am getting this?
You can use the links option in docker-compose.yml to link the two containers:
demo1:
  image: <demo1 image name>
  links:
    - demo2

demo2:
  image: <demo2 image name>
Then in the zuul:routes:url configuration you can use the container name, demo2, instead of its IP.
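For example, assuming demo2 listens on port 8080 inside its container, the route from the question would become:
zuul:
  routes:
    zuultest:
      url: http://demo2:8080
      stripPrefix: false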
How did you start the 2 containers? Both cannot be exposed on the same port on the Docker host:
docker run --name service-a -p 8030:8030 ...
docker run --name service-b -p 8031:8030 ...
Without this, if you are calling localhost:8030, you are calling the host (not the container), and you are not getting a response.
You need to map the containers to different host ports when you start them, and call them via localhost with the right exposed port.

Redis in docker-compose: any way to specify a redis.conf file?

my Redis container is defined as a standard image in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379"
I guess it's using standard settings, like binding Redis to localhost.
I need to bind it to 0.0.0.0. Is there any way to add a local redis.conf file to change the binding, and have docker-compose use it?
Thanks for any trick...
Yes. Just mount your redis.conf over the default with a volume:
redis:
  image: redis
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379"
Alternatively, create a new image based on the redis image with your conf file copied in. Full instructions are at: https://registry.hub.docker.com/_/redis/
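A minimal sketch of that custom-image approach (assuming your redis.conf sits next to the Dockerfile) could look like this:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]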
However, the redis image does bind to 0.0.0.0 by default. To access it from the host, you need to use the port that Docker has mapped to the host for you, which you can find with docker ps or the docker port command; you can then access it at localhost:32678, where 32678 is the mapped port. Alternatively, you can specify a fixed port to map to in docker-compose.yml.
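For example (the container name and the mapped port will differ on your machine):
docker port redis 6379          # prints something like 0.0.0.0:32768
redis-cli -p 32768 ping         # should reply PONG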
As you seem to be new to Docker, this might all make a bit more sense if you start by using raw Docker commands rather than starting with Compose.
Old question, but if someone still wants to do this, it is possible with volumes and command:
command: redis-server /usr/local/etc/redis/redis.conf
volumes:
  - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
Unfortunately with Docker, things become a little tricky when it comes to the Redis configuration file, and the answer voted as best (surely by people who didn't actually test it) DOESN'T work.
But what DOES work, fast and without hassle, is this:
command: redis-server --bind redis-container-name --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
You can pass all the variable options you want in the command section of the compose file by adding "--" in front of the option name, followed by its value.
Never forget to set a password, and if possible close port 6379.
Thank me later.
PS: If you noticed, in the command I didn't use the typical 127.0.0.1 but instead the Redis container name. This is done because Docker assigns IP addresses internally via its embedded DNS server; in other words, this bind address becomes dynamic, hence adding an extra layer of security.
If your Redis container is called "redis" and you execute docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis (to verify the running container's internal IP address), then as far as Docker is concerned, the command given in the compose file will be translated internally to something like: redis-server --bind 172.19.0.5 --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
Based on David's answer, but a more "Docker Compose" way is:
redis:
  image: redis:alpine
  command: redis-server --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
That way, you include the .conf file from the docker-compose.yml file and don't need a custom image.
Mount your config at /usr/local/etc/redis/redis.conf and add a command to execute redis-server with your config:
redis:
  image: redis:7.0.4-alpine
  restart: unless-stopped
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  command: redis-server /usr/local/etc/redis/redis.conf
  ########################################
  # or use this command if the mount does not work
  ########################################
  command: >
    redis-server --bind 127.0.0.1
    --appendonly no
    --save ""
    --protected-mode yes
It is an old question, but I have a solution that seems elegant and saves me executing commands every time ;).
1. Create your Dockerfile like this:
#/bin/redis/Dockerfile
FROM redis
CMD ["redis-server", "--include", "/usr/local/etc/redis/redis.conf"]
What we are doing is telling the server to include that file in the Redis configuration. The settings you put there will override Redis's defaults.
2. Create your docker-compose:
redisall:
  build:
    context: ./bin/redis
  container_name: 'redisAll'
  restart: unless-stopped
  ports:
    - "6379:6379"
  volumes:
    - ./config/redis:/usr/local/etc/redis
3. Create your configuration file; it has to have the name referenced in the Dockerfile (redis.conf):
//config/redis/redis.conf
requirepass some-long-password
appendonly yes
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# ...plus any other settings you want to specify;
# what you put here overrides the defaults the container ships with.
I had the same problem when using Redis in a Docker environment: Redis could not save data to disk in dump.rdb.
The problem was that Redis could not read the configuration in redis.conf. I solved it by passing the required configuration with the command option in docker-compose, as below:
redis19:
  image: redis:5.0
  restart: always
  container_name: redis19
  hostname: redis19
  command: redis-server --requirepass some-secret --stop-writes-on-bgsave-error no --save 900 1 --save 300 10 --save 60 10000
  volumes:
    - $PWD/redis/redis_data:/data
    - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
    - /etc/localtime:/etc/localtime:ro
and it works fine.
I think it will be helpful, so I am sharing the working config from my local setup:
redis:
  container_name: redis
  hostname: redis
  image: redis
  command: >
    --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"