LDAP modify cn=config results in "no global superior knowledge" - ldap

I'm trying to enable bind_v2 via an LDIF file on a fresh bitnami openldap Docker container.
However, this does not work and results in the following error:
ldap_modify: Server is unwilling to perform (53)
additional info: no global superior knowledge
I'm using this command to apply the changes:
ldapmodify -H ldap://localhost:1389 -D cn=admin,dc=example,dc=com -w markus -f add_olcAllows_bind_v2.ldif
add_olcAllows_bind_v2.ldif
dn: cn=config
add: olcAllows
olcAllows: bind_v2
docker-compose.yml:
version: '2'
services:
  openldap:
    # build: .
    image: docker.io/bitnami/openldap:2.5
    ports:
      - '1389:1389'
      - '1636:1636'
    environment:
      - LDAP_ADMIN_USERNAME=admin
      - LDAP_ADMIN_PASSWORD=markus
      - LDAP_USERS=example-ldap
      - LDAP_PASSWORDS=master
      - LDAP_ROOT=dc=example,dc=com
    volumes:
      - 'openldap_data:/bitnami/openldap'
volumes:
  openldap_data:
    driver: local
Most answers I could find note that a database must be configured/added, but shouldn't cn=config exist by default?
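For what it's worth, a sketch of one likely fix, with two assumptions flagged: (1) the LDIF wants an explicit changetype: modify line, and (2) modifying cn=config requires binding as the config database's own admin (cn=admin,cn=config), which the bitnami image is assumed to enable via LDAP_CONFIG_ADMIN_ENABLED=yes and LDAP_CONFIG_ADMIN_PASSWORD in the container environment; the data admin cn=admin,dc=example,dc=com has no rights over cn=config.

```shell
# Sketch; assumes the container was started with
#   LDAP_CONFIG_ADMIN_ENABLED=yes
#   LDAP_CONFIG_ADMIN_PASSWORD=configpassword
# added to the environment: list, which creates cn=admin,cn=config.
cat > add_olcAllows_bind_v2.ldif <<'EOF'
dn: cn=config
changetype: modify
add: olcAllows
olcAllows: bind_v2
EOF

# Bind as the config admin, not the data admin:
if command -v ldapmodify >/dev/null; then
  ldapmodify -H ldap://localhost:1389 \
    -D cn=admin,cn=config -w configpassword \
    -f add_olcAllows_bind_v2.ldif || true
fi
```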


Set Host IP as Docker Environment Variable

Problem:
I have a mid-sized team of student developers working on a wide variety of devices. We're using expo-go to live-preview our development progress on our mobile phones. We need to update the REACT_NATIVE_PACKAGER_HOSTNAME environment variable in the docker-compose.yml file to the public ip of our devices. Right now this is being done manually, which is a minor time sink and doesn't meet our "business requirements".
This project is being run inside a VSCode Devcontainer.
Docker:
Docker Desktop - v4.12.0
Docker Engine - v20.10.17
Note:
We are junior developers with little to no experience in Docker, please answer thoroughly and forgive any "basic" mistakes we've made.
docker-compose.yml:
version: '3.8'
services:
  expo:
    build:
      context: ../App/frontend
      args:
        - NODE_ENV=development
    environment:
      - NODE_ENV=development
      - EXPO_DEVTOOLS_LISTEN_ADDRESS=0.0.0.0
      - REACT_NATIVE_PACKAGER_HOSTNAME=<YOUR IP>
    tty: true
    ports:
      - '19006:19006'
      - '19001:19001'
      - '19002:19002'
      - '19000:19000'
    volumes:
      - ../App:/opt/App:delegated
      - ../App/frontend/package.json:/opt/App/frontend/package.json
      - ../App/frontend/package-lock.json:/opt/App/frontend/package-lock.json
      - notused:/opt/App/frontend/node_modules
    healthcheck:
      disable: true
  apps:
    build:
      context: ../App/backend
      dockerfile: Dockerfile.app.prod
    volumes:
      - ../App/backend/src:/home/App/backend
    depends_on:
      - database
      - expo
  database:
    image: postgres:latest
    environment:
      POSTGRES_USER: *****
      POSTGRES_PASSWORD: *****
      POSTGRES_DB: *****
    expose:
      - '5432'
    ports:
      - 5433:5432
volumes:
  notused:
What We've Tried
Makefile that sets environment variable on host machine before devcontainer starts
Couldn't figure out how to start devcontainer from makefile/command line
Running script/command from inside docker to identify host IP
Unable to get IP Address stored in variable, or get host IP
Using docker variables (e.g. host.docker.internal)
Results in exp://host.docker.internal:19000 invalid IP address.
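One workaround sketch (assuming a Linux host, since hostname -I is Linux-only; the script name and the fallback value are ours, not part of any tool): write the host's primary IP into a .env file before docker-compose up, and let compose interpolate it.

```shell
# set-packager-host.sh (hypothetical helper): writes the host's primary
# IP into .env, which docker-compose reads automatically from the
# project directory; falls back to 127.0.0.1 when no IP is found.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}
echo "REACT_NATIVE_PACKAGER_HOSTNAME=${HOST_IP}" > .env
cat .env
```

docker-compose.yml would then use - REACT_NATIVE_PACKAGER_HOSTNAME=${REACT_NATIVE_PACKAGER_HOSTNAME} instead of the hard-coded <YOUR IP>.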

Influxdb 2.0 Backup Fails with Read:authorizations is unauthorized

I am trying to back up InfluxDB using the backup command as stated in the v2.0 docs.
https://docs.influxdata.com/influxdb/v2.0/backup-restore/backup/
docker-compose.yaml
version: '3'
services:
  influxdb:
    image: influxdb:2.0
    ports:
      - 8086:8086
    volumes:
      - influxdb-data:/var/lib/influxdb2
      - /influxdb-backup/:/influxdb-backup/
    restart: always
volumes:
  influxdb-data:
    external: true
The backup command fails as follows.
docker-compose exec influxdb influx backup influxdb-backup -t xxx
2021-04-22T22:14:06.631863Z info Backing up KV store {"log_id": "0TfvCY9G000", "path": "20210422T221406Z.bolt"}
Error: Read:authorizations is unauthorized.
See 'influx backup -h' for help
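A hedged sketch of the usual fix: influx backup dumps the KV store (including authorizations), so it needs the operator (all-access) token created by influx setup, not a bucket-scoped token. The token value below is a placeholder, and the target path matches the bind mount from the compose file above.

```shell
# Placeholder: the operator (all-access) token printed by `influx setup`.
INFLUX_OPERATOR_TOKEN="replace-with-operator-token"
# Back up into the bind-mounted /influxdb-backup/ from the compose file.
BACKUP_CMD="influx backup /influxdb-backup -t ${INFLUX_OPERATOR_TOKEN}"
if command -v docker-compose >/dev/null; then
  docker-compose exec influxdb ${BACKUP_CMD} || true
else
  echo "would run: docker-compose exec influxdb ${BACKUP_CMD}"
fi
```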

Running multiple docker-compose files with nginx reverse proxy

I asked a question here and got part of my problem solved, but I was advised to create another question because it started to get a bit lengthy in the comments.
I'm trying to use docker to run multiple PHP,MySQL & Apache based apps on my Mac, all of which would use different docker-compose.yml files (more details in the post I linked). I have quite a few repositories, some of which communicate with one another, and not all of them are the same PHP version. Because of this, I don't think it's wise for me to cram 20+ separate repositories into one single docker-compose.yml file. I'd like to have separate docker-compose.yml files for each repository and I want to be able to use an /etc/hosts entry for each app so that I don't have to specify the port. Ex: I would access 2 different repositories such as http://dockertest.com and http://dockertest2.com (using /etc/hosts entries), rather than having to specify the port like http://dockertest.com:8080 and http://dockertest.com:8081.
Using the accepted answer from my other post I was able to get one app running at a time (one docker-compose.yml file), but if I try to launch another with docker-compose up -d it results in an error because port 80 is already taken. How can I run multiple docker apps at the same time, each with their own docker-compose.yml file and without having to specify the port in the URL?
Here's a docker-compose.yml file for the app I made. In my /etc/hosts I have 127.0.0.1 dockertest.com
version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    networks:
      - backend
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:
I would suggest extracting the nginx-proxy into a separate docker-compose.yml and creating a repository for the "reverse proxy" configuration with the following:
A file with extra contents to add to /etc/hosts
127.0.0.1 dockertest.com
127.0.0.1 anothertest.com
127.0.0.1 third-domain.net
And a docker-compose.yml which will have only the reverse proxy
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
Next, as you already mentioned, create a docker-compose.yml for each of your repositories that act as web endpoints. You will need to add VIRTUAL_HOST env var to the services that serve your applications (eg. Apache).
The nginx-proxy container can run in "permanent mode", as it has a small footprint. This way whenever you start a new container with VIRTUAL_HOST env var, the configuration of nginx-proxy will be automatically updated to include the new local domain. (You will still have to update /etc/hosts with the new entry).
If you decide to use networks, your web endpoint containers will have to be in the same network as nginx-proxy, so your docker-compose files will have to be modified similar to this:
# nginx-proxy/docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    networks:
      - reverse-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  reverse-proxy:
# service1/docker-compose.yml
version: "3.3"
services:
  php1:
    ...
    networks:
      - backend1
  apache1:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend1
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql1:
    ...
    networks:
      - backend1
networks:
  backend1:
  nginx-proxy_reverse-proxy:
    external: true
# service2/docker-compose.yml
version: "3.3"
services:
  php2:
    ...
    networks:
      - backend2
  apache2:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend2
    environment:
      - VIRTUAL_HOST=anothertest.com
  mysql2:
    ...
    networks:
      - backend2
networks:
  backend2:
  nginx-proxy_reverse-proxy:
    external: true
The reverse-proxy network that is created in nginx-proxy/docker-compose.yml is referred to as nginx-proxy_reverse-proxy in the other docker-compose files, because whenever you define a network its final name will be {{folder name}}_{{network name}}.
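If the folder-derived prefix is a nuisance, compose file format 3.5 and later lets you pin the network name explicitly, so every stack can refer to it as plain reverse-proxy (a sketch assuming format 3.5+):

```yaml
# nginx-proxy/docker-compose.yml
version: "3.5"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    networks:
      - reverse-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  reverse-proxy:
    name: reverse-proxy   # fixed name, no folder prefix

# service1/docker-compose.yml would then declare:
# networks:
#   reverse-proxy:
#     external: true
#     name: reverse-proxy
```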
If you want to have a look at a solution that relies on browser proxy extension instead of /etc/hosts, check out mitm-proxy-nginx-companion

Hyperledger Fabric-ca connection to a LDAP directory

We are implementing a Hyperledger Fabric solution. To do so, we set up a fabric-ca, using the minimal configuration (we are still trying to figure out how things work), in a dedicated Docker container.
As we need to log in our users using an email/password pair, we set up an LDAP component. We chose OpenLDAP, using the osixia/openldap image in a different container.
We set the parameters in fabric-ca-server-config.yaml to connect Fabric CA to the LDAP server. At the start of both containers, the logs seem fine:
Successfully initialized LDAP client
When we carry on with the Fabric-CA tutorial, we fail at this command:
fabric-ca-client enroll -u http://cn=admin,dc=example:admin#localhost:7054
The result is:
[INFO] 127.0.0.1:46244 POST /enroll 401 23 "Failed to get user: Failed to connect to LDAP server over TCP at localhost:389: LDAP Result Code 200 "": dial tcp 127.0.0.1:389: connect: connection refused"
The LDAP server is set up and functioning correctly when queried via the CLI and via phpLDAPadmin, an LDAP browser, using the same credentials.
This is part of the fabric-ca-server-config.yaml:
ldap:
  enabled: true
  url: ldap://cn=admin,dc=example:admin#localhost:389/dc=example
  userfilter: (uid=%s)
  tls:
    enabled: false
    certfiles:
    client:
      certfile: noclientcert
      keyfile:
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name: example
          value: peer
Can anyone help?
Thanks for reading.
I see two issues here:
The first is related more to Docker than to fabric-ca. You have to set network_mode to host to remove the network isolation between the container and the Docker host. Then your Docker container will see the OpenLDAP server located on the Docker host.
Please look at this sample docker-compose.yaml file:
version: '2'
services:
  fabric-ca-server:
    image: hyperledger/fabric-ca:1.1.0
    container_name: fabric-ca-server
    ports:
      - "7054:7054"
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    volumes:
      - ./fabric-ca-server:/etc/hyperledger/fabric-ca-server
    command: sh -c 'fabric-ca-server start'
    network_mode: host
You can find more about Docker networking here: https://docs.docker.com/network/
Once the network issue is resolved, you also have to modify userfilter to match the admin prefix, so it should look like this: userfilter: (cn=%s). If userfilter is not fixed, you will get a message that admin cannot be found in LDAP.
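For clarity, the relevant part of fabric-ca-server-config.yaml after both fixes could look like this (a sketch; the separator before the host in the URL should be @, assuming the # in the question was just masking):

```yaml
ldap:
  enabled: true
  # bind-DN:password@host:port/base-DN
  url: ldap://cn=admin,dc=example:admin@localhost:389/dc=example
  userfilter: (cn=%s)   # was (uid=%s); the admin entry is found by cn
```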
I am not using a local LDAP server; instead I am using this online one for a quick test:
http://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/
However I am still getting the error as well.
My fabric-ca-server-config.yaml is
ldap:
  enabled: true
  url: ldap://cn=read-only-admin,dc=example,dc=com:password#ldap.forumsys.com:389/dc=example,dc=com
  tls:
    certfiles:
    client:
      certfile:
      keyfile:
  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name:
          value:
And I run it by:
fabric-ca-server start -c fabric-ca-server-config.yaml
I saw logs:
Successfully initialized LDAP client
Here is the screenshot for phpLDAPAdmin:
I am using the same script for testing:
$fabric-ca-client enroll -u http://cn=read-only-admin,dc=example,dc=com:password#localhost:7054
$fabric-ca-client enroll -u http://uid=tesla,dc=example,dc=com:password#localhost:7054
But it still fails, with something like:
POST /enroll 401 23 "Failed to get user: User 'uid=tesla,dc=example,dc=com' does not exist in LDAP directory"
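One more thing worth checking (an assumption based on how fabric-ca substitutes the enrollment ID into userfilter): the enroll URL should carry the bare user ID, not a full DN, because fabric-ca plugs whatever ID you pass into the (uid=%s) filter itself.

```shell
# Hedged sketch: with userfilter (uid=%s), pass just the uid; the DN is
# resolved by the LDAP search, not by the enroll URL.
ENROLL_URL="http://tesla:password@localhost:7054"
if command -v fabric-ca-client >/dev/null; then
  fabric-ca-client enroll -u "$ENROLL_URL" || true
else
  echo "would run: fabric-ca-client enroll -u $ENROLL_URL"
fi
```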

Connection refused in Docker containers communicating through exposed ports

Hi, I need to connect three Docker containers so that they can work together. I call these three containers:
container 1 - pga (Apache web server on port 80)
container 2 - server (Apache Airavata server on port 8930)
container 3 - rabbit (RabbitMQ on port 5672)
I have started RabbitMQ (container 3) as:
docker run -i -d --name rabbit -p 15672:15672 -t rabbitmq:3-management
I have started server (container 2) as
docker run -i -d --name server --link rabbit:rabbit --expose 8930 -t airavata_server /bin/bash
Now from inside server (container 2) I can access rabbit (container 3) at port 5672. When I try
nc -zv container_3_port 5672
it says the connection was successful.
Up to this point I am happy with the Docker connection through the link.
Now I have created another container, pga (container 1), as:
docker run -i -d --name pga --link server:server -p 8080:80 -t psaha4/airavata_pga /bin/bash
Now from inside the new pga container, when I try to access the service of server (container 2), it says connection refused.
I have verified from inside the server container that the service is running on port 8930, and it was exposed while creating the container, but it still refuses connections from the other containers to which it is linked.
I could not find a similar situation described anywhere and I am also clueless about how to debug it. Please help me find a way out.
The output of the command docker exec server lsof -i :8930:
exec: "lsof": executable file not found in $PATH
Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
Error starting exec command in container fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2: Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
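Since lsof is not in the image, the listen address can still be checked via the always-present /proc/net/tcp (a generic debugging sketch, not specific to Airavata): run docker exec server cat /proc/net/tcp and decode the hex local_address column. A server bound to 127.0.0.1 inside the container is exactly what produces "connection refused" for linked containers.

```shell
# local_address in /proc/net/tcp is HEXIP:HEXPORT, e.g.:
#   00000000:22E2 -> 0.0.0.0:8930   (reachable over the link)
#   0100007F:22E2 -> 127.0.0.1:8930 (container-local only)
printf '%d\n' 0x22E2   # decodes the hex port: prints 8930
```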
NOTE: Intend to expand on this but my kid's just been sick. Will address debugging issue from question when I get a chance.
You may find it easier to use docker-compose for this as it lets you run them all with one command and keep the configuration under source control. An example configuration file (from my website) looks like this:
database:
  build: database
  env_file:
    - database/.env
api:
  build: api
  command: /opt/server/dist/build/ILikeWhenItWorks/ILikeWhenItWorks
  env_file:
    - api/.env
  links:
    - database
  tty: false
  volumes:
    - /etc/ssl/certs/:/etc/ssl/certs/
    - api:/opt/server/
webserver:
  build: webserver
  ports:
    - "80:80"
    - "443:443"
  links:
    - api
  volumes_from:
    - api
I find these files very readable and comprehensible, they essentially say exactly what they're doing. You can see how it relates to the surrounding directory structure in my source code.