OpenLDAP Won't Set Encrypted Password: attribute 'userPassword' is not present in entry - ldap

I'm trying to set up an OpenLDAP server in a docker container on my local network. I got it set up, connected with Apache Directory Studio, and created some posix groups/accounts with no trouble. The problem is that I can add a plaintext userPassword for my users, but trying to use any type of encryption scheme results in the error:
entry failed schema check: value of naming attribute 'userPassword' is not present in entry
Other notes about my configuration:
I'm using a self-signed certificate issued for the LAN address to connect over LDAPS
Using the docker image osixia/openldap
I can provide any other configurations if needed.

Normally you would install the ppolicy module and enable the ppolicy_hash_cleartext option in slapd.conf so that the server hashes passwords itself. Since you're using a docker container, this works slightly differently.
Check out the following issue for the docker image you are using:
https://github.com/osixia/docker-openldap/issues/208
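For slapd instances that use the cn=config layout (which the osixia image does), the equivalent of ppolicy_hash_cleartext is the olcPPolicyHashCleartext attribute on the ppolicy overlay. A minimal sketch, assuming the ppolicy overlay is already loaded and attached to the {1}mdb database (adjust the container name and DN to your layout):

docker exec -i <your-container> ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
# enable server-side hashing of cleartext userPassword values
dn: olcOverlay={0}ppolicy,olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcPPolicyHashCleartext
olcPPolicyHashCleartext: TRUE
EOF

With this in place, a plaintext userPassword sent over LDAPS is hashed by the server (using the scheme from olcPasswordHash, SSHA by default), so the client never has to submit a pre-hashed value.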

Related

Weblogic 11g: how to change path to my trust.jks file for all Servers

I have put 2 files in /u01/app/oracle/product/fmw/wlserver_10.3/server/lib/:
-trust.jks
-identity.jks
Then, in the WebLogic console, for the Admin and managed servers, I changed the paths for:
-Custom Identity Keystore
-Custom Trust Keystore
All looks good.
After WebLogic restarts, all servers are running, but when I run this command in a terminal: ps -eaf|grep weblogic
I see this line:
-Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks
As a result, none of my online interfaces are connecting.
I get the following error:
BEA-382513<con:reason>OSB Replace action failed updating variable "body": {err}FORG0005: expected exactly one item, got 0 items</con:reason>
Can someone help me correct the path for my servers so that they look for trust.jks and not DemoTrust.jks?
The way to fix this is by setting the "SSL Listen Port Enabled" flag, which can be found at
Home > Summary of Environment > Summary of Servers > AdminServer > Configuration/General.
After this, go to
Home > Summary of Environment > Summary of Servers > AdminServer > Control,
select AdminServer and click "Restart SSL".
To check that the change has taken effect, run:
ps -eaf|grep weblogic
and look for
-Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/trust.jks
If the line ends with your trust key file (in my case I called it trust.jks), the change was applied successfully.
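If the restart still picks up DemoTrust.jks, an alternative worth trying (a sketch, not the only WebLogic mechanism for this) is to force the property in the domain's bin/setDomainEnv.sh so that every server start passes your store explicitly:

# hypothetical addition to $DOMAIN_HOME/bin/setDomainEnv.sh; the path is the one from the question
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/trust.jks"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStorePassword=<your-truststore-password>"
export JAVA_OPTIONS

Then restart the servers and re-run the ps check above.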

Issue with docker push on local registry: HTTPS access to resource denied

I have a problem with my Docker registry. My "server" VM runs Kali Linux. I created the Docker registry over HTTP and used a CentOS VM as a client. I declared the registry as insecure on the client VM and it worked perfectly.
Now I am trying to switch it to HTTPS. To do that, I use nginx as a proxy. I followed this tutorial: Step 5 — Setting Up SSL, except for Part 8 to make it a service (I don't know why, but I can't do that part).
Because I don't have a domain name, I used a fake one. So that it can be resolved, I added my IP (192.168.X.X) and the domain name I chose (myregistryexemple) to the /etc/hosts file on both VMs.
As the tutorial asks, I generated the certificate on my "server" VM (the Kali one) and sent it by scp to my client VM. I made the CentOS VM trust the certificate with these commands:
yum install ca-certificates
update-ca-trust force-enable
cp cert.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
I restarted the Docker service on the client VM, then launched the Docker registry and the nginx proxy with "docker-compose up" on my Kali VM.
I tagged an ubuntu image and tried to push it to the registry:
docker tag ubuntu myregistryexemple/ubuntu
docker push myregistryexemple/ubuntu
But I get this error:
The push refers to a repository [docker.io/myregistryexemple/ubuntu]
56827159aa8b: Preparing
440e02c3dcde: Preparing
29660d0e5bb2: Preparing
85782553e37a: Preparing
745f5be9952c: Preparing
denied: requested access to the resource is denied
Then I tried to push to localhost directly:
docker tag ubuntu localhost:5000/ubuntu && docker push localhost:5000/ubuntu
Then I ran docker login against the domain from the client VM, and it worked, but when I tried to pull from my domain registry on the client VM, Docker could not find the images I had tried to push.
Does anyone have an idea why, and how to fix this?
OK, so I found a way to make it work.
It is quite simple: just follow the complete tutorial I quoted in the question (https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04#step-5-%E2%80%94-setting-up-ssl).
After you have created the repository, and before you push/pull a docker image, you need to edit /etc/hosts on both the client and the server VM.
Add the line: serverVmIp domainChosen
Save and quit.
Now the client needs to trust the generated certificate. For that, you can use this tutorial: http://kb.kerio.com/product/kerio-connect/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html
Then restart your registry and your Docker daemon, and you should be able to use your domain name to push/pull to your registry over HTTPS.
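Two details are easy to get wrong here. First, /etc/hosts lines are IP-first; with the names from the question the entry would look like this sketch (192.168.X.X standing for the server VM's real address):

192.168.X.X  myregistryexemple

Second, Docker only treats the first component of an image name as a registry host when it contains a dot or a port; a bare name like myregistryexemple/ubuntu is interpreted as a Docker Hub namespace, which is why the push output above mentions docker.io. Tagging with an explicit port (assuming the nginx proxy listens on 443) avoids that ambiguity:

docker tag ubuntu myregistryexemple:443/ubuntu
docker push myregistryexemple:443/ubuntu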

Problems setting up artifactory as a docker registry

I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up local, remote and virtual Docker repositories, added Apache as a reverse proxy, and added a DNS entry for the virtual "docker" repo.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory" and I want the repositories to be accessible at repo.my.company.com/artifactory.
I also have a wildcard certificate for company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over plain HTTP, without SSL?
Any ideas?
According to RFC 2818, a wildcard certificate matches only domains one level down, not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case, what you should do is map repositories to ports instead of subdomains, so the Docker repository would be accessible at, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find the explanation about the change and how to do it using Artifactory Proxy settings generator in the User Guide.
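With port-based mapping, the pull from the question would become, for example (5001 here is just an example port that your reverse proxy would terminate):

docker pull my.company.com:5001/ubuntu:16.04

Since my.company.com carries no extra subdomain level, the *.company.com wildcard certificate matches it.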
If you are prepared to live with the certificate-name mismatch for now, and understand the security implications of ignoring the mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.
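On newer Docker installs that read /etc/docker/daemon.json instead of /etc/default/docker, the equivalent would be (a sketch; merge with any existing daemon.json rather than overwriting it):

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["docker.my.company.com"]
}
EOF
sudo service docker restart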

openshift oo-install: The implied host domain 'com' does not match the specified host domain of 'demo.com' for DNS

Hi all. I am trying to install OpenShift with one command:
[root@demo ~]# sh <(curl -s https://install.openshift.com/)
Checking for necessary tools...
...looks good.
Downloading oo-install package...
Extracting oo-install to temporary directory...
Starting oo-install...
OpenShift Installer (Build 20140722-1618)
.....
....
....
Deploying workflow 'origin_deploy'.
The OpenShift deployment configuration has the following errors:
* The implied host domain 'com' does not match the specified host domain of 'demo.com' for DNS
Rerun the installer to correct these errors.
I don't know why it keeps telling me "the implied host domain 'com' ...". What needs to be changed? Here is the full installer session:
[root@demo ~]# sh <(curl -s https://install.openshift.com/)
Checking for necessary tools...
...looks good.
Downloading oo-install package...
Extracting oo-install to temporary directory...
Starting oo-install...
OpenShift Installer (Build 20140722-1618)
Welcome to OpenShift.
This installer will guide you through a basic system deployment, based
on one of the scenarios below.
Select from the following installation scenarios.
You can also type '?' for Help or 'q' to Quit:
Install OpenShift Origin
Add a Node to an OpenShift Origin deployment
Generate a Puppet Configuration File
Type a selection and press <enter>: 1
Your system deployment configuration is incomplete.
The installer will guide you through the necessary configuration
steps.
Note: ActiveMQ and MongoDB will be installed on all Broker instances.
For more flexibility, rerun the installer in advanced mode (-a).
DNS Settings
Installer will deploy DNS
Application Domain: example.com
Register OpenShift hosts with DNS? Yes
Component Domain: demo.com
Global Gear Settings
Account Settings
Node Districts
Role Assignments
Host Information
The configuration file does not include some of the required settings
for host instance demo.com. Please provide them here.
Hostname (the FQDN that other OpenShift hosts will use to connect to
the host that you are describing): |demo.com|
Hostname / IP address for SSH access to demo.com from the host where
you are running oo-install. You can say 'localhost' if you are running
oo-install from the system that you are describing: |demo.com| 10.1.14.145
Username for SSH access to 10.1.14.145: |root|
Validating root@10.1.14.145... looks good.
Detected multiple network interfaces for this host:
* 192.168.142.128 on interface eth2
* 10.1.14.145 on interface eth3
Do you want to use one of these as the public IP information for this
Node? (y/n/q/?) y
The following network interfaces were found on this host. Choose the
one that it uses for communication on the local subnet:
1. 192.168.142.128 on interface eth2
2. 10.1.14.145 on interface eth3
Type a selection and press <enter>: 2
Normally, the BIND DNS server that is installed on this host will be
reachable from other OpenShift components using the host's configured
IP address (10.1.14.145).
If that will work in your deployment, press <enter> to accept the
default value. Otherwise, provide an alternate IP address that will
enable other OpenShift components to reach the BIND DNS service on
this host: |10.1.14.145|
This Node host is currently associated with the Default district. Do
you want to change this district assignment? (y/n/q) n
Do you want to modify the account info settings for the various role
services? (y/n/q/?) n
Here are the details of your current deployment.
Note: ActiveMQ and MongoDB will be installed on all Broker instances.
For more flexibility, rerun the installer in advanced mode (-a).
DNS Settings
Installer will deploy DNS
Application Domain: example.com
Register OpenShift hosts with DNS? Yes
Component Domain: demo.com
Choose from the following deployment configuration options:
1. Change the DNS configuration
2. Manage Hosts
3. Services Accounts Settings
4. Global Gear Settings
5. Node Districts
6. Display full Host details
7. Finish editing the deployment configuration
Type a selection and press <enter>: 7
Here is the subscription configuration that the installer will use for
this deployment.
Do you want to make any changes to the subscription info in the
configuration file? (y/n/q/?) n
Do you want to set any temporary subscription settings for this
installation only? (y/n/q/?) n
Preflight check: verifying system and resource availability.
Checking demo.com:
* SSH connection succeeded
* Target host is running CentOS
* Located getenforce
* SELinux is running in enforcing mode
* Located yum
* puppet RPM is installed.
* openssh-clients RPM is installed.
* bind RPM is installed.
Deploying workflow 'origin_deploy'.
The OpenShift deployment configuration has the following errors:
* The implied host domain 'com' does not match the specified host domain of 'demo.com' for DNS
Rerun the installer to correct these errors.
The issue is that OpenShift requires the host's FQDN to sit at least one label below the configured host domain: the installer strips the first label of the FQDN to get the implied domain, so a hostname of demo.com implies the host domain 'com', which does not match 'demo.com'. In other words, myhost.openshift.localdomain works, while myhost.localdomain does not.
I entered oshost.localdomain as the component domain (configured right after the application domain) and 0.oshost.localdomain for the actual host, and now it installs just fine.
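Applied to the session above: keep demo.com as the component domain, but give the host an FQDN one label deeper, so that stripping the first label yields demo.com. A sketch (node0 is just a hypothetical host label):

hostname node0.demo.com                                  # set the host's FQDN before re-running oo-install
echo "10.1.14.145 node0.demo.com node0" >> /etc/hosts    # make the new name resolvable

Then answer node0.demo.com instead of demo.com at the Hostname prompt.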

How do I make Rails use SSL to connect to PostgreSQL?

When I try to connect to the remote PostgreSQL database with a Rails 3.2 project I get this error:
FATAL: no pg_hba.conf entry for host "10.0.0.3", user "projectx", database "projectx", SSL off
My configuration on Rails looks like this:
staging:
  adapter: postgresql
  database: projectx
  username: projectx
  password: 123456
  host: 10.0.0.3
  encoding: utf8
  template: template0
  min_messages: warning
and on PostgreSQL looks like this:
hostssl all all 0.0.0.0/0 md5
hostssl all all ::/0 md5
Both machines are running on an Ubuntu 12.04.
I found posts saying that it should work automatically, which clearly doesn't happen. I found some saying that libpq didn't have SSL enabled and that enabling it solved the problem, but with no explanation of how to enable it. Looking at the dependencies of libpq, I can see that it depends on SSL packages, so I would assume SSL support is compiled in.
Some posts recommended adding this:
sslmode: require
or this:
sslmode: enabled
to enable SSL mode, but neither had any effect for me. I read that these options are silently ignored.
I also tried the database string approach, ending up with:
staging:
  adapter: postgresql
  database: "host=10.0.0.3 dbname=projectx user=projectx password=123456 sslmode=require"
and then I got the error:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
which seems to indicate that Rails was trying to connect to the local PostgreSQL server (there is none) instead of 10.0.0.3.
Any ideas?
As you wrote, the Ubuntu 12.x packages are normally set up so that SSL is activated and works out of the box; in addition, it is the first method tried by Rails, or by any client that lets libpq deal with this, which means almost all clients.
This automatic enabling is not necessarily true with other PostgreSQL packages or with a self-compiled server, so the answers or advice applying to these other contexts don't help with yours.
As your setup should work directly, this answer is a list of things to check to find out what goes wrong. Preferably, use psql first to test a connection setup rather than rails, so that generic postgresql issues can be ruled out first.
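For example, a direct test with the values from the question; psql prints an "SSL connection" banner in its greeting when the session is encrypted:

psql "host=10.0.0.3 dbname=projectx user=projectx sslmode=require"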
Client-side
The client-side sslmode parameter controls the sequence of connect attempts.
To voluntarily avoid SSL, a client would need to put sslmode=disable somewhere in the connection string, or PGSSLMODE=disable in the environment, or set one of the other PGSSL* variables incorrectly. In the unlikely case that your Rails process had this in its environment, that would explain the error you're getting, given that pg_hba.conf does not allow non-SSL connections.
Another reason to not try SSL is obviously when libpq is not compiled with SSL support but that's not the case with the Ubuntu packages.
The default for sslmode is prefer, described as:
prefer (default)
first try an SSL connection; if that fails, try a non-SSL connection
The SSL off at the end of your error message relates to the last connection attempt, the one that fails. SSL may have been tried and failed, or not tried at all; we can't tell from this message alone. The attempt with SSL off is rejected normally by the server, per the policy set in pg_hba.conf (hostssl in the first column).
It's more plausible that the problem is server-side, because there are more things that can go wrong there.
Server-side
Here are various things to check server-side:
There should be ssl=on in postgresql.conf (default location: /etc/postgresql/9.1/main/)
When connecting to localhost with psql, you should be greeted with a message like this:
psql (9.1.13)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
The ca-certificates package should be installed and up-to-date.
The ssl-cert package should be installed and up-to-date.
Inside the postgres data directory (/var/lib/postgresql/9.1/main by default), there should be soft links:
server.crt -> /etc/ssl/certs/ssl-cert-snakeoil.pem or another valid certificate, and
server.key -> /etc/ssl/private/ssl-cert-snakeoil.key or another valid key.
/etc/ssl/certs and parent directories should be readable and cd'able by postgres.
The postgres unix user should be in the ssl-cert unix group (check with id -a postgres) otherwise it can't read the private key.
If changing postgresql.conf, be sure that postgresql gets restarted before doing any other test.
There shouldn't be any suspicious message about SSL in /var/log/postgresql/postgresql-9.1-main.log at startup time or at the time of the failed connection attempt.
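A quick way to run through most of these checks from a shell, as a sketch assuming the default Ubuntu 12.04 / PostgreSQL 9.1 paths used above:

grep '^ssl' /etc/postgresql/9.1/main/postgresql.conf   # expect: ssl = true
ls -l /var/lib/postgresql/9.1/main/server.crt /var/lib/postgresql/9.1/main/server.key
id -a postgres                                         # should list the ssl-cert group
sudo -u postgres psql -c 'SHOW ssl;'                   # expect: on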
Rails uses the PG gem to connect to Postgres; see here for the implementation:
https://github.com/rails/rails/blob/02a3c0e771b3e09173412f93d8699d4825a366d6/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb#L881
The PG gem uses libpq (the C client library), and the documentation for PG::Connection.new, found here:
http://deveiate.org/code/pg/PGconn.html
suggests the following options:
host: server hostname
hostaddr: server address (avoids hostname lookup, overrides host)
port: server port number
dbname: connecting database name
user: login user name
password: login password
connect_timeout: maximum time to wait for connection to succeed
options: backend options
tty: (ignored in newer versions of PostgreSQL)
sslmode: (disable|allow|prefer|require)
krbsrvname: kerberos service name
gsslib: GSS library to use for GSSAPI authentication
service: service name to use for additional parameters
So this would indicate that the connection-string approach will not work (it is not recognised by the adapter; it might be a MySQL adapter option).
It also indicates that the sslmode: require option should work, as this is a basic feature of libpq.
So:
database.yml:
staging:
  ...
  sslmode: "require"
  ...
should definitely do the trick. Are you sure you are running in staging mode? Add sslmode to the other environments too, to be sure.
Also, libpq tries SSL first by default; you may be seeing the error with SSL off because the SSL attempt failed first, then libpq retried without SSL and eventually raised the error.
Please check your psql version; older versions do not support sslmode=require.
It worked for me after upgrading psql to the latest version.