official guide for docker-compose install not working - fusionauth

I am trying to install FusionAuth on Ubuntu using the official guide with docker-compose (without Elasticsearch). The db service couldn't come up, so FusionAuth opens in maintenance mode.
Any idea how to solve this?
log:
fusionauth_1 | 16-May-2020 17:31:13.017 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 3726 ms
db_1 | LOG: incomplete startup packet
db_1 | FATAL: password authentication failed for user "fusionauth"
db_1 | DETAIL: Role "fusionauth" does not exist.
db_1 | Connection matched pg_hba.conf line 95: "host all all all md5"

This was a bug in our setup code. Here's the relevant issue: https://github.com/FusionAuth/fusionauth-issues/issues/618
The new docker image should work. Digest: 9713a1d6f2c65dff13b47d41981aa3f00d5169f406ec91597c6faae1f95718e6
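To make sure you are actually running the fixed image, you can pull the latest tag and compare digests; a sketch, assuming the compose file uses the fusionauth/fusionauth-app image:
# pull the updated image and check its digest against the one above
docker pull fusionauth/fusionauth-app:latest
docker images --digests fusionauth/fusionauth-app
# then recreate the containers
docker-compose up -d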

Related

Trivy Scan with Openshift internal registry | how to authenticate against openshift registry with trivy

I am currently using the Trivy scanner to scan images in the pipeline. This has worked very well until now, but recently it became necessary to scan an image from an internal OpenShift registry.
Unfortunately, I do not know how to authenticate Trivy against the internal registry. The documentation does not give any information regarding OpenShift; it describes Azure and AWS as well as GitHub.
My scan command currently looks like this in groovy:
trivy image --ignore-unfixed --format template --template \"path for output\" --output trivy_image_report.html --skip-update --offline-scan $image
Output:
INFO Vulnerability scanning is enabled
INFO Secret scanning is enabled
INFO If your scanning is slow, please try '--security-checks vuln' to disable secret scanning
INFO Please see also https://aquasecurity.github.io/trivy/v0.31.3/docs/secret/scanning/#recommendation for faster secret detection
FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
* unable to inspect the image (openshiftregistry/namespace/imagestream:tag): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
* containerd socket not found: /run/containerd/containerd.sock
* GET https://openshiftregistry/v2/namespace/imagestream/manifests/tag: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/imagestream Type:repository]]
The image is stored within an ImageStream in OpenShift. Is there something I can add to the trivy command to authenticate against the registry, or is there something else that has to be done before I use the command in Groovy?
Thanks for the help.
Thanks to Will Gordon in the comments. This link was very helpful: Access the Registry (OpenShift).
These lines helped me (more information can be found on the linked site):
oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
And
podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
Thanks
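If you would rather not run podman login inside the pipeline, Trivy can also take registry credentials from environment variables (see the private-registry section of the Trivy docs); a sketch using the placeholders from the question and a token from oc whoami -t:
export TRIVY_USERNAME=kubeadmin
export TRIVY_PASSWORD=$(oc whoami -t)
trivy image --ignore-unfixed --skip-update --offline-scan openshiftregistry/namespace/imagestream:tag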

Systemd start for slapd failed and timeout

I tried to install OpenLDAP (v2.4.44) using puppet agent --test in a CentOS 7 environment and received this error:
Error: Systemd start for slapd failed!
journalctl log for slapd:
systemd[1]: Starting OpenLDAP Server Daemon...
runuser[110317]: pam_unix(runuser:session): session opened for user ldap by
(uid=0)
runuser[110317]: pam_unix(runuser:session): session closed for user ldap
tlsmc_get_pin: INFO: Please note the extracted key file will not be protected
with a PIN any more, however it will be still protected at least by file
permissions.
tlsmc_get_pin: INFO: Please note the extracted key file will not be protected
with a PIN any more, however it will be still protected at least by file
permissions.
tlsmc_get_pin: INFO: Please note the extracted key file will not be protected
with a PIN any more, however it will be still protected at least by file
permissions.
... loop until timeout
Note: This also happened when I installed manually.
It might be an environment issue as well, because when I tried in a different environment it seemed fine.
Does anyone have any clue?
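One way to narrow down where the startup hangs is to run slapd in the foreground, outside systemd, with debug output; a sketch using the CentOS 7 defaults (the -h URLs and the ldap user come from the stock slapd.service, adjust if yours differs):
# stop the stuck unit first
systemctl stop slapd
# run slapd in the foreground with stats-level debugging (-d 256)
/usr/sbin/slapd -u ldap -h "ldapi:/// ldap:///" -d 256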

Postgresql User not connecting to Database (Nginx Django Gunicorn)

For almost a month now I have been struggling with this issue. Whenever I try to access my Django Admin page on production I get the following error:
OperationalError at /admin/login/
FATAL: password authentication failed for user "vpusr"
FATAL: password authentication failed for user "vpusr"
My production.py settings file is as follows:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'vpdb',
        'USER': 'vpusr',
        'PASSWORD': os.environ["VP_DB_PASS"],
        'HOST': 'localhost',
    }
}
NOTE: the environment variable is working correctly. Even if I hard-code the normal password in there, it doesn't work.
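A quick way to rule Django out is to test exactly the same credentials with psycopg2 from the same virtualenv and shell; a sketch (assumes psycopg2 is installed and VP_DB_PASS is exported in that shell):
python -c "import os, psycopg2; psycopg2.connect(dbname='vpdb', user='vpusr', password=os.environ['VP_DB_PASS'], host='localhost'); print('ok')"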
Here is the list of databases with their owner:
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
vpdb | vpusr | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/vpusr +
| | | | | vpusr=CTc/vpusr
And here is the list of users:
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
vpusr | Superuser, Create DB | {}
As you can see, I have also tried adding the Superuser and Create DB attributes to vpusr, but that did not have any effect.
Even when I try to connect through the terminal like this I get the same error:
sudo -u postgres psql -U vpusr vpdb
I still get the error: psql: FATAL: Peer authentication failed for user "vpusr"
When I do this command:
psql -U vpusr -h localhost vpdb
I properly connect to psql as vpusr.
A few more notes: I deleted the database and the user and re-created them. I made sure the password was correct.
I use Gunicorn, Nginx, Virtualenv, Django, Postgres on an Ubuntu Server from Digital Ocean.
Thank you in advance for taking the time to read this and helping me out!
EDIT: I have noticed that there are no migrations in my app's migrations folder! Could it be that Django, my user, or Postgres does not have permission to write the files?
EDIT: NOTE: I CHANGED THE USER TO TONY
In my postgres log file the following errors are found:
2017-09-09 18:09:55 UTC [29909-2] LOG: received fast shutdown request
2017-09-09 18:09:55 UTC [29909-3] LOG: aborting any active transactions
2017-09-09 18:09:55 UTC [29914-2] LOG: autovacuum launcher shutting down
2017-09-09 18:09:55 UTC [29911-1] LOG: shutting down
2017-09-09 18:09:55 UTC [29911-2] LOG: database system is shut down
2017-09-09 18:09:56 UTC [2711-1] LOG: database system was shut down at 2017-09-09 18:09:55 UTC
2017-09-09 18:09:56 UTC [2711-2] LOG: MultiXact member wraparound protections are now enabled
2017-09-09 18:09:56 UTC [2710-1] LOG: database system is ready to accept connections
2017-09-09 18:09:56 UTC [2715-1] LOG: autovacuum launcher started
2017-09-09 18:09:57 UTC [2717-1] [unknown]@[unknown] LOG: incomplete startup packet
2017-09-09 18:10:17 UTC [2740-1] tony@vpdb LOG: provided user name (tony) and authenticated user name (postgres) do not match
2017-09-09 18:10:17 UTC [2740-2] tony@vpdb FATAL: Peer authentication failed for user "tony"
2017-09-09 18:10:17 UTC [2740-3] tony@vpdb DETAIL: Connection matched pg_hba.conf line 90: "local all all peer"
EDIT:
pg_hba.conf file:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 password
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres peer
#host replication postgres 127.0.0.1/32 md5
#host replication postgres ::1/128 md5
What can you tell from this?
Your application is trying to connect to PostgreSQL with password authentication, but in your pg_hba.conf file the connection is matching a line with the md5 method, so the server expects md5 authentication. We can see this in your log messages:
2017-09-01 11:42:17 UTC [16320-1] vpusr@vpdb FATAL: password authentication failed for user "vpusr"
2017-09-01 11:42:17 UTC [16320-2] vpusr@vpdb DETAIL: Connection matched pg_hba.conf line 92: "host all all 127.0.0.1/32 md5"
Locate your pg_hba.conf file inside your PostgreSQL data directory, open it in an editor such as vim, and update the line
host all all 127.0.0.1/32 md5
and change it to
host all all 127.0.0.1/32 password
and then restart your PostgreSQL service
[root@server] service postgresql restart
and then try to authenticate again
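Note that for pg_hba.conf changes a full restart is not strictly required; a reload is enough. A sketch, assuming a systemd-based Ubuntu install (either command works):
sudo systemctl reload postgresql
# or, from a superuser database session:
sudo -u postgres psql -c "SELECT pg_reload_conf();"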
To expand on the other messages you are seeing, when you run the command:
sudo -u postgres psql -U vpusr vpdb
you are not passing the -h <host> parameter, so the connection goes over the local Unix socket and will attempt to match the line
local all all <method>
so you will need to check which method of authentication it expects for local connections and authenticate that way, or else pass the -h <host> parameter, and then it will match your line
host all all 127.0.0.1/32 password
which means you can then enter your password when prompted, or else change your connection string to
sudo -u postgres sh -c 'PGPASSWORD=<password> psql -h localhost -U vpusr vpdb'
From the documentation:
db_user_namespace (boolean)
This parameter enables per-database user names. It is off by default. This parameter can only be set in the postgresql.conf file or on the server command line.
If this is on, you should create users as username@dbname. When username is passed by a connecting client, @ and the database name are appended to the user name and that database-specific user name is looked up by the server. Note that when you create users with names containing @ within the SQL environment, you will need to quote the user name.
With this parameter enabled, you can still create ordinary global users. Simply append @ when specifying the user name in the client, e.g. joe@. The @ will be stripped off before the user name is looked up by the server.
db_user_namespace causes the client's and server's user name representation to differ. Authentication checks are always done with the server's user name so authentication methods must be configured for the server's user name, not the client's. Because md5 uses the user name as salt on both the client and server, md5 cannot be used with db_user_namespace.
Although this doesn't explain why psql does the right thing, it's worth looking into.
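A quick way to check whether this parameter is even enabled on the server (it is off by default):
sudo -u postgres psql -c "SHOW db_user_namespace;"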
Another possibility is that psycopg2 links against a different libpq, which in turn links against a different, FIPS-compliant OpenSSL. It would have no way to do md5 hashing, as that OpenSSL doesn't contain the md5 algorithm. I would expect a different error message, but this kind of bug is anything but obvious.
UPDATE: This looks like a red herring. Apparently psycopg2 brings its own crypto version.
Last thing to check would be character encoding. Test with a password that only contains ASCII characters, like abcdefghijkl. If Django works then, look into the LANG and LC_* variables in the environment.
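To see which locale variables the process environment actually exposes (the environment under Gunicorn/systemd can differ from your interactive shell), something like this helps; the PID is a placeholder for the running gunicorn worker:
env | grep -E '^(LANG|LC_)'
# for an already-running process:
tr '\0' '\n' < /proc/<pid>/environ | grep -E '^(LANG|LC_)'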
To fix password authentication failed for user "vpusr", try putting the password into the settings as-is first, then test os.environ["VP_DB_PASS"].
Change the engine:
'ENGINE': 'django.db.backends.postgresql_psycopg2'
Install it if needed:
pip install psycopg2
To fix psql: FATAL: Peer authentication failed for user "vpusr", simply add the host:
psql -h localhost -U vpusr vpdb
# ^^^^^^^^^^^^

Ambari cluster : Host registration failed

I am setting up an Ambari cluster with 3 VirtualBox VMs running Ubuntu 16.04 LTS.
I followed this Hortonworks tutorial.
However, when I try to create a cluster using the Ambari Cluster Install Wizard, I get the error below during step 3 - "Confirm Hosts".
26 Jun 2017 16:41:11,553 WARN [Thread-34] BSRunner:292 - Bootstrap process timed out. It will be destroyed.
26 Jun 2017 16:41:11,554 INFO [Thread-34] BSRunner:309 - Script log Mesg
INFO:root:BootStrapping hosts ['thanuja.ambari-agent1.com', 'thanuja.ambari-agent2.com'] using /usr/lib/python2.6/site-packages/ambari_server cluster primary OS: ubuntu16 with user 'thanuja'with ssh Port '22' sshKey File /var/run/ambari-server/bootstrap/5/sshKey password File null using tmp dir /var/run/ambari-server/bootstrap/5 ambari: thanuja.ambari-server.com; server_port: 8080; ambari version: 2.5.0.3; user_run_as: root
INFO:root:Executing parallel bootstrap
Bootstrap process timed out. It was destroyed.
I have read a number of posts saying that this is related to not enabling password-less SSH to the hosts, but I can SSH to the hosts from the server without a password.
I am running Ambari as a non-root user with root privileges.
This post helped me.
I modified the users on the host machines, using the visudo command, so that they can execute sudo commands without a password.
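For reference, a passwordless-sudo entry added via visudo typically looks something like the line below (the username is the agent user from the bootstrap log; adjust it for your hosts):
# run visudo on each host and add a line such as:
thanuja ALL=(ALL) NOPASSWD:ALL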
Please post if you have any alternative answers.

Automount is not mounting NFS shared home folder on CentOS 7 when using LDAP login

These are the error messages that I can see in /var/log/messages:
failed to bind to LDAP server ldap://x.x.x.x: Can't contact LDAP server
bind_ldap_simple: lookup(ldap): Unable to bind to the LDAP server: (default), error Can't contact LDAP server
failed to bind to LDAP server ldap://X.X.X.X: Can't contact LDAP server
failed to bind to LDAP server ldap://X.X.X.X: Can't contact LDAP server
I had to enable the NetworkManager-wait-online service by executing the command below:
systemctl enable NetworkManager-wait-online.service
Then I had to change the timeout in the /usr/lib/systemd/system/NetworkManager-wait-online.service file.
I set the timeout to 60 seconds and it worked for me.
The modified file with the changed timeout is listed below.
[Unit]
Description=Network Manager Wait Online
Requisite=NetworkManager.service
After=NetworkManager.service
Wants=network.target
Before=network.target network-online.target
[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -s -q --timeout=60
[Install]
WantedBy=multi-user.target
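One caveat: edits under /usr/lib/systemd/system can be overwritten by package updates. The same timeout change can be made with a drop-in override instead; a minimal sketch (the empty ExecStart= line is needed to clear the original command before replacing it):
systemctl edit NetworkManager-wait-online.service
# in the editor that opens, add:
# [Service]
# ExecStart=
# ExecStart=/usr/bin/nm-online -s -q --timeout=60
# then make sure the unit is reloaded and enabled
systemctl daemon-reload
systemctl enable NetworkManager-wait-online.service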