How to configure Nextcloud user_saml to supply multiple groups through request headers - nextcloud

I have a Nextcloud 25 installation with user_saml 5.03 installed behind a gateway.
My user_saml installation is enabled and configured with the type "environment-variable", and the user id is mapped to "Username":
sudo -u nginx -- /nextcloud/html/occ app:enable user_saml
sudo -u nginx -- /nextcloud/html/occ config:app:set --value="environment-variable" user_saml type
sudo -u nginx -- /nextcloud/html/occ saml:config:set --general-uid_mapping="Username" 1
This works, and my users can login, but I would like the gateway to supply roles.
How do I configure saml-attribute-mapping-group_mapping to make this possible, and what does the gateway need to supply?
sudo -u nginx -- /nextcloud/html/occ saml:config:set --saml-attribute-mapping-group_mapping="?????" 1

The mapping needs to be configured with a space-separated list of header names, each prefixed by "HTTP_", e.g.
sudo -u nginx -- /nextcloud/html/occ saml:config:set --saml-attribute-mapping-group_mapping="HTTP_USERROLES1 HTTP_USERROLES2 HTTP_USERROLES3" 1
The gateway needs to supply one or more of the headers "USERROLES1", "USERROLES2", "USERROLES3" (without the "HTTP_" prefix).
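For example, if the gateway happens to be nginx, the role header could be injected like this (a hypothetical sketch: `$authenticated_user`, `$user_roles`, and the upstream name `nextcloud_backend` are placeholders your auth layer would have to provide):

```nginx
location / {
    # Header names match the mapping above; per the CGI convention,
    # "UserRoles1" reaches PHP as the HTTP_USERROLES1 variable.
    proxy_set_header Username   $authenticated_user;
    proxy_set_header UserRoles1 $user_roles;
    proxy_pass http://nextcloud_backend;
}
```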

Related

Certbot unable to find AWS credentials when issuing certificate via dns for route53

I need to get a certificate for my domain, hosted on AWS Route 53, from Let's Encrypt. I do not have ports 80 or 443 exposed, since the server is used for VPN and has no public access.
So the only way to do this is via DNS validation through Route 53.
So far I have installed certbot and dns-route53 plugin
sudo snap install --beta --classic certbot
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install --beta certbot-dns-route53
sudo snap connect certbot:plugin certbot-dns-route53
I have created a special user in my AWS account who has access to Route 53, and I have added the access key ID and secret access key to both ~/.aws/config and ~/.aws/credentials, which look something like this:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
Basically followed every step given here: https://certbot-dns-route53.readthedocs.io/en/stable/
Now when I run the following command:
sudo certbot certonly -d mydomain.com --dns-route53
It gives the following output:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator dns-route53, Installer None
Requesting a certificate for mydomain.com
Performing the following challenges:
dns-01 challenge for mydomain.com
Cleaning up challenges
Unable to locate credentials
To use certbot-dns-route53, configure credentials as described at https://boto3.readthedocs.io/en/latest/guide/configuration.html#best-practices-for-configuring-credentials and add the necessary permissions for Route53 access.
I went to the documentation given in the error message: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#best-practices-for-configuring-credentials
but I do not think I am doing anything wrong.
I even switched to root with sudo su, exported the AWS keys as environment variables there, and exported them in my own shell as well, but it still throws the same error.
I also ran into this same issue, and it's likely because you are running certbot with sudo: when you do that, your user's ~/ is ignored and certbot looks in /root/ instead.
I fixed it with a symlink; centos is my user, whose home holds the .aws/ directory with the config and credentials files.
sudo -s
ln -s /home/centos/.aws/ ~/.aws
ls -lsa ~/.aws
... /root/.aws -> /home/centos/.aws/
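An alternative to the symlink, if you'd rather keep root's setup self-contained, is to copy the files into /root/.aws (a sketch, assuming the same centos user and file layout as above):

```shell
sudo mkdir -p /root/.aws
sudo cp /home/centos/.aws/config /home/centos/.aws/credentials /root/.aws/
sudo chmod 600 /root/.aws/credentials   # keep the secret key readable by root only
```

The trade-off is that you now have two copies of the credentials to keep in sync, which is why the symlink is usually the simpler fix.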

Is it possible to configure Centreon via the CLI rather than the web interface when installing it?

I am trying to install Centreon using only the CLI; I don't want to use the web interface. (I am trying to create an Ansible role that installs Centreon.)
Is there a method to do the web-interface part via the CLI?
Centreon CLAPI aims to offer (almost) all the features that are available on the user web interface in terms of configuration, through a command-line interface.
The main features are:
Add/Delete/Update objects such as hosts, services, host templates, host groups, contacts, etc.
Generate configuration files
Test configuration files
Move configuration files to monitoring pollers
Restart monitoring pollers
Import and export objects
All actions in Centreon CLAPI will require authentication, so your commands will always start like this:
# cd /usr/share/centreon/bin
# ./centreon -u admin -p centreon [...]
Obviously, the -u option is for the username and the -p option is for the password. The password can be in clear or the one encrypted in the database.
Here is an example for a HOST object (Object name: HOST)
In order to list available hosts, use the SHOW action:
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a show
id;name;alias;address;activate
82;sri-dev1;dev1;192.168.2.1;1
83;sri-dev2;dev2;192.168.2.2;1
84;sri-dev3;dev3;192.168.2.3;0
In order to add a host, use the ADD action:
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a ADD -v "test;Test host;127.0.0.1;generic-host;central;Linux"
Required parameters:
Order Description
1 Host name
2 Host alias
3 Host IP address
4 Host templates; for multiple definitions, use delimiter |
5 Instance name (poller)
6 Hostgroup; for multiple definitions, use delimiter |
In order to delete one host, use the DEL action.
[root@centreon ~]# ./centreon -u admin -p centreon -o HOST -a DEL -v "test"
You can retrieve all the CLI instructions online in the official doc. https://documentation.centreon.com/docs/centreon/en/19.04/api/clapi/index.html
I also found a useful Ansible Centreon playbook on Github: https://github.com/centreon/centreon-iac-ansible
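The "generate / test / move / restart" features listed above map to dedicated CLAPI actions. A sketch of a full configuration-deployment cycle for poller 1 (action names as documented for Centreon CLAPI; verify them against your Centreon version):

```shell
cd /usr/share/centreon/bin
./centreon -u admin -p centreon -a POLLERGENERATE -v 1   # generate config files
./centreon -u admin -p centreon -a POLLERTEST -v 1       # test the generated files
./centreon -u admin -p centreon -a CFGMOVE -v 1          # move them to the poller
./centreon -u admin -p centreon -a POLLERRELOAD -v 1     # reload the poller
# APPLYCFG chains all of the steps above in a single call:
./centreon -u admin -p centreon -a APPLYCFG -v 1
```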

Putting several containers, each containing a web server, on the same server

I have a Debian server with apache2 on it, which I can access by IP address.
What I want is to be able to access the containers on it (each containing an apache2 server) from the outside via a URL like "myIpAddress/container1". Currently I can only access those containers from the Debian server itself.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the docker container's port to a host port and access the docker container from <host-ip>:port.
docker run -p host-port:container-port image
For example, the following command runs a container and makes it available at 127.0.0.1:80:
docker run -p 80:5000 training/webapp
Update:
Setting up reverse proxy using NGINX
This example uses a plain NGINX container as site A and plain Apache server as site B.
Run the reverse proxy.
docker run -d \
--name nginx-proxy \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route53.
For testing locally, map sub-domains to resolve to localhost by adding entries in /etc/hosts file.
127.0.0.1 a.example.com
127.0.0.1 b.example.com
References
jwilder NGINX Proxy GitHub
NGINX reverse proxy using Docker
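Since the question asks for URLs like "myIpAddress/container1" rather than subdomains, a path-based setup is another option: a location block on a webserver running on the host. A minimal nginx sketch, assuming the container publishes port 5000 on the host:

```nginx
location /container1/ {
    # the trailing slash in proxy_pass strips the /container1/ prefix
    proxy_pass http://127.0.0.1:5000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

Since the host already runs apache2, the equivalent there would be a ProxyPass /container1/ http://127.0.0.1:5000/ directive with mod_proxy and mod_proxy_http enabled.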

How to solve a challenge to authorize my domain for letsencrypt?

I'm trying to authorize my domain for Let's Encrypt. A few months ago, on a different server, I didn't run into this problem; now I do for some reason.
./letsencrypt-auto certonly -a webroot --webroot-path=/home/deployer/pfios -d my_website.com -d www.my_website.com
Failed authorization procedure. my_website.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: The key authorization file from the server did not match this challenge [fdsfs-fdsfdsf.fdsfdsfds333] != [gangnam style!]
Domain: www.my_website.com
Type: unauthorized
Detail: The key authorization file from the server did not match
this challenge
[fdsfs-fdsfdsf.fdsfdsfds333]
!= [gangnam style!]
The code for the authorization, or rather the name of the file, is different each time. Where should I actually retrieve it? In this case it's "fdsfs-fdsfdsf.fdsfdsfds"
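For what it's worth, with the webroot authenticator you don't retrieve that file yourself: certbot creates it under <webroot>/.well-known/acme-challenge/ during validation and removes it afterwards. A quick way to check that your webserver actually serves files from that path (the test file name is made up; the webroot path is the one from the question):

```shell
mkdir -p /home/deployer/pfios/.well-known/acme-challenge
echo ok > /home/deployer/pfios/.well-known/acme-challenge/test
curl http://my_website.com/.well-known/acme-challenge/test   # "ok" means the webroot is served correctly
rm /home/deployer/pfios/.well-known/acme-challenge/test
```

If curl returns something else (or a 404), the vhost serving that domain is not using the webroot you passed to certbot, which produces exactly this kind of mismatch error.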
Try authorizing your domain via the standalone webserver from LE.
./letsencrypt-auto certonly -a standalone -d my_website.com -d www.my_website.com
Remember: when you generate a new cert this way, you must first shut down your main webserver (Apache, nginx, etc.), since the standalone authenticator needs to bind the port itself.
In my case I solved it by running
sudo apt-get update
and then running the renew command:
/usr/bin/letsencrypt renew
Check whether you have an IPv6 (AAAA) record configured at your DNS provider.
If that record does not point to your server, remove it; Let's Encrypt validates over IPv6 when an AAAA record exists, so a stale record sends the challenge to the wrong host.

Configure a Proxy Array Consisting of 3 Squid Proxy Servers

I am trying to configure and install 3 squid proxy servers using CentOS. I have compiled and installed three separate servers in the following directories:
"/usr/local/squid"
"/usr/local/squid2"
"/usr/local/squid3"
From here I am completely lost. I need to use squid for load balancing, and I only have one IP address (localhost) to do it with. I was assigned 3 separate ports as well. The first squid server acts as a load balancer: it forwards each client request to the second or third squid server based on a load-balance rule. If there is no cached copy, the request is forwarded on to the origin server.
The first squid server should use the CARP protocol and "1/3" of the client requests should be sent to the second squid server and "2/3" should be sent to the third squid server.
Any ideas on the squid.conf file?
Thanks
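A hedged sketch of the front-end (load-balancing) squid.conf, assuming ports 3128/3129/3130 for the three instances (the question only says "3 separate ports"); check the cache_peer option names against your Squid version:

```squid
http_port 3128
# CARP parents; the weights give roughly 1/3 of requests to squid2
# and 2/3 to squid3
cache_peer 127.0.0.1 parent 3129 0 carp weight=1 name=squid2
cache_peer 127.0.0.1 parent 3130 0 carp weight=2 name=squid3
# always forward to the peers, never straight to the origin
never_direct allow all
```

The second and third instances would then be plain caching proxies on 3129 and 3130 that go direct to the origin on a cache miss.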
I would use LVS:
ipvsadm -A -t x.x.x.x:3128 -s wlc
ipvsadm -a -t x.x.x.x:3128 -r localhost:3128
ipvsadm -a -t x.x.x.x:3128 -r localhost:3129
ipvsadm -a -t x.x.x.x:3128 -r localhost:3130
x.x.x.x is your local IP.
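If you specifically want the 1/3 vs 2/3 split from the question, weighted round robin with explicit weights is an option instead of wlc (a sketch; ports 3129/3130 for the two back-end squids are assumptions):

```shell
ipvsadm -A -t x.x.x.x:3128 -s wrr                   # weighted round robin
ipvsadm -a -t x.x.x.x:3128 -r 127.0.0.1:3129 -w 1   # ~1/3 of requests
ipvsadm -a -t x.x.x.x:3128 -r 127.0.0.1:3130 -w 2   # ~2/3 of requests
```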