Grafana on localhost failing authentication - Apache

I've set up Carbon, the Graphite server, PostgreSQL, and Grafana on my localhost machine.
I am able to send metrics to Graphite, e.g. echo "test.count 12 $(date +%s)" | nc -q0 127.0.0.1 2003, and I can see the metric and graph in Graphite.
some of my configs:
/etc/grafana/grafana.ini
[database]
type = postgres
host = 127.0.0.1:5432
name = grafana
user = graphite
password = mypass
[server]
protocol = http
http_addr = 127.0.0.1
http_port = 3000
domain = mygrafana.com
enforce_domain = true
root_url = %(protocol)s://%(domain)s/
[security]
admin_user = admin
admin_password = mypass
secret_key = something
...
...
/etc/apache2/sites-available/apache2-grafana.conf
<VirtualHost *:80>
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
ServerName mygrafana.com
</VirtualHost>
The Grafana site is enabled:
sudo a2ensite apache2-grafana
Configured Grafana to run at boot and then started the service:
sudo update-rc.d grafana-server defaults 95 10
sudo service grafana-server start
I also added my local IP to /etc/hosts
192.168.1.16 mygrafana.com
Now, when I access mygrafana.com in the browser, the Grafana page loads, but when I enter user admin and password mypass it gives me an authentication error.
mypass is the password set in grafana.ini, but I might be missing something; I just don't know what, or what else to do to debug this issue.

Grafana provides the grafana-cli command to reset the admin password:
grafana-cli admin reset-admin-password --homepath "/usr/share/grafana" <password>
where you can supply any password of your choice.
If this does not work (there is a reported bug where the command does not take effect), there is a more concrete and reliable way to do it (make sure you run all commands with sudo):
Install sqlite3:
apt-get install sqlite3
Connect to grafana.db:
sqlite3 /var/lib/grafana/grafana.db
Run the update command:
update user set password = '59acf18b94d7eb0694c61e60ce44c110c7a683ac6a8f09580d626f90f4a242000746579358d77dd9e570e83fa24faa88a8a6', salt = 'F3FAxVm33R' where login = 'admin';
Run .exit.
You should then be able to log in with admin as the password for the admin user; Grafana will ask you to set a new password of your choice.
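To check the row before logging in, you can query the table from the shell (a small sketch; the table and column names are the ones used in the update above):
sudo sqlite3 /var/lib/grafana/grafana.db "select login, salt from user where login = 'admin';"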

The default password for the admin user is admin. The admin password in grafana.ini is only applied the first time the Grafana server is run. You can change the password by logging in as admin and then changing it in the user settings. (It is also possible to set the password via the API using curl if you need to do it in a script.)
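For the scripted approach mentioned above, Grafana's admin HTTP API can set a user's password. A minimal sketch, assuming Grafana listens on 127.0.0.1:3000, the current admin credentials are admin/admin, and the admin user has id 1:
curl -X PUT -H "Content-Type: application/json" \
  -d '{"password":"mynewpass"}' \
  http://admin:admin@127.0.0.1:3000/api/admin/users/1/password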

Related

Ansible unable to create folder on localhost with different user

I'm executing an Ansible playbook as appuser, whereas I wish to create a folder as user webuser on localhost.
SSH keys are set up for webuser on my localhost, so after logging in as appuser I can simply ssh webuser@localhost to switch to webuser.
Note: I do not have sudo privileges, so I cannot use sudo to switch from appuser to webuser.
Below is my playbook, which runs as appuser but needs to create a folder 04May2020 on localhost as webuser:
- name: "Play 1"
hosts: localhost
remote_user: "webuser"
vars:
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
tasks:
- name: create folder for today's print
file:
path: "/webWeb/htdocs/print/04May2020"
state: directory
remote_user: webuser
However, the output shows that the folder is created as appuser instead of webuser. See the output below, showing the connection established as appuser instead of webuser:
ansible-playbook /app/Ansible/playbook/print_oracle/print.yml -i /app/Ansible/playbook/print_oracle/allhosts.hosts -vvv
TASK [create folder for today] ***********************************
task path: /app/Ansible/playbook/print_oracle/print.yml:33
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
Can you please suggest if it is possible without sudo?
Putting all my comments together in a comprehensive answer.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
This indicates that you are connecting to localhost through the local connection plugin, either because you explicitly re-declared the host as such or because you are using the implicit localhost. From the discussion, you are in the second situation.
When using the local connection plugin, as indicated in the documentation, remote_user is ignored. Trying to change the user has no effect, as you can see in the test run below (user (u)ids changed):
# Check we are locally running as user1
$ id -a
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Running the same command through ansible returns the same result
$ ansible localhost -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Trying to change the remote user has no effect
$ ansible localhost -u whatever -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
Without changing your playbook and/or inventory, the only solution is to launch the playbook as the user who needs to create the directory.
Since you have ssh available, another solution is to declare a new host that you use only for this purpose, targeting the local IP over ssh. (Note: you can explicitly declare localhost like this, but then all connections will go through ssh, which might not be what you want.)
Somewhere at the top of your inventory, add the line:
localssh ansible_host=127.0.0.1
And in your playbook, change the hosts line to:
hosts: localssh
Now the connection to your local machine will go through ssh and the remote_user will be obeyed correctly.
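Putting it together, a minimal sketch of the changed inventory and play (paths and names are taken from the question; only the hosts line differs from the original playbook):
# allhosts.hosts
localssh ansible_host=127.0.0.1

# print.yml
- name: "Play 1"
  hosts: localssh
  remote_user: webuser
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
  tasks:
    - name: create folder for today's print
      file:
        path: /webWeb/htdocs/print/04May2020
        state: directory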
One way you can try is to set the connection type for localhost explicitly. To do this, in the directory from which you run ansible commands, create a host_vars directory. In that sub-directory, create a file named localhost containing the line ansible_connection: smart.
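For illustration, that layout would be (a sketch; smart is Ansible's default connection strategy, which picks ssh when available):
# file: host_vars/localhost, next to your playbook
ansible_connection: smart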

How to activate authentication in Apache Airflow

Airflow version: 1.9.0
I have installed Apache Airflow and, after configuration, I am able to run sample DAGs with the sequential executor.
I also created a new sample user, which I can see under Admin > Users.
But I am unable to get the login window/screen when visiting the webserver address at :8080/; it directly opens the Airflow webserver as the admin user.
It would be a great help if anyone could provide some info on how to activate the login screen/page, so that user credentials can be used to log into the webserver.
Steps followed to enable web user authentication:
https://airflow.apache.org/security.html?highlight=authentication
Check the following in your airflow.cfg file:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.password_auth
And also remember to restart the Airflow webserver; if it still doesn't work, run airflow initdb and restart the webserver.
Also, double-check that the airflow.cfg file does not contain multiple configurations for authenticate or auth_backend. If there is more than one occurrence, it can cause this issue.
If necessary, install the flask_bcrypt package for Python 2.x/3.x.
For instance,
$ python3.7 -m pip install flask_bcrypt
Make sure you have an admin user created:
airflow create_user -r Admin -u admin -e admin@acme.com -f admin -l user -p *****
Edit airflow.cfg, inside the [webserver] section:
change authenticate = True (by default it is set to False)
add auth_backend = airflow.contrib.auth.backends.password_auth
change rbac = True for role-based access control (RBAC)
Then run airflow initdb
and restart the Airflow webserver.
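On Airflow versions without RBAC (such as 1.9 from the question), the first user for the password_auth backend is created from a Python shell instead; the linked security docs show a snippet along these lines (username, email, and password here are placeholders):
import airflow
from airflow import models, settings
from airflow.contrib.auth.backends.password_auth import PasswordUser

# create a user record understood by the password_auth backend
user = PasswordUser(models.User())
user.username = 'admin'
user.email = 'admin@example.com'
user.password = 'mypass'

# persist it in the Airflow metadata database
session = settings.Session()
session.add(user)
session.commit()
session.close()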
Just add rbac = True to airflow.cfg and you are good to go.
Now all you need to do is restart your Airflow webserver.
And in case you want to add a new user, you can use this command:
airflow create_user -r Admin -u admin -f Ashish -l malgawa -p test123 -e ashishmalgawa@gmail.com
“-r” is the role we want for the user
“-u” is the username
“-f” is the first name
“-l” is the last name
“-e” is the email id
“-p” is the password
For more details, you can follow this article:
https://www.cloudwalker.io/2020/03/01/airflow-rbac-role-based-access-control/#:~:text=RBAC%20is%20the%20quickest%20way,access%20to%20DAGs%20as%20well

OpenLDAP TLS authentication on a client works only after disabling and re-enabling TLS via authconfig

I have a strange issue when working with my Docker OpenLDAP server and client containers. To enable LDAP authentication with TLS, I first have to disable it via authconfig and then re-enable it. I have also disabled cached logins. If I don't disable TLS first and only execute the second command, then login via ssh is unsuccessful:
$ docker exec -t datanode1 bash -c 'authconfig --enableldap --enableldapauth --ldapserver="kerbldap.dkdocker.com" \
--ldapbasedn="dc=dkdocker,dc=com" --enablesssd --enablesssdauth --enableldaptls --enablemkhomedir --update'
$ ssh dhiren@localhost -p 2224
dhiren@localhost's password:
Permission denied, please try again.
dhiren@localhost's password:
$ docker exec -t datanode1 bash -c 'authconfig --disableldaptls --update'
$ docker exec -t datanode1 bash -c 'authconfig --enableldap --enableldapauth --ldapserver="kerbldap.dkdocker.com" \
--ldapbasedn="dc=dkdocker,dc=com" --enablesssd --enablesssdauth --enableldaptls --enablemkhomedir --update'
$ ssh dhiren@localhost -p 2224
dhiren@localhost's password:
Creating home directory for dhiren.
Last failed login: Thu Sep 21 19:31:42 IST 2017 from gateway on ssh:notty
There were 4 failed login attempts since the last successful login.
[dhiren@datanode1 ~]$
docker exec -t clientcontainer1 bash -c 'authconfig --disableldaptls --update'
Below is my authconfig --test result
[root#datanode1 ~]# authconfig --test
caching is disabled
nss_files is always enabled
nss_compat is disabled
nss_db is disabled
nss_hesiod is disabled
hesiod LHS = ""
hesiod RHS = ""
nss_ldap is enabled
LDAP+TLS is enabled
LDAP server = "ldap://kerbldap.dkdocker.com/"
LDAP base DN = "dc=dkdocker,dc=com"
nss_nis is disabled
NIS server = ""
NIS domain = ""
nss_nisplus is disabled
nss_winbind is disabled
SMB workgroup = "SAMBA"
SMB servers = ""
SMB security = "user"
SMB realm = ""
Winbind template shell = "/bin/false"
SMB idmap range = "16777216-33554431"
nss_sss is enabled by default
nss_wins is disabled
nss_mdns4_minimal is disabled
myhostname is enabled
DNS preference over NSS or WINS is disabled
pam_unix is always enabled
shadow passwords are enabled
password hashing algorithm is sha512
pam_krb5 is disabled
krb5 realm = "DKDOCKER.COM"
krb5 realm via dns is disabled
krb5 kdc = "kerbldap.dkdocker.com"
krb5 kdc via dns is disabled
krb5 admin server = "kerbldap.dkdocker.com"
pam_ldap is enabled
LDAP+TLS is enabled
LDAP server = "ldap://kerbldap.dkdocker.com/"
LDAP base DN = "dc=dkdocker,dc=com"
LDAP schema = "rfc2307"
pam_pkcs11 is disabled
SSSD smartcard support is disabled
use only smartcard for login is disabled
smartcard module = ""
smartcard removal action = ""
pam_fprintd is disabled
pam_ecryptfs is disabled
pam_winbind is disabled
SMB workgroup = "SAMBA"
SMB servers = ""
SMB security = "user"
SMB realm = ""
pam_sss is enabled by default
credential caching in SSSD is enabled
SSSD use instead of legacy services if possible is enabled
IPAv2 is disabled
IPAv2 domain was not joined
IPAv2 server = ""
IPAv2 realm = ""
IPAv2 domain = ""
pam_pwquality is enabled (try_first_pass local_users_only retry=3 authtok_type=)
pam_passwdqc is disabled ()
pam_access is disabled ()
pam_faillock is disabled (deny=4 unlock_time=1200)
pam_mkhomedir or pam_oddjob_mkhomedir is enabled (umask=0077)
Always authorize local users is enabled ()
The issue was caused by a missing symlink to my CA cert under the /etc/openldap/cacert directory (see the LDAP Authentication Requirements documentation).
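For reference, restoring the link looks roughly like this (a sketch; the CA file name and source path are assumptions, and cacertdir_rehash is the helper shipped with authconfig):
ln -s /etc/pki/tls/certs/CA.cert /etc/openldap/cacert/CA.cert   # source path assumed
cacertdir_rehash /etc/openldap/cacert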

How can I set virtual host in Codeship?

I’m using Codeship to automate a multi-tenancy application.
My app needs subdomains to run acceptance tests using Selenium WebDriver.
So, I need to configure virtual domains for my app.
For example, I need the following virtual domains:
127.0.0.1 test.my-app.test
127.0.0.1 my-app.test
If I do not use a subdomain in requests to my app, it does not work as required.
I tried the following commands in the Setup Commands section, before the Test Pipelines:
sudo echo '127.0.0.1 test.my-app.test' >> /etc/hosts
sudo echo '127.0.0.1 my-app.test' >> /etc/hosts
But, It doesn’t work, because I has no permission. The error message was:
bash: /etc/hosts: Permission denied
Would you mind telling me how to make it work?
Thank you in advance!
Update:
I received reply from Codeship team:
this is not possible in our classic infrastructure due to technical limitations. You could move to our Docker Platform, which allows more customization of your build environment.
So we need to use Docker to solve this issue.
Your redirected command is not executed with root privileges; that's why you got the Permission denied error.
Your command means "run echo with root privileges", but the redirection to /etc/hosts is then performed by your normal, non-root shell.
Try this:
sudo sh -c 'echo "Your text" >> /path/to/file'
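An alternative with the same effect is tee, which lets the elevated process do the writing instead of the shell redirection:
echo '127.0.0.1 test.my-app.test' | sudo tee -a /etc/hosts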
We don't allow access via sudo on the build VMs because of security considerations.
However, you can use a service like http://xip.io/ or lvh.me to access your application via DNS names.
$ nslookup codeship.lvh.me
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: codeship.lvh.me
Address: 127.0.0.1
lvh.me will resolve any subdomain to 127.0.0.1; xip.io offers more functionality, which is explained in more detail on its homepage.
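For example, any subdomain can be pointed at the locally running app (a sketch; port 8080 is an assumption about where the app listens):
curl http://test.my-app.lvh.me:8080/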

Creating per-user php5-fpm pools the secure way

When creating per-user php5-fpm pools on an Apache mod_fastcgi setup, which of the following is the most secure and efficient way of granting webserver permissions to the PHP pool?
Option 1:
Set the group to www-data:
listen.owner = username
listen.group = www-data
listen.mode = 0660
user = username
group = www-data
While this works, files created by PHP have their ownership set to username:www-data, while files uploaded via SCP have username:username.
Option 2:
Add www-data to the supplementary group username:
listen.owner = username
listen.group = username
listen.mode = 0660
user = username
group = username
and then run:
usermod -aG username www-data
Which of these options is more secure? You may also share a better method.
I checked the following guides:
http://www.howtoforge.com/php-fpm-nginx-security-in-shared-hosting-environments-debian-ubuntu
http://www.binarytides.com/php-fpm-separate-user-uid-linux/
But they were all written before bug #67060 was discovered and fixed.
I am using the following setup on my LEMP stack (Nginx + PHP-FPM). This should also be applicable to Apache.
PHP-FPM runs several pools as nobody:user1, nobody:user2, ...
Nginx runs as nginx:nginx.
User nginx is a member of each of the user1, user2, ... groups:
# usermod -a -G user5 nginx
File permissions:
root:root drwx--x--x /home
user1:user1 drwx--x--- /home/user1 (1)
user1:user1 rwxr-x--- /home/user1/site.com/config.php (2)
user1:user1 drwxrwx--- /home/user1/site.com/uploads (3)
nobody:user1 rw-rw---- /home/user1/site.com/uploads/avatar.gif (4)
(1) The user's home dir has no x permission for others, so a php-fpm pool running as nobody:user2 will not have access to /home/user1, and vice versa.
(2) The PHP script doesn't have w for group, so it cannot create files in htdocs.
(3) On the uploads dir we manually enable write access for group user1, so the PHP script can put files there. Don't forget to disable the PHP handler for uploads; in nginx this is done by:
server {
    ....
    location ^~ /uploads/ { }
}
but for Apache you should check how to achieve the same (a sketch follows below).
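For Apache, a rough equivalent is to unmap PHP inside the uploads directory (a sketch; the exact directives depend on how PHP is wired up, e.g. mod_fastcgi vs mod_php):
<Directory "/home/user1/site.com/uploads">
    # stop .php files from being passed to the PHP handler
    RemoveHandler .php
    RemoveType .php
</Directory>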
(4) Uploaded files should also have w for group if we want user1 to be able to edit these files later via FTP or SSH (logging in as user1:user1). The PHP code is also editable via FTP, since user1 is its owner.
Nginx will have read access to all users' files and write access to all users' uploads, since user nginx is a member of each of the user1, user2, ... groups.
Don't forget to add nginx to the group of every user you create later. You can also wrap useradd in a script to do it automatically; a sketch follows below.
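For each new account this boils down to two commands (a sketch, following the naming scheme above, where each user's primary group matches the username):
useradd user6
usermod -a -G user6 nginx    # let nginx read this user's files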