How to hide passwords in Airflow logs

I am using Airflow version 2.2.4 in an AKS cluster. In our Airflow logs I can see the username and password for connections. I have already encrypted the passwords using the Fernet key, and I also changed the Airflow config file:
sensitive_var_conn_names = comma,separated,sensitive,names,password,secret,extra,passwd
but I can still see the passwords in the Airflow logs. Can anyone help with how to mask connection passwords in Airflow logs?
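For reference, these masking settings live in the [core] section of airflow.cfg; a minimal sketch of that section (option names as in Airflow 2.2, with hide_sensitive_var_conn_fields at its default of True; the value list here is illustrative):
[core]
# masking must be enabled for the sensitive-name list below to take effect
hide_sensitive_var_conn_fields = True
# connection/variable field names to treat as sensitive, on top of the built-in defaults
sensitive_var_conn_names = comma,separated,sensitive,names,password,secret,extra,passwd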

Related

How can I delete the auto-created superuser of Ignite

I have an Azure Kubernetes Service cluster and deployed the Apache Ignite image on it.
It works well, and I'm using the thin client to connect to Ignite. Authentication has also been enabled.
On the first deployment, Ignite creates a superuser whose name and password are both "ignite".
I created my own user and tested the connection; it succeeded.
I would like to delete the user created by Apache Ignite, but I couldn't do it.
How can I delete that user?
The default superuser can't be removed, but you should be able to change the default password:
ALTER USER "ignite" WITH PASSWORD 'newPassword';
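For example, the statement can be run from the sqlline tool bundled with Ignite over the JDBC thin driver (a sketch; the host and credentials are placeholders for your own setup):
# open a SQL session as the default superuser, then rotate its password
./sqlline.sh -u "jdbc:ignite:thin://127.0.0.1/?user=ignite&password=ignite"
ALTER USER "ignite" WITH PASSWORD 'newPassword';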

Knox does not work after Hive service restart

I use SQL Developer and some third-party jar files for accessing Hive.
Whenever there is a Hive service restart, my connection object won't let me connect to Hive afterwards. My admin team needs to restart the metastore too, and then make a few more config changes; after that I need to remove the cacerts file and add the certificates to cacerts again using Apache Knox.
Have any of you faced similar problems and managed to fix them?
Thanks
LNC
Sorry for the late response here. This sounds like an issue that has since been resolved, caused by HiveServer2 using a random key to sign the cookie that optimizes authentication of each HTTP request for a given session. When HS2 is restarted, a new key is created, and the Knox server continues to send the previously cached cookie, which was signed with the old random key. There should be no reason to mess with cacerts and the like; a simple, if annoying, restart of Knox should suffice. You may also turn off cookie-based authentication, but that will degrade performance.
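If you do want to disable cookie-based authentication despite the performance cost, it is controlled on the HiveServer2 side by a property along these lines in hive-site.xml (a sketch, assuming the stock property name; check against your Hive version):
<property>
  <name>hive.server2.thrift.http.cookie.auth.enabled</name>
  <value>false</value>
</property>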

How to dynamically create Airflow S3 connection using IAM service

My Airflow application is running on an AWS EC2 instance which also has an IAM role. Currently I am creating the Airflow S3 connection using a hardcoded access key and secret key, but I want my application to pick up the AWS credentials from the instance itself.
How can I achieve this?
We have a similar setup: our Airflow instance runs inside containers deployed on an EC2 machine, and we set up the policies for accessing S3 on the EC2 machine's instance profile. You don't need to pick up the credentials on the EC2 machine, because the machine has an instance profile that should already carry all the permissions you need. On the Airflow side we only use the aws_default connection; in its extra parameter we only set the default region, and there aren't any credentials.
Here is a detailed article about instance profiles: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
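Concretely, the aws_default connection stays empty apart from the region; its Extra field can look something like this (a sketch, assuming the Amazon provider's region_name extra key and a placeholder region):
{"region_name": "eu-west-1"}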
The question is answered, but for future reference it is possible to do this without relying on aws_default, purely via environment variables. Here is an example of writing logs to S3 using an AWS connection, so as to benefit from IAM:
AIRFLOW_CONN_AWS_LOG="aws://"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID=aws_log
AIRFLOW__CORE__REMOTE_LOGGING=true
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER="s3://path to bucket"
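Because the aws:// URI carries no credentials, the Amazon provider falls back to boto3's default credential chain, which is what picks up the instance profile. If a specific region is needed, it can reportedly be appended as a query parameter on the same URI (a sketch; behaviour may vary across provider versions, and the region is a placeholder):
AIRFLOW_CONN_AWS_LOG="aws://?region_name=eu-west-1"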

How to secure HDFS on DC/OS without Enterprise

I'm trying to secure an HDFS cluster on open source DC/OS, but it seems that's not an easy thing.
The problem I see in HDFS is that it uses the username of the current system user, so without any form of authentication anyone can simply create a user with a given username and obtain superuser permissions on the cluster.
So I need some form of authentication. IP-based auth would be fine (only clients with certain IPs may connect to HDFS), but I couldn't find out whether there's an option to enable it.
Setting up Kerberos for HDFS is not an option, because running another service just to run another service to run yet another service, and so on, would only create tons of work.
If enabling any viable form of security is impossible, is there any other HDFS-like service for DC/OS that I can use? I need some HA storage to fetch config files and sometimes jars from artifact URIs to run services. I also need a place to store Parquet files from Spark Streaming.
The version of DC/OS HDFS is 2.6.x.
Unfortunately, it seems that Kerberos is the only real form of authentication in HDFS. Without it, HDFS will trust every user.
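For reference, moving off the default "simple" (trust-the-reported-username) mode means enabling Kerberos in core-site.xml along these lines (a sketch of the standard Hadoop properties; principals, keytabs and a KDC still have to be provisioned separately):
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>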

Encrypted password for Kafka SSL setup

I am wondering how to encrypt the password for the SSL setup of a Kafka cluster.
My current setup:
listeners=SSL://:9095, PLAINTEXT://:9094
ssl.keystore.location=keystore.jks
ssl.keystore.password=password
ssl.key.password=phoenix
ssl.truststore.location=keystore.jks
ssl.truststore.password=password
security.inter.broker.protocol=SSL
but I don't want to have a plain-text password here; I'd like this password to be stored encrypted.
I don't think Kafka provides any option for storing an encrypted password in server.properties. The documentation specifically says: "Since we are storing passwords in the broker config, it is important to restrict access via file system permissions."
The following worked for me:
Updating Password Configs Dynamically
Password config values that are dynamically updated are encrypted before storing in ZooKeeper. The broker config password.encoder.secret must be configured in server.properties to enable dynamic update of password configs. The secret may be different on different brokers.
Source: http://kafka.apache.org/documentation/#dynamicbrokerconfigs
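For example, per that dynamic broker configs section, the keystore passwords for a listener can be pushed per broker with kafka-configs.sh, and the values are then stored encrypted in ZooKeeper using password.encoder.secret (a sketch; the broker id, listener name, bootstrap server and passwords are placeholders, and the exact config keys should be checked against your Kafka version):
# server.properties on every broker: secret used to encrypt dynamic password configs
password.encoder.secret=changeMeSecret

# then push the keystore passwords as a per-broker dynamic config for broker 0
bin/kafka-configs.sh --bootstrap-server localhost:9094 \
  --entity-type brokers --entity-name 0 --alter \
  --add-config 'listener.name.ssl.ssl.keystore.password=password,listener.name.ssl.ssl.key.password=phoenix'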