I'm designing a multi-tenant HBase cluster and want to assign a separate account to each client, so I'm trying to find a way to do client authentication with a username and password. I have looked at the Kerberos approach to client authentication (Secure Client Access to Apache HBase), but it is a bit too complicated for my requirements.
Is there a way I can connect to the cluster like this:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical properties: this is the kind of API I am looking for.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "address_of_zookeeper");
conf.set("hbase.client.username", "root");
conf.set("hbase.client.password", "root");
...
If not, which part of the HBase code would I need to change?
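For comparison, this is roughly what the Kerberos route I mentioned looks like (a sketch based on the "Secure Client Access to Apache HBase" guide; the principal names and keytab path are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "address_of_zookeeper");
// Switch client RPCs to Kerberos (SASL) authentication.
conf.set("hadoop.security.authentication", "kerberos");
conf.set("hbase.security.authentication", "kerberos");
// Server principals as configured on the cluster (placeholders).
conf.set("hbase.master.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");

// Authenticate with a keytab instead of a username/password pair.
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("client@EXAMPLE.COM", "/etc/security/keytabs/client.keytab");

Connection connection = ConnectionFactory.createConnection(conf);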
Update:
I found an HBase issue about this problem: "Is there a plan to support auth connection by username/password like mysql or redis".
Its status was changed to Resolved, but there is no patch attached to it.
Related
We have a big data cluster that we created by directly installing the tarballs from the Cloudera website. We are currently using Hive, Impala, Hadoop, Spark, and Kafka. In the current setup we don't have any authentication or authorization configured.
We are in the process of adding authentication and authorization; however, we decided not to use Kerberos to avoid the hassle of setting up a KDC server.
We were able to set up Sentry for authorization. For authentication we are using Hive custom authentication, in which we validate user credentials through an internal REST API, as described here.
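For context, the Hive side is essentially a custom implementation of org.apache.hive.service.auth.PasswdAuthenticationProvider wired in through hive.server2.authentication=CUSTOM. A simplified sketch, with our internal REST endpoint and HTTP helper replaced by placeholders:

import javax.security.sasl.AuthenticationException;
import org.apache.hive.service.auth.PasswdAuthenticationProvider;

public class RestApiAuthenticator implements PasswdAuthenticationProvider {
  // Placeholder for the internal credential-validation endpoint.
  private static final String AUTH_URL = "https://auth.internal.example.com/validate";

  @Override
  public void Authenticate(String userName, String password) throws AuthenticationException {
    // isValid(...) stands in for the HTTP call we make against the REST API.
    if (!isValid(AUTH_URL, userName, password)) {
      throw new AuthenticationException("Invalid credentials for user: " + userName);
    }
  }

  private boolean isValid(String url, String user, String password) {
    // ... POST the credentials to the REST API over HTTPS and check the response ...
    return false; // stub
  }
}

The class is then referenced from hive-site.xml via hive.server2.custom.authentication.class.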
We are trying to set up a similar authentication mechanism for Impala; however, we have not been able to figure out a way to do custom authentication in Impala.
Please let us know if, apart from LDAP/Kerberos, there is an alternative way to authenticate a user, something equivalent to Hive custom authentication.
What is the best way to secure a connection between an Elasticsearch cluster hosted on Elastic Cloud and a backend, given that we have hundreds of thousands of users and that I want to handle the authorization logic on the backend itself, not in Elasticsearch?
Is it better to create a "system" user in the native realm with all the required read and write access (it looks like the user feature is intended for real end users), or to use other types of authentication (but SAML, PKI, and Kerberos are also end-user oriented)? Or should we use other security means, such as IP-based restrictions?
I'm used to the Elasticsearch service on AWS, where authorization is based on IAM roles, so I'm a bit lost here.
Edit: 18 months later there is still no definitive answer to this. If I had to do it again, I would probably end up using JWT.
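For anyone in the same situation, the "system" user option boils down to something like the sketch below: the backend authenticates to Elastic Cloud with one native-realm credential that has the needed read/write privileges and enforces its own per-user authorization before issuing any request (the endpoint, port, and credentials here are placeholders):

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;

BasicCredentialsProvider credentials = new BasicCredentialsProvider();
credentials.setCredentials(AuthScope.ANY,
    new UsernamePasswordCredentials("backend-system-user", "system-user-password"));

RestClient client = RestClient.builder(
        new HttpHost("my-deployment.es.us-east-1.aws.found.io", 9243, "https"))
    .setHttpClientConfigCallback(httpClientBuilder ->
        httpClientBuilder.setDefaultCredentialsProvider(credentials))
    .build();

// The backend decides what the end user is allowed to do *before* this call;
// Elasticsearch only ever sees the shared system credentials.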
Per https://www.rabbitmq.com/access-control.html, RabbitMQ supports both authentication (who is the user?) and authorization (what can the user do?).
I'm using a rather obscure plugin for authorization already. I was wondering if there was a way to use the HTTP backend ONLY for authentication, because it would gel extremely well with the Django server that this project is using (users on the Django server may be allowed onto the Rabbit server).
Thanks
I have never used it myself, but this plugin should solve your problem:
https://github.com/rabbitmq/rabbitmq-auth-backend-http
This plugin provides the ability for your RabbitMQ server to perform
authentication (determining who can log in) and authorisation
(determining what permissions they have) by making requests to an HTTP
server.
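If I read the docs right, the split you want (HTTP for authentication only, your existing plugin for authorization) is expressed in rabbitmq.conf roughly like this; the authorization backend name and the Django URL below are placeholders:

# Use the HTTP backend only to check who the user is...
auth_backends.1.authn = http
# ...and keep your existing plugin for what they may do (placeholder name).
auth_backends.1.authz = my_existing_authz_backend
# Endpoint on the Django server that the HTTP backend calls to verify credentials (hypothetical URL).
auth_http.user_path = http://django-host:8000/rabbitmq/auth/user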
I have integrated Milton WebDAV with Hadoop HDFS and am able to read/write files to the HDFS cluster.
I have also added the authorization part using Linux file permissions, so only authorized users can access the HDFS server; however, I am stuck on the authentication part.
It seems Hadoop does not provide any built-in authentication, and users are identified only through the Unix 'whoami' command, meaning I cannot set a password for a specific user.
ref: http://hadoop.apache.org/common/docs/r1.0.3/hdfs_permissions_guide.html
So even if I create a new user and set permissions for it, there is no way to tell whether the user is authenticated or not. Two users with the same username but different passwords have access to all the resources intended for that username.
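To illustrate (a sketch with placeholder host and user names): with "simple" authentication any client can just claim a username, and HDFS will trust it:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Nothing checks a password here; the client simply asserts it is "alice".
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("alice");
ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
  Configuration conf = new Configuration();
  conf.set("fs.default.name", "hdfs://namenode.example.com:8020"); // Hadoop 1.x style key
  FileSystem fs = FileSystem.get(conf);
  fs.listStatus(new Path("/user/alice")); // succeeds if "alice" has permission, regardless of who we really are
  return null;
});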
I am wondering if there is any way to enable user authentication in HDFS (either built into a newer Hadoop release or via a third-party tool like Kerberos, etc.).
Edit:
OK, I have checked, and it seems that Kerberos may be an option, but I just want to know if there is any other alternative available for authentication.
Thanks,
-chhavi
Right now Kerberos is the only supported "real" authentication protocol. The "simple" protocol completely trusts the client's reported whoami identity.
To set up Kerberos authentication, I suggest this guide: https://ccp.cloudera.com/display/CDH4DOC/Configuring+Hadoop+Security+in+CDH4
msktutil is a nice tool for creating Kerberos keytabs on Linux: https://fuhm.net/software/msktutil/
When creating service principals, make sure your DNS settings are correct: if you have a server named "host1.yourdomain.com" that resolves to IP 1.2.3.4, then that IP should in turn resolve back to host1.yourdomain.com.
Also note that Kerberos Negotiate authentication headers might be larger than Jetty's built-in header size limit; in that case you need to modify org.apache.hadoop.http.HttpServer and add ret.setHeaderBufferSize(16*1024); in createDefaultChannelConnector(). I had to.
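For reference, my local patch looked roughly like this (against the Jetty 6 based HttpServer in Hadoop 1.x; the surrounding lines may differ slightly in your version):

// org.apache.hadoop.http.HttpServer
import org.mortbay.jetty.Connector;
import org.mortbay.jetty.nio.SelectChannelConnector;

public static Connector createDefaultChannelConnector() {
  SelectChannelConnector ret = new SelectChannelConnector();
  ret.setLowResourceMaxIdleTime(10000);
  ret.setAcceptQueueSize(128);
  ret.setResolveNames(false);
  ret.setUseDirectBuffers(false);
  // Added: Kerberos Negotiate (SPNEGO) headers can exceed Jetty's default header buffer.
  ret.setHeaderBufferSize(16 * 1024);
  return ret;
}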
How does authentication in general (and mutual authentication as a special case) work in MSDTC, and how do I configure mutual authentication for MSDTC?
I have a custom application (an archival solution): a Windows service which, at a configured time, fetches data from an online database and dumps it to a back-end archival database (ideally the online and back-end databases are located on different machines).
I am using TransactionScope and have configured DTC on the client and host machines with no authentication, and it is working fine. However, our client requires us not to use the no-authentication mode and to enable some form of authentication for MSDTC. I have decided to use mutual authentication, though I am not very sure how it works or how to configure it. Any help would be appreciated.