I recently started working with Apache Ranger on HDP 2.2.6 and was trying to implement 2 active policy repos (Repo1 and Repo2) for the Ranger Hive plugin. But I found that only the policies from Repo1 were being executed and none of the policies from Repo2 were (even if all the policies in Repo1 were disabled).
Do I need to change some config property in Ranger to activate 2 or more repos at the same time?
Thanks!
The Hive plugin can work with only a single repo at a time. The ranger.plugin.hive.service.name property in ranger-hive-security.xml (located under the hive/hiveserver2 conf directory) corresponds to the repo/service name that the plugin uses to fetch policies from Ranger Admin.
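For reference, the relevant entry in ranger-hive-security.xml looks like this (using Repo1 from your example as the service name):

```xml
<property>
  <name>ranger.plugin.hive.service.name</name>
  <value>Repo1</value>
  <description>Name of the single Ranger service/repo the Hive plugin pulls policies from.</description>
</property>
```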
What is the use case that you are trying to address? Perhaps there is something in existing plugin/policy design that can help you achieve that with single service/repo.
I am trying to apply changes to the Zeppelin environment settings on my EMR cluster after launch, but it is not working. The changes I am trying to add are below, taken from https://zeppelin.apache.org/docs/0.9.0/setup/storage/storage.html
zeppelin-env.sh: (from /etc/zeppelin/conf/ and /etc/zeppelin/conf.dist/)
export ZEPPELIN_NOTEBOOK_S3_CANNED_ACL=BucketOwnerFullControl
Or zeppelin-site.xml: (from /etc/zeppelin/conf/ and /etc/zeppelin/conf.dist/)
<property>
<name>zeppelin.notebook.s3.cannedAcl</name>
<value>BucketOwnerFullControl</value>
<description>Saves notebooks in S3 with the given Canned Access Control List.</description>
</property>
As in the text block, I tried overwriting both files, in conf and conf.dist, and then ran sudo systemctl stop zeppelin followed by sudo systemctl start zeppelin. But whenever I re-enter Zeppelin and look at the configurations page, I can confirm that the changes did not take effect (also tested). I am not sure what is going on.
How do I add environmental or site values on a running EMR? I'd prefer doing it in a paragraph, but that doesn't seem possible. I am using EMR 5.30.1 if that adds any context.
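Concretely, the sequence I ran was roughly this (paths as in the Zeppelin docs above):

```shell
# Append the canned-ACL export to both config locations
echo 'export ZEPPELIN_NOTEBOOK_S3_CANNED_ACL=BucketOwnerFullControl' | sudo tee -a /etc/zeppelin/conf/zeppelin-env.sh
echo 'export ZEPPELIN_NOTEBOOK_S3_CANNED_ACL=BucketOwnerFullControl' | sudo tee -a /etc/zeppelin/conf.dist/zeppelin-env.sh
# Restart Zeppelin so it re-reads its configuration
sudo systemctl stop zeppelin
sudo systemctl start zeppelin
```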
Update: Zeppelin 0.8.2 does not support the cannedAcl setting. Support started with Zeppelin 0.9, which ships with EMR 5.33.0. BUT it still does not work.
I can use the AWS API to upload a file to the other account's bucket, and it works fine when I add the ACL there, but not in Zeppelin, even when the change is reflected in the Configuration page. And this goes for Zeppelin 0.9 and 0.10. What is going on? Why does this not work? I'm doing a simple df.write.parquet.
I am new to Apache Ranger and the big data field in general. I am working on an on-prem big data pipeline. I have configured resource-based policies in Apache Ranger (ver 2.2.0) using the Ranger Hive plugin (Hive ver 2.3.8) and they seem to be working fine. But I am having problems with tag-based policies and would like someone to tell me where I am going wrong. I have configured a tag-based policy in Ranger by doing the following -
1. Create a tag in Apache Atlas (e.g. TAG_C1) on a Hive column (column C1) (for this, first install Apache Atlas and the Atlas Hook for Hive, then create the tag in Atlas). This seems to be working fine.
2. Install Atlas plugin in Apache Ranger.
3. Install RangerTagSync (but did not install Kafka).
4. Atlas Tag (TAG_C1) is being seen in Apache Ranger when I create Tag based masking policy in ranger.
5. But the masking is not visible in Hive, which I access via Beeline.
Is Kafka important for Tag based policies in Apache Ranger? What am I doing wrong in these steps?
Kafka is important for both TagSync and Atlas. Kafka is what notifies Ranger TagSync about tag assignments/changes in Apache Atlas.
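If Kafka notifications are not configured on the Atlas side, TagSync never hears about new tags. The relevant entries in atlas-application.properties look roughly like this (hostnames are placeholders):

```
# Use an external Kafka rather than the embedded one
atlas.notification.embedded=false
atlas.kafka.bootstrap.servers=kafka-host:9092
atlas.kafka.zookeeper.connect=zk-host:2181
```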
I am planning to use Apache Ranger for authorization of my HDFS file system. I have a question on the capability of the Apache Ranger plugin. Does the HDFS plugin for Apache Ranger offer more security features than just managing HDFS ACLs? From the limited understanding I gathered by looking into the presentations/blogs, I am unable to comprehend the functions of the HDFS plugin for Apache Ranger.
...and now with the latest version of Apache Ranger it is possible to define "deny" rules.
Previously it was only possible to define rules that specify additional "allow" privileges on top of the underlying HDFS ACLs. Hence, if you had the HDFS ACL for a directory set to "777", everybody could access it, independent of any Ranger HDFS policy on top of that ;)
The Apache Ranger plugin for HDFS provides user access auditing with the following fields:
IP, resource type, timestamp, access granted/denied.
Note that the Ranger plugin does not actually use HDFS ACLs. Ranger policies are added on top of standard HDFS permissions and HDFS ACLs.
You need to be aware that any access rights that are granted on these lower levels cannot be taken away by Ranger anymore.
Apart from that, Ranger gives you the same possibilities as ACLs, plus some more, like granting access by client IP range.
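As a practical consequence, if you want Ranger to be the sole gatekeeper, you have to lock the data down at the HDFS level first and grant everything through Ranger policies (the path below is just an example):

```shell
# Restrict POSIX permissions so nothing is granted outside Ranger;
# with 777 here, everyone would get in regardless of Ranger policies.
hdfs dfs -chmod -R 000 /data/secure
# Verify the resulting permission bits on the directory
hdfs dfs -ls -d /data/secure
```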
Our team is planning to use Gerrit, so to get introduced I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it, but we would actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and initiate a fresh setup on the same machine? (I tried extracting the jar in a different folder, but I faced some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the H2 database anymore, but some MySQL or PostgreSQL server, aren't you?) plus the directory where Gerrit is running (the -d parameter, see docs). Additionally, remove the git repos if you configured them to be located on a different path.
Then all your data is gone and you can start from scratch.
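The steps above can be sketched as follows (site path and database name are assumptions; substitute your own):

```shell
# Stop the running Gerrit instance
/opt/gerrit/bin/gerrit.sh stop
# Drop the review database (MySQL example; adjust for PostgreSQL)
mysql -u root -p -e 'DROP DATABASE reviewdb;'
# Remove the site directory (the -d path) and any separately located git repos
rm -rf /opt/gerrit
# Re-initialize a fresh site from the war file
java -jar gerrit.war init -d /opt/gerrit
```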
I want to know if it's possible to create a JDBC realm configuration in GlassFish 3.1 without the admin console, like the creation of a data source with glassfish-resources.xml.
When developers download my Git repository, they don't like to configure GlassFish; it should be configured at deployment time.
Best regards
Mounir
I'd create a shell script or batch file which runs the required asadmin commands.
Here you can find a complete example: Creating JDBC Objects Using asadmin
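A sketch of such a script (pool name, realm name, and table/column names are assumptions; adjust them to your schema):

```shell
# Create a connection pool and JDBC resource for the realm to use
asadmin create-jdbc-connection-pool \
  --datasourceclassname org.postgresql.ds.PGSimpleDataSource \
  --restype javax.sql.DataSource \
  --property user=app:password=secret:databaseName=appdb:serverName=localhost:portNumber=5432 \
  AppPool
asadmin create-jdbc-resource --connectionpoolid AppPool jdbc/appDS
# Create the JDBC realm itself against that data source
asadmin create-auth-realm \
  --classname com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm \
  --property jaas-context=jdbcRealm:datasource-jndi=jdbc/appDS:user-table=users:user-name-column=username:password-column=password:group-table=groups:group-name-column=groupname:digest-algorithm=SHA-256 \
  jdbc-realm
```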
(Btw, the DTD of the GlassFish Resources Descriptor does not contain any realm-related tag (including create-auth-realm).)