Removing an unnecessary login module in Apache Karaf JAAS

This question was originally posted on the karaf users mailing list, but I didn't get an answer:
http://karaf.922171.n3.nabble.com/Deleting-an-unnecessary-login-module-td4033321.html
I would like to remove a login module (PublickeyLoginModule) from the default karaf JAAS realm.
According to the docs:
http://karaf.apache.org/manual/latest/developers-guide/security-framework.html
“So if you want to override the default security configuration in Karaf (which is used by the ssh shell, web console and
JMX layer), you need to deploy a JAAS configuration with the name name="karaf" and rank="1".”
However, when I do this, new modules are added alongside the existing ones rather than replacing them.
When the blueprint below is loaded, either via the deploy dir or by inclusion in a bundle (built with Maven, with the blueprint at src\main\resources\OSGI-INF\blueprint\context.xml), I get the following:
karaf#root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.properties.PropertiesLoginModule
2 | karaf | org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule
3 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
What I would like to see is either
karaf#root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
Or, if there were a way to explicitly delete a module:
karaf#root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.properties.PropertiesLoginModule
2 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
This is the blueprint:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

    <type-converters>
        <bean class="org.apache.karaf.jaas.modules.properties.PropertiesConverter"/>
    </type-converters>

    <!-- Allow usage of system properties, especially the karaf.base property -->
    <ext:property-placeholder placeholder-prefix="$[" placeholder-suffix="]"/>

    <!-- Property placeholder for the org.apache.karaf.jaas PID -->
    <cm:property-placeholder persistent-id="org.apache.karaf.jaas" update-strategy="none">
        <cm:default-properties>
            <cm:property name="example.group" value="example-group-value"/>
        </cm:default-properties>
    </cm:property-placeholder>

    <jaas:config name="karaf" rank="1">
        <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule"
                     flags="required">
            connection.url = ldap://ldap.example.com:389
            user.base.dn = o=example.com
            user.filter = (uid=%u)
            user.search.subtree = true
            role.base.dn = ou=applications,l=global,o=example.com
            role.filter = (&(objectClass=groupOfUniqueNames)(uniqueMember=*uid=%u*)(cn=${example.group}))
            role.name.attribute = cn
            role.search.subtree = true
            authentication = simple
        </jaas:module>
    </jaas:config>

</blueprint>
karaf#root()> shell:info
Karaf
Karaf version 3.0.0
Karaf home ***
Karaf base ***
OSGi Framework org.apache.felix.framework - 4.2.1
The same issue occurs on Karaf 3.0.1.
I'd welcome any suggestions. Creating a whole new realm is a possibility, but for policy reasons I'd prefer not to have the PublickeyLoginModule visible in the runtime at all.

As a workaround you can try this:
The default karaf realm is registered by the org.apache.karaf.jaas.modules bundle via blueprint.
Find the original JaasRealm service named "karaf" in the service registry and unregister it, then register your own realm using the blueprint above.

Related

Camel FTPS Connection Login Failure

I have an IIS FTPS server set up and am trying to connect from a Camel route, but I'm getting this error:
22 Sep 2019 08:59:51,650 | WARN | Camel (Test) thread #202 -
ftps://test#test1834:21/BO/Salary | FtpConsumer | 248 -
org.apache.camel.camel-core - 2.17.0.redhat-630347 | Cannot
connect/login to: ftps://test#test1834:21. Will skip this poll.
I can connect via FileZilla client and perform any read/write operation.
Credentials have been verified.
Camel-Core version 2.17.0
Is anyone familiar with this issue?
Take a look at the endpoint configuration; you probably need to set the security mode with isImplicit=true.
Note that the "isImplicit=true" option is no longer valid as of Camel 3.7.0; the new option name is "implicit=true".
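The rename is purely an option-name change in the endpoint URI. A minimal sketch of migrating an endpoint string (host, port, and path are the placeholders from the question; real migrations should edit the route definition rather than rewrite strings):

```java
public class FtpsUriMigration {
    // Rewrites the pre-3.7 Camel FTPS option name to the 3.7+ one (sketch only).
    static String migrate(String uri) {
        return uri.replace("isImplicit=", "implicit=");
    }

    public static void main(String[] args) {
        String legacy = "ftps://test@test1834:21/BO/Salary?isImplicit=true";
        System.out.println(migrate(legacy));
    }
}
```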

Javamelody and multiple app and jvm in same node

We have 3 applications (app1/app2/app3) in a cluster (server1/server2), with 2 JVMs (ports 8080/8180) on each node. For example:
http://server1:8080/app1, http://server1:8080/app2,
http://server1:8080/app3
http://server1:8180/app1, http://server1:8180/app2,
http://server1:8180/app3
http://server2:8080/app1, http://server2:8080/app2,
http://server2:8080/app3
http://server2:8180/app1, http://server2:8180/app2,
http://server2:8180/app3
We can't override the path used to record data. It is possible to set app1/app2/app3 in the storage path in web.xml, but then, on the same server, app1 on ports 8080 and 8180 will save files to the same folder.
The -D option alone is not enough either. We can pass JVM-specific parameters, but if we put "-Djavamelody.storage-directory=/tmp/javamelody_my_instance" as ticket 692 on GitHub mentioned, app1 will overwrite app2, or app2 will overwrite app3, and so on; in each case it causes an issue.
Overwriting files is not acceptable, so how can we monitor each app in each JVM? Any ideas?
The response I got on GitHub worked. In each node's configuration, set:
-Djavamelody.storage-directory=C:\Windows\Temp\javamelody_[port_number]
This gives the following folders:
javamelody_8080_app1, javamelody_8180_app1
javamelody_8080_app2, javamelody_8180_app2
javamelody_8080_app3, javamelody_8180_app3
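The per-instance directory naming above can be derived from the JVM's port; a minimal sketch (the base directory and the idea of passing the port per JVM are assumptions, not javamelody API):

```java
public class MelodyStorageDir {
    // Builds a storage directory unique to one JVM instance, so two JVMs
    // on the same host never write monitoring data into the same folder (sketch).
    static String storageDir(String baseDir, int port) {
        return baseDir + "/javamelody_" + port;
    }

    public static void main(String[] args) {
        // Each JVM would pass its own port when building its -D value.
        System.out.println(storageDir("/tmp", 8080));
        System.out.println(storageDir("/tmp", 8180));
    }
}
```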

Connecting to Kerberized solr on cloudera from karaf

I'm trying to connect to Solr (non-cloud), which has Kerberos enabled, from my SolrJ application running in a Karaf container.
With Kerberos disabled, I can connect fine.
With Kerberos enabled, I can connect outside of Karaf by running a simple SolrClient class, but it does not work from within Karaf.
Code:
System.setProperty("java.security.auth.login.config", "<path to jaas.conf file>");
String urlString = "http://<IP>:8983/solr/test";
SolrServer server = new HttpSolrServer(urlString);
SolrQuery squery = new SolrQuery("*:*"); // squery was not defined in the original post
QueryResponse sresponse = server.query(squery);
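For reference, the jaas.conf referenced by java.security.auth.login.config typically contains a Krb5LoginModule entry. A minimal sketch, where the section name, principal, and keytab path are assumptions that must match what the Solr client expects:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/client.keytab"
  principal="client@EXAMPLE.COM"
  storeKey=true
  debug=true;
};
```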
Exception in Karaf on trying to query:
2016-12-15 15:02:17,969 | WARN | l Console Thread | RequestTargetAuthentication | ? ? | 271 - wrap_mvn_org.apache.httpcomponents_httpclient_4.3.2 - 0.0.0 | NEGOTIATE authentication error: No valid credentials provided (Mechanism level: No valid credentials provided (Mechanism level: Invalid option setting in ticket request. (101)))
2016-12-15 15:03:10,731 | ERROR | l Console Thread | Error: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html. Apache Tomcat/6.0.44 - Error report: HTTP Status 401 - Authentication required. This request requires HTTP authentication.

required or requisite jaas LDAPLoginModule not throwing FailedLoginException when user fails authentication

TL;DR: Why does the LDAPLoginModule (apparently) not throw a FailedLoginException when a user fails to be authenticated?
I have overridden the default "karaf" JAAS realm in JBoss Fuse 6.2.[0|1]. My configuration has 2 modules:
an instance of org.apache.karaf.jaas.modules.ldap.LDAPLoginModule to authenticate the user via an LDAP-to-Active-Directory link
MyCustomLoginModule extends AbstractKarafLoginModule - a second module to check for locally-defined roles for an authenticated user.
The latter works fine. However, when the LDAPLoginModule fails to authenticate a user, they are still allowed to pass. This is the case no matter what combination of required/requisite and ordering I use for the 2 modules.
An example of the behavior:
I define my modules like:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0"
           xsi:schemaLocation="
               http://www.osgi.org/xmlns/blueprint/v1.0.0
               http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">
    . . .
    <jaas:config . . . >
        . . .
        <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule"
                     flags="requisite">
            . . . properties herein as commonly seen for ldap-ad . . .
        </jaas:module>
        <jaas:module className="com.abc.xyz.MyCustomLoginModule"
                     flags="requisite">
            . . . nothing shocking in here either . . .
        </jaas:module>
    </jaas:config>
</blueprint>
This blueprint file and MyCustomLoginModule live in a bundle that has been added to a feature, which in turn has been added to etc/org.apache.karaf.features.cfg along with the associated remote Maven repo.
I put "userX=admin" into the flat file that MyCustomLoginModule uses to assign roles.
I try to log in (via the hawtio web console) as userX, but enter the wrong password.
Logged output is like:
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Get the user DN.
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Bind user (authentication).
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Setting up SSL
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Set the security principal for CN=...
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Binding the user.
WARN | LDAPLoginModule | org.apache.karaf.jaas.modules | User userX authentication failed.
javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr:
DSID-0C0903D9, comment: AcceptSecurityContext error, data 52e, v2580]
^^ as expected, LDAP Authentication fails--as per the WARN message and the "52e" error code ^^
HOWEVER, execution continues and I am successfully logged into the hawtio web console as userX!
Alternatively, I can define a user=role mapping in my custom local file for a user that does not even exist in our Active Directory; something simple, like admin=admin. I then go through the same process. This time the LDAP module throws no exceptions, but logs:
WARN | LDAPLoginModule | org.apache.karaf.jaas.modules | User admin not found in LDAP.
but yet again, execution continues and I am successfully logged into the hawtio web console, this time as "admin."
Lastly... Using a valid Active Directory user, but not one defined in my custom, local file, produces expected logging like:
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Get the user DN.
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Setting up SSL
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Looking for the user in LDAP with
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | base DN:XXXXXXXXXX
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | filter: (&(|(samAccountName=<valid-username>)(userPrincipalName=<valid-username>)(cn=<valid-username>))(objectClass=user))
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Found the user DN.
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Bind user (authentication).
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Setting up SSL
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Set the security principal for CN=<valid-username>,...
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Binding the user.
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | User <valid-username> successfully bound.
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Setting up SSL
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | Looking for the user roles in LDAP with
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | base DN:XXXXXXXX
DEBUG | LDAPLoginModule | org.apache.karaf.jaas.modules | filter: (uniqueMember=CN=<valid-username>)
WARN | Authenticator | io.hawt.hawtio-web | Login failed due User <valid-username> has no local roles defined
where that last line is because my module throws a FailedLoginException if the user has no roles defined in the aforementioned custom file
I also noted that if the LDAPLoginModule's configuration is bad (e.g., a bad password is given for the system account that searches LDAP for the user), then it DOES halt the login process by throwing a FailedLoginException like:
WARN | Authenticator | io.hawt.hawtio-web | Login failed due Can't connect to
the LDAP server: [LDAP: error code 49 - 80090308: LdapErr:
DSID-0C0903D9, comment: AcceptSecurityContext error, data 52e, v2580]
(note that this is logged by the Authenticator, not the LDAPLoginModule as above)
...so at length, the question is -- why does the LDAPLoginModule (apparently) not throw a FailedLoginException when a user fails to be authenticated? I'd think that this is what's needed--does anyone disagree? Is there some additional bit of configuration that the LDAPLoginModule needs in order to be effective?
Has anyone else had this issue with JBoss FUSE v6.2.1 or karaf v2.4? Were you able to resolve within that version? If not, was it resolved by up-leveling to a newer version of either?
Thanks,
Hans
Though not an exact answer to the question asked, the following is an effective workaround.
Instead of using LDAPLoginModule directly, create a class that extends it and @Override the login() method, which returns a boolean. That boolean is false if the user being searched for does not exist or has provided an incorrect password. So simply call super.login(), and if the result is false, throw a FailedLoginException.
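A minimal self-contained model of that pattern. BaseLoginModule here is a hypothetical stand-in for the real LDAPLoginModule (which, per the question, returns false from login() on failed authentication instead of throwing):

```java
import javax.security.auth.login.FailedLoginException;
import javax.security.auth.login.LoginException;

public class StrictLoginDemo {
    // Stand-in for LDAPLoginModule: signals failure with a false return
    // value rather than an exception (hypothetical base class).
    static class BaseLoginModule {
        final boolean authenticated;
        BaseLoginModule(boolean authenticated) { this.authenticated = authenticated; }
        public boolean login() throws LoginException {
            return authenticated;
        }
    }

    // The workaround: translate a false return into a FailedLoginException,
    // so that required/requisite flags can actually stop the login chain.
    static class StrictLoginModule extends BaseLoginModule {
        StrictLoginModule(boolean authenticated) { super(authenticated); }
        @Override
        public boolean login() throws LoginException {
            if (!super.login()) {
                throw new FailedLoginException("LDAP authentication failed");
            }
            return true;
        }
    }

    public static void main(String[] args) throws LoginException {
        System.out.println(new StrictLoginModule(true).login());
        try {
            new StrictLoginModule(false).login();
        } catch (FailedLoginException e) {
            System.out.println("halted: " + e.getMessage());
        }
    }
}
```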

Cannot connect java client to cassandra with password authentication enabled

I have a default install of DataStax Enterprise on my MacBook. I was able to create my keyspace and set up all my applications, including using Solr.
I am trying to develop a set of steps to turn on password authentication for our dev cluster.
Thus far I have updated /usr/local/dse/resources/cassandra/conf/cassandra.yaml and changed the following properties:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
I restarted the node and could login and query my keyspace using cqlsh:
cqlsh -u cassandra -p cassandra
At this point I tried setting the credentials on the Cluster builder. The host is cassandra.host=localhost:
Session session = keyspaceToSessionMap.get(keyspace);
if (session == null) {
    Cluster cluster = Cluster.builder().addContactPoints(hosts)
            .withCredentials(username, password)
            //.withSSL()
            .build();
    session = cluster.connect(keyspace);
    keyspaceToSessionMap.put(keyspace, session);
}
I could not successfully connect, however. So I added a new user and was again able to log in via cqlsh, but I still cannot get the Java driver to connect.
cqlsh -u username -p password
Connected to LocalCluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.8.689 | DSE 4.7.3 | CQL spec 3.2.0 | Native protocol v3]
I am using 'com.datastax.cassandra:cassandra-driver-dse:2.1.9' via gradle for the driver.
I always get the following stack trace, and through debugging I can see the username and password are set properly:
Caused by: com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host localhost/127.0.0.1:9042: Username and/or password are incorrect
at com.datastax.driver.core.Connection$8.apply(Connection.java:376)
at com.datastax.driver.core.Connection$8.apply(Connection.java:346)
This seems like it should be simple but I am stumped.
My dependencies graph in relation to cassandra driver contains the following:
+--- com.datastax.cassandra:cassandra-driver-dse:2.1.9
| \--- com.datastax.cassandra:cassandra-driver-core:2.1.9 -> 2.1.8
| +--- io.netty:netty-handler:4.0.27.Final
| | +--- io.netty:netty-buffer:4.0.27.Final
| | | \--- io.netty:netty-common:4.0.27.Final
| | +--- io.netty:netty-transport:4.0.27.Final
| | | \--- io.netty:netty-buffer:4.0.27.Final (*)
| | \--- io.netty:netty-codec:4.0.27.Final
| | \--- io.netty:netty-transport:4.0.27.Final (*)
| +--- com.google.guava:guava:14.0.1 -> 18.0
| \--- com.codahale.metrics:metrics-core:3.0.2
| \--- org.slf4j:slf4j-api:1.7.5 -> 1.7.12
I created the following test which passes.
Cluster cluster = Cluster.builder().addContactPoints("localhost")
        .withCredentials("username", "password")
        //.withSSL()
        .build();
Session session = cluster.connect("keyspace");
Assert.assertNotNull(session);
The only difference I can tell between the two is that "localhost" is a constant rather than an array of size 1.
It turned out I had trailing whitespace in the credentials, and that was the root cause:
Cluster cluster = Cluster.builder().addContactPoints(hosts)
        .withCredentials(username.trim(), password.trim())
        //.withSSL()
        .build();