Issue with spring.cloud.bootstrap.enabled when enabling cloud config with the master 2.4 version - spring-cloud-config

I upgraded my Spring Boot application to the master POM 2.4 version and use cloud config with the property spring.cloud.bootstrap.enabled=true. I have the DB password encrypted in the cloud properties, but by the time the DB properties are used my encryption framework is not yet available, so the application eventually fails with an invalid username and password. (I have my own encryption service.)
I would like the cloud config properties to be loaded after my encryption service is available, but spring.cloud.bootstrap.enabled makes them load first, at application startup. Before I upgraded to the master POM I was not using spring.cloud.bootstrap.enabled, so I didn't have this issue; adding the property changed the loading order, which is what I am running into. Any help will be greatly appreciated. Thanks

so by the time i use the db properties i don't have my encryption framework available
Use the @DependsOn annotation on the bean that uses the DB properties to make it depend on the encryption framework bean.
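A minimal sketch of that approach, assuming the encryption framework is registered as a bean named encryptionService (all names here are illustrative):

import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.core.env.Environment;

@Configuration
public class DataSourceConfig {

    // Force the encryption service to initialize before the DataSource,
    // so the encrypted password can be decrypted when the pool is created.
    @Bean
    @DependsOn("encryptionService")
    public DataSource dataSource(Environment env) {
        return DataSourceBuilder.create()
                .url(env.getProperty("spring.datasource.url"))
                .username(env.getProperty("spring.datasource.username"))
                .password(env.getProperty("spring.datasource.password"))
                .build();
    }
}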

Related

H2-console in r2dbc-h2 driver

I am using the R2DBC-H2 driver, and my URL is spring.r2dbc.url=r2dbc:h2:mem:///customer
With this configuration, Spring Boot starts fine; however, I cannot access the h2-console.
Does anybody know why, and how I can fix it?
If I understand the source code of H2ConsoleAutoConfiguration correctly, the H2 console auto-configuration in Spring Boot does not work in a reactive environment.
...
@ConditionalOnWebApplication(type = Type.SERVLET)
...
public class H2ConsoleAutoConfiguration {
You can confirm this yourself by changing the type of your web application to SERVLET (for example, by adding spring-boot-starter-web as a dependency), which will activate the route to the H2 console (if enabled in the application properties). The h2-console endpoint will then start working again; see the dependency snippet below.
As the whole code seems very servlet-specific, I don't know how to properly fix this problem.
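For reference, switching the application type to SERVLET is just a matter of adding the standard starter dependency to the POM:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>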
The H2 console depends on traditional JDBC drivers and is not compatible with the Spring WebFlux stack.
If you are developing a WebFlux application, you can use H2 as a standalone database and use the H2 console freely:
Follow the official Getting Started guide to start the H2 database and the H2 console.
Set your spring.r2dbc.url to the URL of the database you started in the first step.
NOTE: Do not use an in-memory DB here.
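A minimal sketch of the first step, using H2's built-in Server tool (ports are just examples):

import org.h2.tools.Server;

public class StandaloneH2 {
    public static void main(String[] args) throws Exception {
        // TCP server the application connects to
        Server tcp = Server.createTcpServer("-tcpPort", "9092").start();
        // Web server hosting the H2 console UI
        Server web = Server.createWebServer("-webPort", "8082").start();
        System.out.println("TCP: " + tcp.getURL() + ", console: " + web.getURL());
    }
}

Your spring.r2dbc.url would then point at that server, e.g. r2dbc:h2:tcp://localhost:9092/~/customer (assuming, as this answer implies, that the r2dbc-h2 driver accepts H2 TCP URLs).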

Oozie Java Action access to Hive Server 2 (Kerberized) using delegation token

Currently I am having an issue and really need some help.
We are trying to kerberize our Hadoop cluster, including HiveServer2 and Oozie. My Oozie job spins off a Java action on a data node which tries to connect to the kerberized HiveServer2.
There is no user Kerberos keytab available for authentication, so I can only use the delegation token passed by Oozie to the Java action to connect to HiveServer2.
My question is: is there any way I can use a delegation token in an Oozie Java action to connect to HiveServer2? If so, how can I do it through Hive JDBC?
Thanks
Jary
When using Oozie in a kerberized cluster...
for a "Hive" or "Pig" Action, you must configure <credentials> of
type HCat
for a "Hive2" Action (just released with V4.2) you must configure
<credentials> of type Hive2
for a "Java" action opening a custom JDBC connection to HiveServer2,
I fear that Oozie cannot help -- unless there is an undocumented hack that would make it possible to reuse this new Hive2 credential?!?
Reference: Oozie documentation about Kerberos credentials
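For the Hive2 case above, the workflow declares the credential and references it from the action, roughly like this (host and principal are placeholders):

<workflow-app name="hive2-wf" xmlns="uri:oozie:workflow:0.5">
    <credentials>
        <credential name="hs2-cred" type="hive2">
            <property>
                <name>hive2.jdbc.url</name>
                <value>jdbc:hive2://hs2-host:10000/default</value>
            </property>
            <property>
                <name>hive2.server.principal</name>
                <value>hive/hs2-host@EXAMPLE.COM</value>
            </property>
        </credential>
    </credentials>
    ...
    <action name="my-hive2-action" cred="hs2-cred">
        ...
    </action>
</workflow-app>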
AFAIK you cannot use Hadoop delegation tokens with HiveServer2. HS2 uses Thrift for managing client connections, and Thrift supports Kerberos, but Hadoop delegation tokens are something different (Kerberos was never intended for distributed computing, so a workaround was needed).
What you can do is ship a full set of GSSAPI configuration, including a keytab, in your "Java" Action. It works, but there are a number of caveats:
the Hadoop Auth library seems to be hard-wired to the local ticket cache in a very lame way; if you must connect to both HDFS and HiveServer2, then do HDFS first, because as soon as JDBC initiates its own ticket based on your custom conf, Hadoop Auth will be broken (a sketch follows at the end of this answer)
Kerberos configuration is tricky, GSSAPI configuration is worse, and since these are security features the error messages are not very helpful, by design (it would be bad taste to tell hackers why their intrusion attempt was rejected)
use OpenJDK if possible; by default the Sun/Oracle JVM has limitations on cryptography (because of silly and obsolete US export policies), so you must download 2 JARs with "unlimited strength" crypto settings to replace the default ones
Reference: another StackOverflow post that I found really helpful for setting up "raw" Kerberos authentication when connecting to HiveServer2; plus a link about a very helpful "trace flag" for debugging your GSSAPI config, e.g.
-Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
Final warning: Kerberos is black magic. It will suck your soul away. More prosaically, it will have you lose many man-days to cryptic config issues, and team morale will suffer. We've been there.
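To illustrate the first caveat, here is a rough sketch of a Java action that logs in from a shipped keytab, talks to HDFS first, and only then opens the JDBC connection (all names, hosts, and paths are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class HdfsThenHive {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "user.keytab");

        // 1. talk to HDFS while the Hadoop Auth ticket is still intact
        FileSystem fs = FileSystem.get(conf);
        fs.exists(new Path("/tmp"));

        // 2. only then let JDBC initiate its own ticket
        try (Connection c = DriverManager.getConnection(
                "jdbc:hive2://hs2-host:10000/default;principal=hive/hs2-host@EXAMPLE.COM")) {
            // run statements here
        }
    }
}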
As Samson said, a Java action in Oozie requires additional authentication to connect to some "kerberized" services like Hive. This can be achieved in a relatively simple way, without modifications to the application.
Oozie action
<action name="java-action">
    <java>
        ...
        <main-class>some.App</main-class>
        <java-opts>-Djavax.security.auth.useSubjectCredsOnly=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=jaas.conf</java-opts>
        <file>hdfs://some/path/App.jar</file>
        <file>hdfs://some/path/user.keytab</file>
        <file>hdfs://some/path/jaas.conf</file>
    </java>
    ...
</action>
jaas.conf
com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    useTicketCache=true
    principal="USER@EXAMPLE.COM"
    doNotPrompt=true
    keyTab="user.keytab"
};
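With that setup, some.App itself needs no Kerberos-specific code; a sketch of what it might look like (host and principal are placeholders):

package some;

import java.sql.Connection;
import java.sql.DriverManager;

public class App {
    public static void main(String[] args) throws Exception {
        // the java-opts and jaas.conf shipped with the action take care of
        // the Kerberos login; this is plain Hive JDBC
        try (Connection c = DriverManager.getConnection(
                "jdbc:hive2://hs2-host:10000/default;principal=hive/hs2-host@EXAMPLE.COM")) {
            // run queries here
        }
    }
}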

DataSource naming JBoss EAP 6.2 vs WebLogic

I am porting a suite of related applications from WebLogic to JBoss EAP 6.2.
I have set up a data source connection using the JBoss command line interface and hooked it to an Oracle database. This data source has a name of "mydatasource" and a JNDI name of "java:jboss/datasources/mydatasource", as per JBoss standards. I can test and validate this database connection.
However, when I try to port the code and run it, the connection doesn't work. The code that worked in WebLogic was simply:
InitialContext ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup(dataSource);
with a value in dataSource of "mydatasource".
This worked in WebLogic, but in JBoss it throws a NameNotFoundException:
javax.naming.NameNotFoundException: mydatasource-- service jboss.naming.context.java.mydatasource
Clearly there is a difference in how the InitialContext is set up between the two servers.
But this port involves a large number of small applications, all of which connect to the datasource via code like that above. I don't want to rewrite all that code.
Is there a way through configuration (InitialContextFactory, maybe) to define the initial context such that code like that above works without rewriting? Or is there another way of naming the datasource that JBoss will accept, so that such code works unchanged?
Or must we bite the bullet and accept that this code needs a rewrite?
Update: Yes, I know that simply passing "java:jboss/datasources/mydatasource" to the InitialContext lookup solves the problem, but I am looking for a solution via configuration, rather than via coding if there is such a solution.
The way to do this correctly through configuration is to use
java:comp/env/jdbc/myDataSource
then use a resource-ref in web.xml to map it to the declared datasource, and use weblogic.xml or jboss-web.xml to map it to the real one (see the sketch below).
In the WebLogic admin console, when you define the datasource it can be jdbc/realDataSource.
JNDI path Tomcat vs. Jboss
For weblogic http://docs.oracle.com/cd/E13222_01/wls/docs103/jdbc_admin/packagedjdbc.html
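A sketch of that mapping with placeholder names: the code looks up java:comp/env/jdbc/myDataSource, web.xml declares the logical reference, and jboss-web.xml binds it to the real JNDI name.

<!-- web.xml -->
<resource-ref>
    <res-ref-name>jdbc/myDataSource</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>

<!-- jboss-web.xml -->
<jboss-web>
    <resource-ref>
        <res-ref-name>jdbc/myDataSource</res-ref-name>
        <jndi-name>java:jboss/datasources/mydatasource</jndi-name>
    </resource-ref>
</jboss-web>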

How to call Apache NMS from a sandbox?

I'm trying to call Apache ActiveMQ NMS version 1.6.0 from my code ('IntPub'), which must run in a sandbox in a .NET 4.0 environment for security reasons. The program that creates the sandbox makes my code 'partially trusted' and therefore 'security-transparent', which seems to mean that it can't create a ConnectionFactory (see the error log below) because NMS appears to be 'security-critical'. Here's the code that's causing this error:
connecturi = new Uri("tcp://my.server.com:61616");
var connectionFactory = new ConnectionFactory(connecturi);
I also tried this instead with similar results:
connecturi = new Uri("activemq:tcp://my.server.com:61616");
var connectionFactory = NMSConnectionFactory.CreateConnectionFactory(connecturi);
Since I can't change the security level of my assembly (the sandbox prevents it), is there a way to make NMS run as 'safe-critical' so it can be called by 'security-transparent' code? Would I have to recompile it to do so, or does NMS do some operation that would never be considered 'safe-critical'?
I appreciate any help or suggestions...
Assembly 'IntPub, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6fa620743b8dc60a' is partially trusted, which causes the CLR to make it entirely security transparent regardless of any transparency annotations in the assembly itself. In order to access security critical code, this assembly must be fully trusted.Detail:
<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
<ErrorCode>-2147220956</ErrorCode>
<ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic" />
<Message>Unexpected exception from plug-in (Execute): Test.Client: System.MethodAccessException: Attempt by security transparent method 'Test.Client.Execute(System.IServiceProvider)' to access security critical method 'Apache.NMS.ActiveMQ.ConnectionFactory..ctor(System.Uri)' failed.
From the error message attributes, it looks like you're running a Dynamics CRM 2011 plugin in sandbox mode, which has some very specific rules about what you can and can't do. In particular, you're only allowed to make network connections via HTTP and HTTPS, so attempting raw TCP sockets will definitely fail.
Take a look at this MSDN page on Plug-in Isolation, Trusts, and Statistics. It looks like there may be a way to relax the network restrictions by modifying a system registry entry to include tcp, etc., in the regex value. Below is an excerpt from the page. Note: I have not done this myself, so I can't say for sure it'll work.
Sandboxed plug-ins and custom workflow activities can access the network through the HTTP and HTTPS protocols. This capability provides support for accessing popular web resources like social sites, news feeds, web services, and more. The following web access restrictions apply to this sandbox capability.
Only the HTTP and HTTPS protocols are allowed.
Access to localhost (loopback) is not permitted.
IP addresses cannot be used. You must use a named web address that requires DNS name resolution.
Anonymous authentication is supported and recommended. There is no provision for prompting the logged on user for credentials or saving those credentials.
These default web access restrictions are defined in a registry key on the server that is running the Microsoft.Crm.Sandbox.HostService.exe process. The value of the registry key can be changed by the System Administrator according to business and security needs. The registry key path on the server is:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\SandboxWorkerOutboundUriPattern
The key value is a regular expression string that defines the web access restrictions.
The default key value is:
"^http[s]?://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";*
By changing this registry key value, you can change the web access for sandboxed plug-ins.
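For example (untested, purely illustrative), widening the scheme prefix of that pattern to also admit tcp would look something like:

"^(http[s]?|tcp)://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";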

How to use java.util.logging in Weblogic?

I have an application that was migrated from GlassFish to WebLogic, and it uses java.util.logging as its logging framework.
The only way I have found to make the logs work is to edit the JVM's logging.properties file and restart the server. This solution is awkward and causes problems, because the log is written to a file different from the standard WebLogic ones, so we have to look in too many places for a log in a clustered environment. Besides, for some reason this does not work on some Windows systems.
Is there a way to keep using standard Java logging to write messages to WebLogic's standard log files? I tried the instructions on this page, but that doesn't work either.
WebLogic Server ships with a JDK logging handler which will pick up log messages emitted by the JDK logging framework and direct them into the WebLogic Server logging system.
In logging.properties, add the ServerLoggingHandler to the handlers and set the default logging level for new ServerLoggingHandler instances:
handlers = weblogic.logging.ServerLoggingHandler
weblogic.logging.ServerLoggingHandler.level = ALL
http://docs.oracle.com/cd/E14571_01/web.1111/e13739/logging_services.htm#CHDBBEIJ
To direct the JDK logging framework to use that logging.properties file, use the standard system property java.util.logging.config.file. With WebLogic Server, this can easily be accomplished by setting the JAVA_OPTIONS environment variable to the corresponding value.
$ export JAVA_OPTIONS="-Djava.util.logging.config.file=/Users/xxx/Projects/Domains/wls1035/logging.properties"
Some more hints here: http://buttso.blogspot.de/2011/06/using-slf4j-with-weblogic-server.html
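Once that is in place, application code keeps using plain java.util.logging, and the messages end up in the standard WebLogic server log. For example (class name is illustrative):

import java.util.logging.Level;
import java.util.logging.Logger;

public class CustomerService {
    private static final Logger LOG = Logger.getLogger(CustomerService.class.getName());

    public void register(String name) {
        LOG.info("registering customer " + name);
        LOG.log(Level.FINE, "details for {0}", name);
    }
}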