spring cloud config server with multiple property sources - spring-cloud-config

I have a Spring Cloud Config server reading properties from multiple sources (Git and Vault). For a given path, even if it finds the resource in Git, it still queries Vault and reports a failure because the resource is not available in both sources. My requirement is to look up a resource and, if it is found, not query the other source at all. Please suggest whether this is possible. Thanks
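For context, the composite setup I am describing is declared in the config server roughly like this (a sketch; the Git URI and the Vault host/port are placeholders, and the backends are consulted in the order they are listed):

# application.properties of the config server
spring.profiles.active=composite
spring.cloud.config.server.composite[0].type=git
spring.cloud.config.server.composite[0].uri=https://git.example.com/config-repo.git
spring.cloud.config.server.composite[1].type=vault
spring.cloud.config.server.composite[1].host=vault.example.com
spring.cloud.config.server.composite[1].port=8200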

Related

Spring cloud config precedence property files

Trying to read about the precedence of property sources in Spring Cloud Config, I cannot find my case described, so I cannot figure out which properties take precedence. My case is the following:
I have these property files in the Spring Cloud Config application:
application.properties
application-dev.properties
nameOfApplicationXX.properties
nameOfApplicationXX-dev.properties
I am launching the app nameOfApplicationXX with the dev profile. The issue is that application-dev.properties has one property, and this property is not being overridden by the same property present in nameOfApplicationXX.properties. So does application-dev.properties take precedence over nameOfApplicationXX.properties because the first one specifies a profile?
What is the precedence of each one? Do you know the docs reference? I am not finding it.
Thanks
If I understood your problem correctly, then the solution below is what I found in the Spring Cloud Config reference documentation:
"If the repository is file-based, the server creates an Environment from application.yml (shared between all clients) and foo.yml (with foo.yml taking precedence). If the YAML files have documents inside them that point to Spring profiles, those are applied with higher precedence (in order of the profiles listed). If there are profile-specific YAML (or properties) files, these are also applied with higher precedence than the defaults. Higher precedence translates to a PropertySource listed earlier in the Environment. (These same rules apply in a standalone Spring Boot application.)"
Spring Cloud Config reference link: Documentation
Note: From the problem statement it looks like you are using a file-based backend in the Spring Cloud Config server. The server returns a list of PropertySource entries, one per matching properties file found as a classpath resource, and per the quoted rules the profile-specific files are applied with higher precedence than the defaults, which is consistent with application-dev.properties winning over nameOfApplicationXX.properties in your case.
To override the default implementation, I have implemented the same and the reference code is available on GitHub: Source Code
Not quite the same issue, but it may help you: reference issue
Hope this helps you fix the problem described above.
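If you want to see the exact order your server produces, you can query the config server's environment endpoint directly and inspect the order of the propertySources array in the response; a sketch, assuming the server runs locally on the conventional port 8888:

curl http://localhost:8888/nameOfApplicationXX/dev

Per the rule quoted above, a property source listed earlier in the response has higher precedence.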

Setting RavenDB license for automated deployment in kubernetes

I am trying to deploy a RavenDB instance on a Kubernetes cluster. The deployment should be fully automated, i.e. there should be no need to access the UI to configure something.
I have found plenty of documentation on how RavenDB in a container can be configured, e.g. with command-line arguments via RAVEN_ARGS, environment variables (e.g. RAVEN_License_Eula_Accepted), or a custom settings.json file in a mounted volume.
I have tried all the options above, and they all work, except when trying to set a license. I have tried to set either License directly as a JSON string or License.Path pointing to a license.json file mounted in a volume. Yet whenever I access the UI after deploying the container, I get a notification telling me I need to set a license.
Can anyone tell me how I can get Raven to use the license I provide via the approaches mentioned above?
Thanks
You need to bootstrap the cluster with some kind of operation for the license to be picked up. For example, create a database or call the /admin/cluster/bootstrap endpoint.
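A minimal sketch of such a bootstrap call, assuming an unsecured setup and a service name of ravendb on RavenDB's default port 8080 (both placeholders for your deployment):

curl -X POST http://ravendb:8080/admin/cluster/bootstrap

Creating a database from a client right after deployment works just as well.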

Mule - Copy the directory from HDFS

I need to copy an entire directory (/tmp/xxx_files/xxx/Output), including its subfolders and files, from HDFS (the Hadoop Distributed File System). I'm using the HDFS connector, but it seems it does not support this.
I always get an error like:
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is not a file: /tmp/xxx_files/xxx/Output/
I don't see any option in the HDFS connector for copying the files/directories inside the specified path. It always expects file names to be copied.
Is it possible to copy an entire directory, including its subfolders and files, from HDFS using the MuleSoft HDFS connector?
As the technical documentation of the HDFS connector on the official MuleSoft website states, the code is hosted at the GitHub site of the connector:
The Anypoint Connector for the Hadoop Distributed File System (HDFS) is used as a bi-directional gateway between applications. Its source is stored at the HDFS Connector GitHub site.
What it does not state is that there is also more detailed technical documentation available on the GitHub site.
There you can also find various examples of how to use the connector for basic file-system operations.
The links seem to be broken in the official MuleSoft documentation.
You can find the repository here:
https://github.com/mulesoft/mule-hadoop-connector
The operations are implemented in the HdfsOperations Java class (see also the FileSystemApiService class).
As you can see, the functionality you expect is not implemented; it is not supported out of the box.
You can't copy a whole directory, with its subfolders and files, from HDFS using the HDFS connector without some further effort of your own; a sketch of what that could look like is below.
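If writing a small custom Java component is an option, the plain Hadoop client API can do the recursive copy in a few lines. This is only a sketch outside the connector, and the namenode URI and the local destination path are placeholders for your environment:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class HdfsDirectoryCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Source: the HDFS cluster (URI is a placeholder)
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        // Destination: the local file system of the machine running this code
        FileSystem local = FileSystem.getLocal(conf);
        // Recursively copies /tmp/xxx_files/xxx/Output and everything under it;
        // 'false' keeps the source in place instead of deleting it after the copy
        FileUtil.copy(hdfs, new Path("/tmp/xxx_files/xxx/Output"),
                local, new Path("/data/output"),
                false, conf);
    }
}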

Writing a file using a different account in Spring Integration

Using the Spring Integration file:outbound-channel-adapter, is there a way to specify which user account to use when writing the file? We need to write files from one domain to another. We would like to be able to write them just using file shares, but to do this we need to be able to log in to the remote box with an account in the remote domain.
We can get around this with FTP, but would like to use file writing.
Thanks
I assume you are talking about Windows domains/shares.
There are SMB adapters in the Spring Integration Extensions repository.
The module includes a sample configuration file.
You can build it from GitHub, or there is a snapshot in the Spring snapshot repository.
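As a rough sketch of what that configuration looks like (these beans go inside a <beans> element with the int-smb namespace declared; host, domain, share and credentials are placeholders, and attribute names may differ slightly between versions):

<!-- Session factory carrying the credentials of an account in the remote domain -->
<bean id="smbSessionFactory"
      class="org.springframework.integration.smb.session.SmbSessionFactory">
    <property name="host" value="fileserver.otherdomain.local" />
    <property name="domain" value="OTHERDOMAIN" />
    <property name="username" value="svc-writer" />
    <property name="password" value="secret" />
    <property name="shareAndDir" value="share/" />
</bean>

<!-- Writes messages from the channel to the SMB share instead of the local file system -->
<int-smb:outbound-channel-adapter channel="filesOut"
                                  session-factory="smbSessionFactory"
                                  remote-directory="target-dir" />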

Apache Ivy: Where do I put all these JARs?

I'm trying to convince the higher-ups at my workplace to migrate to Apache Ivy. I've managed to get a few sandbox projects working using Ivy to power the build, and now I have a green light to put together a migration proposal.
We all agree on one thing: we don't want to trust JARs that are located in public directories! I know, I know, a bit paranoid, yes. But we'd like to have a setup where we pull a JAR from a trusted source (either downloading it from the open source project itself, or most likely, gulp, a public repo), and use it for some time before we "certify" it (give it our blessing as a safe artifact to use).
Then we want to have a common repository for all JARs used by our many projects.
My original thinking was to place this repository up in version control (we have an SVN server). But I wasn't sure what best practices dictate. It might make more sense to put our JARs on a file server and FTP to them in the Ivy script.
Either way, SVN (HTTPS) or FTP, all of our servers are authenticated. So, a small number of questions:
Where should we be publishing all of our "certified" JARs (everything from `log4j` to any homegrown JARs we produce)? What do best practices dictate?
The "ivyrep" resolver-type does not take username or passwd atrributes. If our "JAR server" (FTP, SVN, etc.) is authenticated, how do I configure the Ivy scripts to login?
I must echo Brian's recommendation to use a repository manager like Nexus. It's a lot less work in the long run. You'll also discover that the professional version of Nexus enables you to create approval processes around repositories which you plan to use in your build. See the procurement suite functionality.
If, on the other hand, you are determined to build your own repository, then ivy has the tools for the job. You need to become very familiar with the ivy settings file and how it declares and uses resolvers.
If the repository is accessible via HTTPS, then the url resolver should be able to access it. The resolver assumes that each version of an artifact is in a different directory, and you'll need to specify the URL pattern that Ivy should use when accessing the repository:
<url name="two-patterns-example">
    <ivy pattern="http://ivyrep.mycompany.com/[module]/[revision]/ivy-[revision].xml" />
    <artifact pattern="http://ivyrep.mycompany.com/[module]/[revision]/[artifact]-[revision].[ext]" />
</url>
The pattern is fully flexible with respect to how you store the artifacts.
Authentication is also handled in the settings file using the credentials tag.
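For example, a credentials declaration in ivysettings.xml looks roughly like this (username, password and realm are placeholders; the realm must match the one your server announces in its authentication challenge):

<credentials host="ivyrep.mycompany.com" realm="Secured Repository"
             username="builduser" passwd="secret" />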
Finally, the FTP protocol is also supported. It's hard to find in the doco, but it's supported by the vfs resolver.
I think that's enough information on an option I don't recommend :-) Having said that I once created an FTP based repository for managing releases to clients. It's useful to have a tool this powerful :-)
Why not use something like Sonatype's Nexus? I've seen it used for Maven, and I believe it'll work for Ivy.
You can set it up to download from remote repositories into (say) a 'test' repository. You can then evaluate those .jars, and if they're good, upload them into an 'approved' repository for general consumption. There's some authentication surrounding this, but you'd have to evaluate that in greater depth. Certainly you can restrict the uploading into repositories via a username/password pair.
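If you go that route, the Ivy side stays simple. Here is a sketch of pointing Ivy at a Maven-layout "approved" repository hosted in Nexus (the host name and repository path are placeholders):

<ivysettings>
    <settings defaultResolver="approved" />
    <resolvers>
        <!-- Nexus serves Maven-layout repositories, which the ibiblio resolver
             can consume when m2compatible is set -->
        <ibiblio name="approved" m2compatible="true"
                 root="https://nexus.mycompany.com/repository/approved/" />
    </resolvers>
</ivysettings>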