Can I store keystore files in a git repo and access them via Spring Cloud Config Server? Or does it support only property files?
Spring Cloud Config Server does support serving static files through a URL:
https://cloud.spring.io/spring-cloud-config/reference/html/#_serving_plain_text
After retrieving the file, the keystore can be loaded programmatically: http://tutorials.jenkov.com/java-cryptography/keystore.html#loading-the-keystore
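For example, a minimal sketch of combining the two; the config server URL, keystore name, and password below are placeholders, and depending on your Spring Cloud Config version you may need to request the file as application/octet-stream so it is not handled as text:

    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;
    import java.security.KeyStore;

    public class KeystoreLoader {

        public static KeyStore loadFromConfigServer() throws Exception {
            // Plain file served by the config server at /{name}/{profile}/{label}/{path}
            URL url = new URL("http://localhost:8888/myapp/default/master/keystore.jks");
            URLConnection conn = url.openConnection();
            // Ask for raw bytes so the server does not try to resolve placeholders in the binary file
            conn.setRequestProperty("Accept", "application/octet-stream");
            try (InputStream in = conn.getInputStream()) {
                KeyStore keyStore = KeyStore.getInstance("JKS");
                keyStore.load(in, "changeit".toCharArray()); // keystore password
                return keyStore;
            }
        }
    }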
Looks like it does.
Store the keystore file in the git repo and access it via the /{name}/{profile}/{label}/{path} endpoint (see the "Serving Plain Text" section of the Spring Cloud Config documentation).
I have built a pipeline that is triggered by a Git push on a specific file which contains additional metadata, like the target namespace and version of the Kubernetes manifest to be deployed.
Within an expression I would like to read the artifact using
${ #fromUrl( execution['trigger']['resolvedExpectedArtifacts'][0]['boundArtifact']['reference'] ) }
What I am trying to achieve is a GitOps approach with a set of config files in Git which trigger a pipeline for a parameterized Kubernetes manifest to deploy multiple resources.
When I execute that expression, either by starting the pipeline or using curl, I get a 401 (in the orca logs). The Git credentials are configured using username/password as well as a token, both in the config and in orca-local.yml, but it seems they are not used.
Am I on the wrong path? Is there an easier way to access a file's content in a pipeline?
That helper won't go through any sort of authentication; it expects the endpoint to be open to your Spinnaker instance.
Spinnaker normally treats artifacts as pass-through, so in order to get the contents of the file inside the pipeline you'll have to go through an intermediate stage, such as writing out a property file in a Jenkins stage ( https://www.spinnaker.io/guides/user/pipeline/expressions/#property-files ) or fetching it via a webhook with custom auth headers.
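For the property-file route, the rough idea (file name and keys here are made up for illustration) is to have the Jenkins job write a small key=value file and declare it as the stage's property file, so the values land in the stage context instead of being fetched from Git directly:

    # deploy.properties, produced by the Jenkins job and set as the
    # "Property File" of the Jenkins stage in Spinnaker
    namespace=team-a
    manifestVersion=1.4.2

Downstream stages can then read those keys with expressions along the lines of ${#stage('Jenkins')['context']['namespace']} (the exact access path depends on your Spinnaker version).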
In https://github.com/shauank/spring-cloud/tree/master/spring-cloud-prop, how can I read a property value from email-conf.properties?
Assuming: spring.application.name=reservation and profile=default.
By convention, Spring Cloud will load reservation.properties and application.properties, but I also want to load email-conf.properties. How can I achieve this?
See the Spring Cloud Config documentation: the endpoint /{name}/{profile}/{label}/{path} serves plain text files.
In your case, you can get the content of email-conf.properties via the URL /reservation/default/master/email-conf.properties.
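If you need to read those values programmatically rather than through the normal config import, a minimal sketch (the config server host and port are assumptions) is to fetch the file and load it into java.util.Properties:

    import java.io.StringReader;
    import java.util.Properties;
    import org.springframework.web.client.RestTemplate;

    public class EmailConfLoader {

        public static Properties load() throws Exception {
            // /{name}/{profile}/{label}/{path} on the config server
            String body = new RestTemplate().getForObject(
                    "http://localhost:8888/reservation/default/master/email-conf.properties",
                    String.class);
            Properties props = new Properties();
            props.load(new StringReader(body));
            return props;
        }
    }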
I've got an AppVeyor project set up and working awesomely. Now I want to upload artifacts to S3 for easy hosting. This seems fairly easy, as outlined in the documentation. My question is: where do I put the secret with write permission? I don't want to push it to my public repo for obvious reasons. On Travis I could put it in an environment variable that was never logged. How would I go about this in AppVeyor?
I assume you need to store this in YAML. You can use secure variables. Or you can simply put your secrets in clear text into the S3 deployment configuration in the UI, then save, press Export YAML, and you will get a YAML section with the secrets encrypted.
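As a rough sketch, the exported YAML ends up looking something like the following; the secure: value is the encrypted string AppVeyor generates for you, and the key id, bucket, and artifact names are placeholders:

    deploy:
      provider: S3
      access_key_id: AKIAXXXXXXXXXXXXXXXX
      secret_access_key:
        secure: <encrypted value produced by AppVeyor>
      bucket: my-artifacts-bucket
      artifact: my-package.zip

The same secure: mechanism works for ordinary environment variables, so the secret never appears in clear text in the public repo.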
Using the Spring Integration file:outbound-channel-adapter, is there a way to specify which user account to use when writing the file? We need to write files from one domain to another. We would like to be able to write them just using file shares, but to do this we need to be able to log in to the remote box with an account in the remote domain.
We can get around this with FTP, but would like to use file writing.
Thanks
I assume you are talking about Windows domains/shares.
There are SMB adapters in the Spring Integration Extensions repository.
It includes a sample configuration file.
You can build it from GitHub, or there's a snapshot in the Spring snapshot repo.
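To give a feel for it, here is a sketch of the session factory wiring in Java config; the class and setter names are from the spring-integration-smb extension, while the host, share, and credentials are placeholders for the remote domain account:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.smb.session.SmbSessionFactory;

    @Configuration
    public class SmbConfig {

        @Bean
        public SmbSessionFactory smbSessionFactory() {
            SmbSessionFactory factory = new SmbSessionFactory();
            factory.setHost("fileserver.otherdomain.local"); // remote box
            factory.setPort(445);
            factory.setDomain("OTHERDOMAIN");                // remote Windows domain
            factory.setUsername("svc-writer");               // account in that domain
            factory.setPassword("secret");
            factory.setShareAndDir("shared/outbound/");      // share and directory to write to
            return factory;
        }
    }

The SMB outbound channel adapter then references this session factory instead of writing through the local file: adapter.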
I want to know if it's possible to create a JDBC Realm configuration in GlassFish 3.1 without the admin console, similar to creating a data source with glassfish-resources.xml.
When developers clone my Git repository they shouldn't have to configure GlassFish manually; it should be configured at deployment time.
Best regards
Mounir
I'd create a shell script or batch file which runs the required asadmin commands.
Here you can find a complete example: Creating JDBC Objects Using asadmin
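A sketch of such a script; the pool, resource, and realm names as well as the JDBC and realm properties are placeholders you would adapt, and note that colons inside --property values have to be escaped with a backslash:

    #!/bin/sh
    # Run against a started GlassFish domain, e.g. from a deployment script
    ASADMIN="$GLASSFISH_HOME/bin/asadmin"

    # Connection pool and JDBC resource (what glassfish-resources.xml would otherwise declare)
    "$ASADMIN" create-jdbc-connection-pool \
      --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
      --restype javax.sql.DataSource \
      --property 'user=app:password=secret:url=jdbc\:mysql\://localhost\:3306/appdb' \
      AppPool
    "$ASADMIN" create-jdbc-resource --connectionpoolid AppPool jdbc/appDS

    # JDBC realm (this has no glassfish-resources.xml equivalent, hence asadmin)
    "$ASADMIN" create-auth-realm \
      --classname com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm \
      --property 'jaas-context=jdbcRealm:datasource-jndi=jdbc/appDS:user-table=users:user-name-column=username:password-column=password:group-table=groups:group-name-column=groupname:digest-algorithm=SHA-256' \
      jdbcRealm

Developers then just run this script once against their local domain instead of clicking through the admin console.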
(Btw, the DTD of the GlassFish Resources Descriptor does not contain any realm-related tags (including create-auth-realm).)