Connecting to multiple namespaces with Spring Cloud Vault - spring-cloud-vault-config

Spring Cloud Vault enables connecting to a namespace with the property "spring.cloud.vault.namespace". I have a use case to read secrets stored in different namespaces. Is this possible with Spring Cloud Vault, or are there any other approaches?
Thanks

The namespace is configured on the client by registering a ClientHttpRequestInterceptor in RestTemplate (respectively an ExchangeFilterFunction in WebClient).
This approach allows client authentications to authenticate against the appropriate namespace without making each authentication mechanism aware of its namespace. Later on, VaultTemplate is configured with the namespaced client to avoid downstream namespace configuration in the VaultTemplate.
If you need to use multiple namespaces, then ideally configure individual SessionManager and VaultTemplate objects per namespace.
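For illustration, a rough sketch of that per-namespace setup, assuming Spring Vault 2.2 or newer (where RestTemplateBuilder and the X-Vault-Namespace header are supported); the hostnames, tokens and namespace names are placeholders, and the exact builder methods may differ in your version:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.vault.authentication.SimpleSessionManager;
import org.springframework.vault.authentication.TokenAuthentication;
import org.springframework.vault.client.RestTemplateBuilder;
import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.core.VaultTemplate;

@Configuration
class MultiNamespaceConfig {

    @Bean
    VaultTemplate teamAVaultTemplate() {
        return namespacedTemplate("team-a", "token-for-team-a"); // placeholder namespace/token
    }

    @Bean
    VaultTemplate teamBVaultTemplate() {
        return namespacedTemplate("team-b", "token-for-team-b"); // placeholder namespace/token
    }

    private VaultTemplate namespacedTemplate(String namespace, String token) {

        RestTemplateBuilder builder = RestTemplateBuilder.builder()
                .endpoint(VaultEndpoint.create("vault.example.com", 8200))
                // Vault Enterprise expects the namespace in this header.
                .defaultHeader("X-Vault-Namespace", namespace);

        // Each namespace gets its own SessionManager so sessions do not leak across namespaces.
        return new VaultTemplate(builder,
                new SimpleSessionManager(new TokenAuthentication(token)));
    }
}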
Depending on your authentication mechanism (e.g. if you use tokens instead of login methods) there are other possible approaches. One is a single SessionManager/VaultTemplate where you store the namespace and token in a ThreadLocal and register a ClientHttpRequestInterceptor on that single VaultTemplate so each call is routed to the desired namespace.
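A rough sketch of that ThreadLocal-based variant, using only the plain Spring ClientHttpRequestInterceptor contract (the X-Vault-Namespace header is what Vault Enterprise expects; the class and method names here are made up for illustration):

import java.io.IOException;

import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

// Holds the target namespace for the current thread.
class VaultCallContext {

    private static final ThreadLocal<String> NAMESPACE = new ThreadLocal<>();

    static void setNamespace(String namespace) {
        NAMESPACE.set(namespace);
    }

    static String getNamespace() {
        return NAMESPACE.get();
    }

    static void clear() {
        NAMESPACE.remove();
    }
}

// Interceptor registered with the RestTemplate backing the single VaultTemplate.
class NamespaceInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
            ClientHttpRequestExecution execution) throws IOException {

        String namespace = VaultCallContext.getNamespace();
        if (namespace != null) {
            // Routes this request into the selected Vault namespace.
            request.getHeaders().set("X-Vault-Namespace", namespace);
        }
        return execution.execute(request, body);
    }
}

Before each Vault call you would set the namespace (and the token, if you track it the same way) via VaultCallContext.setNamespace(...), invoke the operation, and call clear() in a finally block so the value does not leak to other work on the same thread.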

Related

Mulesoft: Force implementation URL to listen to the proxy only (or) secure the implementation URL

How can I force the implementation URL to accept requests from the proxy only in Mulesoft?
Right now the proxy can be secured using client_id, client_secret, etc. However, the implementation URL is not secured. If anyone happens to know the implementation URL, that is a potentially risky affair.
Is there any way we can force the implementation URL to listen to the proxy only?
(or) Can we add policies to the implementation URL?
The Mulesoft documentation on setting-up-an-api-proxy states that the proxy application is nothing but a Mule application mimicking the contractual behavior of the actual service implementation and making service calls to the actual API to fulfill requests. Instead of HTTP, it is recommended to use HTTPS for enhanced security and data integrity. Since Mulesoft suggests HTTPS for the connection between the Mule proxy and the service implementation, one option would be to enforce two-way SSL between your proxy and the implementation, which lets the implementation accept requests only from legitimate clients.
Check the topic enable-two-way-ssl-in-mule for further implementation details.
The second option would be to enable policies on the actual service implementation, i.e. enable api-auto-discovery on your service.
Although you can do this, it adds overhead for the reasons below:
You would be enforcing policies at two layers and doubling the calls to API Manager for policy synchronization, since the service implementation would poll API Manager at a fixed interval to check/fetch the policies.
To enable policy application on the service implementation, the service needs to run on either the api-gateway runtime or Mule runtime 3.8 onwards, as older Mule versions do not support policies.
The implementation can be done by adding the XML snippet below to the API XML.
<api-platform-gw:api apiName="app-${env}" version="${api.version}" flowRef="api-main" create="true" apikitRef="api-config" doc:name="API Autodiscovery" />
apiName is the API definition created in API Manager, from where you can view and manage the API
version is the major version of the API
flowRef maps it to the main flow reference
create is a flag to signify whether the definition should be created in API Manager in case it does not exist
Conclusion:
Enforce two-way SSL for client-server certificate-based authentication
Add auto-discovery to the service implementation so that policies are applied at the implementation layer as well
Mulesoft documentation suggests adding a VPC. When we tested, HTTP was working in the VPC but HTTPS was not.
Since HTTPS was a mandatory requirement and we were unable to achieve it via the VPC, we fixed it in a different way.
We added a custom header in the proxy code and validate that header in the implementation.
This was the fix we rolled out.

Multiple Vault connections with Spring Vault

I am using Spring Vault and need to connect to two Vault servers, one for secrets and another for transit operations. (My cluster has many more transit operations.) How can I set up Spring Vault (also using Spring Cloud Vault) for this configuration?
You have two options:
Use dedicated VaultTemplate objects that are statically configured, each pointing to the Vault endpoint intended for the particular use case (see the first sketch below).
Implement a routing VaultEndpointProvider along with a discriminator (e.g. a ThreadLocal-based hostname). Each time you call an action, you set the discriminator, which is later evaluated by your VaultEndpointProvider to return the appropriate endpoint for that call (see the second sketch below).
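For the first option, a minimal sketch (hostnames and tokens are placeholders, and token authentication is assumed only to keep the example short):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.vault.authentication.TokenAuthentication;
import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.core.VaultTemplate;

@Configuration
class TwoVaultsConfig {

    // Statically configured template for the Vault server holding secrets.
    @Bean
    VaultTemplate secretsVaultTemplate() {
        return new VaultTemplate(
                VaultEndpoint.create("secrets-vault.example.com", 8200),
                new TokenAuthentication("secrets-token"));
    }

    // Statically configured template for the Vault server doing transit operations.
    @Bean
    VaultTemplate transitVaultTemplate() {
        return new VaultTemplate(
                VaultEndpoint.create("transit-vault.example.com", 8200),
                new TokenAuthentication("transit-token"));
    }
}

Since both beans share the type VaultTemplate, inject the one you need with @Qualifier("secretsVaultTemplate") or @Qualifier("transitVaultTemplate").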
Spring Vault uses a pluggable client model; with an upcoming version you'll be able to control RestTemplate creation and hook into UriTemplateHandler, which would be the appropriate class to extend.
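For the second option, a rough sketch of a routing VaultEndpointProvider driven by a ThreadLocal discriminator (VaultEndpointProvider is the Spring Vault interface with a single getVaultEndpoint() method; everything else here is made up for illustration):

import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.client.VaultEndpointProvider;

// Returns a different Vault endpoint depending on a per-thread discriminator.
class RoutingVaultEndpointProvider implements VaultEndpointProvider {

    private static final ThreadLocal<String> TARGET = ThreadLocal.withInitial(() -> "secrets");

    private final VaultEndpoint secrets = VaultEndpoint.create("secrets-vault.example.com", 8200);
    private final VaultEndpoint transit = VaultEndpoint.create("transit-vault.example.com", 8200);

    static void useSecrets() {
        TARGET.set("secrets");
    }

    static void useTransit() {
        TARGET.set("transit");
    }

    @Override
    public VaultEndpoint getVaultEndpoint() {
        return "transit".equals(TARGET.get()) ? transit : secrets;
    }
}

You set the discriminator right before the call (RoutingVaultEndpointProvider.useTransit()), invoke the operation, and reset it afterwards. Keep in mind that tokens and sessions usually differ per server as well, so the SessionManager needs the same kind of routing, which is why two dedicated templates are often the simpler choice.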

How to use my authentication filter with Websocket for Cometd deployed in Jetty?

I am using CometD 3.0.1 with Jetty 9.2.3, using the JSR 356 based WebSocket implementation (and not Jetty's own WebSocket implementation).
I have added some auth filters which basically check for authentication headers on the request. But as the WebSocket upgrade happens as part of the WebSocketUpgradeFilter, is there a way to make authentication work here?
Authenticating via a Filter is the wrong way to accomplish authentication.
Correct Solution:
The servlet spec expects you to set up and configure the authentication and authorization layers of your application using the servlet techniques of both the container and the application metadata (WEB-INF/web.xml).
This means you set up the container-side security, either using the Jetty container specific LoginService, or using a JAAS spec configuration. Then you reference your security realms in your WEB-INF/web.xml and use them. If you have something custom, then you can hook into the LoginService of your choice (even a custom one) and manage it accordingly.
JAAS and LoginService authentication and authorization are applied before all filters and servlets.
In this scenario, you'll have access to the authentication information during the upgrade process, in particular during ServerEndpointConfig.Configurator.modifyHandshake().
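For example, a minimal Configurator sketch using the standard JSR 356 API (the class name and the "user" property key are illustrative):

import java.security.Principal;

import javax.websocket.HandshakeResponse;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpointConfig;

// Runs during the WebSocket upgrade, after container security has authenticated the request.
public class AuthAwareConfigurator extends ServerEndpointConfig.Configurator {

    @Override
    public void modifyHandshake(ServerEndpointConfig sec, HandshakeRequest request,
            HandshakeResponse response) {

        // Populated by container security (LoginService/JAAS), not by a servlet filter.
        Principal user = request.getUserPrincipal();
        if (user != null) {
            // Stash it so the endpoint can read it later from the EndpointConfig user properties.
            sec.getUserProperties().put("user", user);
        }
        super.modifyHandshake(sec, request, response);
    }
}

If getUserPrincipal() returns null here, container security did not run for the upgrade request, which usually means the security constraints in WEB-INF/web.xml do not cover the CometD URL mapping.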
Ugly Hack Solution:
Add the org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter to your WEB-INF/web.xml manually.
This then leaves it up to you to attempt to get your authentication filter to exist before this WebSocketUpgradeFilter in 100% of use cases.
Caution: filter execution ordering is not part of the servlet spec. Be careful with this, as it might seem to be working on your dev machine and then suddenly not work in QA or production. Simply because the Set of filters in the metadata will have a different order in it.
Notes:
Path Spec must be /*
Async Supported must be true
Dispatcher Types must be REQUEST only.
Do not set the contextAttributeKey for that filter
All other WebSocketUpgradeFilter init-params are invalid for JSR-356 use (they are overridden by the various JSR-356 endpoint configurations)

Azure, WCF services and restricting part of the interface?

I could not find a direct answer to this. Basically I have the services MainService and SubService. The idea is that the client software calls some methods in MainService, but SubService calls another part of MainService.
I am deploying to Azure and I want to have two separate interfaces in MainService, one for the client and one for SubService, and I don't want the client software to have any chance of accessing the interface that SubService uses.
Given that I am new to WCF services, I am not sure how to approach this. Do I need multiple web roles for the different interfaces that access the same database (and handle concurrency issues etc. there), or can I somehow include multiple interfaces but restrict their availability by, for example, certificates? I am not exactly sure about Azure firewall rules, but if the interface in MainService that is meant for SubService could be mapped to a separate port behind a firewall rule, that would also be a viable solution.
tl;dr: I need two separate interfaces in a WCF service, one for client software (open to the outer world), one for a sub-system service. Both services are to be run in Azure. What are my options?
You can use standard WCF authorization and authentication. For example: http://msdn.microsoft.com/en-us/library/ff647503.aspx
If you wanted to use Azure Service Bus with relay messaging, you could use some of the authentication and authorization provided by Service Bus. But I'm not sure there's any extra value there compared to just hosting your WCF service in a web role (you'd have to do that in either case, but access to the service would be decoupled from the clients via Service Bus).

Switching between WCF windows authentication and basic authentication

I have an application where it should be possible to choose whether you want to use the built-in security (basic username and password) or Windows authentication. How do I make this possible for my application? Different endpoints for each type?
I concur with your suggestion: the service should expose different endpoints for each authentication type, and the client will programmatically select the appropriate endpoint.
The authentication type is a property of the Transport or Message security (depending upon which mode you are using), and the security settings are a property of the binding configuration.
So you would need to create two separate binding configurations. Then you would create two separate endpoints, each endpoint referencing a different binding configuration.