Spring Cloud Config - Multiple Composite Repositories? - spring-cloud-config

Is it possible configure Spring Cloud Config with multiple composite repositories? Our setup uses multiple team-based repositories:
spring:
  cloud:
    config:
      server:
        git:
          repos:
            teamA:
              cloneOnStart: true
              pattern: teama-*
              searchPaths: '{profile}'
              uri: file:///etc/config/teama
            teamB:
              cloneOnStart: true
              pattern: teamb-*
              searchPaths: '{profile}'
              uri: file:///etc/config/teamb
We now want each team to pull from 2 different git repositories. I think multiple composite repositories (https://cloud.spring.io/spring-cloud-config/single/spring-cloud-config.html#composite-environment-repositories) are what we want, but I can't figure out how to combine them, or whether it is even possible.
Clarification:
I want each team to pull configuration data from two repos instead of just one. In pseudo code:
spring:
  cloud:
    config:
      server:
        git:
          repos:
            teamA:
              repo1:
                key1: repo1
                key2: repo1
              repo2:
                key1: repo2
                key2: repo2

In this case you can use a composite configuration. The following configuration works for me; the client service picks up properties from both repositories:
spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          - type: git
            cloneOnStart: true
            uri: https://github.com/..../test-config
          - type: git
            cloneOnStart: true
            uri: https://github.com/..../test-config-2
You may need to configure 'searchPaths' for each repository to suit your needs. The composite profile must be active (it can be one of several active profiles).
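Applied to the team-based setup in the question, a minimal sketch might look like the following; the URIs, the searchPaths values, and the idea of listing both of a team's repositories as separate composite entries are assumptions to adapt, not a tested configuration:
spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          # first repository for team A (placeholder URI)
          - type: git
            cloneOnStart: true
            searchPaths: '{profile}'
            uri: file:///etc/config/teama-repo1
          # second repository for team A (placeholder URI)
          - type: git
            cloneOnStart: true
            searchPaths: '{profile}'
            uri: file:///etc/config/teama-repo2
Note that a plain composite list is consulted for every request, so the per-team pattern routing from the original git.repos setup is lost; whether pattern-based repos can be nested inside a composite git entry is something to verify against your Spring Cloud Config version.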

Related

enabling dashboards for filebeat

I am trying to develop more visibility around AWS. I'd really like to use the prebuilt dashboards that come with filebeat, but I seem to constantly run into issues with the visualizations for elb and vpcflow logs. My configuration looks like this:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:9243"
  protocol: "https"
  username: "kibana_user"
  password: "kibana_password"
setup.dashboards.enabled: true
setup.dashboards.directory: ${path.config}/kibana
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  indices:
    - index: "cloudtrail-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.cloudtrail"
    - index: "elb-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.elb"
    - index: "vpc-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.vpc"
processors:
  - add_fields:
      target: my_env
      fields:
        environment: development
In my dashboards directory I changed the filebeat-* index to vpc-* for Filebeat-aws-vpcflow-overview.json, cloudtrail-* for filebeat-aws-cloudtrail.json, and elb-* for Filebeat-aws-elb-overview.json. The cloudtrail dashboard works just fine; I only run into issues with the elb and vpcflow visualizations. None of the elb request visualizations work, and the top IP addresses visualization for vpcflow logs does not work either.
Any help with this would be greatly appreciated.
For this particular situation, if you don't use the default filebeat-* index there are issues getting the prebuilt dashboards to spin up. I dropped the custom indexing from my configuration and was able to get the dashboards to load properly.
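For reference, a minimal sketch of what the output section looks like after dropping the custom indices, so events land in the default filebeat-* index the prebuilt dashboards expect; hosts and credentials are placeholders:
output.elasticsearch:
  hosts: ["localhost:9200"]   # placeholder host
  protocol: "https"
  username: "elastic_user"    # placeholder credentials
  password: "password"
  # no 'indices' section: events go to the default filebeat-* index,
  # which the prebuilt elb and vpcflow dashboards expect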

Assign roles to EKS cluster in manifest file?

I'm new to Kubernetes, and am playing with eksctl to create an EKS cluster in AWS. Here's my simple manifest file:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5

metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"

managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyName: my-ssh-key

fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed
I created 2 roles: EKSClusterRole for cluster management and EKSWorkerRole for the worker nodes. Where do I use them in the file? I'm looking at the eksctl Config file schema page and it's not clear to me where in the manifest file to use them.
As you mentioned, it's in the managedNodeGroups docs:
managedNodeGroups:
  - ...
    iam:
      instanceRoleARN: my-role-arn
      # or
      # instanceRoleName: my-role-name
You should also read about:
Creating a cluster with Fargate support using a config file
AWS Fargate
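As a hedged sketch of where both roles could go (the ARNs are placeholders, and using serviceRoleARN for the cluster role is an assumption to verify against the eksctl schema): the cluster-management role sits under the top-level iam section, while the worker-node role sits under the node group's iam section.
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5

metadata:
  name: sandbox
  region: us-east-1

iam:
  # cluster-management role, e.g. your EKSClusterRole (placeholder ARN)
  serviceRoleARN: arn:aws:iam::111122223333:role/EKSClusterRole

managedNodeGroups:
  - name: ng-sandbox
    iam:
      # worker-node role, e.g. your EKSWorkerRole (placeholder ARN)
      instanceRoleARN: arn:aws:iam::111122223333:role/EKSWorkerRole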

Spinnaker on Titus cloud provider

Are there any steps for configuring Spinnaker/Halyard to work on a Titus-based cluster? - https://netflix.github.io/titus/
There aren't any steps described in the documentation: https://www.spinnaker.io/setup/install/providers/
Also, check this Github issue: https://github.com/spinnaker/spinnaker.github.io/issues/869
There is a sample config in the github repo:
titus:
  enabled: true
  awsVpc: vpc0 # this is the default vpc used by titus
  accounts:
    - name: titusdevint
      environment: test
      discovery: "http://discovery.compary.com/v2"
      discoveryEnabled: true
      registry: testregistry # reference to the docker registry being used
      awsAccount: test # aws account underpinning
      autoscalingEnabled: true
      loadBalancingEnabled: false # load balancing will be released at a later date
      regions:
        - name: us-east-1
          url: https://myTitus.us-east-1.company.com/
          port: 7104
          autoscalingEnabled: true
          loadBalancingEnabled: false
        - name: eu-west-1
          url: https://myTitus.eu-west-1.company.com/
          port: 7104
          autoscalingEnabled: true
          loadBalancingEnabled: false
https://github.com/spinnaker/clouddriver/tree/master/clouddriver-titus
Right now you'll have to edit clouddriver.yml manually and then update via Halyard.
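One way to do that manual edit is through a Halyard custom profile; this is a hedged sketch assuming the default deployment name and reusing a few of the sample values above:
# ~/.hal/default/profiles/clouddriver-local.yml
# apply afterwards with: hal deploy apply
titus:
  enabled: true
  awsVpc: vpc0
  accounts:
    - name: titusdevint
      environment: test
      registry: testregistry
      awsAccount: test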

how to set local path in yaml configuration file in microservice

Here all the property files are in a GitHub location, so I am able to read them using the uri path. How do I read them if they are on my local filesystem? Can anybody please guide me?
server:
  port: 8888
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/****/******
You need to use Spring Cloud Config in native mode, e.g.
spring:
  cloud:
    config:
      server:
        bootstrap: true
        native:
          search-locations: file:///C:/ConfigData
See the following link for more information:
http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html#_file_system_backend
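Note that the file-system backend is selected by the native profile, so the server usually also needs that profile active; a minimal sketch, assuming the same example search location:
spring:
  profiles:
    active: native                              # enables the file-system (native) backend
  cloud:
    config:
      server:
        native:
          search-locations: file:///C:/ConfigData   # example local path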

Pattern matching for profile in Spring Cloud Config Server

Context
I am attempting to separate configuration information for our applications using the pattern-matching feature in Spring Cloud Config Server. I have created a repo for the "production" environment with a property file floof.yaml, and a repo for the "development" environment with a property file floof-dev.yaml.
My server config:
spring:
  application:
    name: "bluemoon"
  cloud:
    config:
      server:
        git:
          uri: file://${user.home}/tmp/prod
          repos:
            development:
              pattern:
                - */dev
              uri: file://${user.home}/tmp/dev
After starting the server instance, I can successfully retrieve the config content using curl, and can verify which content was served by referring to the "source" element as well as the values for the properties themselves.
Expected Behavior
When I fetch http://localhost:8080/floof/prod I expect to see the source "$HOME/tmp/prod/floof.yaml" and the values from that source, and the actual results match that expectation.
When I fetch http://localhost:8080/floof/dev I expect to see the source "$HOME/tmp/dev/floof-dev.yaml" and the values from that source, but the actual result is the "production" file and contents (the same as if I had fetched .../floof/prod instead).
My Questions
Is my expectation of behavior incorrect? I assume not since there is an example in the documentation in the "Git backend" section that suggests separation by profile is a thing.
Is my server config incorrectly specifying the "development" repo? I turned up the logging verbosity in the server instance and saw nothing in there that called attention to itself in terms of misconfiguration.
Are the property files subject to a naming convention that I'm not following?
I had the same issue. Here is how I resolved it:
spring cloud config pattern match for profile
Also, check whether you are using the Brixton.M5 version.
After some debugging of the pattern-matching source code, here is how I resolved the issue. You can choose one of two ways.
application.yml
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: ssh://xxxx#github/sample/cloud-config-properties.git
          repos:
            development:
              pattern: '*/development' ## give in quotes
              uri: ssh://git#xxxgithub.com/development.git
OR
development:
  pattern: xx*/development,*/development
  uri: ssh://git#xxxgithub.com/development.git
Since a value directly after pattern: is not allowed to start with a wildcard ('*'), I first give a generic match (xx*/development); because pattern accepts multiple comma-separated values, the second value (*/development) then matches the profile. Writing pattern: */development on its own produces a YAML error: expected alphabetic or numeric character, but found /.
The reason the profile-pattern git repo was not identified is that, although Spring allows multiple array values for pattern (each beginning with a '-') in the yml file, the pattern matcher was taking the '-' as part of the string to be matched, i.e. it was looking for the pattern '-*/development' instead of '*/development'.
repos:
  development:
    pattern:
      -*/development
      -*/staging
Another issue I observed: I got a parse error on the yml file if I wrote the pattern array entries as '- */development' - note the space after the hyphen (which marks each entry as an array value) followed by a value starting with '*' - with the error: expected alphabetic or numeric character, but found /.
repos:
  development:
    pattern:
      - */development
      - */staging
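For completeness, a hedged sketch of the client side that would exercise the '*/dev' pattern from the question once the server-side pattern is quoted correctly; the application name, profile, and config server URL are example values and assume a bootstrap-based client:
# client bootstrap.yml (example values)
spring:
  application:
    name: floof        # the {application} part of the match
  profiles:
    active: dev        # the {profile} part; "floof/dev" matches the pattern '*/dev'
  cloud:
    config:
      uri: http://localhost:8080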