Second imported config Property file not imported after config server - spring-cloud-config

I have a Spring Cloud Config server and a client Spring Boot app.
In my Cloud Config repository I have:
fhir:
  poc:
    myname: Fred
In my local application.yml I have:
spring:
  application:
    name: hello
  config:
    import:
      - 'configserver:https://my-config-server:8888'
      - classpath:dev.yml
In my dev.yml I have:
fhir:
  poc:
    myname: Joe
The Spring Environment property resolves to Fred.
If I update my local application.yml to:
spring:
  application:
    name: hello
  config:
    import:
      - 'optional:configserver:'
      - classpath:dev.yml
The Spring Environment property resolves to Joe.
Why is dev.yml not overriding the value from the Spring Cloud Config Server?


How can I set kafka ACLs for brokers?

I have some problems configuring Kafka ACLs for brokers.
I'm using the bitnami docker-compose-cluster.yml for my project, and I want to set up authentication for each broker.
I created a kafka_jaas.conf file with this content:
kafkabroker {
  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="******";
};
and added these lines to the docker compose file for each broker:
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT,SASL_PLAINTEXT:PLAINTEXT
KAFKA_CFG_LISTENERS=CLIENT://:29092,EXTERNAL://0.0.0.0:9092,SASL_PLAINTEXT://broker1:9095
KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://broker1:29092,EXTERNAL://******:9092,SASL_PLAINTEXT://localhost:9095
security.inter.broker.protocol=SASL_PLAINTEXT
and this line to server.properties:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
After starting the docker compose, I receive this error for each broker:
[2022-12-06 06:47:19,679] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
broker1 | org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context [java.security.auth.login.config=/opt/bitnami/kafka/config/kafka_jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
broker1 | at org.apache.kafka.common.security.JaasUtils.isZkSaslEnabled(JaasUtils.java:67)
broker1 | at kafka.server.KafkaServer$.zkClientConfigFromKafkaConfig(KafkaServer.scala:79)
broker1 | at kafka.server.KafkaServer.<init>(KafkaServer.scala:149)
broker1 | at kafka.Kafka$.buildServer(Kafka.scala:73)
broker1 | at kafka.Kafka$.main(Kafka.scala:87)
broker1 | at kafka.Kafka.main(Kafka.scala)
broker1 | Caused by: java.lang.SecurityException: java.io.IOException: Configuration Error:
broker1 | Line 2: expected [controlFlag]
Update:
This is the docker compose content:
version: "2"
services:
zookeeper:
image: dockerhub.******/bitnami/zookeeper:3.8
hostname: zookeeper,SASL_PLAINTEXT://localhost:9091
container_name: zookeeper
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
- KAFKA_OPTS="-Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Djava.security.auth.login.config=/opt/bitnami/kafka/config/zookeeper-server.jaas"
volumes:
- ./config:/opt/bitnami/kafka/config
kafka-0:
image: dockerhub.******/bitnami/kafka:3.2
hostname: broker1
container_name: broker1
ports:
- '9092:9092'
volumes:
- ./config/broker1:/bitnami
- ./config/broker1/kafka/config/server.properties:/bitnami/kafka/config/server.properties
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://broker1:29092,EXTERNAL://*****:9092
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_INTER_BROKER_LISTENER_NAME=SASL_PLAINTEXT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_LISTENER_NAME_EXTERNAL_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required user_admin='admin-secret' user_producer='producer-secret' user_consumer='consu>
- KAFKA_CFG_LISTENER_NAME_CLIENT_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CFG_LISTENER_NAME_CLIENT_PLAIN_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required user_broker='broker-secret' username='broker' password='*****';"
- KAFKA_CFG_SUPER_USERS="User:broker;User:admin"
- KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="false"
- KAFKA_CFG_ZOOKEEPER_SET_ACL="true"
- KAFKA_CFG_OPTS="-Djava.security.auth.login.config=/opt/bitnami/kafka/config/kafka-server.jaas"
depends_on:
- zookeeper
Your JAAS file is improperly formatted. The first two options are Kafka broker properties, not JAAS entries.
This is the minimum information you need, but you can add additional users here, too.
Refer to https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="******";
};
Regarding your container, you cannot edit the server properties directly; that's what the environment variables already do. Use KAFKA_CFG_AUTHORIZER_CLASS_NAME instead. Similarly, the inter-broker protocol setting needs to be uppercased and prefixed appropriately for the bitnami container to use it, as sketched below.
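As an illustration, here is a minimal sketch of how those two server.properties entries would map onto bitnami environment variables, assuming the image's documented KAFKA_CFG_ naming convention (dots become underscores, everything uppercased); double-check the exact names against the image's README:
environment:
  # authorizer.class.name from server.properties
  - KAFKA_CFG_AUTHORIZER_CLASS_NAME=kafka.security.authorizer.AclAuthorizer
  # security.inter.broker.protocol from server.properties
  - KAFKA_CFG_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT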
Also, look at the README of the bitnami Kafka container; it already has ways to configure authentication and user accounts via extra environment variables.

Assign roles to EKS cluster in manifest file?

I'm new to Kubernetes and am playing with eksctl to create an EKS cluster in AWS. Here's my simple manifest file:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"
managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyName: my-ssh-key
fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed
I created 2 roles: EKSClusterRole for cluster management, and EKSWorkerRole for the worker nodes. Where do I use them in the file? I'm looking at the eksctl Config file schema page and it's not clear to me where in the manifest file to use them.
As you mentioned, it's in the managedNodeGroups docs:
managedNodeGroups:
  - ...
    iam:
      instanceRoleARN: my-role-arn
      # or
      # instanceRoleName: my-role-name
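For the cluster-management role, there is a top-level iam section as well. Here is a hedged sketch assuming the serviceRoleARN field from the eksctl schema (the ARN below is a placeholder; verify the field name for your eksctl version):
iam:
  # IAM role eksctl passes to EKS for managing the cluster
  serviceRoleARN: arn:aws:iam::123456789012:role/EKSClusterRole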
You should also read about
Creating a cluster with Fargate support using a config file
AWS Fargate
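Since your manifest also defines Fargate profiles, note that each profile can carry its own pod execution role. A hedged sketch, assuming the podExecutionRoleARN field from the eksctl schema (again, the ARN is a placeholder):
fargateProfiles:
  - name: fp-default
    # role assumed by the Fargate infrastructure that runs these pods
    podExecutionRoleARN: arn:aws:iam::123456789012:role/EKSFargatePodExecutionRole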

Spring Cloud Config git refreshRate behaviour

I am trying to set up Spring Cloud Config Server and want to enable auto-refresh of properties based on changes to the backing git repository.
Below is the bootstrap.yml of the server:
server:
  port: 8080
spring:
  application:
    name: my-configserver
  cloud:
    config:
      server:
        bootstrap: true
        git:
          uri: /Users/anoop/Documents/centralconfig
          refreshRate: 15
          searchPaths: '{application}/properties'
    bus:
      enabled: true
As per the documentation, spring.cloud.config.server.git.refreshRate determines
how often the config server will fetch updated configuration data from
your Git backend
I see that the config clients are not notified when the configuration changes. I have not configured a git hook for this and was hoping that just setting the property would do the job.
Anoop
Since you have configured the refreshRate property, whenever a config client (another application) calls the config server to fetch properties (which happens either when the application starts or when it calls the /actuator/refresh endpoint), it may get properties that were fetched up to 15 seconds (your refreshRate) ago.
By default the refreshRate property is set to 0, meaning that any time a client application asks for properties, the config server will fetch the latest from Git.
I don't think there is any property that lets your client apps get notified of changes/commits in Git. This is something your app needs to do by calling the /actuator/refresh endpoint. It can be done programmatically using some scheduler (though I wouldn't recommend that).
By default, the config client just reads the properties in the git repo at startup and not again.
You can work around this by forcing beans to refresh their configuration from the config server.
First, add the @RefreshScope annotation to the beans whose config needs to be reloaded.
Second, enable Spring Boot Actuator in the application.yml of the config client:
# enable dynamic configuration changes using spring boot actuator
management:
  endpoints:
    web:
      exposure:
        include: '*'
Then configure a scheduled job (using the @Scheduled annotation with fixedRate, ...). Of course, fixedRate should conform with the refreshRate from the config server.
Inside that job, execute the request below:
curl -X POST http://username:password@localhost:8888/actuator/refresh
Then your config client will be notified of changes in the config repo every fixedRate interval.
The property spring.cloud.config.server.git.refreshRate is configured in the Config Server and controls how frequently it is going to pull updates, if any, from Git. Otherwise, the Config Server's default behaviour is to connect to the Git repo only when some client requests its configuration.
Git Repo -> Config Server
It has no effect in the communication between the Config Server and its clients.
Config Server -> Spring Boot app (Config Server clients)
Spring Boot apps built with the Config Server client pull all their configuration from the Config Server during startup. To enable them to dynamically reload the initially loaded configuration, you need to perform the following steps in your Spring Boot apps, aka Config Server clients:
Include Spring Boot Actuator in the classpath, for example using Gradle:
implementation 'org.springframework.boot:spring-boot-starter-actuator'
Enable the actuator refresh endpoint via the management.endpoints.web.exposure.include=refresh property
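In application.yml form, that property is simply a direct translation:
management:
  endpoints:
    web:
      exposure:
        include: refresh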
Annotate the beans holding the properties in your code with @RefreshScope, for example:
@RefreshScope
@Service
public class BookService {

    @Value("${book.default-page-size:20}")
    private int DEFAULT_PAGE_SIZE;

    //...
}
With the application running, commit some property change into the repo:
git commit -am "Testing property changes"
Trigger the updating process by sending an HTTP POST request to the refresh endpoint, for example (using httpie and assuming your app is running locally on port 8080):
http post :8080/actuator/refresh
and the response should be something like below, indicating which properties, if any, have changed:
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/vnd.spring-boot.actuator.v3+json
Date: Wed, 30 Jun 2021 10:18:48 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
[
    "config.client.version",
    "book.default-page-size"
]

How to set a local path in a YAML configuration file in a microservice

Here all the properties files are in a GitHub location, so I am able to read them using the uri path. How can I read them if they are on my local system? Can anybody please guide me?
server:
  port: 8888
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/****/******
You need to use Spring Cloud Config in native mode, e.g.
spring:
  cloud:
    config:
      server:
        bootstrap: true
        native:
          search-locations: file:///C:/ConfigData
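Note that the config server only uses the native (file system) backend when it runs with the native profile active, so you would also activate that profile, for example:
spring:
  profiles:
    active: native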
See the following link for more information:
http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html#_file_system_backend

How to set configuration variables and access them in a Rails controller

I have a set of parameters that need to be initialized for ElasticMQ SQS. Right now I have added them in the controller as below:
sqs = RightAws::SqsGen2.new("ABCD", "DEFG", { :server => "localhost", :port => 9324, :protocol => "http" })
What is a better way to set this in the config folder and access it in the controller? Please help.
Create a config file config/config.yml that will store the config variables for the different environments and load it in config/application.rb.
development:
  elasticmq:
    server: localhost
    port: 9324
    protocol: 'http'

production:
  elasticmq:
    server:
    port:
    protocol:

test:
In config/application.rb:
CONFIG = YAML.load_file("config/config.yml")[Rails.env]
The CONFIG constant is now available in the controller, so you can do the following (passing the values directly keeps the port an integer instead of interpolating it into a string):
sqs = RightAws::SqsGen2.new("ABCD", "DEFG",
                            { :server => CONFIG['elasticmq']['server'],
                              :port => CONFIG['elasticmq']['port'],
                              :protocol => CONFIG['elasticmq']['protocol'] })