Spring Cloud Config git refreshRate behaviour - spring-cloud-config

I am trying to setup Spring Cloud Config Server and want to enable auto refresh of properties based on changes to the backing git repository.
Below is the bootstrap.yml of the server.
server:
  port: 8080
spring:
  application:
    name: my-configserver
  cloud:
    config:
      server:
        bootstrap: true
        git:
          uri: /Users/anoop/Documents/centralconfig
          refreshRate: 15
          searchPaths: '{application}/properties'
    bus:
      enabled: true
As per the documentation, spring.cloud.config.server.git.refreshRate determines "how often the config server will fetch updated configuration data from your Git backend".
I see that the config clients are not notified when the configuration changes. I have not configured a git hook and was hoping that just setting the property would do the job.
Anoop

Since you have configured the refreshRate property, whenever a config client (another application) calls the config server to fetch properties (which happens either when the application starts or when it calls the /actuator/refresh endpoint), it may get properties that are up to 15 seconds (your refreshRate) old.
By default the refreshRate property is set to 0, meaning that any time a client application asks for properties, the config server fetches the latest from Git.
I don't think there is any property that lets your client apps get notified of changes/commits in Git. This is something your app needs to do itself by calling the /actuator/refresh endpoint. That can be done programmatically using some scheduler (though I wouldn't recommend it).

By default, the config client only reads the properties from the Git repo at startup, and not again.
You can work around this by forcing beans to refresh their configuration from the config server.
First, add the @RefreshScope annotation to any bean whose config needs to be reloaded.
Second, enable Spring Boot Actuator in the application.yml of the config client.
# enable dynamic configuration changes using spring boot actuator
management:
  endpoints:
    web:
      exposure:
        include: '*'
Then configure a scheduled job (using the @Scheduled annotation with fixedRate, ...); a minimal sketch follows below. Of course, fixedRate should conform with the refreshRate of the config server.
Inside that job, execute a request like this:
curl -X POST http://username:password@localhost:8888/refresh
Then your config client will pick up changes in the config repo every fixedRate interval.
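For illustration, here is a minimal sketch of such a scheduled job, assuming the client exposes its refresh endpoint locally on port 8080 and that some configuration class carries @EnableScheduling; the class name and URL are made up for the example:
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Hypothetical component; requires @EnableScheduling somewhere in the application.
@Component
public class ConfigRefreshJob {

    private final RestTemplate restTemplate = new RestTemplate();

    // fixedRate is in milliseconds; align it with the server's refreshRate (seconds).
    @Scheduled(fixedRate = 15000)
    public void refresh() {
        // POST with an empty body; the endpoint returns the list of changed property keys.
        restTemplate.postForObject("http://localhost:8080/actuator/refresh", null, String.class);
    }
}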

The property spring.cloud.config.server.git.refreshRate is configured in the Config Server and controls how frequently it is going to pull updates, if any, from Git. Otherwise, the Config Server's default behaviour is to connect to the Git repo only when some client requests its configuration.
Git Repo -> Config Server
It has no effect in the communication between the Config Server and its clients.
Config Server -> Spring Boot app (Config Server clients)
Spring Boot apps acting as Config Server clients pull all configuration from the Config Server during startup. To enable them to dynamically update the initially loaded configuration, you need to perform the following steps in your Spring Boot apps, i.e. the Config Server clients:
Include Spring Boot Actuator in the classpath, for example using Gradle:
implementation 'org.springframework.boot:spring-boot-starter-actuator'
Enable the actuator refresh endpoint via the management.endpoints.web.exposure.include=refresh property
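For instance, the same property expressed in application.yml form (a minimal sketch, mirroring the config style used above):
management:
  endpoints:
    web:
      exposure:
        include: refresh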
Annotate the beans holding the properties in your code with @RefreshScope, for example:
@RefreshScope
@Service
public class BookService {

    @Value("${book.default-page-size:20}")
    private int defaultPageSize;

    //...
}
With the application running, commit some property change into the repo:
git commit -am "Testing property changes"
Trigger the updating process by sending an HTTP POST request to the refresh endpoint, for example (using httpie and assuming your app is running locally on port 8080):
http post :8080/actuator/refresh
and the response should be something like the one below, indicating which properties, if any, have changed:
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/vnd.spring-boot.actuator.v3+json
Date: Wed, 30 Jun 2021 10:18:48 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
[
"config.client.version",
"book.default-page-size"
]

Related

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorized cluster endpoint to a RKE2 cluster created by Rancher" describes how to add one to an existing cluster; however, although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
"authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server I found the network configuration tab in Rancher and was able to enable authorized endpoint. Here is where my success ends.
I tried creating a service account and using its secret for token authorization, but it failed when connecting directly to the API endpoint on the master.
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way so I tried to use a rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA and secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets are located in the cattle-system namespace.
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration, and how can I make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

400 bad request when attempting connection to AWS Neptune with IAM enabled

I am unable to connect to a Neptune instance that has IAM enabled. I have followed the AWS documentation (correcting a few of my silly errors along the way) but without luck.
When I connect via my Java application using the SigV4Signer, and when I use the Gremlin console, I get a 400 Bad Request websocket error.
o.a.t.g.d.Handler$GremlinResponseHandler : Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 400 Bad Request
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:267)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:302)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
When I run com.amazon.neptune.gremlin.driver.example.NeptuneGremlinSigV4Example (from my machine over port-forwarding AND from the EC2 jumphost) I get:
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
I am able to connect to my Neptune instance using the older, deprecated certificate mechanism. I am using an EC2 jumphost instance and port-forwarding.
I believe the SigV4 aspect is working, as in the Neptune audit logs I can see attempts to connect with the aws_access_key:
1584098990319, <jumphost_ip>:47390, <db_instance_ip>:8182, HTTP_GET, [unknown], [unknown], "HttpObjectAggregator$AggregatedFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: CompositeByteBuf(ridx: 0, widx: 0, cap: 0, components=0)) GET /gremlin HTTP/1.1 upgrade: websocket connection: upgrade sec-websocket-key: g44zxck9hTI9cZrq05V19Q== sec-websocket-origin: http://localhost:8182 sec-websocket-version: 13 Host: localhost:8182 X-Amz-Date: 20200313T112950Z Authorization: AWS4-HMAC-SHA256 Credential=<my_access_key>/20200313/eu-west-2/neptune-db/aws4_request, SignedHeaders=host;sec-websocket-key;sec-websocket-origin;sec-websocket-version;upgrade;x-amz-date, Signature=<the_signature> content-length: 0", /gremlin
This is the policy that I created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "neptune-db:*"
      ],
      "Resource": [
        "arn:aws:neptune-db:eu-west-2:<my_aws_account>:*/*"
      ]
    }
  ]
}
I have previously tried with a policy that references my cluster resource id.
I created a new api user with this policy attached as its only permission. (I've tried this twice).
IAM is showing me that the graph-user I created has not successfully logged in (duh).
Seems that the issue is with the IAM set-up somewhere along the line. Is it possible to get more information out of AWS with regards to why the connection attempt is failing?
I am using the most recent release of Neptune and the 3.4.3 Gremlin Driver and console. I am using Java 8 when running the NeptuneGremlinSigV4Example and building the libraries to deploy to the console.
thanks
It appears from the audit log output that the SigV4 Signature that is being created is using localhost as the Host header. This is most likely due to the fact that you're using a proxy to connect to Neptune. By default, the NeptuneGremlinSigV4Example assumes that you're connecting directly to a Neptune endpoint and reuses the endpoint as the Host header in creating the Signature.
To get around this, you can use the following example code that overrides this process and allows you to use a proxy and still sign the request properly.
https://github.com/aws-samples/amazon-neptune-samples/tree/master/gremlin/gremlin-java-client-demo
I was able to get this to work using the following.
Create an SSL tunnel from your local workstation to your EC2 jumphost:
ssh -i <key-pem-file> -L 8182:<neptune-endpoint>:8182 ec2-user@<ec2-jumphost-hostname>
Set the following environment variables:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
export SERVICE_REGION=<region_id> (e.g. us-west-2)
Once the tunnel is up and your environment variables are set, use the following format with the Gremlin-Java-Client-Demo:
java -jar target/gremlin-java-client-demo.jar --nlb-endpoint localhost --lb-port 8182 --neptune-endpoint <neptune-endpoint> --port 8182 --enable-ssl --enable-iam-auth

How to set custom headers with Keycloak Gatekeeper?

I have Keycloak and Keycloak-Gatekeeper set up in OpenShift and it's acting as a proxy for an application that is running.
The application that Keycloak Gatekeeper is proxying requires a custom cookie to be set, so I figured I could use the Gatekeeper's custom header configuration to set it; however, I'm running into issues.
Configuration looks like:
discovery-url: https://keycloak-url.com/auth/realms/MyRealm
client-id: MyClient
client-secret: MyClientSecret
cookie-access-name: my.token
encryption_key: MY_KEY
listen: :3000
redirection-url: https://gatekeeper-url.com
upstream-url: https://app-url.com
verbose: true
resources:
- uri: /home/*
  roles:
  - MyClient:general-access
headers:
  Set-Cookie: isLoggedIn=true
After re-deploying and running through the auth flow, the upstream URL/application is not receiving the custom header. I tried with multiple headers (key/value) but can't seem to get it working or find where that header is being injected in the flow.
I've also checked logs and haven't been able to find anything super useful.
Sample Gatekeeper Config
Gatekeeper Custom Headers Docs
Any suggestions/ideas on how to get this working?
Remove Set-Cookie and simply add:
headers:
  isLoggedIn: true
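For context, a minimal sketch of the resulting configuration, assuming headers sits at the top level of the Gatekeeper config file as in the sample config linked in the question:
headers:
  isLoggedIn: true
resources:
- uri: /home/*
  roles:
  - MyClient:general-access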

How to disable admin port in DropWizard while using gzip server mode/type

I am trying to disable the admin port in Dropwizard while using the 'gzip' server mode.
I know this is possible with the 'simple' server; below is the .yml configuration:
server:
  type: simple
but I want to disable the admin port while using the gzip server mode:
server:
  gzip:
    bufferSize: 8KiB
NOTE: I can't use the 'simple' server, as we have a dependency on 'gzip'.
I am out of ideas now; any help is really appreciated.
The default Dropwizard config includes an admin connector. To prevent this, you need to explicitly tell it not to include any admin connectors:
server:
  gzip:
    bufferSize: 8KiB
  adminConnectors: []
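If you also want to pin down the application connector explicitly, a fuller sketch of the same default server type could look like this (the port is an assumption; adjust it to your deployment):
server:
  gzip:
    bufferSize: 8KiB
  applicationConnectors:
  - type: http
    port: 8080
  adminConnectors: []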

TortoiseSVN Can't Authenticate

After my previous problem, TortoiseSVN Can't Connect, was resolved, I ran into a new problem.
On the linux server hosting my svn repository, in the repository's directory, there is a conf/svnserve.conf file. In this file, I have the option:
anon-access = none | read | write
Initially, this line was commented out and the default value must have been read.
Of course, I want to set anon-access = none, and I want auth-access = write (which is the default).
But when I set anon-access = none and try to browse with the TortoiseSVN Repository Browser using the URL svn://host:port/repositoryname, I get the error:
Unable to connect to a repository at URL
'svn://host:port/repositoryname' No access allowed to this repository
I'd like to successfully authenticate without ssh if possible, because I gather ssh has more moving parts and might be a little slower.
The server is CloudLinux Server release 5.8
The svn server information follows. I have only tried svn protocol so far.
svn, version 1.6.17 (r1128011) compiled Jul 26 2012, 03:59:19
Copyright (C) 2000-2009 CollabNet. Subversion is open source software,
see http://subversion.apache.org/ This product includes software
developed by CollabNet (http://www.Collab.Net/).
The following repository access (RA) modules are available:
* ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
  - handles 'http' scheme
* ra_svn : Module for accessing a repository using the svn network protocol.
  - with Cyrus SASL authentication
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
  - handles 'http' scheme
  - handles 'https' scheme
I hope this is a good question, because this is kind of the "out of the box" behavior when connecting to svn from Windows, which might be pretty common when someone adds svn to a shared hosting account.
Thank you!
Set these lines in your svnserve.conf file:
19 anon-access = none
20 auth-access = write
[...]
27 password-db = passwd
[...]
39 realm = Name-of-your-repository
46 force-username-case = lower
The line numbers are approximate.
The realm should equal the name of your repository, though it can be anything. The password-db setting points to the file listing who is authorized to use the repository. By default, that line is NOPed out.
Next, you'll edit the passwd file that's in the same directory. The format is very simple:
<userName> = <password>
There are two NOPed entries that show you how it's done.
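For illustration, a hypothetical passwd file (the two commented entries mirror the stock defaults; the uncommented user name and password are made up):
[users]
# harry = harryssecret
# sally = sallyssecret
alice = alicespassword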