Vault backend storage migration from S3 to DynamoDB

I am trying to migrate Vault storage from an S3 bucket to DynamoDB. The new Vault HA cluster is in Kubernetes:
vault-2 1/1 Running 0 10s
vault-1 1/1 Running 0 15s
vault-0 1/1 Running 0 15s
I used the command below to migrate the Vault storage backend:
vault operator migrate -config ~/Desktop/migrate.hcl
MacBook-Pro vaultconfig % cat ~/Desktop/migrate.hcl
storage_source "s3" {
  bucket = "db-vault-cluster"
}
storage_destination "dynamodb" {
  region     = "us-east-1"
  access_key = "***"
  secret_key = "***"
  table      = "demandbase-vault"
}
The migration completed successfully, but I was not able to view the secrets. When I restarted the pods, Vault did not come up again, and I got the error below:
2022-02-05T11:46:56.787Z [INFO] core: loaded wrapping token key
2022-02-05T11:46:56.803Z [INFO] core: successfully setup plugin catalog: plugin-directory=""
2022-02-05T11:46:56.821Z [INFO] core: successfully mounted backend: type=system path=sys/
2022-02-05T11:46:56.822Z [INFO] core: successfully mounted backend: type=identity path=identity/
2022-02-05T11:46:56.822Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2022-02-05T11:46:58.753Z [INFO] core: pre-seal teardown starting
2022-02-05T11:46:58.753Z [INFO] core: pre-seal teardown complete
2022-02-05T11:46:58.753Z [ERROR] core: post-unseal setup failed: error="error fetching default policy from store: failed to read policy: decryption failed: cipher: message authentication failed"
Can someone help me migrate the storage backend?

Are you using the same seal stanza in your configuration? It looks like Vault finds the storage but can't decrypt it.
Copy the seal stanza from the original (S3) configuration into the new (DynamoDB) configuration.
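For example, if the original cluster was auto-unsealed with AWS KMS, the new server configuration would need the identical seal block next to the new storage backend. A minimal sketch, assuming KMS auto-unseal (the key ID is a placeholder; use whatever the S3-backed config contained):

storage "dynamodb" {
  region = "us-east-1"
  table  = "demandbase-vault"
}

# Must match the seal stanza used with the S3 backend exactly;
# otherwise Vault cannot decrypt the migrated data.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal-key" # placeholder
}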

Related

Why is dockerized config client unable to connect to dockerized config server

I'm out of ideas on this and would appreciate any suggestions. I have a handful of dockerized Spring Boot microservices, which include a config server. Here are the characteristics:
Spring Boot version 2.3.0.RELEASE
Standard Spring Boot config server with basic auth turned on.
3 Spring Boot microservices that are also config clients of the config server.
-- I use a simple Dockerfile model for the microservices and the Spring Boot Maven plugin with its default Docker layers capabilities.
SSL is enabled for all of them, including the config server.
-- For dev and testing, I use a self-signed cert.
All microservices use a JKS to sign JWTs.
The Docker image for Java is openjdk8 alpine.
Docker Compose is used to orchestrate container launch and settings.
The Docker container for the config server runs perfectly fine. I can even query config from a browser at the HTTPS URL: https://app-dev.localhost.com:8443/config-server/shopping-svc/dev.
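The same check works from a terminal as well; for reference, a sketch with placeholder credentials (basic auth is turned on) and -k to accept the self-signed cert:

curl -k -u config-user:config-pass https://app-dev.localhost.com:8443/config-server/shopping-svc/dev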
The Problem
I cannot manage to start the 'shopping-svc' container. It fails with this error:
2023-01-25T23:44:12.375221300Z
2023-01-25 23:44:12.575 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : https://app-dev.localhost.com:8443/config-server
2023-01-25 23:44:12.829 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - https://app-dev.localhost.com:8443/config-server. Will be trying the next url if available
2023-01-25 23:44:12.836 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
2023-01-25T23:44:12.837487100Z
java.lang.IllegalStateException: Could not locate PropertySource and the fail fast property is set, failing
at org.springframework.cloud.config.client.ConfigServicePropertySourceLocator.locate(ConfigServicePropertySourceLocator.java:155) ~[spring-cloud-config-client-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.cloud.bootstrap.config.PropertySourceLocator.locateCollection(PropertySourceLocator.java:52) ~[spring-cloud-context-2.2.9.RELEASE.jar:2.2.9.RELEASE]
at org.springframework.cloud.config.client.ConfigServicePropertySourceLocator.locateCollection(ConfigServicePropertySourceLocator.java:170) ~[spring-cloud-config-client-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:98) ~[spring-cloud-context-2.2.9.RELEASE.jar:2.2.9.RELEASE]
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:626) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:370) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:314) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at com.shopping.app.ShoppingApplication.main(ShoppingApplication.java:35) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_212]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_212]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_212]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) [application/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:109) [application/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) [application/:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) [application/:na]
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://app-dev.localhost.com:8443/config-server/shopping-app/dev": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:748) ~[spring-web-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:674) ~[spring-web-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:583) ~[spring-web-5.2.6.RELEASE.jar:5.2.6.RELEASE]
Investigations
At first, I thought perhaps port 8443 was somehow blocked by my OS firewall, but that's not it. Clearing the port makes no difference.
Then I thought perhaps it was a cert issue, so I tried supplying the cert differently via the JAVA_TOOL_OPTIONS argument with the override populated: -Djavax.net.ssl.trustStore=/path/to/cert, etc. No dice.
I read several posts and articles suggesting that services inside Docker containers should refer to each other via service name. While this is a bit confusing for me, since my certs are generated against a hostname, I tried swapping the URL of the config server in the shopping-app YML to something like https://config-server:8443/config-server/, or the same without HTTPS, to see if at least a successful connection would be made; see the sketch after this list.
The last thing I tried was changing the Compose network driver to 'host' instead of 'bridge', so the containers would use the host machine's network config. The rationale was that, at least then, it's obvious it's all on the same network.
I am not sure what or where to look anymore.
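One way to reconcile the cert hostname with Docker's service-name DNS would be a network alias on config-server, so that app-dev.localhost.com resolves to the config-server container from inside the shared network. A sketch against the Compose file below (an idea to try, not something verified in this setup):

services:
  config-server:
    # image, volumes, ports as in the file below
    networks:
      backend:
        aliases:
          - app-dev.localhost.com  # containers on "backend" resolve this name to config-server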
References
Docker compose file:
version: "3"
networks:
default:
driver: bridge
frontend:
driver: bridge
backend:
driver: bridge
services:
config-server:
image: config-server
env_file: .env
hostname: app-dev.localhost.com # Not sure this is necessary I add this because the self signed cert was generated with this domain name
volumes: #I'm developping on windows, hence the backslash "\"
- shoppingapp:/var/opt
- shoppingapp\certs\server.jks:/etc/certs/server.jks
- shoppingapp\certs\ssl/app-dev.localhost.com.p12:/etc/certs/ssl/app-dev.localhost.com.p12
ports:
- "8443:8443"
networks:
- backend
shopping-svc:
image: shopping-svc
env_file: .env
hostname: app-dev.localhost.com # Not sure this is necessary I add this because the self signed cert was generated with this domain name
volumes:
- shoppingapp:/var/opt
- shoppingapp\certs\server.jks:/etc/certs/server.jks
- shoppingapp\certs\ssl\app-dev.localhost.com.p12:/etc/certs/ssl/app-dev.localhost.com.p12
ports:
- "8444:8444"
depends_on:
config-server:
condition: service_started
networks:
- backend

Setting up an S3-compatible service for blob storage on Google Cloud Storage

PS: cross-posted on the Drone forums here.
I'm trying to set up an S3-like service for Drone logs. I've verified that my AWS_* values are set correctly in the container, and using the aws-cli from inside the container gives correct output for:
aws s3api list-objects --bucket drone-logs --endpoint-url=https://storage.googleapis.com
However, the Drone server itself is unable to upload logs to the bucket, failing with the following error:
{"error":"InvalidArgument: Invalid argument.\n\tstatus code: 400, request id: , host id: ","level":"warning","msg":"manager: cannot upload complete logs","step-id":7,"time":"2023-02-09T12:26:16Z"}
On startup, the Drone server shows that the S3-related configuration was picked up correctly:
rpc:
  server: ""
  secret: my-secret
  debug: false
  host: drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  proto: https
s3:
  bucket: drone-logs
  prefix: ""
  endpoint: https://storage.googleapis.com
  pathstyle: true
The environment variables inside the Drone server container are:
# env | grep -E 'DRONE|AWS' | sort
AWS_ACCESS_KEY_ID=GOOGXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_COOKIE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_DATABASE_DATASOURCE=postgres://drone:XXXXXXXXXXXXXXXXXXXXXXXXXXXXX#35.XXXXXX.XXXX:5432/drone?sslmode=disable
DRONE_DATABASE_DRIVER=postgres
DRONE_DATABASE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_JSONNET_ENABLED=true
DRONE_LOGS_DEBUG=true
DRONE_LOGS_TRACE=true
DRONE_RPC_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_S3_BUCKET=drone-logs
DRONE_S3_ENDPOINT=https://storage.googleapis.com
DRONE_S3_PATH_STYLE=true
DRONE_SERVER_HOST=drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_SERVER_PROTO=https
DRONE_STARLARK_ENABLED=true
The .drone.yaml that is being used is available here, on GitHub.
The server is running using the nolimit flag:
go build -tags "nolimit" github.com/drone/drone/cmd/drone-server

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorized cluster endpoint to an RKE2 cluster created by Rancher" describes how to add it to an existing cluster; although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created the /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with the following contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
    "authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
]
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
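For reference, the same argument in plain block-style YAML for an RKE2 config.yaml would be the sketch below; the JSON-style flow syntax above is also valid YAML, so the two forms should be equivalent:

kube-apiserver-arg:
- "authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"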
After restarting rke2-server, I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I tried to create a ServiceAccount and got its secret for token authorization, but it failed when connecting directly to the API endpoint on the master.
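For reference, a minimal version of that test might look like this (a sketch with placeholder names; on a 1.22-era cluster the token is read from the ServiceAccount's generated secret):

kubectl create serviceaccount test-sa
SECRET=$(kubectl get sa test-sa -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
# call the authorized endpoint on the master directly with the bearer token
curl -k -H "Authorization: Bearer $TOKEN" https://<master-ip>:6443/version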
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also, the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA or secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that, too, only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets are located in the cattle-system namespace.
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration? How do I make it work?
I noticed that, since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

X-Ray daemon doesn't receive any data from Envoy

I have a service running a task definition with three containers:
service itself
envoy
x-ray daemon
I want to trace and monitor my services interacting with each other with X-Ray, but I don't see any data in X-Ray.
I can see the request logs and everything in the Envoy logs, but there are no error messages about a missing connection to the X-Ray daemon.
The Envoy container has three env variables:
APPMESH_VIRTUAL_NODE_NAME = mesh/mesh-name/virtualNode/service-virtual-node
ENABLE_ENVOY_XRAY_TRACING = 1
ENVOY_LOG_LEVEL = trace
The X-Ray daemon is pretty plain and has just a name and an image (amazon/aws-xray-daemon:1).
But when looking at the logs of the X-Ray daemon, there is only the following:
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Initializing AWS X-Ray daemon 3.0.0
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Using buffer memory limit of 76 MB
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] 1216 segment buffers allocated
2022-05-31T14:48:05.051+02:00 2022-05-31T12:48:05Z [Info] Using region: eu-central-1
2022-05-31T14:48:05.788+02:00 2022-05-31T12:48:05Z [Error] Get instance id metadata failed: RequestError: send request failed
2022-05-31T14:48:05.788+02:00 caused by: Get http://169.254.169.254/latest/meta-data/instance-id: dial tcp xxx.xxx.xxx.254:80: connect: invalid argument
2022-05-31T14:48:05.789+02:00 2022-05-31T12:48:05Z [Info] Starting proxy http server on 127.0.0.1:2000
As far as I've read, the error you can see in these logs doesn't affect the functionality (https://repost.aws/questions/QUr6JJxyeLRUK5M4tadg944w).
I'm pretty sure I'm missing a configuration or access right.
It's already running on staging, but I set that up several weeks ago and I can't find any differences between the configurations.
Thanks in advance!
In my case, I had made a copy-paste mistake: a trailing line break was copied into the name of the environment variable ENABLE_ENVOY_XRAY_TRACING, which wasn't visible in the overview, only inside the text field.
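For context, in an ECS task definition the variable sits in the container's environment list, roughly as in this sketch; a line break pasted into the "name" field is invisible in the console overview and silently breaks the lookup:

"environment": [
  { "name": "APPMESH_VIRTUAL_NODE_NAME", "value": "mesh/mesh-name/virtualNode/service-virtual-node" },
  { "name": "ENABLE_ENVOY_XRAY_TRACING", "value": "1" },
  { "name": "ENVOY_LOG_LEVEL", "value": "trace" }
]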

Duplicate field 'Status' when I try to run 'pio status'

Whenever I run pio status, I get the following error:
[INFO] [Management$] Inspecting PredictionIO...
[INFO] [Management$] PredictionIO 0.13.0 is installed at /Users/prvns/tools/PredictionIO-0.13.0
[INFO] [Management$] Inspecting Apache Spark...
[INFO] [Management$] Apache Spark is installed at /Users/prvns/tools/PredictionIO-0.13.0/vendors/spark-2.3.1-bin-hadoop2.7
[INFO] [Management$] Apache Spark 2.3.1 detected (meets minimum requirement of 1.6.3)
[INFO] [Management$] Inspecting storage backend connections...
[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
[ERROR] [Management$] Unable to connect to all storage backends successfully.
The following shows the error message from the storage backend.
PUT http://localhost:9200/pio_meta/_mapping/engine_instances: HTTP/1.1 400 Bad Request
{"error":{"root_cause":[{"type":"parse_exception","reason":"Failed to parse content to map"}],"type":"parse_exception","reason":"Failed to parse content to map","caused_by":{"type":"json_parse_exception","reason":"Duplicate field 'status'\n at [Source: org.elasticsearch.common.compress.DeflateCompressor$1#6b496f00; line: 1, column: 462]"}},"status":400} (org.apache.predictionio.shaded.org.elasticsearch.client.ResponseException)
Dumping configuration of initialized storage backend sources.
Please make sure they are correct.
Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/Cellar/elasticsearch/6.2.4/, HOSTS -> localhost, PORTS -> 9200, SCHEMES -> http, TYPE -> elasticsearch
My pio-env.sh looks like this:
SPARK_HOME=$PIO_HOME/vendors/spark-2.3.1-bin-hadoop2.7
POSTGRES_JDBC_DRIVER=$PIO_HOME/lib/postgresql-42.0.0.jar
MYSQL_JDBC_DRIVER=$PIO_HOME/lib/mysql-connector-java-5.1.41.jar
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
PIO_STORAGE_SOURCES_ELASTICSEARCH_SCHEMES=http
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/Cellar/elasticsearch/6.2.4/
Why is this not working?
I was using Elasticsearch 6.x. I replaced it with Elasticsearch 5.x and it worked.
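The corresponding pio-env.sh change would just be the Elasticsearch home path; a sketch, assuming a Homebrew-installed 5.x (the exact path is a placeholder for wherever the 5.x install lives):

# Point PredictionIO at an Elasticsearch 5.x install instead of 6.2.4
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/Cellar/elasticsearch@5.6/5.6.16/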