How to migrate Kong Gateway with DB to another server - migration

We are trying to migrate our Kong setup from one server (Ubuntu) to another server (Ubuntu). We have Kong Gateway (kong/kong-gateway:2.3.3.2-alpine) on Docker with a DB (postgres:9.6). We have routes, 75 services, and consumer entities created on this setup.
I followed the steps below to migrate Kong and the DB.
'docker commit' both the Kong and Postgres containers and save them to tar files
Kong container name: kong-ee
Postgres container name: kong-db
docker commit kong-db postgres/migrationimage:version1
docker save -o postgres-migrationimage.tar postgres/migrationimage:version1
docker commit kong-ee kongee/migrationimage:version1
docker save -o kongee-migrationimage.tar kongee/migrationimage:version1
On the new server:
docker network create -d bridge kong_net
docker load -i postgres-migrationimage.tar
docker load -i kongee-migrationimage.tar
The kongee/migrationimage:version1 and postgres/migrationimage:version1 images are created.
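A rough sketch of how the containers can be recreated on kong_net (the container names, published ports, and Kong env vars below are assumptions about the original setup, not the exact commands used):
# start the migrated Postgres image on the shared network
docker run -d --name kong-db --network kong_net postgres/migrationimage:version1
# start the migrated Kong image, pointing it at the Postgres container
docker run -d --name kong-ee --network kong_net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-db" \
  -p 8000:8000 -p 8001:8001 \
  kongee/migrationimage:version1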
I created new containers from both images in this way, but the Kong container failed to start with the error below:
2022/08/19 09:56:41 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:33: Database needs bootstrapping or is older than Kong 1.0.
To start a new installation from scratch, run 'kong migrations bootstrap'.
To migrate from a version older than 1.0, migrate to Kong 1.5.0 first.
If you still have 'apis' entities, you can convert them to Routes and Services
using the 'kong migrations migrate-apis' command in Kong 1.5.0.
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:33: in function 'check_state'
/usr/local/share/lua/5.1/kong/init.lua:477: in function 'init'
init_by_lua:3: in main chunk
I understand from the error that it is looking for a DB bootstrap. When I bootstrap the DB, the Kong container runs fine, but I do not see any of the services, consumers, or routes on the migrated Kong setup. It looks like bootstrapping the DB erases all the data.
How can I migrate Kong Gateway along with the DB without losing data?
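Note that the official postgres image keeps its data in a declared volume (/var/lib/postgresql/data), which docker commit does not capture, so the migrated DB container most likely starts out empty. A rough sketch of an alternative that moves the data itself, assuming the Kong database and user are both named kong (adjust to your setup):
# on the old server: dump the Kong database from the running container
docker exec kong-db pg_dump -U kong -d kong > kong-dump.sql
# copy kong-dump.sql to the new server, start the new Postgres container, then restore
docker exec -i kong-db psql -U kong -d kong < kong-dump.sql
# run any pending migrations instead of bootstrapping from scratch
docker exec kong-ee kong migrations up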

Related

Connect back to main container from GitlabCI service

I am using GitLab CI with the Docker executor and services.
During the test I'm starting a server in the main script, and I need the service to make a request back to the main script.
Is there an address/alias I can use to connect back to the main build container? Something like host.docker.internal.
Pseudo-example:
test:
  services:
    - name: ping-pong-service
      variables:
        CALLBACK_ADDRESS: 'http://host.docker.internal:8090/pong'
  script:
    - "Start a server at 0.0.0.0:8090"
    - curl http://ping-pong-service:80/ping
Suppose that ping-pong-service is a service that, when it receives any HTTP request on :80, performs a new request to CALLBACK_ADDRESS. What should I enter into CALLBACK_ADDRESS to connect back to the main container?
I tried looking into which containers get started on the runner, but the main container doesn't seem to have a predictable name or alias in the docker network.
Env:
Docker: 20.10.12
Gitlab Runner: 14.8.0, self-hosted, FF_NETWORK_PER_BUILD=1
Gitlab: 14.9.2-ee, self-hosted
When using the FF_NETWORK_PER_BUILD feature flag for networks per job, containers started using services: can reach the main job container using the network alias build.
Assuming your service is configured as you describe, you would use:
variables:
  CALLBACK_ADDRESS: 'http://build:8090/pong'
Note: this does not apply to containers started using docker run in the job container for this scenario.
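For example, from inside the service container during such a job, the callback target in this scenario would be reachable under that alias (hypothetical check):
curl http://build:8090/pong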

Kubernetes - env variables as API url

So I have an API that's the gateway for two other APIs.
Using Docker in WSL 2 (Ubuntu), I run my Gateway API like this:
docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL -e B_API_URL=$B_API_URL registry:$(somePort)//gateway
I have two environment variables that are the API URLs of the two APIs. I just don't know how to make this work in the Kubernetes config.
env:
  - name: A_API_URL
    value: <need help>
  - name: B_API_URL
    value: <need help>
I get 500 or 502 errors when accessing them in the network.
I tried specifying the value of the env var as:
their respective service's name,
the complete URI (http://$(addr):$(port)),
the relative path: /something/anotherSomething.
Each API is deployed with a Deployment controller and a Service.
I'm at a loss; any help is appreciated.
You just have to hardwire them. Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that could inject things the way Bash does in your docker run example, but that's generally not a good idea, since if anyone other than you runs the same command, they could see different results. The values should look like http://servicename.namespacename.svc.cluster.local:port/whatever. So if the service is named foo in namespace default with port 8000 and path /api, the value would be http://foo.default.svc.cluster.local:8000/api.
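As a concrete illustration, assuming the two Services are named a-api and b-api in the default namespace and listen on port 8000 (these names and the port are assumptions, not values from the question), the env block could look like:
env:
  - name: A_API_URL
    value: http://a-api.default.svc.cluster.local:8000
  - name: B_API_URL
    value: http://b-api.default.svc.cluster.local:8000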

Build Spinnaker with docker-compose, redirect to localhost

I built Spinnaker using docker-compose following the guide here,
but it always redirects to localhost. How can I fix this?
e.g.
http://localhost:8084/auth/redirect?to=http%3A%2F%2F192.168.99.100%3A9000%2F%23%2Finfrastructure
I set host: 0.0.0.0 in spinnaker-local.yml and configured Deck's apache2 with proxyPreserve=On, but it's not working.
Where is the configuration for this 'redirect'?
All containers are running well, but Fiat gets error messages like this:
WARN 1 --- [ecutionAction-1] c.n.s.fiat.roles.UserRolesSyncer : [] User permission sync failed. Server status is DOWN. Trying again in 10000 ms. Cause:(Provider: DefaultServiceAccountProvider) retrofit.RetrofitError: unexpected url: front50/serviceAccounts
I'm sure I set fiat to false; does this matter?
Thanks.
The docker-compose project linked there is not available anymore. That deployment type is no longer supported.
The easiest way I suggest for people to get started quickly is by using Armory's open source Minnaker. It runs on top of a small K3s cluster and contains a functional Spinnaker deployment.
It's a great way to get started.
I tried the Debian local deployment and it failed every time.
Enjoy your CD operations.

Expose service in OpenShift Origin Server - router is not working

Our team decided to try using OpenShift Origin server to deploy services.
We have a separate VM with OpenShift Origin server installed and working fine. I was able to deploy our local docker images and those services are running fine as well - Pods are up and running, they get their own IPs, and I can reach the services' endpoints from the VM.
The issue is that I can't get the services exposed outside the machine. I read about routers, which are supposed to be the right way of exposing services, but it just won't work. Now some details.
Let's say my VM is 10.48.1.1. The Pod with the docker container for one of my services is running on IP 172.30.67.15:
~$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-svc 172.30.67.15 <none> 8182/TCP 4h
The service is a simple Spring Boot app with a REST endpoint exposed on port 8182.
When I call it from the VM hosting it, it works just fine:
$ curl -H "Content-Type: application/json" http://172.30.67.15:8182/home
{"valid":true}
Now I wanted to expose it outside, so I created a router:
oc adm router my-svc --ports='8182'
I followed the steps from the OpenShift dev docs, both from the CLI and the Console UI. The router gets created fine, but when I check its status, I get this:
$ oc status
In project sample on server https://10.48.3.161:8443
...
Errors:
* route/my-svc is routing traffic to svc/my-svc, but either the administrator has not installed a router or the router is not selecting this route.
I couldn't find anything about this error that could help me solve the issue - has anyone had a similar issue? Is there any other (better/proper?) way of exposing a service endpoint? I am new to OpenShift, so any suggestions would be appreciated.
If anyone is interested, I finally found the "solution".
The issue was that there was no "router" service created - I didn't know it had to be created.
Step by step: in order to create this service I followed the instructions from the OpenShift docs page, which were pretty easy, but I couldn't log in using the admin account.
I used the default admin account:
$ oc login -u system:admin
But instead of using the available certificate, it kept asking me for a password, which it shouldn't. What was wrong? My env variables were reset, and I had to set them again:
$ export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
$ export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
$ sudo chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
This was one of the first steps described in the OpenShift docs. After that, the certificate is picked up correctly and login works as expected. As an admin I created the router service (1st link) and the route started working - no more errors.
So in the end it turned out to be pretty simple, but given that I didn't have experience with OpenShift it was hard for me to figure out what was going on. Hope it helps if someone has the same issue.
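For reference, the router-creation step from the docs looks roughly like this on Origin 3.x (the service account name and flags are assumptions; check the docs for your exact version):
# allow the router service account to use the host network, then create the router
oc adm policy add-scc-to-user hostnetwork -z router
oc adm router router --replicas=1 --service-account=router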

I am trying OpenShift Origin, I cannot create an application

I am trying OO on a RHEL Atomic Host. I spun up OO master as a container following this guide https://docs.openshift.org/latest/getting_started/administrators.html
After attaching a shell to the Master Container, I cannot deploy an app.
# oc new-app openshift/deployment-example
error: can't look up Docker image "openshift/deployment-example": Internal error occurred: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection error: no match for "openshift/deployment-example"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
The host needs a proxy to access the Internet. I have configured the proxy in /etc/sysconfig/docker, and that is how I was able to pull the origin image on the same host.
I have tried setting the proxy for the master and node, with no luck:
https://docs.openshift.org/latest/install_config/http_proxies.html
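For reference, the docker proxy settings mentioned above are plain environment variables in that file; a sketch with placeholder values (the proxy address and NO_PROXY list are assumptions):
# /etc/sysconfig/docker
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.1,.cluster.local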
It is possible that your proxy is terminating the connection. You can test by creating an internal registry, pushing the image to it, and then using
"oc new-app your.internal.registry/openshift/deployment-example"