Connect back to main container from GitLab CI service

I am using GitLab CI with the Docker executor and services.
During the test I'm starting a server in the main script, and I need the service to make a request back to the main script.
Is there an address or alias I can use to connect back to the main build container? Something like host.docker.internal.
Pseudo-example:
test:
  services:
    - name: ping-pong-service
      variables:
        CALLBACK_ADDRESS: 'http://host.docker.internal:8090/pong'
  script:
    - "Start a server at 0.0.0.0:8090"
    - curl http://ping-pong-service:80/ping
Suppose that ping-pong-service is a service that, when receiving any HTTP request on :80, performs a new request to CALLBACK_ADDRESS. What should I enter into CALLBACK_ADDRESS to connect back to the main container?
I tried looking into what containers get started on the runner, but the main container doesn't seem to have a predictable name or alias in the Docker network.
Env:
Docker: 20.10.12
Gitlab Runner: 14.8.0, self-hosted, FF_NETWORK_PER_BUILD=1
Gitlab: 14.9.2-ee, self-hosted

When using the FF_NETWORK_PER_BUILD feature flag for per-job networks, containers started using services: can reach the main job container using the network alias build.
Assuming your service is configured as you describe, you would use:
variables:
  CALLBACK_ADDRESS: 'http://build:8090/pong'
Note: this does not apply to containers started using docker run in the job container for this scenario.
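Put together, a minimal sketch of the whole job could look like the following. This assumes the service really does read CALLBACK_ADDRESS and that a helper script such as run-test-server.sh starts your server on 0.0.0.0:8090; both of those names are hypothetical placeholders:
test:
  variables:
    # Already set on your runner; shown here for completeness.
    FF_NETWORK_PER_BUILD: "1"
  services:
    - name: ping-pong-service
      variables:
        # With FF_NETWORK_PER_BUILD, the main job container is reachable
        # from services under the network alias "build".
        CALLBACK_ADDRESS: 'http://build:8090/pong'
  script:
    # Hypothetical helper that starts an HTTP server on 0.0.0.0:8090 in the background.
    - ./run-test-server.sh &
    # The service receives this request and calls back to http://build:8090/pong.
    - curl http://ping-pong-service:80/ping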

Related

Kubernetes - env variables as API url

So I have an API that's the gateway for two other APIs.
Using Docker in WSL 2 (Ubuntu), I run my gateway API like this:
docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL -e B_API_URL=$B_API_URL registry:$(somePort)//gateway
I have two environment variables that are the URLs of the two APIs. I just don't know how to make this work in the Kubernetes config.
env:
  - name: A_API_URL
    value: <need help>
  - name: B_API_URL
    value: <need help>
I get 500 or 502 errors when accessing them over the network.
I tried specifying the value of the env var as:
the respective service's name
the complete URI (http://$(addr):$(port))
the relative path: /something/anotherSomething
Each API is deployed with a Deployment controller and a Service.
I'm at a loss; any help is appreciated.
You just have to hardwire them. Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that could inject values the way Bash does in your docker run example, but that's generally not a good idea, since anyone other than you running the same command could get different results. The values should look like http://servicename.namespacename.svc.cluster.local:port/whatever. So if the service is named foo in namespace default, with port 8000 and path /api, the value would be http://foo.default.svc.cluster.local:8000/api.
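As a sketch, assuming the two backend Services are named a-api and b-api in the default namespace and listen on port 8080 (all hypothetical names you would replace with your own), the env block would be:
env:
  # In-cluster DNS names of the two backend Services.
  - name: A_API_URL
    value: http://a-api.default.svc.cluster.local:8080
  - name: B_API_URL
    value: http://b-api.default.svc.cluster.local:8080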

How to configure Serverless VPC Access from Cloud Build "gradle test"?

I'm trying to put some integration tests in the Cloud Build process. So far I've managed to connect to a MySQL server, but I can't connect to a Redis server, since I can't add the --vpc-connector option to the gradle test command to configure a Serverless VPC Connector.
This is part of cloudbuild.yaml:
steps:
  - name: 'gradle:6.8.3-jdk11'
    args:
      - 'test'
      - '--no-daemon'
      - '-i'
      - '--stacktrace'
    id: Test
    entrypoint: gradle
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
      - '--vpc-connector=$_SERVERLESS_VPC_CONNECTOR'
    id: Deploy
    entrypoint: gcloud
(... omitted ...)
Everything works fine if I remove the Test step. I need to add the --vpc-connector option to the Test step somehow to connect to the Redis server, but there is no such option in the gradle:6.8.3-jdk11 image.
How can I configure the Serverless VPC Connector in the Test step so that the gradle test command can connect to the Redis server?
You are mixing two concepts:
Gradle is an application-layer tool.
A VPC Connector is an infrastructure component that bridges the serverless world managed by Google and the VPC of your project.
So Gradle doesn't care about the infrastructure at all: it will simply try to reach a private IP, the Redis private IP.
Cloud Build doesn't support VPC connectors, and thus you can't access private resources in your project from Cloud Build. (A private preview is ongoing to run Cloud Build workers directly in your VPC, which would remove this connectivity issue because the worker would already be in the VPC, but I have no visibility on a public preview of this feature.)

Zalenium - Unable to launch application under test

My AUT runs in a Docker container and its URL is "http://localhost:8080/". When I trigger the tests using Zalenium, it launches the browser, but when it tries to navigate to the AUT's URL it can't find it. Is it because my AUT runs in one Docker container and Zalenium runs in a separate Docker container, and the two can't communicate with each other?
Thanks in advance.
I think that's how it's supposed to work. You ideally want containers to be isolated until you say otherwise.
A quick solution: in your Selenium scripts, instead of localhost, specify your machine name. That way, your scripts in Zalenium will resolve the address through the network (which still points back to you) instead of trying to resolve localhost inside their own container network.
While this will probably work for your machine, it's a little static. You'll probably want to script it as part of your solution so it works across machines.
In Java you can use this to get the running machine's name:
import java.net.InetAddress;

// getLocalHost() throws UnknownHostException if the local host name cannot be resolved.
InetAddress addr = InetAddress.getLocalHost();
String hostname = addr.getHostName();
Alternatively, to make a more portable solution, you might want to review the docker networking pages.
I think this tutorial might help you. "Networking with standalone containers" sounds about right.
The long and short of it is that when you run your Docker containers, you need to attach them to the same network. The default is:
--network bridge
Or create your own bridge:
docker network create --driver bridge MyBridgeName
and run both of your containers on that network:
docker run -dit --name MyImageName --network MyBridgeName ImageToRun
docker run -dit --name OtherName --network MyBridgeName OtherImageToRun
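With both containers on MyBridgeName, the test scripts can then reach the AUT by its container name instead of localhost, e.g. http://MyImageName:8080/ (assuming the AUT container was started as MyImageName above and still listens on port 8080).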

Build spinnaker with docker-compose, redirect to localhost

I built Spinnaker using docker-compose, following the guide linked here,
but it always redirects to localhost. How can I fix this?
e.g.
http://localhost:8084/auth/redirect?to=http%3A%2F%2F192.168.99.100%3A9000%2F%23%2Finfrastructure
I set host: 0.0.0.0 in spinnaker-local.yml and configured Deck's Apache2 with proxyPreserve=On, but it's not working.
Where is the configuration for this 'redirect'?
All the containers are running well, but Fiat gets error messages like this:
WARN 1 --- [ecutionAction-1] c.n.s.fiat.roles.UserRolesSyncer : [] User permission sync failed. Server status is DOWN. Trying again in 10000 ms. Cause:(Provider: DefaultServiceAccountProvider) retrofit.RetrofitError: unexpected url: front50/serviceAccounts
I'm sure I set fiat to false; does this matter?
Thanks.
The docker-compose project linked there is not available anymore, and that deployment type is no longer supported.
The easiest way I suggest for people to get started quickly is Armory's open source Minnaker. It runs on top of a small K3s cluster and contains a functional Spinnaker deployment.
It's a great way to get started.
I tried the Debian local deployment and it failed all the time.
Enjoy your CD operations.

Nightwatch keeps giving 502 bad gateway

I have a Docker setup for Nightwatch.js to run Selenium tests through Selenium Grid against an Express server application. If I just run docker-compose up and then run my Nightwatch tests manually after the server starts, everything works properly. If I run them as part of a container's command (i.e. in my app server container's command, or in a new container based on it that just runs Nightwatch), then I get a 502 Bad Gateway error. I think there is a race condition in my Docker setup that is causing this. Is there a way to guarantee my app server starts properly before running my Nightwatch tests?
Prefix your command with the wait-for-it.sh script (download it and put it in the image):
command: /wait-for-it.sh theotherservice:PORT -- your-previous-command
wait-for-it.sh will wait for the specified server:port to be reachable before executing the command that follows --, so you can avoid the race condition.
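As a rough sketch in docker-compose terms, assuming the Express app is a service named app listening on port 3000 and the tests are started with the nightwatch CLI (the service names, port, and test command are hypothetical placeholders):
services:
  app:
    build: .
    ports:
      - "3000:3000"
  tests:
    build: .
    depends_on:
      - app
    # Block until the app answers on app:3000, then run the test suite.
    command: /wait-for-it.sh app:3000 -- npx nightwatch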