I'm trying to add some integration tests to the Cloud Build process. So far I've managed to connect to a MySQL server, but I can't connect to a Redis server, since I can't add the --vpc-connector option to the gradle test command to configure the Serverless VPC Connector.
This is part of cloudbuild.yaml:
steps:
- name: 'gradle:6.8.3-jdk11'
  args:
  - 'test'
  - '--no-daemon'
  - '-i'
  - '--stacktrace'
  id: Test
  entrypoint: gradle
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  args:
  - run
  - services
  - update
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  - '--vpc-connector=$_SERVERLESS_VPC_CONNECTOR'
  id: Deploy
  entrypoint: gcloud
(... omitted ...)
Everything works fine if I remove the Test step. I need to add the --vpc-connector option to the Test step somehow so the tests can connect to the Redis server, but there is no such option in the gradle:6.8.3-jdk11 image.
How do I configure the Serverless VPC Connector in the Test step so that the gradle test command can connect to the Redis server?
You are mixing two concepts:
Gradle is an application-layer tool.
The VPC Connector is an infrastructure component that bridges the serverless world managed by Google and the VPC of your project.
So Gradle doesn't care about the infrastructure at all: it simply tries to reach a private IP, the Redis private IP.
Cloud Build doesn't support VPC connectors, and thus you can't access private resources in your project through Cloud Build. (A private preview is ongoing that runs Cloud Build workers directly in your VPC, which would remove this connectivity issue because the workers would already be inside the VPC, but I have no visibility into a public preview of this feature.)
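In the meantime, one common workaround, sketched here rather than taken from the answer above: every Cloud Build step runs attached to a local Docker network named cloudbuild, so the tests can target a throwaway Redis started inside the build instead of the private one. REDIS_HOST is a hypothetical variable the tests would have to read:
steps:
- name: gcr.io/cloud-builders/docker
  args: ['run', '-d', '--network=cloudbuild', '--name=redis', 'redis:6']
  id: Start Redis
- name: 'gradle:6.8.3-jdk11'
  args: ['test', '--no-daemon', '-i', '--stacktrace']
  env: ['REDIS_HOST=redis']  # 'redis' is resolvable on the cloudbuild network
  id: Test
  entrypoint: gradle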
I am using the Serverless Framework. I have set the stage to dev and deploy using the sls deploy --stage dev command, but the dev stage is not added to the endpoint. The endpoints look like the one given below, with no stage in the path:
https://****.execute-api.us-east-1.amazonaws.com/users
One of my Lambda functions needs to send an endpoint in a POST request to a third-party API so that it can post back the result. I need to be sure that the correct endpoint is sent for the production/dev stage.
postback_url = `https://${process.env.RestApiId}.execute-api.${process.env.REGION}.amazonaws.com/${process.env.stage}/dfs-pingback?id=$id&tag=$tag`;
As you can see, the postback URL above is wrong if the stage (process.env.stage) is not added to the endpoint.
serverless.yml
service: lytoolsApi
frameworkVersion: '2 || 3'
configValidationMode: error
provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  stage: dev
Serverless Framework does things a bit differently: instead of using API Gateway stages, it creates a totally new API Gateway for each stage. That's why you don't see the stage name as a prefix in your path, but if you look at the URL, you'll see that the base URL differs across stages. That's how you can differentiate between them.
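If the function still needs to know which stage it is running in, for example to build the postback URL, one option (a sketch, not part of the answer above) is to inject the stage as an environment variable in serverless.yml:
provider:
  environment:
    STAGE: ${opt:stage, self:provider.stage}
The function can then read process.env.STAGE, and since the base URL already differs per stage, the URL can be assembled without guessing a stage path segment.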
I am using GitLab CI with the Docker executor and services.
During the test I'm starting a server in the main script, and I need the service to make a request back to it.
Is there an address/alias I can use to connect back to the main build script? Something like host.docker.internal.
Pseudo-example:
test:
  services:
  - name: ping-pong-service
    variables:
      CALLBACK_ADDRESS: 'http://host.docker.internal:8090/pong'
  script:
  - "Start a server at 0.0.0.0:8090"
  - curl http://ping-pong-service:80/ping
Suppose that ping-pong-service is a service that, when receiving any HTTP request on :80, performs a new request to CALLBACK_ADDRESS. What should I enter into CALLBACK_ADDRESS to connect back to the main container?
I tried looking into what containers get started on the runner, but the main container doesn't seem to have a predictable name or alias on the Docker network.
Env:
Docker: 20.10.12
Gitlab Runner: 14.8.0, self-hosted, FF_NETWORK_PER_BUILD=1
Gitlab: 14.9.2-ee, self-hosted
When using the FF_NETWORK_PER_BUILD feature flag to enable per-job networks, containers started using services: can reach the main job container using the network alias build.
Assuming your service is configured as you describe, you would use:
variables:
  CALLBACK_ADDRESS: 'http://build:8090/pong'
Note: this does not apply to containers started using docker run in the job container for this scenario.
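Applied to the pseudo-example from the question, the job would look roughly like this; the server start command is a placeholder for however the server is actually launched:
test:
  services:
  - name: ping-pong-service
    variables:
      CALLBACK_ADDRESS: 'http://build:8090/pong'  # 'build' resolves to the job container
  script:
  - ./start-server 0.0.0.0:8090 &  # placeholder for starting the server
  - curl http://ping-pong-service:80/ping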
I've got a web app deployed to AKS and it's generally up and running fine. I am now trying to extend its functionality by adding access to Azure SQL.
We are using VSTS/Azure DevOps for deployment.
I've deployed a secret to the cluster using the command:
kubectl secret generic sampleapp-appsettings --from-literal=DBConnectionString="$(var_DBConnectionString)"
I've checked the cluster from the Kubernetes dashboard and can see it has been deployed as expected. The secret is a connection string to the database.
However, I'm struggling to retrieve the secret from the deployed pods. I've created an environment variable for ASPNETCORE_ENVIRONMENT with a value of Kubernetes.
Here's part of my deployment yaml:
spec:
  containers:
  - name: sampleapp-services
    image: sampleapp.azurecr.io/sampleapp-services:latest
    imagePullPolicy: Always
    env:
    - name: "ASPNETCORE_ENVIRONMENT"
      value: "Kubernetes"
    - name: services-appsettings
      valueFrom:
        secretKeyRef:
          name: services-appsettings
          key: DBConnectionString
    ports:
    - containerPort: 80
I've added an API endpoint to my app for debugging purposes and can see that the ASPNETCORE_ENVIRONMENT value is being pulled correctly.
However, the DBConnectionString value isn't being pulled from the Kubernetes secret; instead it's being retrieved from the appsettings.json file. I've got some code in my app which just outputs the values:
[HttpGet("settings")]
public ActionResult<string> GetAppSettings()
{
    var message = $"Host: {Environment.MachineName}\n" +
                  $"EnvironmentName: {_env.EnvironmentName}\n" +
                  $"Secret value: {_dataSettings.ConnectionString}";
    return message;
}
In my DataSettings class I've got code like this:
var value = Environment.GetEnvironmentVariable("DBConnectionString");
However, this isn't pulling back the secret value from the Kubernetes cluster as I'm expecting.
I've followed some examples, like this blog post, but they don't help.
Has anyone got some simple step-by-step instructions/samples that might help?
thanks
The command you have specified to create the secret is missing create, and it creates a secret named 'sampleapp-appsettings'; in your deployment.yaml, however, you have specified 'services-appsettings' instead. I assume the snippets you listed are just for reference and that in the actual code these values match.
Secondly, the environment variable name (- name: services-appsettings) should match the name you look up in code. Per your snippets, Environment.GetEnvironmentVariable("DBConnectionString") reads 'DBConnectionString', whereas your yaml names the variable 'services-appsettings'.
Lastly, I hope that in your Web API you are calling .AddEnvironmentVariables() while building the configuration.
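Putting the three fixes together, a minimal sketch (using the secret name from the question; adjust the names to your actual values):
kubectl create secret generic sampleapp-appsettings --from-literal=DBConnectionString="$(var_DBConnectionString)"
env:
- name: DBConnectionString          # matches what the code looks up
  valueFrom:
    secretKeyRef:
      name: sampleapp-appsettings   # matches the secret created above
      key: DBConnectionString
And for the configuration, something along these lines if you build it manually (WebHost.CreateDefaultBuilder already does this for you):
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()  // lets the pod's environment variables override appsettings.json
    .Build();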
I am trying to figure out how to manage secrets for different environments while creating serverless applications.
my serverless.yml file looks something like this:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage}
  region: us-east-1
  environment:
    NODE_ENV: ${opt:stage}
    SOME_API_KEY: # this is different depending upon dev or prod
functions:
  ...
When deploying I use the following command:
serverless deploy --stage prod
I want the configuration information to be picked up from AWS parameter store as described here:
https://serverless.com/blog/serverless-secrets-api-keys/
However, I do not see a way to provide different keys for the development and prod environments.
Any suggestions?
I put prefixes in my Parameter Store variables.
e.g.
common/SLACK_WEBHOOK
development/MYSQL_PASSWORD
production/MYSQL_PASSWORD
Then, in my serverless.yml, I can do...
...
environment:
  SLACK_WEBHOOK: ${ssm:common/SLACK_WEBHOOK}
  MYSQL_PASSWORD: ${ssm:${opt:stage}/MYSQL_PASSWORD}
You should be able to do something similar.
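For completeness, the parameters themselves can be created with the AWS CLI; the values below are placeholders. Note that depending on your SSM and Serverless Framework versions, hierarchical names may require a leading slash (the reference then becomes ${ssm:/${opt:stage}/MYSQL_PASSWORD}), and older Framework versions needed a ~true suffix to decrypt SecureString values:
aws ssm put-parameter --name /common/SLACK_WEBHOOK --type String --value 'https://hooks.slack.com/...'
aws ssm put-parameter --name /development/MYSQL_PASSWORD --type SecureString --value 'dev-password'
aws ssm put-parameter --name /production/MYSQL_PASSWORD --type SecureString --value 'prod-password'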
I want to access Docker Secrets in my ASP.NET Core project.
I distilled the issue down to a test API project. All it does is read the directories inside /run:
[HttpGet]
public IEnumerable<string> Get()
{
    return Directory.GetDirectories("/run");
}
I changed the compose files to 3.1 and added the plumbing for secrets listed here: how do you manage secret values with docker-compose v3.1?
version: '3.1'
services:
  secrettest:
    image: secrettest
    build:
      context: ./SecretTest
      dockerfile: Dockerfile
    secrets: # secrets block only for the 'secrettest' service
    - my_external_secret
secrets: # top level secrets block
  my_external_secret:
    external: true
The Get action returns ["/run/lock"]. I do not see a /run/secrets directory. I also shelled into the container and verified that it does not see a /run/secrets directory.
I am able to see secrets from other containers. Does anyone know what I am missing? Is there another strategy I should take, other than the 3.1 compose file, to configure the container in VS 2017?
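One thing worth checking, offered as a sketch rather than a confirmed answer: secrets declared external: true live in a swarm's secret store, so they are only mounted under /run/secrets when the stack is deployed to a swarm; plain docker-compose up (which the VS 2017 tooling drives) warns that external secrets are unavailable and skips them. Either deploy as a stack:
docker swarm init
echo 's3cr3t' | docker secret create my_external_secret -
docker stack deploy -c docker-compose.yml secrettest
or switch to a file-based secret, which docker-compose can bind-mount into /run/secrets without swarm:
secrets:
  my_external_secret:
    file: ./my_external_secret.txt   # mounted at /run/secrets/my_external_secret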