Retrieve environment secret from AKS Kubernetes cluster for deployed ASP.NET Core web app

I've got a web app deployed to AKS and it's generally up and running fine. I am now trying to extend its functionality by adding access to Azure SQL.
We are using VSTS/Azure DevOps for deployment.
I've deployed a secret to the cluster using the command:
kubectl secret generic sampleapp-appsettings --from-literal=DBConnectionString="$(var_DBConnectionString)"
I've checked the cluster from the Kubernetes dashboard and can see the secret has been deployed as expected; its value is a connection string to the database.
However, I'm struggling to retrieve the secret from the deployed pods. I've created an environment variable for ASPNETCORE_ENVIRONMENT with a value of 'Kubernetes'.
Here's part of my deployment yaml:
spec:
  containers:
  - name: sampleapp-services
    image: sampleapp.azurecr.io/sampleapp-services:latest
    imagePullPolicy: Always
    env:
    - name: "ASPNETCORE_ENVIRONMENT"
      value: "Kubernetes"
    - name: services-appsettings
      valueFrom:
        secretKeyRef:
          name: services-appsettings
          key: DBConnectionString
    ports:
    - containerPort: 80
I've added an API endpoint to my app for the purposes of debugging and can see that the ASPNETCORE_ENVIRONMENT value is being pulled correctly.
However, the DBConnectionString value isn't being pulled from the Kubernetes secret; instead it's being retrieved from the appsettings.json file. I've got some code in my app which just outputs the values:
[HttpGet("settings")]
public ActionResult<string> GetAppSettings()
{
    var message = $"Host: {Environment.MachineName}\n" +
                  $"EnvironmentName: {_env.EnvironmentName}\n" +
                  $"Secret value: {_dataSettings.ConnectionString}";
    return message;
}
In my DataSettings class I've got code like this:
var value = Environment.GetEnvironmentVariable("DBConnectionString");
However, this isn't pulling back the secret value from the Kubernetes cluster as I'm expecting.
I've followed some examples, like this blog post, but they don't help.
Has anyone got some simple step-by-step instructions/samples that might help?
Thanks

The command you have specified is missing create (it should be kubectl create secret generic ...), and it creates a secret named 'sampleapp-appsettings', yet in deployment.yaml you have specified 'services-appsettings'. I assume the snippets you listed are just for reference and that these values match in your actual code.
Secondly, the environment variable name should match the name you use in code. Per your snippets, the code calls Environment.GetEnvironmentVariable("DBConnectionString"), but your yaml names the variable 'services-appsettings'.
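For illustration, a consistent env entry might look like this (keeping the 'sampleapp-appsettings' secret name from the kubectl command above, and exposing the variable under the exact name the code reads):
env:
- name: "ASPNETCORE_ENVIRONMENT"
  value: "Kubernetes"
# expose the secret under the same name the code reads
- name: DBConnectionString
  valueFrom:
    secretKeyRef:
      name: sampleapp-appsettings   # must match the secret created above
      key: DBConnectionString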
Lastly, I hope that in your Web API you are calling .AddEnvironmentVariables() while building the config.
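For reference, a minimal sketch of building the configuration by hand (if you use WebHost.CreateDefaultBuilder, AddEnvironmentVariables is already called for you):
using System.IO;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: true)
    // added last so the variable injected from the Kubernetes secret
    // overrides any value in appsettings.json
    .AddEnvironmentVariables()
    .Build();

var connectionString = config["DBConnectionString"];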

Related

stage is not added to the endpoint when deploying with sls deploy --stage dev

I am using the Serverless Framework. I have set the stage to dev and deploy using the sls deploy --stage dev command, but the dev stage is not added to the endpoint. The endpoints come out like the one given below, with no stage segment:
https://****.execute-api.us-east-1.amazonaws.com/users
One of my lambda functions sends a post request to a third-party API and needs to provide a postback endpoint for the result. I need to be sure that the correct endpoint is sent for the production/dev stage.
postback_url = `https://${process.env.RestApiId}.execute-api.${process.env.REGION}.amazonaws.com/${process.env.stage}/dfs-pingback?id=$id&tag=$tag`;
As you can see, the above postback URL is wrong if the stage (process.env.stage) is not added to the endpoint.
serverless.yml
service: lytoolsApi
frameworkVersion: '2 || 3'
configValidationMode: error
provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  stage: dev
Serverless Framework does things a bit differently: instead of using API Gateway stages, it creates a completely new API Gateway for each stage. That's why you don't see the stage prefix in your path. If you look at the URL, though, you'll see that the base URL differs across stages; that's how you can differentiate between them.
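If you also need the stage inside the function for the postback URL, one option is to pass it in yourself as an environment variable; a minimal sketch, assuming the stage comes from --stage:
provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  stage: ${opt:stage, 'dev'}        # --stage from the CLI, defaulting to dev
  environment:
    STAGE: ${opt:stage, 'dev'}      # available as process.env.STAGE at runtime
The function can then build the URL from process.env.STAGE rather than relying on a path segment that isn't there.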

Kubernetes - env variables as API URL

So I have an API that's the gateway for two other APIs.
Using Docker in WSL 2 (Ubuntu), I build and run my gateway API like this:
docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL -e B_API_URL=$B_API_URL registry:$(somePort)//gateway
I have two environment variables that hold the API URLs of the two APIs. I just don't know how to make this work in the Kubernetes config.
env:
- name: A_API_URL
  value: <need help>
- name: B_API_URL
  value: <need help>
I get 500 or 502 errors when accessing them over the network.
I tried specifying the value of the env var as:
- their respective service's name
- the complete URI (http://$(addr):$(port))
- the relative path: /something/anotherSomething
Each API is deployed with a Deployment controller and a Service.
I'm at a loss; any help is appreciated.
You just have to hardwire them; Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that could inject these the way Bash does in your docker run example, but that's generally not a good idea, since anyone other than you running the same command could see different results. The values should look like http://servicename.namespacename.svc.cluster.local:port/whatever. So if the service is named foo in namespace default with port 8000 and path /api: http://foo.default.svc.cluster.local:8000/api.
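For example, assuming the two backends are exposed by Services named a-api and b-api (hypothetical names) in the default namespace on port 8080, the gateway's env block would be:
env:
- name: A_API_URL
  value: "http://a-api.default.svc.cluster.local:8080"
- name: B_API_URL
  value: "http://b-api.default.svc.cluster.local:8080"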

Create multiple file share dynamically under existing storage account

I am learning Kubernetes (AKS) and playing around with Azure. I have a scenario where I need to create multiple file shares in an Azure storage account; I am able to create them with a set of commands, but the twist is that I need to create them dynamically, as required.
Example: I have two applications that both need an Azure storage account. Instead of creating two different storage accounts, I can create two file shares under the same account, and I want each file share created dynamically as its application starts to deploy, because I might need the second application, or may start a third one. So instead of creating the file shares up front, I want to create them on demand.
After googling I found this article, but there too the share name must already exist in the storage account.
My question is: is this achievable? If so, how?
Update
YML for storageClass and PersistentVolumeClaim
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mystorageclass
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: mystrg
  location: eastus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mystorageclass
  resources:
    requests:
      storage: 5Gi
The storageClass is created successfully, but the PersistentVolumeClaim status is pending with the error 'resource storageAccount not found'. It seems to look for the storageAccount under the resource group that Kubernetes created?
The short answer is yes. When you use a dynamically created persistent volume from an Azure file share in AKS, you can create the storage account beforehand and then reference that storage account in the storage class like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: azureaksstore
  location: eastus
And when you create the PVC with this SC, Azure will create the file share in this storage account for you; it shows up under the storage account's file shares.
For more details, see Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
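On the 'resource storageAccount not found' error from the update: as far as I know, the in-tree azure-file provisioner looks for the storage account in the cluster's node resource group (the auto-generated MC_* group), not the group you created the cluster in. A sketch with hypothetical resource names:
# find the node resource group AKS created for the cluster
az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query nodeResourceGroup -o tsv

# create the storage account there so the provisioner can find it
az storage account create --name mystrg --sku Standard_LRS \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus --location eastus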

how to manage secrets for different environments in serverless framework

I am trying to figure out how to manage secrets for different environments while creating serverless applications.
my serverless.yml file looks something like this:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage}
  region: us-east-1
  environment:
    NODE_ENV: ${opt:stage}
    SOME_API_KEY: # this is different depending on dev or prod
functions:
  ....
When deploying, I use the following command:
serverless deploy --stage prod
I want the configuration information to be picked up from AWS parameter store as described here:
https://serverless.com/blog/serverless-secrets-api-keys/
However, I do not see a way to provide different keys for the development and prod environments.
Any suggestions?
I put prefixes in my Parameter Store variables.
e.g.
common/SLACK_WEBHOOK
development/MYSQL_PASSWORD
production/MYSQL_PASSWORD
Then, in my serverless.yml, I can do...
...
environment:
  SLACK_WEBHOOK: ${ssm:common/SLACK_WEBHOOK}
  MYSQL_PASSWORD: ${ssm:${opt:stage}/MYSQL_PASSWORD}
You should be able to do something similar.
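The parameters themselves can be created once per environment with the AWS CLI; a sketch with placeholder values (depending on your setup, names may need to be fully qualified with a leading slash, and on older Framework versions SecureString values needed a ~true suffix in the ${ssm:} reference to be decrypted):
aws ssm put-parameter --name common/SLACK_WEBHOOK --type String \
    --value "https://hooks.slack.com/services/placeholder"
aws ssm put-parameter --name development/MYSQL_PASSWORD --type SecureString \
    --value "dev-placeholder"
aws ssm put-parameter --name production/MYSQL_PASSWORD --type SecureString \
    --value "prod-placeholder"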

Docker Secrets ASP.NET Core

I want to access Docker secrets in my ASP.NET Core project.
I distilled the issue down to a test API project. All it does is read the directories inside /run/
[HttpGet]
public IEnumerable<string> Get()
{
    return Directory.GetDirectories("/run");
}
I changed the compose files to 3.1 and added the plumbing for secrets listed here: how do you manage secret values with docker-compose v3.1?
version: '3.1'
services:
  secrettest:
    image: secrettest
    build:
      context: ./SecretTest
      dockerfile: Dockerfile
    secrets: # secrets block only for the 'secrettest' service
      - my_external_secret
secrets: # top-level secrets block
  my_external_secret:
    external: true
The Get action returns ["/run/lock"]. I do not see a /run/secrets directory, and I also shelled into the container to verify that it isn't there.
I am able to see secrets from other containers. Does anyone know what I am missing? Is there another strategy I should take, other than the 3.1 compose file, to configure the container in VS 2017?
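For what it's worth, once the mount does show up, each secret is just a file under /run/secrets; a minimal sketch of reading the my_external_secret defined in the compose file above:
using System.IO;

public static class DockerSecrets
{
    // returns the secret's contents, or null if the secret isn't mounted
    public static string Read(string name)
    {
        var path = Path.Combine("/run/secrets", name);
        return File.Exists(path) ? File.ReadAllText(path).Trim() : null;
    }
}

// usage: var value = DockerSecrets.Read("my_external_secret");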