Create multiple file shares dynamically under an existing storage account - azure-storage

I am learning Kubernetes (AKS) and playing around with Azure. I have a scenario where I need to create multiple file shares in an Azure storage account. I am able to create them with a set of commands, but the twist is that I need to create them dynamically, as required.
Example: I have two applications that both need Azure storage. Instead of creating two different storage accounts, I can create two file shares under the same storage account. Here I want the file shares to be created dynamically as each application is deployed, because I might add a second application or start a third one later. So instead of creating multiple file shares up front, I want to create them on demand.
After googling I found this article, but there too the share name must already exist in the storage account.
My question is: is this achievable, and if yes, how?
Update
YAML for the StorageClass and PersistentVolumeClaim
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mystorageclass
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: mystrg
  location: eastus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mystorageclass
  resources:
    requests:
      storage: 5Gi
The StorageClass is created successfully, but the PersistentVolumeClaim status stays Pending with the error "resource storageAccount not found". Does it try to find the storage account under the resource group that was created by Kubernetes?

The short answer is yes. When you use a dynamically provisioned persistent volume in AKS backed by an Azure file share, you can create the storage account beforehand and then reference that storage account in the storage class like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: azureaksstore
  location: eastus
And when you create a PVC with this SC, Azure will create the file share in this storage account for you; the new share then shows up under the storage account's file shares.
For more details, see Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
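One note on the "storageAccount not found" error from the update: by default the azure-file provisioner looks for the storage account in the cluster's node resource group (the MC_* group that AKS creates), not in the resource group you created yourself. A rough sketch, with placeholder resource group and cluster names, of creating the account there first:

# find the node resource group AKS created for the cluster
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv

# create the storage account inside that node resource group so the provisioner can find it
az storage account create --name mystrg --resource-group MC_myResourceGroup_myAKSCluster_eastus --sku Standard_LRS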

Related

Restrict Log Analytics logging per deployment or container

We've seen our Log Analytics costs spike and found that the ContainerLog table had grown drastically. This appears to be all stdout/stderr logs from the containers.
Is it possible to restrict logging to this table, at least for some deployments or containers, without disabling Log Analytics on the cluster? We still want performance logging and insights.
AFAIK, the stdout and stderr logs in the ContainerLog table are basically the logs you see when you manually run "kubectl logs <pod-name>". So it is possible to restrict logging to the ContainerLog table without disabling Log Analytics on the cluster by changing the deployment to something like the one shown below, which writes logs to a log file inside the container instead of stdout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxx
spec:
  selector:
    matchLabels:
      app: xxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxx
    spec:
      containers:
      - name: xxxxxxx
        image: xxxxxxx/xxxxxxx:latest
        command: ["sh", "-c", "./xxxxxxx.sh &> /logfile"]
However, the best practice for applications running in a container is to send log messages to stdout, so the above approach is not preferable.
So instead you may create an alert when data collection is higher than expected, as explained in this article, and/or occasionally delete unwanted data by leveraging the purge REST API, as explained in this article (but make sure you are purging only unwanted data, because deletes in Log Analytics are irreversible!).
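For reference, a purge request looks roughly like the sketch below. The subscription, resource group and workspace names are placeholders, and the api-version may differ, so double-check the linked article before running it.

az rest --method post \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/purge?api-version=2020-08-01" \
  --body '{"table": "ContainerLog", "filters": [{"column": "TimeGenerated", "operator": "<", "value": "2019-01-01T00:00:00"}]}'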
Hope this helps!!
I recently faced a similar problem in one of our Azure clusters: due to some incessant logging in the code, the container logs went berserk. It is possible to restrict logging per namespace at the level of stdout or stderr.
You configure this by deploying a ConfigMap in the kube-system namespace, after which log ingestion into the Log Analytics workspace can be disabled or restricted per namespace.
The omsagent pods in the kube-system namespace pick up the new configuration within a few minutes.
Download the file below and apply it to your Azure Kubernetes cluster:
container-azm-ms-agentconfig.yaml
The file contains the flags to enable/disable logging and namespaces can be excluded in the rule.
# kubectl apply -f <path to container-azm-ms-agentconfig.yaml>
This only prevents collection of the logs into the Log Analytics workspace; it does not stop log generation in the individual containers.
Details on each config flag in the file are available here
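The relevant portion of that file looks roughly like this (a sketch only; "my-noisy-namespace" is a placeholder, and the full file from Microsoft contains more settings):

kind: ConfigMap
apiVersion: v1
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  config-version: ver1
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        # namespaces whose stdout logs are NOT sent to Log Analytics
        exclude_namespaces = ["kube-system", "my-noisy-namespace"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system", "my-noisy-namespace"]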

Retrieve environment secret from AKS kubernetes cluster for deployed ASP.Net core web app

I've got a web app deployed to AKS and it's generally up and running fine. I am now trying to extend its functionality by adding access to Azure SQL.
We are using VSTS/Azure DevOps for deployment.
I've deployed a secret to the cluster using the command:
kubectl secret generic sampleapp-appsettings --from-literal=DBConnectionString="$(var_DBConnectionString)"
I've checked the cluster from the Kubernetes dashboard and can see it has been deployed as expected. The secret is a connection string to the database.
However, I'm struggling to retrieve the secret from the deployed pods. I've created an environment variable for ASPNETCORE_ENVIRONMENT with a value of "Kubernetes".
Here's part of my deployment yaml:
spec:
  containers:
  - name: sampleapp-services
    image: sampleapp.azurecr.io/sampleapp-services:latest
    imagePullPolicy: Always
    env:
    - name: "ASPNETCORE_ENVIRONMENT"
      value: "Kubernetes"
    - name: services-appsettings
      valueFrom:
        secretKeyRef:
          name: services-appsettings
          key: DBConnectionString
    ports:
    - containerPort: 80
I've added an API endpoint to my app for debugging purposes and can see that the ASPNETCORE_ENVIRONMENT value is being pulled correctly.
However, the DBConnectionString value isn't being pulled from the Kubernetes secret; instead it's being retrieved from the appsettings.json file. I've got some code in my app which just outputs the values:
[HttpGet("settings")]
public ActionResult<string> GetAppSettings()
{
    var message = $"Host: {Environment.MachineName}\n" +
                  $"EnvironmentName: {_env.EnvironmentName}\n" +
                  $"Secret value: {_dataSettings.ConnectionString}";
    return message;
}
In my DataSettings class I've got code like this:
var value = Environment.GetEnvironmentVariable("DBConnectionString");
However, this isn't pulling back the secret value from the Kubernetes cluster as I'm expecting.
I've followed some examples, like this blog, but they don't help.
Has anyone got some simple step-by-step instructions/samples that might help?
Thanks
The command you have specified to create the secret is missing create, and it creates a secret named 'sampleapp-appsettings', whereas in deployment.yaml you have specified 'services-appsettings' instead. I assume the snippets you listed are just for reference and in the actual code these values match.
Secondly, the environment variable name (services-appsettings) should match the name you use in code. As per your snippets, Environment.GetEnvironmentVariable("DBConnectionString") looks for 'DBConnectionString', whereas your YAML defines 'services-appsettings'.
Lastly, I hope that in your Web API you are calling .AddEnvironmentVariables() while building the configuration.
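Putting that together, the env section would look roughly like this (a sketch, assuming the secret really is named sampleapp-appsettings and has a DBConnectionString key):

env:
- name: "ASPNETCORE_ENVIRONMENT"
  value: "Kubernetes"
- name: DBConnectionString          # must match Environment.GetEnvironmentVariable("DBConnectionString")
  valueFrom:
    secretKeyRef:
      name: sampleapp-appsettings   # must match the secret created with kubectl create secret generic
      key: DBConnectionString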

How can I configure openshift to find the my RabbitMQ definitions.json?

I am experiencing this problem or a similar problem:
https://access.redhat.com/solutions/2374351 (RabbitMQ users and its permission are deleted after resource restart)
But the proposed fix is not public.
I would like to have a user name & password hash pair which can survive a complete crash.
I am not sure how to programmatically upload or define definitions.json with OpenShift templates. I can upload definitions.json to /var/lib/rabbitmq/etc/rabbitmq/definitions.json with WinSCP.
If my definitions.json is uploaded by hand, the user names and hashes get reloaded after a crash. However, I don't want to upload it by hand; I would like to configure OpenShift and save that configuration.
My only idea is to try to access one OpenShift ConfigMap from another.
I have two config maps:
plattform-rabbitmq-configmap
definitions.json
I want the ConfigMap plattform-rabbitmq-configmap to reference the ConfigMap definitions.json.
plattform-rabbitmq-configmap contains my rabbitmq.config. In plattform-rabbitmq-configmap I want to access or load definitions.json.
Using the oc get configmaps command I got a selfLink for definitions.json. Using that selfLink I try to load definitions.json as follows (in plattform-rabbitmq-configmap):
load_definitions,"/api/v1/namespaces/my-app/configmaps/definitions.json"}
But that doesn't work:
=INFO REPORT==== 15-Mar-2018::15:08:40 ===
    application: rabbit
    exited: {bad_return,
                {{rabbit,start,[normal,[]]},
                 {'EXIT',
                    {error,
                       {could_not_read_defs,
                          {"/api/v1/namespaces/my-app/configmaps/definitions.json",
                           enoent}}}}}}
    type: transient
Is there any way to do this? Or another way?
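For illustration, a hedged sketch of a different route: mount definitions.json from a ConfigMap into the pod as a file, so that load_definitions can point at a real file path instead of an API URL (the ConfigMap name definitions-json and the mount path are assumptions, not taken from the post):

spec:
  template:
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        volumeMounts:
        - name: definitions
          # expose the ConfigMap entry as /etc/rabbitmq/definitions.json inside the container
          mountPath: /etc/rabbitmq/definitions.json
          subPath: definitions.json
      volumes:
      - name: definitions
        configMap:
          name: definitions-json

rabbitmq.config would then reference {load_definitions, "/etc/rabbitmq/definitions.json"} rather than the ConfigMap's API path.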

how to manage secrets for different environment in serverless framework

I am trying to figure out how to manage secrets for different environments while creating serverless applications.
My serverless.yml file looks something like this:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage}
  region: us-east-1
  environment:
    NODE_ENV: ${opt:stage}
    SOME_API_KEY: // this is different depending upon dev or prod
functions:
  ....
When deploying I use the following command:
serverless deploy --stage prod
I want the configuration information to be picked up from AWS parameter store as described here:
https://serverless.com/blog/serverless-secrets-api-keys/
However, I do not see a way to provide different keys for the development and prod environments.
Any suggestions?
I put prefixes in my Parameter Store variables.
e.g.
common/SLACK_WEBHOOK
development/MYSQL_PASSWORD
production/MYSQL_PASSWORD
Then, in my serverless.yml, I can do...
...
environment:
  SLACK_WEBHOOK: ${ssm:common/SLACK_WEBHOOK}
  MYSQL_PASSWORD: ${ssm:${opt:stage}/MYSQL_PASSWORD}
You should be able to do something similar.
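With that setup, the stage passed on the command line decides which parameter is resolved at deploy time, for example (assuming your stages are named development and production to match the prefixes above):

serverless deploy --stage development   # resolves ${ssm:development/MYSQL_PASSWORD}
serverless deploy --stage production    # resolves ${ssm:production/MYSQL_PASSWORD}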

openshift concrete builder quota

How does OpenShift count the resource quota consumed by a specific builder image? (There may be multiple images.)
The build pod is created by the S2I builder, not by the OpenShift cluster itself (Kubernetes, to be exact).
I know the quota applies to the S2I builder, but I would like to know how it is counted if we customize the quota (and whether I can do that at all). It looks like the cluster can't count the resource quota (CPU/memory, etc.) for it.
Together with quota you can define a scope. See OpenShift Origin: quota scopes.
The relevant scope for build and deployment pods is NotTerminating.
Adding this scope to the quota definition constrains it to only build and deployment pods (pods where spec.activeDeadlineSeconds is nil, according to the docs).
Example definition:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: slow-builds-and-deployments
spec:
  hard:
    pods: "2"
    limits.cpu: "1"
    limits.memory: "1Gi"
  scopes:
  - NotTerminating
The Terminating scope, on the other hand, applies to the remaining pods (pods with spec.activeDeadlineSeconds >= 0).
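For comparison, a quota limited to those Terminating pods would look roughly like this (a sketch mirroring the example above; the name and limits are placeholders):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: terminating-pods
spec:
  hard:
    pods: "2"
    limits.cpu: "1"
    limits.memory: "1Gi"
  scopes:
  - Terminating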