openshift concrete builder quota

How does OpenShift count the resource quota consumed by a specific builder image? (There may be multiple images.)
The build pod is created by the sti (source-to-image) builder, not by the OpenShift cluster itself (Kubernetes, to be exact).
I know the quota applies to the sti builder, but I would like to know how it is counted if we customize the quota (and whether I can do that at all). It looks like the cluster can't count the resource quota (CPU/memory, etc.).

Together with a quota you can define a scope. See OpenShift Origin: quota scopes.
The relevant scope for build and deployment pods is NotTerminating.
Adding this scope to the quota definition constrains it to build and deployment pods only (pods where spec.activeDeadlineSeconds is nil, according to the docs).
Example definition:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: slow-builds-and-deployments
spec:
  hard:
    pods: "2"
    limits.cpu: "1"
    limits.memory: "1Gi"
  scopes:
  - NotTerminating
The Terminating scope, on the other hand, applies to the other pods (pods with spec.activeDeadlineSeconds >= 0).
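To try this out, a minimal sketch (assuming the definition above is saved as quota.yaml, your project is named myproject, and you are using the oc CLI):

# create the quota in the project, then watch usage as builds run
oc create -f quota.yaml -n myproject
oc describe quota slow-builds-and-deployments -n myproject

The describe output lists the hard limits next to current usage, so you can see how much of the quota a running build pod is actually counted against.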

Why does a VirtualNode require podSelector.matchLabels and serviceDiscovery.dns?
Aren't they doing the same thing - identifying which Pods (IPs) are members of this virtual node?
The AWS App Mesh Controller For K8s uses podSelector to match every Pod to a VirtualNode (API Design) in the Mesh, as long as the Pod does not have an appmesh.k8s.aws/sidecarInjectorWebhook: disabled annotation defined.
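For illustration, a minimal VirtualNode sketch showing the two fields side by side; roughly, podSelector tells the controller which Pods are members, while serviceDiscovery.dns is the hostname other mesh nodes use to reach them (names, namespace and port below are made up; adjust to your mesh):

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: my-service
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: my-service        # which Pods belong to this virtual node
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  serviceDiscovery:
    dns:
      hostname: my-service.my-namespace.svc.cluster.local   # how traffic to this node is resolved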

AKS API Load testing error: Premature end of Content-Length delimited message body

While load testing, after some successful responses from the API, JMeter records errors:
'Premature end of Content-Length delimited message body'.
From the logs inside the code, the responses seem to complete normally.
The app is deployed on AKS with nginx ingress controllers (nginx/1.15.10). It consists of 4 separate APIs (one master calling the 3 others). The APIs are built with Flask and Connexion and run in a WSGIContainer on a Tornado HTTPServer.
Another confusing factor is that the app is deployed as two instances on the same cluster. One deployment does not return errors and the other does.
What could be causing the error?
I would suggest limiting your testing scope.
1) Target the application directly (bypassing the k8s svc and ingress controller). Ensure you target each app running on the two different nodes. Do you still see the issue?
2) Target the app service directly (bypassing the ingress controller). Ensure you target each app running on the two different nodes. Do you still see the issue?
3) Target the app using its ingress. Ensure you target each app running on the two different nodes. Do you still see the issue?
Based on those results, we should be able to better pinpoint the source of your issue (see the port-forward sketch below for steps 1 and 2).
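For steps 1 and 2, kubectl port-forward is a quick way to bypass the ingress (and, for step 1, the service as well); a sketch, assuming hypothetical names my-api-pod / my-api-svc and a container listening on port 80:

# step 1: talk to a single pod directly
kubectl port-forward pod/my-api-pod 8080:80
# step 2: talk to the service, still bypassing the ingress controller
kubectl port-forward svc/my-api-svc 8080:80

Then point JMeter at http://localhost:8080 and compare error rates at each layer.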

Create multiple file share dynamically under existing storage account

I am learning Kubernetes (AKS) and playing around with Azure. I have a scenario where I need to create multiple file shares in an Azure storage account. I am able to create them with a set of commands, but the twist is that I need to create them dynamically, as required.
Example: I have two applications that both need Azure storage. Instead of creating two different storage accounts, I can create two file shares under the same storage account. Here I want the file shares to be created dynamically as each application is deployed, because I might need a second application, or maybe I will start a third one. So instead of creating multiple file shares up front, I want to create them on demand.
After googling I found this article, but there too the share name must already exist in the storage account.
My question is: is this achievable, and if so, how?
Update
YAML for the StorageClass and PersistentVolumeClaim:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mystorageclass
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: mystrg
  location: eastus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: mystorageclass
  resources:
    requests:
      storage: 5Gi
The StorageClass is created successfully, but the PersistentVolumeClaim stays in Pending status with the error "resource storageAccount not found". Is it trying to find the storage account in the resource group that Kubernetes created?
The short answer is yes. When you use a dynamically created persistent volume in AKS backed by an Azure file share, you can create the storage account beforehand and then reference it in the storage class like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: azureaksstore
  location: eastus
When you create the PVC with this SC, Azure creates the file share in that storage account for you, and the new share shows up in the storage account.
For more details, see Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
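To confirm the share was actually created, a quick sketch (assuming the storage account name azureaksstore from the example above and that your Azure CLI login can read the account keys):

kubectl get pvc                      # the claim should become Bound
az storage share list --account-name azureaksstore --output table

A share named after the dynamically provisioned volume (typically something like kubernetes-dynamic-pvc-<id>) should appear in the list.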

Restrict Log Analytics logging per deployment or container

We've seen our Log Analytics costs spike and found that the ContainerLog table had grown drastically. This appears to be all stdout/stderr logs from the containers.
Is it possible to restrict logging to this table, at least for some deployments or containers, without disabling Log Analytics on the cluster? We still want performance logging and insights.
AFAIK the stdout and stderr logs under the ContainerLog table are basically the logs you see when you manually run the kubectl logs command. So it is possible to restrict logging to the ContainerLog table without disabling Log Analytics on the cluster by having the deployment file look something like the one shown below, which writes logs to a log file within the container instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxx
spec:
  selector:
    matchLabels:
      app: xxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxx
    spec:
      containers:
      - name: xxxxxxx
        image: xxxxxxx/xxxxxxx:latest
        command: ["sh", "-c", "./xxxxxxx.sh &> /logfile"]
However, the best practice for applications running in a container is to send log messages to stdout, so the above approach is not preferable.
Instead, you may create an alert when data collection is higher than expected, as explained in this article, and/or occasionally delete unwanted data, as explained in this article, by leveraging the purge REST API (but make sure you are purging only unwanted data, because deletes in Log Analytics are non-reversible!).
Hope this helps!!
I recently faced a similar problem in one of our Azure clusters: due to some incessant logging in the code, the container logs went berserk. It is possible to restrict logging per namespace at the level of stdout or stderr.
You configure this by deploying a ConfigMap in the kube-system namespace, after which log ingestion to the Log Analytics workspace can be disabled or restricted per namespace.
The omsagent pods in the kube-system namespace absorb the new config within a few minutes.
Download the file below and apply it to your Azure Kubernetes cluster:
container-azm-ms-agentconfig.yaml
The file contains the flags to enable/disable logging, and namespaces can be excluded in the rule.
# kubectl apply -f <path to container-azm-ms-agentconfig.yaml>
This only prevents log collection into the Log Analytics workspace; it does not stop log generation in the individual containers.
Details on each config flag in the file are available here.
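For reference, a trimmed sketch of the relevant section of that ConfigMap (the namespace list here is only an example; the published file documents the full schema):

kind: ConfigMap
apiVersion: v1
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  config-version: ver1
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        # stdout logs from these namespaces are not sent to Log Analytics
        exclude_namespaces = ["kube-system", "my-noisy-namespace"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system", "my-noisy-namespace"]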

Retrieve environment secret from AKS kubernetes cluster for deployed ASP.Net core web app

I've got a web app deployed to AKS and it's generally up and running fine. I am now trying to extend its functionality by adding access to Azure SQL.
We are using VSTS/Azure DevOps for deployment.
I've deployed a secret to the cluster using the command:
kubectl secret generic sampleapp-appsettings --from-literal=DBConnectionString="$(var_DBConnectionString)"
I've checked the cluster from the Kubernetes dashboard and can see it has been deployed as expected. The secret is a connection string to the database.
However, I'm struggling to retrieve the secret from the deployed pods. I've created an environment variable for ASPNETCORE_ENVIRONMENT with a value of Kubernetes.
Here's part of my deployment yaml:
spec:
  containers:
  - name: sampleapp-services
    image: sampleapp.azurecr.io/sampleapp-services:latest
    imagePullPolicy: Always
    env:
    - name: "ASPNETCORE_ENVIRONMENT"
      value: "Kubernetes"
    - name: services-appsettings
      valueFrom:
        secretKeyRef:
          name: services-appsettings
          key: DBConnectionString
    ports:
    - containerPort: 80
I've added an API endpoint to my app for debugging purposes and can see that the ASPNETCORE_ENVIRONMENT value is being pulled correctly.
However, the DBConnectionString value isn't being pulled correctly from the Kubernetes secret; instead it's being retrieved from the appsettings.json file. I've got some code in my app which just outputs the values:
[HttpGet("settings")]
public ActionResult<string> GetAppSettings()
{
    var message = $"Host: {Environment.MachineName}\n" +
                  $"EnvironmentName: {_env.EnvironmentName}\n" +
                  $"Secret value: {_dataSettings.ConnectionString}";
    return message;
}
In my DataSettings class I've got code like this:
var value = Environment.GetEnvironmentVariable("DBConnectionString");
However, this isn't pulling back the secret value from the Kubernetes cluster as I'm expecting.
I've followed some examples, like this blog - but they don't help.
Has anyone got some simple step-by-step instructions/samples that might help?
thanks
The command you have specified is missing create, and it creates a secret named 'sampleapp-appsettings'; however, in deployment.yaml you have specified 'services-appsettings' instead. I assume the snippets you listed are just for reference and that in the actual code these values match.
Secondly, the environment variable name (name: services-appsettings) should match the name you have specified in code. As per your snippets, Environment.GetEnvironmentVariable("DBConnectionString") uses 'DBConnectionString', whereas your YAML defines 'services-appsettings'.
Lastly, I hope that in the Web API you are calling .AddEnvironmentVariables() while building the configuration.
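Putting those points together, a corrected sketch (assuming the secret really is meant to be called sampleapp-appsettings and the code keeps reading DBConnectionString; adjust the names to whatever your pipeline actually uses):

kubectl create secret generic sampleapp-appsettings --from-literal=DBConnectionString="$(var_DBConnectionString)"

and in the deployment:

env:
- name: "ASPNETCORE_ENVIRONMENT"
  value: "Kubernetes"
# expose the secret under the exact name the code reads
- name: DBConnectionString
  valueFrom:
    secretKeyRef:
      name: sampleapp-appsettings
      key: DBConnectionString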