I have implemented a .NET Core API that caches and reads its data from a Redis cache. I have created images for the API and Redis and deployed them in two separate pods. Currently I am using minikube as the Kubernetes cluster on my local machine.
.Net API code:
Controller method code:
[HttpGet]
public IActionResult Get()
{
    var cacheKey = "weatherList";
    var rng = new Random();
    IEnumerable<WeatherForecast> weatherList = new List<WeatherForecast>();
    var redisWeatherList = _distributedCache.Get(cacheKey);
    if (redisWeatherList != null)
    {
        var serializedWeatherList = Encoding.UTF8.GetString(redisWeatherList);
        weatherList = JsonConvert.DeserializeObject<List<WeatherForecast>>(serializedWeatherList);
        return Ok(new Tuple<IEnumerable<WeatherForecast>, string>(weatherList, "Data fetched from cache"));
    }
    else
    {
        weatherList = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        }).ToArray();
        var serializedWeatherList = JsonConvert.SerializeObject(weatherList);
        redisWeatherList = Encoding.UTF8.GetBytes(serializedWeatherList);
        var options = new DistributedCacheEntryOptions()
            .SetAbsoluteExpiration(DateTime.Now.AddSeconds(5));
        _distributedCache.Set(cacheKey, redisWeatherList, options);
        return Ok(new Tuple<IEnumerable<WeatherForecast>>(weatherList));
    }
}
Redis cache configuration in the Startup class
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = Configuration.GetSection("AppConfiguration").GetSection("RedisConfiguration").Value;
});
For deployment I have created deployment and service YAML files for both of them. Since the API has to be accessed from outside the cluster, its service type is LoadBalancer, while Redis is only accessed by the API, so its service type is the default ClusterIP.
YML files:
API YML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-deployment
  template:
    metadata:
      labels:
        app: api-deployment
    spec:
      volumes:
        - name: appsettings-volume
          configMap:
            name: appsettings-configmap
      containers:
        - name: api-deployment
          image: testapiimage
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: appsettings-volume
              mountPath: /app/appsettings.json
              subPath: appsettings.json
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 200m
              memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api-service
spec:
  ports:
    - port: 8080
      targetPort: 80
  type: LoadBalancer
  selector:
    app: api-deployment
Redis YML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  labels:
    app: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-deployment
  template:
    metadata:
      labels:
        app: redis-deployment
    spec:
      containers:
        - name: redis-deployment
          image: redis
          ports:
            - containerPort: 6379
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 200m
              memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis-service
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis-deployment
Now the issue and confusion is how the API will know how to connect to the ClusterIP of the Redis pod. In the local code it connects through localhost, so while the API container is running it still tries to reach the Redis container through localhost, exactly as in the code.
Is there any way for the API container to connect to the ClusterIP of the Redis container automatically, for example through environment variables? In my case I depend on the value of the Redis host, which in the container environment is the ClusterIP; that IP is not static and can't simply be injected through appsettings.
Please guide me here.
To reach other pods in the cluster that are not exposed publicly, you can use the fully qualified domain name (FQDN) assigned by the Kubernetes internal DNS. Kubernetes internal DNS will resolve this to the necessary ClusterIP value to reach the service.
The FQDN is in the format:
<service-name>.<namespace>.svc.<cluster-domain>
The default value of the cluster-domain is cluster.local and if no namespace was provided to redis, it will be running in the default namespace.
Assuming all defaults, the FQDN of the redis service would be:
redis-service.default.svc.cluster.local.
Kubernetes will automatically resolve this to the appropriate ClusterIP address.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
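Since the API Deployment above already mounts appsettings.json from the appsettings-configmap ConfigMap, one way to wire this up is to put the service DNS name into that ConfigMap. This is only a minimal sketch, assuming the AppConfiguration/RedisConfiguration key from the Startup snippet and the default Redis port 6379:
apiVersion: v1
kind: ConfigMap
metadata:
  name: appsettings-configmap
data:
  appsettings.json: |
    {
      "AppConfiguration": {
        "RedisConfiguration": "redis-service.default.svc.cluster.local:6379"
      }
    }
Assuming the defaults mentioned above, the short name redis-service:6379 would also resolve, because the API pod and the Redis service are in the same namespace.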
Related
I have a .NET 6 web API service which I have deployed on AKS (Azure Kubernetes Service). The Program.cs looks like this:
var builder = WebApplication.CreateBuilder(args);
var swaggerConfig = builder.Configuration.GetSection(nameof(SwaggerConfig)).Get<SwaggerConfig>();

builder.Services.AddSwaggerGen(opts =>
{
    opts.SchemaFilter<ExampleNashSchemaFilter>();
    opts.SwaggerDoc(swaggerConfig.ApiVersion, new Microsoft.OpenApi.Models.OpenApiInfo { Title = swaggerConfig.ApiName, Version = swaggerConfig.ApiVersion });
});

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor |
        ForwardedHeaders.XForwardedProto;
    // Only loopback proxies are allowed by default.
    // Clear that restriction because forwarders are enabled by explicit
    // configuration.
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});

builder.Services.AddCors(p => p.AddPolicy("devcors", builder =>
{
    builder.WithOrigins("*").AllowAnyMethod().AllowAnyHeader();
}));

var app = builder.Build();

app.UseCors("devcors");
app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint($"/swagger/v1/swagger.json", swaggerConfig.ApiName);
    c.RoutePrefix = "swagger";
});
app.UseAuthorization();
app.MapControllers();
app.UseDefaultFiles();
app.UseStaticFiles();
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});
app.Run();
Here are the Swagger settings in appsettings.json:
"SwaggerConfig": {
  "ApiName": "Some Services",
  "ApiVersion": "v1"
}
Here is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicetest
  template:
    metadata:
      labels:
        app: servicetest
    spec:
      containers:
        - name: servicetest
          image: myprivatehub/servicetest:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: servicetest
spec:
  selector:
    app: servicetest
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: LoadBalancer
For some weird reason I can access the APIs served by this service on AKS, but I can't access swagger/index.html; I get an error that the page can't be found. I replicated this deployment on my minikube and it works well. I'm curious to know why I can't access the Swagger UI on AKS.
It seems that you do not have an Ingress Controller deployed on your AKS cluster. You will need one in order to get ingress working and to expose the Swagger UI.
To access the swagger-ui for testing purposes you can do this:
kubectl port-forward service/servicetest 8080:80
Afterwards just access http://localhost:8080
But you should definitely install an ingress controller: here is a workflow from MS to install ingress-nginx as the Ingress Controller on your cluster.
To avoid every service spawning its own LoadBalancer and public IP in Azure (and to keep an eye on your costs), you would then expose only the ingress controller to the internet, and you could also set loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The Ingress Controller will then route incoming traffic to your application according to an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: servicetest
                port:
                  number: 80
I'm working on a canary deployment strategy.
I use the Service Mesh Interface (SMI), after installing Traefik Mesh.
When starting the program for the first time with the command
kubectl apply -f applications.yaml
it should deploy the entire application, i.e. 4 replicas, but it deploys only 20% (1 replica) of the application,
and it goes into a progressing state with an error:
TrafficRoutingErro: the server could not find the requested resource (post trafficsplits.splits.smi-spec.io)
TrafficSplitNotCreated: Unable to create traffic Split 'demo-traefficsplit'
Here is my manifest:
argocd-rollout.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
  labels:
    app: demo
spec:
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause:
            duration: "1m"
        - setWeight: 50
        - pause:
            duration: "2m"
      canaryService: demo-canary
      stableService: demo
      trafficRouting:
        smi:
          rootService: demo-smi
          trafficSplitName: demo-trafficsplit
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: demo
      version: blue
  template:
    metadata:
      labels:
        app: demo
        version: blue
    spec:
      containers:
        - name: demo
          image: argoproj/rollouts-demo:blue
          imagePullPolicy: Always
          ports:
            - name: web
              containerPort: 8080
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "140m"
---
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: demo-trafficsplit
spec:
  service: demo-smi # controller uses the stableService if Rollout does not specify the rootService field
  backends:
    - service: demo
      weight: 10
    - service: demo-canary
      weight: 90
---
apiVersion: v1
kind: Service
metadata:
  name: demo-smi
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo-canary
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: rollout-ing
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`mycompagny.com`)
      services:
        - name: demo-smi
          port: 80
  tls:
    certResolver: myresolver
applications.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: net
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rollout
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:telemaqueHQ/DevOps.git
    targetRevision: master
    path: gitOps/test/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: net
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
My init.sql (script) is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps build and release pipelines.
There seems to be a mistake in the hostPath (which contains the SQL script) in the persistent volume YAML file when the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/devops-sample" # main project folder in Azure Repos which contains all files, including the SQL script
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder appears to be empty (nothing is getting copied).
How do I set the full host path in the MySQL persistent volume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount the persistent storage at /var/lib/mysql/data.
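As a minimal sketch of that suggestion, the volumeMounts entry in the mysql.yaml Deployment above would point the claim at the data directory rather than the init-script folder (the path below is the one given in this answer; note that the stock mysql image keeps its data under /var/lib/mysql by default):
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql/data # persistent data directory, not /docker-entrypoint-initdb.d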
I have problems with Kubernetes. I have been trying to deploy my service for two days now, but I'm doing something wrong.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be?
Here is also my yaml file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
    - ${URL}
    - www.${URL}
  acme:
    config:
      - domains:
          - ${URL}
          - www.${URL}
        http01:
          ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - secretName: ${APP_NAME}-cert
      hosts:
        - ${URL}
        - www.${URL}
  rules:
    - host: ${URL}
      http:
        paths:
          - backend:
              serviceName: ${APP_NAME}-service
              servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}-service
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
        - name: webapp
          image: eu.gcr.io/my-site/my-site.com:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          env:
            - name: COMMIT_SHA
              value: ${CI_COMMIT_SHA}
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              memory: '16Mi'
            limits:
              memory: '64Mi'
      imagePullSecrets:
        - name: ${REGISTRY_PULL_SECRET}
Can anybody help me with this? I'm stuck and I've no idea what could be the problem. This is also my first Kubernetes project.
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
.. means just what it says: your request to the Kubernetes API was not authenticated (that's the system:anonymous part), and your RBAC configuration does not tolerate the anonymous user making any requests to the API.
No one here is going to be able to help you straighten out that problem, because fixing that depends on a horrific number of variables. Perhaps ask your cluster administrator to provide you with the correct credentials.
I have explained it in this post. You will need a ServiceAccount, a ClusterRole and a RoleBinding. You can find an explanation in this article, or, as Matthew L Daniel mentioned, in the Kubernetes documentation.
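For reference, here is a rough sketch of what those three objects can look like. All names are hypothetical placeholders and the rules are only illustrative; scope them to whatever your deployment job actually needs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-deployer        # hypothetical name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role          # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding              # a RoleBinding may reference a ClusterRole; it then grants it only within this namespace
metadata:
  name: deployer-binding       # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
subjects:
  - kind: ServiceAccount
    name: gitlab-deployer
    namespace: default
The deploy step should then authenticate with the credentials derived from that ServiceAccount instead of falling back to system:anonymous.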
If you still have problems, describe the method/tutorial you used to deploy the cluster ("GitLab Kubernetes integration" alone does not say much about the method you used).
I am trying to set up an Apache Ignite cluster with persistence enabled. I am trying to start the cluster on Azure Kubernetes Service with 10 nodes. The problem is that cluster activation seems to get stuck, although I am able to activate a 3-node cluster in less than 5 minutes.
Here is the configuration I am using to start the cluster:
apiVersion: v1
kind: Service
metadata:
  name: ignite-main
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    main: ignite-main
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
    - port: 10800 # JDBC port
      targetPort: 10800
      name: jdbc
    - port: 11211 # Activating the baseline (port)
      targetPort: 11211
      name: control
    - port: 8080 # REST port
      targetPort: 8080
      name: rest
  selector:
    main: ignite-main
---
#########################################
# Ignite service configuration
#########################################
# Service for discovery of ignite nodes
apiVersion: v1
kind: Service
metadata:
  name: ignite
  labels:
    app: ignite
spec:
  clusterIP: None
  # externalTrafficPolicy: Cluster
  ports:
    # - port: 9042 # custom value.
    #   name: discovery
    - port: 47500
      name: discovery
    - port: 47100
      name: communication
    - port: 11211
      name: control
  selector:
    app: ignite
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignite-cluster
  labels:
    app: ignite
    main: ignite-main
spec:
  selector:
    matchLabels:
      app: ignite
      main: ignite-main
  replicas: 5
  template:
    metadata:
      labels:
        app: ignite
        main: ignite-main
    spec:
      volumes:
        - name: ignite-storage
          persistentVolumeClaim:
            claimName: ignite-volume-claim # Must be equal to the PersistentVolumeClaim created before.
      containers:
        - name: ignite-node
          image: ignite.azurecr.io/apacheignite/ignite:2.7.0-SNAPSHOT
          env:
            - name: OPTION_LIBS
              value: ignite-kubernetes
            - name: CONFIG_URI
              value: https://file-location
            - name: IGNITE_H2_DEBUG_CONSOLE
              value: 'true'
            - name: IGNITE_QUIET
              value: 'false'
            - name: java.net.preferIPv4Stack
              value: 'true'
            - name: JVM_OPTS
              value: -server -Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
          ports:
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 8080 # REST port number.
            - containerPort: 10800 # SQL port number.
            - containerPort: 11211 # Activating the baseline (port)
      imagePullSecrets:
        - name: docker-cred
I was trying to activate the cluster remotely by providing the --host parameter, like:
./control.sh --host x.x.x.x --activate
Instead, I tried activating the cluster by logging into one of the Kubernetes nodes and activating from there. The detailed steps are mentioned here.