Apache web server custom index on Kubernetes

I have deployed an Apache web server on a Kubernetes cluster using the standard httpd image from Docker Hub. I want to change the index file so that it prints the container ID instead of the default index page. How can I achieve this?

Answering the question:
How can I have an Apache container in Kubernetes that will output the ID of the container in index.html or another .html file?
One of the ways this can be handled is with lifecycle hooks (specifically postStart):
PostStart
This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
-- Kubernetes.io: Docs: Concepts: Containers: Container lifecycle hooks: Container hooks
Here is an example of how such a setup could be implemented:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd # <-- APACHE IMAGE
        # LIFECYCLE DEFINITION START
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
        # LIFECYCLE DEFINITION END
        ports:
        - containerPort: 80
Taking a specific look at:
command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
This part writes the hostname of the container (for a Pod, this is the Pod name) to hostname.html.
To check that each Pod has its own hostname.html, you can create a Service and run either:
$ kubectl port-forward svc/apache 8080:80   # then: curl localhost:8080/hostname.html
$ kubectl run -it --rm nginx --image=nginx -- /bin/bash   # then: curl apache/hostname.html
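For completeness, a minimal sketch of the Service that the commands above assume (the name apache and the plain port 80 mapping are assumptions matching the Deployment above):
apiVersion: v1
kind: Service
metadata:
  name: apache            # assumed name, matching `svc/apache` in the commands above
spec:
  selector:
    app: apache           # selects the Pods created by the Deployment above
  ports:
  - port: 80              # Service port, used by `curl apache/hostname.html`
    targetPort: 80        # containerPort of the httpd container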
Additional resources:
Kubernetes.io: Docs: Tasks: Configure pod container: Attach handler lifecycle event: Define postStart and preStop handlers


Kubectl Ingress without IP

I have developed a very small service on .NET 6, running on Windows 10 and Docker 20.10.17. I want to expose it as a service in Kubernetes on my local machine as "http://localhost:15001/Calculator/sum/1/1".
I am running a script like:
docker build -f API/CalculadoraREST/Dockerfile . --tag calculadorarestapi:v1.0
kubectl config set-context --current --namespace=calculadora
kubectl apply -f kubernetes/namespace.yml --overwrite=true
kubectl apply -f kubernetes --overwrite=true
When it finishes and I run kubectl get ingress -n calculadora, I get the Ingress object, but it has no IP to access it:
NAME                  CLASS    HOSTS   ADDRESS   PORTS   AGE
calculadora-ingress   <none>   *                 80      5s
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal AS base
WORKDIR /app
EXPOSE 15001
ENV ASPNETCORE_URLS=http://+:15001
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build
WORKDIR /src
COPY ["API/CalculadoraREST/CalculadoraRestAPI.csproj", "API/CalculadoraREST/"]
RUN dotnet restore "API/CalculadoraREST/CalculadoraRestAPI.csproj"
COPY . .
WORKDIR "/src/API/CalculadoraREST"
RUN dotnet build "CalculadoraRestAPI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CalculadoraRestAPI.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CalculadoraRestAPI.dll"]
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001
Service:
apiVersion: v1
kind: Service
metadata:
  name: calculadorarestapi-service
  namespace: calculadora
spec:
  selector:
    app: calculadorarestapi
  ports:
  - protocol: TCP
    port: 15001
    targetPort: 15001
    name: http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calculadorarestapi-deployment
  namespace: calculadora
spec:
  selector:
    matchLabels:
      app: calculadorarestapi
  replicas: 2
  template:
    metadata:
      labels:
        app: calculadorarestapi
    spec:
      containers:
      - name: calculadorarestapi
        image: calculadorarestapi:v1.0
        ports:
        - containerPort: 15001
        resources:
          requests:
            memory: "150Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      imagePullSecrets:
      - name: regsecret
Any ideas? I would really appreciate your comments. :-)
Could you add the kubernetes.io/ingress.class: "nginx" annotation to the Ingress resource, or set the class field, as per this Git link?
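For illustration, a sketch of both options applied to the Ingress above (the class name nginx assumes the NGINX ingress controller is the one installed; use one mechanism or the other):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    kubernetes.io/ingress.class: "nginx"          # older, annotation-based selection
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx                         # newer, field-based selection
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001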

How to create a Traefik v2.x ReplacePath label in a simple docker-compose file?

Given the following URLs, I'm trying to create some routers with Traefik v2.x:
Route              forwards to
------------------------------
/users          -> /users
/users/*        -> /users/*
/users/swagger  -> /swagger
So in these examples, my web server has some user endpoints (GET /users, GET /users/1, POST /users, DELETE /users/1, etc.), but it also has a Swagger/OpenAPI definition/docs located at /swagger.
So I'm trying to access these endpoints through Traefik.
I'm under the impression that I need to create labels that use routers + PathPrefix and routers + Path for the endpoint matching ... but use middleware for the replace.
I'm not too sure how to do this properly.
Here's what I'm trying to do...
version: '3.5'
services:
  users-api:
    image: spike.openapi/users.api
    build:
      context: ./
      dockerfile: src/Users/Dockerfile
    ports:
      - "80"
    networks:
      - backend
    container_name: users.api
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.users-api.rule=PathPrefix(`/users`)"
      - "traefik.http.routers.users-api.rule=Path(`/users/swagger`)"
      - "traefik.http.routers.users-api.entrypoints=web"
  reverse-proxy:
    image: traefik
    <snipped>
...
Without the Host rule, Traefik will not know which backend to route the request to, so that might be the first thing you are missing. I think the following should work.
services:
  users-api:
    ...
    labels:
      # /users/swagger -> /swagger
      traefik.http.middlewares.replacepath-middleware.replacepath.path: /swagger
      traefik.http.routers.swagger-router.rule: Host(`your-domain.net`) && PathPrefix(`/users/swagger`)
      traefik.http.routers.swagger-router.entrypoints: http
      traefik.http.routers.swagger-router.middlewares: replacepath-middleware
      # everything else (/users -> /users)
      traefik.http.routers.base-router.entrypoints: http
      traefik.http.routers.base-router.rule: Host(`your-domain.net`)
You could also use the stripprefix middleware to achieve exactly the same thing:
services:
  users-api:
    ...
    labels:
      # /users/swagger -> /swagger
      traefik.http.middlewares.stripprefix-middleware.stripprefix.prefixes: /users
      traefik.http.routers.swagger-router.rule: Host(`your-domain.net`) && PathPrefix(`/users/swagger`)
      traefik.http.routers.swagger-router.entrypoints: http
      traefik.http.routers.swagger-router.middlewares: stripprefix-middleware
      # everything else (/users -> /users)
      traefik.http.routers.base-router.entrypoints: http
      traefik.http.routers.base-router.rule: Host(`your-domain.net`)
I noticed Traefik redirects the request (HTTP 304). If what you want is some kind of URL rewriting, I don't think Traefik can handle it; this should be your backend's job (users-api in your case).
IMO, understanding Traefik middleware behaviour is not easy. I tried to reproduce your setup with a simple nginx backend. Have a try at it:
version: '3.7'
services:
  traefik:
    image: traefik:v2.1
    ports:
      - 80:80
    command:
      - --entrypoints.http.address=:80
      - --providers.docker.exposedByDefault=false
      - --log.level=DEBUG
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  nginx:
    image: nginx:1.16.1
    labels:
      traefik.enable: 'true'
      # /users/swagger -> /swagger
      traefik.http.middlewares.replacepath-middleware.replacepath.path: /swagger
      traefik.http.routers.swagger-router.rule: Host(`127.0.0.1`) && PathPrefix(`/users/swagger`)
      traefik.http.routers.swagger-router.entrypoints: http
      traefik.http.routers.swagger-router.middlewares: replacepath-middleware
      # everything else (/users -> /users)
      traefik.http.routers.base-router.entrypoints: http
      traefik.http.routers.base-router.rule: Host(`127.0.0.1`)
Run the following commands first to create the folders and dummy pages:
docker-compose up -d
docker-compose exec nginx mkdir /usr/share/nginx/html/swagger
docker-compose exec nginx mkdir /usr/share/nginx/html/users
docker-compose exec nginx sh -c "echo 'users page here' > /usr/share/nginx/html/users/index.html" bbouchereau#bbouchereau
docker-compose exec nginx sh -c "echo 'swagger page here' > /usr/share/nginx/html/swagger/index.html"
The results:
http://127.0.0.1/users/ -> users page here
http://127.0.0.1/users/swagger/ -> redirect to http://127.0.0.1/swagger/ -> swagger page here

Custom docker container in Kubernetes cluster with log using Stackdriver

I would like to know which steps I have to follow in order to send the logs created in my custom apache container (deployed in a pod with Kubernetes) to the Stackdriver collector.
I have noticed that if I create a pod with a standard apache (or nginx) container, access.log and error.log are sent automatically to Stackdriver.
In fact, I'm able to see the logs both on the Kubernetes dashboard and in the Google Cloud Console -> Logging -> Logs.
Instead, I don't see anything related to my custom Apache...
Any suggestions?
After some research I have resolved the problem of forwarding logs from my custom Apache container.
I don't know why the "standard redirection" (using /dev/stdout or /proc/self/fd/1) is not working; anyway, the solution I followed is called a "sidecar container with the logging agent".
1) Create a ConfigMap holding a fluentd configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-fluentd-config
data:
  fluentd.conf: |
    <source>
      type tail
      format none
      path /var/log/access.log
      pos_file /var/log/access.log.pos
      tag count.format1
    </source>
    <source>
      type tail
      format none
      path /var/log/error.log
      pos_file /var/log/error.log.pos
      tag count.format2
    </source>
    <match **>
      type google_cloud
    </match>
2) Create a Pod with two containers: the custom Apache plus a log agent. Both containers mount the shared log folder; only the log agent mounts the fluentd config:
apiVersion: v1
kind: Pod
metadata:
  name: my-sidecar
  labels:
    app: my-sidecar
spec:
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: my-fluentd-config
  containers:
  - name: my-apache
    image: <your_custom_image_repository>
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-agent
    image: gcr.io/google_containers/fluentd-gcp:1.30
    env:
    - name: FLUENTD_ARGS
      value: -c /etc/fluentd-config/fluentd.conf
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
3) Enter the my-apache container with:
kubectl exec -it my-sidecar --container my-apache -- /bin/bash
and check/change httpd.conf so that it uses the following log files:
ErrorLog /var/log/error.log
CustomLog /var/log/access.log common
(If you change something, remember to restart Apache.)
4) Now, in Google Cloud Console -> Logging, you'll be able to see the Apache access/error logs in Stackdriver with a filter like:
resource.type="container"
labels."compute.googleapis.com/resource_name"="my-sidecar"

Kubernetes Redis Cluster issue

I'm trying to create a Redis cluster using Kubernetes on CentOS. I have my Kubernetes master running on one host and Kubernetes slaves on two different hosts.
etcdctl get /kube-centos/network/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
Here is my ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        command:
        - "redis-server"
        args:
        - "/redis-master/redis.conf"
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master
          name: config
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: redis-config
          items:
          - key: redis-config
            path: redis.conf
kubectl create -f rc.yaml
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE
redis-master-149tt   1/1     Running   0          8s    172.30.96.4   centos-minion-1
redis-master-14j0k   1/1     Running   0          8s    172.30.79.3   centos-minion-2
redis-master-3wgdt   1/1     Running   0          8s    172.30.96.3   centos-minion-1
redis-master-84jtv   1/1     Running   0          8s    172.30.96.2   centos-minion-1
redis-master-fw3rs   1/1     Running   0          8s    172.30.79.4   centos-minion-2
redis-master-llg9n   1/1     Running   0          8s    172.30.79.2   centos-minion-2
Redis config file used:
appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
I used the following command to create the Kubernetes Service:
kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort
Name: redis-service
Namespace: default
Labels: app=redis
role=master
tier=backend
Selector: app=redis,role=master,tier=backend
Type: NodePort
IP: 10.254.229.114
Port: <unset> 6379/TCP
NodePort: <unset> 30894/TCP
Endpoints: 172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity: None
No events.
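For reference, the kubectl expose command above is roughly equivalent to this manifest (a sketch reconstructed from the describe output; the nodePort is allocated automatically unless specified):
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
    role: master
    tier: backend
spec:
  type: NodePort
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379        # Service port
    targetPort: 6379  # containerPort on the Redis Pods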
Now I have all the Pods and the Service up and running. I'm using a redis-trib pod to create the Redis cluster.
kubectl exec -it redis-trib bash
./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379
The Redis cluster was created as expected, with the message below:
[OK] All 16384 slots covered.
Now I should be able to access my Redis cluster on the Kubernetes node IP (192.168.240.116) and nodePort (30894) from any host within my network. Everything works as expected when I execute the command below from one of the Kubernetes nodes:
redis-cli -p 30894 -h 192.168.240.116 -c
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
When I run the same command from a different (non-Kubernetes) node within the same network, I see a connection timed out error:
redis-cli -c -p 30894 -h 192.168.240.116
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
Is it not possible to access the Redis cluster from outside the Kubernetes cluster network when it is exposed using the NodePort service type?
Also, I cannot use the LoadBalancer service type as I'm not hosting this on a cloud provider.
I have been stuck on this issue for quite a while. Can someone suggest what approach I should use to access my Redis cluster from outside the network?
Thanks
Running ./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379 doesn't make sense with this setup.
Port 6379 is only accessible through the Service you brought up, never directly as you are trying. That's why you run into issues with this setup.
What you can do is expose each Pod with its own Service and have one additional cluster Service to load-balance external requests, as shown in the example repository from Kelsey Hightower. This way the Pods can communicate through the internally exposed ports, and (external) clients can use the load-balanced cluster port. The implication is that each Pod then requires its own ReplicaSet (or Deployment). There's a long talk available from Kelsey explaining the setup - YouTube / Slideshare.
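A sketch of that per-Pod Service idea (all names and the instance label are assumptions; each Redis Pod gets its own Service, plus one extra NodePort Service for external clients):
# One Service per Redis Pod, giving each cluster node a stable, individually addressable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: redis-0
spec:
  selector:
    app: redis
    instance: redis-0   # assumed per-Pod label; each Deployment/Pod gets a unique value
  ports:
  - port: 6379
    targetPort: 6379
---
# One additional Service load-balancing external requests across all Redis Pods.
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-access
spec:
  type: NodePort
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379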
An alternative would be to use a single redis master as shown in other examples.

Kubernetes: hostPath volume does not mount

I want to create a web app using an Apache server with HTTPS, and I have generated certificate files using Let's Encrypt. I have already verified that cert.pem, chain.pem, fullchain.pem, and privkey.pem are stored on the host machine. However, I cannot map them into the pod. Here is the web-controller.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: <my-web-app-image>
        command: ['/bin/sh', '-c']
        args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND']
        name: web
        ports:
        - containerPort: 80
          name: http-server
        volumeMounts:
        - mountPath: /usr/local/myapp/https
          name: test-volume
          readOnly: false
      volumes:
      - hostPath:
          path: /etc/letsencrypt/live/xxx.xxx.xxx.edu
        name: test-volume
After kubectl create -f web-controller.yaml, the error log says:
AH00526: Syntax error on line 8 of /etc/apache2/sites-enabled/000-default.conf:
SSLCertificateFile: file '/usr/local/myapp/https/cert.pem' does not exist or is empty
Action 'configtest' failed.
This is why I think the problem is that the certificates are not mapped into the container.
Could anyone help me with this? Thanks a lot!
I figured it out: I have to mount the volume at /etc/letsencrypt/live/host rather than /usr/local/myapp/https.
This is probably not the root cause, but it works now.
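For illustration, the fix amounts to changing the mount path in the manifest above (a sketch; the host segment mirrors the answer's wording, so substitute your own domain's directory name):
        volumeMounts:
        - mountPath: /etc/letsencrypt/live/host   # was: /usr/local/myapp/https
          name: test-volume
          readOnly: false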