I want to create a web app using an Apache server with HTTPS, and I have generated certificate files using Let's Encrypt. I have already verified that cert.pem, chain.pem, fullchain.pem, and privkey.pem are stored on the host machine. However, I cannot map them into the pod. Here is the web-controller.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: <my-web-app-image>
        command: ['/bin/sh', '-c']
        args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND']
        name: web
        ports:
        - containerPort: 80
          name: http-server
        volumeMounts:
        - mountPath: /usr/local/myapp/https
          name: test-volume
          readOnly: false
      volumes:
      - hostPath:
          path: /etc/letsencrypt/live/xxx.xxx.xxx.edu
        name: test-volume
After kubectl create -f web-controller.yaml the error log says:
AH00526: Syntax error on line 8 of /etc/apache2/sites-enabled/000-default.conf:
SSLCertificateFile: file '/usr/local/myapp/https/cert.pem' does not exist or is empty
Action 'configtest' failed.
This is why I think the problem is that the certificates are not mapped into the container.
Could anyone help me on this? Thanks a lot!
I figured it out: I had to mount it at /etc/letsencrypt/live/host rather than /usr/local/myapp/https.
This is probably not the root cause, but it works now.
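For reference, here is a sketch of how the pod spec's volume section looks with that change (the image name and the Let's Encrypt hostname directory are placeholders, exactly as in the manifest above):
spec:
  containers:
  - name: web
    image: <my-web-app-image>
    volumeMounts:
    # mount the host certificates at the path the Apache SSL config resolves
    - mountPath: /etc/letsencrypt/live/host
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /etc/letsencrypt/live/xxx.xxx.xxx.edu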
I'm creating a pipeline to deploy an application to Kubernetes.
I've been given the authentication credentials as a YAML file similar to the following:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tL******0tLS0t
    server: https://api.whatever.com
  name: gs-name-clientcert
contexts:
- context:
    cluster: gs-name-clientcert
    user: gs-name-clientcert-user
  name: gs-name-clientcert
current-context: gs-name-clientcert
kind: Config
preferences: {}
users:
- name: gs-name-clientcert-user
  user:
    client-certificate-data: LS************RS0tLS0t
    client-key-data: LS0tL***********tLQ==
How can I tell kubectl to use that config file when I use the apply command?
Thanks.
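One common way to do this (the file path below is just an example) is to point kubectl at the provided file with the --kubeconfig flag, or via the KUBECONFIG environment variable:
# use the provided kubeconfig for a single command
kubectl --kubeconfig=/path/to/provided-config.yaml apply -f deployment.yaml
# or export it for every kubectl call in the pipeline step
export KUBECONFIG=/path/to/provided-config.yaml
kubectl apply -f deployment.yaml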
I have developed a very small service on .NET 6, running on Windows 10 with Docker 20.10.17. I want to expose it as a service in Kubernetes on my local machine as "http://localhost:15001/Calculator/sum/1/1".
I am running a script like:
docker build -f API/CalculadoraREST/Dockerfile . --tag calculadorarestapi:v1.0
kubectl config set-context --current --namespace=calculadora
kubectl apply -f kubernetes/namespace.yml --overwrite=true
kubectl apply -f kubernetes --overwrite=true
When it finishes and I run kubectl get ingress -n calculadora, I get the ingress object, but it has no IP address to access it at:
NAME CLASS HOSTS ADDRESS PORTS AGE
calculadora-ingress <none> * 80 5s
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal AS base
WORKDIR /app
EXPOSE 15001
ENV ASPNETCORE_URLS=http://+:15001
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build
WORKDIR /src
COPY ["API/CalculadoraREST/CalculadoraRestAPI.csproj", "API/CalculadoraREST/"]
RUN dotnet restore "API/CalculadoraREST/CalculadoraRestAPI.csproj"
COPY . .
WORKDIR "/src/API/CalculadoraREST"
RUN dotnet build "CalculadoraRestAPI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CalculadoraRestAPI.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CalculadoraRestAPI.dll"]
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001
Service:
apiVersion: v1
kind: Service
metadata:
  name: calculadorarestapi-service
  namespace: calculadora
spec:
  selector:
    app: calculadorarestapi
  ports:
  - protocol: TCP
    port: 15001
    targetPort: 15001
    name: http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calculadorarestapi-deployment
  namespace: calculadora
spec:
  selector:
    matchLabels:
      app: calculadorarestapi
  replicas: 2
  template:
    metadata:
      labels:
        app: calculadorarestapi
    spec:
      containers:
      - name: calculadorarestapi
        image: calculadorarestapi:v1.0
        ports:
        - containerPort: 15001
        resources:
          requests:
            memory: "150Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      imagePullSecrets:
      - name: regsecret
Any ideas? I would really appreciate your comments. :-)
Could you add the kubernetes.io/ingress.class: "nginx" annotation to the Ingress resource, or set the ingressClassName field, as described in this Git link?
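A sketch of what that could look like on the Ingress from the question, assuming the ingress-nginx controller is installed and its class is named nginx (either the annotation or the ingressClassName field is normally enough):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    # older controller versions read the class from this annotation
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # newer Kubernetes versions prefer the ingressClassName field
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001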
I have a .pfx file that a Java container needs to use.
I have created a TLS secret using the command
kubectl create secret tls secret-pfx-key --dry-run=client --cert tls.crt --key tls.key -o yaml
which produces a manifest like:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: secret-pfx-key
  namespace: default
data:
  # cat tls.crt | base64
  tls.crt: base64-gibberish....
  # cat tls.key | base64
  tls.key: base64-gibberish....
However, now I cannot understand how to use it. When I add the secret as a volume in the pod, I can see the two files that are created, but I need them combined into a single .pfx file.
Am I missing something? Thanks.
Note: I have read the related Stack Overflow questions but could not figure out how to do this.
You can convert the certificate and key to a .pfx file first, then create a generic secret from it: kubectl create secret generic mypfx --from-file=pfx-cert=<converted pfx file>
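For example, a typical OpenSSL conversion, assuming the same tls.crt and tls.key used for the earlier secret (file names are illustrative):
# combine the certificate and private key into a single PKCS#12 (.pfx) file;
# openssl will prompt for an export password
openssl pkcs12 -export -in tls.crt -inkey tls.key -out cert.pfx
# store the resulting .pfx as a generic secret under the key pfx-cert
kubectl create secret generic mypfx --from-file=pfx-cert=cert.pfx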
Mount the secret as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-mypfx
spec:
  restartPolicy: OnFailure
  volumes:
  - name: pfx-volume
    secret:
      secretName: mypfx
  containers:
  - name: busybox
    image: busybox
    command: ["ash", "-c", "cat /path/in/the/container/pfx-cert; sleep 5"]
    volumeMounts:
    - name: pfx-volume
      mountPath: /path/in/the/container
The above example dumps the cert, waits for 5 seconds, and exits.
I have deployed an Apache web server on a Kubernetes cluster using the standard httpd image from Docker Hub. I want to change the index file so that it prints the container ID instead of the default index page. How can I achieve this?
Answering the question:
How can I have an Apache container in Kubernetes that outputs the ID of the container in index.html or another .html file?
One of the ways this could be handled is with lifecycle hooks (specifically postStart):
PostStart
This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
-- Kubernetes.io: Docs: Concepts: Containers: Container lifecycle hooks: Container hooks
As for an example of how such a setup could be implemented:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd # <-- APACHE IMAGE
        # LIFECYCLE DEFINITION START
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
        # LIFECYCLE DEFINITION END
        ports:
        - containerPort: 80
Taking a specific look at:
command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
This part writes the hostname of the container to hostname.html.
To check whether each Pod has the hostname.html, you can create a Service (a minimal sketch is shown after the commands below) and run either:
$ kubectl port-forward svc/apache 8080:80 -> curl localhost:8080/hostname.html
$ kubectl run -it --rm nginx --image=nginx -- /bin/bash -> curl apache/hostname.html
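If you don't already have one, a minimal Service matching the Deployment above could look like this (a sketch; the names follow the example manifest):
apiVersion: v1
kind: Service
metadata:
  name: apache
spec:
  selector:
    app: apache     # matches the pod labels of the Deployment
  ports:
  - protocol: TCP
    port: 80        # port used by the port-forward / curl examples
    targetPort: 80  # containerPort of the httpd container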
Additional resources:
Kubernetes.io: Docs: Tasks: Configure pod container: Attach handler lifecycle event: Define postStart and preStop handlers
I would like to know which steps I have to follow in order to send the logs created in my custom Apache container (deployed in a pod with Kubernetes) to the Stackdriver collector.
I have noticed that if I create a pod with a standard Apache (or nginx) container, access.log and error.log are sent automatically to Stackdriver.
In fact, I'm able to see the logs both on the Kubernetes dashboard and on the Google Cloud Dashboard ---> Logging ---> Logs.
However, I don't see anything related to my custom Apache...
Any suggestions?
After some research I resolved the problem of forwarding logs from my custom Apache container.
I don't know why the "standard redirection" (using /dev/stdout or /proc/self/fd/1) is not working; in any case, the solution I followed is called "sidecar container with the logging agent".
1) Create a ConfigMap that holds a fluentd configuration:
apiVersion: v1
data:
  fluentd.conf: |
    <source>
      type tail
      format none
      path /var/log/access.log
      pos_file /var/log/access.log.pos
      tag count.format1
    </source>
    <source>
      type tail
      format none
      path /var/log/error.log
      pos_file /var/log/error.log.pos
      tag count.format2
    </source>
    <match **>
      type google_cloud
    </match>
kind: ConfigMap
metadata:
  name: my-fluentd-config
2) Create a pod with 2 containers: the custom Apache + a log agent. Both containers will mount a log folder; only the log agent will mount the fluentd config:
apiVersion: v1
kind: Pod
metadata:
  name: my-sidecar
  labels:
    app: my-sidecar
spec:
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: my-fluentd-config
  containers:
  - name: my-apache
    image: <your_custom_image_repository>
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-agent
    image: gcr.io/google_containers/fluentd-gcp:1.30
    env:
    - name: FLUENTD_ARGS
      value: -c /etc/fluentd-config/fluentd.conf
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
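Optionally, you can sanity-check that the logging agent started and picked up its configuration by looking at the sidecar's own logs (container names as defined in the Pod above):
kubectl logs my-sidecar -c log-agent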
3) Enter the my-apache container with:
kubectl exec -it my-sidecar --container my-apache -- /bin/bash
and check (or change) httpd.conf so that it uses the following log files:
ErrorLog /var/log/error.log
CustomLog /var/log/access.log common
(If you change something, remember to restart Apache.)
4) Now, in Google Cloud Console -> Logging, you'll be able to see the Apache access/error logs in Stackdriver with a filter like:
resource.type="container"
labels."compute.googleapis.com/resource_name"="my-sidecar"