I have developed a small service on .NET 6, running on Windows 10 with Docker 20.10.17. I want to expose it as a service in Kubernetes on my local machine as "http://localhost:15001/Calculator/sum/1/1".
I am running a script like:
docker build -f API/CalculadoraREST/Dockerfile . --tag calculadorarestapi:v1.0
kubectl config set-context --current --namespace=calculadora
kubectl apply -f kubernetes/namespace.yml --overwrite=true
kubectl apply -f kubernetes --overwrite=true
When it finishes and I run kubectl get ingress -n calculadora, I get the ingress object, but it has no IP address to access:
NAME                  CLASS    HOSTS   ADDRESS   PORTS   AGE
calculadora-ingress   <none>   *                 80      5s
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal AS base
WORKDIR /app
EXPOSE 15001
ENV ASPNETCORE_URLS=http://+:15001
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build
WORKDIR /src
COPY ["API/CalculadoraREST/CalculadoraRestAPI.csproj", "API/CalculadoraREST/"]
RUN dotnet restore "API/CalculadoraREST/CalculadoraRestAPI.csproj"
COPY . .
WORKDIR "/src/API/CalculadoraREST"
RUN dotnet build "CalculadoraRestAPI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CalculadoraRestAPI.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CalculadoraRestAPI.dll"]
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001
Service:
apiVersion: v1
kind: Service
metadata:
  name: calculadorarestapi-service
  namespace: calculadora
spec:
  selector:
    app: calculadorarestapi
  ports:
  - protocol: TCP
    port: 15001
    targetPort: 15001
    name: http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calculadorarestapi-deployment
  namespace: calculadora
spec:
  selector:
    matchLabels:
      app: calculadorarestapi
  replicas: 2
  template:
    metadata:
      labels:
        app: calculadorarestapi
    spec:
      containers:
      - name: calculadorarestapi
        image: calculadorarestapi:v1.0
        ports:
        - containerPort: 15001
        resources:
          requests:
            memory: "150Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      imagePullSecrets:
      - name: regsecret
Any ideas? I would really appreciate your comments. :-)
Add the kubernetes.io/ingress.class: "nginx" annotation to the Ingress resource, or set the class field (ingressClassName), as per this Git link.
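For example, a minimal sketch of both options (it assumes the NGINX Ingress Controller is installed and registered under the default class name nginx):

# Option 1: the legacy annotation (deprecated since Kubernetes 1.18)
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"

# Option 2: the ingressClassName field of networking.k8s.io/v1
spec:
  ingressClassName: nginx

Without one of these, no ingress controller picks up the Ingress, which is why the ADDRESS column stays empty.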
Related
We have two services that need to be served with the same hostname but with different paths.
https://demoapp.com/login
https://demoapp.com/admin/login
I have configured ingress rules for both, but the '/admin' page is not loading; I am not sure whether the problem is with the Ingress or the Nginx configuration.
Nginx configuration:
server {
    location / {
        root /var/www/html/;
        try_files $uri $uri/ /index.html;
    }
}
Dockerfile:
FROM node:14.16.0-alpine as build
ARG NPM_TOKEN
RUN apk add --update nginx
RUN mkdir -p /tmp/nginx/wa-demoapp-fe
RUN mkdir -p /var/log/nginx
RUN mkdir -p /var/www/html/admin
RUN mkdir -p /var/www/html/admin/admin
COPY nginx_config/nginx.conf /etc/nginx/nginx.conf
COPY nginx_config/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /tmp/nginx/wa-demoapp-fe
COPY app/ .
RUN echo "//npm.pkg.github.com/:_authToken=$NPM_TOKEN" > .npmrc
RUN echo "#test:registry=https://npm.pkg.github.com/" >> .npmrc
RUN cp .env.test .env && rm .env.production .env.test .env.development
RUN npm ci && npm run build
RUN cp -r dist/* /var/www/html/admin/admin
RUN cp /var/www/html/admin/admin/index.html /var/www/html/admin/.
RUN chown -R nginx:nginx /var/www/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Ingress config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-fe
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - demoapp.com
    secretName: tls-secret-fe
  rules:
  - host: demoapp.com
    http:
      paths:
      - path: /admin(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: wa-demoapp-fe
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: ab-abc-fe
            port:
              number: 80
Did I miss something in the configurations?
I have deployed an Apache web server on a Kubernetes cluster using the standard httpd image from Docker Hub. I want to change the index file so that it prints the container ID instead of the default index page. How can I achieve this?
Answering the question:
How can I have an Apache container in Kubernetes that outputs the ID of the container in index.html or another .html file?
One of the ways this could be handled is with lifecycle hooks (specifically postStart):
PostStart
This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
-- Kubernetes.io: Docs: Concepts: Containers: Container lifecycle hooks: Container hooks
As an example of how such a setup could be implemented:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd # <-- APACHE IMAGE
        # LIFECYCLE DEFINITION START
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
        # LIFECYCLE DEFINITION END
        ports:
        - containerPort: 80
Taking a specific look at:
command: ["/bin/sh", "-c", "echo $HOSTNAME > htdocs/hostname.html"]
This part writes the hostname of the container to hostname.html.
To check that each Pod has the hostname.html, you can create a Service (a minimal sketch follows the commands below) and run either:
$ kubectl port-forward svc/apache 8080:80 -> curl localhost:8080/hostname.html
$ kubectl run -it --rm nginx --image=nginx -- /bin/bash -> curl apache/hostname.html
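For completeness, a minimal Service that would back those commands (an assumption, since the answer does not show one; the selector matches the Pod template labels of the Deployment above):

apiVersion: v1
kind: Service
metadata:
  name: apache
spec:
  selector:
    app: apache    # matches the Deployment's Pod template labels
  ports:
  - port: 80       # Service port used by curl apache/hostname.html
    targetPort: 80 # containerPort of the httpd container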
Additional resources:
Kubernetes.io: Docs: Tasks: Configure pod container: Attach handler lifecycle event: Define postStart and preStop handlers
I did this:
I created a service account:
cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myname
...
I generated the token from the Secret created for the service account:
token=$(kubectl get secrets myname-xxxx-xxxx -o jsonpath={.data.token} | base64 --decode)
I set credentials for the ServiceAccount myname I created:
kubectl config set-credentials myname --token=$token
I created a context:
kubectl config set-context myname-context --cluster=my-cluster --user=myname
Then I created a copy of ~/.kube/config and deleted the cluster-admin entries (leaving only the user myname); one way to do this is sketched below.
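As a sketch, one way to produce such a stripped-down kubeconfig without hand-editing (assuming the myname-context created above) is:

kubectl config view --minify --flatten --context=myname-context > config-myname

Here --minify keeps only the entries referenced by the given context, and --flatten inlines the certificate and token data so the file is self-contained.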
I bound the user to a specific namespace with the edit ClusterRole permissions:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-access
  namespace: my-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: myname
EOF
I sent the edited ~/.kube/config to the person who wants to access the cluster. He can now list the pods but cannot exec into them:
Error (Forbidden): pods "pod-xxxxx-xxxx" is forbidden: User "system:serviceaccount:default:myname" cannot create resource "pods/exec" in API group "" in the namespace "my-ns"
I want to do that from a non-master machine which has the master's ~/.kube/config copied onto it.
Thanks
The RoleBinding that you have binds the ClusterRole to a User and not a ServiceAccount. The error clearly shows a ServiceAccount (system:serviceaccount:default:myname), so the RoleBinding should be as below:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-access
  namespace: my-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: myname
  namespace: default
EOF
To verify all permissions of the ServiceAccount myname, use the command below:
kubectl auth can-i --list --as=system:serviceaccount:default:myname
To verify the specific pods/exec permission of the ServiceAccount myname, use the command below:
kubectl auth can-i create pods/exec --as=system:serviceaccount:default:myname
I would like to know which steps I have to follow to send the logs created in my custom Apache container (deployed in a pod with Kubernetes) to the Stackdriver collector.
I have noticed that if I create a pod with a standard Apache (or nginx) container, access.log and error.log are sent automatically to Stackdriver.
In fact, I'm able to see the logs both on the Kubernetes dashboard and in Google Cloud Console -> Logging -> Logs.
But I don't see anything related to my custom Apache...
Any suggestions?
After some research I resolved the problem of forwarding logs from my custom Apache container.
I don't know why the "standard redirection" (using /dev/stdout or /proc/self/fd/1) is not working; in any case, the solution I followed is called "sidecar container with the logging agent".
1) Create a ConfigMap holding the fluentd configuration:
apiVersion: v1
data:
  fluentd.conf: |
    <source>
      type tail
      format none
      path /var/log/access.log
      pos_file /var/log/access.log.pos
      tag count.format1
    </source>
    <source>
      type tail
      format none
      path /var/log/error.log
      pos_file /var/log/error.log.pos
      tag count.format2
    </source>
    <match **>
      type google_cloud
    </match>
kind: ConfigMap
metadata:
  name: my-fluentd-config
2) Create a Pod with two containers: the custom Apache plus a log agent. Both containers mount a log folder; only the log agent mounts the fluentd config:
apiVersion: v1
kind: Pod
metadata:
  name: my-sidecar
  labels:
    app: my-sidecar
spec:
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: my-fluentd-config
  containers:
  - name: my-apache
    image: <your_custom_image_repository>
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-agent
    image: gcr.io/google_containers/fluentd-gcp:1.30
    env:
    - name: FLUENTD_ARGS
      value: -c /etc/fluentd-config/fluentd.conf
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
3) Enter the my-apache container with:
kubectl exec -it my-sidecar --container my-apache -- /bin/bash
and check that httpd.conf is using the following files:
ErrorLog /var/log/error.log
CustomLog /var/log/access.log common
(If you change something, remember to restart Apache.)
4) Now, in Google Cloud Console -> Logging, you'll be able to see the Apache access/error logs in Stackdriver with a filter like:
resource.type="container"
labels."compute.googleapis.com/resource_name"="my-sidecar"
I want to create a web app using an Apache server with HTTPS, and I have generated certificate files using Let's Encrypt. I have already verified that cert.pem, chain.pem, fullchain.pem, and privkey.pem are stored on the host machine. However, I cannot map them into the pod. Here is the web-controller.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: <my-web-app-image>
        command: ['/bin/sh', '-c']
        args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND']
        name: web
        ports:
        - containerPort: 80
          name: http-server
        volumeMounts:
        - mountPath: /usr/local/myapp/https
          name: test-volume
          readOnly: false
      volumes:
      - hostPath:
          path: /etc/letsencrypt/live/xxx.xxx.xxx.edu
        name: test-volume
After kubectl create -f web-controller.yaml the error log says:
AH00526: Syntax error on line 8 of /etc/apache2/sites-enabled/000-default.conf:
SSLCertificateFile: file '/usr/local/myapp/https/cert.pem' does not exist or is empty
Action 'configtest' failed.
This is why I think the problem is that the certificates are not mapped into the container.
Could anyone help me on this? Thanks a lot!
I figured it out: I have to mount it to /etc/letsencrypt/live/host rather than /usr/local/myapp/https (see the sketch below).
This is probably not the root cause, but it works now.
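For reference, a sketch of the corrected part of the manifest (<host> is a placeholder for the actual directory name under /etc/letsencrypt/live on the host):

# under the container:
volumeMounts:
- mountPath: /etc/letsencrypt/live/<host>  # was /usr/local/myapp/https
  name: test-volume
  readOnly: false
# under the pod spec:
volumes:
- hostPath:
    path: /etc/letsencrypt/live/xxx.xxx.xxx.edu
  name: test-volume

This assumes the Apache SSLCertificateFile/SSLCertificateKeyFile directives reference the same /etc/letsencrypt/live/<host> path inside the container.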