How is IngressRoute hooked up to Traefik's ingress controller? - traefik

I am learning Traefik and IngressRoute. The thing that confuses me the most is how the two parts are connected.
After deploying Traefik and my own service, I can simply create the following IngressRoute to make it work:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-service-ir
  namespace: my-service-ns
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/`)
      kind: Rule
      services:
        - name: my-service
          port: 8000
But the IngressRoute shares nothing with Traefik: it is not in the same namespace, there is no selector, etc. It seems that the IngressRoute can magically find Traefik and apply to it. I am curious what happens behind the scenes.
Thanks

When you deploy Traefik in the Kubernetes cluster, you use the rbac-k8s manifests like here. If you use Helm, these are all present under the hood.
Alongside those RBAC rules, the CRD manifests register the new resource types, i.e. IngressRoute, over here.
The RBAC rules are applied at cluster level, as you can see in the linked ClusterRole, which gives Traefik cluster-level privileges to watch these resources in every namespace. This is the reason you don't see anything special in the namespace: Traefik finds the IngressRoute by watching the whole cluster, not through a selector.
You can check out the sample task here, which will shed some more light on the matter.
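For a concrete picture, this is roughly what those cluster-scoped permissions look like (a trimmed sketch based on the ClusterRole the Traefik docs ship; the exact resource list varies by version):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  # core resources Traefik needs to resolve the services behind its routes
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  # the Traefik CRDs, including IngressRoute, watched across all namespaces
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes", "ingressroutetcps", "middlewares", "tlsoptions", "traefikservices"]
    verbs: ["get", "list", "watch"]

Because this ClusterRole is bound to Traefik's service account through a ClusterRoleBinding, Traefik's Kubernetes CRD provider watches the API server for IngressRoute objects in all namespaces and rebuilds its routing table from whatever it finds; no selector or shared namespace is needed.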

Related

Should Tekton Dashboard be deployed on the root path?

I'm trying Tekton on a Kind cluster and successfully configured the Tekton Dashboard to work with Ingress rules. But I don't have a dedicated domain name, and am unlikely to have one later. This Tekton instance will be exposed on a subpath of another domain, through another NGINX.
But the Tekton Dashboard doesn't seem to work at subpath locations. Exposed with an Ingress path: / it works well, but if I change it to path: /tekton, it doesn't work.
So, is it designed to work only at the root path? Is there no support for running on a subpath?
P.S.
I'm going to use the Kind cluster for production too, as I do not have access to a Kubernetes cluster. This is a small service and we don't need scale, just CI/CD-as-code. And nowadays it seems all new CI/CD implementations are designed only for Kubernetes.
You can also use the following Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/[a-z1-9\-]*)$ $1/ redirect;
spec:
  rules:
    - http:
        paths:
          - path: /tekton-dashboard(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: tekton-dashboard
                port:
                  number: 9097
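Assuming the standard ingress-nginx rewrite semantics, the two annotations work together roughly like this (an illustrative mapping, not part of the original answer):

# rewrite-target: /$2 keeps only the second capture group of the path regex:
#   /tekton-dashboard        -> redirected to /tekton-dashboard/ by the configuration-snippet
#   /tekton-dashboard/       -> forwarded to the service as /
#   /tekton-dashboard/about  -> forwarded to the service as /about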
Tekton Dashboard does support being exposed on a subpath, it attempts to detect the base URL to use and adapts accordingly. For example, if you run kubectl proxy locally against the target cluster you can then access the Dashboard at http://localhost:8001/api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/
More details about the issue you're encountering would be useful to help debug, e.g. Dashboard version? Is anything loading at all? Ingress controller and config? Any errors in the browser console / network tab, etc.

Kubernetes Cert-Manager can't get http01 ACME challenge to work

Hello, I'm struggling to get cert-manager working with Let's Encrypt on my Azure AKS to secure an ASP.NET Core web app.
I have a ClusterIssuer like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencryptstaging-issuer
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: letsencryptstaging@prodibi.com
    privateKeySecretRef:
      name: letsencryptstaging-secret
    solvers:
      - http01:
          ingress:
            class: nginx
and I request a certificate like this:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: aks-prodibiv2-com-staging
spec:
  secretName: aks-prodibiv2-com-staging-secret
  duration: 2160h
  renewBefore: 480h
  organization:
    - prodibiv2
  dnsNames:
    - aks.prodibiv2.com
  issuerRef:
    name: letsencryptstaging-issuer
    kind: ClusterIssuer
I have also added these annotations to the Ingress I would like to use:
certmanager.k8s.io/acme-challenge-type: http01
certmanager.k8s.io/cluster-issuer: letsencryptstaging-issuer
In the following screenshot we can see that the certificate request is "Waiting to complete".
We can also see that we have two ingress controllers, and the one for the challenge seems to have no IP, while the domain is pointing to ingress-prodibiweb.
If I put the domain in front of the .well-known path, I get a 404 Not Found error.
So my guess is that cert-manager is not configured properly to use ingress-prodibiweb (which points to the ASP.NET Core web app), or something like that. Any idea what I can try to get it working?
Your ingress IP is private, so there is no way for Let's Encrypt to reach it for the http01 challenge.
You also need to make sure you are using the NGINX ingress class in the solver, which you are:
- http01:
    ingress:
      class: nginx
Make sure your domain points to the right IP (the ingress IP, which is also the load balancer IP).
Right now it resolves to 20.50.42.93:
https://mxtoolbox.com/SuperTool.aspx?action=a%3aaks.prodibiv2.com&run=toolpage
The dns01 solver is also an option for requesting the certificate. You can give it a try if you have enough permissions.
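For reference, a dns01 solver on AKS would typically delegate to Azure DNS. A minimal sketch of the solvers section, where every ID and the zone name are placeholders you must replace:

solvers:
  - dns01:
      azureDNS:
        clientID: <service-principal-app-id>      # assumption: a service principal with rights on the DNS zone
        clientSecretSecretRef:
          name: azuredns-config                   # assumption: a Secret holding the SP password
          key: client-secret
        subscriptionID: <azure-subscription-id>
        tenantID: <azure-tenant-id>
        resourceGroupName: <dns-zone-resource-group>
        hostedZoneName: prodibiv2.com             # assumption: your DNS zone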

HTTPS Ingress with Istio and SDS not working (returns 404) when I configure multiple Gateways

When I configure multiple (gateway, virtual service) pairs in a namespace, each pointing to basic HTTP services, only one service becomes accessible. Calls to the other (typically the second one configured) return 404. If the first gateway is deleted, the second service then becomes accessible.
I raised a GitHub issue a few weeks ago (https://github.com/istio/istio/issues/20661) that contains all my configuration, but there has been no response to date. Does anyone know what I'm doing wrong (if anything)?
Based on that GitHub issue:
The gateway port names have to be unique if they are sharing the same port. That's the only way we differentiate different RDS blocks. We went through this motion earlier as well. I wouldn't rock this boat unless absolutely necessary.
More about the issue here.
I checked the Istio documentation, and in fact, if you configure multiple gateways, the port name of the first one is https, but the second is https-bookinfo:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
      hosts:
        - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
    - port:
        number: 443
        name: https-bookinfo
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
      hosts:
        - "bookinfo.com"
EDIT
That's weird, but I have another idea.
There is a GitHub pull request which has the following line in pilot:
routeName := gatewayRDSRouteName(s, config.Namespace)
This change adds namespace scoping to Gateway port names by appending a namespace suffix to the HTTPS RDS routes. Port names still have to be unique within the namespace boundaries, but this change makes adding more specific scoping rather trivial.
Could you try creating 2 namespaces like in the example below?
EXAMPLE
apiVersion: v1
kind: Namespace
metadata:
  name: httpbin
  labels:
    name: httpbin
    istio-injection: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: nodejs
  labels:
    name: nodejs
    istio-injection: enabled
Then deploy everything (deployment, service, virtual service, gateway) in the proper namespace, and let me know if that works.
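For completeness, a matching VirtualService in the httpbin namespace could look roughly like this (the service name and port are assumptions taken from the Istio httpbin sample):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: httpbin
spec:
  hosts:
    - "httpbin.example.com"
  gateways:
    - httpbin-gateway   # the Gateway deployed in this same namespace
  http:
    - route:
        - destination:
            host: httpbin   # the httpbin Service in this namespace
            port:
              number: 8000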
Could you try changing the hosts from "*" to specific names? That's the only other thing that comes to mind, besides trying different serverCertificate and privateKey paths, but from the comments I assume you have already tried that.
Let me know if that helps.

kubernetes + ingress controller + Let's Encrypt + blocked mixed content

Thanks for taking the time to read this.
I am testing a Kubernetes cluster on DigitalOcean.
I have installed an ingress controller with cert-manager and Let's Encrypt (I followed this guide: https://cert-manager.io/docs/tutorials/acme/ingress/), and when I launch some deployments I have problems with the files that are not at the root ("Blocked loading mixed active content").
To give a more concrete example, I'm trying to deploy the application BookStack. If I do not activate TLS, I see everything correctly. On the other hand, if I activate TLS, everything renders without CSS, and in the console I see that there are files that have been blocked by the browser.
On the other hand, if I do a port-forward, I see it correctly (http://localhost:8080/) -> note http and not https.
I have done the same test with a WordPress site, with the same problem: the main page renders without styles. In WordPress's case there is a plugin that solves the problem once you get into the backend (browsing the page without CSS is a torture) and install it (this is the plugin: https://es.wordpress.org/plugins/ssl-insecure-content-fixer/). In the plugin I have to check "HTTP_X_FORWARDED_PROTO" to make it work.
But I'm realizing that this is a recurring problem, and I think there are concepts that are not clear to me; I do not know very well what I have to do.
Here is an example of the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookstack
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # cert-manager.io/issuer: "letsencrypt-staging"
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - k1.athosnetwork.es
      secretName: tls-bookstack
  rules:
    - host: k1.athosnetwork.es
      http:
        paths:
          - path: /
            backend:
              serviceName: bookstack
              servicePort: 80
Thanks very much for your time
I have found the solution; I am writing it down for anyone else in my situation.
The problem was an environment variable that I had not set in my deployment: APP_URL.
The BookStack Docker Hub repository talks about it:
-e APP_URL=http://your.site.here.xyz for specifying the url your application will be accessed on (required for correct operation of reverse proxy)
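In deployment terms, that means setting the variable to the external HTTPS URL in the container spec. A minimal sketch (the image name is an assumption; use whichever BookStack image you deploy):

containers:
  - name: bookstack
    image: solidnerd/bookstack   # assumed image; any BookStack image reads APP_URL the same way
    env:
      - name: APP_URL
        value: "https://k1.athosnetwork.es"   # the external HTTPS URL served by the Ingress above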

Deploy the same deployment several times in Kubernetes

I intend to deploy my online service, which depends on Redis servers, in Kubernetes. So far I have:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "redis"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
              protocol: TCP
I can also expose redis as a service with:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      protocol: TCP
  selector:
    app: redis
With this, I can run a single pod with a Redis server and expose it.
However, my application needs to contact several Redis servers (this can be configured, but ideally should not change live). It does care which Redis server it talks to, so I can't just use replicas behind a single service: the service abstraction would not let me know which instance I am talking to. I understand that my application's need to know this hinders scalability, and I am happy to lose some flexibility as a result.
I thought of deploying the same deployment several times and declaring a service for each, but I could not find anything to do that in the Kubernetes documentation. I could of course copy-paste my deployment and service YAML files and add a suffix to each, but this seems silly and way too manual (and frankly too complex).
Is there anything in Kubernetes to help me achieve my goal?
You should look into pet sets (renamed StatefulSets in later Kubernetes versions). A pet set gets a unique, but determinable, name per instance, like redis-0, redis-1, redis-2.
They are also treated as pets, as opposed to deployment pods, which are treated as cattle.
Be advised that pet sets are harder to upgrade, delete and handle in general, but that's the price of the reliability and determinability you get.
Your deployment as a pet set:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
              protocol: TCP
Also consider using volumes to make it easier to access the data in the pods.
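One detail worth noting: the serviceName: "redis" field above refers to a governing headless Service, which gives each pet a stable DNS name such as redis-0.redis.<namespace>.svc.cluster.local. A minimal sketch, adapted from the Service earlier in the question:

apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: None   # headless: DNS resolves to the individual pod addresses
  ports:
    - port: 6379
      protocol: TCP
  selector:
    app: redis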