I have an Apache Camel application deployed on Kubernetes. My application is exposed within the Kubernetes cluster and is accessible at http://192.168.99.100:31750. How do I make it accessible from outside the cluster?
I suggest you do two things:
run an NGINX Ingress Controller in your minikube and expose it with a NodePort service, meaning it will be available in roughly the same way your service is right now (high port range)
run HAProxy on the host that runs minikube and forward ports 80/443 to the corresponding high ports on minikube (e.g. 80->32080, 443->32443)
That way you can expose your ingress controller on the standard ports and expose your services with regular Kubernetes Ingress definitions on those ports; see the sketch below.
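As a rough illustration of the first step, the NodePort Service in front of the ingress controller could look something like this (the namespace, pod labels, and fixed node ports are assumptions chosen to line up with the HAProxy forwarding above):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx                   # assumed controller namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx    # assumed controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32080                          # HAProxy on the host forwards 80 -> 32080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 32443                          # HAProxy on the host forwards 443 -> 32443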
Is it possible to run spring boot containerized apps on port 8443 going through a 443 ALB listener and deployed on ECS Fargate in AWS? The 443 listener would have an issued cert, not a self-signed cert. I would use an NLB but I need to set route paths, so that's a no go. Would using nginx as a proxy be used in a situation like this?
Is it possible to run spring boot containerized apps on port 8443
going through a 443 ALB listener and deployed on ECS Fargate in AWS?
Yes, it is absolutely possible; there should be no issue with this at all. What you are describing is actually just a very standard and basic ECS/Fargate setup.
Would using nginx as a proxy be used in a situation like this?
Only if you want to. You don't need Nginx just to make this work.
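If it helps to see the moving parts, a minimal CloudFormation sketch of the ALB side might look like the following (resource names, references, and the health check path are placeholders; the ECS service then registers its Fargate tasks with the target group through its own load balancer mapping on container port 8443):

WebTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    TargetType: ip                      # Fargate tasks are registered by IP
    Protocol: HTTPS                     # ALB speaks HTTPS to the container on 8443
    Port: 8443
    VpcId: !Ref VpcId                   # placeholder parameter
    HealthCheckPath: /actuator/health   # assumed Spring Boot health endpoint

HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref WebAlb        # placeholder ALB resource
    Port: 443
    Protocol: HTTPS
    Certificates:
    - CertificateArn: !Ref IssuedCertificateArn   # the issued ACM certificate
    DefaultActions:
    - Type: forward
      TargetGroupArn: !Ref WebTargetGroup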
I have an EKS cluster running in one VPC and some EC2 instances in a legacy VPC; the two VPCs are peered.
An app on the EKS cluster needs to be reachable from inside the cluster and also from the EC2 instances in the legacy VPC.
Do I need to create 2 services for the app - one kind: ClusterIP for in-cluster communication and one kind: LoadBalancer for cross-VPC communication:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
OR - can I create only one service (kind: NodePort / ClusterIP / internal LB) for both in-cluster and cross-VPC communication?
What is the preferred way?
Thanks,
Do I need to create 2 services for the app...create only one service kind: nodeport/clusterIP/LB internal for both in-cluster and external VPC communication?
You need only one service in this case. A Service of type NodePort gives you a cluster IP (for connections within the k8s cluster network) plus a port reachable on each EC2 worker node. A Service of type LoadBalancer gives you a cluster IP too, plus the LB endpoint. As worker nodes come and go, a LoadBalancer gives you more flexibility because you only ever deal with a known endpoint.
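For example, a single Service along the lines of the annotations quoted in the question could look like this (the name, ports, and selector are placeholders; the internal scheme annotation is an assumption that keeps the NLB private so it is only reachable over the VPC peering):

apiVersion: v1
kind: Service
metadata:
  name: my-app                    # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal    # keep the NLB private to the peered VPCs
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # placeholder selector
  ports:
  - port: 80
    targetPort: 8080              # placeholder container port

In-cluster clients keep using the Service's cluster IP / DNS name, while the EC2 instances in the legacy VPC use the NLB's DNS name.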
I've installed haproxy ingress in the GKE cluster since the default ingress (the integration with the global load balancer) did not satisfy my needs.
So port 80 is the target HTTP port for the load balancer backend on all cluster hosts.
I've simply configured a global HTTPS load balancer to terminate SSL and balance traffic between the k8s nodes auto-scaling group.
Everything seems correctly configured, but I can see backend health checks fail.
I've tried two methods: HTTP on /healthz and TCP on port 80.
Both checks fail, and the service is unavailable 99% of the time.
Can anybody help me with this situation?
The problem was the firewall rules.
Health checks were not allowed to access GCE nodes associated with the GKE cluster.
I've added a new rule to the VPC to allow the 35.191.0.0/16 and 130.211.0.0/22 source IP ranges on TCP port 10253, the haproxy ingress health port.
After adding the rule, health checks passed, and the load balancer started to work.
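For reference, the equivalent firewall rule could be expressed roughly like this in a Deployment Manager config (the rule name and network are assumptions):

resources:
- name: allow-gke-haproxy-health-checks   # placeholder rule name
  type: compute.v1.firewall
  properties:
    network: global/networks/default      # assumed VPC network
    sourceRanges:
    - 35.191.0.0/16                       # Google Cloud health check ranges
    - 130.211.0.0/22
    allowed:
    - IPProtocol: TCP
      ports:
      - "10253"                           # haproxy ingress health port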
I have a bare metal Kubernetes deployment running on VMware vCloud Director, and I am struggling to set up cert-manager to manage SSL certificates. As described in the following issue, the "Challenge" always fails its self-check against the cluster's domain name / public IP, because that address is not reachable from inside the cluster (VMware vCloud Director does not support hairpin NAT, i.e. routing traffic from an internal server back to an internal server through the edge gateway's public IP).
https://github.com/jetstack/cert-manager/issues/863
There is also a feature request to disable the http01 and dns01 self-checks, but it has not been implemented yet.
https://github.com/jetstack/cert-manager/issues/1292
My question is: is there a workaround to make this self-check request succeed? I am also using a NodePort to expose the nginx-ingress service externally. Therefore, I have to route requests for www.domain.com:80 from the cert-manager pod to the ingress-nginx pod's port 31080 without leaving the Kubernetes cluster.
Best regards
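One workaround that is sometimes used for exactly this kind of in-cluster routing is a CoreDNS rewrite that resolves the public domain to the in-cluster ingress Service, sketched below; the domain, Service name, and namespace are assumptions, and the rest of the Corefile must be kept as it currently is in your cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        # ... keep the existing plugins of your current Corefile here ...
        # Resolve the public domain to the ingress-nginx Service so the
        # cert-manager self-check never leaves the cluster; the Service
        # answers on port 80 regardless of the NodePort (31080):
        rewrite name www.domain.com ingress-nginx.ingress-nginx.svc.cluster.local
        forward . /etc/resolv.conf
    }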
I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well - through HTTP. But I need HTTPS. I have several services and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in the web.yaml that I created. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
According to GCP, however, my IP is static. I cannot find the hashed LB name anywhere, and the IP should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the service resource manifest, if you set type to LoadBalancer, Kubernetes on GKE automatically sets up network load balancing (an L4 load balancer) using GCE. You will have to terminate connections in your pod using your own custom server or something like nginx/apache.
If your goal is to set up a (HTTP/HTTPS) L7 load balancer (which looks to be the case), it will be simpler and easier to use the Ingress resource in Kubernetes (starting with v1.1). GKE automatically sets up a GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.
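To make that concrete, a minimal sketch of the Ingress plus TLS Secret for the web service from the question could look like this (the secret name and certificate data are placeholders, and on GKE the backing web Service would typically be changed to type: NodePort so the GCE L7 load balancer can reach it):

apiVersion: v1
kind: Secret
metadata:
  name: web-tls                 # placeholder secret name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - secretName: web-tls         # GKE provisions this certificate on the GCE L7 load balancer
  backend:
    serviceName: web
    servicePort: 443            # the port exposed by the web Service above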