No access to k8s pod by internal IP from envtest

I wrote an integration test for my Kubernetes operator using envtest and a local Minikube cluster.
The controller I'm testing makes an HTTP request to one of the pods by IP address.
I get the address like this:
endpoints := &corev1.Endpoints{}
// The original ignores the returned error; it should be checked in real code.
if err := client.Get(context.TODO(), types.NamespacedName{Namespace: "ns", Name: "cluster"}, endpoints); err != nil {
    // handle error
}
url = fmt.Sprintf("%s:%s", endpoints.Subsets[0].Addresses[0].IP, "8081")
This code works if I deploy the operator to the Kubernetes cluster,
but when running the test I get this error:
Post "http://172.17.0.3:8081": dial tcp 172.17.0.3:8081: connect: no route to host
Is this because the HTTP call was previously made from the operator's pod, but is now made from the envtest process outside the cluster?
How do I write tests for controllers correctly?

My workaround:
I set up port forwarding to the target pod, following this example.
To make the code talk to localhost, I monkeypatch it a bit.
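For reference, a minimal sketch of the port-forwarding step, assuming the target pod is named cluster-0 in namespace ns and listens on 8081 (both names are placeholders):
# Forward local port 8081 to the target pod for the duration of the test;
# the monkeypatched code then calls http://localhost:8081 instead of the pod IP.
kubectl -n ns port-forward pod/cluster-0 8081:8081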

Related

Cannot access the application via node IP and node port

I have to deploy an application via Helm by supplying a VM IP address and node port. It's a bare-metal Kubernetes cluster with an ingress controller installed as NodePort (this value is supplied in the helm command). The problem is that I receive a 404 Not Found error if I access the application as:
curl http://{NODE_IP}:{nodeport}/path
There is no firewall, and I have an "allow all ingress traffic" policy, but I'm not sure what is wrong. I have tried everything I can think of but cannot find the root cause.
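One thing worth checking (an assumption on my part, not stated above): NGINX-style ingress controllers route on the Host header and answer 404 from the default backend when no ingress rule matches, and a curl against the bare node IP sends no matching host. Sending the host your ingress rule expects can confirm this; app.example.com is a placeholder:
# Same request, but with the Host header the ingress rule matches on:
curl -H "Host: app.example.com" http://{NODE_IP}:{nodeport}/path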

In Cloud Foundry, how do I create a service to run my Apache web server?

I'm on Ubuntu 18, running the following version of Cloud Foundry ...
$ cf -v
cf version 7.4.0+e55633fed.2021-11-15
I would like to set up several containers, each running off a Docker image. The first is an Apache web server. I have the following Dockerfile:
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./my-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY ./directory /usr/local/apache2/htdocs/directory
How do I set this up in Cloud Foundry? I tried creating a service but got this error:
$ cf cups apache-service -p "localhost, 80"
FAILED
No API endpoint set. Use 'cf login' or 'cf api' to target an endpoint.
When I tried to set the API endpoint I got:
$ cf api "http://my_ip_address"
Setting API endpoint to http://my_ip_address...
Request error: Get "http://my_ip_address": dial tcp my_ip_address:80: connect: connection refused
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
I'm thinking I'm missing something rather substantial but don't know the right questions to ask.
The error message you are getting (dial tcp my_ip_address:80: connect: connection refused) means the address given to cf api is not responding.
Ensure that your Cloud Foundry API endpoint is still active and that no firewall is preventing you from accessing the API (the port is open, the process is running, and the firewall allows traffic from your IP, if applicable).
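As a rough sketch of a working flow (api.example.com and the image name are placeholders, and this assumes the platform's diego_docker feature flag is enabled): a Docker image is pushed as an app with cf push, not registered as a user-provided service:
# Target a reachable Cloud Foundry API and authenticate first:
cf api https://api.example.com
cf login
# Push the httpd image as an app instead of using cf cups:
cf push my-apache --docker-image myregistry/my-httpd:2.4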

cert-manager on Kubernetes without hairpin NAT

I have a bare-metal Kubernetes deployment running on VMware vCloud Director, and I am struggling to set up cert-manager to manage SSL certificates. As described in the following issue, the "Challenge" always fails its self-check against the cluster's domain name/public IP, because that address is not reachable from inside the cluster (vCloud Director doesn't support hairpin NAT, i.e. routing traffic from an internal server back to an internal server via the edge gateway's public IP).
https://github.com/jetstack/cert-manager/issues/863
There is also a feature request to disable the http01 and dns01 self-checks, but it is not implemented yet.
https://github.com/jetstack/cert-manager/issues/1292
My question is: is there a workaround to fix this self-check request? I am also using a NodePort to expose the nginx-ingress service externally. Therefore, I have to route www.domain.com:80 requests from the cert-manager pod to the ingress-nginx pod's port 31080 without leaving the Kubernetes cluster.
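One common workaround here (not from the linked issues; the service name and namespace below are assumptions) is a CoreDNS rewrite rule, so the public domain resolves to the in-cluster ingress service and the self-check never leaves the cluster:
# Add a rewrite rule inside the Corefile's ".:53 { ... }" block:
kubectl -n kube-system edit configmap coredns
#   rewrite name www.domain.com ingress-nginx-controller.ingress-nginx.svc.cluster.local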

How to curl an OpenStack instance from another instance

I have devstack running on my machine and created an Alpine Linux instance which runs a Rails API (IP 10.0.0.6) on port 3000 (I also tried 80 and 8080). Then I created a simple CirrOS client instance (IP 10.0.0.4) to access the /test endpoint of the API. I find that I can run:
ping 10.0.0.6
from the CirrOS instance and receive responses. However, when I try:
curl -XGET http://10.0.0.6:3000/test
I receive the error:
curl: (7) couldn't connect to host
The two instances belong to the same private network, and the security group policy allows all ingress and egress traffic for any protocol.
The /test endpoint works locally on the API instance.
I also tested that I'm able to make an ssh connection from one instance to another.
What configuration could I be missing? Thanks!
Found the solution.
It wasn't a misconfiguration on the OpenStack side.
I needed to run Rails with the -b 0.0.0.0 flag so it binds to all interfaces; by default, Rails only serves on localhost:
rails s -b 0.0.0.0
You could also try telnetting to the particular port the server is running on, to determine whether it's a networking issue or some other configuration issue.
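For example, from the client instance (using the addresses in the question):
# An immediate "connection refused" here points at the server's binding,
# not the network path, since ping and ssh already work.
telnet 10.0.0.6 3000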

Cannot link an HTTP Load Balancer to a backend (502 Bad Gateway)

On the backend I have a Kubernetes node serving on port 32656 (a Kubernetes Service of type NodePort). If I create a firewall rule allowing traffic to <node_ip>:32656, I can open the backend in the browser at http://<node_ip>:32656.
What I am trying to achieve now is to create an HTTP Load Balancer and link it to the above backend. I use the following script to create the required infrastructure:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer accessed through the public IP in the Frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left at its default values (path / and port 80), and the backend service responds quickly with status 200.
I have also created a firewall rule that accepts any source and all TCP ports, with no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing from completing the linking between the frontend and the backend?
I assume you actually have instances in the instance group, and that the firewall rule is not restricted to a source range. Can you check your logs for a Google health check? (The User-Agent will have "google" in it.)
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/; just make sure you do these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type NodePort (both of which you already seem to have), and a health check on that service at "/" (which you don't have; this requirement is relaxed in 1.3, but 1.3 is not out yet).
Also note that you can't put the same instance into two load-balanced instance groups, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the instance group and free up enough quota so the Ingress controller can do its thing).
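For example (GoogleHC is the User-Agent prefix GCE health checkers send; the log location is an assumption about your setup):
# On a node, grep the web server's access log for health-check probes:
grep -i "GoogleHC" /var/log/nginx/access.log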
A few of the things already mentioned can be wrong:
firewall rules need to be open to all hosts, or they need to have the same network tag as the machines in the instance group
by default, the node should return 200 at /; configuring readiness and liveness probes to do otherwise did not work for me
It seems you are doing by hand things that are all automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that do the firewall setup and port forwarding for you, and may also show you what you are missing.
I noticed myself, when running an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (plus a /healthz endpoint I added). So basically that container now exposes a web server on port 8080 that returns 200, and the rest of the config wires that up to port 80.
When it comes to firewall rules, make sure they apply to all machines or that the network tag matches, or they won't work.
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
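Before rewiring anything, it can help to verify by hand that the backend answers the way the health check expects (node IP and port as in the question):
# The check is satisfied only by an HTTP 200 on / at the health-check port:
curl -s -o /dev/null -w "%{http_code}\n" http://<node_ip>:32656/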
Could you make your Service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set all of this up automatically? This assumes you have the flag set for Google Cloud.
After you deploy, describe the service by name and it should give you the endpoint that has been assigned.
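A minimal sketch of that, assuming a Deployment named my-app serving on 8080 (both placeholders):
# Expose the deployment through a cloud load balancer, then read back the endpoint:
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
kubectl describe service my-app   # look for the "LoadBalancer Ingress" address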