I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, Apache needs to be configured to bind to a port > 1024 instead of the default 80. As far as I can tell, this means changing Listen 80 in httpd.conf to Listen {some port > 1024}.
When I run the Docker image I've built normally (i.e. on the default port 80), I have the following port settings:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 80
service
spec.ports[0].targetPort: 80
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings, the service becomes accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000) and add the securityContext section to the deployment, I change the various manifests accordingly:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 8000
service
spec.ports[0].targetPort: 8000
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if pod is Running
kubectl get pods
kubectl logs <pod_name>
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf.
Check with the service name
kubectl exec -it <ingress_pod_name> -- bash
$ curl http://<service_name>:8080
You can check the ingress logs too.
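If curl inside the pod fails, one quick way to verify that the Listen change actually made it into the image (the path assumes the official httpd:2.4 layout) is:
kubectl exec -it <pod_name> -- grep -n '^Listen' /usr/local/apache2/conf/httpd.conf
This should print Listen 8000; if it still shows Listen 80, the image was built from an unmodified httpd.conf.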
In order to get this container to run properly as non-root apache
needs to be configured to bind to a port > 1024, as opposed to the
default 80
You got it, that's the hard requirement for running the Apache container as non-root, so this change needs to be made at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. The only thing left in your case is to build a custom httpd image that listens on a port > 1024. The same approach applies to NGINX Docker containers.
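As a minimal sketch of such a custom image (the config path assumes the official httpd:2.4 layout, and 8000 matches the port used in the question):
FROM httpd:2.4
# rewrite the Listen directive so Apache binds to an unprivileged port
RUN sed -ri 's/^Listen 80$/Listen 8000/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8000
With that image, the containerPort: 8000 and targetPort: 8000 settings from the question line up with the port Apache actually listens on.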
One key piece of information about the containerPort field in the Pod spec, which you are adjusting manually, and which is not so apparent: it is there primarily for informational purposes and does not cause a port to be opened at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being
exposed. Any port which is listening on the default "0.0.0.0" address
inside a container will be accessible from the network. Cannot be updated.
I hope this helps you move on.
Related
I want to serve my FastAPI application using Gunicorn and host it on a Kubernetes Service. My Kubernetes Service runs a liveness probe (health check) as an HTTP call to a specified endpoint.
I also want the application to be served over HTTPS, because my Kubernetes Service exposes it for use by external components.
Now my HTTP endpoint can't rely on redirection, as the liveness probe expects a 200 response and a redirect would break that.
I want to host my HTTPS endpoint on a pre-specified port, as the organization has best practices in place and the endpoint and port are already specified.
Some similar problems on StackOverflow:
Running Gunicorn on both http and https
uvicorn [fastapi] python run both HTTP and HTTPS
But both of these are okay with redirection, and we are not. We also can't use an NGINX server, because that support is deprecated in my organization.
If you are trying this out in a Docker environment, the following will get it done:
Dockerfile:
ENTRYPOINT ./start.sh
Shell Script start.sh:
gunicorn -k uvicorn.workers.UvicornWorker -w 3 -b 0.0.0.0:30000 -t 360 --reload app:app & gunicorn -k uvicorn.workers.UvicornWorker -w 3 --ssl-certfile certfile.txt --ssl-keyfile keyfile.txt --ca-certs ca_certs.txt -b 0.0.0.0:8443 -t 360 --reload app:app
The & runs the first server in the background and then starts the second one; you can configure one to use HTTP and the other to use HTTPS.
We are using Gunicorn for a FastAPI application, so we use Uvicorn workers; change that accordingly for your use case.
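Since the question also involves a Kubernetes liveness probe, this is roughly how the two ports from start.sh could be wired up in the Deployment's container spec; the image name and health-check path are illustrative assumptions, only the port numbers come from the script above:
containers:
  - name: fastapi
    image: myorg/fastapi-app:latest  # hypothetical image name
    ports:
      - containerPort: 30000  # plain HTTP, used by the liveness probe
      - containerPort: 8443   # HTTPS, exposed to external components
    livenessProbe:
      httpGet:
        path: /health         # hypothetical health endpoint
        port: 30000
      initialDelaySeconds: 10
      periodSeconds: 15
The probe talks to the HTTP listener directly, so no redirect is involved, while the Service only needs to expose 8443 externally.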
For people landing here looking for fastapi/uvicorn help:
uvicorn api:app \
  --ssl-certfile=yourcert.pem \
  --ssl-keyfile=yourkey.pem \
  --host 0.0.0.0 --port 443 --workers 1 \
  & \
uvicorn api:app \
  --host 0.0.0.0 --port 80 --workers 1
You should know that the background server will not be stopped by CTRL+C. It's best to use something like tmux and run the :80 and :443 servers in different windows, or use a small wrapper script as sketched below.
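If you'd rather not reach for tmux, a wrapper script can forward CTRL+C to the background server as well; this is just a sketch of the usual trap/wait pattern, reusing the commands above:
#!/bin/sh
# start the HTTPS server in the background and remember its PID
uvicorn api:app --ssl-certfile=yourcert.pem --ssl-keyfile=yourkey.pem \
  --host 0.0.0.0 --port 443 --workers 1 &
HTTPS_PID=$!
# stop the background server when the script is interrupted or terminated
trap 'kill "$HTTPS_PID"' INT TERM
# run the HTTP server in the foreground; CTRL+C stops it and triggers the trap
uvicorn api:app --host 0.0.0.0 --port 80 --workers 1
wait "$HTTPS_PID"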
I want to put my docker image running react into kubernetes and be able to hit the main page. I am able to get the main page just running docker run --rm -p 3000:3000 reactdemo locally. When I try to deploy to my kubernetes (running locally via docker-desktop) I get no response until eventually a timeout.
I tried this same process below with a springboot docker image and I am able to get a simple json response in my browser.
Below is my Dockerfile, deployment yaml (with service inside it), and commands I'm running to try and get my results. Morale is low, any help would be appreciated!
Dockerfile:
# pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
# Command to build the ReactJS application for deployment; might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: reactdemo
          image: reactdemo:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      nodePort: 31000
I then open a port on my local machine to the nodeport for the service:
PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
My assumption is that everything is in place at this point and I should be able to open a browser to hit localhost:31000. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.
Here it is all running:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-854f4d78f6-7dn7c 1/1 Running 0 4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo NodePort 10.111.203.209 <none> 3000:31000/TCP 4s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 1/1 1 1 4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-854f4d78f6 1 1 1 4s
Some extra things to note:
Although I don't have it set up currently, I did have my springboot service in the deployment file. I logged into its pod and ensured the react container was reachable. It was.
I haven't done anything with my firewall settings (but sort of assume I don't have to, since the run with the springboot service worked?).
I see this in Chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
If you are using Kubernetes via minikube on your local system, then it will not work with localhost:3000, because the service runs inside the minikube cluster, which has its own private IP address. So instead of trying localhost:3000, run minikube service <servicename> in your terminal and it will show the URL of your service.
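For example, with the Service from the question:
minikube service demo --url
This prints a URL of the form http://<minikube-ip>:31000 that you can open in the browser instead of localhost.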
Thanks for all the feedback peeps! In trying out the solutions presented I found my error and it was pretty silly. I tried removing the service and trying the different port configs mentioned above. What solved it was using 127.0.0.1:31000 instead of localhost. Not sure why that fixed it but it did!
That being said, a few comments on what I found while looking at the suggestions above.
I found that I couldn't hit the cluster without doing the port forwarding, regardless of whether I had a service defined or not.
containerPort, to my understanding, is there for Kubernetes pod-to-pod communication and doesn't impact application function from a user perspective (could be wrong!).
Good to know on minikube; I'm thinking about trying it out, and if I do I'll know why port 3000 stops working.
Thanks
You don't need a Service to port-forward to a pod; you can just port-forward straight to it:
kubectl port-forward <pod-name> <local-port>:<pod-port>
This creates a "tunnel" between your local machine on a chosen port (<local-port>) and your pod (pod-name), on a chosen pod's port (<pod-port>).
Then you can curl your pod with
curl localhost:<local-port>
If you really want to use a Service, then port-forward to the Service (service/demo in your case) plus the Service port, but it will be translated to the Pod's IP eventually.
Change .spec.ports[0].targetPort in your Service to be the same as .spec.template.spec.containers[0].ports[0].containerPort in your Deployment, so in your case:
...
  - port: 3000
    targetPort: 80
...
Port-forward to the Service, using the Service port:
kubectl port-forward service/demo 8080:3000
Then curl your Service with
curl localhost:8080
This has a side effect if there are more pods behind the same Service: the port-forward will almost always connect to the same pod.
I have devstack running on my machine and created an instance of Alpine Linux which runs a Rails API (IP 10.0.0.6) on port 3000 (I also tried 80 and 8080). Then I created a simple CirrOS client instance (IP 10.0.0.4) to access the /test endpoint of the API. However, I find that I can run:
ping 10.0.0.6
from the CirrOS instance and receive response of packets. However, when I try:
curl -XGET http://10.0.0.6:3000/test
I receive the error:
curl: (7) couldn't connect to host
The two instances belong to the private network and the security group policy allows any Ingress and Egress of any kind of protocol.
The /test endpoint works locally on the API instance.
I also tested that I'm able to make an ssh connection from one instance to another.
What configuration could I be missing? Thanks!
Found the solution.
It wasn't a wrong configuration on the OpenStack side.
I needed to run Rails with the -b 0.0.0.0 flag to allow any IP; by default, Rails only serves on localhost.
rails s -b 0.0.0.0
You could always try telnetting to the particular port the server is running on, to determine whether it's a networking issue or some other configuration issue, for example:
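From the CirrOS instance (IP and port taken from the question):
telnet 10.0.0.6 3000
# or, if a netcat build with -z support is available:
nc -zv 10.0.0.6 3000
If ping works but the TCP connection is refused, the server is almost certainly not listening on that address/port, which matches the -b 0.0.0.0 fix above.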
I am using docker-compose to work across multiple Docker containers; these are mostly individual Django REST Framework applications. I have downloaded all the containers and am able to build the whole application from them.
Each container has a Postgres DB running, and I now want to browse the DB using a UI tool. I know pgAdmin can do the job here, but how can I configure pgAdmin to show the Postgres database from any of these containers?
It should be possible to also expose your database port to your local host.
Normally you connect your application containers internally to the database container. In that case there is no need to declare a ports section for the database in your compose file, but if you add that entry you bind your database to your local host as well.
Once you have exposed the Postgres port on a host port, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to the db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432
      # the port is still available internally within the stack
      - "5432:5432"
Make sure that the port of the Postgres container is mapped to the host system. The default Postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. You can only map a given host port once, so your config file would look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database with the IP of your docker host and the above specified port.
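For example, with the first mapping above (the user and database names are whatever your containers are configured with; postgres is just the image default):
psql -h <docker-host-ip> -p 49000 -U postgres
or point pgAdmin at the same host/port combination.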
If you still encounter problems connecting with a client like pgAdmin, check the following configuration files inside your container.
Is there anything blocking your connection attempt? Is your Docker host behind a firewall?
postgresql.conf under the section connections and authentication:
listen_addresses
port
Check your pg_hba.conf, which controls client authentication.
For debug purposes you can set it to the following:
Don't do the following in production:
host all all all trust
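For the postgresql.conf side, the relevant settings typically look like this; note that the official postgres image already listens on all interfaces, so you usually only need to touch this for custom images:
listen_addresses = '*'
port = 5432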
I have a WebLogic Docker container. The WLS admin port is configured as 7001. When I run the container, I use --hostname=[host's hostname] and expose the 7001 port on a different host port, using -p 8001:7001 for example. The reason I do the port mapping is that I want to run multiple WLS containers on the same host.
I have some applications that I deploy on this WebLogic. These applications use an external SDK (which I don't control) to get the application url using JMX (getURL operation of RuntimeServiceMBean).
This is where it goes wrong. The URL comes out as http://[container's IP]:7001. I would want it to retrieve http://[host's hostname]:8001, i.e. the hostname I used to start the container and the host port that 7001 is mapped to, i.e. 8001.
Is there a way this could be done?
When the container is started, you should start WebLogic only after adjusting the External Listen Address of your AdminServer. You can use WLST Offline for that from within a shell script: pass parameters with docker run -e KEY=VALUE, then read them from inside the WLST script. Modify the AdminServer's External Listen Address, call exit(), and then start the AdminServer.
Here's an example of how to create the extra network channel with the proper External Listen Address.
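A rough WLST Offline sketch of that idea, loosely following the pattern used in Oracle's sample domain-creation scripts; the domain path, channel name, and the EXTERNAL_HOST/EXTERNAL_PORT variables are assumptions you would adapt to your image:
# values passed in via `docker run -e EXTERNAL_HOST=... -e EXTERNAL_PORT=...`
import os
external_host = os.environ['EXTERNAL_HOST']       # e.g. the host's hostname
external_port = int(os.environ['EXTERNAL_PORT'])  # e.g. 8001

readDomain('/u01/oracle/user_projects/domains/base_domain')  # adjust to your domain path
cd('/Servers/AdminServer')
# create a channel whose public address/port reflect the docker port mapping
create('PublicChannel', 'NetworkAccessPoint')
cd('/Servers/AdminServer/NetworkAccessPoint/PublicChannel')
set('ListenPort', 7001)
set('PublicAddress', external_host)
set('PublicPort', external_port)
updateDomain()
closeDomain()
exit()
Run this before starting the AdminServer, so that the URL reported over JMX uses the host's hostname and the mapped port instead of the container-internal address.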