I have two APIs on the same cluster, and when I run kubectl get services I get the following:
dh-service ClusterIP 10.233.48.45 <none> 15012/TCP 70d
api-service ClusterIP 10.233.54.208 <none> 15012/TCP
Now I want to make an API call from one API to the other. When I do it using the Ingress address for the two images, I get 404 Not Found.
What address should I use for my POST calls? Will the cluster IP work?
I want to make an API call from one API to the other
If they are in the same namespace and you use http, you can use:
http://dh-service
http://api-service
to access them.
If, for example, the api-service is located in a different namespace, e.g. blue-namespace, you can access it with:
http://api-service.blue-namespace
See more on DNS for Services and Pods
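For example, from a Pod in the same namespace, a POST call from one API to the other could look like this (the path and payload are placeholders; the port is the 15012 shown in your service listing):
# kube-dns resolves the Service name inside the cluster
curl -X POST http://api-service:15012/some/endpoint \
  -H "Content-Type: application/json" \
  -d '{"example": true}'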
So I have an API that's the gateway for two other APIs.
Using Docker in WSL 2 (Ubuntu), I start my gateway API with:
docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL -e B_API_URL=$B_API_URL registry:$(somePort)/gateway
I have two environment variables that hold the URLs of the two APIs. I just don't know what to put for them in the Kubernetes config:
env:
- name: A_API_URL
value: <need help>
- name: B_API_URL
value: <need help>
I get 500 or 502 errors when accessing them over the network.
I tried specifying the value of the env vars as:
their respective service names
the complete URI: http://$(addr):$(port)
the relative path: /something/anotherSomething
Each API is deployed with a Deployment controller and a Service.
I'm at a loss; any help is appreciated.
You just have to hardwire them. Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that can inject values the way Bash does in your docker run example, but that's generally not a good idea, since anyone other than you running the same command could see different results. The values should look like http://servicename.namespacename.svc.cluster.local:port/whatever. So if the service is named foo in the namespace default, with port 8000 and path /api, the value would be http://foo.default.svc.cluster.local:8000/api.
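A minimal sketch of what that could look like in the gateway's Deployment spec, assuming the two backend Services are named a-api and b-api in the default namespace and listen on port 8000 (names and port are hypothetical):
env:
  - name: A_API_URL
    value: http://a-api.default.svc.cluster.local:8000   # hypothetical Service name and port
  - name: B_API_URL
    value: http://b-api.default.svc.cluster.local:8000   # hypothetical Service name and port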
While load testing, after some successful responses from the API, JMeter records errors:
'Premature end of Content-Length delimited message body'.
From the logs inside the code, the response seems to complete normally.
The app is deployed on AKS behind nginx/1.15.10 ingress controllers. The app consists of 4 separate APIs (one master calling the 3 others). The APIs are written in Flask with Connexion and run in a WSGIContainer on a Tornado HTTPServer.
Another confusing factor is that the app is deployed as two instances on the same AKS cluster; one deployment does not return errors and the other does.
What could be causing the error?
I would suggest limiting your testing scope:
1) Target the application directly (bypassing the k8s Service and the ingress controller). Ensure you target each app running on the two different nodes. Do you still see the issue?
2) Target the app's Service directly (bypassing the ingress controller). Ensure you target each app running on the two different nodes. Do you still see the issue?
3) Target the app through its Ingress. Ensure you target each app running on the two different nodes. Do you still see the issue?
Based on those results, we should be able to better pinpoint the source of your issue.
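As a rough sketch of how you might run each step (pod, Service, and host names below are hypothetical placeholders):
# 1) Hit a pod directly, bypassing the Service and the ingress controller
kubectl port-forward pod/master-api-abc123 8080:5000
curl -v http://localhost:8080/health
# 2) Hit the Service, bypassing the ingress controller
kubectl port-forward svc/master-api 8080:80
curl -v http://localhost:8080/health
# 3) Hit the app through its Ingress
curl -v http://my-app.example.com/health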
It is hard to check whether a request reached the service or not, because in the logs we get a RouterName like namespace-ingressroute_name-some_random_string#kubernetescrd and a ServiceName like namespace-ingressroute_name-some_random_string instead of the pod's service name.
Is there any way I can print the pod's service name instead of that RouterName and ServiceName in the logs?
I found a solution for that: we can use a TraefikService, which was introduced in Traefik version 2.0.
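A minimal sketch of what that can look like, assuming an underlying Kubernetes Service named my-api on port 80 (all names are hypothetical); the explicitly named TraefikService is what then shows up in the access logs instead of the auto-generated name:
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: my-api-traefik            # this name appears in the logs
  namespace: default
spec:
  weighted:
    services:
      - name: my-api              # underlying Kubernetes Service (hypothetical)
        port: 80
        weight: 1
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-api-route
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/api`)
      kind: Rule
      services:
        - name: my-api-traefik
          kind: TraefikService    # reference the TraefikService instead of the Service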
Our team decided to try using OpenShift Origin server to deploy services.
We have a separate VM with the OpenShift Origin server installed and working fine. I was able to deploy our local Docker images, and those services are running fine as well: Pods are up and running, they get their own IPs, and I can reach the services' endpoints from the VM.
The issue is that I can't get the services exposed outside the machine. I read about routers, which are supposed to be the right way of exposing services, but I just can't get it to work. Here are some details.
Let's say my VM is 10.48.1.1. The Pod with the Docker container for one of my services is running on IP 172.30.67.15:
~$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-svc 172.30.67.15 <none> 8182/TCP 4h
The service is a simple Spring Boot app with a REST endpoint exposed at port 8182.
When I call it from the VM hosting it, it works just fine:
$ curl -H "Content-Type: application/json" http://172.30.67.15:8182/home
{"valid":true}
Now I wanted to expose it outside, so I created a router:
oc adm router my-svc --ports='8182'
I followed the steps from the OpenShift dev docs, both from the CLI and the Console UI. The router gets created fine, but when I want to check its status, I get this:
$ oc status
In project sample on server https://10.48.3.161:8443
...
Errors:
* route/my-svc is routing traffic to svc/my-svc, but either the administrator has not installed a router or the router is not selecting this route.
I couldn't find anything about this error that could help me solve the issue. Has anyone had a similar issue? Is there any other (better/proper?) way of exposing a service endpoint? I am new to OpenShift, so any suggestions would be appreciated.
If anyone is interested, I finally found the "solution".
The issue was that there was no "router" service created; I didn't know it had to be created.
Step by step, in order to create this service I followed the instructions from the OpenShift docs page, which were pretty easy, but I couldn't log in using the admin account.
I used the default admin account:
$ oc login -u system:admin
But instead of using the available certificate, it kept asking me for a password, which it shouldn't. What was wrong? My env variables had been reset, and I had to set them again:
$ export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
$ export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
$ sudo chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
This was one of the first steps described in the OpenShift docs. After that the certificate is picked up correctly and login works as expected. As an admin I created the router service (1st link) and the route started working; no more errors.
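For reference, a rough sketch of what creating the router and exposing the service can look like on OpenShift Origin (exact flags may vary between Origin versions; my-svc is the service from the question):
# run as cluster admin
oc adm policy add-scc-to-user hostnetwork -z router
oc adm router router --replicas=1 --service-account=router
# expose the existing service through the router and check the route
oc expose svc/my-svc
oc get route my-svc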
So in the end it turned out to be pretty simple and silly, but given that I had no experience with OpenShift it was hard for me to figure out what was going on. Hope it helps if someone runs into the same issue.
I'm trying to load balance a cluster that is exposing port 7654. I've followed the instructions here. When I follow them exactly (creating the nginx cluster), it works fine, but when I try to apply them to my own containers I can't get them to pass the health check. If I use kubectl to expose 7654 with a LoadBalancer instead of a NodePort, I'm able to connect, so it seems that the container is working fine. Does anyone have any advice for creating a load balancer?
According to https://cloud.google.com/compute/docs/load-balancing/health-checks#overview a successful health check "must return a valid HTTP response with code 200 and close the connection normally within the timeoutSec period". It's possible that your empty response wasn't closing the HTTP connection and adding HTML content caused your backend to close the connection.
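As a rough sketch (names are assumptions; port 7654 is from the question), the NodePort Service and Ingress behind the GKE HTTP load balancer could look like the following, keeping in mind that the GCE health check hits GET / on the serving port and expects a 200 response that closes normally:
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # hypothetical name
spec:
  type: NodePort                  # the HTTP load balancer requires NodePort backends
  selector:
    app: my-app
  ports:
    - port: 7654
      targetPort: 7654
---
apiVersion: extensions/v1beta1    # Ingress API group in use at the time of this question
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  backend:
    serviceName: my-app           # default backend; must answer GET / with HTTP 200
    servicePort: 7654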