Ocelot ApiGateway unable to reach other services on Kubernetes - asp.net-core

I'm working on a simple project to learn more about microservices.
I have a very simple .NET Core web app with a single GET endpoint.
I added an API gateway web app with Ocelot, and everything seems to be working fine except when I deploy to a local Kubernetes cluster.
These are the YAML files I'm using for the deployment:
ApiGateway.yaml
kind: Pod
apiVersion: v1
metadata:
  name: api-gateway
  labels:
    app: api-gateway
spec:
  containers:
    - name: api-gateway
      image: apigateway:dev
---
kind: Service
apiVersion: v1
metadata:
  name: api-gateway-service
spec:
  selector:
    app: api-gateway
  ports:
    - port: 80
TestService.yaml
kind: Pod
apiVersion: v1
metadata:
  name: testservice
  labels:
    app: testservice
spec:
  containers:
    - name: testservice
      image: testbuild:latest
---
kind: Service
apiVersion: v1
metadata:
  name: testservice-service
spec:
  selector:
    app: testservice
  ports:
    - port: 80
ocelot.json
{
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/endpoint",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "testservice-service",
          "Port": 80
        }
      ],
      "UpstreamPathTemplate": "/test",
      "UpstreamHttpMethod": [ "Get" ]
    }
  ]
}
If I make a request with curl directly from the ApiGateway pod to the TestService service, it works with no issue. But when I make the same request through Ocelot, it returns a 500 error, saying:
warn: Ocelot.Responder.Middleware.ResponderMiddleware[0]
requestId: 0HLUEDTNVVN26:00000002, previousRequestId: no previous request id, message: Error Code: UnableToCompleteRequestError Message: Error making http request, exception: System.Net.Http.HttpRequestException: Name or service not known
---> System.Net.Sockets.SocketException (0xFFFDFFFF): Name or service not known
I also tried the Kubernetes service-discovery provider described at https://ocelot.readthedocs.io/en/latest/features/kubernetes.html, but honestly the doc is not really clear and I haven't had any success with it so far.
Any idea what the problem might be?
(I'm also using an Ingress resource for exposing the services, but that works with no problem so I will keep it out of this thread for now)

Related

AKS Page not Found for SwaggerUI

I have a .NET 6 web API service which I have deployed on AKS (Azure Kubernetes Service). Program.cs looks like this:
var builder = WebApplication.CreateBuilder(args);
var swaggerConfig = builder.Configuration.GetSection(nameof(SwaggerConfig)).Get<SwaggerConfig>();
builder.Services.AddSwaggerGen(opts =>
{
    opts.SchemaFilter<ExampleNashSchemaFilter>();
    opts.SwaggerDoc(swaggerConfig.ApiVersion, new Microsoft.OpenApi.Models.OpenApiInfo { Title = swaggerConfig.ApiName, Version = swaggerConfig.ApiVersion });
});
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    // Only loopback proxies are allowed by default.
    // Clear that restriction because forwarders are enabled by explicit
    // configuration.
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});
builder.Services.AddCors(p => p.AddPolicy("devcors", builder =>
{
    builder.WithOrigins("*").AllowAnyMethod().AllowAnyHeader();
}));
var app = builder.Build();
app.UseCors("devcors");
app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint($"/swagger/v1/swagger.json", swaggerConfig.ApiName);
    c.RoutePrefix = "swagger";
});
app.UseAuthorization();
app.MapControllers();
app.UseDefaultFiles();
app.UseStaticFiles();
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});
app.Run();
Here are the Swagger settings in appsettings.json:
"SwaggerConfig": {
"ApiName": "Some Services",
"ApiVersion": "v1"
}
Here is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicetest
  template:
    metadata:
      labels:
        app: servicetest
    spec:
      containers:
        - name: servicetest
          image: myprivatehub/servicetest:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: servicetest
spec:
  selector:
    app: servicetest
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: LoadBalancer
For some weird reason I can access the APIs exposed by my service on AKS, but I can't access swagger/index.html; I get a "page can't be found" error. I replicated this deployment on my minikube and it works well. I'm curious to know why I can't access the Swagger UI from AKS.
It seems that you do not have an Ingress Controller deployed on your AKS cluster. You will need one in order to get Ingress to work and to expose the Swagger UI.
To access the Swagger UI for testing purposes you can do this:
kubectl port-forward service/servicetest 8080:80
Afterwards, just access http://localhost:8080
But you should definitely install an ingress controller: Microsoft provides a workflow to install ingress-nginx as the Ingress Controller on your cluster.
To avoid every service spawning its own LoadBalancer and public IP in Azure (and to keep an eye on your costs), you then only expose the ingress controller to the internet. You can also specify the loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The Ingress Controller will then route incoming traffic to your application via an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: servicetest
                port:
                  number: 80

Ingress unable to find backend service in Kubernetes v1.21.5

I am using Kubernetes v1.21.5 on Docker. The following is my Ingress YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: triver.dev
      http:
        paths:
          - path: /service/account/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
I am running an auth-srv image, and the following is its YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: triver/auth
          env:
            - name: MONGO_URI
              value: 'mongodb://mongo-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
But when I try to send a request to any route that I created in the auth service, it fails. For example, I created a POST Express route at triver.dev/service/account/signup for users to sign up. When I try to send a POST request to that route through Postman, it gives an error (404 / ECONNREFUSED) and the request couldn't be sent. Why is this happening? (My Postman is working fine; it's not an issue on the Postman end.)
What am I doing wrong?
The app works, but I just can't access the route. It's definitely an Ingress issue. Can someone help me, please? This is a very important project.
This is what shows up when I use the command kubectl get ingress:
Everything works fine when I run the application using skaffold dev.
That's because you have not specified the hostname in the Ingress; also make sure your ingress controller is running.
Example Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
Note the host: hello-world.info line.
Reference: https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
You can also check out: https://kubernetes.io/docs/concepts/services-networking/ingress/
If you have a default backend set in the Ingress and the host is not the issue, make sure you are sending the request over HTTP instead of HTTPS:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
Since you are not using an SSL/TLS certificate, please try the URL with HTTP.
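For illustration, this is roughly where that annotation would go in the metadata of the ingress-service manifest above (only the metadata section is shown; a sketch, not the full resource):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # disable the HTTP-to-HTTPS redirect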

istio exclude service from ext-auth

Hi guys, I have set up Istio on minikube and configured an Envoy ext-auth filter on the gateways. I have two microservices running in different pods, exposing the virtual services /auther and /appone to the outside world. The ext-auth filter I set up sends every single request to /auther/auth to be authenticated, and if the response is 200 it lets the request pass and reach the service it is addressed to.
The problem is that Istio is authenticating every single request to all endpoints, even /auther. I want to exclude requests sent to /auther from being authenticated (because the auther service handles authentication itself), but it's not working.
So here is my ext-auth filter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: authn-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
              subFilter:
                name: "envoy.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.ext_authz
          typed_config:
            "@type": "type.googleapis.com/envoy.config.filter.http.ext_authz.v2.ExtAuthz"
            http_service:
              server_uri:
                uri: http://auther.default.svc.cluster.local
                cluster: outbound|3000||auther.default.svc.cluster.local
                timeout: 1.5s
              path_prefix: /auther/auth?user=
              authorizationRequest:
                allowedHeaders:
                  patterns:
                    - exact: "cookie"
                    - exact: "authorization"
              authorizationResponse:
                allowedClientHeaders:
                  patterns:
                    - exact: "set-cookie"
                    - exact: "authorization"
And here is the exclusion filter I'm trying to implement:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: bypass-filter
  namespace: default
spec:
  configPatches:
    # The first patch adds the lua filter to the listener/http connection manager
    - applyTo: HTTP_ROUTE
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: auther
            route:
              name: auther
      patch:
        operation: MERGE
        value:
          typed_per_filter_config:
            envoy.ext_authz:
              "@type": type.googleapis.com/envoy.config.filter.http.ext_authz.v2.ExtAuthzPerRoute
              disabled: true
The first filter is working just fine, but the second one, which should exclude the auther service from the ext-auth filter, is not working.
You have set @type to envoy.config.filter.http.ext_authz.v2.ExtAuthzPerRoute, but the correct path is envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute.
Furthermore, the route name must match the name in the virtual service, and the filter must be deployed to the istio-system namespace like your authn-filter. This config works for me:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: bypass-authn
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_ROUTE
      match:
        routeConfiguration:
          vhost:
            route:
              name: my-route # from virtual service http route name
      patch:
        operation: MERGE
        value:
          name: envoy.ext_authz_disabled
          typed_per_filter_config:
            envoy.ext_authz:
              "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
              disabled: true

I have a problem with a Kubernetes deployment. Can anybody help? I always get this error when trying to connect to the cluster IP

I have problems with Kubernetes. I have been trying to deploy my service for two days now, but I'm doing something wrong.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be?
Here is also my YAML file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
    - ${URL}
    - www.${URL}
  acme:
    config:
      - domains:
          - ${URL}
          - www.${URL}
        http01:
          ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - secretName: ${APP_NAME}-cert
      hosts:
        - ${URL}
        - www.${URL}
  rules:
    - host: ${URL}
      http:
        paths:
          - backend:
              serviceName: ${APP_NAME}-service
              servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}-service
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
        - name: webapp
          image: eu.gcr.io/my-site/my-site.com:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          env:
            - name: COMMIT_SHA
              value: ${CI_COMMIT_SHA}
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              memory: '16Mi'
            limits:
              memory: '64Mi'
      imagePullSecrets:
        - name: ${REGISTRY_PULL_SECRET}
Can anybody help me with this? I'm stuck and I've no idea what could be the problem. This is also my first Kubernetes project.
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
.. means just what it says: your request to the Kubernetes API was not authenticated (that's the system:anonymous part), and your RBAC configuration does not tolerate the anonymous user making any requests to the API.
No one here is going to be able to help you straighten out that problem, because fixing that depends on a horrific number of variables. Perhaps ask your cluster administrator to provide you with the correct credentials.
I have explained it in this post: you will need a ServiceAccount, a ClusterRole, and a RoleBinding. You can find an explanation in this article, or, as Matthew L Daniel mentioned, in the Kubernetes documentation.
If you still have problems, provide the method/tutorial you used to deploy the cluster (as "GitLab Kubernetes integration" does not tell much about the method you used).
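As a rough illustration of those three objects, here is a minimal sketch; the names and the permission list are hypothetical, not taken from the question:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer            # hypothetical name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role       # hypothetical name
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources: ["deployments", "services", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding    # hypothetical name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: default
roleRef:
  kind: ClusterRole
  name: deployer-role
  apiGroup: rbac.authorization.k8s.io
The deployment job would then authenticate with that ServiceAccount's token instead of hitting the API as the anonymous user.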

Can't get kubernetes to pass my tls certificate to browsers

I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.
I think the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.
Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.
In my attempt to do this I've set up:
The cluster itself
An nginx-ingress controller
An ingress resource
Here's the related yaml:
Cluster:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-03T03:20:47Z
  labels:
    run: my-es
  name: my-es
  namespace: default
  resourceVersion: "3159488"
  selfLink: /api/v1/namespaces/default/services/my-es
  uid: 373047e0-96cc-11e8-932b-42010a800043
spec:
  clusterIP: 10.63.241.39
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 9200
  selector:
    run: my-es
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca
      https://myOtherDomain.ca
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-08-12T08:44:29Z
  generation: 16
  name: es-ingress
  namespace: default
  resourceVersion: "3159625"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress
  uid: ece0071d-9e0b-11e8-8a45-42001a8000fc
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-es
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - mydomain.ca
      secretName: my-tls-secret
status:
  loadBalancer:
    ingress:
      - ip: 130.211.179.225
The nginx-ingress controller:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-12T00:41:32Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.23.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "2781955"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller
  uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc
spec:
  clusterIP: 10.63.250.256
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 32084
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31182
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 35.212.6.131
I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...
To get my certificate, I just requested one for mydomain.ca from godaddy.
Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name?
It doesn't seem possible to verify ownership of an IP.
I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self signed.
Here are some logs from the nginx-controller:
This one is talking about a PEM with the tls-secret, but it's only a warning.
{
  insertId: "1kvvhm7g1q7e0ej"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T02:58:42.135388365Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "WARNING"
  textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found"
  timestamp: "2018-08-14T02:58:37Z"
}
I have a few occurrences of this handshake error, which may be a result of the last warning...
{
  insertId: "148t6rfg1xmz978"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T15:55:52.438035706Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "ERROR"
  textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442"
  timestamp: "2018-08-14T15:55:50Z"
}
The above logs make it seem like my TLS secret isn't working, but when I run kubectl describe ingress, it says my secret terminates mydomain.ca.
aaronmw@project-7d320:~$ kubectl describe ing
Name:             es-ingress
Namespace:        default
Address:          130.221.179.212
Default backend:  default-http-backend:80 (10.61.3.7:8080)
TLS:
  my-tls-secret terminates mydomain.ca
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /     my-es:8080 (<none>)
Annotations:
Events:  <none>
I figured it out!
What I ended up doing was adding a default SSL certificate to my nginx-ingress controller at install time, using the following command:
helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress
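If you install from a values file instead of --set, the same flag can be written like this (a sketch derived directly from the --set path above):
controller:
  extraArgs:
    default-ssl-certificate: "default/search-tls-secret"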
Once I had that, it was passing the cert as expected, but I still had the wrong cert as the CN didn't match my load balancer IP.
So what I did was:
Make my load balancer IP static
Add an A record to my domain, to map a subdomain to that IP
Re-key my cert to match that new subdomain
And I'm in business!
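For reference, the secret that secretName in the Ingress (or the controller's default-ssl-certificate flag) points to has to exist as a kubernetes.io/tls secret in the expected namespace; a minimal sketch with placeholder data:
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret   # must match secretName in the Ingress
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
The same object can be created from the certificate files with kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key.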
Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.