Our web application is already running on an on-prem Kubernetes setup with the following Traefik configuration. The HTTPS endpoints are working fine, and now we need to add two services that run over HTTP on their own specific ports.
So basically we need the following routing:
[existing setup]
HTTPS adminapp.mydomain.com -> Admin UI App
HTTPS myapp.mydomain.com -> UI App
HTTPS api.mydomain.com -> Backend API
[new services]
HTTP api.mydomain.com:8111 -> Service1 API Integration with HTTP
HTTP api.mydomain.com:9111 -> Service2 API Integration with HTTP
Service1 and Service2 are intranet systems that will send data to their own specific ports.
Here is the traefik configuration:
## Entrypoint Configurations
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
  service1:
    address: ":8111"
  service2:
    address: ":9111"
----
## Service1 IngressRoute
entryPoints:
  - service1
routes:
  - match: Host(`api.mydomain.com`)
    kind: Rule
    services:
      - name: service1-clusterip-service
        port: 8111
----
## Service2 IngressRoute
entryPoints:
  - service2
routes:
  - match: Host(`api.mydomain.com`)
    kind: Rule
    services:
      - name: service2-clusterip-service
        port: 9111
When we call Service1 at http://api.mydomain.com:8111/path/arg/item over plain HTTP, we get this specific error:
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
The access logs do not contain much detail either, so it is hard to identify where the request is breaking.
We have a middleware that force-redirects HTTP to HTTPS, but we removed it while testing the configuration above.
Any idea why the configuration is not working as expected?
Issue resolved. We found a typo in the Kubernetes Service, which was pointing to the wrong pod selector.
Our setup also changed a bit, so I am putting it here in case anyone else faces the same issue.
[existing setup]
HTTPS adminapp.mydomain.com -> Admin UI App
HTTPS myapp.mydomain.com -> UI App
HTTPS api.mydomain.com -> Backend API
[new services]
HTTP api.mydomain.com:8111 -> Service1 API Integration with HTTP
TCP api.mydomain.com:9111 -> Service2 API Integration with TCP
For the TCP integration, make sure you follow these points (a minimal sketch follows the list):
- The entrypoint is defined with :port/tcp.
- The router is defined with IngressRouteTCP.
- If you are doing a host check, use HostSNI(`*`) when TLS is disabled.
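For reference, here is a minimal sketch of what that looks like for Service2, reusing the entrypoint and service names from the setup above (the resource name and apiVersion are assumptions for a Traefik v2 cluster, not taken from our manifests):
## Static configuration: TCP entrypoint
entryPoints:
  service2:
    address: ":9111/tcp"
----
## IngressRouteTCP for Service2
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: service2-tcp # hypothetical name
spec:
  entryPoints:
    - service2
  routes:
    - match: HostSNI(`*`) # required form when TLS is disabled
      services:
        - name: service2-clusterip-service
          port: 9111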
Related
We are using ActiveMQ 5.15.6. We have deployed it as a container on a Kubernetes cluster. We have exposed the ActiveMQ web console using a load balancer.
Now we are setting up a service mesh using Istio, for which we have to enable strict TLS in all the namespaces (applications). After enabling TLS we are unable to use the load balancer, so to expose the web console we need to create a virtual service and a gateway that will do the routing.
We have created the virtual service and tried to expose it on the path /activemq, but it is not working as expected. We also changed the routing path from /admin to /activemq in jetty.xml, as /admin was conflicting with another application's path.
Kindly help us to understand how we can set up proper routing using a virtual service.
Note: We also tried it with nginx-ingress and it didn’t work.
Virtual Service:
kind: VirtualService
metadata:
  name: activemq-vs
spec:
  hosts:
    - "*"
  gateways:
    - default/backend-gateway
  http:
    - name: activemq-service
      match:
        - uri:
            prefix: /activemq
      rewrite:
        uri: /activemq
      route:
        - destination:
            host: active-mq.activemq.svc.cluster.local
            port:
              number: 8161
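For reference, the default/backend-gateway that this VirtualService binds to needs a matching server definition. A minimal sketch, assuming the stock Istio ingress gateway and plain HTTP on port 80 (the selector and port are assumptions, not taken from the question):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: backend-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway # assumes the default istio-ingressgateway deployment
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"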
I'm too dumb and Google is not helpful.
I'm trying to set up the simplest configuration:
Traefik 2 (in the latest Docker container), handling incoming requests...
- should direct all incoming requests (HTTP, HTTPS) to another service, the Traefik whoami demo container (which I already have running)...
- while terminating the SSL connection, calling the service via HTTP on port 80...
- while using a configuration file with explicitly defined routes.
How would I configure this? Here's my try:
entryPoints:
  web:
    address: :80
  websecure:
    address: :443
log:
  filePath: "/home/LogFiles/traefik.log"
  level: DEBUG
accessLog:
  filePath: "/home/LogFiles/trafik-access.log"
providers:
  file:
    filename: "/home/traefik.yml"
http:
  routers:
    route-https:
      rule: "Host(`traefik-test.azurewebsites.net`) && PathPrefix(`/whoami`)"
      service: "whoami"
      tls: {}
    route-http:
      rule: "Host(`traefik-test.azurewebsites.net`) && PathPrefix(`/whoami`)"
      service: "whoami"
  services:
    whoami:
      loadBalancer:
        servers:
          - url: "http://whoami-test.azurewebsites.net/"
I am not sure how the HTTPS-to-HTTP conversion works. The documentation says it happens automatically. Another part of the docs says you have to use two routers, and that the tls: {} part tells Traefik to terminate the TLS connection. That's what I am doing above. (Is that correct?)
The whoami service URL can be accessed in the browser without problems, via both HTTP and HTTPS. But when calling it via Traefik (for the above sample that would be https://traefik-test.azurewebsites.net/whoami) I get a 400 and the browser shows "Bad Request". I suspect the HTTPS-to-HTTP part is not working.
Samples on the web commonly show how to orchestrate multiple containers that get discovered by Traefik. That's not what I'm doing here. I just want to tell Traefik about my already-running service: take every request and route everything to my service via HTTP. Should be simple?
Any hints are appreciated.
There were two errors preventing my configuration from working.
Number one: nasty YAML. See the two spaces before "-"?
servers:
  - url: "http://whoami-test.azurewebsites.net/"
They have to go for this to be valid:
servers:
- url: "http://whoami-test.azurewebsites.net/"
Number two: the Host header forwarded by Traefik (set to the proxy host) makes the backing web app redirect back to my proxy. The passHostHeader: false setting was necessary:
services:
  whoami:
    loadBalancer:
      passHostHeader: false # <------ added this
      servers:
      - url: "http://whoami-test.azurewebsites.net/"
Passing the proxy host as the Host header causes some services to 301-redirect back to the proxy, creating a redirect loop between proxy and service. A .NET Core app (Kestrel) will respond with "Bad Request" instead. Omitting the header is the solution in my case.
I have a Kubernetes Ingress, pointing to a headless service, pointing finally to an Endpoints object that routes to an external IP address. The following is the configuration for the Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-chart
subsets:
  - addresses:
      - ip: **.**.**.**
    ports:
      - port: 443
However, the upstream connection fails with 'connection reset by peer', and on looking at the logs I see the following error in the Kubernetes nginx-ingress-controller:
2020/01/15 14:39:50 [error] 24546#24546: *240425068 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: *****, server: dev.somehost.com, request: "GET / HTTP/1.1", upstream: "http://**.**.**.**:443/", host: "dev.somehost.com"
My theory is that the combination of http:// and port 443 is what triggers this (tested with cURL commands). How do I either 1) specify a different protocol for the Endpoints object, or 2) just prevent the prepending of http://?
Additional notes:
1) SSL is enabled on the target IP, and if I curl it I can set up a secure connection
2) SSL passthrough doesn't really work here. The incoming and outgoing requests will use two different SSL connections with two different certificates.
3) I want the Ingress host to be the SNI (and it looks like this may default to being the case)
Edit: Ingress controller version: 0.21.0-rancher3
We were able to solve this by adding the following to the metadata of our Ingress
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  nginx.ingress.kubernetes.io/configuration-snippet: |-
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
The first annotation switches the backend protocol to HTTPS, and the configuration snippet enables SNI for the upstream TLS connection.
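For context, here is a minimal sketch of where those annotations sit on the Ingress object (the name, host, and backend are placeholders based on this question, and the apiVersion depends on your cluster version, so treat this as an illustration rather than a verified manifest):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-chart
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      proxy_ssl_server_name on;
      proxy_ssl_name $host;
spec:
  rules:
    - host: dev.somehost.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-chart
              servicePort: 443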
This is more of a how-to question, as I am still exploring OpenShift.
We have an orchestrator running on OpenShift which calls a REST API written in Flask, hosted on Apache/RHEL.
While our endpoint is token-authenticated, we wanted to add a second level of restriction by allowing access only from a whitelist of source hosts.
But given how OpenShift works, a container can be scheduled on any (number of) servers across the cluster.
What is the best way to go about whitelisting traffic that originates from a cluster of computers?
I tried to take a look at an external LoadBalancer for the orchestrator service:
clusterIP: 172.30.65.163
externalIPs:
  - 10.198.40.123
  - 172.29.29.133
externalTrafficPolicy: Cluster
loadBalancerIP: 10.198.40.123
ports:
  - nodePort: 30768
    port: 5023
    protocol: TCP
    targetPort: 5023
selector:
  app: dbrun-x2
  deploymentconfig: dbrun-x2
sessionAffinity: None
type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 172.29.29.133
What I am unsure about is: which IP should I expect to see on the other side (in my API's Apache access logs) with this setup?
Or does this LoadBalancer act as a gateway only for incoming calls to OpenShift?
Sorry about the long post; I would appreciate some input.
My company has a running microservice project deployed on OpenShift, and the existing services successfully connect to the RabbitMQ service with these properties in their code:
spring.cloud.stream.rabbitmq.host: rabbitmq
spring.cloud.stream.rabbitmq.port: 5672
Now I'm developing a new service on my laptop that is not deployed to OpenShift yet, and I'm trying to connect to the same broker as the others, so I created a new Route for the RabbitMQ service with 5672 as its port.
This is my Route YAML:
apiVersion: v1
kind: Route
metadata:
  name: rabbitmq-amqp
  namespace: message-broker
  selfLink: /oapi/v1/namespaces/message-broker/routes/rabbitmq-amqp
  uid: 5af3e903-a8ad-11e7-8370-005056aca8b0
  resourceVersion: '21744899'
  creationTimestamp: '2017-10-04T02:40:16Z'
  labels:
    app: rabbitmq
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
  to:
    kind: Service
    name: rabbitmq
    weight: 100
  port:
    targetPort: 5672-tcp
  tls:
    termination: passthrough
  wildcardPolicy: None
status:
  ingress:
    - host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
      routerName: router
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: '2017-10-04T02:40:16Z'
      wildcardPolicy: None
When I try to connect my new service with these properties:
spring.cloud.stream.rabbitmq.host: rabbitmq-amqp-message-broker.apps.fifgroup.co.id
spring.cloud.stream.rabbitmq.port: 80
my service fails to establish the connection.
How can I solve this problem?
Where should I fix it: my Route or my service properties?
Thank you for your attention.
If you are using a passthrough secure connection to expose a non-HTTP server outside of the cluster, your service must terminate a TLS connection and your client must also support SNI over TLS. Do you have both of those? Right now it seems you are trying to connect on port 80 anyway, which means it must be HTTP, and RabbitMQ (AMQP) is not HTTP. If you are only trying to connect to it from a front-end app in the same OpenShift cluster, you don't need a Route at all: use 'rabbitmq' as the host name and port 5672.
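In other words, for the in-cluster case the new service can reuse the same properties the existing services already use (these values are taken from the question above); if it runs in a different namespace, the fully qualified service name should also work:
spring.cloud.stream.rabbitmq.host: rabbitmq
spring.cloud.stream.rabbitmq.port: 5672
# or, from another namespace (based on the Route's message-broker namespace):
# spring.cloud.stream.rabbitmq.host: rabbitmq.message-broker.svc.cluster.local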