Forward to external URL on liveness probe failure - Traefik

I currently forward traffic to an internal service:
labels:
  - traefik.http.routers.ocean.rule=Host(`ocean.xxx.ch`)
  - traefik.http.routers.ocean.tls=true
  - traefik.http.routers.ocean.tls.certresolver=lets-encrypt
  - traefik.http.services.ocean.loadbalancer.server.port=3000
  - traefik.http.services.ocean.loadbalancer.healthcheck.path=/_actuator/probes/readiness
  - traefik.http.services.ocean.loadbalancer.healthcheck.interval=10s
If the service fails its health check, I would like the traffic to be forwarded to an external URL, whale.yyy.ch, instead, until the primary service comes back online. Is that possible?
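One possible approach (not confirmed in this thread): Traefik has a failover service type which, in v2.x, is available only through the file (dynamic configuration) provider, so the fallback would be declared there rather than in container labels. A sketch, where the service names and the internal address http://ocean:3000 are assumptions:

```yaml
# Dynamic configuration (file provider) -- a sketch, not a verified setup.
# The router would need to point at ocean-failover@file, e.g. via the label
# traefik.http.routers.ocean.service=ocean-failover@file
http:
  services:
    ocean-failover:
      failover:
        service: ocean-main       # used while healthy
        fallback: whale-external  # used when the health check fails
    ocean-main:
      loadBalancer:
        healthCheck:
          path: /_actuator/probes/readiness
          interval: "10s"
        servers:
          - url: "http://ocean:3000"  # assumed internal address
    whale-external:
      loadBalancer:
        servers:
          - url: "https://whale.yyy.ch"
```

Traffic should return to ocean-main automatically once its health check passes again; note that forwarding to an external HTTPS origin may also require serversTransport settings (SNI, trusted CAs) depending on the target.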

Related

Secure mTLS communication within Istio-knative services + external requests

We are converting existing k8s services to use Istio & Knative. The services receive requests from external users as well as from within the cluster. We are trying to set up an Istio AuthorizationPolicy to achieve the requirements below:
- Certain paths (like docs/healthchecks) should not require any special header and must be accessible from anywhere
- Health & metric collection paths required by Knative must be accessible only by Knative controllers
- Any request coming from outside the cluster (basically through knative-serving/knative-ingress-gateway) must contain a key header matching a pre-shared key
- Any request coming from any service within the cluster can access all the paths
Below is a sample of what I am trying. I am able to get the first 3 requirements working, but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/docs"
        - "/openapi.json"
        - "/redoc"
        - "/rest/v1/healthz"
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/healthz*"
        - "/metrics*"
    when:
    - key: "request.headers[User-Agent]"
      values:
      - "Knative-Activator-Probe"
      - "Go-http-client/1.1"
  - to:
    - operation:
        paths:
        - "/rest/v1/myapp*"
    when:
    - key: "request.headers[my-key]"
      values:
      - "asjhfhjgdhjsfgjhdgsfjh"
  - from:
    - source:
        namespaces:
        - "*"
We have made no changes to the mTLS configuration provided by default by the Istio-Knative setup, so assume that the mTLS mode is currently PERMISSIVE.
Details of the tech stack involved:
- AWS EKS - Version 1.21
- Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one which is listening only on a ClusterIP address), or based on the SourceIP being within the cluster. For the latter, I'd want to test that Istio is using the actual SourceIP and not substituting in the Forwarded header's IP address (a different reasonable configuration).
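Along those lines, if mTLS were switched to STRICT, one hedged option is to match on peer identity rather than namespaces: in AuthorizationPolicy matching, principals: ["*"] matches only requests that carry a verified mTLS identity, which in practice means in-mesh (in-cluster, sidecar-injected) callers. A sketch of what the last rule might look like, untested against this particular setup:

```yaml
# Hypothetical replacement for the last rule. Requires mTLS so that a
# verified peer identity exists; under PERMISSIVE mode, plaintext
# in-cluster traffic would NOT match this rule.
- from:
  - source:
      principals: ["*"]  # any authenticated workload identity in the mesh
```

External traffic arriving through the ingress gateway would then match only via the gateway's own identity, so that identity may need to be excluded (or handled by the pre-shared-key rule) depending on the desired behavior.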

Traefik v2.6 multiple certresolvers

I am running Traefik. I first configured it to use Cloudflare as my certresolver for domain1.com, but I also have domain2.net hosted on Route 53.
This is what I have so far:
--entrypoints.websecure.http.tls.certresolver=cloudflare
--entrypoints.websecure.http.tls.domains[0].main=local.domain1.com
--entrypoints.websecure.http.tls.domains[0].sans=*.local.domain1.com
--certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
--certificatesresolvers.cloudflare.acme.email=myemail@gmail.com
--certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1
--certificatesresolvers.cloudflare.acme.storage=/certs/acme.json
--entrypoints.websecure.http.tls.domains[1].main=local.domain2.net
--entrypoints.websecure.http.tls.domains[1].sans=*.local.domain2.net
--certificatesresolvers.route53.acme.dnschallenge.provider=route53
--certificatesresolvers.route53.acme.email=myemail@gmail.com
--certificatesresolvers.route53.acme.storage=/certs/acme.json
But when I setup this way, only route53 is configured as a certificate resolver. That's because it's being called last.
Is there a way to make this work with multiple certificate resolvers?
Thanks!
I figured this out and forgot to update. Just create additional args on the Traefik deployment:
- --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
- --certificatesresolvers.cloudflare.acme.email=myemail@gmail.com
- --certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1
- --certificatesresolvers.cloudflare.acme.storage=/certs/cloudflare.json
- --certificatesresolvers.route53.acme.dnschallenge.provider=route53
- --certificatesresolvers.route53.acme.email=myemail@gmail.com
- --certificatesresolvers.route53.acme.storage=/certs/route53.json
Then, on each app's deployment, add labels that reference the appropriate entrypoint and certresolver for its own domain.
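To make that last step concrete, here is a per-service sketch; the router name app1 and the hostname are illustrative, not from the thread. Each router simply picks whichever resolver covers its domain:

```yaml
labels:
  # Hypothetical service on domain2.net: its router uses the route53
  # resolver; a domain1.com service would use cloudflare instead.
  - traefik.http.routers.app1.rule=Host(`app1.local.domain2.net`)
  - traefik.http.routers.app1.entrypoints=websecure
  - traefik.http.routers.app1.tls=true
  - traefik.http.routers.app1.tls.certresolver=route53
```

The key point of the fix above is that each resolver also gets its own acme storage file, so the two account/certificate stores no longer overwrite each other.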

Sharing Acme configuration for multiple Traefik services

I have a server running Docker containers with Traefik. Let's say the machine's hostname is machine1.example.com, and each service runs as a subdomain, e.g. srv1.machine1.example.com, srv2.machine1.example.com, srv3.machine1.example.com....
I want to have LetsEncrypt generate a Wildcard certificate for *.machine1.example.com and use it for all of the services instead of generating a separate certificate for each service.
The annoyance is that I have to put the configuration lines into every single service's labels:
labels:
  - traefik.http.routers.srv1.rule=Host(`srv1.machine1.example.com`)
  - traefik.http.routers.srv1.tls=true
  - traefik.http.routers.srv1.tls.certresolver=myresolver
  - traefik.http.routers.srv1.tls.domains[0].main=machine1.example.com
  - traefik.http.routers.srv1.tls.domains[0].sans=*.machine1.example.com
labels:
  - traefik.http.routers.srv2.rule=Host(`srv2.machine1.example.com`)
  - traefik.http.routers.srv2.tls=true
  - traefik.http.routers.srv2.tls.certresolver=myresolver
  - traefik.http.routers.srv2.tls.domains[0].main=machine1.example.com
  - traefik.http.routers.srv2.tls.domains[0].sans=*.machine1.example.com
  # etc.
This gets to be a lot of seemingly-needless boilerplate.
I tried to work around it (in a way that is still ugly and annoying, but less so) by using the templating feature of the file provider, like this:
[http]
  [http.routers]
  {{ range $i, $e := list "srv1" "srv2" }}
    [http.routers."{{ $e }}".tls]
      certResolver = "letsencrypt"
      [[http.routers."{{ $e }}".tls.domains]]
        main = "machine1.example.com"
        sans = ["*.machine1.example.com"]
  {{ end }}
That did not work, because the routers created here are srv1@file and srv2@file, instead of the srv1@docker and srv2@docker routers created by the docker-compose configuration.
Is there any way to specify this configuration only once and have it apply to multiple services?
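One approach the thread does not mention (so treat it as a hedged suggestion, not a confirmed answer): in Traefik v2, the TLS cert resolver and wildcard domains can be declared once on the entrypoint in the static configuration, so routers using that entrypoint no longer need the domains/certresolver labels. A sketch reusing the names from the question:

```yaml
# Static configuration sketch: request the wildcard certificate once at
# the entrypoint level instead of repeating it on every router.
entryPoints:
  websecure:
    address: ":443"
    http:
      tls:
        certResolver: myresolver
        domains:
          - main: "machine1.example.com"
            sans:
              - "*.machine1.example.com"
```

With this in place, each service's labels can typically shrink to just its Host rule (plus the entrypoint assignment), since TLS defaults are inherited from the entrypoint.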

Getting API (deployed on IBM APIC 5.0) to invoke Loopback application (deployed on Collective Members)

I am using IBM APIC 5.0
I have set up the following:
1. IBM HTTP Server, WAS Plugin routing to MicroGateway
2. MicroGateway, running on Collectives
3. IBM HTTP Server, WAS Plugin routing to Provider Application
4. Provider Application, running on Collectives
Scenario 1 - Invoke Provider App URL directly
HTTPS request to IHS1/Plugin
Configure API to invoke the URL directly (e.g. http://:9081), without SSL
IHS1/Plugin (svr1:443) > MicroGateway (svr1:9081) > Loopback App (svr2:9081)
This works.
Scenario 2 - Invoke Provider App, indirectly via HTTP Server
HTTPS request to IHS1/Plugin
Set host header accordingly (as described in KnowledgeCenter)
Configure API to invoke the IHS URL (e.g. https://svr1:443), with SSL
IHS1/Plugin (svr1:443) > MicroGateway (svr1:9081) > IHS2/Plugin (svr2:443) > Loopback App (svr2:9081).
503 error encountered.
The ihs2/plugin trace reveals the following:
[29/Sep/2016:12:55:59.40468] 00007ea3 fdd0b700 - ODR:DEBUG: matchVHost: enter - host=apidemo-57d22263e4b0171525a5042d-1474392568657.xxx, port=443
[29/Sep/2016:12:55:59.40470] 00007ea3 fdd0b700 - ODR:DEBUG: matchLongestURI: virtual host /cell/defaultCollective/vHostGroup/-vHost-apidemo-57d22263e4b0171525a5042d-1474392568657.xxx:-1 matched host apidemo-57d22263e4b0171525a5042d-1474392568657.xxx
This shows that the configured host header matches, and that it is able to find the provider application server. This means the dynamic routing works to a certain extent.
[29/Sep/2016:12:55:59.40565] 00007ea3 fdd0b700 - ODR:DEBUG: checkIfTransportIsValid: endpoint name='/cell/defaultCollective/node/,%2Fhome%2Fusers%2Fadmin%2Fwlpn/server/apidemo-57d22263e4b0171525a5042d-1474392568657-1/transport/Https', port=9081 is valid
This shows that 9081 is a valid port and that Https is selected.
[29/Sep/2016:12:55:59.40971] 00007ea3 fdd0b700 - ERROR: lib_stream: openStream: Failed in r_gsk_secure_soc_init: GSK_ERROR_SOCKET_CLOSED(gsk rc = 420) PARTNER CERTIFICATE DN=No Information Available, Serial=No Information Available
[29/Sep/2016:12:55:59.40982] 00007ea3 fdd0b700 - ERROR: GSK_INVALID_HANDLE
[29/Sep/2016:12:55:59.40998] 00007ea3 fdd0b700 - ERROR: ws_common: websphereGetStream: Could not open stream
Then comes the error. It's an SSL error. I suspect that the provider application is currently not enabled with SSL.
Questions on how to resolve this:
1) How do I enable SSL on the Loopback app? I followed these instructions, but they do not work for me because my Loopback app is deployed on Collectives:
https://github.com/strongloop/loopback-example-ssl
2) How do I configure the dynamic routing to use non-SSL http traffic instead?

WebLogic 12.1.2: "https + t3" combination on a single managed server. Is it possible?

WLS 12.1.2 is running under JDK 1.7_60 on Windows 7.
To meet the requirement "switch to HTTPS, but leave t3", the following steps are performed in the admin console for the managed server (where the apps reside):
1. Disable the default listen port 7280 (http and t3)
2. Enable the default SSL listen port 7282 (https and t3s)
3. To re-enable t3, create a custom Channel:
- Protocol: t3
- Port: 7280
- "HTTP Enabled for This Protocol" flag set to false
After that, we have https and t3s on port 7282 and t3 only on port 7280.
In this case, we have issues with the deployment of applications: the deployer fails to start/stop the apps, because it still tries to send messages to the managed server via http.
I turned on deployment debugging and see the following messages in the admin server log:
…<DeploymentServiceTransportHttp> …<HTTPMessageSender: IOException: java.io.EOFException: Response had end of stream after 0 bytes when making a DeploymentServiceMsg request to URL: http://localhost:7280/bea_wls_deployment_internal/DeploymentService>
… <DeploymentServiceTransportHttp> …<sending message for id '-1' to 'my_srv' using URL 'http://localhost:7280' via http>
If I disable the custom t3 Channel, everything is OK: the deployer sends messages to https://localhost:7282, as expected. But then we have no t3 available.
Any help is much appreciated.
Thanks