OpenShift: Pod often does not return expected requests - api

I'm running a .NET Core 3.1 REST API inside a pod in OpenShift, and it processes every request smoothly: all transactions to the database (outside the OpenShift network) are executed properly, and all programmatic executions finish without errors. However, about 1 in 15 requests consistently ends in ECONNRESET and fails to return the HTTP response.
Let's say I make a GET to /users/id/3. I can see the request hitting my REST API and being processed all the way down the infrastructure layer: it fetches the data from the DB, wraps the result, and sends it back, finishing the request. At that point, however, no response is received by the frontend or by Postman, even though the API logs show the request was finished and a response returned.
These failing requests all take about 2.3 minutes to execute and often end in an ECONNRESET. I'm at a loss as to how to troubleshoot this. I have tried curl'ing the resource from another pod and the same behaviour appears.
I think these requests are sometimes getting lost in the cluster network, so I tried playing with the sessionAffinity of the service config, but as far as I understood that isn't really tied to this. Do I have a wrong route config, or service config?
Route config
spec:
  host: api.com.cloud
  to:
    kind: Service
    name: api
    weight: 100
  port:
    targetPort: 8080-tcp
  tls:
    termination: edge
  wildcardPolicy: None
status:
  ingress:
    - host: api.com.cloud
      routerName: default
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: XXXX
      wildcardPolicy: None
      routerCanonicalHostname: router-default.apps.com.cloud
Service config
spec:
  clusterIP: XXXX
  ipFamilies:
    - IPv4
  ports:
    - name: 8080-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080
  internalTrafficPolicy: Cluster
  clusterIPs:
    - XXX
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  sessionAffinity: None
  selector:
    deploymentconfig: api
status:
  loadBalancer: {}
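One thing worth checking, given that the failing requests run for around 2.3 minutes: the OpenShift router (HAProxy) applies a default server timeout of 30 seconds to routes and drops the connection after that, even if the pod finishes the request later. A minimal sketch of raising it via the route timeout annotation (the 5m value is just an example, and this only affects traffic going through the router, so it would not explain the pod-to-pod curl case):
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 5m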

Related

Forward Flex Gateway Logs to Splunk

I have an instance of MuleSoft's Flex Gateway (v 1.2.0) installed on a Linux machine in a Podman container. I am trying to forward container logs as well as API logs to Splunk. Below is my log.yaml file in the /home/username/app folder. I'm not sure what I am doing wrong, but the logs are not getting forwarded to Splunk.
apiVersion: gateway.mulesoft.com/v1alpha1
kind: Configuration
metadata:
  name: logging-config
spec:
  logging:
    outputs:
      - name: default
        type: splunk
        parameters:
          host: <instance-name>.splunkcloud.com
          port: "443"
          splunk_token: xxxxx-xxxxx-xxxx-xxxx
          tls: "on"
          tls.verify: "off"
          splunk_send_raw: "on"
    runtimeLogs:
      logLevel: info
      outputs:
        - default
    accessLogs:
      outputs:
        - default
Please advise.
The endpoint for Splunk's HTTP Event Collector (HEC) is https://http-input.<instance-name>.splunkcloud.com:443/services/collector/raw. If you're using a free trial of Splunk Cloud then change the port number to 8088. See https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform for details.
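For reference, a sketch of how the output entry might look with that endpoint; this assumes the host parameter takes just the hostname (Flex Gateway's logging is Fluent Bit based, per the answer below, and the collector path is handled for you), and all values here are placeholders:
outputs:
  - name: default
    type: splunk
    parameters:
      host: http-input.<instance-name>.splunkcloud.com   # HEC host from the answer above
      port: "443"                                         # 8088 on a Splunk Cloud free trial
      splunk_token: xxxxx-xxxxx-xxxx-xxxx
      tls: "on"
      tls.verify: "off"
      splunk_send_raw: "on"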
I managed to get this to work. The issue was that I had to give full permissions to the app folder using the chmod command. After that was done, the fluent-bit.conf file had an entry for Splunk and logs started flowing.

Connection refused when using load balancing with Traefik 2

I have a YAML configuration file with a router and a service. Every time, I get a 404 error. I know the URL works and I can access the server from the Traefik server. What am I missing? Also, for some reason the request reroutes to HTTPS. Perhaps a conflicting rule?
Also note that Traefik runs in Docker, but the connecting server does not. The goal here is to add multiple nodes to the load balancer.
http:
  routers:
    demo_1-rtr:
      rule: "Host(`http://demo.lab.local`)"
      service: demo_1
      entryPoints:
        - http
  services:
    demo_1:
      loadBalancer:
        servers:
          - url: "http://172.16.9.90:16000"
Traefik Config:
global:
  checkNewVersion: true
  sendAnonymousUsage: true
api:
  insecure: true
providers:
  docker:
    endpoint: "unix://var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: /rules
    watch: true
log:
  level: DEBUG
accessLog: {}
entryPoints:
  http:
    address: ":80"
I suspect it would be the --api.insecure=true global argument, and it should work.
So in your case, add the following to traefik.toml:
[api]
  insecure = true
Otherwise I would need more information to debug further.
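Separately, and only as a guess from the posted config rather than something confirmed here: Traefik's Host matcher expects a bare hostname, not a URL, so the scheme inside Host(`http://demo.lab.local`) would keep the router from ever matching and return 404. A sketch of the router rule with just the hostname:
http:
  routers:
    demo_1-rtr:
      rule: "Host(`demo.lab.local`)"   # no scheme inside the Host matcher
      service: demo_1
      entryPoints:
        - http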

Let's Encrypt SSL with Traefik on ECS Fargate

I've been trying to solve this for days, but without any luck:
Situation:
I have an ECS cluster on AWS using Fargate. The cluster contains an instance of Traefik 2.3.4 and other containers, and I'm using Traefik as a reverse proxy to forward requests to the other containers.
Using HTTP everything works fine, so I decided to add a secure connection to Traefik as well. I've tried everything I could find on the Internet but nothing works; when I try to connect to the specified domain with curl it returns:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Here are some of the tests I've done:
traefik.yml:
log:
  level: DEBUG
api:
  dashboard: true
entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"
providers:
  ecs:
    clusters:
      - tools-cluster
    region: eu-west-2
    exposedByDefault: false
certificatesResolvers:
  letsencrypt:
    acme:
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      email: #########################
      storage: acme.json
      httpchallenge:
        entrypoint: web
Labels:
"dockerLabels": {
"traefik.enable": "true",
"traefik.http.services.traefik.loadbalancer.server.port": "8080",
"traefik.http.routers.traefik.rule": "Host(`${host}`)",
"traefik.http.routers.traefik.entrypoints": "websecure",
"traefik.http.routers.traefik.tls.certresolver": "letsencrypt",
"traefik.http.routers.traefik.service": "api#internal"
}
This version returns this error:
Error: 400 :: urn:ietf:params:acme:error:connection :: Fetching https://traefik.baaluu.com/.well-known/acme-challenge/td8IdOvJ1_GkigY-jPYaA4YsgeiS5FUiuUS-avbpsuY: Error getting validation data, url
It tries to retrieve the validation data but can't, because the request is redirected to HTTPS and HTTPS doesn't work. I've also tried without the auto redirect, and it returns a similar error: it still can't retrieve that data.
But following this guide it should work correctly.
So I've decided to move to the dnsChallenge with this configuration:
traefik.yml:
log:
  level: DEBUG
api:
  dashboard: true
entryPoints:
  web:
    address: :80
  websecure:
    address: ":443"
providers:
  ecs:
    clusters:
      - tools-cluster
    region: eu-west-2
    exposedByDefault: false
certificatesResolvers:
  letsencrypt:
    acme:
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      email: ######################
      storage: acme.json
      dnsChallenge:
        provider: route53
        delayBeforeCheck: 3
And the same labels as before:
"dockerLabels": {
  "traefik.enable": "true",
  "traefik.http.services.traefik.loadbalancer.server.port": "8080",
  "traefik.http.routers.traefik.rule": "Host(`${host}`)",
  "traefik.http.routers.traefik.entrypoints": "websecure",
  "traefik.http.routers.traefik.tls.certresolver": "letsencrypt",
  "traefik.http.routers.traefik.service": "api#internal"
}
Still nothing, and I have this in the logs: AuthURL: https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/170242259
That URL contains:
{
  "type": "urn:ietf:params:acme:error:malformed",
  "detail": "Method not allowed",
  "status": 405
}
The latest test I did was to remove the staging CA server:
log:
  level: DEBUG
api:
  dashboard: true
entryPoints:
  web:
    address: :80
  websecure:
    address: :443
providers:
  ecs:
    clusters:
      - tools-cluster
    region: eu-west-2
    exposedByDefault: false
certificatesResolvers:
  letsencrypt:
    acme:
      email: ###############
      storage: acme.json
      dnsChallenge:
        provider: route53
        delayBeforeCheck: 2
SSL still doesn't work, but I don't see any error messages in the logs. This is the last message I get about a certificate:
Try to challenge certificate for domain [traefik.baaluu.com] found in HostSNI rule" providerName=letsencrypt.acme routerName=traefik#ecs rule="Host(`traefik.baaluu.com`)"
And there is not much more after that:
(I'm sorry for the picture, but I can't find a way to extract those logs from ECS.)
The other containers are still reachable on the http protocol.
If I try to connect to it using telnet I can reach the service:
telnet traefik.baaluu.com 443
Trying 3.8.30.164...
Connected to traefik-1547500306.eu-west-2.elb.amazonaws.com.
Escape character is '^]'.
Same goes for port 80.
Looking closer at the logs, I also found this:
time="2020-12-10T13:08:21Z" level=debug msg="legolog: [INFO] retry due to: acme: error: 400 :: POST :: https://acme-v02.api.letsencrypt.org/acme/chall-v3/9205340157/1Wh0tQ :: urn:ietf:params:acme:error:badNonce :: JWS has an invalid anti-replay nonce: \"0004cbkFTGjCALFGDYOmhruMl6_F_fRSj33cOMvdpx5Xd2M\", url: "
That contains this URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/9205340157/1Wh0tQ
{
  "type": "dns-01",
  "status": "valid",
  "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/9205340157/1Wh0tQ",
  "token": "44R4gD4_ZmemiCn5rtkqJyWOcjoj09sEgobUvZLH6yc",
  "validationRecord": [
    {
      "hostname": "traefik.baaluu.com"
    }
  ]
}
So I suppose the certificate has been generated correctly, but I'm not sure.
Any idea or suggestion?
Thanks in advance.
H2K
Edit:
I've removed the SSL config from the dashboard and put it on another container; now, looking at the dashboard, I can see this:
So I suppose that SSL is working for that domain, but I still can't connect to it.
Edit 2:
With telnet, if I connect to that URL on port 443 and request the page, I can see the content:
telnet xxxxxxxxxxxxxxxxx 443
Trying 3.10.148.201...
Connected to traefik-1547500306.eu-west-2.elb.amazonaws.com.
Escape character is '^]'.
GET /index.html HTTP/1.1
Host: xxxxxxxxxxxxxxxxx
And the content of the page appears, so it is not a load balancer or routing problem: I can reach the container on 443, there is simply no SSL there. It's like having two HTTP ports that behave the same way; at the moment, 443 is acting like port 80.
I have also spent a number of days trying to work this out, so I feel your pain.
The error is misleading: the request doesn't even make it past the ALB, let alone Traefik.
There are two factors to this issue.
The first is that when you specify port 443 through Docker Compose as "443:443", you would assume this creates an HTTPS listener; it actually creates a listener for 443 on the HTTP protocol. In addition, the listener sent the data to the Fargate HTTP port and didn't redirect. I'm not sure if this is a bug, or because I hadn't specified that the protocol should be "x-aws-protocol: https" on the target port.
I also found some AWS documentation saying that if you use an HTTPS port on an ALB, you need an SSL certificate in place at the ALB level. This kind of makes sense: you can't terminate the connection at the task level if you consider the swarm nature and the security implications (better minds are welcome to explain).
With the above in mind, I created a certificate in ACM that covered all the domains I needed, changed the listener to the HTTPS protocol, and specified the certificate I created. At that point I was able to configure Traefik to accept traffic to the frontend.
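For what it's worth, this is roughly what the port mapping can look like in a Compose file used with the ECS integration, including the protocol hint mentioned above; the service name and image are placeholders, and whether the hint alone is enough (as opposed to attaching the ACM certificate to the ALB listener directly) isn't confirmed in this thread:
services:
  traefik:
    image: traefik:v2.3
    ports:
      - target: 443
        published: 443
        x-aws-protocol: https   # hint for the ECS integration to create an HTTPS listener rather than HTTP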

Express Gateway degraded causing huge delays on requests

I have a Gateway configured to handle some services, but there is a huge performance issue and I don't know how to address it. This is the config for this particular service:
http:
  port: 3020
  hostname: 'xxxx.prod.xx.xxx.net'
apiEndpoints:
  laPrdTest:
    host: 'xxx-xxxxxxxx.xxxxxxxx.com'
    paths: '/api/v1/esign/*'
serviceEndpoints:
  laPrdTest:
    urls:
      - http://xxxxxx-app1.prd.xxx.xxxxxxxx.net
      - http://xxxxxx-app2.prd.xxx.xxxxxxxx.net
policies:
  - basic-auth
  - cors
  - expression
  - log
  - jwt
  - proxy
  - rate-limit
pipelines:
  laPrdTest:
    apiEndpoints:
      - laPrdTest
    policies:
      - cors:
      - log:
          - action:
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.headers)}
      - rate-limit:
          - action:
              max: 50
              windowMs: 120000
              rateLimitBy: "${req.ip}"
      - proxy:
          - action:
              serviceEndpoint: laPrdTest
              changeOrigin: true
Check this GIF to see how bad the performance is:
And this is a call directly to the service API with no performance issue:
Why is this happening?
What can I do to solve this issue?
In the end I couldn't use this anymore and moved to doing the proxying through Nginx directly.
Any request made through the Gateway took seconds, almost minutes; now, using Nginx as the gateway, it takes milliseconds.
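Before giving up on the gateway entirely, one way to narrow this kind of issue down (a sketch based on the config above, assuming the overhead comes from one of the pipeline policies rather than the proxy itself) is to strip the pipeline to just the proxy policy, re-test, and then add cors, log, and rate-limit back one at a time:
pipelines:
  laPrdTest:
    apiEndpoints:
      - laPrdTest
    policies:
      - proxy:
          - action:
              serviceEndpoint: laPrdTest
              changeOrigin: true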

Debugging istio rate limiting handler

I'm trying to apply rate limiting on some of our internal services (inside the mesh).
I used the example from the docs and generated redis rate limiting configurations that include a (redis) handler, quota instance, quota spec, quota spec binding and rule to apply the handler.
This redis handler:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: redishandler
  namespace: istio-system
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: <REDIS>:6379
    connectionPoolSize: 10
    quotas:
      - name: requestcountquota.instance.istio-system
        maxAmount: 10
        validDuration: 100s
        rateLimitAlgorithm: FIXED_WINDOW
        overrides:
          - dimensions:
              destination: s1
            maxAmount: 1
          - dimensions:
              destination: s3
            maxAmount: 1
          - dimensions:
              destination: s2
            maxAmount: 1
The quota instance (I'm only interested in limiting by destination at the moment):
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
A quota spec, charging 1 per request if I understand correctly:
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
    - quotas:
        - charge: 1
          quota: requestcountquota
A quota binding spec that all participating services pre-fetch. I also tried with service: "*" which also did nothing.
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
    - name: request-count
      namespace: istio-system
  services:
    - name: s2
      namespace: default
    - name: s3
      namespace: default
    - name: s1
      namespace: default
    # - service: '*' # Uncomment this to bind *all* services to request-count
A rule to apply the handler, currently on all occasions (I tried with match conditions as well, but that didn't change anything):
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
    - handler: redishandler
      instances:
        - requestcountquota
The VirtualService definitions are pretty similar for all participants:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: s1
spec:
  hosts:
    - s1
  http:
    - route:
        - destination:
            host: s1
The problem is that nothing really happens and no rate limiting takes place. I tested with curl from pods inside the mesh. The redis instance is empty (no keys on db 0, which I assume is what the rate limiting would use), so I know it can't practically rate-limit anything.
The handler seems to be configured properly (how can I make sure?), because I had some errors in it that were reported in mixer (policy). There are still some errors, but none that I associate with this problem or the configuration. The only line in which the redis handler is mentioned is this:
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
But it's unclear whether that's a problem or not. I assume it's not.
These are the rest of the lines from the reload once I deploy:
2019-12-17T13:44:22.601644Z info Built new config.Snapshot: id='43'
2019-12-17T13:44:22.601866Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.601881Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.602718Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903844Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903878Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903882Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.904808Z info Setting up event handlers
2019-12-17T13:44:22.904939Z info Starting Secrets controller
2019-12-17T13:44:22.904991Z info Waiting for informer caches to sync
2019-12-17T13:44:22.957893Z info Cleaning up handler table, with config ID:42
2019-12-17T13:44:22.957924Z info adapters deleted remote controller {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.957999Z info adapters adapter closed all scheduled daemons and workers {"adapter": "prometheus.istio-system"}
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
2019-12-17T13:44:22.958065Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958050Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958096Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958182Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:23.958109Z info adapters adapter closed all scheduled daemons and workers {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:55:21.042131Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-17T14:14:00.265722Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I'm using the demo profile with disablePolicyChecks: false to enable rate limiting. This is on Istio 1.4.0, deployed on EKS.
I also tried memquota (this is our staging environment) with low limits, and nothing seems to work. I never got a 429, no matter how far I went over the configured rate limit.
I don't know how to debug this and see where the configuration is wrong and causing it to do nothing.
Any help is appreciated.
I too spent hours trying to decipher the documentation and get a sample working.
According to the documentation, they recommend that we enable policy checks:
https://istio.io/docs/tasks/policy-enforcement/rate-limiting/
However, when that did not work, I did an "istioctl profile dump", searched for policy, and tried several settings.
I used a Helm install, passed the following, and was then able to get the described behaviour:
--set global.disablePolicyChecks=false \
--set values.pilot.policy.enabled=true \ ===> this made it work, but it's not in the docs.
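For reference, the same two settings expressed as a values file in YAML form; this is a sketch that assumes the flag paths above map directly onto the chart's values structure once the values. prefix used by istioctl is dropped:
global:
  disablePolicyChecks: false
pilot:
  policy:
    enabled: true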