Kubernetes ingress-nginx timeout in 120s - asp.net-core

I have one API that runs for about 4 minutes. I have deployed it on Kubernetes with ingress-nginx. All APIs work normally except the long-running one, which always returns a 504 Gateway Timeout.
I have checked on Stack Overflow and tried some of the solutions, but none of them worked for me.
Any help is welcome.
Kubernetes Ingress (Specific APP) 504 Gateway Time-Out with 60 seconds
I have changed the Ingress config as below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.releaseName }}-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 3600s; grpc_read_timeout 3600s; grpc_send_timeout 3600s;client_body_timeout 3600s;"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "3601"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3601"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3601"
# nginx.org/proxy-connect-timeout: 3600s
# nginx.org/proxy-read-timeout: 3600s
# nginx.org/proxy-send-timeout: 3600s
nginx.ingress.kubernetes.io/proxy-body-size: "100M"
nginx.ingress.kubernetes.io/proxy-next-upstream: "error non_idempotent http_502 http_503 http_504"
nginx.ingress.kubernetes.io/retry-non-idempotent: "true"
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "5"
nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "1"
# nginx.ingress.kubernetes.io/configuration-snippet: |-
# location /HouseKeeping/Health/Healthz {
# deny all;
# return 403;
# }
# location /InternalApi/ {
# deny all;
# return 403;
# }
nginx.ingress.kubernetes.io/server-snippets: |
http {
client_max_body_size 100m;
}
location / {
proxy_set_header Upgrade $http_upgrade;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header Connection $http_connection;
proxy_cache_bypass $http_upgrade;
proxy-connect-timeout: 3602;
proxy-read-timeout: 3602;
proxy-send-timeout: 3602;
}
spec:
rules:
- host: {{ .Values.apiDomain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Values.releaseName }}-web-clusterip-srv
port:
number: 80
I also changed the ConfigMap for the ingress-nginx-controller to add the config below:
apiVersion: v1
data:
allow-snippet-annotations: "true"
proxy-connect-timeout: "300"
proxy-read-timeout: "300"
proxy-send-timeout: "300"
kind: ConfigMap
I also used the command below to dump ingress-nginx's conf, and it seems okay:
kubectl -n ingress-nginx exec ingress-nginx-controller-6cc65c646d-ljmrm cat /etc/nginx/nginx.conf | tee nginx.test-ingress-export.conf
# Custom headers to proxied server
proxy_connect_timeout 3601s;
proxy_send_timeout 3601s;
proxy_read_timeout 3601s;
It still times out at 120 seconds.

Typo
Use the following (note the hyphens in the annotation names) in the ingress config:
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "5"
nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "1"
in place of:
nginx.ingress.kubernetes.io/proxy_next_upstream_timeout: "5"
nginx.ingress.kubernetes.io/proxy_next_upstream_tries: "1"
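For reference, a minimal sketch of the timeout annotations using the hyphenated names that ingress-nginx recognizes (values are plain seconds and are illustrative, not a verified fix for this cluster):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "5"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "1"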
Edit:
Check this comment; it may help to resolve your issue.

Related

Ocelot gateway unable to obtain configuration from identityserver in docker compose

I have an error and I can't understand why it is happening.
I have a microservice architecture running in a Docker network. I'm trying to set up an identity server with the IdentityServer4 framework.
There is a proxy forwarding to an Ocelot gateway. The client is an Angular application.
Login, logout, and retrieving the access token and identity token all succeed, but when I try to set up authentication in Ocelot, I get the following error.
IDX20803: Unable to obtain configuration from: 'http://identityservice:5010/.well-known/openid-configuration'.
gateway_1 | System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'http://identityservice:5010/.well-known/openid-configuration'.
gateway_1 | ---> System.IO.IOException: IDX20804: Unable to retrieve document from: 'http://localhost/auth/.well-known/openid-configuration/jwks'.
gateway_1 | ---> System.Net.Http.HttpRequestException: Cannot assign requested address
The docker-compose file is set up this way:
version: '3.0'
services:
pricecalendarservice:
build:
context: ./PriceCalendarService
environment:
- ASPNETCORE_URLS=http://+:5002
- RedisConnection=redis
gateway:
build:
context: ./Gateway/
environment:
- ASPNETCORE_URLS=http://+:5000
- ID_URL=http://identityservice
frontend:
build:
context: ./SPA
dockerfile: staging.dockerfile
itemmanagerservice:
build:
./ItemManagerService
environment:
- ASPNETCORE_URLS=http://+:5003
- IdentityUrl=http://identityservice
identityservice:
build:
context: ./IdentityServer/IdentityServer
environment:
- DEV_URL=http://localhost
- ASPNETCORE_ENVIRONMENT=Developmnet
- ASPNETCORE_URLS=http://+:5010
- IDENTITY_ISSUER=http://localhost/auth
- RedisConnection=redis
ports:
- 5010:5010
proxy:
build:
context: ./proxy
ports:
- 80:80
redis:
image: redis
ports:
- 6379:6379
The IdentityServer is configured in the following way:
string redisConnectionString = Environment.GetEnvironmentVariable("RedisConnection",
EnvironmentVariableTarget.Process);
string prodEnv = Environment.GetEnvironmentVariable("PROD_URL");
string devEnv = Environment.GetEnvironmentVariable("DEV_URL");
string env = Environment.GetEnvironmentVariable("ASPNETCORE_URLS");
string issuer = Environment.GetEnvironmentVariable("IDENTITY_ISSUER");
var redis = ConnectionMultiplexer.Connect( redisConnectionString + ":6379");
services.AddDataProtection()
.PersistKeysToStackExchangeRedis( redis , "DataProtection-Keys")
.SetApplicationName("product");
services.AddCors(o => o.AddPolicy("MyPolicy", builder =>
{
builder
.WithOrigins("https:localhost:4200")
.AllowAnyMethod()
.AllowAnyHeader();
}));
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});
var config = new Config();
config.setEnvironemnt(devEnv);
services.AddIdentityServer(options => {
options.PublicOrigin = issuer;
})
.AddDeveloperSigningCredential()
.AddInMemoryIdentityResources(config.GetIdentityResources())
.AddInMemoryApiResources(config.GetApis())
.AddInMemoryClients(config.GetClients())
.AddTestUsers(config.GetUsers());
NB: the issuer is set to "http://localhost/auth".
The Nginx proxy server is set up with the following settings:
server {
listen 80;
location / {
proxy_pass http://frontend;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location /api/hub {
proxy_pass http://gateway:5000;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /api {
proxy_pass http://gateway:5000;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Upgrade $http_upgrade;
proxy_cache_bypass $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /auth {
proxy_pass http://gateway:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
The gateway configuration is as follows, following the Ocelot Documentation:
var authenticationProviderKey = "TestKey";
s.AddAuthentication()
.AddIdentityServerAuthentication(authenticationProviderKey, x =>
{
x.Authority = "http://identityservice:5010";
x.RequireHttpsMetadata=false;
});
/*
options.TokenValidationParameters = new
Microsoft.IdentityModel.Tokens.TokenValidationParameters()
{
ValidAudiences = new[] {"item"}
};
*/
s.AddOcelot();
s.AddSwaggerGen(swagger =>
{
swagger.SwaggerDoc("v1", new OpenApiInfo { Title = "PriceCalendarService" });
});
It seems that the gateway, which is running inside the Docker network, can't reach the identity server. I have tried both the URL that the Angular app calls, which is
"http://localhost/auth"
and also the name of the service running in Docker, in multiple ways:
"http://identityservice:5010"
"http://identityservice"
But somehow the gateway can't reach the identity server to load the discovery document.
Can anyone point me in a direction on how to get this right?
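For reference, a quick way to check which addresses are actually reachable from inside the gateway container (a sketch, assuming the compose service names above and that curl is available in the gateway image):
# the compose service name should resolve on the shared compose network
docker-compose exec gateway curl -v http://identityservice:5010/.well-known/openid-configuration
# localhost inside the gateway container is the gateway itself, not the proxy
docker-compose exec gateway curl -v http://localhost/auth/.well-known/openid-configuration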

Docker-compose: Nginx proxy_pass my api

I am trying to proxy my API to my client with nginx; everything is dockerized.
I have my client container, which is a simple Angular 2 client hosted with nginx, linked to my API, and my API container linked to my Mongo container.
The problem is that localhost:80/api returns a 404 every time.
I must be missing something....
Here is my nginx.conf
server {
listen 80;
# location /api/ {
# proxy_pass http://api:3000/;
# proxy_redirect http://api:3000/ http://localhost:80/api/;
# proxy_set_header Host $host;
# }
# location ~ /api/(?<section>.+) {
# proxy_pass http://api:3000/api/$section;
# proxy_set_header Host $host;
# }
location /api {
proxy_pass http://api:3000/api/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
Here is my docker-compose
services:
mongo:
image: mongo
ports:
- "27017:27017"
api:
build: ./server
ports:
- "3000"
volumes:
- .:/server
depends_on:
- mongo
web:
build: ./client
ports:
- "80:80"
volumes:
- .:/client
depends_on:
- api
I worked around it by using a Node.js reverse proxy instead of nginx.
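For what it's worth, the URI part of proxy_pass is a common source of 404s in setups like this; a hedged sketch of the general replacement rule (not necessarily the cause here):
# with a URI in proxy_pass, the part of the request matching the location
# prefix is replaced by that URI
location /api {
    proxy_pass http://api:3000/api/;
    # /api/users is forwarded upstream as /api//users (double slash)
}
location /api/ {
    proxy_pass http://api:3000/api/;
    # /api/users is forwarded upstream as /api/users
}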

Elastic Beanstalk Returning 503 Errors

My Elastic Beanstalk application is returning 503 "server at capacity" errors. I know this happens when the application can't be reached, but the application is stable everywhere else and I have tested it.
I believe that the issue is with my nginx.conf override:
"/opt/elasticbeanstalk/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf" :
mode: "000664"
owner: root
group: root
encoding: plain
content: |
#WE COME FROM env.config!
upstream nodejs {
server 127.0.0.1:8081;
keepalive 256;
}
server {
listen 80;
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
access_log /var/log/nginx/access.log main;
location / {
root /var/app/current/public/dist;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
gzip on;
gzip_comp_level 4;
gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
container_commands:
01_replace_nginx_eb_conf:
command: "mv -f '/opt/elasticbeanstalk/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf' '/tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf'"
ignoreErrors: false
I'd like to be able to serve the app's 'dist' folder, which contains a React web application, through Elastic Beanstalk.
I'm not very familiar with DevOps or nginx. Are there any glaring issues here that would cause my app to always return 503?
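One thing that stands out in the override above (a hedged observation, not necessarily the cause of the 503s): the location / block sets a static root and proxy headers but never issues proxy_pass, so the nodejs upstream declared at the top is never used. A minimal sketch of a location that actually forwards to that upstream (directives copied from the override; nothing else here is verified against this app):
location / {
    proxy_pass http://nodejs;   # forward to the upstream defined above
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The dist folder could then be served from its own location block (or left to the Node app) rather than from a root directive inside the proxied location.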

nginx load balancing - do I have to have multiple copies of my API?

I want to load balance my Parse API using NGINX.
My API currently runs on nginx on one server. If I want to load balance it, do I have to host my API on every host?
I want it to look like this:
 ----------                              /----- [ api-0.myhost.com ]
| client | --------> [ api.myhost.com ] ------- [ api-1.myhost.com ]
 ----------                              \----- [ api-2.myhost.com ]
In this case, do I have to install nginx and deploy my API to every api-X.myhost.com?
Or do I just deploy my API on api.myhost.com and only install nginx on the api-X.myhost.com hosts?
You just install nginx on api.myhost.com, then configure nginx for load balancing as below:
upstream api-app {
least_conn;
server api-0.myhost.com:port weight=1 max_fails=1;
server api-2.myhost.com:port weight=1 max_fails=1;
server api-3.myhost.com:port weight=1 max_fails=1;
}
server {
listen 80;
listen 443 ssl;
server_name api.myhost.com;
ssl_certificate /etc/ssl/certs/api_ssl-bundle.crt;
ssl_certificate_key /etc/ssl/private/api_com.key;
client_max_body_size 2000M;
large_client_header_buffers 32 128k;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffers 64 128k;
proxy_buffer_size 256k;
proxy_pass http://api-app;
proxy_connect_timeout 1200;
proxy_send_timeout 1200;
proxy_read_timeout 1200;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Here, port is the port that your API app listens on, on its host.
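A note on the server parameters used above: max_fails works together with fail_timeout (which defaults to 10s). A hedged sketch of the same upstream with the failure window spelled out (the port and values are illustrative):
upstream api-app {
    least_conn;
    # a server that fails 3 times within 30s is taken out of rotation for 30s
    server api-0.myhost.com:8080 weight=1 max_fails=3 fail_timeout=30s;
    server api-2.myhost.com:8080 weight=1 max_fails=3 fail_timeout=30s;
    server api-3.myhost.com:8080 weight=1 max_fails=3 fail_timeout=30s;
}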

Wrong IP-Address with nginx + Unicorn + rails

I check the IP address in the controller with
request.env['REMOTE_ADDR']
This works fine in my test environment.
But on the production server with nginx + unicorn I always get 127.0.0.1.
This is my nginx config for the site:
upstream unicorn {
server unix:/tmp/unicorn.urlshorter.sock fail_timeout=0;
}
server {
listen 80 default deferred;
# server_name example.com;
root /home/deployer/apps/urlshorter/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
I had trouble with this too; I found this question, but the other answer didn't help me.
I looked at Rails 3.2.8's implementation of Rack::Request#ip to see how it decides what to return. To get it to use an address passed via the environment without filtering out addresses from my local network (it tries to filter out intermediate proxies, which is not what I wanted), I had to set HTTP_CLIENT_IP from my nginx proxy configuration block in addition to what you've got above (X-Forwarded-For has to be there too for this to work!):
proxy_set_header CLIENT_IP $remote_addr;
If you use request.remote_addr you'll get the IP of your Nginx proxy.
To get the real IP address of your user, you can use request.remote_ip.
According to Rails' source code, it checks various HTTP headers to give you the most relevant one: in Rails 3.2 or Rails 4.0.0.beta1.
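A minimal controller sketch, assuming the nginx block above keeps setting X-Forwarded-For (the controller and action names are illustrative):
class PagesController < ApplicationController
  def show
    # remote_ip walks the forwarded headers (Client-Ip / X-Forwarded-For);
    # REMOTE_ADDR is the raw socket peer, i.e. the nginx proxy itself
    client_ip   = request.remote_ip
    socket_peer = request.env['REMOTE_ADDR']
    logger.info "client ip: #{client_ip} (socket peer: #{socket_peer})"
  end
end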
The answer is in your config file :) The following should do what you want:
real_ip = request.headers["X-Real-IP"]
more here: http://api.rubyonrails.org/classes/ActionDispatch/Request.html#method-i-headers
UPDATE: The proper answer is here in another Q:
https://stackoverflow.com/a/4465588
or in this thread:
https://stackoverflow.com/a/15883610
spoiler:
use request.remote_ip
For ELB - nginx - rails you want to follow this guide:
http://engineering.blopboard.com/resolving-real-client-ip-with-amazon-elb-nginx-and-php-fpm
See:
server {
listen 443 ssl spdy proxy_protocol;
set_real_ip_from 10.0.0.0/8;
real_ip_header proxy_protocol;
location /xxx {
proxy_http_version 1.1;
proxy_pass <api-endpoint>;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-By $server_addr:$server_port;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header CLIENT_IP $remote_addr;
proxy_pass_request_headers on;
}
...
The proxy_set_header CLIENT_IP $remote_addr; approach didn't work for me. Here's what did.
I found the solution after reviewing the ActionDispatch remote_ip.rb source. Now I get the proper IP in my Devise/Warden processes, as well as anywhere else I look at request.remote_ip.
My config...
Ruby 2.2.1 - Rails 4.2.1 - NGINX v1.8.0 - Unicorn v4.9.0 - Devise v3.4.1
nginx.conf
HTTP_CLIENT_IP vs CLIENT_IP
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header HTTP_CLIENT_IP $remote_addr; <-----
proxy_redirect off;
proxy_pass http://unicorn;
}
Source actionpack-4.2.1/lib/action_dispatch/middleware/remote_ip.rb
Line 114:
client_ips = ips_from('HTTP_CLIENT_IP').reverse
Line 126:
"HTTP_CLIENT_IP=#{#env['HTTP_CLIENT_IP'].inspect} " +