I am trying to use Envoy in front of my TypeScript React app to use gRPC from client to server. This Envoy proxy sits inside a Docker container within a Kubernetes cluster.
My API gateway proxy is an NGINX proxy that does rate limiting, filtering, authentication (talking to my Auth Service), and so on. I needed to enable TLS on both the NGINX gateway and the gRPC server it's proxying for.
Here is what the error log looks like:
[api-frontend-proxy] [2021-01-06 17:53:41.897][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:215] [C0] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
My envoy.yaml looks like the following:
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 9090
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: api-gateway-proxy
cors:
allow_origin_string_match:
- prefix: "*"
allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
expose_headers: grpc-status,grpc-message
http_filters:
- name: envoy.router
typed_config: {}
tls_context:
common_tls_context:
tls_certificates:
- certificate_chain:
filename: "./etc/ssl/server.crt"
private_key:
filename: "./etc/ssl/server.key"
# validation_context:
# trusted_ca:
# filename: "/etc/ca-crt.pem"
require_client_certificate: false
clusters:
- name: api-gateway-proxy
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
load_assignment:
cluster_name: api-gateway-proxy
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: api-gateway-proxy
port_value: 1449
Also, in case it helps, my NGINX config is here too:
worker_processes auto;
events {}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent"';
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 1449 ssl http2;
ssl_certificate ./ssl/server.crt;
ssl_certificate_key ./ssl/server.key;
location /com.webapp.grpc-service {
grpc_pass grpcs://api-grpc-service:9090;
proxy_buffer_size 512k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 512k;
grpc_set_header Upgrade $http_upgrade;
grpc_set_header Connection "Upgrade";
grpc_set_header Connection keep-alive;
grpc_set_header Host $host:$server_port;
grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
grpc_set_header X-Forwarded-Proto $scheme;
}
}
}
Thanks to everyone in advance, and I'd really appreciate any comments, help, or solutions!
The WRONG_VERSION_NUMBER error typically means one side of a connection is speaking plaintext where the other expects TLS. Since NGINX is listening with TLS on port 1449, Envoy needs to use TLS for the upstream connection as well: add a transport_socket section under the upstream cluster, like this:
clusters:
- name: api-gateway-proxy
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
load_assignment:
cluster_name: api-gateway-proxy
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: api-gateway-proxy
port_value: 1449
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
I want to use Envoy as a reverse proxy that redirects requests from
http://example.com:3443/node-exporter/metrics
to
http://localhost:9100/metrics
That is, I want to redirect to the specific URL /metrics on port 9100.
This is my current envoy_conf.yaml file
listeners:
- name: prom_listener
address:
socket_address : {address: 0.0.0.0, port_value: 3443}
filter_chains:
- name: prom_filter_chain
filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: http_connection_manager
route_config:
virtual_hosts:
- name: prom_local_host
domains: ["*"]
routes:
- name: node-exporter-route
match: {prefix: "/node-exporter/"}
route:
cluster: node-exporter-cluster-server
timeout: 0s
idle_timeout: 0s
http_filters:
- name: envoy.filters.http.router
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
- name: node-exporter-cluster-server
type: static
connect_timeout: 2s
load_assignment:
cluster_name: node-exporter-cluster-server
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 9100
What changes (additions/deletions) should I make to achieve reverse proxying to the specific URL on the localhost port mentioned above?
With your configuration, when you are requesting example.com:3443/node-exporter/metrics, you are actually trying to access 127.0.0.1:9100/node-exporter/metrics, which does not exist.
To access 127.0.0.1:9100/metrics (notice the missing /node-exporter part of the URL), you only have to configure your route to tell Envoy to rewrite the prefix. You should use the prefix_rewrite option to strip the /node-exporter part:
routes:
- name: node-exporter-route
match: {prefix: "/node-exporter/"}
route:
cluster: node-exporter-cluster-server
prefix_rewrite: "/"
timeout: 0s
idle_timeout: 0s
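With that change, a request for http://example.com:3443/node-exporter/metrics is forwarded upstream as GET /metrics on 127.0.0.1:9100. For reference only: if the mapping ever becomes more involved than stripping a fixed prefix, the same route could use regex_rewrite instead; a rough sketch (the pattern here is illustrative, not something your current setup needs):
routes:
- name: node-exporter-route
  match: {prefix: "/node-exporter/"}
  route:
    cluster: node-exporter-cluster-server
    regex_rewrite:
      pattern:
        google_re2: {}
        regex: "^/node-exporter(/.*)$"
      substitution: \1
    timeout: 0s
    idle_timeout: 0s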
I have configured Envoy as a proxy for my Redis database, but I'm struggling with the connection pool option, which is the most important thing I need right now.
The following file shows the configuration:
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 1936
static_resources:
listeners:
- name: redis_listener
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 1999
filter_chains:
- filters:
- name: envoy.filters.network.connection_limit
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.connection_limit.v3.ConnectionLimit
stat_prefix: limited_connections
max_connections: 3
delay: 10s
- name: envoy.filters.network.redis_proxy
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
stat_prefix: redis_proxy
settings:
op_timeout: 5s
enable_redirection: true
prefix_routes:
catch_all_route:
cluster: redis_cluster
clusters:
- name: redis_cluster
type: STRICT_DNS
dns_lookup_family: V4_ONLY
typed_extension_protocol_options:
extensions.upstreams.tcp.generic.v3.GenericConnectionPoolProto:
"#type": type.googleapis.com/envoy.extensions.upstreams.tcp.generic.v3.GenericConnectionPoolProto
load_assignment:
cluster_name: redis_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: my-redis
port_value: 6379
When running the server I got this message:
Didn't find a registered network or http filter or protocol options implementation for name: 'extensions.upstreams.tcp.generic.v3.GenericConnectionPoolProto'
The official documentation doesn't show any options for this; could you please help with this issue?
Thank you,
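For comparison only, and not necessarily a fix for the error above: upstream connection limits toward Redis are commonly expressed with circuit breakers on the cluster rather than a typed connection-pool extension. A minimal sketch, assuming the rest of the config stays as it is (the threshold values are just examples):
clusters:
- name: redis_cluster
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_connections: 3         # cap on connections Envoy opens to my-redis
      max_pending_requests: 100  # commands queued while all connections are busy
  load_assignment:
    cluster_name: redis_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: my-redis
              port_value: 6379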
I have been scouring a lot of Envoy documentation but have not found a satisfactory answer yet. Our requirement is simple: terminate the TLS connection at the Envoy proxy and send the upstream connection (upstream meaning the backend traffic) over an HTTP/unencrypted channel.
My use case is really simple:
The clients want to talk to Envoy over HTTPS
Envoy terminates the TLS connection and connects to the backend using HTTP (our backend pool exposes both HTTP and HTTPS ports, but we specifically want to connect to the HTTP port)
We are using the dynamic forward proxy and a few basic Envoy HTTP filters that do the host rewriting; there is no other fancy logic in Envoy
We would need something like this, but I don't see it out of the box anywhere: https://github.com/envoyproxy/envoy/pull/14634
Current envoy.config
admin:
access_log_path: "/etc/logs/envoy/envoy.log"
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 20000
static_resources:
listeners:
- name: host_manipulation
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 443
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: gateway
domains:
- "*"
require_tls: EXTERNAL_ONLY
routes:
- match:
prefix: "/"
route:
cluster: dynamic_forward_proxy_cluster
host_rewrite_path_regex:
pattern:
google_re2: { }
regex: "^/(.+)/(.+)/.+$"
substitution: \2-\1.mesh
http_filters:
- name: envoy.filters.http.dynamic_forward_proxy
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.dynamic_forward_proxy.v3.FilterConfig
dns_cache_config:
name: dynamic_forward_proxy_cache_config
dns_lookup_family: V4_ONLY
- name: envoy.filters.http.router
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
common_tls_context:
tls_certificates:
- certificate_chain:
filename: "/ca/tls.crt"
private_key:
filename: "/ca/tls.key"
clusters:
- name: dynamic_forward_proxy_cluster
connect_timeout: 1s
lb_policy: CLUSTER_PROVIDED
cluster_type:
name: envoy.clusters.dynamic_forward_proxy
typed_config:
"#type": type.googleapis.com/envoy.extensions.clusters.dynamic_forward_proxy.v3.ClusterConfig
dns_cache_config:
name: dynamic_forward_proxy_cache_config
dns_lookup_family: V4_ONLY
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
common_tls_context:
validation_context:
trust_chain_verification: ACCEPT_UNTRUSTED
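For what it's worth, the plaintext-upstream half of this is just the absence of a TLS transport socket on the cluster; a sketch of the same cluster without upstream TLS follows (it does not address how the dynamic forward proxy picks the HTTP port, which depends on the authority you rewrite to):
clusters:
- name: dynamic_forward_proxy_cluster
  connect_timeout: 1s
  lb_policy: CLUSTER_PROVIDED
  cluster_type:
    name: envoy.clusters.dynamic_forward_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.clusters.dynamic_forward_proxy.v3.ClusterConfig
      dns_cache_config:
        name: dynamic_forward_proxy_cache_config
        dns_lookup_family: V4_ONLY
  # No transport_socket here: without an UpstreamTlsContext, Envoy connects
  # to the resolved backend over plain TCP/HTTP.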
I'm trying to set up an OmniDB server behind an Envoy proxy.
It was working fine with NGINX but I had to change to Envoy for some reason...
I'm using OmniDB v2.17.
The issue is with the WebSocket OmniDB uses. I can connect to OmniDB fine and I can log in, but when I run a SQL query I get the following error:
cannot connect to websocket server with ports 443 (external) and 26000 (internal)
When I inspect in the browser I see the following error in the console:
WebSocket connection to 'wss://my-domain.com/wss' failed: Error during WebSocket handshake: Unexpected response code: 404
A few seconds later I get this error in the console:
WebSocket connection to 'wss://my-domain.com:26000/wss' failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT
EDIT: In the Envoy log I have this:
[2021-02-16T18:52:19.016Z] "GET /wss HTTP/1.1" 404 - 0 77 63 - "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36" "0d9be0f1-9517-43e0-8a66-355804dd23c7" "my-domain.com" "10.0.0.1:8080"
So it seems it tries to forward to "10.0.0.1:8080" instead of port 26000. Is it that the prefix "/" matches before "/wss", so everything goes to port 8080?
Here is my envoy.yaml file:
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 443
filter_chains:
- filter_chain_match:
server_names:
- my-domain.com
filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http_and_wss
upgrade_configs:
- upgrade_type: websocket
access_log:
- name: envoy.access_loggers.file
typed_config:
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/stdout
http_filters:
- name: envoy.filters.http.router
route_config:
name: omnidb
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: "/wss/"
route:
prefix_rewrite: "/"
cluster: omnidb_ws
- match:
prefix: "/ws/"
route:
prefix_rewrite: "/"
cluster: omnidb_ws
- match:
prefix: "/"
route:
cluster: omnidb
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
common_tls_context:
tls_certificates:
- certificate_chain:
filename: /etc/letsencrypt/live/my-domain.com/cert.pem
private_key:
filename: /etc/letsencrypt/live/my-domain.com/privkey.pem
clusters:
- name: omnidb
connect_timeout: 30s
dns_lookup_family: V4_ONLY
load_assignment:
cluster_name: omnidb
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 10.0.0.1
port_value: 8080
- name: omnidb_ws
connect_timeout: 0.25s
dns_lookup_family: V4_ONLY
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
load_assignment:
cluster_name: omnidb_ws
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 10.0.0.1
port_value: 26000
PS: I can't create the tag omnidb, so I used SQL instead; it would be nice to have an omnidb tag.
After the edit above that puts the / route last (otherwise it matches everything), you now need to fix either how you send your request or how the two routes treat trailing slashes.
The main point here:
You have route matches for /wss/ and /ws/, both with trailing slashes
You send a request with /wss with NO trailing slash.
This request matches neither of the first two routes, so it falls through to the / route again.
You can send your request with /wss/ (note the trailing slash) or you can add/modify your routes. This can be done a number of ways; the simplest is probably to just match on /wss and /ws, though if the trailing slash is important to the end application (which it can be in UIs), you can have /wss redirect to /wss/ (sketched below).
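For the redirect option, a rough sketch of the extra route entry (placed before the /wss/ match) could look like this; I haven't verified how OmniDB's WebSocket client handles a redirected handshake, so treat it as illustrative:
routes:
- match:
    path: "/wss"
  redirect:
    path_redirect: "/wss/"
- match:
    prefix: "/wss/"
  route:
    prefix_rewrite: "/"
    cluster: omnidb_ws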
I tested the simpler variant (matching on /wss and /ws) with just slight modifications to your config. Ignore the changes in the filter chains; the only thing that matters is the routes.
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 8443
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http_and_wss
access_log:
- name: envoy.access_loggers.file
typed_config:
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/stdout
http_filters:
- name: envoy.filters.http.router
route_config:
name: omnidb
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: "/wss"
route:
prefix_rewrite: "/"
cluster: omnidb_ws
- match:
prefix: "/ws"
route:
prefix_rewrite: "/"
cluster: omnidb_ws
- match:
prefix: "/"
route:
cluster: omnidb
clusters:
- name: omnidb
connect_timeout: 30s
dns_lookup_family: V4_ONLY
load_assignment:
cluster_name: omnidb
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 10.0.0.1
port_value: 8080
- name: omnidb_ws
connect_timeout: 0.25s
dns_lookup_family: V4_ONLY
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
load_assignment:
cluster_name: omnidb_ws
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 10.0.0.1
port_value: 26000
Then curl localhost:8443/wss or curl localhost:8443/wss/ shows that both reach your 10.0.0.1:26000 address.
[2021-02-22T15:57:22.818Z] "GET /wss/ HTTP/1.1" 503 UF 0 91 3 - "-" "curl/7.68.0" "806c2c28-4ab4-4069-acf1-15b75405d390" "localhost:8443" "10.0.0.1:26000"
[2021-02-22T15:57:27.287Z] "GET /wss HTTP/1.1" 503 UF 0 91 3 - "-" "curl/7.68.0" "55d32a64-c9f7-46cc-8f5e-4a024c0de00d" "localhost:8443" "10.0.0.1:26000"
We are running Envoy v1.15 on a VM which serves both HTTP and HTTPS traffic.
We have two listeners, one for HTTP and one for HTTPS.
1. We are able to get all the application routes, and http://api.example.com/stats/prometheus for Envoy, working on the non-TLS port via the HTTP listener.
2. For the HTTPS listener we have provided the path to the Let's Encrypt certificates and configured it according to the Envoy documentation.
Please find the config below:
admin:
access_log_path: "/tmp/admin_access.log"
address:
socket_address:
address: 127.0.0.1
port_value: 9901
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
codec_type: AUTO
http_protocol_options:
accept_http_10: true
route_config:
name: local_route
virtual_hosts:
- name: local_envoy_admin_service
domains:
- "*"
routes:
- match:
path: "/stats/prometheus"
route:
cluster: envoy_admin_service
- match:
prefix: "/"
route:
cluster: local_service
timeout: 15s
http_filters:
- name: envoy.filters.http.router
- name: listener_https
address:
socket_address:
address: 0.0.0.0
port_value: 443
listener_filters:
- name: envoy.filters.listener.tls_inspector
typed_config: {}
filter_chains:
- filter_chain_match:
server_names:
- api.example.com
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
common_tls_context:
tls_certificates:
- certificate_chain:
filename: "/etc/letsencrypt/live/api.example.com/fullchain.pem"
private_key:
filename: "/etc/letsencrypt/live/api.example.com/privkey.pem"
filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_https
use_remote_address: true
http2_protocol_options:
max_concurrent_streams: 100
access_log:
- name: envoy.access_loggers.file
typed_config:
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: "/var/log/envoy/access.log"
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- api.example.com
routes:
- match:
path: "/stats/prometheus"
route:
cluster: envoy_admin_service
- match:
path: "/"
route:
cluster: local_service
http_filters:
- name: some.customer.filter
- name: envoy.filters.http.router
clusters:
- name: envoy_admin_service
connect_timeout: 0.25s
type: STATIC
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: envoy_admin
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 9901
- name: local_service
connect_timeout: 15s
type: STATIC
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: local
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 8081
When an HTTPS request comes in, Envoy should terminate TLS on the downstream side and forward the unencrypted request to the desired upstream cluster.
Please help us figure out what we are missing in the HTTPS configuration.
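One thing that may be worth double-checking (an observation from the config above, not a confirmed fix): in the HTTPS virtual host the catch-all uses path: "/", which matches only requests for exactly /, while the HTTP listener uses prefix: "/". If every other path should also reach local_service over HTTPS, that route would look like:
routes:
- match:
    path: "/stats/prometheus"
  route:
    cluster: envoy_admin_service
- match:
    prefix: "/"
  route:
    cluster: local_service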