I'm trying to implement a basic redirection in Traefik's file provider. I would
like the following:
http://example.com/foobar -> https://foobar.example.com
http://www.example.com/foobar -> https://foobar.example.com
https://example.com/foobar -> https://foobar.example.com
https://www.example.com/foobar -> https://foobar.example.com
Here's my attempt:
http:
  middlewares:
    redirect-foobar:
      redirectRegex:
        permanent: true
        regex: "^https?://(www\\.)?example\\.com/foobar(/?.*)"
        replacement: "https://foobar.example.com${2}"
  routers:
    catch-foobar:
      middlewares:
        - redirect-foobar
      rule: Host(`example.com`) || Host(`www.example.com`)
      service: noop@internal
But when I try it out, here's what I get:
$ curl -I http://www.example.com/foobar
HTTP/1.1 308 Permanent Redirect
Location: https://foobar.example.com
Date: Wed, 02 Mar 2022 02:12:48 GMT
Content-Length: 18
Content-Type: text/plain; charset=utf-8
# That's good =)
$ curl -I https://www.example.com/foobar
HTTP/2 404
content-type: text/plain; charset=utf-8
x-content-type-options: nosniff
content-length: 19
date: Wed, 02 Mar 2022 02:05:22 GMT
# Does that mean the request didn't even make it to the catch-foobar router?
What am I doing wrong? Any help would be greatly appreciated.
Thanks,
C
edit: fixed the regex, but the https case is still problematic
The folks from the Traefik community forum helped me. It turned out the router did not have a tls configuration. The correct configuration looks like this:
http:
  middlewares:
    redirect-foobar:
      redirectRegex:
        permanent: true
        regex: "^https?://(www\\.)?example\\.com/foobar(/?.*)"
        replacement: "https://foobar.example.com${2}"
  routers:
    catch-foobar:
      middlewares:
        - redirect-foobar
      rule: Host(`example.com`) || Host(`www.example.com`)
      service: noop@internal
      tls: # here
        certResolver: default
where default is a certificate resolver defined in my traefik.yml.
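With tls set on the router, the HTTPS entrypoint now matches catch-foobar, so re-running the check that previously returned a 404 should now give the same 308 redirect as the plain-HTTP case:
# expected after the fix: a 308 with Location: https://foobar.example.com
curl -I https://www.example.com/foobar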
Related
As discussed in my other question, there is no support for WebSocket authentication in Knox, but as a temporary solution we could handle authentication in our backend service. Our tests have shown, however, that Knox does not pass the Authorization header to the backend.
[client]$ curl -i -u '<user>:<password>' https://knox-server/gateway/default/myservice/ping
# 8090 is our backend port
[knox-server]$ ngrep -W byline port 8090
interface: eth0
filter: ( port 8090 ) and ((ip || ip6) || (vlan && (ip || ip6)))
#
T <knox-server>:59118 -> <myservice>:8090 [AP]
GET /ping?doAs=<user> HTTP/1.1.
X-Forwarded-For: <client>.
X-Forwarded-Proto: https.
X-Forwarded-Port: 443.
X-Forwarded-Host: <knox-server>.
X-Forwarded-Server: <knox-server>.
X-Forwarded-Context: /gateway/default.
User-Agent: curl/7.54.0.
Accept: */*.
Host: <myservice>:8090.
Connection: Keep-Alive.
Accept-Encoding: gzip,deflate.
.
#
T <myservice>:8090 -> <knox-server>:59118 [AP]
HTTP/1.1 200 OK.
Date: Sat, 14 Oct 2017 14:27:58 GMT.
X-Application-Context: myservice:prod:8090.
Content-Type: text/plain;charset=utf-8.
Content-Length: 4.
.
PONG
How should I configure Knox (0.12.0 from HDP 2.6.2) to make it pass the Authorization header to the backend for WebSocket connections?
While writing this question I realised that ticket KNOX-895, resolved in Knox 0.14.0, addresses passing cookies and headers to a backend service.
[EDIT]
I cloned the Knox git repo (commit 92b1505a), which includes KNOX-895 (2d236e78), and ran it locally with a websocket service added to the sandbox topology.
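For reference, roughly the steps to build that revision locally (the Maven invocation is an assumption from memory; check Knox's build documentation for the exact profile):
git clone https://github.com/apache/knox.git && cd knox
git checkout 92b1505a
# assumption: a plain Maven build is enough for a local test gateway
mvn -q -DskipTests clean package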
[tulinski]$ wscat -n --auth 'user:password' -c wss://localhost:8443/gateway/sandbox/echows
[tulinski]$ sudo ngrep -W byline host echo.websocket.org
#
T 192.168.0.16:59952 -> 174.129.224.73:80 [AP]
GET / HTTP/1.1.
Host: echo.websocket.org.
Upgrade: websocket.
Connection: Upgrade.
Sec-WebSocket-Key: Z4Qa9Dxwr6Qvq2QAicsT5Q==.
Sec-WebSocket-Version: 13.
Pragma: no-cache.
Cache-Control: no-cache.
Authorization: Basic dXNlcjpwYXNzd29yZA==.
.
##
T 174.129.224.73:80 -> 192.168.0.16:59952 [AP]
HTTP/1.1 101 Web Socket Protocol Handshake.
Connection: Upgrade.
Date: Mon, 16 Oct 2017 14:23:49 GMT.
Sec-WebSocket-Accept: meply+6cIyjbH+Vk2OsAqKJDWic=.
Server: Kaazing Gateway.
Upgrade: websocket.
.
The Authorization header is passed to the backend service.
Cloudflare suddenly returns a 302 redirect to the origin domain, which breaks our AJAX calls, although the CORS headers are still in place.
curl -I https://cloudflare-domain.com/channel/4d90dd64aa4a4fd8a3cad8862fd88c67/?limit=12
HTTP/1.1 302 Found
Date: Fri, 29 Sep 2017 15:38:22 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
Set-Cookie: __cfduid=dc5840cbd96478011d1bb040fcb6fc7e81506699502; expires=Sat, 29-Sep-18 15:38:22 GMT; path=/; domain=.cloudflare-domain.com; HttpOnly
Location: https://origin-domain.com/channel/4d90dd64aa4a4fd8a3cad8862fd88c67/?limit=12
CF-Cache-Status: HIT
Expires: Fri, 29 Sep 2017 17:38:22 GMT
Cache-Control: public, max-age=7200
Server: cloudflare-nginx
CF-RAY: 3a600770fec427aa-FRA
We haven't changed any settings, either in Cloudflare or on the origin server.
Any ideas why this could suddenly happen?
Found the problem: a change had been made on the origin server.
We had put in a redirect to enforce HTTPS, but Cloudflare was connecting to the origin over HTTP, so the redirect was being returned by the origin server.
Solution: In the Cloudflare settings, under Crypto, select SSL Full (strict).
Update: The setting has moved; go to the dashboard search, type "SSL/TLS", and change the encryption mode to Full (strict).
(Screenshot: Cloudflare SSL/TLS settings)
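If you prefer to change it via the API rather than the dashboard, the same setting can be flipped through Cloudflare's zone settings endpoint (the zone id and token below are placeholders):
# set the zone's SSL mode to Full (strict); <zone_id> and <api_token> are placeholders
curl -X PATCH "https://api.cloudflare.com/client/v4/zones/<zone_id>/settings/ssl" \
  -H "Authorization: Bearer <api_token>" \
  -H "Content-Type: application/json" \
  --data '{"value":"strict"}'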
I'm new to REST and API gateways.
I have installed Kong with Cassandra on a dev machine and I'm trying to add my API (a Spring Boot application), but even after reading the documentation I'm struggling to make it work.
My API:
http://ff-nginxdev-01:9003/fund-information-services/first-information/fund/{fundId}
when I run
http http://ff-nginxdev-01:9003/fund-information-services/first-information/fund/630
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Fri, 25 Nov 2016 14:47:30 GMT
Transfer-Encoding: chunked
X-Application-Context: application:9003
{
    "assetSplit": {
        "allocationHistories": [
            {
                "key": {
                    "asset": {
                        "description": "Other Far East",
                        "id": 18
                    },
                    "assetSplit": "09",
                    "effectiveDate": 1430348400000
                },
                ......
                ......
Everything looks fine and I'm able to retrieve the JSON response.
Adding the API in Kong:
http POST http://ff-nginxdev-01:8001/apis/ name=fund-information upstream_url=http://ff-nginxdev-01:9003/ request_path=/fund-information-services
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 25 Nov 2016 14:39:45 GMT
Server: kong/0.9.4
Transfer-Encoding: chunked
{
    "created_at": 1480084785000,
    "id": "fdcc76d7-e2a2-4816-8f27-d506fdd32c0a",
    "name": "fund-information",
    "preserve_host": false,
    "request_path": "/fund-information-services",
    "strip_request_path": false,
    "upstream_url": "http://ff-nginxdev-01:9003/"
}
Testing Kong API Gateway:
http http://ff-nginxdev-01:8000/fund-information-services/first-information/fund/630
HTTP/1.1 502 Bad Gateway
Connection: keep-alive
Content-Type: text/plain; charset=UTF-8
Date: Fri, 25 Nov 2016 14:44:33 GMT
Server: kong/0.9.4
Transfer-Encoding: chunked
An invalid response was received from the upstream server
I know I'm missing something, but it is not clear to me what.
By default, the Dynamic SSL plugin binds a specific SSL certificate to the request_host of an API.
You only defined request_path in your API, which means the SSL plugin does not apply when you use request_path.
You can read more about why it works this way in this issue.
To make your request work, I think you should read about how to proxy an API in the Proxy Reference.
Here is my solution for your problem:
Method 1: Change "strip_request_path" to true
"strip_request_path": true
This method assumes you didn't specify request_host in the first place.
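A sketch of applying that change to the API you already created, via Kong's admin API (same host and API name as above; adjust if yours differ):
# flip strip_request_path on the existing API object
curl -i -X PATCH http://ff-nginxdev-01:8001/apis/fund-information \
  --data 'strip_request_path=true'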
Method 2: Using request_host instead
"request_host" : "your_api"
Then in your request, you should send this request_host as the Host header.
Example request:
curl -i --url http://ff-nginxdev-01:8000/fund-information-services/first-information/fund/630 --header 'Host: your_api'
and it will work.
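For completeness, a sketch of registering the API keyed by Host header instead of by path (the name, upstream and host values are just the placeholders used above):
curl -i -X POST http://ff-nginxdev-01:8001/apis/ \
  --data 'name=fund-information' \
  --data 'upstream_url=http://ff-nginxdev-01:9003/' \
  --data 'request_host=your_api'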
You missed the API name known by Kong ("fund-information"), so if you register it like this:
http POST http://ff-nginxdev-01:8001/apis/ name=fund-information upstream_url=http://ff-nginxdev-01:9003 request_path=/fund-information-services
then your test URL is:
http http://ff-nginxdev-01:8000/fund-information/first-information/fund/630
I'm puzzled why this frontend doesn't seem to connect me to the backend through my HAProxy:
defaults
    mode http
    log global
    option httplog
    option dontlognull
    source 0.0.0.0 usesrc clientip # transparent proxy mode

frontend fe-kb
    bind :8081 ssl crt /etc/haproxy/ssl/ssl-key.pem
    default_backend be-kb

backend be-kb
    server afnB afnB:1080 check
I get this in the HAProxy HTTP log:
Jan 9 17:25:04 localhost haproxy[17266]: <ip redacted>:51396 [09/Jan/2016:17:24:44.544] fe-kb~ be-kb/afnB 31/0/-1/-1/20036 503 212 - - cC-- 0/0/0/0/3 0/0 "GET / HTTP/1.1"
I can connect fine from the HAProxy host's command line (SELinux is disabled):
[root@hapA ~]# telnet afnB 1080
Trying 10.45.69.14...
Connected to afnB.
Escape character is '^]'.
GET / HTTP/1.0
HTTP/1.1 200 OK
Server: nginx/1.9.9
Date: Sat, 09 Jan 2016 16:40:44 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 09 Dec 2015 15:05:19 GMT
Connection: close
ETag: "5668432f-264"
Accept-Ranges: bytes
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Figured it out: the default route wasn't through my HAProxy, which matters because I was trying to do source-IP transparency ;)
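For anyone hitting the same thing: with usesrc clientip the backend sees the real client IP as the source address, so its replies bypass HAProxy unless they are routed back through it. A minimal sketch of the fix on the backend host (the gateway address is a placeholder, and a full transparent-proxy setup usually needs more than this):
# on the backend (afnB): send return traffic back via the HAProxy box
ip route replace default via <haproxy-ip>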
Thanks for watching!
I'm trying to give my webshop a boost with Varnish.
Varnish is set up on port 80; the backend is Apache2 on 127.0.0.1.
Apache Benchmark gives awful results, like 1-2 requests/sec!
In the very first displayed file, header.tpl, I have:
<?php
cache_control( "public, s-max-age=6000");
expires( to_gmt( time() + 6000 ) );
?>
Below are the response headers:
HTTP/1.1 200 OK
Content-Length: 151613
Expires: Tue, 26 Feb 2013 20:04:07
Cache-Control: public, s-max-age=6000
Pragma: no-cache
Set-Cookie: PHPSESSID=i9h5ldj8k4fking69d03jr5244; path=/, language=en; expires=Thu, 28-Mar-2013 18:24:06 GMT; path=/; domain=www.domain.com, currency=CHF; expires=Thu, 28-Mar-2013 18:24:06 GMT; path=/; domain=www.domain.com
Content-Type: text/html; charset=utf-8
Accept-Ranges: bytes
Date: Tue, 26 Feb 2013 18:24:07 GMT
X-Varnish: 186646239
Age: 0
Via: 1.1 varnish
Connection: close
X-Cache: MISS
I must be missing something obvious, but to me Varnish just doesn't cache; what am I doing wrong?
PHP most likely has session.cache_limiter set to nocache (the default).
This sends a Pragma: no-cache header (and, as far as I understand, an Expires header set to the current time) to Varnish, thus disabling caching.
Varnish will ignore « Pragma: no-cache » by default, unless it is explicitly instructed to handle the directive (https://varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html).
Your Cache-Control seems to be configured properly and the response should be cached for 6000 seconds.
The next thing you need to take into consideration is the cookies. Looking at your headers, you have a PHP session cookie:
Set-Cookie: PHPSESSID=i9h5ldj8k4fking69d03jr5244;
Varnish will not cache the response as long as cookies are present, unless you remove them from the request in your VCL file. For example:
sub vcl_recv {
    set req.http.Cookie = regsuball(req.http.Cookie, "PHPSESSID=[^;]+(; )?", "");
}
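Once the session cookie is stripped and the Pragma handling is sorted, a quick way to confirm Varnish is actually caching is to request the same page twice and compare the Age and X-Cache headers (the URL is a placeholder; X-Cache is the custom header already visible in your output):
# the first request is typically a MISS with Age: 0; the second should be a HIT with a growing Age
curl -sI http://www.domain.com/ | grep -iE '^(age|x-cache|x-varnish):'
curl -sI http://www.domain.com/ | grep -iE '^(age|x-cache|x-varnish):'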