Traefik - apply middleware to router except a specific path

I use an IP whitelist middleware to restrict access to my web application to certain IPs only, and it works.
But I want to unprotect one specific path to make it public (the path is /api/transaction).
For now, I have this in my docker-compose.yml:
varnish:
  labels:
    - "traefik.http.routers.api_varnish.rule=Host(`api.local`, `api`)"
    - "traefik.http.routers.api_varnish.tls=true"
    - "traefik.http.routers.api_varnish.middlewares=https-redirect@file"
    - "traefik.http.routers.api_varnish.middlewares=https-whitelist@file"
    - "traefik.http.services.api_varnish.loadbalancer.server.port=80"
This part works. Then I added:
    # Open middleware for payment IPN calls
    - "traefik.http.routers.api_varnish_transaction.rule=(Host(`api.local`, `api`) && PathPrefix(`/api/transaction`))"
    - "traefik.http.routers.api_varnish_transaction.tls=true"
    - "traefik.http.routers.api_varnish_transaction.priority=2"
    - "traefik.http.routers.api_varnish_transaction.middlewares=https-redirect@file"
I duplicated the lines but did not apply the https-whitelist middleware to the new router.
It doesn't work, and I can't find the correct syntax, or even be sure whether this is possible; the documentation is pretty sparse.
Any ideas?

Have two routers, one for /api/transaction and another one for /*, and give the first router a higher priority (a larger number), e.g.
# ...
labels:
  - traefik.http.routers.router_1.priority=2
Now requests to /api/transaction will only hit router_1.
https://doc.traefik.io/traefik/routing/routers/#priority
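Applied to the question's setup, the two-router arrangement might look like the sketch below. It reuses the names from the question, lists both middlewares in a single comma-separated label, and simply omits the whitelist from the higher-priority transaction router (Traefik references file-provider middleware as name@file):

```yaml
varnish:
  labels:
    # Catch-all router: whitelist applies everywhere by default
    - "traefik.http.routers.api_varnish.rule=Host(`api.local`, `api`)"
    - "traefik.http.routers.api_varnish.tls=true"
    - "traefik.http.routers.api_varnish.priority=1"
    - "traefik.http.routers.api_varnish.middlewares=https-redirect@file,https-whitelist@file"
    # Higher-priority router for the public path: no whitelist middleware
    - "traefik.http.routers.api_varnish_transaction.rule=Host(`api.local`, `api`) && PathPrefix(`/api/transaction`)"
    - "traefik.http.routers.api_varnish_transaction.tls=true"
    - "traefik.http.routers.api_varnish_transaction.priority=2"
    - "traefik.http.routers.api_varnish_transaction.middlewares=https-redirect@file"
    - "traefik.http.services.api_varnish.loadbalancer.server.port=80"
```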

Related

Combine IpWhitelist and Errors Middlewares. Is it possible?

I would like to do the following: combine these two middlewares, so that if the user is not in the whitelist, an error page is shown.
As far as I understand, these two middlewares don't work together? Or can they be combined somehow? I didn't find anything in the documentation.
ipWhiteList works, but I only get plain text in the response, and I would like to get an error page.
................
entryPoints:
  - websecure
middlewares:
  - d-whitelist
  - service-errorpages
.................
and
middlewares:
  d-whitelist:
    ipWhiteList:
      sourceRange:
        - "95.95.95.95/32"
  service-errorpages:
    errors:
      status:
        - "401"
        - "403"
        - "404"
        - "500"
      service: tools
      query: "/{status}.html"
Thanks!
When defining middleware on a router, the order within the array matters. If you flip service-errorpages and d-whitelist around, you will get a 403 response served by service-errorpages when requesting from a non-whitelisted IP.
If you have a lot of services using the same middlewares (e.g. both the whitelist and the error handler) in the same order, you may benefit from using a chain.
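For example, in the file provider, a chain combining the two might look like this (the chain name error-with-whitelist is made up; the middleware names come from the question):

```yaml
http:
  middlewares:
    error-with-whitelist:
      chain:
        middlewares:
          - service-errorpages   # first, so the whitelist's 403 gets an error page
          - d-whitelist
```

Routers can then reference error-with-whitelist once, instead of repeating both middlewares in the right order everywhere.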

How to validate or filter a wildcard path parameter for HTTP endpoints in Serverless and AWS API Gateway before the request triggers the Lambda function?

I have the following HTTP path, devices/{sn}, in a Serverless/AWS API Gateway API. The wildcard sn is a 15-character [A-Z0-9] pattern.
Today, any string that is not recognized as a valid path is routed to this endpoint: devices/test goes to devices/{sn}, devices/bla goes to devices/{sn}, and so on. All of those strings will query the database and return null, because no such sn exists in the table. I could create a validation step inside the Lambda to avoid the unnecessary database query, but I want to save Lambda resources and would like to validate before the Lambda is called.
This is what I have today for this endpoint:
- http:
    path: devices/{sn}
    method: GET
    private: false
    cors: true
    authorizer: ${file(env.yml):${self:provider.stage}.authorizer}
    request:
      parameters:
        paths:
          sn: true
How can I set up this validation or filter in serverless.yml?
It really seems like it should be straightforward AWS/Serverless functionality.
Say we have the scenario myPath/{id}, where id is an integer (a primary key in a table). If I request myPath/blabla, it still triggers the Lambda and the system spends resources. There should be some kind of up-front validation: trigger the endpoint only if {id} is an integer.
Your issue is very similar to this issue.
Based on that post and my own experience: no, I don't think you can perform this validation at the API Gateway level.
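Since the pattern can't be enforced at the gateway, a cheap fallback is to reject malformed values inside the handler before touching the database. A minimal sketch (the handler body and return values are hypothetical; the event shape is the standard API Gateway proxy integration):

```python
import re

# 15 characters, A-Z and 0-9 only (the pattern from the question)
SN_PATTERN = re.compile(r"^[A-Z0-9]{15}$")

def handler(event, context):
    # API Gateway's proxy integration puts the {sn} path parameter here
    sn = (event.get("pathParameters") or {}).get("sn", "")
    if not SN_PATTERN.fullmatch(sn):
        # Reject early: no database query is made for malformed serials
        return {"statusCode": 400, "body": "invalid serial number"}
    # ... only now query the database for `sn` ...
    return {"statusCode": 200, "body": sn}
```

The regex check costs microseconds, so even though the Lambda still fires, the expensive database round-trip is skipped for junk paths like devices/test.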

google load balancing path matching pattern priority

I want requests for HOST/code4/* to go to backb and everything else to backa.
I defined only one rule while setting up the load balancer: host = *, paths = /code4 and /code4/*, backend = backb.
But when I look in the console, it shows the host matching as in the attached image, with an extra /* rule, because of which I think my request to /code4 goes to backa.
What am I doing wrong?
Why is Google adding a /* route? And even if it adds one, the unmatched route makes no sense, right?

How does HAProxy figure out SSL_FC_SNI for map_dom

I have an HAProxy config using maps.
The config file has the line below:
%[ssl_fc_sni,lower,map_dom(/etc/haproxy/domain2backend.map)]
And in domain2backend.map, I have the entries below:
dp.stg.corp.mydom.com dp_10293
/dp dp_10293
dp.admin.stg.corp.mydom.com dp_10345
Now when I access https://dp.admin.stg.corp.mydom.com/index.html, it directs me to backend dp_10293. However, a simple full-string match with map(/etc/haproxy/domain2backend.map) solves the problem and directs me to the proper backend, dp_10345. The cert I have is a wildcard cert, *.mydom.com.
So how does map_dom compare the domains, and how does it direct a request meant for dp.admin.stg.corp.mydom.com to the backend of dp.stg.corp.mydom.com?
Since I am using map_dom, it splits the domain on dots (.), does token matching, and returns the backend of whichever entry matches first.
Here, dp.admin.stg.corp.mydom.com matches any of:
dp.admin.stg.corp.mydom.com
/dp
/admin
/stg
/corp
/mydom
/com
/dp.admin
/dp.admin.stg
/dp.admin.stg.corp
/dp.admin.stg.corp.mydom
/admin.stg.corp.mydom.com
/stg.corp.mydom.com
/corp.mydom.com
/mydom.com
And in my case, since I had an entry for /dp, it was routing to the dp_10293 backend.
Changing map_dom to map fixes it, as map does a strict full-string comparison.
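The fix in config form, as a sketch, assuming the expression from the question sits in a use_backend line:

```
# Strict full-string lookup (map) instead of domain-token lookup (map_dom);
# the map file path is the one from the question.
use_backend %[ssl_fc_sni,lower,map(/etc/haproxy/domain2backend.map)]
```

With map, dp.admin.stg.corp.mydom.com only matches its own exact entry, so the /dp line can no longer shadow it.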

How can I load balance FastAGI?

I am writing multiple AGIs in Perl that will be called from the Asterisk dialplan. I expect to receive numerous simultaneous calls, so I need a way to load balance them. I have been advised to use FastAGI instead of AGI. The problem is that my AGIs will be distributed over many servers, not just one, and my entry-point Asterisk box needs to dispatch the calls among those servers (where the AGIs reside) based on their availability. So I thought of providing the FastAGI application with multiple IP addresses instead of one. Is that possible?
Any TCP reverse proxy would do the trick: HAProxy is one, and nginx with the TCP (stream) module is another.
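A minimal sketch of such a TCP proxy in HAProxy (server names and addresses here are hypothetical; 4573 is the conventional FastAGI port):

```
# haproxy.cfg - TCP-mode load balancing across FastAGI servers (sketch)
listen fastagi
    bind *:4573
    mode tcp
    balance roundrobin
    server agi1 10.0.1.100:4573 check
    server agi2 10.0.1.101:4573 check
```

The dialplan then points at the proxy, e.g. AGI(agi://proxy-host/script), and the check option takes unreachable FastAGI servers out of rotation.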
A while back, I crafted my own FastAGI proxy using node.js (nodast) to address this very specific problem and a bit more, including the ability to run the FastAGI protocol over SSL and to route requests based on the AGI request location and parameters (such as $dnis, $channel, $language, ...).
Moreover, since the proxy configuration is basically JavaScript, you can load balance in really interesting ways.
A sample config would look as follow:
var config = {
    listen: 9090,
    upstreams: {
        test: 'localhost:4573',
        foobar: 'foobar.com:4573'
    },
    routes: {
        'agi://(.*):([0-9]*)/(.*)': function () {
            if (this.$callerid === 'unknown') {
                return ('agi://foobar/script/' + this.$3);
            } else {
                // pass the caller ID along as a query parameter
                return ('agi://foobar/script/' + this.$3 + '?callerid=' + this.$callerid);
            }
        },
        '.*': function () {
            return ('agi://test/');
        },
        'agi://192.168.129.170:9090/': 'agi://test/'
    }
};
exports.config = config;
I have a large IVR implementation using FastAGI (24 E1s all doing FastAGI calls, peaking at about 80%, so nearly 600 Asterisk channels calling FastAGI). I didn't find an easy way to do load balancing, but in my case there are different FastAGI calls: one at the beginning of the call to validate the user against a database, then a different one to check the user's balance or most recent transactions, and another one to perform a transaction.
So what I did was send all the validation and simple queries to one application on one server, and all the transaction calls to a different application on a different server.
A crude way to do load balancing, if you have a lot of incoming calls on Zaptel/DAHDI channels, is to use different groups for the channels. For example, suppose you have 2 FastAGI servers and 4 E1s receiving calls. You can set up 2 E1s in group g1 and the other 2 E1s in group g2, then declare global variables like this:
[globals]
serverg1=ip_of_server1
serverg2=ip_of_server2
Then on your dialplan you call FastAGI like this:
AGI(agi://${server${CHANNEL(callgroup)}}/some_action)
On channels belonging to group g1, ${CHANNEL(callgroup)} resolves to g1, so ${server${CHANNEL(callgroup)}} becomes ${serverg1}, which resolves to ip_of_server1; on channels belonging to group g2 it becomes ${serverg2}, which resolves to ip_of_server2.
It's not the best solution, because calls usually start filling up one span before the next and so one server will get more work, but it's something.
To get real load balancing I guess we would have to write a FastAGI load balancing gateway, not a bad idea at all...
Mehhh... use the same constructs that would apply to load balancing something like web page requests.
One way is to round robin in DNS. So if you have vru1.example.com 10.0.1.100 and vru2.example.com 10.0.1.101 you put two entries in DNS like...
fastagi.example.com 10.0.1.100
fastagi.example.com 10.0.1.101
... then from the dial plan agi(agi://fastagi.example.com/youagi) should in theory alternate between 10.0.1.100 and 10.0.1.101. And you can add as many hosts as you need.
The other way is a bit more involved to explain here, but proxy tools like HAProxy can route between multiple servers, with the added benefits of being able to "take one out" of the mix for maintenance and of more advanced balancing, such as distributing based on current load.