RxHttpClient proxy configuration in Micronaut - Kotlin

I am currently facing an issue when configuring a proxy for RxHttpClient in Micronaut. What I want to do is send a POST request to FCM through its API, which sits behind a corporate proxy. I have already tried the solutions in the documentation, but nothing seems to work.
httpClient.exchange(POST("/fcm/send", param)
        .contentType(APPLICATION_JSON)
        .header("Authorization", "key=$serverKey"))
    .subscribe { return@subscribe }
In the above code snippet, httpClient is an RxHttpClient. This works without a proxy setting. Is there any workaround for configuring a proxy for an RxHttpClient?
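For completeness, this is the programmatic route I have been considering as a workaround. It is only a sketch: I am assuming the proxy setters on HttpClientConfiguration and the DefaultHttpClient(URL, configuration) constructor behave the way their names suggest, so please correct me if that is wrong.

import io.micronaut.http.client.DefaultHttpClient
import io.micronaut.http.client.DefaultHttpClientConfiguration
import java.net.InetSocketAddress
import java.net.Proxy
import java.net.URL

// Sketch only: carry the corporate proxy settings in a client configuration
// instead of relying on YAML, and build the client from it by hand.
val configuration = DefaultHttpClientConfiguration().apply {
    setProxyType(Proxy.Type.HTTP)
    setProxyAddress(InetSocketAddress("some.proxy.corp.com", 8000))
}
val httpClient = DefaultHttpClient(URL("https://fcm.googleapis.com"), configuration)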
Update
micronaut:
  application:
    name: admin-api
  http:
    services:
      cloudMessagingService:
        proxy-type: http
        proxy-address: some.proxy.corp.com:8000
        urls:
          - https://fcm.googleapis.com/fcm/send
Is this way of adding the proxy configuration correct?
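If it is, my understanding from the Micronaut documentation on manually configured HTTP services is that these proxy settings apply only to clients bound to that service id, so the client would be injected roughly like the sketch below (the class and import details are my assumptions, not taken from a working setup):

import io.micronaut.http.HttpRequest.POST
import io.micronaut.http.MediaType.APPLICATION_JSON
import io.micronaut.http.client.RxHttpClient
import io.micronaut.http.client.annotation.Client
import javax.inject.Singleton

@Singleton
class CloudMessagingSender(
    // "cloudMessagingService" is the service id from micronaut.http.services above;
    // the proxy-type/proxy-address configured there should apply to this client only
    @Client("cloudMessagingService") private val httpClient: RxHttpClient
) {
    fun send(param: Map<String, Any>, serverKey: String) {
        httpClient.exchange(
            POST("/fcm/send", param)
                .contentType(APPLICATION_JSON)
                .header("Authorization", "key=$serverKey")
        ).subscribe { /* handle the FCM response here */ }
    }
}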

Related

How to pass API gateway context variable to proxy lambda

I am looking for guidance on how to pass an API Gateway context variable (for example: $context.requestTime) to the proxy lambda.
Basically, I want to send the time at which the request reached API Gateway to my Lambda proxy method as a request header.
Note that I am using serverless framework (serverless.com)
I tried the following, but it does not work:
events:
  - http:
      path: /{proxy+}
      method: ANY
      request:
        parameters:
          paths:
            proxy: true
        headers:
          requestTime: $context.requestTime

Is it possible to implement OIDC in front of Nginx Stream with OpenResty?

I would like to know if it is possible to use the OpenResty OIDC module as an authentication proxy within an NGINX stream configuration.
(I don't have access to NGINX Plus, unfortunately.)
I have used NGINX with stream configurations in the past to proxy access to upstream TCP resources, and it works like a charm.
I am currently looking at implementing an OIDC proxy in front of various resources, both static HTML and dynamic apps, because we have an in-house OIDC IDAM provider. I came across OpenResty, and in particular the lua-resty-openidc module, and thanks to some wonderful guides (https://medium.com/@technospace/nginx-as-an-openid-connect-rp-with-wso2-identity-server-part-1-b9a63f9bef0a, https://developers.redhat.com/blog/2018/10/08/configuring-nginx-keycloak-oauth-oidc/), I got this working in no time for static pages, using an http server nginx config.
I can't get it working for stream configurations though. It looks like the stream module is enabled as standard for OpenResty, but from digging around I don't think the 'access_by_lua_block' function is allowed in the stream context.
This may simply not be supported, which is fair enough when building off other people's great work, but I wondered if there was any intention to include support within OpenResty / lua-resty-openidc in the future, or whether anyone knew of a good workaround.
This was my naive attempt to get it working, but the server complains about the 'access_by_lua_block' directive at run time:
2019/08/22 08:20:44 [emerg] 1#1: "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
nginx: [emerg] "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
events {
    worker_connections 1024;
}

stream {
    lua_package_path "/usr/local/openresty/?.lua;;";
    resolver 168.63.129.16;

    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    lua_ssl_verify_depth 5;

    # cache for discovery metadata documents
    lua_shared_dict discovery 1m;
    # cache for JWKs
    lua_shared_dict jwks 1m;

    upstream geyser {
        server geyser-api.com:3838;
    }

    server {
        listen 443 ssl;
        ssl_certificate /usr/local/openresty/nginx/ssl/nginx.crt;
        ssl_certificate_key /usr/local/openresty/nginx/ssl/nginx.key;

        access_by_lua_block {
            local opts = {
                redirect_uri_path = "/redirect_uri",
                discovery = "https://oidc.provider/discovery",
                client_id = "XXXXXXXXXXX",
                client_secret = "XXXXXXXXXXX",
                ssl_verify = "no",
                scope = "openid",
                redirect_uri_scheme = "https",
            }
            local res, err = require("resty.openidc").authenticate(opts)
            if err then
                ngx.status = 500
                ngx.say(err)
                ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
            end
            ngx.req.set_header("X-USER", res.id_token.sub)
        }

        proxy_pass geyser;
    }
}
Anyone have any advice?
I don't think that's possible.
However, to be sure, you should try creating an issue on the official GitHub:
https://github.com/zmartzone/lua-resty-openidc/issues
They helped me solve a similar issue before.

How to set custom headers with Keycloak Gatekeeper?

I have Keycloak and Keycloak-Gatekeeper set up in OpenShift and it's acting as a proxy for an application that is running.
The application that Keycloak Gatekeeper is proxying requires a custom cookie to be set, so I figured I could use Gatekeeper's custom header configuration to set it; however, I'm running into issues.
Configuration looks like:
discovery-url: https://keycloak-url.com/auth/realms/MyRealm
client-id: MyClient
client-secret: MyClientSecret
cookie-access-name: my.token
encryption_key: MY_KEY
listen: :3000
redirection-url: https://gatekeeper-url.com
upstream-url: https://app-url.com
verbose: true
resources:
  - uri: /home/*
    roles:
      - MyClient:general-access
headers:
  Set-Cookie: isLoggedIn=true
After re-deploying and running through the auth flow, the upstream URL/application is not receiving the custom header. I tried with multiple headers (key/value) but can't seem to get it working or find where that header is being injected in the flow.
I've also checked logs and haven't been able to find anything super useful.
Sample Gatekeeper Config
Gatekeeper Custom Headers Docs
Any suggestions/ideas on how to get this working?
Remove Set-Cookie. Simply add:
headers:
  isLoggedIn: true

Let's Encrypt SSL ( sailsjs framework )

Are there any Node modules for the Sails.js framework for creating an SSL certificate using Let's Encrypt?
There is a middleware that enables the HTTP-to-HTTPS redirect and also handles the ACME validation requests from Let's Encrypt. As far as I can tell, it does not actually trigger the renewal, nor write anything, but I believe the ACME scripts handle that as cron jobs every 3 months or so, allowing your app to just validate automatically when they run. I haven't implemented this myself yet, though.
I would also ask you to really consider using CloudFlare or some other SSL-termination service, as that also gives you a lot of other benefits like DDoS protection, some CDN-features etc.
Docs: @sailshq/lifejacket
As has been mentioned, you should consider the best overall solution in terms of CloudFlare or SSL-offload via nginx etc.
However, you can use greenlock-express.js for this to achieve SSL with Let's Encrypt directly within the Sails Node environment.
The example below:
- configures an HTTP Express app using greenlock on port 80 that handles the redirects to HTTPS and the Let's Encrypt business logic;
- uses the greenlock SSL configuration to configure the primary Sails app as HTTPS on port 443.
Sample configuration for config/local.js:
// returns an instance of greenlock.js with additional helper methods
var glx = require('greenlock-express').create({
    server: 'https://acme-v02.api.letsencrypt.org/directory'
  , version: 'draft-11' // Let's Encrypt v2 (ACME v2)
  , telemetry: true
  , servername: 'domainname.com'
  , configDir: '/tmp/acme/'
  , email: 'myemail@somewhere.com'
  , agreeTos: true
  , communityMember: true
  , approveDomains: [ 'domainname.com', 'www.domainname.com' ]
  , debug: true
});

// handles acme-challenge and redirects to https
require('http').createServer(glx.middleware(require('redirect-https')())).listen(80, function () {
  console.log("Listening for ACME http-01 challenges on", this.address());
});

module.exports = {
  port: 443,
  ssl: true,
  http: {
    serverOptions: glx.httpsOptions,
  },
};
Refer to the greenlock documentation for fine-tuning configuration details, but the above gets an out-of-the-box Let's Encrypt setup working with Sails.
Note also that you may wish to place this configuration somewhere like config/env/production.js, as appropriate.

Does Rikulo Stream support server-side SSL?

In my search to find SSL support, I have looked at the Rikulo Security package, which unfortunately does not support SSL.
If it does not support SSL, it would be nice if the URL mapping could define this somehow (similar to how the security plugin does it in Grails), with a config parameter for the path of the SSL certificate.
An example of the way it could be configured:
var urlMap = {
  "/": home,
  "/login": SECURE_CHANNEL(login), // I made this part up
  .....
};
new StreamServer(uriMapping: urlMap)
  ..start(port: 8080);
Has anyone got SSL working with Rikulo Stream?
First, you should use startSecure():
new StreamServer()
  ..start(port: 80)
  ..startSecure(address: "11.22.33.44", port: 443);
Second, the routing map stays the same, i.e., no special handling is needed.
If you'd like to have different routing maps for HTTP and HTTPS, you can start two servers:
new StreamServer(mapping1).start(port: 80);
new StreamServer(mapping2).startSecure(address: "11.22.33.44", port: 443);