In the current project, we are developing an application consisting of a frontend and a backend behind an nginx server ('entry nginx'). The backend consists of multiple micro-services, and one of them (let's call it 'WG') creates an encrypted connection via WireGuard to an external service. The WG service itself is only accessible via a second nginx ('vpn nginx'). The whole request chain from the user's perspective looks something like this:
user -> entry nginx -> vpn nginx ==== connection over wireguard ===> external service
I.e., if a user wants to access the external service, their requests arrive at the entry nginx and, after authentication, are forwarded to the 'vpn nginx'. The vpn nginx forwards the requests over the WireGuard tunnel, and when the response comes back, the whole chain is traversed in the opposite direction.
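To make the setup concrete, here is a stripped-down sketch of the two nginx configs (hostnames, ports and addresses are placeholders, and the user-authentication details are omitted):

    # entry nginx: receives user requests, authenticates them, then forwards to the vpn nginx
    server {
        listen 80;
        server_name entry.example.com;              # placeholder name

        location /external/ {
            # ... user authentication happens here ...
            proxy_pass http://vpn-nginx:8080/;      # placeholder host/port of the vpn nginx
        }
    }

    # vpn nginx: forwards requests to the external service reachable through the WireGuard tunnel
    server {
        listen 8080;

        location / {
            proxy_pass http://10.8.0.2/;            # external service's address inside the tunnel (placeholder)
        }
    }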
My problem is the following: to use the external service, digest authentication has to be performed. One possibility would be to implement this functionality in the frontend, but since I already know that the users are authenticated, I ask myself whether the 'vpn nginx' can automatically authenticate itself via digest auth against the external service, so that no additional logic in the frontend is required.
I checked the list of third-party plug-ins but did not find anything that would allow an nginx server to authenticate itself as a client. Also, my Google queries always resulted in links about authenticating users in nginx, which is not what I need.
Thanks
Sergej
Related
I have been reading the OpenShift documentation for secured (SSL) routes.
Since I use a free plan, I can only have an "Edge Termination" route, meaning the SSL is terminated when external requests reach the router, with contents being transmitted from the router to the internal service via plain HTTP.
Is this secure? I mean, part of the transmission is done via HTTP in the end.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but the same people could also access your application container. Administrators of the system could access your application container even if you used a pass-through secure connection terminated inside your application. So it is the same situation as most managed hosting, where you rely on the administrators of the service to do the right thing.
We have a couple of back-end web applications to which we want to provide access via the public internet. To that end, we are setting up a reverse proxy (IIS 7.5) in our DMZ. At the same time, we want these web applications to be claims-enabled through ADFS 2.0.
WEB1.MYCORP.COM/WFE1 is one back-end web application, on our internal network
WEB1.MYCORP.COM/WFE2 is the other back-end web application, on our internal network
ADFS.MYCORP.COM is the ADFS 2.0 server, on our internal network
FSPROXY.MYCORP.COM is the ADFS 2.0 proxy server, on our DMZ
RPROXY1.MYCORP.COM is the reverse proxy for WFE1, on our DMZ
RPROXY2.MYCORP.COM is the reverse proxy for WFE2, on our DMZ
In keeping with the proper configuration of ADFS, our internal DNS resolves ADFS.MYCORP.COM to the actual internal server, while external DNS points ADFS.MYCORP.COM to the ADFS proxy (FSPROXY).
So, here's the scenario:
End user browses to RPROXY1.MYCORP.COM
Reverse proxy forwards request to WEB1.MYCORP.COM/WFE1
WFE1 redirects browser to ADFS.MYCORP.COM (actually FSPROXY)
ADFS Proxy prompts for credentials and authenticates against ADFS server
Upon successful authentication, browser redirected back to web app
I have a couple of questions. Do I need to configure something in the reverse proxy or the application to allow this? Also, the ADFS endpoint is the reverse proxy URL; is that an issue?
Do I need to set up something for the reverse proxy as well? Should I (and can I) set up a claims-enabled reverse proxy in IIS? How do I set up the reverse proxy rules to pass the ADFS request back unaltered? Currently, when I try to access the back-end application, it fails with a 401 authentication error. If I remove the proxy and hit the app server directly, it works fine.
Further:
This fails:
The path is client --> rp --> app --> adfs --> rp --> app --> rp --> client machine
This works:
The path is client --> rp --> app --> adfs --> app --> rp --> client machine
Any suggestions would be greatly appreciated!
I'm not familiar with how you enabled the reverse proxy in IIS (ARR?). Something like this: http://blogs.iis.net/carlosag/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr
One choice for you is to use ADFS 2012 R2 (if possible), because its proxy, the Web Application Proxy, handles both ADFS authentication and app publishing for your claims-enabled application. There are two ways you can publish your app to the internet. One is pass-through, which is roughly what you are trying to do. But it also offers pre-authentication support for a claims-aware app. This way, you can have a separate policy that decides whether a request can get past your edge network before a packet reaches your internal application.
After doing lots of digging and Fiddler traces, I found the issue. In the test IdP setup the token was different from the one in the stage environment. The Fiddler traces showed that the token was making it back to the app server, but it also looked like the cookie was dropped for no apparent reason. The root cause was that the old dev IdP value disagreed with the stage value... naturally. Once I cleared the old token from the database, everything worked.
More of a theoretical question, but I'm really curious!
I have a two part application:
Apache server hosting my UI
Back-end that services all http requests from the UI
The Apache server proxies all HTTP requests from the UI to the back-end. So, if a user is reasonably adept, they can reverse engineer our API by inspecting the calls in the browser's developer tools.
Thus, how do I prevent a user from using the server API directly and instead force them to use the UI?
The server can't determine whether a call came from the UI or not: a user can make a call to myapp.com/apache-proxy/blah/blah/blah from outside the UI, Apache will get the request and forward it to the server, and the server will have no idea it's not coming from the UI.
The one option I see is to inject a header into requests coming from the UI that identifies the UI as the origin of the request. This seems ripe for exploitation, though.
To me, this is more of a networking question, since it's something I'd resolve at the network level. If you run your backend application in a private network (or on a public network with firewall rules), you can configure the backend host to only accept communication from your Apache server.
That way the end user can't connect directly to the API, since it's not accessible to the public. Only the allowed Apache server will be able to communicate with the backend API, so the Apache server acts as an intermediary between the end user (client side) and the backend API server.
An example diagram from AWS.
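If the backend host itself runs nginx (or sits behind one), the same restriction can also be expressed at the web-server layer, as a complement to the firewall rules. A minimal sketch, assuming 10.0.1.10 is the Apache server's internal address and the API process listens on a local port (both placeholders):

    server {
        listen 8080;

        location / {
            allow 10.0.1.10;                   # the Apache reverse proxy (assumed internal IP)
            deny  all;                         # every other caller is rejected

            proxy_pass http://127.0.0.1:5000;  # the actual API process (placeholder port)
        }
    }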
You could make the backend server require connections to be authenticated before accepting any requests from them. Then make it so only the Apache server can successfully authenticate in a way that end users cannot replicate. For example, by using SSL/TLS between Apache and the backend, where the backend requires client certificates to be used, and then issue Apache a private certificate that the backend will accept. Then end users will not be able to authenticate with the backend directly.
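For the client-certificate variant, assuming the backend is fronted by nginx, the "require a valid client certificate" part would look roughly like this (certificate paths and the internal CA are placeholders):

    server {
        listen 8443 ssl;

        ssl_certificate         /etc/ssl/backend.crt;      # backend's own server certificate (placeholder path)
        ssl_certificate_key     /etc/ssl/backend.key;
        ssl_client_certificate  /etc/ssl/internal-ca.crt;  # CA that issued the Apache proxy's client certificate
        ssl_verify_client       on;                        # reject connections without a valid client certificate

        location / {
            proxy_pass http://127.0.0.1:5000;              # the actual API process (placeholder port)
        }
    }

On the Apache side, mod_ssl's SSLProxyMachineCertificateFile directive is what presents the issued client certificate to the backend.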
I have 2 web roles in a cloud service: my API and my web client. I'm trying to set up SSL for both. My question is: do I need two SSL certificates? Do I need 2 domain names?
The endpoint for my API is my.ip.add.ress. The endpoint for my web client is my.ip.add.ress:8080.
I'm not sure how to add the DNS entries for this, as there is nowhere for me to input the port number (which I have learned is because it's outside the scope of the DNS system).
What am I not understanding? This seems to be a pretty standard scenario with Azure Cloud Services (it is set up this way in the example project in this tutorial, for instance http://msdn.microsoft.com/en-us/library/dn735914.aspx) but I can't find anywhere that explains explicitly how to handle this scenario.
First, you are right that DNS does not handle port numbers. In your case, you can simply use one SSL certificate for both endpoints and give the two endpoints the same domain name. Based on which port the user's request uses, it will be routed to the correct endpoint (API vs. web client). Like you said, this is a relatively common scenario; there is no need to complicate things.
Let's assume you have one domain, www.dm.com, pointing to the IP address. To access your web API, your users hit https://www.dm.com, without a port number, which defaults to 443. To access your web client, your users hit https://www.dm.com:8080. If you want users to use the default port 443 for both the web API and the web client, you need to create two cloud services instead of one, with the web API on one cloud service and the web client on the other. Billing-wise, you will be charged the same as for one cloud service.
Are there any reasons you want to use 2 different domains and, in turn, 2 SSL certificates? If so, it is still possible. Depending on your requirements, you may have to add extra logic to block requests from the other domain.
I am trying to implement third-party authentication with OpenAM, and I have a doubt about the OpenAM implementation: my application is distributed across different servers which are geographically separated but served under the same DNS name. How can I differentiate the sessions of the different servers? For example, if I type www.google.com, the request can be forwarded to any of the nearest available servers; now, if I have to authenticate against google.com, how will my OpenAM know which particular server the request is for? To put it the other way around: whenever we change a policy in OpenAM or invalidate a session, it calls back to all the registered servers; in a distributed environment, how can it differentiate the servers' IPs?
I assume you have some sort of load balancer (LB) in front of your servers. I would suggest creating a sticky session at the LB, e.g. a cookie saying which server the user is on, before starting the authentication. Then, when authentication is done, OpenAM redirects back to your LB and the LB directs the request to the correct server.
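If the load balancer happened to be nginx, the simplest open-source form of stickiness is by client IP rather than by cookie (the cookie-based sticky directive is an NGINX Plus feature). A rough sketch with placeholder hostnames:

    upstream app_servers {
        ip_hash;                        # keep a given client on the same backend server
        server app1.internal:8080;      # placeholder hostnames/ports
        server app2.internal:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_servers;
        }
    }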