How can I limit access control for my apps running on CloudBees? (Limit which IP addresses can access the services via browser or API) - cloudbees

I have deployed my application on cloudbees, and was wondering how I can create a whitelist of allowed IPs which can access the application via browser requests or API requests.
Thanks

This is not possible on the RUN@cloud shared server pool.
Such IP restrictions can be configured on "dedicated" servers, as they provide more configuration options with an isolated setup.

You could write a filter (e.g. a servlet filter) which looks at the X-Forwarded-For header (http://en.wikipedia.org/wiki/X-Forwarded-For) - this carries the client's IP - and filter requests that way if you like.
(You have to use that header because the routing layer sets it; the remote address your app sees is the router's, not the client's.)
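For example, a minimal servlet-filter sketch under these assumptions: the filter name and allowed addresses are placeholders, and the first entry of a comma-separated X-Forwarded-For chain is treated as the original client.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical whitelist filter: rejects requests whose client IP
// (taken from X-Forwarded-For) is not in the allowed set.
public class IpWhitelistFilter implements Filter {

    // Placeholder addresses - replace with your own whitelist.
    private static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("203.0.113.10", "203.0.113.11"));

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String forwardedFor = request.getHeader("X-Forwarded-For");
        // The header may contain a comma-separated chain; the first entry is the original client.
        String clientIp = (forwardedFor != null)
                ? forwardedFor.split(",")[0].trim()
                : request.getRemoteAddr();

        if (ALLOWED.contains(clientIp)) {
            chain.doFilter(req, res);
        } else {
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "IP not allowed");
        }
    }

    @Override
    public void destroy() { }
}
```

Map the filter to /* in web.xml (or with @WebFilter) so it runs before your application servlets.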

Related

Azure Traffic Manager gives an SSL error while the App Gateway URL works, when using the Azure App Gateway Ingress Controller on AKS

We are going multi-region for our project and need to use an Azure Traffic Manager to route traffic to each region. Our setup looks like the below, where our App Gateway is exposed via a public IP, which I used when configuring the Azure Traffic Manager.
My issue is that when I hit the Traffic Manager URL it gives me an SSL cert error, while if I hit the App Gateway URL directly it works fine over HTTPS. Looking at the error below, I know I need to configure the Traffic Manager certificate, and my question is:
Does this need to be configured somewhere in the Traffic Manager? OR
Do we need to configure this in the Application Gateway, change the App Gateway ingress in Kubernetes accordingly, and also use the Traffic Manager certificate there?
• The Traffic Manager works at the DNS level; since the DNS records pointing to the Traffic Manager's public URL aren't set up correctly, you get this error when browsing the Traffic Manager's URL. When you access the Application Gateway URLs independently they work, because those URLs are hosted in Azure DNS and independent public IPs are allotted against their DNS records. Thus, the DNS records that route access requests to the Traffic Manager's website need to be updated.
• Since you are using a multi-region setup in Azure with load-balancing features, I am assuming that your custom domain and its DNS records are set up in Azure itself, and that the Application Gateway URLs are set up as separate endpoints (subdomains) within that custom DNS zone. That is why browsing the Application Gateway URLs according to the custom domain setup reaches the application correctly. For the Traffic Manager, you will need to create a CNAME record pointing from your custom domain to the '*.trafficmanager.net' domain, as well as CNAME records pointing from your custom domain to your generic Application Gateway URLs.
• Once done, create A records for each Application Gateway endpoint pointing to the public IP address Azure assigned to it (see the sketch below). After doing the above, your Traffic Manager URL should be able to route and redirect application access requests correctly. For more information, please refer to the community discussion below, which covers the exact details of this problem:
Azure Traffic Manager SSL Setup (not classic)
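As a hedged illustration only (the zone name, host labels, Traffic Manager profile, and IPs below are placeholders, not taken from the question), the resulting records in the custom zone might look roughly like this:

```
; Hypothetical entries in a custom DNS zone such as contoso.com
app          IN CNAME  myprofile.trafficmanager.net.   ; user-facing name -> Traffic Manager profile
gw-region1   IN A      203.0.113.10                    ; App Gateway public IP, region 1
gw-region2   IN A      203.0.113.20                    ; App Gateway public IP, region 2
```

Because Traffic Manager only answers DNS queries, the certificate served by each Application Gateway listener must cover the name users actually browse (app.contoso.com in this sketch); otherwise the SSL error will persist even once DNS resolves correctly.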

How to use Træfik in Cloud Foundry?

I want to use an API gateway like Traefik to protect my apps deployed in CF, e.g. by only allowing requests from the internet to the gateway and restricting the apps behind it to internal traffic only (probably via route configurations).
Unfortunately, I could not find any guidance on how such a setup could be achieved in CF.

How do I prevent a user from accessing a server's API directly and instead force them to use the UI?

More of a theoretical question, but I'm really curious!
I have a two part application:
Apache server hosting my UI
Back-end that services all http requests from the UI
The Apache service proxies all HTTP requests from the UI to the server. So, if the user is reasonably adept, they can reverse engineer our API by inspecting the calls in the browser's developer tools.
Thus, how do I prevent a user from using the server API directly and instead force them to use the UI?
The server can't determine whether a call came from the UI or not, because a user can make a call to myapp.com/apache-proxy/blah/blah/blah from outside of the UI; Apache will get the request and forward it to the server, which will have no idea it isn't coming from the UI.
The option I see is to inject a header into the request from the UI, that indicates the origin of the request as the UI. This seems ripe for exploitation though.
To me, this is more of a networking question, since it's something I'd resolve at the network level. If you run your backend application in a private network (or on a public network with firewall rules), you can configure the backend host to only accept communication from your Apache server.
That way the end user can't connect directly to the API, since it's not accessible to the public. Only the allowed Apache server will be able to communicate with the backend API, so Apache acts as an intermediary between the end user (client side) and the backend API server.
An example diagram from AWS.
You could make the backend server require connections to be authenticated before accepting any requests from them. Then make it so only the Apache server can successfully authenticate in a way that end users cannot replicate. For example, by using SSL/TLS between Apache and the backend, where the backend requires client certificates to be used, and then issue Apache a private certificate that the backend will accept. Then end users will not be able to authenticate with the backend directly.
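A hedged sketch of the Apache side of that mutual-TLS idea, assuming mod_ssl and mod_proxy are loaded (hostnames and file paths are placeholders; the backend, e.g. a Tomcat HTTPS connector with clientAuth enabled, must separately be configured to require and trust this client certificate):

```
# Inside the vhost that proxies to the backend (illustrative only)
SSLProxyEngine on
# Client certificate plus private key that Apache presents to the backend
SSLProxyMachineCertificateFile /etc/apache2/ssl/apache-client.pem
# CA used to verify the backend's server certificate
SSLProxyCACertificateFile      /etc/apache2/ssl/backend-ca.pem
SSLProxyVerify require

ProxyPass        /api/ https://backend.internal:8443/api/
ProxyPassReverse /api/ https://backend.internal:8443/api/
```

With this in place, a direct connection to the backend without the client certificate is rejected during the TLS handshake, before any application code runs.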

Shibboleth Session Validation In Tomcat

I have an Apache/2.2.15 web server with the modules, mod_shib, mod_ssl, and mod_jk. I have a virtual host which is configured (attached below) with AuthType Shibboleth, SSLCertificates, and JKMount to direct requests using AJP to my Tomcat 8 server after a session is successfully established with the correct IDP. When my http request reaches my app server, I can see the various Shib-* headers, along with the attributes my SP requested from the IDP.
Is there a way my app server can validate the shibsession cookie or other headers? I am trying to protect against the scenario where my web server, which resides in the DMZ is somehow compromised, and an attacker makes requests to my app server, which resides in an internal zone.
Is there a way I can validate a signature of something available in the headers, to guarantee that the contents did indeed originate from the IDP, and were not manufactured by an attacker who took control of my web server?
Is there something in the OpenSAML library I could use to achieve this?
Is there a way my app server can validate the shibsession cookie or other headers?
mod_shib has already done that difficult work for you. After validating the return of information from the Identity Provider (IdP), mod_shib then sets environment variables (cannot be set by the client) for your application to read and trust. Implementing OpenSAML in your application is unnecessary as mod_shib has done the validation work for you.
From the docs:
The safest mechanism, and the default for servers that allow for it, is the use of environment variables. The term is somewhat generic because environment variables don't necessarily always imply the actual process environment in the traditional sense, since there's often no separate process. It really refers to a set of controlled data elements that the web server supplies to applications and that cannot be manipulated in any way from outside the web server. Specifically, the client has no say in them.
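As an illustration only: if Apache forwards a mod_shib variable over AJP (for example with mod_jk's JkEnvVar directive), the servlet behind it can read the value as a request attribute and trust it, because only the web server can set it. The attribute name "eppn" below is an assumption; the real names depend on your attribute-map.xml and JkEnvVar configuration.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative servlet: relies on the attributes mod_shib validated and Apache forwarded over AJP.
public class WhoAmIServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // "eppn" is a placeholder; the actual attribute names come from attribute-map.xml / JkEnvVar.
        Object eppn = req.getAttribute("eppn");

        if (eppn == null) {
            // Missing attribute means the request did not pass through the Shibboleth-protected vhost.
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED, "No Shibboleth attributes present");
            return;
        }
        resp.setContentType("text/plain");
        resp.getWriter().println("Authenticated principal: " + eppn);
    }
}
```

Note that this does not remove the DMZ risk the question describes: if the web server itself is compromised, an attacker there can set these variables too, so the remaining protections are network-level (restricting the AJP port to the web server) or a separately verifiable token.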

How do I go about setting up SSL for my API and my Web Client in a Azure Cloud Service?

I have 2 web roles in a cloud service; my API and my Web Client. I'm trying to set up SSL for both. My question is, do I need two SSL certificates? Do I need 2 domain names?
The endpoint for my API is my.ip.add.ress. The endpoint for my web client is my.ip.add.ress:8080.
I'm not sure how to add the DNS entries for this, as there is nowhere for me to input the port number (which I have learned is because it's outside the scope of the DNS system).
What am I not understanding? This seems to be a pretty standard scenario with Azure Cloud Services (it is set up this way in the example project in this tutorial, for instance http://msdn.microsoft.com/en-us/library/dn735914.aspx) but I can't find anywhere that explains explicitly how to handle this scenario.
First, you are right that DNS does not handle port numbers. For your case, you can simply use one SSL certificate for both endpoints and give the two endpoints the same domain name. Based on which port the user's request uses, it will be routed to the correct endpoint (API vs. Web Client). Like you said, this is a relatively common scenario; there is no need to complicate things.
Let's assume you have one domain, www.dm.com, pointing to the IP address. To access your Web API, your users hit https://www.dm.com without a port number, which defaults to 443. To access your web client, they hit https://www.dm.com:8080. If you want users to use the default port 443 for both the web API and the web client, you need to create two cloud services instead of one, with the web API on one cloud service and the web client on the other. Billing-wise, you will be charged the same as for one cloud service.
Is there any reason you want to use 2 different domains and, in turn, 2 SSL certificates? If so, it is still possible, but depending on your requirements you may have to add extra logic to block requests from the other domain.
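For illustration only, a hedged ServiceDefinition.csdef fragment showing both roles exposing HTTPS endpoints bound to the same certificate on different ports (role, endpoint, and certificate names are placeholders, and the Sites/bindings sections are omitted for brevity):

```xml
<ServiceDefinition name="MyCloudService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="ApiRole">
    <Endpoints>
      <InputEndpoint name="HttpsApi" protocol="https" port="443" certificate="SslCert" />
    </Endpoints>
    <Certificates>
      <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WebRole>
  <WebRole name="WebClientRole">
    <Endpoints>
      <InputEndpoint name="HttpsWeb" protocol="https" port="8080" certificate="SslCert" />
    </Endpoints>
    <Certificates>
      <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WebRole>
</ServiceDefinition>
```

The certificate itself (covering www.dm.com in the earlier example) is uploaded once to the cloud service and referenced by thumbprint in ServiceConfiguration.cscfg.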