Is it safe to trust request.local? for authentication in Rails? - ruby-on-rails-3

I have an API controller in a Rails 3 application that is used to communicate with other applications on the same machine and remote servers.
Remote servers use HTTP basic authentication to gain access to the API. Requests originating from the same server should be allowed by default.
Is it safe to trust request.local? to ensure that requests are really coming from the same machine? I am thinking of IP spoofing etc.
Btw "protect_from_forgery" is deactivated in the API controller.

For a moderate level of security, yes. This requires that you trust all processes that may run on the same system by any user. It also requires that the server's TCP/IP stack, routes, and network filters/internal firewall are configured in a sane way that doesn't allow packets originating from outside the server to be masqueraded as coming from a local IP address (specifically 127.0.0.1). And it assumes that the LOCALHOST constant in Rails is set to 127.0.0.1.
By the way, protect_from_forgery protects against cross-site request forgery (CSRF); it does not and cannot protect against IP address spoofing. Conversely, ensuring that a request comes from an authorized IP and user does not protect against CSRF.
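As an illustration, a minimal Rails 3 sketch combining the two checks might look like this (the controller name, realm, and credentials are placeholders, not taken from the question):

    class ApiController < ApplicationController
      # CSRF protection is skipped for the API, as described in the question.
      skip_before_filter :verify_authenticity_token

      before_filter :authenticate

      private

      # Let local requests through; everything else must pass HTTP Basic auth.
      def authenticate
        return if request.local?   # true only when remote_addr and remote_ip are 127.0.0.1

        authenticate_or_request_with_http_basic("API") do |user, password|
          user == "remote_app" && password == "secret"   # placeholder credentials
        end
      end
    end

Note that request.local? compares both remote_addr and remote_ip against LOCALHOST, which is why the stack/firewall caveats above matter.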

Related

OpenShift SSL edge termination risk

I have been reading the OpenShift documentation for secured (SSL) routes.
Since I use a free plan, I can only have an "Edge Termination" route, meaning SSL is terminated when external requests reach the router, with content then transmitted from the router to the internal service via HTTP.
Is this secure? I mean, part of the transmission is done via plain HTTP in the end.
The connection between where the secure connection is terminated and your application, which accepts the proxied plain HTTP request, is entirely internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, because of the way software-defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see it.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but those same people could also access your application container, even if you used a passthrough secure connection terminated at your application. So it is the same situation as most managed hosting: you rely on the administrators of the service to do the right thing.

How do I prevent a user from accessing a server's API directly and instead force them to use the UI?

More of a theoretical question, but I'm really curious!
I have a two part application:
Apache server hosting my UI
Back-end that services all HTTP requests from the UI
The Apache service proxies all HTTP requests from the UI to the server. So, if the user is reasonably adept, they can reverse engineer our API by inspecting the calls in the browser's developer tools.
Thus, how do I prevent a user from using the server API directly and instead force them to use the UI?
The server can't determine whether a call came from the UI or not, because a user can make a call to myapp.com/apache-proxy/blah/blah/blah from outside of the UI; Apache will get the request and forward it to the server, which will have no idea it isn't coming from the UI.
The option I see is to inject a header into the request from the UI, that indicates the origin of the request as the UI. This seems ripe for exploitation though.
To me, this is more of a networking question, since it's something I'd resolve at the network level. If you run your backend application in a private network (or on a public network with firewall rules) you can configure the backend host to only accept communication from your Apache server.
That way the end user can't connect directly to the API, since it's not accessible to the public. Only the allowed Apache server will be able to communicate with the backend API, so the Apache server acts as an intermediary between the end user (client side) and the backend API server.
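As a rough sketch of that idea (the address, port, and Rack/WEBrick choice are placeholders, not part of the original setup), binding the backend only to a private address already keeps it off the public Internet:

    # Minimal Rack app used only to illustrate the binding; any backend works the same way.
    require "rack"

    app = proc { |env| [200, { "Content-Type" => "text/plain" }, ["backend API\n"]] }

    # Listening on a private subnet address (here 10.0.1.5, a placeholder) instead of
    # 0.0.0.0 means the API is simply not addressable from the public Internet.
    # Firewall rules or security groups can further restrict which internal hosts
    # (e.g. only the Apache proxy) may connect.
    Rack::Handler::WEBrick.run(app, Host: "10.0.1.5", Port: 8080)

In a cloud setup the same effect is usually achieved with security groups or firewall rules that only allow the Apache host to reach that address.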
You could also make the backend server require connections to be authenticated before it accepts any requests. Then make it so that only the Apache server can successfully authenticate, in a way that end users cannot replicate. For example, use SSL/TLS between Apache and the backend, have the backend require client certificates, and issue Apache a private certificate that the backend will accept. End users will then not be able to authenticate with the backend directly.
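To make the client-certificate idea concrete, here is a sketch in Ruby of how any trusted client (the role the Apache proxy would play via its own TLS proxy configuration) presents a certificate to a backend that requires one; the host name and file names are invented for the example:

    require "net/http"
    require "openssl"

    # Hypothetical internal backend that requires TLS client certificates.
    http = Net::HTTP.new("backend.internal", 8443)
    http.use_ssl     = true
    http.cert        = OpenSSL::X509::Certificate.new(File.read("proxy-client.crt"))
    http.key         = OpenSSL::PKey::RSA.new(File.read("proxy-client.key"))
    http.ca_file     = "internal-ca.pem"            # CA that signed the backend's certificate
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER

    # Without the certificate/key pair above, the backend's TLS handshake fails,
    # so an end user hitting the backend directly never reaches the API.
    response = http.get("/api/resource")
    puts response.code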

How to configure Windows (7/8/10) to use a proxy with authentication

I need to use certain software that connects to a server that allows connections only from whitelisted IPs. To solve this, I have a droplet with a fixed IP on DigitalOcean where I run Squid3 as a proxy. I configure my system to work through the proxy, and I tell the central server to whitelist that proxy server's IP.
Up to here all is great, but as I should have guessed, some people are using my proxy to send malicious traffic, and now the server provider is telling me to get it sorted out or they will cancel my account.
I added authentication to the proxy, and the attacks have stopped, since the attackers do not know the user/pass combination.
But now the problem is that I don't see any way to configure Windows to use authentication when connecting to the proxy! I am not talking just about HTTP requests, since browsers allow for proxy authentication; I am talking about some custom software that also needs to communicate with this central server.
Is there any way to configure Windows so that it connects to the proxy passing the necessary username and password?

Captive portal authentication theory

I'm a little confused on how captive portal authentication works. In some implementations, after a user is authenticated with a login page, their IP and MAC address are whitelisted and allowed to connect through the gateway. Obviously this has the problem of people spoofing MAC addresses to gain access. If the portal sets up a session between itself and the client, does that mean that every piece of traffic that the client requests from the internet must go through the portal's server?
Generally, security in a captive portal is not considered particularly important. However, there are solutions that lock a MAC to the first port to use it and disallow the use of that MAC on any additional port. Similar techniques can be used wirelessly, where the AP will refuse to pair with an additional client using the same MAC address as an existing client. This requires enterprise authentication where a unique key is negotiated for each attached device.
It's not clear to me what you mean by "the portal's server". But generally, once a MAC address is authorized and the wired port is configured or the wireless connection is established, nothing further needs to be done by the portal. The traffic for authenticated connections is just routed/NATted normally.

Centralizing outgoing two-way SSL connections

We are currently using Apache to handle incoming SSL requests. These are two-way SSL connections. Apache accepts the HTTPS connection and passes the request on as an HTTP connection to the application server. This works well for us.
We would like to use the same kind of centralized mechanism for outgoing two-way SSL connections. Is there a way to do this with Apache or another product? To complicate things, the client certificate needed to identify our client can vary depending on the destination.
In short:
- Internal clients connect through http to Apache or another product.
- Apache or another product knows, based on a rule (?), that a two-way SSL connection is required and sets this up with the destination.
- Depending on the destination the correct certificate is sent to identify our client.
Regards,
Nidkil
What you're talking about is, of course, an HTTP proxy server. In the first scenario you are using it as a reverse proxy to provide SSL support for connections to a set of web pages. In the second scenario you want to use it to provide connections to secure-only pages on behalf of clients speaking plain HTTP.
You can do this with the Squid proxy, which is free and open-source, provided that your machine sits between the clients and the Internet. Look for "SSLBump". You do need a certificate which the clients would consider valid for all web pages to be accessed (otherwise they will notice what you are doing, which is basically a man-in-the-middle attack).
However, I would strongly recommend against this - if a site requires SSL, it is likely to do so for a reason. It is almost certainly not OK to have internal clients connecting to an online banking site and have you bumping down their encryption so that you can monitor their traffic or whatever...
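For what it's worth, here is a rough application-level illustration of what "the correct certificate depending on the destination" boils down to: a lookup keyed on the destination host. All host names and file paths below are invented; a proxy-based setup would have to encode an equivalent mapping in its own configuration:

    require "net/http"
    require "openssl"

    # Hypothetical mapping from destination host to the client certificate/key
    # that identifies us to that particular partner.
    CLIENT_CERTS = {
      "partner-a.example.com" => ["partner-a.crt", "partner-a.key"],
      "partner-b.example.com" => ["partner-b.crt", "partner-b.key"],
    }

    def two_way_ssl_get(host, path)
      cert_file, key_file = CLIENT_CERTS.fetch(host)   # pick the cert for this destination
      http = Net::HTTP.new(host, 443)
      http.use_ssl     = true
      http.cert        = OpenSSL::X509::Certificate.new(File.read(cert_file))
      http.key         = OpenSSL::PKey::RSA.new(File.read(key_file))
      http.verify_mode = OpenSSL::SSL::VERIFY_PEER
      http.get(path)
    end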