How to secure an admin area for a public and private Rails app - Apache

How would you secure access to the admin area of a web app?
Our Rails CMS serves pages publicly. I would like to make the backend (/admin) inaccessible, using either the web server (Apache) or the firewall (netfilter).
Could this be done using an SSL certificate? I would like to limit access to the backend to only those who have the "key", similar to SSH access to a server.
Thanks in advance.

You're absolutely right that an SSL certificate is the way to go. And it's not really all that tricky to set up, though it's rarely done.
It's important to remember that this problem has two components. The first is, "how do I get the darn thing working at all?" - and, this being a security system, the second is, "how do I set it up so that I'm not likely to accidentally do something that borks my security?"
The first thing I would suggest is to write a separate Rails application for the admin stuff, and run it with a different web server on a different port. (If you really want to avoid putting a port number in the URL for the admin site, use a proxy in front of both web servers that uses the Host: header to route requests for foo.com to one server and admin.foo.com to the other; a sketch of such a front proxy follows.) This separation will help ensure that you don't accidentally give regular users access to admin functionality, and it makes the SSL setup easier.
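For illustration, such a front proxy might be a pair of name-based Apache virtual hosts like the following (the hostnames and backend ports are placeholders, not part of the original suggestion):
# Apache 2.2 needs this for name-based vhosts; 2.4 does not
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName foo.com
    # Public CMS runs on one backend port
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
<VirtualHost *:80>
    ServerName admin.foo.com
    # Admin application runs on a different backend port
    ProxyPass / http://127.0.0.1:3001/
    ProxyPassReverse / http://127.0.0.1:3001/
</VirtualHost>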
For the admin server, set it up for SSL access only. Create a new signing certificate, and allow only client certificates signed by it to connect. (This is web-server dependent; if you really need details on how to do this, you probably want to post a new question giving the specifics of the server and configuration you're using.) You can set up a page (on the non-SSL site, or on a page accessible to non-authenticated users on the SSL site) that has your admins' web browsers automatically generate and upload a certificate that you can sign, which will give them access. A rough sketch of the signing side follows.
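With the OpenSSL command-line tool, the signing workflow might look roughly like this (all file names here are illustrative, not prescribed by the answer):
# 1. Create your own signing (CA) certificate, valid for ~10 years
openssl genrsa -out admin-ca.key 4096
openssl req -new -x509 -days 3650 -key admin-ca.key -out admin-ca.crt
# 2. Generate and sign a client certificate for one admin
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr
openssl x509 -req -days 365 -in alice.csr -CA admin-ca.crt -CAkey admin-ca.key -CAcreateserial -out alice.crt
# 3. Bundle the key and certificate so a browser can import them
openssl pkcs12 -export -in alice.crt -inkey alice.key -out alice.p12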
Keep copies of all the certs you sign so that when you need to revoke access, you can put that cert in the revocation list.
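If you want to handle revocation with plain OpenSSL, note that it needs the openssl ca database setup (index.txt and friends); a rough sketch under that assumption:
# Revoke one signed certificate and regenerate the CRL
openssl ca -revoke alice.crt -keyfile admin-ca.key -cert admin-ca.crt
openssl ca -gencrl -keyfile admin-ca.key -cert admin-ca.crt -out admin-ca.crl
# Then point the web server at the CRL; in Apache mod_ssl:
# SSLCARevocationFile /usr/local/apache/conf/ssl.crl/admin-ca.crl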

DON'T use the firewall; you'll just complicate your implementation. The "correct" approach is to use .htaccess or to set up authorisation in the Apache Directory configuration.
It sounds like you want mod_ssl's client-certificate verification (SSLVerifyClient):
# Default: no client certificate required site-wide
SSLVerifyClient none
<Directory /usr/local/apache/htdocs/secure/area>
    # Require a client certificate signed by our own CA for this area
    SSLVerifyClient require
    SSLVerifyDepth 5
    SSLCACertificateFile conf/ssl.crt/ca.crt
    SSLCACertificatePath conf/ssl.crt
    # Translate the certificate subject DN into a Basic-auth username
    SSLOptions +FakeBasicAuth
    SSLRequireSSL
    AuthName "Snake Oil Authentication"
    AuthType Basic
    AuthUserFile /usr/local/apache/conf/httpd.passwd
    require valid-user
</Directory>
Howto: http://eregie.premier-ministre.gouv.fr/manual/mod/mod_ssl/ssl_howto.html

Related

Let Nginx or Apache Require And Accept any client certificate?

I was wondering if it is possible to configure either of these servers to require a client certificate while at the same time not completely verifying it?
So far I have looked at the SSLVerifyClient configuration directives, but the options seem to make the certificate either optional, or required and actually verified.
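For what it's worth, Apache's mod_ssl also has an optional_no_ca mode, which asks for a client certificate but does not fail the handshake when the certificate cannot be verified against a known CA; whether "optional" is close enough to "required" depends on the use case. A minimal sketch:
SSLVerifyClient optional_no_ca
SSLVerifyDepth 1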

What's a simple way to serve http://localhost as https://someOtherDomain?

I run a local development web server for testing out code changes.
Often I have to test my local changes with remote services that can only connect securely to another domain.
e.g. https://external1.com will only talk to https://someOtherDomain.com, but I've got to test integration of my new code changes with https://external1.com
While I've got a setup configured that works, it seems complex, and it took a bit to get set up right. It seems to me that many developers would want to do this same thing, so my question is this:
Is there an easy way to proxy my local webserver as https://someOtherDomain.com ?
EDIT: So maybe this should be asked this way - does a command-line or GUI tool exist that you can pass a local port and a domain name, and it serves your local port securely over https://someOtherDomain.com - no config or SSL cert creation required? Of course it'd be nice if the SSL cert could be replaced through configuration if need be, but by default it'd work automatically, using a pre-canned SSL cert. And even though I'm using Apache, I'm looking for a solution that doesn't actually use Apache - it uses something else. Why? Because I want this solution to work well with any other web server that's being used by people on our team - we all run different stacks, and I'd like to be able to let any of us securely serve our sites without having to configure each web server individually.
Here's my current setup for taking my local web server and serving it up at https://www.someOtherDomain.com
To test this locally, I've been:
editing my hosts file, adding an entry to make www.someOtherDomain.com point to my local machine, which of course is running my dev server. This makes my local site available at http://www.someOtherDomain.com:
127.0.0.1 www.someOtherDomain.com
Running Apache with an SSL cert set up and mod_proxy redirecting all https requests to my local http server, thus making my site available at https://www.someOtherDomain.com. Here's my Apache config for this:
ServerName www.someOtherDomain.com
<Location /balancer-manager>
    SetHandler balancer-manager
</Location>
# Don't proxy the balancer-manager itself
ProxyPass /balancer-manager !
<Proxy balancer://mycluster>
    BalancerMember http://localhost route=1
</Proxy>
# Proxy everything else to the local dev server
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
SSLHonorCipherOrder On
SSLProtocol -ALL +SSLv3 +TLSv1
SSLCipherSuite RC4-SHA:HIGH:!ADH
# Rewrite all http requests to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
I run this on a Mac, but am interested in solutions for Linux as well. I've seen various man-in-the-middle proxies that sound like they'd work with some configuration... but I'm looking for something really simple to install and run - not just for me, but something I can tell team members about too, as we may all have to do this a lot in the future.
IMPORTANT NOTE: My local web server isn't running on port 80, though I've presented it that way in the example above to keep things simple. I understand port 80 on a Mac is a bit special, but I'm very happy with solutions that work fine on all ports but port 80.
I think mitmproxy can do this for you, at least on Linux and OS X. I haven't tried it myself, but this question seems to show how it is done. It's still not a trivial program, though.
There are however other approaches, which I have used:
The first one is pretty simple: create a DNS entry for develop.mydomain.com pointing to 127.0.0.1, and a single certificate for a (sub)domain where you control the DNS. Distribute that certificate to all your developers. They still need to set up SSL themselves, but they no longer need to generate certificates. It has the added benefit that everybody develops against https://develop.mydomain.com, which lets them share the configuration.
For bonus points, create a DNS entry for *.develop.mydomain.com and a wildcard certificate, and your developers can have different sites (e.g. https://project1.develop.mydomain.com and https://project2.develop.mydomain.com) on their local machines; a vhost sketch is below. (Contrary to what the internet sometimes tells you, name-based virtual hosting works fine with SSL as long as the certificate is valid for all the named hosts.) Since the domain is the same for everybody, you can consider getting a valid wildcard certificate to get rid of the warnings.
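To illustrate, one developer's Apache vhost might look like this (the certificate paths and the backend port are placeholders):
<VirtualHost *:443>
    ServerName project1.develop.mydomain.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/wildcard.develop.mydomain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/wildcard.develop.mydomain.com.key
    # Forward to whatever the local dev server listens on
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>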
Unlike the solutions below, this works even outside the office network, which may be relevant when people are working from home or at a customer's site.
Building on this, you could also create DNS entries for the internal IPs of the developer machines (if those are fixed). This does add some work, but it means a developer's work in progress can be reached by others on the local network, which can be very convenient for demos, testing on mobile devices, etc.
Another option is to configure a single machine as a proxy for all your developers. Create a DNS entry pointing to the internal IP of this box on something like *.develop.mydomain.com, get a matching wildcard certificate, and configure that box once with the correct certificate. Now you can create a virtual host for each proxied server, and again, all sites will be reachable throughout the local network; it does, however, require the developers to have fixed IP addresses (or hostnames added to DNS through DHCP).
Combined with Apache's ability to include all files in a directory in its configuration, it is trivial to create a script which adds a new site based on a template. All it has to do is write a new file based on the requested subdomain plus the destination, and reload the Apache config. This means something like a simple PHP script can do what you want; a shell sketch follows.
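As a rough illustration (the paths, domain, and certificate locations are all placeholders), such a helper script on the proxy box could be as small as:
#!/bin/sh
# Usage: add-site.sh project1 http://192.168.1.23:3000
NAME="$1"
TARGET="$2"
# Write a new vhost file into the directory Apache includes
cat > "/etc/apache2/sites-enabled/${NAME}.develop.mydomain.com.conf" <<EOF
<VirtualHost *:443>
    ServerName ${NAME}.develop.mydomain.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/wildcard.develop.mydomain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/wildcard.develop.mydomain.com.key
    ProxyPass / ${TARGET}/
    ProxyPassReverse / ${TARGET}/
</VirtualHost>
EOF
# Reload the Apache configuration without dropping connections
apachectl graceful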
If you want to expose the web server on your local machine to the internet, try Runscope Passageway; it's easy to set up and "just works" (from experience).
Another alternative is ngrok, which I've also used, but it didn't always work for me.

.htaccess AuthUserFile has no effect or is being ignored

After hours of searching the web and trying dozens of unsuccessful solutions, here is my question.
I'm currently configuring a web server on RHEL 6.4 with httpd 2.2.15, sitting behind another RHEL 6.4 server that runs squid 3.1.10, HTTPS only. I'm also using mod_rpaf to simplify logging and identification of visitors behind the proxy.
My problem is configuring a simple password-protected folder. When I try to access the folder, the password dialog pops up with the configured AuthName, so I know that the .htaccess is being parsed. But the dialog does not accept the correct credentials, and I get a 401 error.
I messed around with:
different permissions for .htaccess, .htpasswd and parent folders
different absolute locations for the .htpasswd
all activated Apache modules that are available on my system
different encryption algorithms for .htpasswd (crypt, md5, sha, salted sha...)
AllowOverride All on the protected and parent folder
But what I really do not understand is that even if I put a wrong location for AuthUserFile, there is no error message in Apache's error_log - like the well-known "Permission denied: Could not open password file" - not even on LogLevel debug. Therefore I think that something is wrong with the AuthUserFile directive itself.
I hope there is someone out there who knows a better method to identify the problem.
This is my simple .htaccess I'm using for testing:
AuthType Basic
AuthName "Test123"
#AuthUserFile /var/www/test/.htpasswd
# Deliberately nonexistent path - should log an error, but doesn't:
AuthUserFile /notexisting
Require valid-user
Finally I got it to work!
I tracked the error down to the squid reverse proxy by using lynx on the web server itself and successfully accessing the protected folder from there.
With my new focus on squid I started googling again. The very first link took me to the correct answer: squid did not allow Apache to handle user authentication.
Resolution:
Add login=PASS to the cache_peer line in your squid.conf, as sketched below.
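For illustration only (the host, port, and peer name here are placeholders, not from the original answer), the relevant squid.conf line might look like:
# login=PASS forwards the client's Authorization header to the origin server
cache_peer 192.168.1.10 parent 443 0 no-query originserver ssl login=PASS name=webserver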

Apache - Mercurial - Authentication - Active Directory groups / LDAP groups

OS: Linux OpenSUSE
Version control - Mercurial
Apache2
I open http://my.os.name/ - it gives me a page, thus Apache is running.
I open http://my.os.name:/hg - it shows me the Mercurial page, thus Mercurial is being served over HTTP (viewed in Internet Explorer).
I'm able to create repositories/or do normal work in Mercurial.
What I need:
1. When I open the above Hg link, then instead of showing me the Mercurial (Hg) repository home page, it should first check whether I belong to my company or not, i.e. it should authenticate against Windows Active Directory / an LDAP server.
2. If I make any changes to a file, or create a directory/repository in Hg, it should authenticate/verify whether I have valid access to perform that operation.
HOW can I do this? I need step-by-step help, as I'm new to Apache/Mercurial authentication setup.
I have read almost all the online help on setting this up, and so far I've gotten to the point where, when I open the Hg link, I get a popup prompt for username/password - but it's not accepting my credentials / not working.
I also don't want to create .htpasswd/.htaccess or digest files. What I'm wondering is: if in Windows Active Directory I have security groups created, for example Company/Project1_Readers, Company/Project1_Contributors, Company/Project1_Repository1_Readers, Company/Project1_Repository2_Contributors..., and those AD security groups contain all the developers, then I want to use those AD groups to grant access to developers instead of adding the users to every repository's .hg/hgrc file.
(This is what we usually do in TFS (Team Foundation Server) to grant/revoke access, instead of messing with files (adding/removing users) in every repository, etc.)
How can I do the above?
Kindly advise whether the best way really is to create .htpasswd/.htaccess/.htdigest files, if I'm wrong in how I'm trying to achieve the above scenario.
My httpd.conf file includes another .conf file, which contains:
=========================================
<Directory /srv/www/hg>
    Order deny,allow
    Deny from All
    AuthType Basic
    # #AuthName "Apache Web Site: Login with your AD(Active Directory) credentials"
    AuthName "Mercurial Repositories"
    #
    #
    # AuthBasicProvider ldap
    # AuthzLDAPAuthoritative off
    # #AuthLDAPURL ldap://10.211.16.1:389/OU=TSH,DC=tsh,DC=Mason,DC=com?sAMAccountName
    # AuthLDAPURL "ldap://10.211.16.1:389/?samAccountName?sub?(objectClass=user)"
    ## #ldap://ldap.your-domain.com:389/o=stooges?uid?sub
    # AuthLDAPBindDN "cn=xyzserver,OU=Services,OU=Users,OU=Infrastructure,OU=DEN,OU=KSH,DC=Psh,DC=Mason,DC=com"
    # #"cn=StoogeAdmin,o=stooges"
    # AuthLDAPBindPassword secret1
    require valid-user
    # require ldap-user
    Satisfy any
</Directory>
When I use the above LDAP URL in Jenkins, Jenkins successfully authenticates users at login, so why does the same not work in this server's .conf file? Note that in Apache 2 the above doesn't have to be in the httpd.conf file itself: the Include mechanism lets me pull in file.conf, and file.conf contains the above code. This is per the Apache 2 directions mentioned in httpd.conf.
The rest of the Mercurial files - hgwebdir.cgi, hgweb.cgi, hgweb.config - are all good (as per the online blogs I have read).
I have all the required modules loaded (they are visible in the /etc/apache2/sysconfig.d/loadmodule.conf file - the modules required for LDAP auth, i.e. mod_ldap, mod_authz_ldap, etc.).
OK - the prompt part, which was not accepting my Windows LDAP credentials, is now working.
What did I have wrong?
- See the lines for AuthLDAPURL and AuthLDAPBindPassword; those were the culprits in my post shown above.
- The cause was that I was new to the Windows AD/LDAP concept and couldn't get hold of anyone from the systems team in my company, so I tried my own hands. The first line, AuthLDAPURL, I got from the global configuration file (config.xml) of one of our Jenkins instances.
The Jenkins GUI doesn't show passwords (they are masked), so there you'll see the manager DN's password as "* * * * * *".
So I thought I should open the config.xml file of the Jenkins instance, and I got the password "secret1" from there. Actually "secret1" is just an example; in reality it was some crazy value like "VVX12##!5GH".
That's what I used at first, and it didn't work: for LDAP authentication to work correctly, you have to talk to someone on the systems team, or to the person who actually set up LDAP authentication in the Jenkins instance.
Finally I got the right password, and it worked.
Resolution: see below what I changed.
One important thing to notice is that in Jenkins, the AuthLDAPURL for LDAP was:
AuthLDAPURL ldap://10.211.16.1:389/OU=TSH,DC=tsh,DC=Mason,DC=com?sAMAccountName
but from a Unix/Linux machine (in my case, SUSE), we have to change this line a little bit, to:
AuthLDAPURL ldap://10.211.16.1:389/OU=TSH,DC=tsh,DC=Mason,DC=com?sAMAccountName?sub
For more on this (connecting Apache 2.2 to Windows AD (Active Directory) authentication), see:
http://www.yolinux.com/TUTORIALS/LinuxTutorialApacheAddingLoginSiteProtection.html
And then I put the correct password for cn=xyzserver (the manager DN user id) in the file, and all was good.
A snapshot of the Apache config file (or of the file you created separately and included via your httpd.conf, or through the /etc/sysconfig/apache2 file (variable APACHE_INCLUDE...)) now looks like:
<Directory /srv/www/htdocs/hg>
    Order deny,allow
    Deny from All
    AuthType Basic
    AuthName "LDAP Access - Mercurial"
    AuthBasicProvider ldap
    AuthzLDAPAuthoritative off
    AuthLDAPURL ldap://10.211.16.1:389/OU=TSH,DC=tsh,DC=Mason,DC=com?sAMAccountName?sub
    #AuthLDAPURL "ldap://10.211.16.1:389/OU=TSH,DC=Mason,DC=com?sAMAccountName?sub?(objectClass=*)"
    AuthLDAPBindDN "cn=xyzserver,OU=Services,OU=Users,OU=Infrastructure,OU=DEN,OU=KSH,DC=Psh,DC=Mason,DC=com"
    AuthLDAPBindPassword CorrectPassword!
    # require ldap-user c149807
    # AuthUserFile "/dev/null"
    require valid-user
    Satisfy any
</Directory>
Now that the auth part works from IE (Internet Explorer) to Hg (Mercurial) on the Linux/Unix/OpenSUSE machine, I'll work on the user-access part for the actual repositories; a group-based sketch follows.
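For the AD-group-based access the question asks about, mod_authnz_ldap can require membership of an AD group instead of just a valid user; a hypothetical sketch (the group DN below is a placeholder):
# Grant access only to members of one AD security group
AuthLDAPGroupAttribute member
AuthLDAPGroupAttributeIsDN on
Require ldap-group CN=Project1_Contributors,OU=Groups,DC=Psh,DC=Mason,DC=com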
If you are prompted multiple times for user credentials in Mercurial, set up Mercurial_Keyring, and then this question comes up, which nobody has explained in an easy way:
How do you make the [auth] xx.prefix = servername/hg_or_something entries work for all repositories under the servername/hg location, whether you use the server name, the server's IP, or the server's FQDN?
ANSWER:
OK, I put this in ~/.hgrc (on Linux/Unix, the home directory's hidden .hgrc file), or for Windows users in %UserProfile%/mercurial.ini or %HOME%/mercurial.ini:
[auth]
# Short server name
default1.schemes = http https
default1.prefix = hg_merc_server/hg
default1.username = c123456
# Fully-qualified domain name
default2.schemes = http https
default2.prefix = hg_merc_server.company.com/hg
default2.username = c123456
# IP address
default3.schemes = http https
default3.prefix = 10.211.222.321/hg
default3.username = c123456
Now I can clone/check out using the server name, the IP, or the server's FQDN.

Example of using AuthType Digest to authenticate a user once across sub-domains?

I have a domain that will be accessed by a small, private group of people. So I want to control access via authentication.
The domain has a collection of applications installed that each have their own sub-domain. Eg: domain.com, app1.domain.com, app2.domain.com, app3.domain.com
I'd love to have a single sign-on solution so they don't have to authenticate themselves for each application. Also, the applications are written in different languages (PHP, Python and Perl) so authenticating users through an Apache module is ideal.
I am new to digest authentication, but it seems like a good solution. I have used htdigest to create my users. I have configured my domain and sub-domains (See below).
If I go to the domain or any of the sub-domains it will prompt for a username and password. If I enter a correct username and password, it will authenticate me and the page will load. However, if I go to another sub-domain, it will ask for me to enter a username and password again. If I enter the same username and password, it will work.
So the password file is OK, and authentication is OK, but the problem seems to lie in the configuration of the AuthDigestDomain.
I have searched all over the net to find an example of using Digest authentication on multiple domains, but I cannot find a specific example that solves my problem.
I am hoping someone here can assist. Do I put the same authentication information in every Directory? Should I be using Directory, Location, or Files? Have I missed something altogether?
Thanks in advance!
Below is an example of my Apache config for domain.com:
<Directory /var/www>
    AuthType Digest
    AuthName "realm"
    AuthDigestAlgorithm MD5
    AuthDigestDomain / http://domain.com/ http://app1.domain.com/ http://app2.domain.com/ http://app3.domain.com/
    AuthDigestNcCheck Off
    AuthDigestNonceLifetime 0
    AuthDigestQop auth
    AuthDigestProvider file
    AuthUserFile /etc/apache2/.htpasswd-digest
    AuthGroupFile /dev/null
    Require valid-user
</Directory>
And here is an example of app1.domain.com:
<Directory /var/lib/app1>
    AuthType Digest
    AuthName "realm"
    AuthDigestAlgorithm MD5
    AuthDigestDomain / http://domain.com/ http://app1.domain.com/ http://app2.domain.com/ http://app3.domain.com/
    AuthDigestNcCheck Off
    AuthDigestNonceLifetime 0
    AuthDigestQop auth
    AuthDigestProvider file
    AuthUserFile /etc/apache2/.htpasswd-digest
    AuthGroupFile /dev/null
    Require valid-user
</Directory>
To baffle things even further, this works when using IE6, but not Firefox or Chrome. Is it the clients not sending the authentication properly, or is it the server not sending the correct credentials?
I have also been reading up on RFC 2617 and have written the authentication headers using PHP to ensure that the request/response challenge is correct. This hasn't helped at all!
Most browsers do not respect the Digest "domain" directive and will not resend credentials for other URIs. As far as I know, Opera is the only browser that honors it.
For Opera, the server(s) must respond with the same "realm" string for each URI in the domain list. In other words, if domain="/test /example", the server needs to send "Test Realm - example.com" in the WWW-Authenticate header for both of those URIs. (An example challenge is shown below.) I assume Opera does this because it stores H(A1) instead of the actual password, for security. Read into RFC 2617 for more on this.
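For reference, a Digest challenge carrying the domain directive looks roughly like this (the nonce and opaque values are the examples from RFC 2617; everything else is illustrative for this question's setup):
WWW-Authenticate: Digest realm="realm",
    qop="auth",
    algorithm=MD5,
    domain="http://domain.com/ http://app1.domain.com/ http://app2.domain.com/ http://app3.domain.com/",
    nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
    opaque="5ccc069c403ebaf9f0171e9517f40e41"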
Here's my cross-browser solution to this problem: http://travisce.com/arest/
I have no experience with something like this myself, but I just took a look at the Apache documentation and found this:
The AuthDigestNonceLifetime directive controls how long the server nonce is valid. [...] If seconds is less than 0 then the nonce never expires.
So it seems to me that 0 seconds (the value you are using) is either illegal, or really tells Apache to expire the nonce after 0 seconds, which would exactly explain the behavior you are getting.
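If that is the cause, setting an explicit positive lifetime (or a negative one for a never-expiring nonce) would be worth a try; the 300-second value here is arbitrary:
# Keep the nonce valid for 5 minutes (a negative value means "never expires")
AuthDigestNonceLifetime 300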
Could a wildcard on the AuthDigestDomain help?
*.domain.com