Why choose mod_dav_svn instead of svnserve & a repository browser?

Please correct me if I am wrong about my understanding of mod_dav_svn, which is that it basically serves 2 purposes:
Expose the SVN repository (on the filesystem) to clients, which can be either:
repository browsers (e.g. web)
the 'svn' command itself, which is a client command line program
Act as a repository browser to make the repository viewable in a convenient way
Now for point 1, are my following assumptions correct?
Anytime a repository is exposed using mod_dav_svn, the http:// or https:// form of accessing the repository is used
If using svnserve, the svn:// form of accessing the repository is used
In this case, mod_dav_svn would serve no additional use
For point 2, if using Trac's repository browsing functionality, there is no additional use for the repository browsing functionality offered by mod_dav_svn?
Does mod_dav_svn serve any other purpose I haven't outlined here? Asked another way, is there any disadvantage to going with svnserve and Trac?
I ask because I get the impression that mod_dav_svn is very commonly used, so I wonder what I'm missing.

Forget Point #2: HTTP Browsing. That's just a slight bonus. It doesn't replace your need for something like Fisheye, ViewVC, or (my favorite) Sventon.
There are some disadvantages to using Apache httpd for your Subversion server:
It's slower
It's harder to set up
Then, there are advantages:
It uses a standard port (80) that's not normally blocked by firewalls.
It can be integrated with LDAP and Active Directory
You can use HTTPS which will encrypt updates and checkouts (including user passwords).
You can serve multiple repositories from the same Apache httpd instance (for example with SVNParentPath, as in the sketch below). svnserve can also serve several repositories below a common root directory, but it still sits on its own port and gives you none of the Apache-level features above.
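As a rough illustration of that last point, a minimal httpd configuration that publishes every repository under one parent directory might look like this (the /svn location, /var/svn path, and auth file are placeholders for your own layout):

    # Load the WebDAV/SVN modules (module paths vary by distribution)
    LoadModule dav_module     modules/mod_dav.so
    LoadModule dav_svn_module modules/mod_dav_svn.so

    <Location /svn>
        DAV svn
        # Every repository under /var/svn becomes http://server/svn/<name>
        SVNParentPath /var/svn
        SVNListParentPath on

        # Simple file-based auth; swap this for LDAP/AD in a corporate setup
        AuthType Basic
        AuthName "Subversion repositories"
        AuthUserFile /etc/svn-auth-users
        Require valid-user
    </Location>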
My personal take: if you are in a corporate environment, the advantages of using the HTTP or HTTPS protocol far outweigh the disadvantages. If you are talking about a small repository shared between you and your friends, I'd run svnserve simply because of the lower overhead and easier setup. In those circumstances, though, I'd just use GitHub and not worry about it.
I run Subversion as my personal source control system on my machine, and I use svnserve in that instance.
Thanks, some follow-up questions. 1) When I access a URL on my svn server as svn://server/repo, isn't that using port 80 as well? 2) If LDAP integration can't be done for svnserve, is the only way for users to authenticate to be listed in the file referred to by password-db in svnserve.conf (for svn://) or to have a shell account (for svn+ssh://)? 3) Can't the same protection offered by https:// be offered by svn+ssh://, or is there a difference?
It's using port 3690 by default. This can be changed when you run svnserve, but then your svn URL has to reflect that too.
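For example (the repository root is a placeholder):

    # Default: listens on 3690 and serves everything under /var/svn
    svnserve -d -r /var/svn

    # Non-standard port; clients then need svn://server:8000/repo
    svnserve -d -r /var/svn --listen-port 8000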
Pretty much true. Most places that use svnserve use the passwd file. Since version 1.5, however, you can also use SASL, though I have never seen anyone use it.
Yes, svn+ssh:// does give you encrypted traffic. However, SSH can be tricky to implement. Basically, an svnserve process has to be spawned and run as each particular user, which means every user needs direct read/write access to the repository. You need to set up the umask for each user and create a Subversion Unix group that everyone belongs to (a rough sketch follows below), and then, because these users have direct access to the repository files, keep them from logging onto the repository server interactively. The online manual has complete details. In the end it only really works with Unix servers and Unix clients; Windows clients don't have SSH on them and would have to install it. I've tried it a few times, but https:// is much easier.
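The Unix-side setup usually boils down to something like this (group name and repository path are placeholders; see the manual for the umask wrapper scripts):

    # Put everyone who needs svn+ssh access into one group
    groupadd svnusers
    chgrp -R svnusers /var/svn/repo
    chmod -R g+rw /var/svn/repo

    # setgid bit so files created later keep the group
    find /var/svn/repo -type d -exec chmod g+s {} \;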

The simplicity of svnserve makes it a no-brainer for quick and dirty installs, especially if you are deploying on Windows.
However, the moment you find yourself memorizing a lot of passwords and wishing that the Subversion repository used the same SSO mechanism as the rest of the organization, Apache's authentication mechanisms coupled with mod_dav_svn help a lot (see the sketch below).
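For instance, pointing the Subversion location at the organization's directory is mostly a matter of swapping the auth directives; a rough mod_authnz_ldap sketch, where the server, bind DN, base DN, and attribute are placeholders for your own directory:

    <Location /svn>
        DAV svn
        SVNParentPath /var/svn

        AuthType Basic
        AuthName "Company Subversion"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://ldap.example.com/dc=example,dc=com?sAMAccountName?sub"
        AuthLDAPBindDN "cn=svc-svn,ou=service,dc=example,dc=com"
        AuthLDAPBindPassword "secret"
        Require valid-user
    </Location>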
Prior to Subversion 1.7, mod_dav_svn's performance was said to be atrocious and known to be slower than svnserve. Subversion 1.7 supposedly offers a faster and simpler HTTP protocol, which should make mod_dav_svn more palatable to use.

Change remote directory ownership without ssh

First, I feel very silly.
For fun/slight profit, I rent a VPS which hosts an email and web server and which I use largely as a study aid. Recently, I was in the middle of working on something and managed to lose my connection to the box right after accidentally changing the ownership of my home folder to an arbitrary, incorrect non-root user. As sshd denies root logins and anything but pubkey authentication, I'm in a bad way. Though the machine is up, I can't access it!
Assuming this is the only issue, a single chown should fix the problem, but I haven't been able to convince my provider's support team to do this.
So my question is this: have I officially goofed, or is there some novel way I can fix my setup?
I have all the passwords and reasonable knowledge of how all the following public facing services are configured:
Roundcube mail
Dovecot and postfix running imaps, smtps and smtp
Apache (but my websites are all located in that same home folder, and so aren't accessible - at least I now get why this was a very bad idea...)
Baikal calendar setup in a very basic fashion
phpMyAdmin, but with MySQL's file creation locked to a folder which Apache isn't serving
I've investigated some very simple ways to 'abuse' some of the other services in a way that might allow me either shell access, or some kind of chown primitive, but this isn't really my area.
Thanks!!
None of these will help you; of the services you listed, none has the ability to restore the permissions.
All the VPS providers I've used give "console" access through the web interface. This is equivalent to sitting down at the machine, including the ability to login or reboot in recovery mode. Your hosting provider probably offers some similar functionality (for situations just like this, or for installing the operating system, etc), and it is going to be your easiest and most effective means of recovery. Log in there as root and restore your user's permissions.
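Once you have a root shell on that console, the fix is the single chown you already identified (username and path are whatever yours actually are):

    # Restore ownership of the home directory and everything in it
    chown -R youruser:youruser /home/youruser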
One thing struck me as odd,
I haven't been able to convince my provider's support team to do this.
Is that because they don't want to do anything on your server which you aren't paying them to manage, or because they don't understand what you're asking? The latter would be quite odd to me, but the former scenario would be very typical of an unmanaged VPS setup (you have root, console access, and anything more than that is your problem).

Symfony permission recommendation: same user cli and webserver

I read this recommendation in the installation guidelines from Symfony:
1. Use the same user for the CLI and the web server
In development environments, it is a common practice to use the same UNIX user for the CLI and the web server because it avoids any of these permissions issues when setting up new projects. This can be done by editing your web server configuration (e.g. commonly httpd.conf or apache2.conf for Apache) and setting its user to be the same as your CLI user (e.g. for Apache, update the User and Group values).
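Concretely, on a local development Apache that amounts to something like this (the user/group name is a placeholder for your own login):

    # httpd.conf / apache2.conf -- development machine only
    User  myuser
    Group myuser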
Is this only good practice for local development environments, or should I do this on my public test & prod servers as well? To me this doesn't seem like a very secure configuration.
Questions: Can I safely follow this recommendation on a prod server? What are the risks, if there are any?
This recommendation gives an easy way to avoid the common permissions problems.
I would prefer to set up the web server permissions correctly once and keep the default web server user/group.
The documentation has a good guide to achieve this.
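That guide essentially grants both users write access to the writable directories with ACLs; roughly (the var/ directory and the www-data user follow Symfony's and Debian's defaults, so adjust them for your distribution):

    # Give the web server user and the current CLI user rwX on var/,
    # both for existing files (-R) and for files created later (-dR)
    HTTPDUSER=www-data
    setfacl  -R -m u:"$HTTPDUSER":rwX -m u:"$(whoami)":rwX var
    setfacl -dR -m u:"$HTTPDUSER":rwX -m u:"$(whoami)":rwX var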
EDIT
You shouldn't make your CLI user your web server user, especially in production, because it opens you up to all kinds of potential abuse.
The whole point of the www-data user is that it is an unprivileged user, by default not able to write to any file.
Your CLI user is usually far more privileged (sometimes even root); keeping www-data as the web server user protects you from bad manipulations that could otherwise cause a lot of problems and potential security issues.
Plus, if your web server comes under attack, other services that depend on the same user can also be compromised.
Server daemons accessible from the outside network (such as the web server) typically run as an unprivileged user so that, in the event they are hacked through a vulnerability, the damage an attacker can do is minimal.

Explain CouchDB's serving of websites, is CouchDB bundled somehow with Apache and how does it work?

I am trying to understand how CouchDB works. Does it come bundled with its own Apache, or does it use the Apache already on the system? I am trying to understand how it determines where to serve the site and how requests are directed, because I want to put Apache 2.2's mod_proxy module in front of it. Do I need to tune CouchDB, or do I need to tune a separate Apache process? Suppose you have 10 CouchDB processes and you want to direct their results to siteA; how can you do that?
Sorry if this is vague, but I am trying to understand how to combine different things from one site to another, with different authorization cookies etc. I have two separate sites, hello.com/myCouchDb/ and hallo.de/someOthersite.html, each working on its own. When I merge the code, the authentication fails. I see at least three candidate solutions:
A) redirect the verification things from one site to the other (a bit hackish), and/or
B) somehow configure the CouchDB Apache settings; I have tried in Futon but failed, and/or
C) store the authentication cookies in some directory or DB and refresh them when they become old (or use cookies that never expire)
So how can I merge different CouchDB instances together with different authentication settings? Suppose you have ten people with different authentication cookies and you want to incorporate them all into the same site. How can you do it? Do you tune network settings, Apache settings or CouchDB settings? Or do you just store the cookies in some directory or DB and refresh them every time they become old?
P.S. I am the admin, so do not worry about OAuth 2.0; I have the authentication cookies to do whatever I want with the different instances. I just cannot understand how to merge them.
Perhaps related
CouchDB proxy? Apache As a Reverse Proxy?
https://stackoverflow.com/questions/12398389/different-definitions-of-the-term-proxy
What is a proxy? What is it in Apache? Does it have many different meanings?
It sounds like you're confused about the structure of CouchDB. CouchDB is a native JSON Database that has an HTTP API. That API is provided via Mochiweb, an Erlang based webserver that is bundled inside CouchDB. There's only one CouchDB server running, but it runs inside the Erlang Virtual Machine (BEAM) and has a fundamentally different architecture to the typical Apache httpd approach.
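So no system Apache is involved unless you put one in front yourself. If you do want Apache 2.2's mod_proxy to publish a CouchDB instance under one of your sites, the sketch is roughly this (the hostname and the /couchdb/ prefix are placeholders; 5984 is CouchDB's default port):

    # httpd.conf -- forward /couchdb/ on siteA to the local CouchDB
    LoadModule proxy_module      modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    <VirtualHost *:80>
        ServerName siteA.example.com
        ProxyPass        /couchdb/ http://127.0.0.1:5984/
        ProxyPassReverse /couchdb/ http://127.0.0.1:5984/
    </VirtualHost>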
Regarding authentication, CouchDB has a per-instance (server) _users database that contains passwords and minimal account details. As an admin you can see this using Futon, although normal users only have access to their own profile. You can assign users into various roles, and then apply those roles and users to each database. Once the _security object is set on a DB, you need to be authenticated to read, and you can use validation update functions to enforce constraints on write. Some brief information on http://blog.couchbase.com/what%E2%80%99s-new-couchdb-10-%E2%80%94-part-4-security%E2%80%99n-stuff-users-authentication-authorisation-and-permissions and http://blog.mattwoodward.com/2012/03/definitive-guide-to-couchdb.html as well as on the wiki.
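As a small example of that last step, setting the _security object on a database over the plain HTTP API looks roughly like this (host, credentials, database name, and role are placeholders; very old releases use "readers" instead of "members"):

    # Restrict the 'invoices' database to users with the 'staff' role
    curl -X PUT http://admin:password@127.0.0.1:5984/invoices/_security \
         -H "Content-Type: application/json" \
         -d '{"admins":  {"names": [], "roles": ["admin"]},
              "members": {"names": [], "roles": ["staff"]}}'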

Ideal railo + tomcat vhost setup for busy production server

I'm migrating a lot of websites from Resin 3 to Tomcat 7 (centos 4/apache 2.20) and I'm struggling to determine what type of configuration matches my requirements. In particular:
proxy_ajp vs mod_jk vs mod_proxy for passing requests to Tomcat/Railo
automating deployment of new sites
putting WEB-INF outside the site roots (to simplify cloning sites)
using apache itk with tomcat so each vhost runs as a different user and process
having a single shared railo server administrator config
support for SES URLs with no extension (ie: /path/to/page)
SSL support required
I've read a lot of howtos already but most are out of date or provide conflicting advice. I would like to see some examples from people who run many railo vhosts and deploy them automatically or programmatically. In general I'd prefer efficiency/speed over simplicity as I want to get the most out of limited resources.
I could have asked these questions separately but I want to be sure any answers take into account all the above factors (assuming the requirements are actually compatible).
Firstly, check out the Vivotech installers - they are a hosting company, so use their installers as your base; they are flawless (and they use Tomcat).
Railo 3.3 makes it a lot easier to deploy contexts from the admin, so scripting this shouldn't be that hard.
WEB-INF should be put into a site automatically when it is defined in Tomcat.
If you give each user a new context root, they will have their own admin.
Every web server (Apache, IIS on Windows 2008, even Tomcat) supports URL rewriting.
Everything supports SSL.
You might also want to look at how you're going to tune your JVMs for this scenario, then do some load testing to see how they fare.
Drop an email to Sean Corfield - google Railo and his name and you'll get his address.
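For the Apache-to-Tomcat piece specifically, a bare-bones mod_proxy_ajp vhost looks roughly like this (hostname, document root, and the default 8009 AJP connector port are assumptions; mod_jk works too but needs a separate workers.properties):

    LoadModule proxy_module     modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

    <VirtualHost *:80>
        ServerName site1.example.com
        DocumentRoot /var/www/site1

        # Hand every request to the Tomcat/Railo AJP connector
        ProxyPreserveHost On
        ProxyPass        / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/
    </VirtualHost>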

For a SaaS running on Node.JS, is a web-server (nginx) or varnish necessary as a reverse proxy?

For a SaaS running on Node.JS, is a web-server necessary?
If yes, which one and why?
What would be the disadvantages of using just Node? Its role is just to handle the CRUD requests and serve JSON back for the client to parse the data (like Gmail).
"is a web-server necessary"?
Technically, no. Practically, yes a separate web server is typically used and for good reason.
In this talk by Ryan Dahl in May 2010, at 37'30" he states that he recommends running node.js behind a reverse proxy or web server for "security reasons". To elaborate on that, hardened web servers like nginx or apache have had their TCP stacks evolve for a long time in terms of stability and security. Node.js is not at that same level yet. Thus, since putting node.js behind nginx is easy, doesn't have many negative consequences, and in theory increases the security of your deployment somewhat, it is a good choice. At some point in time, node.js may be deemed officially "ready for live direct Internet connections" but wait for Ryan/Joyent to make some announcement to that effect.
Secondly, binding to sub-1024 ports (like 80 and 443) requires the process to be root. nginx and others automatically handle binding as root and then dropping privileges to a safer user account (www-data or nobody typically). Although node.js has system call wrappers in the process module to drop root privileges with setgid and setuid, AFAIK other than coding this yourself the node community hasn't yet seen a convention emerge for doing this. More on this topic in this discussion.
Thirdly, web servers are good at virtual hosting and in general there are convenient things you can do (URL rewriting and such) that require custom coding in node.js to achieve otherwise.
Fourthly, nginx is great at serving static files. Better than node.js (at least by a little as of right now). Again as time goes forward this point may become less and less relevant, but in my mind a traditional static file web server and a web application server still have distinct roles and purposes.
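Putting the last three points together, a minimal nginx front end for a node.js app might look like this (server name, ports, and paths are placeholders):

    server {
        listen 80;
        server_name app.example.com;

        # nginx serves the static assets itself
        location /static/ {
            root /var/www/app;
        }

        # everything else is proxied to the node.js process on a high port
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }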
"If yes, which one and why"?
nginx. Because it has great performance and is simpler to configure than apache.