I am fairly new to Puppet, but not new to the server administration world.
I've managed to get a Puppet master up and running, as well as a Puppet node on a different machine, and I've been working on configuring the node as a web server. I've configured my first non-SSL vhost and all seems well.
I then went to set up an SSL vhost, but this is where I'm running into what I suspect is a trivial problem. I have the SSL key, certificate, CSR, and intermediate chain file. However, after googling, I can't seem to get these files copied over to the node automatically through Puppet.
Any help would be welcomed!
You want to organize your site manifest into modules. Modules let you store files in canonical locations so that you can deploy them to agents using the file resource type, e.g.
file { "/etc/apache2/ssl/chain-and-key.pem":
ensure => file,
source => "puppet:///mymodule/etc/apache2/ssl/chain-and-key.pem";
}
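For that source URI to resolve, the file has to live under the module's files directory on the master. A minimal sketch, assuming your modulepath is the default /etc/puppet/modules (adjust if yours differs):

mkdir -p /etc/puppet/modules/mymodule/files/etc/apache2/ssl
cp chain-and-key.pem /etc/puppet/modules/mymodule/files/etc/apache2/ssl/

Note that the files/ path segment is implied by the URI and does not appear in it.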
I have an OpenShift environment built with an Apache AAA pod (service and route) that allows external-to-OpenShift HTTPS requests from an intranet browser (yes, I mean intranet, not internet). Apache is set up as a proxy server for multiple pods/services inside OpenShift. I also have a recently created pod that runs Jenkins, which has a built-in web interface. I am able to get to the Jenkins web GUI by setting up a ProxyPass and ProxyPassReverse for the default Jenkins web address.
Now here comes the problem...
When I go to example.com/jenkins, Apache sees the request and passes it to the Jenkins pod, but the pod redirects to another address, example.com/login. For this I have to enter another ProxyPass and ProxyPassReverse into Apache. Once that is in, I find that every link on the presented Jenkins web GUI points at https://example.com/*. This is a problem because there are dozens of sub-links and sub-pages, each of which seems to require a separate ProxyPass and ProxyPassReverse entry.
To add to this, I cannot simply pass "/" to the Jenkins pod, because other pods and services are also being passed through the Apache server. My department cannot create new URLs on a whim, so I have to stick with example.com/ as my only path into my OpenShift setup.
How can I do one of the following:
Change Jenkins to prefix every URL it presents, e.g. putting .../jenkins/ in front of every link, so that I can use a single ProxyPass & ProxyPassReverse for .../jenkins/ to cover all Jenkins web GUI URLs.
Configure Apache to rewrite the URLs coming from the Jenkins pod before they are presented to the browser, so that .../jenkins/ is inserted between the hostname and /login or any other Jenkins web link.
Some other option that I have not thought of yet that may have worked for others with similar setups.
(Sorry for the long question but there are a lot of details that needed to be included as this is a complex issue.)
You could start Jenkins at a different context path: java -jar jenkins.war --prefix=/jenkins, or run it behind Tomcat with a different context path.
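With Jenkins running under the /jenkins prefix, one pair of proxy directives should then cover every sub-link. A minimal sketch, assuming the Jenkins pod's service is reachable as jenkins-pod:8080 (substitute your actual service address):

# forward everything under /jenkins to the Jenkins pod
ProxyPass        /jenkins http://jenkins-pod:8080/jenkins nocanon
ProxyPassReverse /jenkins http://jenkins-pod:8080/jenkins
# commonly recommended when proxying Jenkins
ProxyRequests       Off
AllowEncodedSlashes NoDecode

The key point is that the context path on the backend matches the path the browser sees, so Jenkins generates links that already start with /jenkins.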
Have you set the Jenkins URL under Jenkins -> Manage Jenkins -> Configure System?
You can achieve this in two steps:
1. implement the route changes at the proxy level
2. implement the route changes at the app level
I have implemented the same in an OpenShift environment.
Thanks.
My introduction to Puppet and Foreman has been very painful, but I know there's a big community around it, so I'm hoping that someone can set me straight here.
I set up Foreman and Puppet using the foreman-installer and it went great; I had Foreman up and running and it worked well. However, when I added the OpenStack controller role to the machine, it wiped out the Apache vhosts for Foreman. I've scoured Google and GitHub for copies of the vhost files, but with no luck.
So the main questions here:
1) How do I locate/generate the Foreman vhosts for Apache?
2) How do I prevent Puppet from removing them again?
Thanks in advance all you Puppet Masters!
To prevent Puppet from blasting your Apache config, start managing that config through Puppet.
I'm not sure how your OpenStack controller role works, but it likely employs the puppetlabs-apache module, which will purge unmanaged configuration. You should use this module to configure the Foreman vhost on the machine.
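For example, a declaration along these lines would make the vhost "managed" so the module stops purging it. This is only a hedged sketch: the vhost that foreman-installer actually generates is more involved, and the docroot and certificate paths below are assumptions to adjust for your install:

# declare the Foreman vhost via puppetlabs-apache so it survives the purge
apache::vhost { 'foreman':
  servername => $::fqdn,
  port       => 443,
  ssl        => true,
  ssl_cert   => '/var/lib/puppet/ssl/certs/yourhost.pem',   # assumed path
  ssl_key    => '/var/lib/puppet/ssl/private_keys/yourhost.pem',
  docroot    => '/usr/share/foreman/public',                # assumed path
}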
As for getting it back: Puppet should have stored the contents of the deleted files in the clientbucket. Check the logs on that machine; there should be MD5 sums for all removed files. Use those to retrieve the contents, either through the filebucket tool or by manually trudging through /var/lib/puppet/clientbucket (or whatever puppet agent --configprint clientbucketdir yields).
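A sketch of what that recovery can look like; the file name and checksum here are placeholders, so pull the real ones from the agent's log:

# find the log entries Puppet wrote when it backed up the removed files
grep Filebucket /var/log/messages
# e.g. "Filebucketed /etc/httpd/conf.d/05-foreman-ssl.conf to puppet with sum <md5>"

# restore the file from the local clientbucket using that sum
puppet filebucket --local --bucket /var/lib/puppet/clientbucket \
  restore /etc/httpd/conf.d/05-foreman-ssl.conf <md5>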
We are using Apache Tomcat 7 for our web applications and we have decided to go to the production stage.
So now is the time to think about how to secure Tomcat and the machine. After reading "Apache Tomcat security considerations" we decided to run the Tomcat process as a dedicated user with minimal privileges.
From what I understand, the best option is to configure it so that the running Tomcat process has only read privileges on all the Tomcat files.
I figured I would do it this way. I would create 2 users:
- tomcat_process - only for running Tomcat
- admin - the one all the files belong to
tomcat_process will have read access to the conf directory, and will also be able to run the scripts in tomcat/bin/.
My main problem is that Tomcat needs to write to some files in $CATALINA_HOME/$CATALINA_BASE. I know I can change the location of the logs and work directories, and I thought I would point them at tomcat_process's home directory (is this even a good idea?).
But I can't find any information on whether I can change the path to the conf/Catalina directory. Is that possible?
I would like to avoid adding write access to the conf directory, since the whole configuration sits in there.
Or do you think I should leave those directories where they are and just add write privileges on them for tomcat_process?
I was wondering if you could tell me whether this is a correct approach, or whether I can do it better?
I'm so confused by all those security guides that tell me to restrict privileges but not how to do it :(
Keeping it simple is, I think, the key:
Create a new Tomcat instance for each (set of) web application(s), each with its own user.
Limit the Tomcat files to the Tomcat user only. On Linux you can use the chown/chmod commands for this (see the sketch after this list).
Place Tomcat behind a reverse proxy: Internet (https) <-> external firewall <-> Apache reverse proxy <-> internal firewall (block all unless whitelisted) <-> Tomcat
Delete all the standard webapps: manager, ROOT, docs.
Disable the shutdown command in server.xml (also shown below).
As for Java web applications, try to contain each in its own sandbox, meaning its own database and its own users.
To save maintenance effort, you could run multiple instances using one Tomcat binary and a single Tomcat user.
http://www.openlogic.com/wazi/bid/188102/How-to-Run-Multiple-Instances-of-Tomcat-on-a-Single-Server
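A sketch of the two lock-down steps above. The /opt/tomcat path and the user/group names are assumptions; adjust them to your layout:

<!-- server.xml: a port of -1 disables the shutdown command entirely -->
<Server port="-1" shutdown="SHUTDOWN">

# files owned by admin; the tomcat_process group gets read-only access
chown -R admin:tomcat_process /opt/tomcat
chmod -R 750 /opt/tomcat
# only the directories Tomcat must write to (logs, temp, work) stay writable
chown -R tomcat_process /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work

This matches the question above as well: conf stays read-only for tomcat_process, and only logs, temp, and work are writable.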
I'm running Apache 2.2.24 on Mac OS X 10.9.1. Currently, we have a network drive at /Volumes/GitWebsites where we access all of our Git repos. I would like to configure Apache to serve our PHP-based repos from that directory, so that localhost/phpsite1/, localhost/phpsite2/, etc. (or the 127.0.0.1 equivalents) serve the sites from /Volumes/GitWebsites/phpsite1/, /Volumes/GitWebsites/phpsite2/, and so on in the browser. My two questions are:
Do I simply modify the document root, or do I need to use mod_alias in the httpd.conf file?
What are the permission settings I need in order for Apache to access /Volumes/GitWebsites?
I've done configuration changes like this in IIS 7.5 and set up a Node.js dev environment, but I'm still new to making large-scale changes to Apache. Thanks for any help given.
If you are happy with serving the contents of /Volumes/GitWebsites as it is, then it should be fine to point the document root at it. That also makes it easy to add sites later.
However, this could be troublesome down the road if you want to manage the PHP configuration for the sites separately.
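A minimal httpd.conf sketch of that approach, in Apache 2.2 syntax to match your version (the directory options are assumptions to tune):

DocumentRoot "/Volumes/GitWebsites"
<Directory "/Volumes/GitWebsites">
    Options Indexes FollowSymLinks
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

As for permissions: Apache on OS X runs as the _www user by default, so that user needs read access to the files and execute (traverse) access on every directory in the path, e.g. chmod -R o+rX /Volumes/GitWebsites.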
So I have managed it: I can clone Mercurial repositories over HTTP from my Windows Server 2003 machine, using that machine's IP address. I did deactivate IIS 6 and am using Apache 2.2.x now. But not everything works yet... darn! Here's the thing:
Cloning goes smoothly! But when I want to push my changes back to the original repository I get the message "cannot lock static-http repository". On the internet I found several explanations saying that Mercurial wasn't designed to push over static HTTP connections. Still, on the Mercurial website there's something about configuring an hgrc file.
There's also the possibility of configuring Apache to serve over HTTPS (SSL). For this you have to load the OpenSSL module and generate keys.
Configuring the hgrc file
Just add "push_ssl = false" under the [web] line. But where to put this file when pushing your changes back?! Because I placed it in the root of the server, in the ".hg" directory, nothing works.
Using SSL/HTTPS with Apache
When I try to access 'https://myipaddress' it fails, displaying a Dutch message that roughly translates to "server taking too long to respond". Trying to push also gives me a Dutch error message that means about the same. It cannot connect to my server via HTTPS, although I followed the steps at this blog exactly.
I don't care which of the above solutions ends up working for me; so far none of them do. So please, can anyone help me with one of the solutions above? Pick the easiest! Help will be greatly appreciated, not only by me.
Summary
- Windows Server 2003
- Apache 2.2 with OpenSSL
- Mercurial 1.8.2
- I can clone, but not push!
Thank you!
Maarten Baar(s)
It seems like you might have Apache configured incorrectly for what you want. Based on your question, it sounds like you have a path (maybe the root of the server) pointing straight at the repository you want to serve as static files.
Mercurial ships with a script for this exact purpose; in the latest version it is hgweb.cgi. There are reasonably good instructions for setting it up on the Mercurial site. It should allow both cloning and pushing. You will need push_ssl = false if you will not be configuring HTTPS, and also an allow_push line, which lets certain users, or everyone (*), push to the repository. All of that should be covered in the setup docs.
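For reference, those two options go in the [web] section of the repository's .hg/hgrc (or in the hgweb configuration). A minimal sketch, assuming you are fine with unauthenticated pushes over plain HTTP on a trusted network:

[web]
# allow pushing without an SSL connection
push_ssl = false
# allow anyone to push; replace * with specific usernames to restrict it
allow_push = *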