Provisioning a server with Puppet, deploying with Capistrano - apache

I'm having issues provisioning a server via Puppet so that it is compatible with a Capistrano deployment. I'm using puppetlabs/apache to set up my virtual hosts, and it (rightfully) checks that the docroot exists, creating it if it doesn't (ignoring potential nested-directory issues). However, the true docroot will be /var/www/vhosts/${::fqdn}/current/public, and Capistrano will create the appropriate symlink (from current to releases/{releasestamp}) when deploying. Capistrano is not happy when that directory path has been set up by Puppet in advance, because current is then an actual, non-empty directory instead of a symlink.
I've thought about adding something like:
file { "/var/www/vhosts/${::fqdn}/current":
ensure => 'link',
target => '/tmp/initial'
}
and setting up a blank file at /tmp/initial/public/index.html, so that Capistrano can point current at the appropriate release when deploying. However, this means that whenever someone re-runs the provision (to apply a configuration change, for example), the symlink will be relocated to the junk directory (if that directory even exists at that point).
Any suggestions? I've considered splitting the provision into an application provision and server provision, and having Capistrano execute the application provision when deploying, but I'd really prefer to keep this concise.

You should just use Puppet to create the application directory, and let Capistrano take care of creating the releases and shared directories using the cap deploy:setup task, and of laying down the current symlink when deploying a new version with cap deploy.
In more concrete terms, I recommend having this in your Puppet config:
file { "/var/www/vhosts/${::fqdn}":
ensure => 'directory'
}
Then, as a one-time setup step, run this task to instruct Capistrano to create the releases and shared directories within the /var/www/vhosts/${::fqdn} directory Puppet should have created for you:
cap deploy:setup
And thereafter, simply run cap deploy to deploy your app, which will create or update the current symlink for you.
(Note that you can pass the -n option to cap to preview the exact commands Capistrano will run. For instance, cap -n deploy:setup will show the mkdir commands it runs.)
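For reference, the layout Capistrano maintains under the directory Puppet creates looks roughly like this (example.com stands in for ${::fqdn}, and the release timestamp is illustrative):

/var/www/vhosts/example.com/
    releases/
        20130105120000/
    shared/
    current -> releases/20130105120000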
Updated
Puppet's standard Apache module seems to require that the docroot exist, which it ensures using the following:
# This ensures that the docroot exists
# But enables it to be specified across multiple vhost resources
if ! defined(File[$docroot]) {
  file { $docroot:
    ensure => directory,
    owner  => $docroot_owner,
    group  => $docroot_group,
  }
}
But you may be able to work around that using the following:
file { "/var/www/vhosts/${::fqdn}/current/public":
ensure => directory,
replace => false, # don't replace the directory once Capistrano makes it a symlink
}
Just defining the resource yourself may prevent the if ! defined(File[$docroot]) part of the apache::vhost module from running.
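For instance, a rough sketch of how the pieces might fit together (the vhost title and port here are assumptions, not from the question):

# Declare the docroot ourselves, earlier in parse order than apache::vhost,
# so the module's "if ! defined(File[$docroot])" guard skips its own file resource.
file { "/var/www/vhosts/${::fqdn}/current/public":
  ensure  => directory,
  replace => false, # leave it alone once Capistrano swaps in the symlink
}

apache::vhost { $::fqdn:
  port    => 80,
  docroot => "/var/www/vhosts/${::fqdn}/current/public",
}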

Stuart,
Thanks for the great answer. I found that I had to amend it slightly and put the replace => false on the current directory, not the current/public directory, like so (using a vagrant owner):
file { '/home/vagrant/app/current':
  ensure  => directory,
  replace => false,
  owner   => 'vagrant',
  group   => 'vagrant',
}

Related

Trying to modify SRS: where is the directory ./objs?

I am running SRS from a Docker container. On Manjaro I can issue a docker command in the terminal and stream video from OBS to the SRS instance I just launched. I can then view live video from that server in a web page.
BUT!
The configuration is a bit opaque. The config files refer to a directory ("./objs") that is likely defined in a config file, but I can't find it.
AND!
I'm still a bit confused about how Docker works: isn't there a straightforward way to edit an image or container?
I can see all the config files in /etc/srs (like srs.conf, rtmp.conf and so on). The config files refer to the directory ./objs for the location of (for instance) the nginx root, index.html. But I cannot find that directory.
I have also installed SRS from the official repos on the same machine. I launched srs from /usr/bin/srs (not from the Docker image), but it immediately quits, so I gave up on that for now. Running it from Docker seems to work pretty well.
When you start SRS with Docker, it is effectively the same as starting it in a shell:
cd /usr/local/srs && ./objs/srs -c conf/srs.conf
You can also use docker inspect ossrs/srs:5 to get this information:
"Cmd": ["./objs/srs", "-c", "conf/srs.conf"],
"WorkingDir": "/usr/local/srs",
The current working directory is /usr/local/srs, so ./objs is /usr/local/srs/objs. I will explain below why SRS uses relative directories.
The configuration file is conf/srs.conf, which is actually /usr/local/srs/conf/srs.conf, and there are lots of configuration examples in ./conf.
The examples on ossrs.io follow the same format: each uses a configuration file and depends on the working directory.
SRS uses the relative directories ./objs and ./conf because some features depend on them; see full.conf for all the supported configuration options.
We use relative directories so that you can easily change to another directory, or start more than one SRS server. Of course, you could change the relative directories to absolute ones; search for ./objs in full.conf, for example:
pid                 ./objs/srs.pid;
ff_log_dir          ./objs;
srs_log_file        ./objs/srs.log;
http_server {
    dir ./objs/nginx/html;
}
vhost __defaultVhost__ {
    hls {
        hls_path ./objs/nginx/html;
    }
}
Please note that you can use a Docker volume to mount a host directory into the container.
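For example, a hypothetical invocation that mounts a host directory of configs into the container and starts SRS with one of them (the host path and config file name are made up; 1935/8080 are the usual RTMP/HTTP ports):

docker run --rm -p 1935:1935 -p 8080:8080 \
  -v /home/me/srs-conf:/usr/local/srs/conf/custom \
  ossrs/srs:5 ./objs/srs -c conf/custom/my.conf

Because the working directory is /usr/local/srs, the relative path conf/custom/my.conf resolves to the mounted file.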

Laravel 403 error when displaying images from storage folder

I am unable to access files saved to the storage folder. I'm able to upload and save files; if I run, for example, the size method, it returns the size of the uploaded image. But when it comes to displaying the file, I get a 403 error. I used the Laravel artisan command to create the symlink, and I've also tried manually creating the symlink. I've checked that FollowSymLinks is in my Apache config, I can cd into the directory from a shell, the permissions are 777 (I had them at 755, but in trying to figure out what is wrong I changed them to 777), and ownership of the symlink and the files inside is the same user and group as every other file in the public directory.
I'm super tired, so maybe I'm just missing something obvious, but I can't for the life of me figure out what is wrong. The file clearly exists, and its visibility is set to "public". Is there any reason why I'd be able to write to the directory but not display images saved there?
Edit 1:
web
app
bootstrap
config
database
error
node_modules
public
resources
routes
stats
storage
temp
vendor
That is the basic structure, with a symlink inside public pointing at storage/app/public.
The filesystems config for my storage folder is:
'public' => [
    'driver' => 'local',
    'root' => storage_path('app/public'),
    'url' => env('APP_URL').'/storage',
    'visibility' => 'public',
],
I haven't really edited anything from the basic Laravel install at this point. I did read about someone else having a similar problem; their issue was that they weren't allowed to access directories outside of their document root, so they just had all uploads go to their public folder instead of using the storage folder. I do have my document root set to my public folder. Could that be a problem? (I can edit my Apache config if needed.)
OK, I got some sleep, and this morning I looked over everything and realized that when logged in as the site owner everything looks fine, but when logged in as root the link shows as broken. Basically, artisan creates an absolute link. /storage/app/public looks fine to the site owner because it is a jailkitted account whose "root" directory is the web folder, but the symlink was actually pointing at the system root, which the normal account doesn't have access to, so it was returning a 403.
Basically, I just made it a relative link instead of an absolute one by removing the broken symlink Laravel created and, while in the public directory, entering:
ln -s ../storage/app/public storage
I had the same issue; after a lot of debugging I managed to identify the actual problem that caused the 403 error.
The solution is simple: in
config/filesystems.php
where you define your public_path and storage_path, the public_path name and the final directory name of the storage_path must be identical.
Example:
Incorrect:
public_path('brands') => storage_path('app/public/brandimages');
This will generate a 403 error, since the public path name ('brands') is not the same as the final directory of the storage path ('brandimages').
Correct:
public_path('brands') => storage_path('app/public/brands');
Now the public path name and the final directory of the storage path are both 'brands', so the correct symlink will be generated, solving the 403 error.
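For reference, in recent Laravel versions php artisan storage:link reads the links array in config/filesystems.php, so the rule above amounts to keeping the final path segment identical on both sides. A minimal sketch (the 'brands' entry is the hypothetical example from above):

'links' => [
    public_path('storage') => storage_path('app/public'),
    // hypothetical extra link; note the final segment matches on both sides
    public_path('brands') => storage_path('app/public/brands'),
],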
Generate the symlinks with the following artisan command:
php artisan storage:link
If relative links need to be generated, then use the following command:
php artisan storage:link --relative
My hosting is a cloud server, and my site path is /httpdocs.
The solution that worked for me was:
from the folder /httpdocs/public, execute ln -s ../storage/app/public storage
Then everything works fine.

Changing permissions of added file to a Docker volume

In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And by looking at the source code of e.g. the cpuguy83/nagios image, this can clearly be seen being done, as everything from the nagios to the apache config directories is made available as volumes.
However, looking at the same image, the apache service (and the CGI scripts for nagios) run as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal and try to run it with docker run -d -p 8000:80 <image_hash>; however, I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (whilst the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows Docker best practices by adding multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the image at build time, just change the permissions of the my_custom_config.cfg file on your host machine and then build your image using docker build. The host machine's permissions are copied into the container image.
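For example, assuming my_custom_config.cfg sits next to the Dockerfile, something like:

# On the host, before building:
chmod 644 my_custom_config.cfg   # readable by everyone, including the nagios user
docker build -t my-nagios .

Mode 644 keeps the file root-owned inside the image (ADD copies files as root) but world-readable, which is all the nagios process needs for a config file.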

Puppet download database and import using Tim Kay's aws tool

I've been trying to set up a development environment for my colleagues using Vagrant and Puppet for provisioning (Vagrant version 1.2.2, precise64 box). I'm currently stuck on downloading and importing a database.
For some context: my company has daily database backups that get stored on Amazon's S3 service, and the dev environments should obtain the latest database backup and import it. We currently use Tim Kay's aws tool to fetch the backups from S3.
I've successfully set up the aws tool using the following Puppet config (and confirmed it works via vagrant ssh):
file { '/usr/local/bin/aws':
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => '/puppet-files/aws',
}

file { '/home/vagrant/.awssecret':
  owner  => 'vagrant',
  group  => 'vagrant',
  mode   => '0400',
  source => '/puppet-files/.awssecret',
}
I've tried using a modified version of a suggestion for wget from a post on Google Groups, with no luck. The following is the configuration for fetching the database:
# Get and import latest db copy
exec { 'download-reduced-db':
  cwd     => '/tmp',
  command => '/usr/local/bin/aws get sqldump.example.com/2013-01-02-db-backup.sql.gz /tmp/2013-01-02-db-backup.sql.gz',
  creates => '/tmp/2013-01-02-db-backup.sql.gz',
  timeout => 3600,
  require => [File['/usr/local/bin/aws'], File['/home/vagrant/.awssecret']],
}
The (shortened) output from 'vagrant up' is below, which indicates that it completed successfully.
notice: /Stage[main]//File[/home/vagrant/.awssecret]/ensure: defined content as '{md5}a4b7b1ac48eb207d93cb0b1541766718'
notice: /Stage[main]//File[/usr/local/bin/aws]/ensure: defined content as '{md5}92fa9a6d77c8185fdaf970e2c4eb254e'
notice: /Stage[main]//Exec[download-reduced-db]/returns: executed successfully
However, when using 'vagrant ssh' and checking the /tmp/ directory, the file is not listed. If I execute the command from above by hand, it completes successfully and I can see the file listed in /tmp/.
Thanks for your time and help.
Try using
puppet.options = "--verbose --debug"
as explained here: http://docs.puppetlabs.com/references/latest/configuration.html
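For example, in the Vagrantfile's Puppet provisioner block (the manifests path and file name here are assumptions about your layout):

config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "manifests"          # assumed directory
  puppet.manifest_file  = "site.pp"            # assumed manifest name
  puppet.options        = "--verbose --debug"  # print verbose/debug output during vagrant up
end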
Possibly, try running the exec with the user parameter to make sure it runs as vagrant, but first check the debug output.
Also redirect the stdout and stderr of your command to a file, so you can check what's going wrong. In Bourne-shell syntax this is done as:
command > file.log 2>&1
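Putting both suggestions together, an untested sketch of the exec resource (the log path is arbitrary, and the shell provider is used because the default exec provider does not pass the command through a shell, so the redirection would not work otherwise):

exec { 'download-reduced-db':
  cwd         => '/tmp',
  provider    => 'shell', # run via /bin/sh so the > redirection works
  command     => '/usr/local/bin/aws get sqldump.example.com/2013-01-02-db-backup.sql.gz /tmp/2013-01-02-db-backup.sql.gz > /tmp/aws-download.log 2>&1',
  creates     => '/tmp/2013-01-02-db-backup.sql.gz',
  user        => 'vagrant',
  environment => ['HOME=/home/vagrant'], # exec does not set $HOME for 'user', and aws reads ~/.awssecret
  timeout     => 3600,
  require     => [File['/usr/local/bin/aws'], File['/home/vagrant/.awssecret']],
}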
Let me know how it goes.

I can't generate a model or a migration in Rails

I have a Rails project, and I want to add a new table and also make changes to one table that is already in the project.
From the terminal, inside the folder of the existing Rails project, I run this:
Mini-1:arbinet anna$ rails generate model PriceRate profitable_routes_id:int normalized_rate:float normalized_payout:float margin:float old_rate:float old_payout:float old_margin:float
And also this for the migration:
Mini-1:arbinet anna$ rails generate migration ChangeColumnsFromProfitableRoutes
But for both I get this message:
Usage:
rails new APP_PATH [options]
Options:
-r, [--ruby=PATH] # Path to the Ruby binary of your choice
# Default: /Users/anna/.rvm/rubies/ruby-1.9.3-p286/bin/ruby
-b, [--builder=BUILDER] # Path to a application builder (can be a filesystem path or URL)
-m, [--template=TEMPLATE] # Path to an application template (can be a filesystem path or URL)
[--skip-gemfile] # Don't create a Gemfile
[--skip-bundle] # Don't run bundle install
-G, [--skip-git] # Skip Git ignores and keeps
-O, [--skip-active-record] # Skip Active Record files
-S, [--skip-sprockets] # Skip Sprockets files
-d, [--database=DATABASE] # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db/sqlserver/jdbcmysql/jdbcsqlite3/jdbcpostgresql/jdbc)
# Default: sqlite3
-j, [--javascript=JAVASCRIPT] # Preconfigure for selected JavaScript library
# Default: jquery
-J, [--skip-javascript] # Skip JavaScript files
[--dev] # Setup the application with Gemfile pointing to your Rails checkout
[--edge] # Setup the application with Gemfile pointing to Rails repository
-T, [--skip-test-unit] # Skip Test::Unit files
[--old-style-hash] # Force using old style hash (:foo => 'bar') on Ruby >= 1.9
Runtime options:
-f, [--force] # Overwrite files that already exist
-p, [--pretend] # Run but do not make any changes
-q, [--quiet] # Suppress status output
-s, [--skip] # Skip files that already exist
Rails options:
-h, [--help] # Show this help message and quit
-v, [--version] # Show Rails version number and quit
Description:
The 'rails new' command creates a new Rails application with a default
directory structure and configuration at the path you specify.
You can specify extra command-line arguments to be used every time
'rails new' runs in the .railsrc configuration file in your home directory.
Note that the arguments specified in the .railsrc file don't affect the
defaults values shown above in this help message.
Example:
rails new ~/Code/Ruby/weblog
This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
See the README in the newly created application to get going.
And I don't have any clue why; it was working when I created the other tables.
You should call the 'rails' command from within the root folder of your app.
So if your app lives in /pub/www/mycoolapp/, you should do cd /pub/www/mycoolapp/ and run the rails commands from there.
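For example, using the commands from the question (note that the Rails column type is integer, not int; the column list is abbreviated here):

cd /pub/www/mycoolapp
rails generate model PriceRate profitable_routes_id:integer normalized_rate:float normalized_payout:float margin:float
rails generate migration ChangeColumnsFromProfitableRoutes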
A little bit late, but maybe it is a problem that some of you may have.
Run this command inside your app root folder
bundle exec rake rails:update:bin
And enjoy coding!!