Puppet download database and import using Tim Kay's aws tool - amazon-s3

I've been trying to set up a development environment for my colleagues using Vagrant and Puppet for provisioning (Vagrant version 1.2.2, precise64 box). I'm currently stuck on downloading and importing a database.
For some context: my company has daily database backups that get stored on Amazon's S3 service, and the dev environments should obtain the latest database backup and import it. We currently use Tim Kay's aws tool to fetch the backups from S3.
I've successfully set up the aws tool with the following Puppet resources (and confirmed it works via 'vagrant ssh'):
file { '/usr/local/bin/aws':
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => '/puppet-files/aws',
}
file { '/home/vagrant/.awssecret':
  owner  => 'vagrant',
  group  => 'vagrant',
  mode   => '0400',
  source => '/puppet-files/.awssecret',
}
I've tried using a modified version of a 'wget' suggestion from a post on Google Groups, with no luck. The following is the configuration for fetching the database:
# Get and import latest db copy
exec { "download-reduced-db" :
cwd => "/tmp",
command => "/usr/local/bin/aws get sqldump.example.com/2013-01-02-db-backup.sql.gz /tmp/2013-01-02-db-backup.sql.gz",
creates => "/tmp/2013-01-02-db-backup.sql.gz",
timeout => 3600,
require => [File["/usr/local/bin/aws"], File["/home/vagrant/.awssecret"]],
}
The (shortened) output from 'vagrant up' is below, which indicates that it completed successfully.
notice: /Stage[main]//File[/home/vagrant/.awssecret]/ensure: defined content as '{md5}a4b7b1ac48eb207d93cb0b1541766718'
notice: /Stage[main]//File[/usr/local/bin/aws]/ensure: defined content as '{md5}92fa9a6d77c8185fdaf970e2c4eb254e'
notice: /Stage[main]//Exec[download-reduced-db]/returns: executed successfully
However, when using 'vagrant ssh' and checking the /tmp/ directory, the file is not listed. If I execute the command above by hand, it completes successfully and I can see the file listed in /tmp/.
Thanks for your time and help.

Try adding
puppet.options = "--verbose --debug"
to the Puppet provisioner block in your Vagrantfile, as explained here: http://docs.puppetlabs.com/references/latest/configuration.html
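In the Vagrantfile that would look something like this (a minimal sketch; the box name matches your setup, but the manifest paths are assumptions for illustration):
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"   # assumption: adjust to your layout
    puppet.manifest_file  = "site.pp"     # assumption
    puppet.options        = "--verbose --debug"
  end
end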
Possibly, try running the exec with the user parameter to make sure it runs as vagrant, but first check the debug output.
Also, redirect stdout and stderr of your command to a file, so you can check what's going wrong. In the Bourne shell, which Puppet uses by default, this is done as:
command > file.log 2>&1
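Putting the user parameter and the redirect together, the exec might look like this (a sketch; the HOME override and the shell provider are my assumptions, the former because Tim Kay's aws reads .awssecret from the calling user's home directory):
exec { "download-reduced-db":
  cwd         => "/tmp",
  command     => "/usr/local/bin/aws get sqldump.example.com/2013-01-02-db-backup.sql.gz /tmp/2013-01-02-db-backup.sql.gz > /tmp/aws-download.log 2>&1",
  creates     => "/tmp/2013-01-02-db-backup.sql.gz",
  user        => "vagrant",              # run as vagrant instead of root
  environment => ["HOME=/home/vagrant"], # assumption: so aws finds ~/.awssecret
  provider    => shell,                  # ensure the redirection is handled by a shell
  timeout     => 3600,
  require     => [ File["/usr/local/bin/aws"], File["/home/vagrant/.awssecret"] ],
}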
Let me know how it goes.

Related

Laravel 403 error when displaying images from storage folder

I am unable to access files saved to the storage folder. I'm able to upload and save files, and if I run, for example, the size method, it gets the file size of the uploaded image, but when it comes to displaying the file, I get a 403 error. I used the Laravel artisan command to create the symlink, and I've also tried manually creating the symlink. I've checked to verify that FollowSymLinks is in my Apache config, I can cd into the directory from a shell, and the permissions are 777 (I had it at 755, but in trying to figure out what is wrong I changed it to 777). Ownership of the symlink and the files inside is the same user and group as every other file in the public directory.
I'm super tired, so maybe I'm just missing something obvious, but I can't for the life of me figure out what is wrong. The file clearly exists, and its visibility is set to "public". Is there any reason why I'd be able to write to the directory but not display images saved there?
Edit 1:
web
app
bootstrap
config
database
error
node_modules
public
resources
routes
stats
storage
temp
vendor
That is the basic structure, with a symlink inside public pointing at storage/app/public.
The filesystems config for my public storage disk is:
'public' => [
    'driver' => 'local',
    'root' => storage_path('app/public'),
    'url' => env('APP_URL').'/storage',
    'visibility' => 'public',
],
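For reference, with that disk config a stored file should be reachable like this (a sketch; the avatars/1.jpg path is hypothetical):
use Illuminate\Support\Facades\Storage;

// store a file on the public disk, then build its public URL
Storage::disk('public')->put('avatars/1.jpg', $contents);
$url = Storage::url('avatars/1.jpg'); // env('APP_URL').'/storage/avatars/1.jpg'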
I haven't really edited anything from the basic Laravel install at this point. I did read about someone else having a similar problem; they said their issue was that they weren't allowed to access directories outside of their document root, so they just had all uploads go to their public folder instead of using the storage folder. I do have my document root set to my public folder. Could that be a problem? (I can edit my Apache config if needed.)
OK, I got some sleep, looked over everything this morning, and realized that when logged in as the site owner everything looks fine, but when logged in as root the link shows as broken. Basically, artisan creates an absolute link. /storage/app/public looks fine to the site owner because it's a jailkitted account whose "root" directory is the web folder, but the link was actually pointing at the system root, which the web server account doesn't have access to, so it was returning a 403.
Basically, I just made the link relative instead of absolute by removing the broken symlink Laravel created and, while in the public directory, entering:
ln -s ../storage/app/public storage
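To double-check what a link actually resolves to (and catch this kind of problem early), something like:
readlink public/storage      # prints the raw link target
readlink -f public/storage   # prints the fully resolved absolute path
ls -l public/storage         # shows the link and its target on one line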
I had the same issue; after a lot of debugging I managed to identify the actual problem that caused the 403 error.
The solution is simple: in
config/filesystems.php
where you define your public path and storage path, the directory name used in public_path must be identical to the final directory name used in storage_path.
Example:
Incorrect:
public_path('brands') => storage_path('app/public/brandimages')
This will generate a 403 error, since 'brands' is not the same as the target directory name 'brandimages'.
Correct:
public_path('brands') => storage_path('app/public/brands')
Now the public path name and the storage target name are the same, so the correct symlink will be generated, resolving the 403 error.
Generate the symlink with the following artisan command:
php artisan storage:link
If relative links need to be generated, then use the following command:
php artisan storage:link --relative
My hosting is a cloud server, and my site path is /httpdocs.
The solution that worked for me was:
from the folder /httpdocs/public, execute ln -s ../storage/app/public storage
Then everything works fine.

unable to migrate database on production while running rake task

I am developing a small app which downloads a CSV file (from a given URL) every day, injects the data from the CSV file into the database, and loads the same data in the web view. It works perfectly on my local system, but when I deployed to Heroku, the database injection from the CSV file stopped working.
Here is my code.
downloader.rake file
namespace :downloader do
  desc "download a file"
  task :downloading => :environment do
    Rails.logger.info("message from task")
    Download.destroy_all
    ActiveRecord::Base.connection.execute("DELETE from sqlite_sequence where name = 'downloads'")
    # ********** some other code ************
  end
end
schedule.rb file
set :environment, 'production'
every 1.minutes do
rake "downloader:downloading"
end
When I run it in production, the log shows ($ tail -f log/production.log):
D, [2015-07-21T12:17:02.910529 #11740] DEBUG -- : Download Load (0.2ms) SELECT "downloads".* FROM "downloads"
E, [2015-07-21T12:17:02.910635 #11740] ERROR -- : SQLite3::SQLException: no such table: downloads: SELECT "downloads".* FROM "downloads"
Looks like your database is not ready on Heroku.
Please verify that you have provisioned a database for your application, and then run heroku run rake db:setup to build the schema and seed it.
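For a typical Rails app that would be (assuming the Heroku CLI is configured for this app):
# one-time: create the schema and seed data on the Heroku database
heroku run rake db:setup
# afterwards, apply new migrations as they are added
heroku run rake db:migrate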

PHP APC installed but not enabled

On an Amazon Linux AMI I've successfully (I think) installed APC (with yum or pecl; I tried both), but I'm not able to enable it (sorry for the pun).
php -i says:
apc
APC Support => disabled
Version => 3.1.15-dev
APC Debugging => Disabled
MMAP Support => Enabled
MMAP File Mask =>
Locking type => pthread mutex Locks
Serialization Support => broken
Revision => $Revision: 329725 $
Build Date => May 28 2013 18:03:40
Directive => Local Value => Master Value
apc.cache_by_default => On => On
(and so on...)
extension=apc.so and apc.enabled=1 are in /etc/php.d/apc.ini (and php -i confirms that /etc/php.d/apc.ini is parsed).
I also tried putting them in /etc/php.ini, to no avail. (And of course I dutifully restarted httpd every time.)
Now I really don't know what to do...
APC is showing disabled, but did you check whether it's actually working? It often reports disabled even though it works fine. I installed the same package on my CentOS box to look into this issue; I get the same php -i output as you, yet everything works fine.
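One quick way to check is a tiny test script requested through Apache rather than the CLI. Note that apc.enable_cli defaults to 0, so php -i on the command line can report APC as disabled even when mod_php has it working. A sketch (apc_test.php is a hypothetical filename):
<?php
// apc_test.php - request this through Apache, not the CLI
var_dump(extension_loaded('apc'));   // is the extension loaded at all?
var_dump(apc_store('apc_test', 42)); // bool(true) if the cache accepts writes
var_dump(apc_fetch('apc_test'));     // int(42) if the value round-trips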

Laravel3 migration on production server error: SQLSTATE[HY000] [2002] No such file or directory

I tried my database migration on fortrabbit.com through SSH, but I got an error:
SQLSTATE[HY000] [2002] No such file or directory
How do I specify the database name and password of the production database?
How do I run a Laravel 3 migration via SSH on the production server? I used PagodaBox and it was working fine, but it was very costly, so I switched to fortrabbit.com.
I tried this http://forums.laravel.io/viewtopic.php?id=6177, but found no solution.
We will surely need more info about your problem.
To run a migration on Laravel using SSH, you basically have to go to your app folder and execute php artisan migrate. But I think you know that, right?
To specify your database info for production:
1. Create a directory production inside your app/config
2. Copy your current app/config/database.php into it
3. Edit your new app/config/production/database.php
4. In your app/start/global.php, configure your environment:
$env = $app->detectEnvironment(array(
    'development' => array('localhost'),
    'production'  => array('example.com')
));
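For reference, the production override might look something like this (a minimal sketch; the host, database name, and credentials are placeholders for the values from your fortrabbit dashboard):
<?php
// app/config/production/database.php (all values are placeholders)
return array(
    'default' => 'mysql',
    'connections' => array(
        'mysql' => array(
            'driver'    => 'mysql',
            'host'      => 'your-app.db.fortrabbit.com',
            'database'  => 'your_database',
            'username'  => 'your_username',
            'password'  => 'your_password',
            'charset'   => 'utf8',
            'collation' => 'utf8_unicode_ci',
            'prefix'    => '',
        ),
    ),
);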
But to clarify it even better, you can watch this fresh video from Jeffrey Way on Laracasts:
https://laracasts.com/lessons/from-zero-to-deploy-with-fortrabbit

Provisioning server with Puppet, deploying with Capistrano

I'm having issues provisioning a server via Puppet that is compatible with a Capistrano deployment. I'm using puppetlabs/apache to set up my virtual hosts, and it is (rightfully) checking that the docroot exists (and creating it if it doesn't, ignoring potential nested-directory issues). However, the true docroot will be /var/www/vhosts/${::fqdn}/current/public, and Capistrano will be creating the appropriate symlink (from current to releases/{releasestamp}) when deploying; it is not happy when that directory path has been set up by Puppet in advance (because current is then an actual directory instead of a symlink, and is not empty).
I've thought about adding something like:
file { "/var/www/vhosts/${::fqdn}/current":
ensure => 'link',
target => '/tmp/initial'
}
and setting up a blank file at /tmp/initial/public/index.html, so that Capistrano will be able to point current to the appropriate release when deploying. However, this means that whenever someone re-runs the provisioning (to apply any configuration changes, for example), the symlink will be pointed back at the junk directory (if that directory even still exists at that point).
Any suggestions? I've considered splitting the provision into an application provision and server provision, and having Capistrano execute the application provision when deploying, but I'd really prefer to keep this concise.
You should just use Puppet to create the application directory, and let Capistrano take care of creating the releases and shared directories using the cap deploy:setup task, and of laying down the current symlink when deploying a new version with cap deploy.
In more concrete terms, I recommend having this in your Puppet config:
file { "/var/www/vhosts/${::fqdn}":
ensure => 'directory'
}
Then, as a one-time setup step, run this task to instruct Capistrano to create the releases and shared directories within the /var/www/vhosts/${::fqdn} directory Puppet should have created for you:
cap deploy:setup
And thereafter, simply run cap deploy to deploy your app, which will create or update the current symlink for you.
(Note that you can pass the -n option to cap to preview the exact commands Capistrano will run. For instance, cap -n deploy:setup will show the mkdir commands it runs.)
Updated
Puppet's standard Apache module seems to require that the docroot exist, which it ensures using the following:
# This ensures that the docroot exists
# but enables it to be specified across multiple vhost resources
if ! defined(File[$docroot]) {
  file { $docroot:
    ensure => directory,
    owner  => $docroot_owner,
    group  => $docroot_group,
  }
}
But you may be able to disable that using the following:
file { "/var/www/vhosts/${::fqdn}/current/public":
ensure => directory,
replace => false, # don't replace the directory once Capistrano makes it a symlink
}
Just defining the resource yourself may prevent the if ! defined(File[$docroot]) part of the apache::vhost module from running.
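For context, the vhost declaration this pairs with would be along these lines (a sketch; the port and docroot values are illustrative, the parameter names are from puppetlabs/apache):
apache::vhost { $::fqdn:
  port    => 80,
  docroot => "/var/www/vhosts/${::fqdn}/current/public",
}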
Stuart,
Thanks for the great answer. I found that I had to amend it slightly to put the replace => false on the 'current' directory, not the 'current/public' directory, like so (using a vagrant owner):
file { '/home/vagrant/app/current':
  ensure  => directory,
  replace => false,
  owner   => 'vagrant',
  group   => 'vagrant',
}