Is there a way to use after.sh to run php artisan migrate?
I tried this:
#!/bin/bash
cd exercise-8
vagrant ssh
php artisan migrate
I realized a few things:
you have to vagrant ssh before you can run migrations
the bash script runs from /home/vagrant
vagrant ssh returns /tmp/vagrant-shell: line 3: vagrant: command not found
php artisan returns
==> default:
==> default: [PDOException]
==> default: SQLSTATE[HY000] [1045] Access denied for user 'forge'@'localhost' (using password: NO)
==> default:
Try this:
cd *
php artisan migrate
Explanation:
cd * enters the only folder you'll have in /home/vagrant.
Since these commands are executed inside the Homestead VM, you do not need vagrant ssh.
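Putting that together, a minimal after.sh sketch could look like this (the exercise-8 folder name is taken from the question; adjust it to your own project folder):
#!/bin/bash
# after.sh already runs inside the Homestead VM, so no vagrant ssh is needed
cd /home/vagrant/exercise-8
php artisan migrate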
Related
I'm trying to install Aerospike on my local machine by following the steps here.
mkdir ~/aerospike-vm && cd ~/aerospike-vm
vagrant init aerospike/aerospike-ce
vagrant up
All the above commands succeed and there are no errors.
Below is a minimal log of the vagrant up command.
default: Successfully added box 'aerospike/aerospike-ce' (v4.5.0.5) for 'virtualbox'!
.
.
.
Going on, assuming VBoxService is correct...
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 5.2.12
default: VirtualBox Version: 6.0
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /Users/rajkumar.natarajan/aerospike-vm
The commands below clearly show that only amc is running, but not aerospike.
BOSM0001-RANATA:aerospike-vm rajkumar.natarajan$ vagrant ssh -c "sudo service aerospike status"
asd is stopped
Connection to 127.0.0.1 closed.
BOSM0001-RANATA:aerospike-vm rajkumar.natarajan$ vagrant ssh -c "sudo service amc status"
amc (pid 1458) is running...
Connection to 127.0.0.1 closed.
BOSM0001-RANATA:aerospike-vm rajkumar.natarajan$ vagrant ssh -c "sudo grep -i cake /var/log/aerospike/aerospike.log"
Connection to 127.0.0.1 closed.
Any idea what is wrong here?
Do: $ vagrant ssh
That will get you inside the shell. Then see why aerospike did not start.
First try:
$ sudo service aerospike start
then
$ sudo service aerospike status
If it is not running, go through /var/log/aerospike/aerospike.log and see what the log file is showing as the error.
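For example, the whole check might look something like this inside the VM (the log path is the one from the question; the grep pattern is just one way to spot failures):
$ vagrant ssh
$ sudo service aerospike start
$ sudo service aerospike status
# if asd is still stopped, look for the reason in the log
$ sudo grep -iE 'warning|error|critical' /var/log/aerospike/aerospike.log
$ sudo tail -n 50 /var/log/aerospike/aerospike.log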
I'm developing a website in a totally offline environment. Also, I use GitLab Runner for CI, and the host is CentOS 7.
The problem is that GitLab Runner uses the gitlab-runner user on CentOS to deploy the Laravel application, while Apache uses the apache user to run it.
I got a Permission denied error from Apache until I changed the ownership of the files. After that I get this error in the Apache log:
Uncaught UnexpectedValueException: The stream or file "storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied
It seems that some vendor libraries like Monolog want to write error or debug logs to storage/logs/laravel.log but get permission denied. :(
.gitlab-ci.yml
stages:
  - build
  - test
  - deploy
buildBash:
  stage: build
  script:
    - bash build.sh
testBash:
  stage: test
  script:
    - bash test.sh
deployBash:
  stage: deploy
  script:
    - sudo bash deploy.sh
build.sh
#!/bin/bash
set -xe
# creating env file from production file
cp .env.production .env
# initializing laravel
php artisan key:generate
php artisan config:cache
# database migration
php artisan migrate --force
deploy.sh
#!/bin/bash
PWD=$(pwd)'/public'
STG=$(pwd)'/storage'
ln -s $PWD /var/www/html/public
chown apache.apache -R /var/www/html/public
chmod -R 755 /var/www/html/public
chmod -R 775 $STG
Am I using GitLab Runner correctly? How can I fix the permission denied error?
SELinux
I found the problem and it was SELinux. As always, it was SELinux, and I ignored it at the beginning.
What's the problem:
You can see the SELinux context on files with the ls -lZ command (a quick example follows the list below). By default, all files under the web root are httpd_sys_content_t; the problem is that SELinux only allows Apache to read these files. You should change the context of storage and bootstrap/cache so they are writable.
There are four Apache context types:
httpd_sys_content_t: read-only directories and files
httpd_sys_rw_content_t: readable and writable directories and files used by Apache
httpd_log_t: used by Apache for log files and directories
httpd_cache_t: used by Apache for cache files and directories
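For example, a quick check of the storage directory (the path is the one used in the commands below) might look like this:
# before the change this will typically show httpd_sys_content_t
ls -ldZ /var/www/html/laravel/storage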
What to do:
First of all, install policycoreutils-python to get the semanage command:
yum install -y policycoreutils-python
After installing policycoreutils-python, the semanage command is available, so you can change the file contexts like this:
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/laravel/storage(/.*)?"
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/laravel/bootstrap/cache(/.*)?"
Don't forget to apply the changes with these commands:
restorecon -Rv /var/www/html/laravel/storage
restorecon -Rv /var/www/html/laravel/bootstrap/cache
The problem is solved :)
ref: http://www.serverlab.ca/tutorials/linux/web-servers-linux/configuring-selinux-policies-for-apache-web-servers/
I'm trying to alter a Vagrant box I created for my office. Currently, like most boxes, running vagrant ssh logs me in as the vagrant user, but team members get frustrated having to use su - xxadmin to switch to our primary admin user.
In my Vagrantfile, I added: config.ssh.username = "xxadmin", but then I started receiving the common Vagrant error when running vagrant up:
[default] Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sed -e '/^#VAGRANT-BEGIN/,/^#VAGRANT-END/ d' /etc/network/interfaces > /tmp/vagrant-network-interfaces
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
and when running vagrant halt:
[default] Attempting graceful shutdown of VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
shutdown -h now
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
What's going on here? Why would simply changing the SSH user create these errors? How do I find a way forward?
Specs:
OS X Mavericks (host)
Vagrant 1.3.5
VirtualBox 4.3.2
Debian 7 Wheezy (vm client)
In your box, you need to modify your sudoers file by running visudo and adding the following:
Defaults !requiretty
I kept running into this error until I made sure that my user's NOPASSWD sudoers entry was not being squashed.
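As a sketch, the relevant sudoers entries could end up looking like this (xxadmin is the user from the question, and the NOPASSWD rule is only an example of the entry that must not be overridden):
# allow the xxadmin user to sudo without a password or a TTY
Defaults !requiretty
xxadmin ALL=(ALL) NOPASSWD: ALL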
If I ssh into my VPS as the deployment user and run bundle -v I get Bundler version 1.1.5 as expected.
If I run ssh deployment@123.123.123.123 bundle -v, then I see bash: bundle: command not found
Why isn't bundle found when I run commands via ssh?
More Info
$ cat ~/.bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
if [ -d "${RBENV_ROOT}" ]; then
  export PATH="${RBENV_ROOT}/bin:${PATH}"
  eval "$(rbenv init -)"
fi
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When you run:
ssh deployment@123.123.123.123
You get a login shell on the remote host, which means that your shell will run (for bash) .bash_profile or .profile or an equivalent AS WELL AS your per-shell initialization file.
When you run:
ssh deployment@123.123.123.123 some_command
This does not start a login shell, so it only runs the per-shell initialization file (e.g., .bashrc).
The problem you've described typically means that you need something in your .profile file (typically an environment variable setting) for everything to work.
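As a sketch, one possible fix is to export the variable in ~/.bashrc as well, assuming RBENV_ROOT is currently set only in ~/.profile (the ~/.rbenv path is an assumption; adjust it to your install location):
# ~/.bashrc
export RBENV_ROOT="$HOME/.rbenv"   # assumed install location
if [ -d "${RBENV_ROOT}" ]; then
  export PATH="${RBENV_ROOT}/bin:${PATH}"
  eval "$(rbenv init -)"
fi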
Goal: To have "cap staging deploy" work again.
Problem: The development server's IP was changed.
Background:
I develop on my personal PC (Ubuntu 10.04 LTS) and push updates to the development/test server, which is an Ubuntu 10.04 LTS virtual machine. I am using Rails 3 and Ruby 1.9.2.
I have the git repository on the development server and I use SSH keys instead of passwords when I push updates or run: cap staging deploy.
I can successfully do: git push web_forms2_git_repo develop
When I run: cap staging deploy ... I get these results:
* executing `staging'
triggering start callbacks for `deploy'
* executing `multistage:ensure'
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
executing locally: "git ls-remote ssh://git#my-domain-name/home/git/web_forms2.git develop"
command finished in 3381ms
* executing "git clone -q ssh://git#my-domain-name/home/git/web_forms2.git /home/rails_192/apps/cals_web_forms/public/releases/20111220174923 && cd /home/rails_192/apps/cals_web_forms/public/releases/20111220174923 && git checkout -q -b deploy 5c2910f687480f136206e56ba73c268c7026df20 && (echo 5c2910f687480f136206e56ba73c268c7026df20 > /home/rails_192/apps/cals_web_forms/public/releases/20111220174923/REVISION)"
servers: ["my-domain-name"]
[my-domain-name] executing command
** [my-domain-name :: out] ssh: connect to host my-domain-name port 22: No route to host
** fatal: The remote end hung up unexpectedly
command finished in 3271ms
*** [deploy:update_code] rolling back
* executing "rm -rf /home/rails_192/apps/cals_web_forms/public/releases/20111220174923; true"
servers: ["my-domain-name"]
[my-domain-name] executing command
command finished in 39ms
failed: "env PATH=/home/...
I did try to clone the web_forms2 repository to my local PC, but it didn't work; the results are pasted below:
Command: git clone ssh://git#my-domain-name/home/git/web_forms2.git
Results: fatal: could not create work tree dir 'web_forms2'.: Permission denied
Has anyone come across this before?
Thanks
You can set the IP of the server in your Capistrano config file (config/deploy.rb):
role :web, "71.19.150.118" # replace this IP with the new IP or server address.
Keep in mind that if you use host names instead of IPs, @Sergei Tulentsev is right and you'll have to update your /etc/hosts file to reflect the change in the IP.
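If you do keep the host name, the /etc/hosts update on the machine running Capistrano might look like this (203.0.113.10 is a placeholder for the new IP):
# add (or edit) the my-domain-name entry so it points at the new IP
echo "203.0.113.10   my-domain-name" | sudo tee -a /etc/hosts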