Run ServiceStack Console as Daemon on DigitalOcean - mono

All,
I have successfully installed my ServiceStack console app on my DigitalOcean droplet and can run it from the command line using mono. When I do this, my app is accessible using Postman from my laptop.
I have also tried to use Upstart to run my app as a daemon. I can see from the logging that it launches successfully when I reboot, but I can't reach the app from the outside when it runs as a daemon; it only works when I am logged in as root and have started the console app from the command line. I have tried this with ufw enabled (configured to allow the port I am using) and disabled, and it makes no difference.
I am reasonably certain this is a permissions issue in the Upstart config file for my console app, but since I am brand new to Linux, I am unclear on the next step to get this console app running as a daemon.
Any and all help is greatly appreciated...
Bruce
# ServiceStack GeoAPIConsole Application
# description "GeoAPIConsole"
# author "Bruce Parr"
setuid root
# start on started rc
start on started networking
stop on stopping rc
respawn
exec start-stop-daemon --start --exec /usr/bin/mono /var/console/GeoAPIConsole.exe

This worked. I added a user geoapiconsole and added the -S and -c switches, then started the job with initctl start GeoAPIConsole:
# ServiceStack Example Application
description "ServiceStack Example"
author "ServiceStack"
start on started rc
stop on stopping rc
respawn
exec start-stop-daemon -S -c geoapiconsole --exec /usr/bin/mono /var/console/GeoAPIConsole.exe
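For completeness, a job file like this lives under /etc/init/, and Upstart has to re-read its configuration before the job can be started; the file name here is assumed to match the job:
$ sudo cp GeoAPIConsole.conf /etc/init/
$ sudo initctl reload-configuration
$ sudo initctl start GeoAPIConsole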

Related

Docker container immediately exits when started after system reboot

I'm starting my custom docker container (OpenSuse, PHP, Apache, some add-ons) this way:
docker build --build-arg http_proxy=http://user:pwd@ip:port -t prefix/myapp myapp
docker create --name=myapp --hostname=myapp -p 80:80 -v ${PWD}/myapp:/srv/www/myapp prefix/myapp
docker start myapp
This works perfectly. I can stop and later start the container. However, if I reboot my host system (Windows 10), I'm not able to start the container again. When I try to, the container immediately exits.
How can this be? As stated above, I use the -p and -v flags to map ports and mount a directory.
This is the output of...
docker logs myapp
-> httpd (pid 1) already running
May or may not be your problem (the logs will be telling), but I ran into an issue with Docker on Windows where the container tries to start before the file system is ready, which causes an error with the volume mounts. I never found a great solution aside from running a task that verifies the volume mount and restarts the container if it failed; a sketch of that idea follows.
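A minimal sketch of such a task, assuming the container name and host path from the question (run it at logon, e.g. from the Windows task scheduler):
#!/bin/sh
# Wait until the host directory backing the bind mount is visible,
# then start the container if it is not already running.
until [ -d "$PWD/myapp" ]; do sleep 5; done
if [ "$(docker inspect -f '{{.State.Running}}' myapp)" != "true" ]; then
  docker start myapp
fi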

Chef-client does not work from Workstation but does work directly on the server

I have a chef recipe that runs a chocolatey install for Microsoft SQL Server. From my workstation, when I run
knife winrm [IP] 'chef-client -o "recipe[NetDevMachine::default]"' -m -x 'domain\myuser'
against a node, it fails with error code 532459699, or sometimes 2022834173 or 2057043966.
However if I log onto the VM as the same user and locally run
chef-client -o "recipe[NetDevMachine::default]"
it works. Does anyone know what the difference is between running chef-client locally and running it remotely from the workstation? What does chef-client do differently here? Both pull the recipe from the same chef-server repo.
Additional Details
I am using the same user for both
I have successfully run other recipes from the workstation; it's just this MicrosoftSQLServer install that's not working
Running knife winrm [IP] 'choco install MicrosoftSQLServer' -m -x 'domain\myuser' also does not work
Recipe contents:
powershell_script "choco install MicrosoftSQLServer2014" do
  code <<-EOH
    choco install MicrosoftSQLServer2014
  EOH
end
Error:
[ERROR] Running C:\Users\myuser\AppData\Local\Temp\MicrosoftSQLServer2014\setup.exe with
/QUIET /IACCEPTSQLSERVERLICENSETERMS /ACTION=INSTALL /INSTANCENAME=MSSQLSERVER /FEATURES=SQL,TOOLS,LOCALDB
/SQLSVCACCOUNT="NT AUTHORITY\Network Service" /SQLSYSADMINACCOUNTS=BUILTIN\ADMINISTRATORS
/SKIPRULES=REBOOTREQUIREDCHECK UIMODE=AUTOADVANCE was not successful.
Exit code was '-532459699'.
Edit: the only difference I can see is that one runs in the foreground and the other doesn't. The location of the compiled chef script is /temp/2 for the failing run. I don't know if there is a way to force chef to run in the foreground, or whether that would even help.
Thanks
I never got to the bottom of this and still don't understand how running chef-client remotely via knife differed from running it directly on the VM. Perhaps it was something to do with how credentials or permissions are handled when sending commands remotely.
But I did find that removing chocolatey and replacing it with a command-line silent install inside a Chef powershell_script resource allowed me to install MSSQL; a sketch follows.
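For reference, a sketch of that replacement; the installer path is an assumption, and the switches are copied from the failing run logged above:
powershell_script "silent install MicrosoftSQLServer2014" do
  code <<-'EOH'
    # Installer path below is a placeholder; flags match the logged setup.exe call
    & 'C:\Installers\SQLServer2014\setup.exe' /QUIET /IACCEPTSQLSERVERLICENSETERMS `
      /ACTION=INSTALL /INSTANCENAME=MSSQLSERVER /FEATURES=SQL,TOOLS,LOCALDB `
      /SQLSVCACCOUNT="NT AUTHORITY\Network Service" `
      /SQLSYSADMINACCOUNTS=BUILTIN\ADMINISTRATORS /SKIPRULES=REBOOTREQUIREDCHECK
  EOH
end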

Keep scrapyd running

I have scrapy and scrapyd installed on a Debian machine. I log in to this server using an SSH tunnel. I then start scrapyd by running:
scrapyd
Scrapyd starts up fine, and I then open another SSH tunnel to the server and schedule my spider with:
curl localhost:6800/schedule.json -d project=myproject -d spider=myspider
The spider runs nicely and everything is fine.
The problem is that scrapyd stops running when I quit the session in which I started it. This prevents me from using cron to schedule spiders with scrapyd, since scrapyd isn't running when the cron job is launched.
My simple question is: How do I keep scrapyd running so that it doesn't shut down when I quit the ssh session.
Run it in a screen session:
$ screen
$ scrapyd
# hit ctrl-a, then d to detach from that screen
$ screen -r # to re-attach to your scrapyd process
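If screen isn't available, nohup gives a similar effect by detaching scrapyd from the terminal (the log path is just an example):
$ nohup scrapyd >> scrapyd.log 2>&1 &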
You might consider launching scrapyd with supervisor.
There is also a good .conf script available here:
https://github.com/JallyHe/scrapyd/blob/master/supervisord.conf
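For reference, a minimal supervisor entry of that shape might look like this:
[program:scrapyd]
; directory, user, and log path below are assumptions
command=scrapyd
directory=/home/me
user=me
autostart=true
autorestart=true
stdout_logfile=/var/log/scrapyd.log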
How about:
$ sudo service scrapyd start
(This assumes scrapyd was installed from a system package that ships an init script.)

Can't stop foreman

I have the following Procfile that I use with foreman to do development work for a Heroku site:
web: gunicorn project_name.wsgi -b 0.0.0.0:$PORT
worker: python manage.py rqworker default
redis: redis-server
Everything worked great until I added the redis line. While the app runs fine, I cannot kill foreman with Ctrl-C; it just keeps running. The only way I can kill foreman is by killing the redis-server process.
How can I get foreman to respond to Ctrl-C and stop?
This usually happens because redis or memcached won't shut down. So I have just created a script that I run to kill the development environment. Currently it is:
#!/bin/bash
redis-cli SHUTDOWN
killall memcached
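If the underlying cause is that redis-server daemonizes itself (foreman then loses track of the process, so Ctrl-C never reaches it), forcing foreground mode in the Procfile may avoid the problem; --daemonize no overrides a daemonize yes inherited from redis.conf:
redis: redis-server --daemonize no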

running delayed_job under monit with ubuntu

I'm struggling to get delayed_job working under Rails 3.0.9 (Ruby 1.9.2). The only way I have succeeded in running it is to manually type the command rake jobs:work.
But I want it to start automatically when the Rails application starts.
I have installed monit under Ubuntu and configured it to launch a file located in my app. The file looks like:
check process delayed_job with pidfile /home/me/myapp/tmp/pids/delayed_job.pid
start program = "/home/me/myapp/script/delayed_job start"
stop program = "/home/me/myapp/script/delayed_job stop"
And I added the environment setting in the delayed_job script file:
#!/usr/bin/env ruby
ENV['RAILS_ENV'] = "development"
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize
When I run the command "sudo monit start delayed_job" I get the following error:
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- bundler/setup (LoadError)
So I guess sudo is using the wrong Ruby environment.
I then tried the solution from "rvm monit delayed_job" by adding rvm -S to the start program / stop program lines.
But it still fails with the error: rvm command not found
My rvm dir is located in my home dir: /home/me/.rvm
I tried the workaround from "sudo changes PATH - why?" to change the PATH environment variable by prefixing the start command with
/usr/bin/env PATH=/home/me/.rvm/bin:$PATH
With that, the command sudo monit start delayed_job succeeded and the worker started!
But the issue remains: when I launch sudo /etc/init.d/monit start and look at the syslog, I still get 'delayed_job' failed to start.
So I don't know how to investigate further, or how to get more verbose errors out of monit.
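(One way to get more detail out of monit is to give it its own logfile instead of syslog; a minimal sketch, assuming the stock /etc/monit/monitrc location:
set logfile /var/log/monit.log
)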
I finally succeeded in solving this issue.
I modified the monit file like this:
check process delayed_job with pidfile /home/me/myapp/tmp/pids/delayed_job.pid
start program = "/bin/su - me -c 'cd /home/me/myapp/; script/delayed_job start'"
stop program = "/bin/su - me -c 'cd /home/me/myapp/; script/delayed_job stop'"
I have also downgraded the daemons gem, because there seem to be problems with the latest version; I'm now using daemons v1.0.10.
I also modified the permissions of the log file /home/me/myapp/log/delayed_job.log, because it seems it was created earlier by root and my user had no access to it (I had problems testing the command script/delayed_job start as the me user).
This is the only line that worked for me that reads the ENV properly:
start program = "/usr/local/rvm/bin/rvm-shell -c 'cd /var/www/[APP]/current/; RAILS_ENV=production bundle exec bin/delayed_job start'"
Hope it helps!