Rails server log not outputting immediately (Ruby on Rails 3)

We use Foreman to start all of our web processes in development.
A while back, I tried to get the ruby-debugger gem working with this setup, but I couldn't, so I abandoned my effort. Along the way, I must have changed some setting or another, and now when I try to look at the server log in real time when I make a request to my local environment, nothing gets printed out. I have to kill foreman in order to see any output from the request.
This is really slowing down my development, as I have to make a request, kill foreman to get information about what went wrong, then start up and try again.
Any ideas how to get my server log to spit out everything as I'm making requests?

I had the same problem. The fix is as simple as adding the line
$stdout.sync = true
to your config/environments/development.rb file, then restarting Foreman.
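For reference, a minimal sketch of where that line can live (YourApp is a placeholder for your application's module name):
# config/environments/development.rb
# Flush STDOUT after every write so Foreman shows log lines immediately
$stdout.sync = true

YourApp::Application.configure do
  # ... your existing development settings stay as they are ...
end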
Worked for me and makes my life much easier.

Related

How to properly see the errors of a Flask application in production mode

I built a Flask application on my local computer in debug mode and it runs fine. But when it comes to production, the website gives me a 500 Internal Server Error, and I have no idea what the bug is. I am fairly new to deploying Flask in production and this has been blocking me for quite a few days.
My questions are:
1> In my local development environment I can always print things out. But how can I see that output in production?
2> Do I see it through the Apache2 log? Where is the Apache2 log?
For production, I actually followed the tutorials from pythonprogramming.net. The YouTube link is here:
https://www.youtube.com/watch?v=qZNL4Ku1UQg&list=PLQVvvaa0QuDc_owjTbIY4rbgXOFkUYOUB&index=2
To use a very simple example, if the code imports a package which wasn't installed, where can we see the errors?
Thanks in advance.
I've tried wrapping every Flask view function in a try ... except block so that, whenever there is an exception, it can be returned to the front-end. But what about other errors?
I found out:
Use the logging module (a sketch follows below)
Read the Apache2 logs in /var/log/apache2
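As a minimal sketch of the logging-module approach (the log path and level are assumptions; the file must be writable by the WSGI user, often www-data):
import logging
from flask import Flask

app = Flask(__name__)

# Write application logs to a file so they survive in production
handler = logging.FileHandler('/var/log/flask/app.log')
handler.setLevel(logging.INFO)
app.logger.setLevel(logging.INFO)
app.logger.addHandler(handler)

@app.route('/')
def index():
    app.logger.info('handling /')  # ends up in the log file instead of stdout
    return 'ok'

Anything the app cannot catch itself, such as a failed import at startup or an unhandled exception, usually ends up in Apache's error log, typically /var/log/apache2/error.log.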

Forcing a DNS failure

I need to test a change in our application's DNS retry behavior.
It previously switched into another mode to report the issue to the end user, but we've found a bug: when the retry attempt succeeded, it would proceed to load the now-found far-end service in that "error reporting" mode.
To fix this, we have disabled the switch to the error reporting mode, and expect that on a successful retry we will load into the expected mode.
Thus, I need DNS (rndc/named) to fail once, and only once, and provide a successful result on the second attempt.
The only thing I can think of is to run a large load test and hope DNS fails like this at some point... but I am hoping someone here might know of a better solution.
Maybe there is a way to block the connection attempt once? The DNS server is part of the application, though, so it would mean blocking a connection to localhost.
You can certainly use a Docker container, VM, or dedicated OS, change its DNS settings, and use it as the DNS resolver. It will probably be a lot of work to script, but it seems possible. Before that, though, I would look for a DNS mock service/server.
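If all you want is the "block the connection attempt once" idea from the question, a rough sketch with iptables (run as root; it assumes the lookup goes to the local resolver over UDP port 53 on the loopback interface):
# Drop the first lookup attempt
iptables -I OUTPUT -o lo -p udp --dport 53 -j DROP
# ...trigger the request that performs the first DNS lookup here...
# Remove the rule so the retry succeeds
iptables -D OUTPUT -o lo -p udp --dport 53 -j DROP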

Making Config for Monit to check program started from bash

I'm hoping someone out there is used to monit and can help me.
I'm running a home data server with Ubuntu 13.10.
I have cgminer set up to start when the PC boots, from a bash script of my own creation. It contains a few tweaks and settings that need running before it gets going.
But if for some reason my internet goes down, cgminer will close after a short amount of time. If I'm asleep when it closes, that's valuable mining time lost and a waste of electricity. So I'm looking into monit as a way of fixing that.
I'm hoping to have monit (or something similar; it doesn't have to be monit) start cgminer from my script, check every so often that cgminer is still running, and if not, restart it from my script.
I just can't get my head around the config file for monit. Help would be awesome.
Yes, you can achieve that with monit. You only need your start script to write its PID into a pidfile:
check process xyz with pidfile /var/run/xyz.pid
  start program = "/bin/xyz start"
  stop program = "/bin/xyz stop"
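A minimal sketch of such a start script (the paths, the cgminer options, and the pidfile location are assumptions; the pidfile must match the "with pidfile" line above):
#!/bin/bash
# /usr/local/bin/cgminer-start.sh (hypothetical path)
# Start cgminer in the background and record its PID for monit
/usr/bin/cgminer --config /home/you/cgminer.conf >> /var/log/cgminer.log 2>&1 &
echo $! > /var/run/cgminer.pid

Point the start program line at this script; the stop program can be as simple as /usr/bin/killall cgminer.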

Rails delayed job fail after close session

I use the delayed_job gem to handle my email deliveries. It works fine in development and I am very happy with it. However, after I deployed to the server, when I use the command:
RAILS_ENV=production script/delayed_job start
it works. I've checked the log file and the database; everything is fine and I receive the emails just as I expected. However, after I exit from the server, nothing happens.
I've checked my database using Sequel Pro and seen that delayed_job creates a row in the DB, and after the time in the run_at column the row disappears, but no mail is received. When I log in again, the delayed_job process is still running and there is nothing strange in the log, but I just cannot receive the email I'm supposed to. I can't keep myself logged in all the time. Without delayed_job I can send mail the traditional way and it works properly, but slowly. Why does delayed_job fail after I log out of the server?
These are my delayed_job settings in config/initializers/delay_job.rb:
require "bcrypt"
Delayed::Worker.max_attempts = 5
Delayed::Worker.delay_jobs = !Rails.env.test?
Delayed::Worker.destroy_failed_jobs = false
P.S. I am not sure whether it has anything to do with the standalone Passenger; I have to use a different version of Rails, so I run a standalone Passenger on port 3002.
I think I've found the solution.
After reading through https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
I soon realized I might be missing the require "bcrypt" in the configuration file.
I use RVM and have many gemsets, but only this particular gemset has the bcrypt-ruby gem. The delayed_job daemon might use the global or default gemset after I log out of the system, so I installed bcrypt-ruby in all the gemsets, restarted the standalone Passenger, and it works!
But still, I don't really know the connection between bcrypt and delayed_job.
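One way to sidestep the gemset question would be to declare the dependency in the application's Gemfile so Bundler resolves it wherever the daemon runs; a sketch, with the version constraint being an assumption:
# Gemfile
gem "bcrypt-ruby", "~> 3.0"

Then run bundle install and restart the delayed_job daemon.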

Heroku: What to do when your dyno/worker crashes?

I have a worker doing some processing 24/7. However, sometimes the code crashes and it needs to be restarted (even if I catch the exception, I have to restart the worker in order for it to work).
What do you do when this happens? Or am I doing something wrong and this shouldn't happen at all? Do your dynos/workers crash, or is it just me?
Thanks
Heroku is supposed to restart a worker every time it crashes. As far as I know, you don't have to select or configure anything. Whatever is in your jobs:work task will be executed again as soon as the worker fails.
If you are heavily dependent on background jobs in your web app, you could create a rake task that finds the last record to be updated and executes a background job to update it, or automate the rake task to find the rest of the records that need updating since the last crash (a sketch follows below).
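A minimal sketch of such a rake task (Article, refresh! and the one-hour cutoff are hypothetical names chosen for illustration):
# lib/tasks/jobs.rake
namespace :jobs do
  desc "Re-enqueue records that were not updated since the last crash"
  task :requeue_stale => :environment do
    Article.where("updated_at < ?", 1.hour.ago).find_each do |article|
      article.delay.refresh!  # delayed_job's delay proxy enqueues the method call
    end
  end
end

Run it once after a crash with rake jobs:requeue_stale.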
Alternatively, you can force a worker restart manually, as indicated in this article (using delayed_job):
heroku workers 0;
heroku workers 1;
Or perhaps you can restart a specific worker by doing (mentioned in this article):
heroku restart worker.1
By the way, try the 1.9 stack. Make sure your app is 1.9.2-compatible before doing so. Hopefully crashes are less frequent there:
heroku stack:migrate bamboo-mri-1.9.2
In the event that such issues still arise, it's best to contact Heroku support. They are very responsive.
Latest command to restart a specific heroku web worker (2014):
heroku ps:restart web.1
(tested on Cedar stack)
At times, for instance in the case of DB crashes, the worker may not restart automatically, and you would need to do this:
heroku restart web.1
It worked for me.