Rails 3 on Apache with Passenger: I need to take down Rails so that the DB connection(s) are closed -- this is necessary to perform some regular database maintenance -- but I want Apache up so that it can respond to requests with a static maintenance page.
I am using Capistrano and have seen the threads on how to invoke maintenance mode, but I need to know where to hook my DB tasks, and cannot figure out where.
Any links, or even pointers to where to look in the Capistrano code would be greatly appreciated.
TIA
You can use Capistrano's deploy:web:disable task to block access to your site, allowing you to do database maintenance, etc.:
cap deploy:web:disable REASON="a Database Upgrade" UNTIL="in a few minutes"
Then, once you're done:
cap deploy:web:enable
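As for where to hook the database tasks themselves: Capistrano's before/after hooks let you chain your own task onto those built-in ones. A minimal sketch, assuming Capistrano 2 (the db:maintenance task and the script it runs are hypothetical placeholders):
# config/deploy.rb -- sketch only
namespace :db do
  desc "Run database maintenance while the maintenance page is up"
  task :maintenance, :roles => :db do
    run "/usr/local/bin/db_maintenance.sh"  # replace with your real maintenance command
  end
end

# run the DB work right after the maintenance page goes up
after "deploy:web:disable", "db:maintenance"
Keep in mind that deploy:web:disable only puts up the maintenance page; with Passenger the application processes (and their DB connections) stay alive until you stop or restart Apache/Passenger, so you may need an extra step for that before the maintenance task runs.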
We have a DotNetNuke site running on two servers that are load balanced. To ensure the files are in sync on these servers, we are using File Replication Service.
Search works fine on DotNetNuke when not load balanced, but in the load balanced setup the search stops working after a while (no suggestions, no results).
The following related exception is all over our log files:
[D:2][T:31][ERROR] DotNetNuke.Services.Exceptions.Exceptions - Lucene.Net.Store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#D:\Sites\SiteName\App_Data\Search\write.lock
at Lucene.Net.Store.Lock.Obtain(Int64 lockWaitTimeout)
at Lucene.Net.Index.IndexWriter.Init(Directory d, Analyzer a, Boolean create, IndexDeletionPolicy deletionPolicy, Int32 maxFieldLength, IndexingChain indexingChain, IndexCommit commit)
at Lucene.Net.Index.IndexWriter..ctor(Directory d, Analyzer a, MaxFieldLength mfl)
at DotNetNuke.Services.Search.Internals.LuceneControllerImpl.get_Writer()
at DotNetNuke.Services.Search.Internals.LuceneControllerImpl.Delete(Query query)
at DotNetNuke.Services.Search.Internals.InternalSearchControllerImpl.DeleteSearchDocumentInternal(SearchDocument searchDocument, Boolean autoCommit)
at DotNetNuke.Services.Search.Internals.InternalSearchControllerImpl.DeleteSearchDocumentsByModule(Int32 portalId, Int32 moduleId, Int32 moduleDefId)
at DotNetNuke.Services.Search.SearchDataStore.StoreSearchItems(SearchItemInfoCollection searchItems)
at DotNetNuke.Services.Search.SearchEngine.IndexContent()
at DotNetNuke.Services.Search.SearchEngineScheduler.DoWork()
My best guess is that the issue occurs because both servers are running their search functionality, and the File Replication Service is syncing the index files, which causes conflicts.
What would be the best way to solve this?
Add an exclusion rule to not replicate the search index folder, but let both servers keep running search?
Somehow disable one server from indexing?
Any other suggestions?
Installation details:
DNN v. 09.02.00 (366)
.NET Framework 4.6
There's a 'scheduler' tool inside the settings section that contains all cron/background-job functionality.
One of the background jobs is the 'Search: Site Crawler' job which is responsible for indexing the website. When that job runs at the same time on both servers, unexpected conflicts occur. To prevent this from happening, you can configure the job to only run on a specified server using the 'Servers' setting.
After configuring the job to only run on one server, the issue did not come back and search still works on both servers.
Thanks @Sanjay for pointing me in the right direction.
If I remember correctly, search is done via a scheduled task. Have you tried setting up the task to run on only one server and then using file replication to sync the index across to the other server?
Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?
Here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it is more useful for finding out why a request fails before it hits your application than why it's running slowly. Nine times out of ten it will simply show you that the slowdown is somewhere in your code.
Log Parser
Yes, you can download and analyze IIS logs. I use Log Parser Lizard to do the analysis - it's a great GUI over Log Parser. Here's a sample of how you might query for requests slower than 1000 ms:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken desc
New Relic
My recommendation - go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up; within 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of information on what's running slowly, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, log in and look at the traces for that time frame.
When your pro trial ends you can still get valuable data on the free tier, but it will only keep the last 24 hours. We purchased licenses - expensive, yes, but worth every cent. Why? The time taken to identify root causes was reduced by an order of magnitude, we can get proactive by looking at what is number 2, 3 and 4 on the slow-requests list and working on those before they become big problems, and the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog uses MVC ActionFilters to do the logging. You could also use an HttpModule similar to this post. The nice thing about that approach is that you can compile and implement the module separately from your application, then just drop in the DLL and update web.config to wire up the module. I would be wary of these approaches for a very busy site. Also, getting the right level of detail to fully identify the root cause is challenging.
View Requests
As touched on by Appleman1234, IIS has a little-known feature for looking at the requests currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe or the IIS GUI to do it, and you will need to install the 'Request Monitor' IIS feature for it to work. This approach is OK for rudimentary narrowing of the problem, but it does not show you what's running slowly in your controller.
There are various ways you can do this:
Failed Request Tracing (FRT), formerly known as Failed Request Event Buffering (FREB), with a custom failure condition for requests that take over a certain time to load/run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT the official documentation is available here, and information on how to capture dumps for a long-running process is available here
For logging request information in IIS, information about log file analysis is located here
For information on configuring Fiddler to capture IIS requests, see here
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules to define the failure rules for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go in the directory you specify and are viewable in a web browser.
For IIS logging
Logging is enabled by default on IIS
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane click Enable to enable logging if it isn't already.
From IIS Manager for a given site, under IIS, click Logging, then configure as desired and click Apply.
Install LogParser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add logs to it; you can then use SQL queries to get information from the log files.
For Fiddler
You need to change the user that IIS runs as to a user that can launch applications, like Fiddler (instead of Network Service), and then launch Fiddler with that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.
In the configuration file example for Puma, it says the following for the on_restart function:
Code to run before doing a restart. This code should close log files,
database connections, etc.
Do I need to implement this for a Rails app, to close connections to the db and the logfile, or is that taken care of automatically? If not, how do I actually do all that?
No, you don't; Rails takes care of reloading your code automatically. But this code-reloading support is limited. For example, changes to application.rb are not applied until you restart the app server.
But I would recommend Phusion Passenger over Puma. Phusion Passenger is a lot easier to set up, especially when you hit production. It integrates into Apache and Nginx directly and provides advanced features like dynamic worker management. Phusion Passenger is very mature, stable and performant, and is used by the likes of the New York Times, Symantec, AirBnB, etc.
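For reference, if you do want to close connections explicitly, a minimal on_restart sketch for a Rails app using ActiveRecord could look like the following (illustrative only, not something Puma or Rails requires):
# config/puma.rb -- illustrative sketch
on_restart do
  # close pooled database connections before Puma restarts the server
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
end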
I've found that using Redis as my Rails.cache provider causes an error page upon the first request every time my Rails/Puma server is restarted. The error I got was:
Redis::InheritedError (Tried to use a connection from a child process
without reconnecting. You need to reconnect to Redis after forking.)
To get around this error, I didn't add anything to on_restart, but I did have to add code to on_worker_boot (I am running Puma with workers=4):
puma-config.rb
on_worker_boot do
  puts "Reconnecting Rails.cache"
  Rails.cache.reconnect
end
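If you also hit ActiveRecord connection errors after forking, the same hook is the usual place to re-establish the database connection (a sketch, assuming the app uses ActiveRecord; it can live in the same on_worker_boot block):
on_worker_boot do
  # re-establish the database connection in each forked worker
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end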
I use the delayed_job gem to handle my email deliveries. It is working fine in development and I am very happy with it. However, after deploying to the server, when I use the command:
RAILS_ENV=production script/delayed_job start
it works. I've checked the log file and the database; everything is fine and I receive the emails just as expected. However, once I exit from the server, nothing happens.
I've checked my database using Sequel Pro and seen that delayed_job creates a row in the DB, and after the time in the run_at column the row disappears, but no mail arrives. When I log in again, the delayed_job process is still running and there is nothing strange in the log, but I just don't receive the emails I expect. I can't keep myself logged in all the time. Without delayed_job I can send mail the traditional way, and that works properly but is slow. Why does delayed_job fail after I log out of the server?
These are my delayed_job settings in config/initializers/delay_job.rb:
require "bcrypt"
Delayed::Worker.max_attempts = 5
Delayed::Worker.delay_jobs = !Rails.env.test?
Delayed::Worker.destroy_failed_jobs = false
P.S. I am not sure whether it has anything to do with the standalone Passenger; I have to use a different version of Rails, so I run a standalone Passenger on port 3002.
I think I've found the solution.
After reading through this https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
I soon realized I might be missing the require "bcrypt" in the configuration file.
I use RVM and have many gemsets, but only this particular gemset has the bcrypt-ruby gem. delayed_job might use the global or default gemset after I log out of the system, so I installed bcrypt-ruby in all the gemsets, restarted the standalone Passenger, and it works!
But still, I don't really know the connection between bcrypt and delayed_job.
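One thing that can help when jobs vanish like this: since destroy_failed_jobs is set to false above, any job that fails and is kept will have its last_error column filled in, which usually points at the missing constant or gem. A quick sketch, assuming the ActiveRecord backend, run from a Rails console on the server:
# RAILS_ENV=production rails console
Delayed::Job.where("last_error IS NOT NULL").each do |job|
  puts job.last_error
end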
We use Foreman to start all of our web processes in development.
A while back, I tried to get the ruby-debugger gem working with this setup, but I couldn't, so I abandoned my effort. Along the way, I must have changed some setting or another, and now when I try to look at the server log in real time when I make a request to my local environment, nothing gets printed out. I have to kill foreman in order to see any output from the request.
This is really slowing down my development, as I have to make a request, kill foreman to get information about what went wrong, then start up and try again.
Any ideas how to get my server log to spit out everything as I'm making requests?
I had the same problem. It's solved here:
It's as simple as adding a line
$stdout.sync = true
to your config/environments/development.rb file, then restarting foreman.
Worked for me and makes my life much easier.
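For anyone wondering where exactly the line goes: anywhere in that file works, and inside the configure block is a natural spot. A sketch of a typical development.rb (the application name is a placeholder):
# config/environments/development.rb
MyApp::Application.configure do
  # flush $stdout immediately so foreman shows log output in real time
  $stdout.sync = true

  # ... the rest of your development settings ...
end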