Optimization tool for Rails 3 in development?

I'm developing a Rails 3 app deployed on Heroku which I'd like to optimize. I've explored different solutions such as query_reviewer and New Relic.
I couldn't make query_reviewer work with Rails 3.0.1 (and I also had to switch to MySQL, because PostgreSQL is not supported).
As for New Relic, it looks like a great free tool, but it works only in production. I first need to improve many DB queries in development before getting to tune the app in production.
So neither of these tools fits my needs.
Any advice? Maybe I should just rely on log traces and reduce the number of SQL queries?
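To illustrate the kind of reduction I mean, here's the classic N+1 pattern I keep seeing in my logs, and its eager-loaded fix (the Post/Author models are just a made-up example):

```ruby
# N+1: one query for the posts, then one more per post's author.
posts = Post.limit(10)
posts.each { |post| puts post.author.name }   # ~11 queries total

# Eager loading collapses it to two queries.
posts = Post.includes(:author).limit(10)
posts.each { |post| puts post.author.name }   # 2 queries total
```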

You want to find out which activities aren't absolutely necessary and would save a good amount of time if you could "prune" them?
Forgive me for being a one-track answerer, but there's an easy way to do that, and it's easy to demonstrate.
While the code is running slowly and making you wait, manually interrupt it with Ctrl-C or whatever, and examine the stack trace. Do this a few times.
Anything you see it doing on more than one stack trace is responsible for a substantial percentage of the time, and it doesn't really matter exactly how much. If it's something you could prune, the program will have that much less work to do.
If the efficacy of this method seems doubtful because it's low-tech, that's understandable, but in fact it can quickly find any problem any profiler can find.
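For a Ruby process, here's a minimal sketch of this sampling idea (the signal choice and the toy workload are just assumptions for illustration):

```ruby
# Trap SIGQUIT (Ctrl-\ in most Unix terminals) and dump the current
# stack without killing the process. Hit it a few times while the
# code is slow; whatever keeps showing up is your bottleneck.
Signal.trap("QUIT") do
  puts "---- stack sample ----"
  puts Thread.current.backtrace
end

# Toy workload to sample against.
50_000_000.times { Math.sqrt(rand) }
```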

I found that New Relic has a Development mode, which looks like an ideal setup for optimizing an application during the development phase: http://support.newrelic.com/kb/docs/developer-mode
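For anyone else setting this up on Rails 3, a rough sketch of what it involves (details from memory, so double-check against the docs linked above):

```ruby
# Gemfile -- load the agent in development so developer mode kicks in.
group :development do
  gem 'newrelic_rpm'
end

# config/newrelic.yml should enable it for development, e.g.:
#   development:
#     developer_mode: true
#
# Then browse to http://localhost:3000/newrelic to see per-request
# timings and the SQL each action ran.
```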


How to compare the performance of web frameworks

I want to compare the performance of some web frameworks (Ruby on Rails and ASP.NET MVC 3) but I don't know how to get started... Should I measure how fast each framework renders a 10k-iteration loop, or how fast it renders 10k lines of HTML? Are there programs that can help with this? Also, how can the server load be monitored? Any help is appreciated!
Thijs
With respect, this is an unanswerable question. Is a Porsche faster than a Prius? Well, no, not when the Porsche is in the shop :-).
The answer depends on what you're trying to accomplish, how you do it, and how you code it. For example, Rails goes out of its way to transparently cache as much as it can, and then makes it trivially easy to cache stuff on your command. Of course there's a way to do the same in ASP MVC3, but is it as easy?
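For example, in Rails 3 a whole action can be cached with one declaration (a sketch; the controller is hypothetical, and caching must be enabled in the environment config):

```ruby
class ProductsController < ApplicationController
  # Serve the rendered index from the cache store after the first hit
  # (requires config.action_controller.perform_caching = true).
  caches_action :index

  def index
    @products = Product.all
  end
end
```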
Can you find, hire, and train a suitable team that knows how to use the framework? What's the culture of the organization (Windows or Unix?)? I could write a really fast application in MS Access and the same application poorly in Rails against a high-performance database, and the MS Access app would win. It's far from a given that an application will be written well, optimized, or whatever.
These days, a well-written application is typically performance bound on data I/O, and if this is the case, then it's which database you use that might matter. The loop-test you propose would test almost nothing, unless you're writing an application that calculates pi to the billionth place, or something.
I am sure there are published benchmarks of application frameworks available, but again, they need to make assumptions about what the application actually has to do.
The reality is that any reasonable framework (which includes both of the two you mention) is likely to be as fast as necessary for most scenarios, and again, what you do, and how you architect and implement it are the far more likely culprits for performance problems.
Once you do choose, there's a great (awesome) tool called NewRelic RPM which works with several frameworks -- I use it with Rails, and it gives you internal metrics at a level of detail that is beyond belief.
I don't mean to be glib, or unhelpful. But this is a little bit of a sore spot for me -- in so many cases people say "we should use foo instead of bar because foo's faster", and weeks go by as bar is replaced by foo. And then there are little incompatibilities. And an unexpected bug. And then, well, for some reason the new one is a little slower. And then after it gets optimized, it's finally just as fast.
I'll step down from my soapbox now :-)

Software crashes in production environment, no access to debugger. What to do in short-term and long-term? [closed]

This is an interview question:
Software crashes in production environment, no access to debugger. What steps would you do to solve the problem short term? Long term? What would you do to prevent it from happening? What tools would you use?
My ideas:
Short term:
Check the program's log files and anything generated by the OS, which may contain signals about the crash.
Narrow down the file where the program crashes by adding some print statements.
Add try-catch blocks in the likely locations.
Find the root cause.
Long-term:
Review the overall program design and the algorithm/data structure choices, to make sure they are used correctly and suitably.
Test it with the different cases that have caused crashes, to find the root causes.
Tools: GDB, the Valgrind family, gprof
Any better ideas or solutions?
Short Term
1. The absolute first thing to do is work out what was done to trigger the problem and try to reproduce it. If you can do that, you can now track it down in an environment where you do have a debugger.
2. If it is not reproducible, you need to look through all the information you collected in step one (which will include any logging) and see if you can see a possible problem.
3. If the problem has not been found, you will need to add logging, and lots of it. This is where a "DEBUG" logging setting comes in handy (see the sketch after these steps). It will probably slow down the system, and may even mask the problem (which in itself tells you something about the nature of the problem).
4. With the new logging information you can go back to step one. Repeat this until the problem is solved!
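Here's a minimal sketch of that kind of switchable DEBUG logging in Ruby (the environment variable name is an arbitrary choice):

```ruby
require 'logger'

logger = Logger.new($stdout)
# Verbose output is compiled in but only emitted when asked for.
logger.level = ENV['APP_DEBUG'] ? Logger::DEBUG : Logger::INFO

def expensive_state_dump
  (1..3).map { |i| "subsystem#{i}=ok" }.join(' ')
end

logger.info 'request started'
# The block form means the dump is never even computed at INFO level.
logger.debug { "full state: #{expensive_state_dump}" }
```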
In the long term, the most obvious thing to do is make sure you have sufficient logging in place, even if it has to be turned on and off, to catch problems. Beyond that, you need to beef up the testing effort.
When you have tracked down a problem, it is worth noting the type of problem (race condition, scalability, database access, etc.). This gives you an area to apply more automated and manual tests.
You have some good initial ideas, here are my comments:
Add logging to your code - you will get very little information from the operating system about your code.
If exceptions can be thrown by methods that you call, you should catch them. Don't let them bubble up to the end user! (See the sketch after this list.)
Run Valgrind now, not later.
Set up a test environment that simulates your production environment. Start simple, and increase the complexity until you are able to reproduce your issue. You do have a test environment, right?
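A sketch of the exception point above: catch at the boundary, log everything for the developers, and give the user a controlled failure (the payment names are invented for the example):

```ruby
require 'logger'

logger = Logger.new($stdout)

# Invented stand-in for a call that can blow up.
def charge_card(order_id)
  raise IOError, 'payment gateway timeout'
end

user_message = nil
begin
  charge_card(42)
rescue StandardError => e
  # Full context goes to the log, not to the user.
  logger.error "charge failed for order 42: #{e.class}: #{e.message}"
  logger.error e.backtrace.join("\n")
  user_message = 'Payment could not be processed, please try again.'
end
puts user_message
```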
The very first thing you should do is determine the severity of the problem. This will help to devise your short-term strategy. You will need to have some brief discussions with the major stakeholders in the software (such as the client), or have a project manager do this and report back to you.
In the heat of the moment, this is often the part that gets overlooked, and rushing a short-term fix almost always means wasting a lot of time not really understanding what needs to be done.
After this, your actual strategy, both long term and short term, is rather dependent on the technology you are using and how it is deployed.
Short term
It is absolutely vital to grab some preliminary information about the crash before attempting to resolve the problem: grab log files, take screenshots, note down system info like memory/CPU usage, and archive any temporary data that might be useful.
The short-term action should be to get the system up-and-running again, quickly. Some common approaches to short-term solutions:
Try turning it off and on again... Seriously, 90% of the time this will get production running again in the short term, at least until the bug manifests itself again.
Revert to a previous production release, preferably the latest version that was known to work fairly reliably.
Run a second instance on another machine and fail over if the problem occurs again. This has the added bonus that logs and system state are preserved after the last crash occurred.
Long term
In the long term, you will want to properly analyse the information you gathered at the time of failure. Where possible, try to reproduce the problem as closely as you can. Revert your code to the version being deployed (you do use version control tools, right?), and check high-level factors as well as low-level configuration ones: e.g. who was using the system when it crashed? Can they show you what they did?
Debugging and logging may be useful at this stage, and all the usual developer tools such as functional tests and memory profiling tools. A crash could come from a number of sources, from memory protection faults to an unexpected state of a resource. You should compile a list of candidate problems, and cross them off as you gain confidence that they aren't the cause of the crash.
Apart from logging, you can enable the creation of minidump (.mdmp) files on Windows or core dumps on Linux, then examine them later. One downside of this approach is that core dumps can be pretty big. Both minidumps and core dumps contain the context of the application when the crash occurred.

Ease A-B Testing / Beta Testing support within a framework

I'm looking for an implementation strategy to ease A/B testing / beta testing. I don't see any code/plugin available for any framework. If not for a direct solution, let us at least brainstorm the requirements/expectations of such a component:
There are already a few threads around my query:
Is there a PHP CMS with builtin A/B Testing Support?
Anyone got any good strategies for A/B testing with the Play Framework?
Beta Testing
As no one's answered this question, I'll attempt to do so.
Basically, I'm not sure if there's a directly useful connection between your PHP framework and your A/B testing needs. I think this is mainly because what you're testing can be almost anything: the colour of a conversion-sensitive button, a page layout, an entire registration funnel, etc. These don't inherently have anything to do with your PHP framework and there are lots of options for how you could do your testing.
Another issue is that you might not really know the parameters of what you're testing until you start testing. Your testing might lead you down a path you hadn't even considered, so how could you have accounted for it in how you built the site? If you need a REALLY wide window for what you'll be testing, you're probably better off not building it at all and using some type of vapor/smoke testing to get the basic concepts right first. Not everything can be subjected to testing, and you'll still need subjectively generated hypotheses as your test cases (and your testing will be only as good as your hypotheses).
If you have something very specific that you need to test repeatedly over time and want to build this flexibility into the system, then I'd look for the most obvious solution in the framework to make it happen. For example, if you're using Symfony and if you think that you'll need to test 50 different sidebar variations for a page over the course of 6 months, it probably makes sense to build it as a slot/component so you can build some logic around simplifying your testing and swap those sidebars with ease. I'm not sure why it would need to be anything more complicated than that.
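For example, the core of a home-grown test is often just deterministic bucketing, which any framework can host (a sketch; the names are made up):

```ruby
require 'digest/md5'

# Hash the experiment name plus a stable user id so each user
# always lands in the same variation.
def variant_for(experiment, user_id, variants)
  bucket = Digest::MD5.hexdigest("#{experiment}:#{user_id}").to_i(16)
  variants[bucket % variants.size]
end

puts variant_for('sidebar', 42, %w[a b])  # stable "a" or "b" per user
```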
Overall, I'd also add that the role of A/B testing should be to guide your product to sell/convert/monetize/engage better. Unless you're building some type of testing platform, I wouldn't over-think it. I tend to see that most sites fail to test sufficiently not because the system isn't flexible enough for various test cases, but because top management won't give enough product/dev time for it, or because people aren't making enough use of their analytics packages to draw even the most basic conclusions.
Hope that helps.
http://phpabtest.com/ looks like a pretty easy-to-use framework, and it's free!

Recommend some open source web frameworks for a fun project

I maintain in-house business software for a living. Technologies included here are Java, Struts, Spring MVC, JSP, Wicket, and a few others. I think it's time to branch out and learn something new.
I am hoping to show myself with a side project that writing code can, in fact, be fun (in some plane of the universe), and that I haven't wasted the past few years of my life doing something I can never love or have fun doing.
I'm thinking of having a fantasy-sport style web site - obviously much, much smaller with regards to features and all that. I was hoping I could get some recommendations for the newest or cleanest frameworks that will allow me to accomplish such a project. My goals are to work on following a real development process instead of just hacking a bunch of crap into an already crappy application on a daily basis. Also I will strive to follow best practices and create good, clean, understandable code that I don't shudder at the thought of having to modify. It's hard to do this at work, because the software I work on has already been developed by 50 guys from various continents that never took the time to design anything before jumping into coding.
I would need a simple database to store users and their picks for each event. Also at my job, the login security is all handled by another group completely. Do people usually write their own login systems from scratch, or are there open source utilities for that as well? I'd be interested in those, as my site will need to have a user login system, and be secure.
I had ruby and rails installed on my computer the last time I conjured up the motivation for this idea, but that was nixed by a hard drive crash. I figured before I just jumped straight to rails for this idea, that I would get a few other opinions off stack overflow to see if people liked something else that I didn't know about.
Also, if anyone has any good resources for how to think about OO design, I could brush up on that as well. I'm looking for anything that will help me to just think about the design from the start and how to get my thoughts into a diagram. I'd like it not to focus so much on patterns and other principles as much as just how to get started and actually put my thoughts in a professional document that I can use to build my project from. I tried to practice this prior to a card game that I wrote, and it got way too complicated way too fast, and the results ended up being not so great.
I’m more familiar with Django, although like you, the only frameworks I’ve really used are the Java/Struts/Spring/JSP, etc. The automatically generated administration interface in Django is amazing coming from these, and it comes with its own authentication system too.
Unless you’re especially predisposed against Python, I think you should give it a go.
Ruby on Rails, Python on Django, PHP on (not sure -- maybe Zend? or CakePHP?), are probably the most popular frameworks if I understand correctly that you want to learn a new language. If I misunderstood you, and you'd rather stick with Java, GWT seems pretty cool -- it's the only real way to avoid "explicitly" writing Javascript (if you DO want to learn and use some Javascript, I personally am in love with Dojo, but jQuery is substantially more popular: those are two good popular frameworks you should consider, though there are others of course, like for all languages I mentioned so far).
One advantage of picking Python and Django is that they work particularly well with Google App Engine (and with Dojo, too, thanks to the cool dojango project!) -- GAE supports the JVM too, now, but it has supported Python for much longer, and the Python side of it is more solid and complete at this time. So, if that's the technology stack you choose, you get to develop and deploy for free, on highly scalable infrastructure, at least until your app gets more than a few million page views per month -- and you really minimize your system administration hassles; all you do is basically code and write one simple configuration file.

What do you do with a developer who does not test his code? [closed]

One of our developers is continually writing code and putting it into version control without testing it. The quality of our code is suffering as a result.
Besides getting rid of the developer, how can I solve this problem?
EDIT
I have talked to him about it a number of times and even given him a written warning.
If you can do code reviews -- that's a perfect place to catch it.
We require reviews prior to merging to iteration trunk, so typically everything is caught then.
If you systematically perform code reviews before allowing a developer to commit the code, well, your problem is mostly solved. But this doesn't seem to be your case, so this is what I recommend:
Talk to the developer. Discuss the consequences for others on the team. Most developers want to be recognized by their peers, so this might be enough. Also point out that it is much easier to fix bugs in code that's fresh in your mind than in weeks-old code. This part makes sense if you have some form of code ownership in place.
If this doesn't work after some time, try to put in place a policy that will make committing buggy code unpleasant for the author. One popular way is to make the person who broke the build responsible for the chores of creating the next one. If your build process is fully automated, look for another menial task for them to take care of instead. This approach has the added benefit of not pinpointing anyone in particular, making it more acceptable to everybody.
Use disciplinary measures. Depending on the size of your team and of your company, those can take many forms.
Fire the developer. There is a cost associated with keeping bad apples. When you get this far, the developer doesn't care about his fellow developers, and you've got a people problem on your hands already. If the work environment becomes poisoned, you might lose far more - productivity-wise and people-wise - than this single bad developer.
As a developer who rarely tests his own code, I can tell you the one thing that's made me slowly shift my behavior...
Visibility
If the environment allows pushing code out, waiting for users to find problems, and then essentially asking "How about now?" after making a change to the code, there's no real incentive to test your own stuff.
Code reviews and collaboration encourage you to work towards making a quality product much more than if you were just delivering 'Widget X' while your coworkers work on 'Widget Y' and 'Widget Z'
The more visible your work is, the more likely you are to care about how well it works.
Code review. Stick all of your devs in a room every Monday morning and ask them to bring their proudest code-based accomplishment from the previous week along to the meeting.
Let them take the spotlight and get excited about explaining what they did. Have them bring copies of the code so other devs can see what they're talking about.
We started this process a few months ago, and it's astonishing to see the amount of subconscious quality checking that takes place. After all, if the devs are simply asked to talk about what they're most excited about, they'll be totally stoked to show people their code. Then, other devs will see the quality errors and publicly discuss why they're wrong and how the code should really be written instead.
If this doesn't get your dev to write quality code, he's probably not a good fit for your team.
Make it part of his Annual Review objectives. If he doesn't achieve it, no pay rise.
Sometimes though you do just have to accept that someone is just not right for your team/environment, it should be a last resort and can be tough to handle but if you have exhausted all other options it may be the best thing in the long run.
Tell the developer you would like to see a change in their practices within 2 weeks or you will begin your company's disciplinary procedure. Offer as much help and assistance as you can, but if you can't change this person, he's not right for your company.
Using Cruise Control or a similar tool, you can make checkins automatically trigger a build and unit tests. You would still need to ensure that there are unit tests for any new functionality he adds, which you can do by looking at his checkins.
However, this is a human problem, so a technical solution can only go so far.
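In a Ruby project, for instance, the CI server just needs one task to run on every check-in; a sketch (the task name is arbitrary):

```ruby
# Rakefile -- the target a CI server like CruiseControl can invoke
# after each check-in.
require 'rake/testtask'

Rake::TestTask.new(:ci) do |t|
  t.libs << 'test'
  t.pattern = 'test/**/*_test.rb'  # a failing test fails the build
end
```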
Why not just talk to him? He probably won't actually bite you.
Make him "babysit" the build, and become the build manager. This will give him less time to develop code (thus increasing everyone's performance) and teach him why a good build is so necessary.
Enforce test cases - code cannot be submitted without unit test cases. Modify the build system so that if the test cases don't compile and run correctly, or don't exist, then the entire task checkin is denied.
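A bare-bones sketch of such a gate, written as a Ruby pre-commit hook (the test command and paths are assumptions about your setup):

```ruby
#!/usr/bin/env ruby
# .git/hooks/pre-commit -- reject the commit unless the suite passes.
unless system('rake test')
  warn 'Commit rejected: the test suite failed or could not run.'
  exit 1
end
```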
-Adam
Publish stats on test code coverage per developer; do this after talking to him.
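In Ruby, for example, SimpleCov can produce the raw numbers (a sketch; attributing them per developer via your VCS history is left to you):

```ruby
# test/test_helper.rb -- must run before the application code loads.
require 'simplecov'
SimpleCov.start do
  add_filter '/test/'  # don't count the tests themselves as coverage
end
```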
Here are some ideas from a sea shanty.
Intro
What shall we do with a drunken sailor, (3×)
Early in the morning?
Chorus
Wey–hey and up she rises, (3×)
Early in the morning!
Verses
Stick him in a bag and beat him senseless, (3×)
Early in the morning!
Put him in the longboat till he’s sober, (3×)
Early in the morning!
etc. Replace "drunken sailor" with a "sloppy developer".
Depending on the type of version control system you are using, you could set up check-in policies that force the code to pass certain requirements before being allowed in. If you are using a system like Team Foundation Server, it gives you the ability to specify code-coverage and unit-testing requirements for check-ins.
You know, this is a perfect opportunity to avoid singling him out (though I agree you need to talk with him) and implement a test-first process in-house. If the rules aren't clear and the expectations aren't known to all, I've found that what you describe isn't all that uncommon. I find that the test-first development scheme works well for me and improves code quality.
They may be overly focused on speed rather than quality.
This can tempt some people into rushing through issues to clear their list and see what comes back in bug reports later.
To rectify this balance:
assign only a couple of items at a time in your issue tracking system,
code review and test anything they have "completed" as soon as possible so it will be back with them immediately if there are any problems
talk to them about your expectations about how long an item will take to do properly
Pair programming is another possibility. If he is paired with another skilled developer on the team who does meet quality standards and knows the procedure, then this has a few benefits:
With an experienced developer over his shoulder he will learn what is expected of him and see the difference between his code and code that meets expectations
The other developer can enforce a test first policy: not allowing code to be written until tests have been written for it
Similarly, the other developer can verify that the code is up to standard before it is checked in, reducing the number of bad check-ins
All of this of course requires the company and developers to be receptive to this process which they may not be.
It seems that people have come up with a lot of imaginative and devious answers to this problem. But the fact is that this isn't a game. Devising elaborate peer pressure systems to "name and shame" him is not going to get to the root of the problem, ie. why is he not writing tests?
I think you should be direct. I know you say that you've talked to him, but have you tried to find out why he isn't writing tests? Clearly at this point he knows that he should be, so surely there must be some reason why he isn't doing what he's been told to do. Is it laziness? Procrastination? Programmers are famous for their egos and strong opinions - perhaps he's convinced for some reason that testing is a waste of time, or that his code is always perfect and doesn't need testing. If he's an immature programmer, he might not fully understand the implications of his actions. If he's "too mature" he might be too set in his ways. Whatever the reason, address it.
If it does come down to a matter of opinion, you need to make him understand that he needs to set his own personal opinion aside and just follow the rules. Make it clear that if he can't be trusted to follow the rules then he will be replaced. If he still doesn't, do just that.
One last thing - document all of your discussions along with any problems that occur as a result of his changes. If it comes to the worst you may be forced to justify your decisions, in which case, having documentary evidence will surely be invaluable.
Stick him on his own development branch, and only bring his stuff into the trunk when you know it's thoroughly tested. This might be a place where a distributed source control tool like Git or Mercurial would excel. Although with the improved branching/merging support in SVN, you might not have too much trouble managing it.
EDIT
This is only if you can't get rid of him or get him to change his ways. If you simply can't get this behaviour to stop (by changing or firing), then the best you can do is buffer the rest of the team from the bad effects of his coding.
If you are at a place where you can affect the policies, make some changes. Do code reviews before check ins and make testing part of the development cycle.
It seems pretty simple. Make it a requirement and if he can't do it, replace him. Why would you keep him?
I usually don't advocate this unless all else fails...
Sometimes, a publicly-displayed chart of bug-count-by-developer can apply enough peer pressure to get favorable results.
Try the carrot: make it a fun game.
E.g The Continuous Integration Game plugin for Hudson
http://wiki.hudson-ci.org/display/HUDSON/The+Continuous+Integration+Game+plugin
Put your developers on branches of your code, based on some logic like, per feature, per bug fix, per dev team, whatever. Then bad check-ins are isolated to those branches. When it comes time to do a build, merge to a testing branch, find problems, resolve, and then merge your release back to a main branch.
Or remove commit rights for that developer and have them send their code to a younger developer for review and testing before it can be committed. That might motivate a change in procedure.
You could put together a report with errors found in the code with the name of the programmer that was responsible for that piece of software.
If he's a reasonable person, discuss the report with him.
If he cares for his "reputation" publish the report regularly and make it available to all his peers.
If he only listens to the "authority", do the report and escalate the issue to his manager.
Anyway, I've seen often that when people are made aware of how bad they seem from outside, they change their behaviour.
Hey this reminds me of something I read on xkcd :)
Are you referring to writing automated unit tests, or to manually unit testing prior to check-in?
If your shop does not write automated tests then his checking in of code that does not work is reckless. Is it impacting the team? Do you have a formalized QA department?
If you are all creating automated unit tests then I would suggest that part of your code review process include the unit tests as well. It will become obvious that the code is not acceptable per your standards during your review.
Your question is rather broad but I hope I provided some direction.
I would agree with Phil that the first step is to individually talk to him and explain the importance of quality. Poor quality can often be linked to the culture of the team, department and company.
Make executed test cases one of the deliverables before something is considered "done."
If you don't have executed test cases, then the work is not complete; and if the deadline passes before you have the documented test-case execution, then he has not delivered on time, with the same consequences as if he had not completed the development.
If your company's culture would not allow for this, and it values speed over accuracy, then that's probably the root of the problem, and the developer is simply responding to the incentives that are in place -- he is being rewarded for doing a lot of things half-assed rather than fewer things correctly.
Make the person clean latrines. Worked in the Army. And if you work in a group with individuals who eat a lot of Indian food, it won't take long for them to fall in line.
But that's just me...
Every time a developer checks something in that does not compile, have him put some money in a jar. He'll think twice before checking in then.
Unfortunately if you have already spoken to him many times and given him written warnings I would say it is about time to eliminate him from the team.
You might find some helpful answers here: How to make junior programmers write tests?
I'd be tempted to suggest elaborating a bit on what you've tried and what results you got as this may have changed a bit but here are my initial suggestions:
Is it any tests or comprehensive tests? Some may code blindly and do zero tests, but this is rather rare, IME. Usually there are some tests done but not enough to cover most of the cases that would be comprehensive testing.
Group dynamics may help. I'd assume he is part of a team and that the team's view may be of some help here. In a way this is trying to get peer pressure which is usually a bad thing but sometimes it can be used in good ways.
How well spelled out were the warnings? In a way this can seem childish but there is a chance that what you think of as testing may not be the same as his. Do you want nUnit tests, an excel spreadsheet, logs from his computer, or something else as proof of the existence and use of tests? From what you've described there isn't anything to confirm that he did understand what you meant, was going to use tests and provide evidence of doing so.
Check-in policy question. Some places, such as my current workplace, encourage committing often which can mean that one does commit code without tests. Is there a known, accepted and well-followed policy where you are? That's another aspect here.