What was your most efficient technique to improve productivity and quality? - process

I am aware that this could be seen as subjective, but that is not my intention. I am always on the hunt for techniques I may never have heard of that help improve both the productivity and the quality of software engineers' work.
In particular, I am looking for tools, techniques, approaches, tips and tricks, best practices, etc. that helped you improve both your productivity and your quality as a software engineer. This is a process-related question, so please don't answer with your opinion about which programming language is best.
I expect the answers to be subjective. But that is the beauty of it. Not everything works for everybody. We all have a different set of constraints that we operate under. Therefore it is unavoidable that we make different choices. If the answers are contradictory, that would be perfectly fine!
What were the techniques that were helpful to you specifically? How did they make a difference? What criteria do you use to come to that conclusion?

OK, here's my subjective answer.
The standard approach for improving quality is unit testing, in my opinion. Of course you can still write crappy code that works and have unit tests confirming that it works, but at least you know that it works. Where unit tests really give you an advantage is when you want to change your code or add features: a good test suite tells you immediately when a change breaks existing behavior.
As for productivity, it depends on whether you look at the short term or the long term. Unit testing takes time, so in the short term you spend less of it writing actual functionality. In the long term I'm convinced you are more productive, since during maintenance your unit tests keep confirming that all the existing functionality still works.
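As a minimal sketch of that safety net (the function and its tests are hypothetical, not from any particular project):

```python
import unittest

# Hypothetical function under test.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

If a later change breaks the discount rules, one of these tests fails immediately instead of the bug surfacing in production.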
My second productivity and quality tip is to think each new feature through thoroughly. Once a new feature is shipped, you have to maintain it, and maintenance takes time and decreases productivity. Is the new feature necessary? How many customers actually want it? Always try to look at the bigger picture: what is your vision for the product, and does the new feature fit that vision?
The less code you have, the less code you have to maintain and the fewer bugs you have. I always try to keep that in mind.

Delegate to the appropriate party. Then review the assigned task periodically.
Once you know who best to delegate to, who will get tasks done correctly and efficiently, it improves your productivity massively.
It lets you focus on what you want or need to be doing.
Like answering Stack Overflow questions, for example.

Learning how to use recursion effectively. It's given me a framework to decompose complex problems into understandable code. It's helped me to code tough bits faster with no or very few errors. The book that taught me how to think this way was The Little Schemer by DP Friedman.
The second I'd say was learning Lisp. It's helped me to learn other languages more quickly. I can now classify other languages by the subset of Lisp functionality that they implement.
P.S. I don't use recursion frequently in my software. Most modern languages and frameworks have features and utility functions that allow you to avoid using recursion for simpler problems.
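For illustration, a small sketch of the decomposition style recursion encourages; the flatten example below is my own, not from the answer above:

```python
def flatten(tree):
    """Flatten arbitrarily nested lists into one flat list."""
    if not isinstance(tree, list):      # base case: a leaf value
        return [tree]
    flat = []
    for child in tree:                  # recursive case: smaller subtrees
        flat.extend(flatten(child))
    return flat

assert flatten([1, [2, [3, 4]], 5]) == [1, 2, 3, 4, 5]
```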

Related

Code cleanup and optimization, where to start?

I want to optimize my angularjs frontend application and cleanup the code to provide better code quality.
I thought about bringing in more abstraction, since I implemented a lot of similar-looking but slightly different controllers.
My question(s) are the following:
Are there common techniques to recognize bad code and optimize it?
How can one determine if code is either good, bad or redundant?
Where should one start when trying to provide better code quality in an existing software project?
To answer your questions:
Yes, there are: by looking at the code itself, experienced programmers can tell whether it has certain characteristics. Some metrics can serve as alarm signals for quality problems, such as "many dots" in object-oriented languages (the same applies in JavaScript), which indicates tight coupling; see the sketch after this answer. Here is a comprehensive list.
By looking at it, or, as mentioned above, with static code analysis.
As others have stated, don't optimize or refactor just for the sake of having a good-looking code base. When you need to touch existing code anyway, e.g. to add a feature or fix a bug, that is the time to look for code redundancies and the many other signals that suggest refactoring. Martin Fowler wrote an excellent book about it, with step-by-step examples, which IMO is a must-read for every developer. Misko's site is also a good starting point; he talks about testability, but "good" code is well testable.
What's really important before refactoring is to have a strong automated test base to rely on. If you don't have one, add tests first, and go really slowly to make sure you don't break existing functionality.
The topic is really huge and impossible to work through in a post here but I think it's one of the most important ones that makes an experienced programmer.
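To make the "many dots" alarm signal from the first point concrete, here is a small hypothetical sketch (Order, Customer, and Address are invented names):

```python
class Address:
    def __init__(self, city):
        self.city = city

class Customer:
    def __init__(self, address):
        self.address = address

class Order:
    def __init__(self, customer):
        self.customer = customer

    def shipping_city(self):
        # The one place that knows the path to the city.
        return self.customer.address.city

order = Order(Customer(Address("Oslo")))

# Alarm signal: "many dots" couples this line to the internals of
# Order, Customer, and Address all at once.
print(order.customer.address.city)

# Lower coupling: callers use one dot and let Order hide the path.
print(order.shipping_city())
```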
Whether code is good or bad is partly a matter of opinion.
To make the code look better and more efficient I would do something like this:
Don't make the lines too long.
Use variable names that make sense.
Use indentation and blank lines when the code gets messy.
There are a whole lot more things to clean up your code, but these are just some examples.
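A small before/after sketch of those tips applied to a hypothetical function:

```python
# Before: one long line, names that don't mean anything.
def calc(d, r): return sum(v for v in d.values() if v > 0) * (1 + r) if d else 0

# After: meaningful names, shorter lines, blank lines marking the steps.
def total_with_interest(deposits, rate):
    if not deposits:
        return 0

    positive_amounts = (v for v in deposits.values() if v > 0)
    return sum(positive_amounts) * (1 + rate)
```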
If the code works - don't touch it :)
Then, when you work on bug fixes or new features/changes, see if you can also gradually improve the pieces of code you are working with. The more you work with the code, the better your understanding of the overall picture should become, and opportunities to improve and optimize should become more obvious. (You should also continue learning from other sources: books, the internet, other codebases.)
There is no magic "one size fits all" solution :) but yes, you can start with simple style changes as suggested in the other answer.
The process you refer to is commonly known as Refactoring. There are a number of standard techniques for improving code; Martin Fowler's book "Refactoring" has a list, with examples.
Many popular IDEs have refactoring tools built-in.
One of the processes in agile development is known as "red/green/refactor". Red means your code doesn't pass its unit tests; green means it passes (i.e. it does what it's supposed to do), and "refactor" means you make it elegant, maintainable and clean. Because you have a unit test, you know the refactoring doesn't break the code.
Where to start is a tough question - I typically recommend refactoring when you're fixing bugs. You may as well write a unit test to expose the bug, and tidy up the code while fixing the bug. Because that module has a bug, it's likely to be high-risk, so you should improve the code quality.
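A minimal sketch of that loop on a hypothetical bug: write the test that exposes it (red), fix the code until the test passes (green), then tidy up with the test as a safety net (refactor):

```python
import unittest

def slugify(title):
    # Green: split()/join() collapses runs of whitespace. The buggy
    # version used title.lower().replace(" ", "-"), which turned
    # "Hello   World" into "hello---world".
    return "-".join(title.lower().split())

class SlugifyRegressionTest(unittest.TestCase):
    def test_runs_of_spaces_collapse_to_one_hyphen(self):
        # Red: this test failed against the buggy implementation.
        self.assertEqual(slugify("Hello   World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```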

Code readability and UI design balance

I have worked a lot on improving how my code runs (and its "beauty"), but when is it time to stop fixing and start working on the UI?
Microsoft (in my opinion) seems to go with nice code, while Apple goes with a nice UI (although Apple's developer examples do have very nice code).
I'm bad at balancing; when is it time to work on one or the other?
Make it work, then make it elegant, then make it fast.
If the user doesn't like the program, the quality of the code is irrelevant. Once the user likes the program, then code quality becomes more important. And by "like" I mean a good user experience, where the program doesn't crash, fulfills the need that the user has, and adheres to the Principle of Least Surprise.
Note I say "more important" because code is not something the user sees or is interested in. Code quality and "beauty" is important to developers because it's what they see of the program and what they hand over to other developers.
I remember reading something a while ago comparing, in general terms, software developed for the Windows platform with software developed for OS X. It said that Windows programs tended to be written by developers who didn't spend much time on the UI or thinking about the user experience; their focus was getting every piece of functionality they could think of into the program, without much thought about whether it made sense. Mac OS X software, on the other hand, tended to be written by people concentrating on the user experience first and on solving the user's problems. It didn't necessarily have as much functionality, but what it did have was directly tied to what the user needed and was easy to use.
So when is it time to stop and think about the UI? The fact that you have asked the question makes me think it's time to stop right now. If anything, I'd suggest that even before writing any code you should have sketched out a basic UI and worked out what sort of user experience your program is going to provide. If you cannot work that out, you don't want to waste time writing code, because you will never use it.
That's not to say I think code "beauty" is irrelevant. I spend time making sure my code is well written, easy to follow, and looks "good". But that comes after I've figured out the UI, and because I've had a lot of experience cleaning up other people's awful code :-)
This is an incredibly broad question, and the answer is entirely a function of what you're building, why, and for whom.
Some random bits of conventional wisdom that I agree with:
If you're building broadly-applicable consumer software (e.g. for Mac or iOS), then the UI is a critical component of the application. Spend lots of time on it. :) Work with designers. If your software looks like crap, no one will want to use it. (Assuming nobody's going to make them: see next point.)
If you're building internal or enterprise tools, UI polish is probably less important.
Even if you're a one-man (or woman) operation, think of engineering as distinct from "product management". Product management is the thought process that determines what the software should do, and how it should work (and look). Engineering creates the reality, but has different tradeoffs. These are sometimes in conflict, which is hard to handle if you're one person, but try to wear different hats.
In all cases, your code should work, for some reasonable definition of work.
If it's ugly because it's hacked together, you're likely to run into accuracy/correctness problems sooner than if you construct it methodically. It will also be harder to maintain. The tradeoff here is almost never worth it. As you gain experience, you'll more naturally write clean code from the start, even if it's "scratch" code. You'll save time in the long run this way, and not be faced with later wholesale attempts to improve its beauty.
If you're on a team, or working with others, clean code is even more important. Find out if there are any "local" coding conventions, and work closely with colleagues.
As far as improving "how code runs", if you mean performance optimization, don't do any of that until you're sure you need to. Write the simpler code first, even if it's slower. It's likely that it won't be slow enough to matter.
Special case: if you're writing a game, long-term maintainability is less important, because they tend to be "throwaway" at some level. YMMV.

Test driven development: What if the bug is in the interface?

I read the latest coding horror post, and one of the comments touched a nerve for me:
This is the type of situation that test driven design/refactoring are supposed to fix. If (big if) you have tests for the interfaces, rewriting the implementation is risk-free, because you will know whether you caught everything.
Now in theory I like the idea of test driven development, but all the times I've tried to make it work, it hasn't gone particularly well; I get out of the habit, and the next thing I know, all the tests I originally wrote not only don't pass, but they're no longer a reflection of the design of the system.
It's all well and good if you've been handed a perfect design from on high, straight from the start (which in my experience never actually happens), but what if halfway through the production of a system you notice that there's a critical flaw in the design? Then it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests. A fundamental assumption was wrong, and now you have to change it. Now test driven development is no longer a handy thing, but it just means that there's twice as much work to do everything.
I've tried to ask this question before, both of peers, and online, but I've never heard a very satisfactory answer. ... Oh wait.. what was the question?
How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space? How do you make the TDD practice work for you instead of against you?
Update:
I still don't think I fully understand it all, so I can't really make a decision about which answer to accept. Most of my leaps in understanding have happened in the comment sections, not in the answers. Here's a collection of my favorites so far:
"Anyone who uses terms like "risk-free"
in software development is indeed full
of shit. But don't write off TDD just
because some of its proponents are
hyper-susceptible to hype. I find it
helps me clarify my thinking before
writing a chunk of code, helps me to
reproduce bugs and fix them, and makes
me more confident about refactoring
things when they start to look ugly"
-Kristopher Johnson
"In that case, you rewrite the tests
for just the portions of the interface
that have changed, and consider
yourself lucky to have good test
coverage elsewhere that will tell you
what other objects depend on it."
-rcoder
"In TDD, the reason to write the tests
is to do design. The reason to make
the tests automated is so that you can
reuse them as the design and code
evolve. When a test breaks, it means
you've somehow violated an earlier
design decision. Maybe that's a
decision you want to change, but it's
good to get that feedback as soon as
possible."
-Kristopher Johnson
[about testing interfaces] "A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, checks that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation."
–Michael Borgwardt
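Sketched in Python with two hypothetical stand-in implementations, that idea becomes a single contract test suite instantiated once per implementation:

```python
import unittest

class ListBackedSet:
    """Hypothetical implementation backed by a list."""
    def __init__(self):
        self._items = []
    def add(self, item):
        if item not in self._items:
            self._items.append(item)
    def remove(self, item):
        self._items.remove(item)
    def contains(self, item):
        return item in self._items
    def size(self):
        return len(self._items)

class HashBackedSet:
    """Hypothetical implementation backed by a built-in set."""
    def __init__(self):
        self._items = set()
    def add(self, item):
        self._items.add(item)
    def remove(self, item):
        self._items.remove(item)
    def contains(self, item):
        return item in self._items
    def size(self):
        return len(self._items)

def make_contract_tests(factory):
    """One set of interface tests, reused for every implementation."""
    class SetContractTest(unittest.TestCase):
        def setUp(self):
            self.s = factory()
        def test_size_matches_number_of_inserts(self):
            self.s.add(1)
            self.s.add(2)
            self.assertEqual(self.s.size(), 2)
        def test_contains_only_what_was_inserted(self):
            self.s.add(1)
            self.assertTrue(self.s.contains(1))
            self.assertFalse(self.s.contains(99))
        def test_remove_deletes_the_element(self):
            self.s.add(1)
            self.s.remove(1)
            self.assertFalse(self.s.contains(1))
    return SetContractTest

ListBackedSetTest = make_contract_tests(ListBackedSet)
HashBackedSetTest = make_contract_tests(HashBackedSet)

if __name__ == "__main__":
    unittest.main()
```

When the interface changes, only make_contract_tests needs updating, not one copy of the tests per implementation.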
One of the practices of TDD is the use of baby steps (which can feel very boring in the beginning): really small steps that let you understand your problem space and arrive at a good, satisfactory solution to your problem.
If you already know the complete design of your application up front, you aren't doing TDD at all; you design it while writing your tests.
So my suggestion is to concentrate on baby steps in order to arrive at a properly testable design.
I don't think any real practitioner of TDD will claim that it completely eliminates the possibility of error or regression.
Remember that TDD is fundamentally about design, not about testing or quality control. Saying "all my tests pass" does not mean "I'm finished."
If your requirements or high-level design change drastically, then you may need to throw away all your tests along with all the code. That's just how things are sometimes. It doesn't mean that TDD isn't helping you.
Properly applied, TDD should actually make your life a lot easier in the face of changing requirements.
In my experience, code that is easy to test is code that is orthogonal to other subsystems and has clearly defined interfaces. Given such a starting point, it is much easier to rewrite significant portions of your application, since you can work with confidence knowing that a) your changes will be isolated to a few subsystems, and b) any breakage will quickly show up as failing tests.
If, on the other hand, you're just slapping unit tests on your code after it has been designed, then you may well have problems when requirements change. There's a difference between tests that fail quickly when subsystems change (because they're effectively flagging regressions) and those that are brittle, because they depend on too many unrelated pieces of system state. The former should be fixable by a few lines of code, while the latter may leave you scratching your head for hours trying to unravel them.
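As a hypothetical sketch of the difference: the focused test below exercises only the unit's own interface, while a brittle one would depend on surrounding system state:

```python
import unittest

class ReportBuilder:
    """Hypothetical unit with a small, clearly defined interface."""
    def __init__(self, rows):
        self.rows = rows
    def total(self):
        return sum(count for _, count in self.rows)
    def body(self):
        return "\n".join(f"{name}: {count}" for name, count in self.rows)

class FocusedReportTest(unittest.TestCase):
    # Fails only when ReportBuilder's own contract is violated, so a
    # failure flags a real regression in this unit.
    def test_total_and_body(self):
        report = ReportBuilder([("widgets", 3), ("gadgets", 2)])
        self.assertEqual(report.total(), 5)
        self.assertIn("widgets: 3", report.body())

# A brittle version would log in, load a full database fixture, and
# diff the rendered page against a golden file: any unrelated change
# in system state breaks it, and unraveling the failure takes hours.

if __name__ == "__main__":
    unittest.main()
```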
The only true answer is: it depends.
There are ways to do TDD wrong, such that it doesn't fit in with your environment and eats effort with minimal benefit.
There are ways to do TDD right, such that it both cuts costs and increases quality.
And there are ways to do something similar-but-different to TDD, which may or may not get called TDD, and may or may not be more appropriate in your particular situation.
It's a strange quirk of the market for software tools and experts that, to maximise the revenue for those pushing them, they are always written as if they somehow apply to 'all software'.
Truth is, 'software' is every bit as diverse as 'hardware', and nobody would think of buying a book on bridge-making to design an electronic gadget or build a garden shed.
I think you have some misconceptions about TDD. For a good explanation and example of what it is and how to use it, I recommend reading Kent Beck's Test-Driven Development: By Example.
Here are a few further comments that may help you understand what TDD is and why some people swear by it:
"How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space?"
TDD is a technique for exploring a problem space and creating and evolving a design that meets your needs. TDD is not something you do in addition to doing design; it is doing design.
"How do you make the TDD practice work for you instead of against you?"
TDD is not "twice as much work" as not doing TDD. Yes, you'll write a lot of tests, but that doesn't really take much time, and the effort isn't wasted. You have to test your code somehow, right? Running automated tests are a lot quicker than manually testing whenever you change something.
A lot of TDD tutorials present highly detailed tests of every method of every class. In real life, people don't do this. It is silly to write a test for every setter, every getter, and so on. The Beck book does a good job of showing how to use TDD to quickly design and implement something, slowing down to "baby steps" only when things get tricky. See How Deep Are Your Unit Tests for more on this point.
TDD is not about regression testing. TDD is about thinking before you write code. But having regression tests is a nice side benefit. They don't guarantee that code will never break, but they help a lot.
When you make changes that cause tests to break, that's not a bad thing; it's valuable feedback. Designs do change, and your tests aren't written in stone. If your design has changed so much that some tests are no longer valid, then just throw them away. Write the new tests you need to be confident about the new design.
it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests.
A fundamental creed of TDD is to avoid duplication both in the production code AND in the test code. If a single design change means you have to rewrite everything, you weren't doing TDD, or weren't doing it correctly.
Ideally, in a well-designed system with proper separation of concerns, design changes are local, just like implementation changes. While the real world is rarely ideal, you still usually get something in between: you have to change some of the production code and some of the tests, but not everything, and the changes are mostly simple and may even be done automatically by refactoring tools.
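One common way to get that locality in test code, sketched here with invented names: route all test-side construction through a single helper, so a constructor change is absorbed in one place:

```python
import unittest

class Invoice:
    """Hypothetical production class whose constructor may change."""
    def __init__(self, customer, lines, currency="EUR"):
        self.customer = customer
        self.lines = lines
        self.currency = currency
    def total(self):
        return sum(qty * price for qty, price in self.lines)

def make_invoice(**overrides):
    """The single place where tests build an Invoice. If the
    constructor signature changes, only this helper is edited,
    not every individual test."""
    defaults = dict(customer="acme", lines=[(2, 10.0)], currency="EUR")
    defaults.update(overrides)
    return Invoice(**defaults)

class InvoiceTest(unittest.TestCase):
    def test_default_total(self):
        self.assertEqual(make_invoice().total(), 20.0)

    def test_total_over_multiple_lines(self):
        invoice = make_invoice(lines=[(1, 5.0), (3, 2.0)])
        self.assertEqual(invoice.total(), 11.0)

if __name__ == "__main__":
    unittest.main()
```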
Coding something without knowing what will work best in the UI, while at the same time writing unit tests, is very time-consuming. It's better to start out by making some prototypes of the GUI to get the interaction right, and then rewrite it with unit tests (if your employer allows you to).
Continuous Integration (CI) is one key. If your tests run automatically every time you check in to source control (and everyone else sees it if they fail), it's easier to avoid "stale" tests and stay in the green.
As Mr. Dias mentioned, Baby Steps are important. You make a small refactoring, you run your tests. If tests break, you immediately determine if this is expected (design change) or a failed refactoring. When tests are truly independent (comes with practice), this is seldom very difficult. Evolve your design slowly.
See also http://thought-tracker.blogspot.com/2005/11/notes-on-pragmatic-unit-testing.html - and definitely buy the book!
EDIT: Perhaps I'm looking at this the wrong way. Say you had a legacy codebase that you wanted to redesign. The first thing I would try to do is add tests for the current behavior. Refactoring without tests is risky - you might change behavior. After that, I would start to clean up the design, in small steps, running my unit tests after each step. That would give me confidence that my changes weren't breaking anything.
At some point the API might change. This would be a breaking change - clients would have to be updated. The tests would tell me this - which is good, because I'd have to update any existing clients (including the tests).
Now that's not TDD. But the idea is the same - the tests are specifications of behavior (yes, I'm shading into BDD), and they give me the confidence to refactor the implementation while ensuring that I preserve the behavior (as well as letting me know when I change the interface).
In practice, I've found TDD gives me immediate feedback on poor interface design. I'm my first client - I know when my API is hard to use.
We tend to do much less design up front with TDD, knowing it can change. I have taken projects through huge gyrations (it's a web app, no it's a RESTful server, no it's a bot). The tests give you the ability to refactor, restructure, and evolve your code much more easily than untested code allows. Although it seems contradictory, it is true: even though you have more code, you can make major changes with confidence that nothing in the existing functionality has broken.
I understand your concern that fundamental assumptions changing make you throw out tests. This seems intuitive, but I personally haven't seen it. Some tests go, but most are still valid-- often a major change isn't as major as it seems at first. Plus, as you get better at writing tests, you tend to write less brittle ones, which helps.

Understanding code metrics

I recently installed the Eclipse Metrics Plugin and have exported the data for one of our projects.
It's all very good having these nice graphs but I'd really like to understand more in depth what they all mean. The definitions of the metrics only go so far to telling you what it really means.
Does anyone know of any good resources, books, websites, etc, that can help me better understand what all the data means and give an understanding of how to improve the code where necessary?
I'm interested in things like Efferent Coupling, and Cyclomatic Complexity, etc, rather than lines of code or lines per method.
I don't think that code metrics (sometimes referred to as software metrics) provide valuable data in terms of where you can improve.
With code metrics it is sort of nice to see how much code you write in an hour, etc., but beyond that they tell you nothing about the quality of the code written, its documentation, or its code coverage. They are pretty much a weak attempt to measure what you cannot really measure.
Code metrics also discriminate against the programmers who solve the harder problems, because they obviously end up writing less code. Yet they solved the hard issues, while a junior programmer whipping out lots of crap code looks good.
Another example of using metrics is the very popular Ohloh. They employ metrics to put a price tag on an open-source project (using number of lines, etc.), which in itself is an attempt that is flawed as hell, as you can imagine.
Having said all that, the Wikipedia entry provides some overall insight on the topic. Sorry not to answer your question in a more supportive way with a really great website or book, but I bet you get the drift that I am not a huge fan. :)
Something to employ to help you improve would be continuous integration and adherence to some sort of standard for code, documentation, and so on. That is how you can improve. Metrics are just eye candy for meetings: "look, we coded that much already".
Update
OK, my point is that efferent coupling or even cyclomatic complexity can indicate that something is wrong, but it doesn't have to be wrong. Such a metric can be an indicator that a class should be refactored, but there is no rule of thumb that tells you when.
IMHO a rule such as "refactor anything over 500 lines of code", or the DRY principle, is more applicable in most cases. Sometimes it's as simple as that.
I'll grant you this much: since cyclomatic complexity can be graphed as a flow chart, it can be an eye-opener. But again, use it carefully.
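For a concrete feel of what the number measures, here is a hypothetical before/after: each if/elif adds a decision point to the cyclomatic complexity (the first version scores roughly 7, the rewrite roughly 3):

```python
# High cyclomatic complexity: six decision points in one function.
def shipping_cost(region, weight):
    if region == "EU":
        if weight < 1:
            return 5
        else:
            return 9
    elif region == "US":
        if weight < 1:
            return 7
        else:
            return 12
    elif region == "APAC":
        if weight < 1:
            return 8
        else:
            return 15
    else:
        raise ValueError(region)

# Refactored: the branching moves into data, and the metric drops.
RATES = {"EU": (5, 9), "US": (7, 12), "APAC": (8, 15)}

def shipping_cost_refactored(region, weight):
    if region not in RATES:
        raise ValueError(region)
    light, heavy = RATES[region]
    return light if weight < 1 else heavy
```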
In my opinion metrics are an excellent way to find pain points in your codebase. They are also very useful for showing your manager why you should spend time improving it.
This is a post I wrote about it: http://blog.jorgef.net/2011/12/metrics-in-brownfield-applications.html
I hope it helps

Best concrete "How-To Manual" on *Managing* Test Driven and/or Agile development? [closed]

I am looking for an easily digestible book to present to my boss/team.
Background info: More and more of our meetings at work involve my boss/team pondering how to implement more "best practices" around here. ("Here" = a very small application development shop. 4 developers)
The following things are items that my whole team agrees that we need:
Nightly builds
Decomposing "bugs" in our bug-tracker into smaller, more-specific items
Automated testing
The problem we face is how to get started.
I believe that if my shop could simply choose a clear and specific plan or set of rules, then everything else would fall into place. Right now we are stuck in discussions of fuzzy, feel-good ideas and nice-sounding buzzwords.
Please recommend to me your favorite book (or online resource) that contains clear, discrete, sequential steps for implementing a management scheme for guiding a TDD or Agile team/shop.
I realize that there are other paradigms besides TDD and Agile that would also address these concerns, but my own self-interests and biases point toward TDD and Agile so I would love to harness my team's desire for change and "nudge" it in that direction. Or feel free to slap me down if you vehemently disagree with my sentiments! I will take no offense. :)
As others have stated, I think these questions are answered best when respondents list only one book recommendation per answer.
Thank you all.
To throw another Pragmatic Programmers title in the mix: Ship It!
Great book - take a look, might suit your needs with management.
For your needs I recommend Test Driven Development: By Example (Kent Beck). It is clearly-written, more practical than theoretical, and prescribes time-tested recipes to adopt an agile, test-driven approach.
I suggest that you start with what you agree on: nightly builds and automated tests. Nightly builds are a no-brainer. For automated tests I would start with:
Each commit with new functionality should have at least one automated test
Each commit that fixes a bug should have at least one automated test that fails without the fix and succeeds with the fix
If you do this, you will start to gain experience. With this experience it will be much easier to understand all the good advice that the literature has.
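A minimal sketch of the second rule above, with invented names: the test reproduces the bug, fails without the fix, and then guards it forever:

```python
import unittest

def average(values):
    # Bug fix: the original divided by len(values) unconditionally and
    # raised ZeroDivisionError when the list was empty.
    if not values:
        return 0.0
    return sum(values) / len(values)

class AverageRegressionTest(unittest.TestCase):
    def test_empty_list_returns_zero(self):
        # Fails with ZeroDivisionError if the fix is reverted.
        self.assertEqual(average([]), 0.0)

    def test_normal_case_still_works(self):
        self.assertEqual(average([2, 4]), 3.0)

if __name__ == "__main__":
    unittest.main()
```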
There are a lot of good books on good practices, but you have to figure out what works for your team.
Agile methods are not really methods...
They are more a spirit. The main thing is a focus on:
communication
reactivity to changes
customer orientation
This can be achieved in a lot of ways, and it's more important to find your own way of doing it. If you want an example of what this spirit can be, you can read 37signals' free online book, Getting Real.
But there are some steps you can start with
There are no big rules you must enforce, but you can try the following and see how it goes with your team:
Quick stand-up meetings: a daily 5-15 minute meeting where everybody stays on their feet and explains what they have accomplished, what needs to be done, and what could prevent them from doing it. Keep it under 15 minutes, with the minimum number of people.
Set simple goals with short-term deadlines instead of big goals weeks away.
Build small teams (3 people) and divide the work between them. Put them in the same room and make sure they have at least half a day to work without ANY interruption.
Hold many small, regular reviews with your customer. Don't write specs. Sketch, design, prototype, show it to the customer, fix/adapt/change, then build. Then do it again.
Testing, versioning, and bug tracking are tools, not methods. No one cares how you do them or with what software, as long as you do them. They are not the issue.
It's not really a step-by-step book, but it is full of great advice and easy to digest: Practices of an Agile Developer. And if you want in-house TDD training, I recommend netobjectives. I took their TDD course once and it really opened my eyes.
Sadly, even the most clear and specific plan can turn out to be disputable.
I'll tell you what works. Starting TDD immediately. It has boundaries. It's relatively easy. You'll still have a million questions.
You are free to say, "But what about nightly builds?" "What about using the bug tracker?"
A lot of pondering can mean one of two things.
First, it can mean that someone is muddying the waters with "concerns" and "questions". Sometimes this is really displeasure at changing, voiced as "concerns". Sometimes this is really crushed egos ("I thought I was pretty sharp, now someone is saying I must have improvements imposed on me.")
Second, it can mean that this is dauntingly large. Consequently, don't look at this as "Many New Best Practices". Look at this as just a few incremental improvements. You're not changing yourselves fundamentally (well, that could happen, but don't start out with that as your plan.)
My experience is that you can only do one new thing at a time. Do TDD until it's boring. Then do something else. Often nightly builds become obvious after you have a robust test suite. Then when that's boring, do some other small, incremental process improvement.
One thing at a time. Baby steps. Avoid throwing out babies with bath-water. All you want is to be a little better next month than you are this month.
If there are concerns about adopting one small incremental improvement, find the root cause. Whose ego is bruised? Who's worried about change?
You could print out Scrum and XP from the Trenches by Henrik Kniberg for him. It focuses more on the agile development process than on TDD, but it's an easy and quick read.
Check out books by Marty Cagan.
My favorite is Planning Extreme Programming.
EDIT: It provides a complete replacement for traditional project management, geared towards an XP/agile team.
The danger is adopting modern development methods and then strangling them with archaic project management and administration practices!