Tuesday, April 22, 2008

Tracking Technical Debt in ReSharper

I love ReSharper. It's purely platonic. After all, I am a married man. But it's always there for me: my trusty little sidekick, my own personal Sancho Panza. Its array of helpful features is truly impressive: it enhances editing capabilities; it helps me organize the classes in my projects; it cleans up my code for me; it tells me when I'm missing references; it offers suggestions for more readable conditional statements; it's a floor wax, a dessert topping, and a gentle laxative.

One of my favorite ReSharper features is the To-do Explorer. It scans your code base filtering on comment tags you define ("//TODO", et al) and gives you a clickable sorted list of those items. It's nicer than the VS TaskList in that you can sort the results in different ways (my favorite being by namespace and type), and you can copy the results to the clipboard or save them out to a text file.

Since swinging into TDD at our office in the last few weeks, we've started taking the idea of Technical Debt very seriously. As we're working through our implementation, we recognize that there are some deadlines which will force us to make some compromises on design decisions. That's ok as long as we don't forget to go back and re-work those things later. We've found that an easy way to track those things is to mark them with the comment tag "//DEBT" and set up a ReSharper To-do Explorer filter for that tag. Then when we're planning our next phase, we've got a sortable, printable list of items we know we have to prioritize into the project plan.

Taking that idea a bit further, we decided to standardize on the following three code comment tags and create ReSharper To-do Explorer filters for each:

  1. //HACK == a HACK is something we put in place knowing it was wrong to begin with, and knowing it has to be taken care of as quickly as possible.
  2. //TODO == a TODO could be anything that we need to complete by the end of the current iteration.
  3. //DEBT == a technical debt is a calculated decision to complete a feature in this iteration using a design we know we will need to refactor in a future iteration. 
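In code, the tags are just ordinary comments, so they cost nothing to adopt. Here's a hypothetical example — the class, names, and scenarios are invented for illustration, not taken from our actual code base:

```csharp
public class InvoiceService
{
    //HACK: connection string is hard-coded so the demo build works;
    //      this is flat wrong and must move to configuration ASAP.
    private const string ConnectionString = "Server=devbox;Database=Invoices;";

    public void SubmitInvoice(Invoice invoice)
    {
        //TODO: add input validation before the end of this iteration.

        //DEBT: saving synchronously is a calculated compromise to hit the
        //      deadline; queue these writes when we rework this next iteration.
        SaveToDatabase(invoice);
    }

    private void SaveToDatabase(Invoice invoice)
    {
        // persistence details omitted
    }
}
```

Each of those comment lines then shows up under its own filter in the To-do Explorer.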

When we take a look at To-do Explorer, we get something like this:


So before the end of the current iteration, we know we need to clean up our HACKs and TODOs.  And at the beginning of the next iteration, we can throw the DEBTs in with the requirements to be prioritized and worked into the iteration plan.

I'm sure there are other ways to track technical debt that offer more info, but this is a quick, easy, and cheap way.


Continuous Integration and You

OK, so it's not really about you.  It's about me.  I had to have something to pique your interest though, didn't I?  I wrote this up for my team at work a while back, and I thought I would share it with the universe.  So here it is, universe.  Hope you dig it.

Working Together

Creating software is fundamentally a creative act of communication. There are plenty of things to communicate about on any non-trivial software project:

  • Customers and end users must communicate with business analysts and designers about their requirements for the system.
  • Business analysts and designers must work with test engineers, using these requirements to describe the bar the system must clear to be acceptable to customers.
  • The development team must understand this specification and communicate with designers, and with one another, so that they can clear that bar quickly enough to satisfy customers and deliver true value to them.
  • And tons and tons of other stuff…

Poor communication can contribute real friction to a software project. And, as any major dude will tell you, friction can be a real drag.  A primary goal of any software development team should be to reduce friction wherever it occurs in the development process.

In a traditional waterfall approach to managing software projects, stitching together each developer’s work into a single integrated project typically occurs towards the end of a development cycle. This step can be a long and unpredictable process, full of friction, and it can rapidly turn into a nightmare. If you think about it, this is really another communication issue: how do we communicate with one another about the work we have each done as individuals, and put it together so that it all works together seamlessly?

Benefits of Continuous Integration

It is important to remember that creating software is a complex process, and there are no silver bullets to ward off a certain amount of complexity. But we should simplify things where we can. Adopting the practice of Continuous Integration (CI) is one step towards simplifying our software development projects. What can CI buy us when we use it in a disciplined way?

  • CI can eliminate the need for a long, arduous, risky integration task at the end of a development cycle.
  • If we combine CI with good automated unit testing and code coverage metrics, we can make our projects practically self-testing. A good suite of unit tests executing a high percentage of the code base on every integrated build can keep the quality of the code from deteriorating as the development cycle progresses.
  • By working in short bursts and committing new code to the master build server often, defects that are introduced into the master build become easier to find and eliminate: you’ve only changed a small piece of code since the last time the system functioned properly, so you don’t have very far to look to find the defect you just introduced.
  • Anyone on the team can get the latest functioning code, build it, and test it locally with minimum effort.
  • At any time in the development cycle, everyone on the team knows precisely what works, what doesn’t, and how far along they are toward functional code for the iteration.
  • CI can facilitate enforcement of coding standards.
  • In short, the biggest umbrella benefits CI can buy us are better communication on the development team and reduced risk.

Practices of Continuous Integration

So what exactly is involved when working in a CI environment? What tools are needed? What do developers have to do differently? I know: reading a bunch of wordy blather[1] from yet another starry-eyed XP acolyte about how “this practice will change your life” is extremely tedious. So I drew you a picture. Lemme ‘splain:

CI Tools

This diagram is intended only to be an example of how a CI environment can be configured. There are many different ways to configure a CI server, and many tools you could use. In my diagram, I chose the following tools, some of which we already use at [my company]:

  1. CruiseControl.Net – the CI server software
  2. CCTray – a status notification tool for CruiseControl.Net
  3. Subversion – our source control system
  4. Visual Studio – duh!
  5. NUnit – an open source .Net library for creating and running unit tests
  6. NCover – an open source .Net library for measuring the percentage of code the unit tests actually cover when testing
  7. FxCop – a free static analysis tool from Microsoft for enforcing coding standards

Most of these tools are open source, which means they are free* - that's a big asterisk there; the licenses for some of these products are changing. You could choose others which are true third party commercial applications, supported by third party commercial software vendors. But there is a wealth of knowledge documented in the .Net community at large for a CI environment configured like this:


These are the steps:

  1. Individual developers on the team check new code into Subversion.
  2. CruiseControl.Net checks the source code repository on regular intervals for newly checked-in code.
  3. When CruiseControl.Net detects new source code in the repository, it runs targets in the NAnt build script.
  4. NAnt gets the latest source code from the repository onto the build machine.
  5. NAnt compiles the project using Visual Studio. If the project won’t compile, the NAnt script exits with a failure message, and CruiseControl.Net delivers a failure message to developers via CCTray.
  6. NAnt runs all the unit test fixtures using the NUnit framework. If any individual test in any test fixture fails, the NAnt script exits with a failure message, and CruiseControl.Net delivers a failure message to developers via CCTray.
  7. NAnt uses NCover to measure the percentage of code executed by the NUnit test fixtures. You can set a coverage threshold as a failable build step: if the coverage percentage does not meet the stated coverage standard, say 85%, the NAnt script exits with a failure message, and CruiseControl.Net delivers a failure message to developers via CCTray.
  8. NAnt uses FxCop to check the code for compliance with coding standards. If it finds any non-compliant code, the NAnt script exits with a failure message, and CruiseControl.Net delivers a failure message to developers via CCTray.
  9. Once the NAnt script finishes all its tasks, it exits with a success or failure message. CruiseControl.Net delivers the message to developers via CCTray. At the same time, CruiseControl.Net publishes web pages for the build it just completed, containing information from each step in the build process.
  10. Members of the development team can check these web pages to see details on the build results.
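Steps 1 through 3 and step 9 above correspond to a fairly small amount of CruiseControl.Net configuration. Here's a trimmed-down sketch of a ccnet.config project block; the project name, URL, paths, and target name are placeholders I've made up, not taken from a real setup:

```xml
<cruisecontrol>
  <project name="MyProject">
    <!-- Poll Subversion every 60 seconds for newly checked-in code -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/repos/myproject/trunk</trunkUrl>
      <workingDirectory>C:\builds\myproject</workingDirectory>
    </sourcecontrol>
    <!-- When new code is detected, run the targets in the NAnt build script -->
    <tasks>
      <nant>
        <baseDirectory>C:\builds\myproject</baseDirectory>
        <buildFile>myproject.build</buildFile>
        <targetList>
          <target>build</target>
        </targetList>
      </nant>
    </tasks>
    <!-- Publish results so the team can browse them as web pages -->
    <publishers>
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>
```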

Developer Tasks

There are a lot of things that happen automatically in the scenario I described above. Running an environment like that is like having a whole extra person whose job is to put everybody’s code together, regression test every build, check everybody’s code for compliance to coding standards, and report the status of all that back to everyone on the team; but you don’t have to increase your Mountain Dew budget for this extra person.

However, like just about everything else in life, you will only get out of CI what you put into it. You have to approach CI with a certain discipline, and that means developers have to do things a little bit differently. But the cost to developers is small, and the benefit to the quality and progress of the project is great.

Automate the Build

Somebody has to champion the task of creating the CruiseControl.Net configuration and the NAnt scripts. These are not trivial tasks. But once the first set of config files and build scripts have been created, they can be extended for new build tasks and used as templates for other projects. Ideally, the build server should be looking for newly checked-in code in a very short feedback loop, say a range from every 60 seconds to every 10 minutes. Structure the automated build process such that it performs tasks which are important to the development team, and reports on the results of those tasks.
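To make that task a bit more concrete, here's a bare-bones sketch of what a NAnt build file covering steps 5 and 6 might look like. The solution and assembly names are invented, and the msbuild task assumes the NAntContrib task library is available on the build machine:

```xml
<project name="myproject" default="build">
  <target name="build" depends="compile, test" />

  <!-- Compile the solution; a compile error fails the build right here -->
  <target name="compile">
    <msbuild project="MyProject.sln">
      <property name="Configuration" value="Release" />
    </msbuild>
  </target>

  <!-- Run every NUnit test fixture; any failing test fails the build -->
  <target name="test" depends="compile">
    <nunit2>
      <formatter type="Xml" usefile="true" todir="build\results" />
      <test assemblyname="build\MyProject.Tests.dll" />
    </nunit2>
  </target>
</project>
```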

Write Unit Tests

Test your code. First write tests for your code, then write some more tests, and finally write some tests. And while you’re at it, write some tests. I cannot stress enough how valuable unit testing will be when combined with a CI environment. Test fixtures and test coverage metrics will be the safety net you rely on to tell you the health of the code as you move through the development cycle. Testing and testability should be first class citizens among all the considerations involved in the design process. Every developer on the team should provide test fixtures for their code, and the results of these tests should be viewed as a measure of the health of the code. Decide on a code coverage standard early (85% isn’t bad), and enforce it throughout the development cycle.
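If you haven't seen NUnit before, a test fixture is just a class marked with attributes that NUnit discovers and runs. A tiny hypothetical example, with the InvoiceCalculator class invented here purely to keep the sketch self-contained:

```csharp
using NUnit.Framework;

// Invented class under test: computes an invoice total from a tax rate.
public class InvoiceCalculator
{
    private readonly decimal _taxRate;
    public InvoiceCalculator(decimal taxRate) { _taxRate = taxRate; }
    public decimal Total(decimal subtotal) { return subtotal * (1 + _taxRate); }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_Includes_Tax()
    {
        InvoiceCalculator calc = new InvoiceCalculator(0.08m);
        Assert.AreEqual(108m, calc.Total(100m));
    }

    [Test]
    public void Total_Of_Zero_Is_Zero()
    {
        InvoiceCalculator calc = new InvoiceCalculator(0.08m);
        Assert.AreEqual(0m, calc.Total(0m));
    }
}
```

When NAnt runs fixtures like this on the build server, one failing assertion fails the whole build — which is exactly the safety net you want.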

Commit Early and Commit Often

A developer’s attitude towards committing code to source control should be much like Al Capone’s attitude towards voting: do it early and often. The quicker you can get code onto the build server, the quicker you know your code will integrate with everyone else’s code.

Do the Check In Dance

Didn’t know this was going to be a dance lesson, did you? Jeremy Miller, the Arthur Murray of programming, has documented what he calls “The Check In Dance”[2]. And it goes a little something like this:

    1. Let the rest of the team know a change is coming if it's a significant update.
    2. Get the latest code from source control. 
    3. Do a merge on any conflicts.
    4. Run the build locally [using the same build script as the build server],[3] and fix any problems found.
    5. Commit the changes to source control.
    6. Stop coding until the build passes.
    7. If the build breaks, drop everything else and fix the build.

Stupid Developer Tricks

Speaking of Jeremy Miller, he has also documented some good general tips for developers to follow in a CI environment. It is worth quoting him at length:

    • “Check in as often as you can.  Try to reach stopping points as often as you can.  This goes back to the basic agile philosophy of making small changes and immediately verifying the small change.  When you're doing Test Driven Development you strive to keep ‘Red Bar’ periods as short as possible.  The same kind of thinking applies to code check-ins.  Make small changes and see the impact on the rest of the code immediately.  Merging code will be less painful the more frequently a team integrates their code.
    • Avoid stale code.  If you have to keep code out for any length of time, make sure you are getting everyone else's changes.  Try really hard not to keep code out overnight.  If you're using shared developer workstations, put some sort of sign on the workstation that there is outstanding code on the box.  I've seen several XP zealots swear that they'll throw away any code left overnight.  Personally, I think that's just a silly case of ‘I'm more agile than thou,’ but it's still a bad idea to leave code out overnight if you can help it.
    • Don't ever check into or out of a busted build.  Checking in might make it harder to fix the build because it will cloud the underlying reason for the build, and you can't really know if your changes are valid.
    • Communicate and negotiate check-ins to the rest of the team.  Frequently the complexity of a merge can be dependent upon who goes first.  Some teams will use some kind of toy as a ‘check in token’ to ensure that there is never more than one set of updates in any CI build.  Pay attention to what the rest of the team is doing too.
    • If you're working on fixing the build, let the rest of the team know.
    • DON'T LEAVE THE BUILD BROKEN OVERNIGHT.  That's also an occasional excuse to your wife on why you're home late from work.  Use with caution though.
    • Not every member of the team needs to be a full-fledged ‘Build Master,’ but every developer needs to know how to execute a build locally and troubleshoot a broken build.  If you're suckered into being the technical lead, make sure every team member is up to speed on the build.
    • The best practice for effective CI is to perform the integration on a developer workstation before that code escapes into the build server wild. It's okay to break the build once in awhile. One of my former colleagues used to say that the CI build should break occasionally just to know it's actually working. What's not okay is to leave the build in a broken state. That slows down the rest of the team by preventing them from checking in or out. Even worse, somebody might accidentally update their workstation with the broken build and get into an unknown state. If you follow these dance steps, you can minimize build breaks and run more smoothly. Besides, it's embarrassing to have the ‘Shame Card’ on your desk.”


As I said above, there are no silver bullets for much of the complexity we face on software development projects. Continuous Integration is certainly not a silver bullet; but when used effectively, it can indeed improve communication and reduce risk significantly. As Martha says, “Continuous Integration: it’s a Good Thing.”

[1] My wife often accuses me of being tedious and pedantic, particularly because I tend to use words like “tedious” and “pedantic”.

[2] Jeremy Miller, http://codebetter.com/blogs/jeremy.miller/archive/2005/07/25/129797.aspx, 2005

[3] I added the [bracketed text].

Wednesday, April 16, 2008

True Tales of the South - Vol 1

Returning to Austin recently from a friend's wedding in Athens, GA, Wife and I had a layover in Houston. While Wife was napping, I spotted a guy at our gate furtively picking his nose. I nudged Wife and pointed out the picker to her, but he spotted me spotting him. The jig was up. I whispered to Wife that if he got on the plane first, she was going to have to walk in front of me to be a human shield. Sure enough he was in first class. Wife made it past him as we walked by. But as soon as I got to him, he stood up rather suddenly. I startle easily, so I "busted a grumpy", as a friend likes to say. It was silent, swift, and deadly, like a Navy Seal, and I'm sure his freshly-cleared nostrils caught it in full bloom. Turns out he wasn't lunging for me though, he was just reaching for something in his carry-on bag. The end.

Actual Items - April 16, 2008 Edition

Here's an actual code snippet from a user details page I saw recently:

    ...
    //Get Feedback
    //Get Paid Invoices
    //Get Actions:
    //Get Address
    GetOtherData();
    ...

This goes on for about 15 method calls. Way to go on adding those useful, useful comments in there! And then the call to the method named GetOtherData() isn't commented at all. Cracks me up.

Saturday, April 12, 2008

Tools of the Trade

I'm a programmer who is interested in much more than just putting in my eight hours a day and drawing my paycheck twice a month. I want to excel at creating software. Learning to do so will be a life-long pursuit sitting at the feet of masters who are kind enough to pass on their expertise. Standing alongside excellent works by Martin Fowler, Michael Feathers, Brian Marick and others, one of my favorite books on my shelf is "Hand Tools: Their Ways and Workings" by Aldren A. Watson. Mr. Watson is a Vermont woodworker and illustrator who has spent many thousands of hours with the tools of his own trade. He says this about his tools:
“In one sense, tools are simply things of steel and wood, attractive to the eye, perhaps even beautiful in their efficient lines, functional design, and appealing contrasts of texture and color. In another, it might be imagined that they only wait to be taken up and used, when they will then automatically perform with the precision that their appearance implies. This is an illusion. Tools can indeed be made to perform extraordinary tasks, sometimes with such impressive dispatch that they seem to have a life of their own. However, it is more realistic to see that a tool has no more and no less than a high potentiality for capacity performance. At the same time each one has its own peculiar ways and workings, individual quirks of personality, if you like. These traits must be discovered, at times only through dogged trial and error, and the knowledge of them applied with persistent discipline and an attitude of acceptance, for the tool will not change its ways. When a tool is picked up and used in recognition of these limitations, then its full capability can be exploited to your purposes, and the two of you will work agreeably in tandem. Thus there is a sharp distinction between working with your tools and merely working them on wood.

“To my way of thinking the most practical means of acquiring this intimate understanding of the ways and workings of a tool is to take it apart, see how it is built and how its mechanism controls its performance. Sharpen the cutter iron, clean and oil the tool, and put it back together again. Then look into its adjustments, trying out each one of them on waste pieces of wood. Experiment, too, with the different handholds, and the stance of your feet to determine what effect they have on the ease and efficiency of using the tool.

“All of these factors operate in a cyclical fashion. As the potentialities and limitations of a tool are explored and understood, the quality of work tends to improve; and along with it grows the confidence that even more professional procedures are possible. As the tool begins to show signs of functioning more nearly as it was designed to perform, you may perceive that the implications of the phrase ‘in good hands this tool is capable of the finest work’ are not after all beyond your reach.”