Demystifying Extreme Programming: "XP distilled" revisited, Part 2
Programmer practices
Before we get into the programmer practices, let me clarify something from last month. The revised/renamed/new practices I'm talking about in this column represent my musings on two unpublished articles written by Kent Beck. There haven't been any official changes to XP, as far as I know, and these practices and ideas haven't appeared in any formal way on the leadership group or similar location. Maybe there will be no official changes. There certainly isn't any kind of formal revision effort underway. I think the most I can say at the moment is that what I've written might be XP at some point in the future. It'll probably look different. I hope my ideas contribute to the discussion. Read at your own risk.
Test-first development (Maps to testing)
There are two kinds of testing in XP: unit testing and acceptance testing. These are the typical names, but I don't like them. They're too much jargon for me. I prefer the names suggested in "What is Extreme Programming?" (see Resources) by Ron Jeffries: customer tests and programmer tests.
These names get to the heart of the reason behind the two kinds of tests. Programmers write the programmer tests as they write code. Customers write customer tests after defining stories. Programmer tests tell developers whether the system works at any point in time. Customer tests tell the team whether the system does what customers want it to do, as the customers themselves defined it. I'll talk about programmer tests here, and cover customer tests next month.
Assuming the team is using an object-oriented language like the Java language, developers write programmer tests for every method that could possibly break (just the public interface most of the time), before they write the code for that method. Then they write just enough code to get the test to pass. People sometimes find this a little weird, but the point is simple: writing tests first gives you confidence that the code works, a clear definition of "done," and immediate feedback when something breaks.
A developer cannot check code into the source code repository until all the programmer tests pass. Programmer tests give developers confidence that their code works. They leave a trail for other developers to understand the original developer's intent (I've rarely seen better code documentation). Programmer tests also give developers courage to refactor the code, because a test failure tells the developer immediately if something's broken. Programmer tests should be automated and give a clear pass or fail result. xUnit frameworks (see Resources) do all this and more, so most XP teams I know of use them. There is an xUnit for almost every language imaginable. Just replace "x" with the language or tool of your choice (for instance, VBUnit for Microsoft Visual Basic, CppUnit for C++, and so on).
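To make the rhythm concrete, here is a minimal sketch of test-first development on a made-up IntStack class. An XP team would normally write the test as a JUnit (xUnit) test case; I use a plain assertion-style check here only to keep the example self-contained. The class and test names are my inventions, not from any particular project.

```java
// A sketch of the test-first rhythm. The test below was "written first":
// it defines exactly what done means for push() and pop(), and the class
// underneath contains just enough code to make it pass.

public class TestFirstSketch {

    // Step 1: write the test before the code it exercises.
    static void testPushThenPopReturnsLastElement() {
        IntStack stack = new IntStack();
        stack.push(7);
        stack.push(42);
        if (stack.pop() != 42)
            throw new AssertionError("pop should return the last pushed value");
        if (stack.pop() != 7)
            throw new AssertionError("pop should then return the earlier value");
    }

    public static void main(String[] args) {
        testPushThenPopReturnsLastElement();
        System.out.println("All programmer tests passed.");
    }
}

// Step 2: just enough code to get the test to pass -- no capacity
// tuning, no iterators, no hooks for features nobody asked for yet.
class IntStack {
    private final java.util.ArrayDeque<Integer> items = new java.util.ArrayDeque<>();

    void push(int value) { items.push(value); }

    int pop() { return items.pop(); }
}
```

In JUnit the check would become an assertEquals call inside a test method, and the framework would report the clear pass-or-fail result the practice depends on.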
As a programmer, I am frequently amazed by the difference between coding with programmer tests and coding without them. I find myself taking extremely small steps when I write code. In fact, I don't have to debug very much when I'm taking small enough steps, because the source of a problem becomes obvious: it's got to be that line of code I just wrote. The phenomenal thing to me is that writing a test first quite often drives me to create the simplest code possible.
I'm sure you have written code you thought you would need later, but did you need it? Perhaps, but if you didn't, did you remove it? Probably not. So it just sits around not serving any useful purpose. What if your test told you exactly how much code to write? What if you only wrote enough code to get the test to pass, and no more? You would probably have less code, and it would all be used. That's the kind of code I want to write. When you write your test first, the test drives the code. As a result, your code will look remarkably different when tests drive it. And in my experience, my code is much simpler when tests drive it, and I'm always in the market for greater simplicity. Kent Beck is writing a book called Test-Driven Development (see Resources), which details this practice. I recommend it. In fact, I prefer the name test-driven development for this practice.
Pair programming
In XP, two developers write all production code together, sitting at one machine. This may sound inefficient. I've always liked Martin Fowler's response: "When people say that pair programming reduces productivity, I answer, 'That would be true if the most time-consuming part of programming was typing.'" In fact, pair programming, or just "pairing" for short, provides many benefits, including continuous code review, surprising efficiency, and reduced project risk.
Empirical research indicates that code reviews increase code quality, but I hate doing them. I also believe they tend to be very difficult to do correctly, and they aren't nearly as effective as pair programming. What if every line of production code were reviewed by somebody who was intimately familiar with what the code was supposed to be doing? Pair programming gives you precisely that. Nothing keeps you honest like a witness.
Research discussed in The Costs and Benefits of Pair Programming by Alistair Cockburn and Laurie Williams (see Resources) also shows that programming in pairs is actually more efficient than programming alone. This is a bit counterintuitive. Most managers (and I've been one) will see two developers doing the work of one and stop there. That's not thinking beyond the end of your nose. It's too simplistic. It's also false.
As for risk, think for a minute about why projects fail. One of the big reasons is extreme dependence on individual heroes. If your project's hero is killed in a freak farming accident, your project might be toast. The essence of pairing is to spread knowledge around. Pairs should switch around every so often. If pairs get too sticky, they get stuck.
Despite the good numbers and the strong arguments in favor of pairing, however, most developers hate this idea. Perhaps it's an issue of pride. Deep down, I believe most developers want to be the hero. Pairing makes that next to impossible. As I wrote in Extreme Programming Applied: Playing to Win (see Resources):
Anybody can sit next to someone else and throw in two cents every so often. Many people can be completely engaged and try to make the result better. But the ones who really understand pairing know that it's about loving another person.
As I said in the book, I'm talking about the kind of love that shows itself through actions, not the romantic kind. When you love another person, you try to see the best in him and help him grow. You'll be patient, kind, not jealous, generous, humble, polite. This kind of intense interpersonal relationship, even if it's just for a few hours, is not something most people are interested in, and developers are people. This is a radical idea and it isn't for everyone. But it also can produce some of the most rewarding experiences of your professional career.
Refactoring
While XP says you should write the simplest code that could possibly work, it also says you'll learn along the way. Refactoring lets you incorporate that learning into your code without breaking the tests. It keeps your code clean. That means it will survive longer, introduce fewer problems for future developers, and guide them in the right direction.
The point of refactoring is to improve the design of existing code. The team should have automated suites of programmer tests and customer tests. The former should pass all the time; the latter should pass to the degree that customers require. Run the tests. Refactor the code. Rerun the tests. Did any programmer tests break? Fix them. Did any of the "required" customer tests break? Fix them. If you can't fix it, back out the refactoring you just tried. Without tests, changing your code would be a guessing game. With refactoring, if the code breaks, your tests will tell you.
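The refactor-then-rerun loop can be sketched in a few lines. The Invoice class and its tax rate below are invented for illustration; the point is that the test stays the same while the shape of the code improves underneath it.

```java
// A sketch of refactoring under a safety net of tests. The test pins down
// behavior; the implementation is free to change shape as long as the
// test keeps passing.

public class RefactoringSketch {

    static void testTotalIncludesTax() {
        Invoice invoice = new Invoice(100.0);
        if (Math.abs(invoice.total() - 108.0) > 1e-9)
            throw new AssertionError("total should be subtotal plus 8% tax");
    }

    public static void main(String[] args) {
        testTotalIncludesTax();
        System.out.println("Tests still pass after the refactoring.");
    }
}

class Invoice {
    private static final double TAX_RATE = 0.08;
    private final double subtotal;

    Invoice(double subtotal) { this.subtotal = subtotal; }

    // Before refactoring, total() inlined the tax arithmetic:
    //     return subtotal + subtotal * 0.08;
    // Extracting tax() states the intent more clearly, and the test
    // above never stopped passing along the way.
    double total() { return subtotal + tax(); }

    private double tax() { return subtotal * TAX_RATE; }
}
```

If the extraction had broken anything, the test would have said so immediately, and backing out the change would be cheap.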
Collective ownership (Maps to collective ownership)
Saying that everybody owns the code isn't the same as saying nobody owns it. When nobody owns the code, people can wreak havoc anywhere they want and bear no responsibility. XP says, "You break it, you fix it." The team should have programmer tests that must run before and after each integration. If you break something, it's your responsibility to fix it, no matter where it is in the code. This requires extreme discipline.
I've noticed that some team members simply can't handle this practice. They can't stand the idea of somebody messing with "their" code. If you want to be part of an XP team, you must share ownership of all the code. If you don't, the team will eventually hit a wall. Some parts of the code will be off limits, which makes the system difficult to change -- a scenario we're trying to avoid. If somebody behaves this way, the team needs to point this out and encourage him to change. If he refuses, strongly recommend he change or leave. If he still refuses, tell him to move on. Don't compromise on this.
Continuous integration
Continuous integration does not mean you integrate every second, but it does mean you should integrate early and often. Daily is not enough. In an eight-hour day, I start to get that icky feeling if I haven't integrated at least once every couple of hours, if not more. This sounds scary to many people. Once again, the tests should drive that fear away. The tests tell me whether an integration "works" and that it's safe to release to the rest of the team.
YAGNI, or, "You aren't going to need it" (Maps to simple design)
A common objection to XP is that it doesn't do enough design up front. What this objection is really saying is that people aren't comfortable with the idea of emergent design: letting the design of the system emerge, rather than trying to nail it all down at the start. Establish a direction, note some milestones and landmarks, then start the trip. You can adjust along the way as you learn. People who don't like this approach mistakenly suggest it is undisciplined. In reality, it is the only realistic approach to development in today's economy.
Typical heavyweight methods say you should do all but the most trivial design tasks up front. This is like taking a static picture of the horizon, staying still, and trying to draw a perfect map of how to get there. This is a fine approach if requirements are constant. If you know at the beginning what the system needs to do and how it needs to do it, you can do most (if not all) your design up front. In reality, however, most developers are exploring problems that haven't been adequately solved before or are implementing solutions that haven't been tried before. These days, most systems are being designed for businesses competing in markets that change constantly, not once every ten years.
In environments like this, requirements are effectively changing all the time and stability of requirements is a pipe dream. This means big, up-front design is inappropriate. You simply cannot know at the beginning where you will end up in the end, or even where you want to end up. The best you can do is establish a general direction and make small, frequent adjustments along the way in order to hit a moving target. XP requires simplicity at every step so that you can change direction as often as necessary. We always try to use the simplest design that could possibly work at any point, changing it as we go to reflect emerging reality.
According to Kent Beck, the simplest design that could possibly work is the design that:

- Runs all the tests
- Contains no duplicated code
- States the programmers' intent clearly
- Contains the fewest possible classes and methods
Requiring a simple design doesn't imply that all designs will be small or trivial. They just have to be as simple as possible and still work. Don't include "extra" features that aren't being used. We call such things YAGNI, which stands for "you aren't going to need it." In other words, don't design for things you might need. In the next iteration, you may find out you didn't need it after all. Instead, write a test, then write just enough code to get that test to pass. That's usually all the design you need to do. Applying the YAGNI principle doesn't mean you can't ever think ahead, but it does mean you shouldn't look too far.
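Here is a sketch of YAGNI in code. The UserDirectory class and its test are hypothetical; the test asks only for a lookup by id, so the class gets exactly that and nothing more: no caching, no generic query API, no persistence hooks "for later."

```java
// YAGNI in miniature: the test defines the only capability anyone has
// asked for, and the class implements exactly that capability.

import java.util.HashMap;
import java.util.Map;

public class YagniSketch {

    static void testFindsUserById() {
        UserDirectory directory = new UserDirectory();
        directory.add(1, "Ada");
        if (!"Ada".equals(directory.nameFor(1)))
            throw new AssertionError("nameFor should return the stored name");
        if (directory.nameFor(99) != null)
            throw new AssertionError("unknown ids should return null");
    }

    public static void main(String[] args) {
        testFindsUserById();
        System.out.println("Just enough code; the test passes.");
    }
}

// Just enough code to satisfy the test above. If a future iteration
// needs more (say, lookup by name), a new test will drive that code
// into being -- and not before.
class UserDirectory {
    private final Map<Integer, String> users = new HashMap<>();

    void add(int id, String name) { users.put(id, name); }

    String nameFor(int id) { return users.get(id); }
}
```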
As I said before, I would be nervous about refactoring without tests. Likewise, without tests to tell me if I broke something, I would find collective ownership insane and continuous integration impossible. As for YAGNI, I don't think it's possible to keep your design simple without having tests drive you to that simplicity. The urge to build in hooks for the future is just too strong to resist. Interestingly, notice that most of these practices depend on having tests in place. (Perhaps that practice is particularly important.)
While some of the practices can stand alone, most cannot. If you have tests, perhaps you could adopt more of the other practices in a piecemeal fashion, but why would you want to? Using the practices together and letting them reinforce each other can produce startlingly successful teams and even better software. If you don't think you'll like a particular practice, resist the "Green Eggs and Ham" defense and try it for a while. You might be surprised. If you have tried a particular practice and still don't like it, you can try to do the rest without that one, but I believe your speed and your results will suffer.
My recommendations from "XP distilled" remain the same: The whole is greater than the sum of the parts. You can implement single practices or a small subset, and get great benefits over not using any. But you only get the maximum benefit if you implement all of them, because their power comes from their interaction. Do XP by the book at first as a benchmark. Once you understand how the practices interact, you will have the knowledge you need to adapt them to your context. Remember that "doing XP" is not the goal; it is a means to an end. The goal is to develop superior software quickly. If your process mutates in a manner that disqualifies you from saying you are doing XP, yet your results are blowing the doors off your competitors, you have succeeded.