Demystifying Extreme Programming: "XP distilled" revisited, Part 2
How programmer practices fit into the picture

Level: Introductory

Roy W. Miller (rmiller@rolemodelsoft.com)
Software Developer, RoleModel Software, Inc.
10 September 2002

In this month's installment of Demystifying Extreme Programming, Roy Miller explains what it means to be a programmer on an XP team and how the six programmer practices specifically fit into the picture. While all 19 XP practices are important, the programmer practices are vital for a team making software.

Programmer practices: Creating the system
The XP programmer practices describe what programmers need to do to create the system our team wants. To many people, these practices are XP. Of course, XP is about more than writing code, but without the code the entire exercise is a waste of time. Just like last month, some of these practices weren't in the original XP list of 12. After each name, I'll parenthetically note whether the practice is new, unchanged, or maps to an original name. Note that these names are in flux, but the principles probably aren't. (You may have noticed that one of the original practices for programmers, coding standard, is no longer on the list. The reason is that this practice would be redundant in the new list. A coding standard emerges as the programmers write code. Having it as a separate practice is unnecessary.)

Before we get into the programmer practices, let me clarify something from last month. The revised/renamed/new practices I'm talking about in this column represent my musings on two unpublished articles written by Kent Beck. There haven't been any official changes to XP, as far as I know, and these practices and ideas haven't appeared in any formal way from the XP leadership group or in any similar venue. Maybe there will be no official changes. There certainly isn't any kind of formal revision effort underway. I think the most I can say at the moment is that what I've written might be XP at some point in the future. It'll probably look different. I hope my ideas contribute to the discussion. Read at your own risk.

Test-first development (Maps to testing)
Whenever programmers change their code, they need to know if what they just did is an improvement or if they just broke something. More importantly, they need to maintain the discipline necessary to create the smallest amount of code necessary to get the job done, not some bloated behemoth that has a bunch of hooks somebody might need later. This is what test-first development is all about.

There are two kinds of testing in XP: unit testing and acceptance testing. These are the typical names, but I don't like them. They're too much jargon for me. I prefer the names suggested in "What is Extreme Programming?" (see Resources) by Ron Jeffries: customer tests and programmer tests.

The six programmer practices

  • Test-first development
  • Pair programming
  • Refactoring
  • Collective ownership
  • Continuous integration
  • YAGNI

These names get to the heart of the reason behind the two kinds of tests. Programmers write the programmer tests as they write code. Customers write customer tests after defining stories. Programmer tests tell developers whether the code works as they intended at any point in time. Customer tests tell the team whether the system does what users want it to do. I'll talk about programmer tests here, and cover customer tests next month.

Assuming the team is using an object-oriented language like the Java language, developers write programmer tests for every method that could possibly break (just the public interface most of the time), before they write the code for that method. Then they write just enough code to get the test to pass. People sometimes find this a little weird, but the point is simple. Writing tests first gives you:

  • The most complete set of tests possible, which increases your confidence in the code
  • The simplest code that could possibly work, which makes it easier to refactor later
  • A clear vision of the intent of the code, which makes it easier to understand and refactor later

A developer cannot check code into the source code repository until all the programmer tests pass. Programmer tests give developers confidence that their code works. They leave a trail for other developers to understand the original developer's intent (I've rarely seen better code documentation). Programmer tests also give developers courage to refactor the code, because a test failure tells the developer immediately if something's broken. Programmer tests should be automated and give a clear pass or fail result. xUnit frameworks (see Resources) do all this and more, so most XP teams I know of use them. There is an xUnit for almost every language imaginable. Just replace "x" with the language or tool of your choice (for instance, JUnit for the Java language, VBUnit for Microsoft Visual Basic, CppUnit for C++, and so on).

Don't miss the rest of the "XP distilled revisited" series

Part 1: "Cutting through the hype of XP" (August 2002)

Part 3: "Customer and management practices" (October 2002)

As a programmer, I am frequently amazed by the difference between coding with programmer tests and coding without them. I find myself taking extremely small steps when I write code. In fact, I don't have to debug very much when I'm taking small enough steps, because the source of a problem becomes obvious: it's got to be that line of code I just wrote. The phenomenal thing to me is that writing a test first quite often drives me to create the simplest code possible.

I'm sure you have written code you thought you would need later, but did you need it? Perhaps, but if you didn't, did you remove it? Probably not. So it just sits around not serving any useful purpose. What if your test told you exactly how much code to write? What if you only wrote enough code to get the test to pass, and no more? You would probably have less code, and all of it would be used. That's the kind of code I want to write. When you write your test first, the test drives the code, and the result looks remarkably different. In my experience, test-driven code is much simpler, and I'm always in the market for greater simplicity. Kent Beck is writing a book called Test-Driven Development (see Resources), which details this practice. I recommend it. In fact, I prefer the name test-driven development for this practice.
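To make this concrete, here is a minimal, hypothetical sketch of test-first development in the Java language using a JUnit-style programmer test. The Money class and its API are invented for illustration; the point is that the test gets written first, and the class contains only enough code to make it pass.

    import junit.framework.TestCase;

    // Step 1: write the programmer test first. Money is a hypothetical
    // class that does not exist yet; this test defines what it must do.
    public class MoneyTest extends TestCase {
        public void testAddition() {
            Money five = new Money(5, "USD");
            Money seven = new Money(7, "USD");
            assertEquals(new Money(12, "USD"), five.add(seven));
        }
    }

    // Step 2: write just enough code to make testAddition() pass -- no
    // hooks, no extra features, nothing the test doesn't demand.
    class Money {
        private final int amount;
        private final String currency;

        Money(int amount, String currency) {
            this.amount = amount;
            this.currency = currency;
        }

        Money add(Money other) {
            return new Money(this.amount + other.amount, this.currency);
        }

        public boolean equals(Object other) {
            if (!(other instanceof Money)) {
                return false;
            }
            Money that = (Money) other;
            return this.amount == that.amount
                && this.currency.equals(that.currency);
        }

        public int hashCode() {
            return 31 * amount + currency.hashCode();
        }
    }

When the test passes, you stop. Any feature Money might need later waits until a test demands it.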

Pair programming (Unchanged)
In XP, pairs of developers write all production code. Most developers have never experienced actually writing code with another person, and it can feel a little strange at first. Pair programming typically means two developers share a single computer. One person keys in the code (the "driver") and the other helps him find his way (the "navigator" or "partner"). If the driver gets stuck or frustrated, or if the navigator has a good idea he can't describe too well without typing, the current driver can give up control and be the navigator for a while. Each person should switch roles frequently. Once you get used to these roles, switching back and forth happens rather freely.

This approach may sound inefficient. I've always liked Martin Fowler's response: "When people say that pair programming reduces productivity, I answer, 'That would be true if the most time-consuming part of programming was typing.'" In fact, pair programming, or just "pairing" for short, provides many benefits, including:

  • All design decisions involve at least two brains.
  • At least two people are familiar with every part of the system.
  • There is less chance of both people neglecting tests or other tasks.
  • Changing pairs spreads knowledge throughout the team.
  • Code is always being reviewed by at least one person.

Empirical research indicates that code reviews increase code quality, but I hate doing them. I also believe they tend to be very difficult to do correctly, and they aren't nearly as effective as pair programming. What if every line of production code were reviewed by somebody who was intimately familiar with what the code was supposed to be doing? Pair programming gives you precisely that. Nothing keeps you honest like a witness.

Research discussed in The Costs and Benefits of Pair Programming by Alistair Cockburn and Laurie Williams (see Resources) also shows that programming in pairs is actually more efficient than programming alone. This is a bit counterintuitive. Most managers (and I've been one) will see two developers doing the work of one and stop there. That's not thinking beyond the end of your nose. It's too simplistic. It's also false.

As for risk, think for a minute about why projects fail. One of the big reasons is extreme dependence on individual heroes. If your project's hero is killed in a freak farming accident, your project might be toast. The essence of pairing is to spread knowledge around. Pairs should switch around every so often. If pairs get too sticky, they get stuck.

Despite the good numbers and the strong arguments in favor of pairing, however, most developers hate this idea. Perhaps it's an issue of pride. Deep down, I believe most developers want to be the hero. Pairing makes that next to impossible. As I wrote in Extreme Programming Applied: Playing to Win (see Resources):

Anybody can sit next to someone else and throw in two cents every so often. Many people can be completely engaged and try to make the result better. But the ones who really understand pairing know that it's about loving another person.

As I said in the book, I'm talking about the kind of love that shows itself through actions, not the romantic kind. When you love another person, you try to see the best in him and help him grow. You'll be patient, kind, not jealous, generous, humble, polite. This kind of intense interpersonal relationship, even if it's just for a few hours, is not something most people are interested in, and developers are people. This is a radical idea and it isn't for everyone. But it also can produce some of the most rewarding experiences of your professional career.

Refactoring (Unchanged)
Refactoring is the technique of improving code without changing its functionality. An XP team refactors mercilessly. Developers have two key opportunities to refactor: before and after implementing a feature. Before, they try to determine whether changing existing code would make the new feature easier to implement. After, they look at the code they just wrote to see if there is any way to simplify it. For example, if they see an opportunity for abstraction, they refactor to remove duplicate code from concrete implementations. An important thing to note here is that you should either design and write new code, or refactor existing code. Don't try to do both at once.

While XP says you should write the simplest code that could possibly work, it also says you'll learn along the way. Refactoring lets you incorporate that learning into your code without breaking the tests. It keeps your code clean. That means it will survive longer, introduce fewer problems for future developers, and guide them in the right direction.

The point of refactoring is to improve the design of existing code. The team should have automated suites of programmer tests and customer tests. The former should pass all the time; the latter should pass to the degree that customers require. Run the tests. Refactor the code. Rerun the tests. Did any programmer tests break? Fix them. Did any of the "required" customer tests break? Fix them. If you can't fix a break, back out the refactoring you just tried. Without tests, changing your code would be a guessing game. With tests in place, a refactoring that breaks the code announces itself immediately.
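Here is a small, hypothetical sketch of the kind of after-the-feature refactoring an XP pair might do. The Invoice class is invented for illustration; the "before" and "after" versions behave identically, so the programmer tests pass on both sides of the change.

    // Before: the tax calculation is duplicated in two methods.
    class Invoice {
        double subtotal;

        double totalWithStateTax() {
            return subtotal + subtotal * 0.05;
        }

        double totalWithCityTax() {
            return subtotal + subtotal * 0.02;
        }
    }

    // After: the duplication is pulled into one private method. The
    // behavior is identical, so the programmer tests still pass.
    class InvoiceRefactored {
        double subtotal;

        double totalWithStateTax() {
            return totalWithTax(0.05);
        }

        double totalWithCityTax() {
            return totalWithTax(0.02);
        }

        private double totalWithTax(double rate) {
            return subtotal + subtotal * rate;
        }
    }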

Collective ownership (Maps to collective code ownership)
Any person on the team should have the authority to make changes to any of the code to improve it. Everybody owns all the code, meaning everybody is responsible for it. This technique allows people to make necessary changes to a piece of code without going through the bottleneck of an individual code owner. The fact that everybody is responsible counteracts the chaos that ensues from no code ownership.

Saying that everybody owns the code isn't the same as saying nobody owns it. When nobody owns the code, people can wreak havoc anywhere they want and bear no responsibility. XP says, "You break it, you fix it." The team should have programmer tests that must run before and after each integration. If you break something, it's your responsibility to fix it, no matter where it is in the code. This requires extreme discipline.

I've noticed that some team members simply can't handle this practice. They can't stand the idea of somebody messing with "their" code. If you want to be part of an XP team, you must share ownership of all the code. If you don't, the team will eventually hit a wall. Some parts of the code will be off limits, which makes the system difficult to change -- a scenario we're trying to avoid. If somebody behaves this way, the team needs to point this out and encourage him to change. If he refuses, strongly recommend he change or leave. If he still refuses, tell him to move on. Don't compromise on this.

Continuous integration (Unchanged)
Integrate new changes into the system multiple times each day, and rebuild the entire thing automatically. Don't release changes until all the tests run. If a test fails, you have two choices: fix it and integrate, or don't integrate. A single failing test that you can't fix means you should not integrate.

Continuous integration does not mean you integrate every second, but it does mean you should integrate early and often. Daily is not enough. In an eight-hour day, I start to get that icky feeling if I haven't integrated at least once every couple of hours, if not more. This sounds scary to many people. Once again, the tests should drive that fear away. The tests tell me whether an integration "works" and that it's safe to release to the rest of the team.
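One possible way to make "all the tests run" a mechanical check rather than a judgment call is to keep a single suite that bundles every programmer test, which was common practice on JUnit-era teams. The sketch below is hypothetical and assumes the MoneyTest class from the earlier test-first example; a developer runs the suite before and after every integration, and a single failure means the code doesn't get released to the team.

    import junit.framework.Test;
    import junit.framework.TestSuite;
    import junit.textui.TestRunner;

    // A hypothetical "run everything" suite used as the pre-integration check.
    public class AllTests {
        public static Test suite() {
            TestSuite suite = new TestSuite("All programmer tests");
            suite.addTest(new TestSuite(MoneyTest.class)); // hypothetical test from the earlier sketch
            // Every other programmer test class gets added here as well.
            return suite;
        }

        // Run from the command line before integrating; any failure or
        // error in the summary means "don't integrate yet."
        public static void main(String[] args) {
            TestRunner.run(suite());
        }
    }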

YAGNI, or, "You aren't going to need it" (Maps to simple design)
In "XP distilled," Chris Collins and I wrote that "XP's detractors claim that the process neglects design." They still do, and they're still wrong, but I think XP fans sometimes gloss over the objection too quickly.

What this objection is really saying is that people aren't comfortable with the idea of emergent design: letting the design of the system emerge, rather than trying to nail it all down at the start. Establish a direction, note some milestones and landmarks, then start the trip. You can adjust along the way as you learn. People who don't like this approach mistakenly suggest it is undisciplined. In reality, it is the only realistic approach to development in today's economy.

Typical heavyweight methods say you should do all but the most trivial design tasks up front. This is like taking a static picture of the horizon, staying still, and trying to draw a perfect map of how to get there. This is a fine approach if requirements are constant. If you know at the beginning what the system needs to do and how it needs to do it, you can do most (if not all) your design up front. In reality, however, most developers are exploring problems that haven't been adequately solved before or are implementing solutions that haven't been tried before. These days, most systems are being designed for businesses competing in markets that change constantly, not once every ten years.

In environments like this, requirements are effectively changing all the time and stability of requirements is a pipe dream. This means big, up-front design is inappropriate. You simply cannot know at the beginning where you will end up, or even where you want to end up. The best you can do is establish a general direction and make small, frequent adjustments along the way in order to hit a moving target. XP requires simplicity at every step so that you can change direction as often as necessary. We always try to use the simplest design that could possibly work at any point, changing it as we go to reflect emerging reality.

According to Kent Beck, the simplest design that could possibly work is the design that:

  • Runs all the tests
  • Contains no duplicate code
  • Clearly states the programmers' intent for all code
  • Contains the fewest possible classes and methods

Requiring a simple design doesn't imply that all designs will be small or trivial. They just have to be as simple as possible and still work. Don't include "extra" features that aren't being used. We call such things YAGNI, which stands for "you aren't going to need it." In other words, don't design for things you might need. In the next iteration, you may find out you didn't need it after all. Instead, write a test, then write just enough code to get that test to pass. That's usually all the design you need to do. Applying the YAGNI principle doesn't mean you can't ever think ahead, but it does mean you shouldn't look too far.
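A tiny, hypothetical contrast may help show what YAGNI looks like in code. Both classes below are invented for illustration: the first builds in a strategy interface and a configuration hook "just in case," while the second contains only enough code to pass today's test.

    // Speculative version: an abstraction and a hook nobody has asked for yet.
    interface GreetingStrategy {
        String greet(String name);
    }

    class ConfigurableGreeter {
        private GreetingStrategy strategy;  // hook for a future that may never come

        void setStrategy(GreetingStrategy strategy) {
            this.strategy = strategy;
        }

        String greet(String name) {
            return strategy.greet(name);
        }
    }

    // YAGNI version: the simplest code that passes the current test.
    class Greeter {
        String greet(String name) {
            return "Hello, " + name;
        }
    }

If a second greeting style ever shows up in a story, refactoring toward the abstraction at that point is cheap, because the code is small and fully covered by tests.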

Practice synergy
All of the XP practices work together and are mutually reinforcing. This is especially true for the programmer practices. Some programmer practices, however, can stand alone (for instance, test-driven development and pair programming), while others (refactoring, collective ownership, continuous integration, and YAGNI) are more "dependent" practices, requiring other practices to be in place before they can work.

As I said before, I would be nervous about refactoring without tests. Likewise, without tests to tell me if I broke something, I would find collective ownership insane and continuous integration impossible. As for YAGNI, I don't think it's possible to keep your design simple without having tests drive you to that simplicity. The urge to build in hooks for the future is just too strong to resist. Interestingly, notice that most of these dependent practices are dependent on having tests in place. (Perhaps that practice is particularly important.)

While some of the practices can stand alone, most cannot. If you have tests, perhaps you could adopt more of the other practices in a piecemeal fashion, but why would you want to? Using the practices together and letting them reinforce each other can produce startlingly successful teams and even better software. If you don't think you'll like a particular practice, resist the "Green Eggs and Ham" defense and try it for a while. You might be surprised. If you have tried a particular practice and still don't like it, you can try to do the rest without that one, but I believe your speed and your results will suffer.

My recommendations from "XP distilled" remain the same: The whole is greater than the sum of the parts. You can implement single practices or a small subset, and get great benefits over not using any. But you only get the maximum benefit if you implement all of them, because their power comes from their interaction. Do XP by the book at first as a benchmark. Once you understand how the practices interact, you will have the knowledge you need to adapt them to your context. Remember that "doing XP" is not the goal; it is a means to an end. The goal is to develop superior software quickly. If your process mutates in a manner that disqualifies you from saying you are doing XP, yet your results are blowing the doors off your competitors, you have succeeded.

Next month
This month's column gave you an overview of the programmer practices of XP. Next month I'll cover practices for customers and management, who also are part of our one team. If you are a programmer and haven't been thinking of these people as being part of your team, you've been developing software incorrectly. If you are a business person responsible for providing business direction for a project or a manager trying to keep a project on track, and you haven't considered yourself as a part of one team with the programmers, you have been part of the reason most projects fail. You need to get in the game in a radically different way. Next month I'll tell you how.

Resources

About the author
Roy W. Miller has been a software developer and technology consultant for almost ten years, first with Andersen Consulting (now Accenture) and currently with RoleModel Software, Inc. in North Carolina. He has used heavyweight methods and agile ones, including XP. He is co-author of the Addison-Wesley XP Series book, Extreme Programming Applied: Playing to Win, and is currently writing a book about complexity, emergence, and software development. Contact Roy at rmiller@rolemodelsoft.com.

