Thursday, August 31, 2006

 

Methodology Musical Chairs

Here is another idea while we are writing the book "Beyond Agile - What is Wrong with Agile Development and How to Fix It to Scale Massively and Succeed Consistently."

In the 1970s, we had several competing methodologies: IBM's SDM/70, Arthur Andersen's Method/1 and many more. It seemed like these methodologies had many differentiators but, looking back with 20/20 hindsight, they were more the same than different. They were all "waterfall" methodologies, meaning that they advocated that we completely finish each phase - requirements, analysis, design, coding, testing, etc. - before proceeding to the next phase.

This model has an interesting history. The "waterfall" model is commonly traced to a 1970 paper by Winston Royce, one of the first prominent methodologists in the computer field. Royce described this model, but then immediately noted its flaws and suggested improvements, including feedback loops from testing back to design. However, the industry seized on Royce's initial model of waterfall and built methodology after methodology on Royce's admittedly flawed approach. We are told that Royce was not pleased with this, but his voice was lost in the noise.

Very soon after, Fred Brooks wrote "The Mythical Man-Month," which proposed a very different way of thinking about software development. Reading the book today, a reader could be forgiven for thinking it was written by the latest, hippest XP/Agile/Scrum evangelist, when, in fact, it was first published over thirty years ago. Tight feedback loops, iterative/incremental lifecycles, and deep involvement of business users were all concepts Brooks proposed.

In the 1980s, detailed processes like James Martin's Information Engineering and Yourdon/DeMarco's Structured Analysis and Design rose to prominence. These were also waterfall processes, and they continued the focus on rigorous, extensive piles of documentation throughout the lifecycle, making any possibility of change more and more difficult as a project progressed through its phases.

In the 1990s, there was still plenty of momentum for waterfall lifecycles: IBM introduced a new waterfall lifecycle called AD/Cycle, and Andersen Consulting even, somewhat inexplicably, introduced a new waterfall process in the late 1990s called "Business Intelligence" (not related to data warehousing). The 1990s were also when the CASE tool phenomenon - Computer-Aided Software Engineering - hit our industry. Six-figure tools like the Application Development Workbench (ADW) and the Information Engineering Facility (IEF) were purchased by many IT departments in hopes of reducing the time-to-market of software applications. Instead, these costly tools only served to lengthen the development lifecycle, handcuff developers in their implementation choices, and frustrate business users and their ever-changing demands. CASE tools, also called I-CASE (Integrated CASE), may have been our biggest mistake so far in this industry (although we did learn some good things from our CASE experiences, so nothing is a complete loss).

In the late 1990s, the most noteworthy event in the methodology world was Rational Software's introduction of the Rational Unified Process (RUP). It was heralded as a process that might unite the industry, in the same way the Unified Modeling Language (UML) had done for the notation of diagrams and models. The RUP included all the latest ideas in software development: an iterative/incremental lifecycle, use cases, automated testing, traceability, object-oriented design and construction, component-based development, architecture-centric thinking - you name a buzzword, RUP had it.

But RUP was no sooner out of the chute than a series of books appeared pointing to the "new" great white way - eXtreme Programming (XP). XP could hardly be called a methodology; instead, it was a collection of "best practices" taken from a failed project at Chrysler called the C3 Project. The proponents of XP were already well known in the industry, having led the practice of "Class-Responsibility-Collaboration" (CRC) card sessions, which remains a successful technique to this day.

Again, XP hardly had a chance to gain ground before a high-profile meeting of methodologists at a ski resort in Utah brought us "Agile Development." Agile was an extension of XP, not a dismissal of it. The Agile banner allowed multiple people to proclaim their adherence to a common set of baseline principles while varying their own processes slightly to maintain the autonomy of their individual brands. Today, under Agile, we have XP, Scrum, Crystal, Lean Software Development, Feature-Driven Development and the Dynamic Systems Development Method (among others). Even the proponents of these processes would admit that the approaches have much more in common than they have differentiators. That can largely be attributed to that first meeting at the ski resort, where the industry experts agreed to be cooperative rather than political about managing the competition between ideas.

Why has our industry faced such tremendous change in our short history? Every five to eight years, we seem to completely overhaul our view of software development. We jump from one train-of-thought to another, always hoping to find something that will help, and (so far, at least) always being disappointed.

Most notably, all this change has not led us to converge on a single, best answer, but, instead we've been seesawing from waterfall to iterative, from lots of documentation to none, from involving the business users to pushing them away, from trying to automate our processes to doing things manually.

One client I was consulting with last year was moving steadily toward an iterative/incremental, use case driven lifecycle. Suddenly, with the installation of a new CIO, they've jumped back into the waterfall camp and have forgotten about use cases as well.

Why do we play these games of "Methodology Musical Chairs?"

Should computer methodologies resemble hairstyles? Substitute an afro for Method/1, big hair for SA/SD, and a buzz cut for XP - don't we have methodology "fashions" that closely mirror the hairstyles of days gone by? Is that what we want? Isn't there more to software development than jumping from fad to fad?

Let's examine a little more closely what these "fashions" are based on. It's fairly safe to say that each new trend in methodology is based on someone's experiences. A highly intelligent person looks at what is and isn't working on their own projects and begins to experiment. If they find something that works, maybe they write a book about it.

That is certainly how our book "Use Cases: Requirements in Context" was written. Eamonn Guiney and I noticed that requirements phases were going very poorly on certain projects, but that the approaches using use cases seemed to work better. When we noticed this trend, we began paying more and more attention to the "best practices" of requirements on our teams, and the book resulted from those observations.

But are best practices the right way to develop methodologies? See our post on best practices here.

Perhaps it's time to come up with a new way of coming up with methodologies.

Tuesday, August 29, 2006

 

Aristotle and the Definition of Work

Aristotle, having sought knowledge at the feet of Plato, was also an icon of big thinking. Having already established species categorizations in biology, among many other accomplishments, he set his mind on a new topic - work.

How does work get done? How do our own thoughts get translated into work getting accomplished?

Aristotle found a sculptor that he respected, and decided to observe him doing his work. Aristotle put the aspects of the work that he saw being accomplished into four different categories:

- the effort of the sculptor in doing the work
- the plan or methodology the sculptor was following, shaped by his years of experience and his long apprenticeship when he was younger
- the materials going into the work, like the stone block, the tools, the brushes
- the benefactor of the work, the person who had commissioned the sculptor and without whom this entire project could not happen

Aristotle named these aspects of the work "causes." We must take the meaning of this word in context, because it is not the same as the "cause and effect" term we use today. In Aristotle's thinking, these causes were more like aspects or forces at work in this situation. To be completely accurate, you could call them "entailments."

- Efficient Cause - the effort of the sculptor
- Formal Cause - the plan or methodology
- Material Cause - the raw materials and tools
- Final Cause - the benefactor


This example of the sculptor is very useful to examine how work gets done. Let's try applying it to the creation of software.

For a software project being done in-house, let's say, what are the correlations to each Aristotelian "cause"?

Efficient Cause - this is the effort of the team doing the work: the programmers, designers, database administrators, requirements analysts, project managers, testers and technical architects.

Formal Cause - the methodology the team is following. However, it's much more than just whether we're using Agile, Unified Process, waterfall, etc. It's everything about how we work and plan. It turns out to be a multi-layered set of instructions that come largely, not from an Agile textbook, but from our own experiences of what works and what doesn't. The plan we follow and "how we work" depends on who is on the team.

Material Cause - Ah, this is a little tricky. What are the raw materials of software development? Yes, the development boxes and IDEs are our tools, but what is the equivalent of our "stone block"? The material cause of our software is actually our own thoughts. Software comes from the minds of the analysts, designers, programmers and testers - and we are interpreting what comes from the minds of the end users, the businesspeople who will use our application.

Final Cause - This is also not as simple as it seems. Sure, our end users are the final cause. But what about the executive stakeholders, the ones who pay the bills? And sometimes the people who own the data for an application are not the ones who will use it (as in an Executive Information System). Then who is the final cause? A project team needs a clear final cause: one group of businesspeople charged by the company with making the decisions for the application. It might be a steering committee or something similar. They are the final cause.
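The four slots above can be made concrete with a small data structure. This is purely my own illustration - the class and field names are hypothetical, not anything from the book:

```python
from dataclasses import dataclass

@dataclass
class FourCauses:
    """Aristotle's four causes mapped onto an in-house software project.
    Illustrative sketch only; all names here are invented for the example."""
    efficient: list[str]   # the team doing the work
    formal: str            # the methodology/plan being followed
    material: list[str]    # the "stone block": thoughts, requirements, tools
    final: str             # the one group empowered to decide for the application

project = FourCauses(
    efficient=["programmers", "designers", "testers", "requirements analysts"],
    formal="iterative/incremental, revised as the team learns",
    material=["user interviews", "requirements", "IDEs", "development boxes"],
    final="steering committee",
)
print(project.final)
```

Note that `final` is deliberately a single value, not a list - the point of the paragraph above is that a project needs exactly one clear final cause.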

Let's go back to the sculptor for a minute.

This gets really interesting when we think of the best practices concept in terms of the sculptor.

Imagine you have a master sculptor and a novice sculptor doing their work side by side.

You watch them both, and they seem to be doing the work exactly the same way. They put the chisel on the rock, chip a little away, then move the chisel, chip again, then step back, look at it, move in again, chip again.

But, when they are finished, you can definitely tell the difference. The master sculptor's work looks beautiful and lifelike. The amateur's looks ragged and lopsided.

So, how do you teach the amateur the "best practices" of the master?

This is where it gets goofy. I just imagine some Accenture guy coming into the room and examining EXACTLY how the master holds the chisel, measuring the number of times the master steps back from the statue to look at the big picture, putting little pit marks into a second chisel so it matches the master's chisel.

Then the Accenture guy goes to the amateur and teaches the "best practices" to him. Voila! The result is --- the same thing. The amateur's sculpture is not helped by holding the chisel in a different way, or by stepping back more often, or with the additional pit marks.

And the Accenture guy gets fired - as usual.

So what is the difference between the amateur and the master?

Here it is. (We need to now revisit the four causes.)

The master does one important thing that the amateur doesn't do yet. Remember, as the sculptor, he is the efficient cause. His tools and the stone are the material cause. His plan/methodology is the formal cause. And the final cause is his customer.

As the master (efficient) is working with the stone (material), he is constantly revising his plan (formal) based on what he sees happening as he moves toward what the benefactor (final) wants.

Bit by bit, he changes what he is doing based on what he sees is happening. It might be shaving off a bit here, or being more gentle there. He adjusts his plan constantly, or, you could say, iteratively and incrementally.

It's one thing to say that the sculptor is sculpting iteratively and incrementally. Of course he is; so is the amateur.

But the master is also revising his methodology iteratively and incrementally too. The amateur is not. The amateur starts out with his plan and runs with it. Once he gets to the end, he realizes his mistakes, adjusts for them and tries to make a better sculpture next time, but this sculpture remains a mess. The amateur might see what is happening much later and try to "fix it" with some late changes, but it won't work.

Once the amateur realizes that he needs to be changing his game plan constantly, revising what he thinks the next set of steps are, he's moved to a much higher level. And it has NOTHING to do with "best practices," the chisel grip, the big picture step-backs, or anything else. He will naturally do those things right once he can take the leap of iterative/incremental plan changing to heart.
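The distinction can be sketched as a toy numeric loop - entirely my own illustration, not from the sculptor story. Both "sculptors" take the same number of steps, but the amateur commits to a fixed plan up front while the master re-plans after observing each result:

```python
def amateur(target, steps=10):
    # Fixed plan decided up front: equal-sized chips, never revised,
    # and (like any up-front plan) slightly wrong.
    shape = 0.0
    chip = target / steps * 0.8
    for _ in range(steps):
        shape += chip          # executes the original plan blindly
    return shape

def master(target, steps=10):
    # Same actions, same number of steps, but the plan (the chip size)
    # is revised every step based on what the work looks like so far.
    shape = 0.0
    for remaining in range(steps, 0, -1):
        chip = (target - shape) / remaining   # re-plan from current state
        shape += chip
    return shape

target = 100.0
print(abs(target - amateur(target)))   # ends well off the mark
print(abs(target - master(target)))    # lands on the target
```

The master's advantage here has nothing to do with how the chisel is held - it comes entirely from feeding each observation back into the plan.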

 

If Software Testing Were Mechanical...

How many times have I heard from team members "I don't want to be on the testing team! It's just mechanical work."

I've always tried to convince them that they were wrong. That testing was as creative a process as software development itself. Most of the time, I could convince people that it was true, but not always.

In our discussions about our book "Beyond Agile," Hong came up with the perfect point on this.

"If testing were mechanical," Hong said yesterday, "Microsoft would have already won the battle against spyware, viruses and spam."

What is the definition of a computer virus? It's a program that "found a hole" in your software and exploited it.

If testing were mechanical, we would have an ironclad path to sealing up all the "holes" in software and there would be no virus problem. No spyware problem.

Even spam is just e-mail that evades the software filters that are built into our e-mail clients, exploiting one hole after another.

If software testing were mechanical, a fully-tested application would have no susceptibility to viruses, spyware or spam.

And it would have no bugs. Have you ever used an application that has NO BUGS? I certainly haven't.

No, testing is a highly creative, heroic job. Underappreciated in certain companies? Sure. But the job itself is every bit as challenging, as rewarding, as creative and as interesting as designing or creating code.
