Friday, September 20, 2013

Devil's Advocate: Where Does Context Driven Testing Fail?

TL;DR: The Context Driven Testing community is fractured, and it is unclear how to implement the activity of testing.

Lack of Clarity

I am still debating the title of this article, and will likely continue to do so even after it is published.  The problem isn't Context Driven Testing itself; the problem is the implementations.  For a technical analogy, CDT is like an interface: something that defines a general contract of how to do things, but doesn't put any meat on the bones of how to implement the philosophy.  The trouble is that in order to learn how to implement all the various pieces of CDT, you must go searching all over the Internet trying to find all the data.  When it comes down to it, is that a failing of the Context Driven Testing philosophy, or of the Context Driven Testing community?  I don't really know what to blame, and as someone who identifies as part of the QA community I feel a need to point it out.  If only we had a sort of JIRA system.
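The interface analogy can be made literal.  Here is a minimal sketch in Python; the class and method names are entirely my own invention, purely to illustrate the shape of the complaint, not anything the CDT founders have written:

```python
from abc import ABC, abstractmethod

class ContextDrivenTesting(ABC):
    """CDT as an 'interface': it names the obligations of a tester
    but deliberately leaves every method body to the practitioner."""

    @abstractmethod
    def analyze_context(self, project):
        """Figure out what matters for this particular project."""

    @abstractmethod
    def choose_practices(self, context):
        """Pick testing practices that fit the analyzed context."""

# Each concrete 'implementation' (SBTM, tours, RST...) must fill in
# the bodies itself; the interface supplies no default behavior.
class SessionBasedTesting(ContextDrivenTesting):
    def analyze_context(self, project):
        return {"mission": project.get("mission", "unknown")}

    def choose_practices(self, context):
        return ["time-boxed sessions", "charters", "debriefs"]
```

You cannot instantiate the interface on its own; trying `ContextDrivenTesting()` raises a `TypeError`, which is rather the point: the philosophy alone gives you nothing executable.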

Don't get me wrong, I do think we have a lot of data to offer; I just think it is not laid out in a simple pathway.  There is no college-like course structure or even a choose-your-own-adventure system.  You might not believe me, though, so let's start at the beginning.  Context driven testing does say a few things, but it could be summed up in two words: "It depends."  It clearly labels itself an approach:
Context-driven testing is an approach, not a technique.
With that in mind, it becomes all too clear that CDT is not meant as a specific set of methods or patterns but rather as an idea: you're smart, you figure it out within your context.  It is a little like going to a counselor, saying "I'm angry all the time, what do I do?" and having them only ask why you are angry, without offering any suggestions.  It might help, but it sure seems a roundabout method.

No Patterns?

I don't disagree with the idea that things depend on context, but I do sometimes wonder if there aren't some context-dependent patterns.  That is to say, I do wonder if there is a sort of mapping of context-to-effective-practices or if literally every situation requires different analysis.  In either case, the current answer context driven testing gives is to consider the situation and create tests based upon that.

This leaves us with a sort of mob-rule attempt at QA where you can do any activity and call it testing, as long as the product solves a problem.  Now I've peeled away my first layer of the onion, and I expect someone to immediately argue that my claim is untrue.  In fact, there are many, many sources of data, from James Bach to Cem Kaner to Michael Bolton, ad nauseam.  And you would be right: these are all CDT advocates, some of whom founded the ideas.

I can respect that idea, but now we are applying different people's ideas to the problem space using rough analogies, or examples built by inserting their context and how they reacted.  That is fundamentally a scattershot approach, with variable quality between different bloggers and different blog posts, not to mention the fuzzy business of trying to apply someone else's context to your own situation.  If there isn't even a mapping of contexts to effective practices, how can you ever be expected to know what is a good method of testing?  Worse yet, even if these posts can be used as a map from context to ideas, there is no central repository of knowledge.  You have to search for these ideas and hope your picks turn out well.


But wait, what of the various frameworks given?  These frameworks were developed around the idea of CDT, like Session Based Test Management, Thread Based Test Management, and Tours, which is meant for the more detailed work.  I'm not saying those don't exist, but I don't think there is a very good way of knowing the right choice.  There isn't a particularly strong heuristic map, which means that someone coming into the industry just has to keep trying these until they find out what works.

The tours are perhaps the most specific of the set, and perhaps the exception, as I have seen some built-in recommendations in their names and descriptions.  The Money Tour is meant to check the features the customers bought the tool for.  However, once you think about it a little, that seems like the obviously useful area to test, so why worry about most of the other tours?  Cem Kaner talks of how researching which tours are useful is a bad option from a context driven perspective.  He asserts that you need to know your developers in order to know what sorts of errors might matter, along with liability.  You might also need to understand your objectives in order to choose which tour to do.

So let's consider the sorts of questions Kaner would want us to consider.  What does the company want?  Well, in many cases, I think the company looks to QA to say what it wants, or it says what it wants but gets it wrong.  In reality, QA often pushes the agenda, particularly when the company is young in QA.  From my experience, young companies also don't have much money, so it might be reasonable to expect they would hire inexperienced QA, making for a terrible combination.  What would the customer want?  Often these young companies are creating their product in the hope of getting customers, so who knows?  Should I still use the Money Tour, based on what we expect our customers to care about?  Even if I did assume I understood my context, how do I evaluate the success of a given tour when I don't have much idea of what matters?

Lack of Specific Direction

Ultimately, I feel that Context Driven's layers of abstraction make it impossible to learn in any quick fashion without meandering the Internet, reading lots of different opinions and trying lots of ideas until maybe, just maybe, you find a method that works.  Next thing you know, you change jobs with your new experiences and you now have "Best Practices" which you might claim come from the Context Driven Testing philosophy.

It's like someone saying you should use pairwise testing, because that always helps pare down the parameter combinations.  Yet which parameters you choose matters more than the strict number of tests you run; even random sampling (which, based upon my reading of Bach's paper, takes only roughly 2x the tests of pairwise to reach equal coverage) gets you close.  Sure, context matters for the choice of the parameters, but can there not be some 'Good Practices' we can push, based upon some reasonably high-level contexts?  Like, oh, I don't know, say in an application that deals with customer purchasing, customer purchasing patterns might be a decent area to choose for your parameters?  But wait, what if your context....  *le sigh*
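To make the "pare down" claim concrete, here is a minimal greedy pairwise sketch in Python.  The purchasing-flow parameters are hypothetical examples of my own, and real pairwise tools use smarter algorithms than this naive greedy loop:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy sketch: repeatedly pick the full combination that covers
    the most not-yet-covered pairs of parameter values."""
    names = list(params)
    values = [params[n] for n in names]
    # Every value pair across two different parameters must appear
    # together in at least one selected test.
    uncovered = {((i, va), (j, vb))
                 for i, j in combinations(range(len(names)), 2)
                 for va in values[i] for vb in values[j]}
    candidates = list(product(*values))
    suite = []
    while uncovered:
        def gain(row):
            return sum(1 for p in combinations(enumerate(row), 2)
                       if p in uncovered)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # guard against a stuck loop; shouldn't happen here
        suite.append(best)
        uncovered -= set(combinations(enumerate(best), 2))
    return suite

# Hypothetical purchasing-flow parameters (mine, purely illustrative).
params = {
    "payment":  ["card", "paypal", "invoice"],
    "currency": ["USD", "EUR"],
    "account":  ["guest", "registered"],
}
suite = pairwise_suite(params)  # covers all pairs in far fewer than
                                # the 12 exhaustive combinations
```

The point of the exercise: the mechanics of pairing are easy to automate, but nothing in the algorithm tells you whether "payment" and "account" were the parameters worth modeling in the first place.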

Doctrine or Methodology?

As one final aside, I wonder if this might be not unlike a religious war over whether you want your bed made or not.  Some people really want everything neat and tidy, with QA having a very specific role and a specific set of guidelines that are neat and orderly.  On the other side you have the people who say that our role is so impossibly different from organization to organization that we can just barely define the activity at a high level, but can't define what activities should be done.  Or, if you prefer, you can think of it as Linux, with its diverse set of distributions, compared to the more mono-culture of Windows or Apple.  Ultimately, I wonder if there is something in the middle: a sort of useful set of norms in certain settings that can be broken, but only when you have good reason.  As an example of what I'm interested in, let me quote from Kaner's tour post:
Rather than looking for a few “best” tours, I think it would be more interesting to develop a more diverse collection of tours that we can do, with more insight into what each can teach us.

NOTE: As an aside, I had my co-author, Isaac, read my post, and he noted I left out Rapid Software Testing, which was unintentional.  I simply haven't studied that form of CDT, which itself illustrates a big part of the post: that there is no central repository of knowledge for CDT testing.


  1. I expect some controversy with this post, and that is completely welcome. Most of my points are not meant to attack the community but rather to create conversation and hopefully make us better as a group and as individuals, including myself.

  2. I like that you are articulate and thoughtful. But, wow, you have a very shallow understanding of CDT.

    CDT is not summed up by "it depends." That's an empty pointer. CDT is summed up by "solve the problem" or in longer form "gain the skills to solve the problems of testing yourself, rather than expecting other people to tell you how to test."

    The CDT community develops specific ways of developing testing skill. The Rapid Software Testing methodology is one specific approach to that. You can also check out the BBST classes.

    Come do a Skype coaching session with me (I don't charge for that), or do an RTI Online class, a BBST class, or attend a live RST class. Come to a CAST conference or Let's Test. Or geez, read Lessons Learned in Software Testing. You'll learn fast enough, if you want to.

    -- James Bach

    1. I really do appreciate your feedback, and I have actually taken the Foundations BBST class. I do agree that my argument contains some shallow points, but I think that is part of the point (thus the title, Devil's Advocate)! I tried to tackle the problems I see in CDT as some of my less experienced colleagues might see them. That is to say, some of my co-workers and ex-co-workers are trying to learn, but they are looking for a structured approach rather than the jumble that is CDT. In fact, I think that the community as a whole still struggles with the problem of choice ( ).

      I do agree it is just as much about a scientific method, but when I look at it, it isn't until the 7th item that you finally get around to using a descriptor similar to how you yourself define it (your second description; the first one is just as empty as "Just do it" or "Work smarter, not harder"). Perhaps it is the framing that you don't state:

      The problem:
      1. The value of any practice depends on its context.
      2. There are good practices in context, but there are no best practices.
      3. People, working together, are the most important part of any project’s context.
      4. Projects unfold over time in ways that are often not predictable.

      Our hypothesis:
      5. The product is a solution. If the problem isn’t solved, the product doesn’t work.

      We Believe/Our Solution:
      6. Good software testing is a challenging intellectual process.
      7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

      You can do this by:

      I would like to consider myself somewhat senior in my own area (automation) and perhaps an experienced tester rather than an expert, as I'm a jack of all trades. However, when you have so many choices and no clear direction, that works for some, but not for others. And when we do have specialists, like myself, who can't study everything all the time, it can be difficult to know what is of value. That being said, I may try to take you up on your offer some time in the future. Thanks!

      - JCD

    2. Doh, I didn't preview, and it appears it stripped out my greater-than and less-than signs. It should have read:

      You can do this by:

      {Reading/Watching/Participating these links/articles/whatever}

  3. Mind you, my point of view comes from an “in the trenches” rather than a “plan, manage, implement” vantage point.

    I’ve always somewhat construed the idea of CDT as more of a philosophy, with the various resulting branches being forms of implementation. A mindset that one must incorporate to perfect the art and/or skills needed to be a valuable asset within the community. From a management perspective, I’d assume this would be particularly helpful in ascertaining a personalized implementation of the methodology (for example: CDT resulting in RST). However, with an “in the trenches” mindset, this method merely opens the mind to a different way of thinking. We all have a specific structure in which we test; some starting out may need more of a guideline than a philosophy. Others may soak in as much information as possible before forming their own assertions or guidelines into a personalized testing approach.

    Where CDT may fail in the sense of a structured testing method, it may prove to be an asset in the overall philosophy or methodology of an experienced tester.

    Side note: I find it kind of funny that one of the minds behind CDT would respond with the written equivalent of a firm backhand rather than a proper (informing) rebuttal. Wouldn’t it be more constructive to participate in an intellectual debate over the matter, to inspire the critical thinking and questioning that is pertinent to the mind of a valued tester, rather than solicit personal pet projects?