Thursday, January 16, 2014

Why can't anyone talk about frameworks?

In writing for WHOSE, I was dismayed at the near-total lack of useful information on automation frameworks and how to develop them.  I could find some material on the frameworks with names (data-driven, model-driven, and keyword-driven), but almost nothing on how to actually design a framework.  I get that few people can claim to have written 5-10 frameworks like I have, but why are we stuck with only these three types of frameworks?

Let me define my terms a little (I feel like a word of the week might show up sometime soon for this).  An architecture is a concept: the boxes you draw on a whiteboard connected by lines, the UML diagram, the concepts locked in someone's head.  Architecture never exists outside of design and isn't tied to anything concrete, like a particular tool.  Frameworks, on the other hand, have real stuff behind them.  They have code; they do things.  They still aren't the tests, but they are the pieces that assist the test and are called by the test.  A test-results datastore is framework, a file-reading utility is framework, but the test along with its steps is not part of the framework.

Now let me talk about a few framework ideas I have had over the past 10 years.  Some of them are old and some are relatively recent.  I am pulling from some of my old presentations, but each idea has been useful in at least one framework of mine, if not more.

Magic-Words


I'm sure I'm not the first one to come to this realization, but I have found no record of other automation engineers speaking of it before me.  I have heard the term DSL (Domain Specific Language), which I think is generally too tied to keyword-driven testing, but it is a close and reasonable label.  The concept is to use the compiler and auto-complete to assist in writing your automation.  Some people like keyword-driven frameworks, but in my experience they give you neither compile-time checking nor help via auto-complete.  So I write code using a few magic words.  Example: Test.Steps.*, UI.Page.*, DBTest.Data, etc.  These few words are purely organizational and allow a new user to 'discover' the functionality of the automation.  They also force your automation to separate the testing from the framework.  A simple example:

@Test()
public void aTestOfGoogleSearch() {
    Test.Browser.OpenBrowser("www.google.com");
    Test.Steps.GoogleHome.Search("test");
    Test.Steps.GoogleSearch.VerifySearch("test");
}

// Example of how Test might work in C#; in Java it would have to be a method.
public class TestBase { // All tests inherit this
    private TestFramework test = new TestFramework();
    public TestFramework Test { get { return test; } }
}

Clearly the steps are defined somewhere else, while the test is local to what you can see.  The "Test.*" provides access to all the functionality and is the key to discoverability.
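To make that concrete, here is a minimal sketch, in Java, of what might sit behind those magic words.  Every class and method name below is hypothetical; the point is only the shape of the organizational layer, not any particular framework's API.

public class TestFramework {
    // Field names are deliberately capitalized so the test reads as Test.Browser.*, Test.Steps.*
    public final BrowserActions Browser = new BrowserActions();
    public final StepLibrary Steps = new StepLibrary();
}

class BrowserActions {
    public void OpenBrowser(String url) { /* start the driver and navigate */ }
}

class StepLibrary {
    public final GoogleHomeSteps GoogleHome = new GoogleHomeSteps();
    public final GoogleSearchSteps GoogleSearch = new GoogleSearchSteps();
}

class GoogleHomeSteps {
    public void Search(String term) { /* drive the search box through page objects */ }
}

class GoogleSearchSteps {
    public void VerifySearch(String term) { /* assert against the results page */ }
}

The test only ever touches the top-level words; everything underneath them can be reorganized without breaking the tests.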

Reflection-Oriented Data Generation


I have spoken of reflections a lot, and I think reflections are a wonderful tool for solving data-generation style problems.  Use annotations/attributes to tell each piece of data how to generate itself and what expectations come with it (success, failure with exception x, etc.), filter the values you allow to generate, then pick a value and test with it.  I have a talk later this year where I will go in depth on the subject, and I hope to have a solid code example to show.  I will certainly post that when I have it, but for now I will hold off.

...

Okay, fine, I'll give you a little preview of what it would look like (using Java):

public class Address {

    @FieldData(classes=NameGenerator.class)
    private String name;
    @FieldData(classes=StateGenerator.class)
    private String state;
    //...

}
import java.util.ArrayList;
import java.util.List;

public class NameGenerator {

    public List<Data> Generate() {
        // Data is assumed to take the generated value plus any number of TestDetails flags.
        List<Data> d = new ArrayList<Data>();
        d.add(new Data("Joe", TestDetails.Positive));
        d.add(new Data(RandomString.Unicode(10), TestDetails.Unicode, TestDetails.Negative)); // Assume we don't support Unicode, shame on us.
        // TODO More test data to be added
        return d;
    }

}
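To show the reflection half of the idea, here is a hedged sketch of how a framework might walk those annotations and collect the generated data.  The FieldData declaration is my guess at its shape, and DataWalker is a made-up name; Data, TestDetails, and the generators are assumed from the preview above.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface FieldData {
    Class<?>[] classes();
}

public class DataWalker {

    // Collect every generated value for every @FieldData field on the given class.
    public static List<Data> generateFor(Class<?> type) throws Exception {
        List<Data> all = new ArrayList<Data>();
        for (Field field : type.getDeclaredFields()) {
            FieldData meta = field.getAnnotation(FieldData.class);
            if (meta == null) {
                continue; // not a data-generating field
            }
            for (Class<?> generatorClass : meta.classes()) {
                Object generator = generatorClass.newInstance();
                // Assumes each generator exposes a public no-arg Generate() returning List<Data>.
                @SuppressWarnings("unchecked")
                List<Data> values = (List<Data>) generatorClass.getMethod("Generate").invoke(generator);
                all.addAll(values);
            }
        }
        return all;
    }
}

From there the framework can filter by TestDetails (say, only Positive values for a happy-path run), pick a value, and feed it to the test.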

Details


Why is it that we as engineers who love the details fail to talk about them?  I get that we have time limits and I don't want to write a book for every blog post, but rarely do I see anyone outside of James McCaffrey and sometimes Doug Hoffman talk about the details.  Even if you don't have a framework, or a huge set of code, why can't you talk about your minor innovations?  I come up with new and awesome ideas only once in a while, but I come up with lots of little innovations all the time.

Let me give one example and maybe that will get your brain thinking.  Maybe you'll write a little blog on the idea and even link to it in the comments.  I once helped write a framework piece with my awesome co-author, Jeremy Reeder, to figure out the most likely reason a test would fail.  How?

Well, we took all the attributes we knew about a test, gathered mostly via reflection, and put them into a big bag: all the words used in the test name, all the parameters passed in, the failures hit in the test, etc.  We would look at all the failing tests and see which attributes they had in common.  Then we looked at the passing tests to see which pieces of evidence could 'disprove' the likelihood of a cause.

For example, say 10 tests failed, all 10 involving a Brazilian page; 7 of those touched checkout and 5 of those ordered an item.  We would first suspect the Brazilian page, since every failing test involved it and it is the most common attribute.  However, if we also had passing tests involving the Brazilian page, that cause becomes less likely, so we would then check whether checkout had any passing tests.  If it had none, we would say there was a good chance that checkout was broken and notify manual testers to investigate that part of the system first.  It worked really well and got us to a lot of bugs quickly.

I admit I am skipping some of the details in this example.  We did, for instance, consider variables in concert, so Brazilian tests that involved checkout might be weighed together rather than just as separate variables.  But I hope this is enough that, if you wanted to, you could build your own solution.
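For anyone who wants a starting point, here is a rough sketch of the ranking idea in Java.  It is not the code Jeremy and I wrote, and it ignores the variables-in-concert refinement; attributes are just strings pulled from test names and parameters, and any attribute that also appears in a passing test is treated as disproved.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FailureCauseRanker {

    // Each entry in failing/passing is the bag of attributes for one test run.
    public static List<String> likelyCauses(List<Set<String>> failing, List<Set<String>> passing) {
        // Count how often each attribute shows up among the failures.
        final Map<String, Integer> failureCounts = new HashMap<String, Integer>();
        for (Set<String> attributes : failing) {
            for (String attribute : attributes) {
                Integer count = failureCounts.get(attribute);
                failureCounts.put(attribute, count == null ? 1 : count + 1);
            }
        }

        // Any attribute seen in a passing test is weaker evidence, so drop it.
        Set<String> disproved = new HashSet<String>();
        for (Set<String> attributes : passing) {
            disproved.addAll(attributes);
        }

        // Rank the surviving attributes, most common among failures first.
        List<String> ranked = new ArrayList<String>(failureCounts.keySet());
        ranked.removeAll(disproved);
        Collections.sort(ranked, new Comparator<String>() {
            public int compare(String a, String b) {
                return failureCounts.get(b) - failureCounts.get(a);
            }
        });
        return ranked;
    }
}

Run against the example above, 'Brazilian' wins on raw counts, but a single passing Brazilian test knocks it out of the list and 'checkout' rises to the top.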

Now it's your turn.  Talk about your framework triumphs.  Blog about them and, if you want, put a link in the comments.
