Thursday, June 19, 2014

What is the Highest Level of Skill in Automation?

Thanks to Robert Sabourin for generating this topic.  Rob asked me, roughly, 'What in your opinion is the highest level of skill in automation?'  He asked me this in the airport after WHOSE had ended, while we waited for our planes.  It gave me pause to consider the skills I have learned, and it helped generate this post.

Let me make clear a few possible issues and assumptions regarding what the highest level of skill in automation is.  First of all, I think there is an assumption of a pure hierarchy, which may not exist.  That is to say, there might not be a 'top' skill at all, or the top skill might vary by context.  So I am mostly speaking from a personal level, about my own set of automation problems I have faced.  When I answered Rob's question in person, I neglected to add that stipulation.  The other possible concern is that the answer I give is overloaded, so I will have to describe the details after I give the short answer.  Without making you wait, here is my rough answer: reflections.

What are reflections?

In speaking of reflections, you might assume I am speaking of the technology, and for good reason.  I have written about it many times in this blog.  However, that is just a technical trick, albeit a useful one.  I am not talking about that trick, even if the comp-science term 'reflection' is part of the answer.  In speaking of reflections, I mean something much broader.
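For contrast, here is what the technical trick looks like, a minimal sketch in Python (the class and method names are invented for illustration): code inspecting and invoking itself at runtime.

```python
# The comp-sci "reflection" trick: code discovering and calling its own
# members at runtime, rather than naming them explicitly.
class Checks:
    def check_login(self):
        return "login ok"

    def check_logout(self):
        return "logout ok"

# Find and run every method whose name starts with "check_".
suite = Checks()
for name in dir(suite):
    if name.startswith("check_"):
        print(getattr(suite, name)())  # → login ok, then logout ok
```

Handy, but purely mechanical; the broader sense of reflections below is about what the automation does with what it finds.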

The famous "Thinker", sitting on his rock just pondering, is much closer to what I had in mind.  But you might say, "Wait, isn't that human thinking?  Isn't that critical thinking or introspection?"  Yes, yes it is.  What I mean by reflections is the art of making a computer think.  While a computer's intelligence is not exactly human intelligence, the closer we come to bridging that vast gulf, the closer we are to generating better automation.

Most people might argue that this requires someone with in-depth knowledge of artificial intelligence, or at least a degree in computer science, or someone with a development-oriented background.  Perhaps that is the logical conclusion we will ultimately see in the automation field, but I don't think either in-depth knowledge of development or of AI is required for now.  Nor do I know that you need to go to that level to start understanding this concept.

Instead, I think you need to start thinking about your automation the way you think about writing tests.  In some ways this relates to test design.  Why can't the automation ask, "What am I missing?"  Why can't my automation tell me the most likely reason a failure occurred*?  Why can't the automation work around failures*?  Or, at the very least, ignore some failures so it isn't blocked by the first issue it runs into*?

* I've done some work around these, so don't say they are impossible.
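As a sketch of that last idea, here is one way automation can keep going past a failure and report everything at the end, a minimal "soft assertion" collector in Python (the class and check messages are invented for illustration, not the implementation I used):

```python
# A "soft assertion" sketch: checks record failures instead of raising
# immediately, so one bad check doesn't block the rest of the run.
class SoftChecks:
    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        # Record the failure and keep going rather than aborting the run.
        if not condition:
            self.failures.append(message)
        return condition

    def report(self):
        # Summarize everything that went wrong, after all checks have run.
        if not self.failures:
            return "all checks passed"
        return "%d failure(s): %s" % (len(self.failures), "; ".join(self.failures))

checks = SoftChecks()
checks.check(1 + 1 == 2, "arithmetic broken")
checks.check("Login" in "Welcome page", "missing Login button")  # fails, but we continue
checks.check(len("abc") == 3, "length check broken")
print(checks.report())  # → 1 failure(s): missing Login button
```

The point isn't the collector itself; it's that the run finishes and can then reflect on the whole set of failures, rather than stopping at the first one.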

Now that I have walked around the definition, let me define reflections in the context of this article.

Reflections:  Developing new ideas based upon what is already known.

An example

A good example of the need for reflections is the brilliant talk given by Vishal Chowdhary, in which he notes that in translations (and searches, etc.), you can't know what the correct answer is.  You have no oracle to determine if the results are correct.  Many words could be chosen for a translation, and it is hard to predict which ones are 'best'.  Since computer language translations are adaptive, you can't just write "Assert.Equals(translation, expectedWord)" with hardcoded values.  Since these values are dynamic, the best you can do is use a "degree of closeness".  You see, they couldn't predict how the translation service would behave, because it has dynamic data and the world changes quickly, including new words, proper titles, et cetera.

So how do you test with this?  Well, you can look at the rate of change between translations.  You can translate a sentence, translate it back, and record how close it was to the original sentence.  Then track how close it is over time, with different code and data changes.  You could take translation string lengths and see how they vary over time, noting when large deviations occur.  There are lots of methods to validate a translation, but most of them require the code to reflect on past results, known sentences, and the like.  The automation 'thinks' about its past, and on that basis judges the current results.
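A rough sketch of that round-trip idea in Python (the translated strings are invented for illustration; a real harness would call the translation service both ways):

```python
import difflib

def closeness(original, round_tripped):
    # Similarity ratio in [0, 1]: how close the back-translated sentence
    # is to the original sentence.
    return difflib.SequenceMatcher(
        None, original.lower(), round_tripped.lower()).ratio()

# Imagine the service translated English -> French -> English; the result
# below is made up.  The question isn't "is it equal?" but "how close is
# it, and is that closeness stable across code and data changes?"
history = []  # closeness score per run, tracked over time

original = "The quick brown fox jumps over the lazy dog"
round_tripped = "The fast brown fox jumps over the lazy dog"
score = closeness(original, round_tripped)
history.append(score)

# Flag large deviations from past runs rather than demanding equality.
baseline = 0.90  # hypothetical threshold learned from previous runs
print("closeness %.2f, below baseline: %s" % (score, score < baseline))
```

The `history` list is the crude beginning of reflection: each run judges itself against what the automation already knows about past runs.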

That is not to say all automation must be reflective.  For example, you could hard-code a sentence with "Bill Clinton" in it and check to make sure that it didn't in fact translate his name.  You could translate a number and check that it didn't change the value.  You might translate a web page and check something not related to the translation, such as layout.
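Those non-reflective, invariant-style checks might look like this sketch (the translated string is invented for illustration):

```python
import re

def proper_nouns_preserved(source, translated, names):
    # A proper name like "Bill Clinton" should survive translation untouched.
    return all(name in translated for name in names if name in source)

def numbers_preserved(source, translated):
    # The numeric values in the text should not change, whatever the language.
    return sorted(re.findall(r"\d+(?:\.\d+)?", source)) == \
           sorted(re.findall(r"\d+(?:\.\d+)?", translated))

source = "Bill Clinton gave a speech lasting 45 minutes."
translated = "Bill Clinton a fait un discours de 45 minutes."  # invented output

print(proper_nouns_preserved(source, translated, ["Bill Clinton"]))  # → True
print(numbers_preserved(source, translated))                         # → True
```

No history, no judgment; just fixed invariants. Useful, but a complement to the reflective checks above, not a replacement.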

Not just the code

In reading my blog, you might assume that because I specialize in automation, I think reflection is a code-oriented activity.  I do think that, but I also think it applies more broadly.  When I write a test, I should be reflecting on that activity.  That is to say, I should be thinking "Is that really the best design?", "Should I be copying and pasting?", "Should I really be automating this?", etc.  In always having part of my brain reflecting on the code, I too write better code.  Hopefully, between my writing better code and my code trying to do better testing using reflections, we do better testing overall.  This also applies to testing in general, with considerations like "That doesn't look like the rest of the UI." or "I don't recall that button being there in the last build."

I have only scratched the surface of this topic and applied it specifically to automation/testing, but I think it applies to life too.  For a broader look at this topic, I highly recommend Steve Yegge's blog post Gödel-Escher-Blog.  It will make you smarter.  Then, the next time you do some automation, reflect upon these ideas. :)  And if you are feeling really adventurous, please leave a comment with your reflections on this article.


  1. Interesting how you labeled it as "reflections"; I would have more colloquially said "learning from your mistakes". At least that is my interpretation.

    Test automation isn't something you just learn and go do, unlike what some people think. It is just like programming, or speaking a foreign language, where you can learn the syntax and verbiage, then put it together to create a statement and then run it (speak it). But it may not make sense; there isn't the true understanding of "is this really the correct way to state this?" or "did this make sense, was my pronunciation correct?". There may need to be inflections and nuances incorporated. Maybe some accentuation is needed.

    All of this comes from experience: trial and error, and learning from past mistakes to make corrections and improvements later on down the road.

    Also, test automation is really a broad subject. Typically when people talk about it, it is only really discussed as automated test execution of regression-type tests (at least in my experience over the last 24 years of doing this type of work). Now we have other forms of "test automation" involving different levels and types of test execution. We have Unit (code statement coverage and caller/called pairs), Integration (services, inter-system), API/Interfaces (below the GUI layer or between parts of the local system), UI/Business level (GUI, basic business & logic functionality) and Intra-system level (interactions between disparate systems), to name a few. Each of these has its own types and methods of testing (and thus test automation) approaches.

    It is a monumental task to learn and understand all of these. But there are other forms of automation of test tasks. We now have tools that aid in Test Design & Test Generation (test scenarios, test data), Test Analysis and Validation (yes, comparator tools are test tools too, as an example) and Test & Defect Management (even with Excel and its intrinsic capabilities you can automate actions of this task), to name a few.

    Anytime we use a tool to aid us (the tester) in the performance of our work, we are at some level using automation. Really, we are, because we are using some type of mechanical or non-human method to help us do our job. Some of them are just more readily apparent.

    Now getting back to your basic premise of "What is the highest level of skill in automation?" Well... there really isn't a 'top' level, because, as you already said, there are contextual circumstances that can come into play. There needs to be some type of a priori knowledge (learned/experienced), along with the rest, to give you the 'experience' to ask questions as you go along and say "is this right?" or "should I even do this?" or "what the hell, let's give it a shot".

    As part of this, people have their niche areas of expertise. Recognizing that is key. Having a high level of skill/expertise in one area is great, but recognizing what that is, along with knowing your weak spots, allows you to go and find people or information about those topics of weakness and learn about them.

    So to me, a high level of skill is recognizing what you do know and don't know, and then doing something about it. In automation that means you always try to keep learning, while also recognizing that no one knows it all.


    Jim Hazen

    1. I suppose I could have easily said "I stand on the shoulders of giants", but that would have lost some of the flavor I was looking for. I think learning from my mistakes is a subset of what I was getting at. I want to make myself and my automation smarter every day. I mostly kept it at an 'automation' level as a constraint, because that was how I interpreted the question, but I think developing myself really is included.

      My method for getting my automation smarter may be learning from mistakes, or it might be standing on the shoulders of another giant, be it the Bach brothers or René Descartes. I think that being able to consider what you don't know is also part of reflecting on your work and part of making automation smarter. Realizing your weaknesses and either compensating (e.g., don't do that, get help, etc.) or learning to get better at your weakness are part of reflecting.

      I think that the skill of reflection is a deep and wide one. Making the automation smarter requires that we be smarter, to make smarter automation. As Steve Yegge described it, it is a loop that looks like this:

      do {
        getSmarter(); // Reflect on what you did, read a book, etc.
        actSmarter(); // And hopefully do something important while acting smart
      } while (true);

      Thanks for the comment!

      - JCD