Tuesday, December 10, 2013

WHOSE up for a little skill?

A Brief Summary:

I went to WHOSE, a workshop with a mandate to create a list of testing skills, and am now back.  I want to briefly summarize my experiences.  The idea, generated by Matt Heusser, was to build a list of skills that anyone interested in a particular testing skill could learn from.  The list was to be put in a modern format and presented to the AST board.  I took part in it partly because I felt the CDT community had too little organized detail on how to implement the activity of testing.

WHOSE worked on this thing?

I was not present for the first few hours, so I missed out on giving a talk.  When I got there I met my fellow WHOSE'rs:

Jess Lancaster, Jon Hagar, JCD (me), Nick Stefanski, Pete Walen, Rob Sabourin, David Hoppe, Chris George, Alessandra Moreria, Justin Rohrman, Matt Heusser (facilitator), Simon Peter Schrijver (facilitator), Erik Davis (facilitator).

It was a blur of new faces and people whose writing I had read but whom I had never met.  Later, Doug Hoffman showed up.  I was made late because my plane was rerouted and never actually made it to the airport.

WHOSE line is it anyway? (Day 1)

This being my first conference, my initial reaction was to keep silent and just observe.  The group had about 200 cards on a table with a bunch of skills semi-bucketed (skills that 'felt' similar).  The group's definition of a skill was still unknown to me, in spite of the fact that I had researched and considered this problem for hours.  I had also looked into how to model the information and had considered the Dreyfus model and how it might be used.

Many of the blog posts I have written were in fact considerations of skills, such as my reflections post, written to help prepare me.  I had debated what a skill is with my fellow testers, and even created a presentation, and now I was standing facing what felt like 200 or so skills.  How do you organize them?  Beyond that, what questions do I ask and whom do I ask?  Sometimes, when a tester doesn't have a lot of answers and there is no one obvious to ask who has the time, you just poke around, and that is what I did.  I created a few skills I saw were missing and possibly a few that might not be considered skills, or at least not as written.  For example, I wrote out Reflections and Tool Development, both of which I thought were reasonable, and XML, which I thought was questionable.  For some reason, as a young tester I found XML to be scary because I didn't understand the formatting, and so XML seemed to belong; yet did it?  Eventually the task moved to grouping, which seemed to happen while I was still a little behind.  Clearly my impromptu skills were a bit lacking.

WHOSE cards are these?

I wanted to take on the technical piece, since I feel that is the part where I can provide the most feedback.  On the airplane I had written up a hierarchy of technical skills in the hope that the group could use it, though I feared it might be too technical, too 'computer science'.  When I mentioned this, no one seemed enthused about combining the two lists; I'm not sure whether that was for better or for worse.  Having Object-Oriented Programming (OOP) without going into the depth of typing, generics, etc. and how those are useful in testing, much less automation or tool smithing, seems incomplete.  Coding as the only technical skill involving programming is clearly too small.  Is OOP a good stopping point?  I would have chosen more depth, but I also know there is a limit to what 15 or so people can get done in a few days.

We started through the cards, skipping my list, and I acted as a sort of wiki gatekeeper (a role I didn't much like) while other people researched the definition of each skill as well as resources for it.  Some people seemed interested in very formal definitions and resources, while others liked informal 'working' definitions and web blogs.  I mean no criticism of any particular viewpoint, but I tended towards the informal side.  We ended with a show and tell of our work, which was interesting.  One group had a lovely format.  Another group had extensive research, and a third group had lots of definitions completed.  I noted that if we moved each definition onto its own page, no gatekeeper would be required.  We closed up feeling a little dazed by the amount of work left.

WHOSE on first? (Day 2)

Rob S. had emailed Matt H. overnight suggesting we change the format a little.  Why not make these more CDT in style?  Instead of very formal write-ups, we would move to a context-based set of skill definitions; that is to say, skills described through stories of how we used them.  While Matt generated a proof of concept around this, we observed the formatting and tool usage.  Once it was understood, we started writing up skills based on our interests.  We wrote and we wrote and we wrote.  The question of what skills belong where and what a skill is was pushed aside for a more formal editing process.  XML as a skill was removed, though the original list of 'skills' was saved in a deprecated area.

I wrote somewhere between 10 and 15 skills over the course of a day.  I know my skills as a writer were stretched that day.  I heard warnings about the flame wars and anger we might see from this venture.  I expect that; people in testing have a habit of finding bugs.  I still have lots of open questions on where this goes next.  I wonder how we will edit the skills.  I wonder a lot about how this process will be managed.  I wonder where this will be posted and how it will be viewed.  Those are still open questions.  Questions I hope to see resolved at some point.  After writing until our fingers bled, we finally went to dinner.  Much thanks to AST for bribing...er... treating us to dinner. :)

WHOSE going to finish this? (Day 3)

We as a group attempted to finish off by figuring out who would finish which skills.  I have a lot of skills still needing finishing.  I know others do too.  I signed up to help deal with the format-shifting question, so when this comes out it'll be readable in useful formats.  I appreciated Matt's openness in considering methodologies and talking through 'what is next'.  I may be slower to blog for a while as I work through my WHOSE backlog.  Truthfully, I was not as 'on top of it' the last day as I was the previous two days.  I think exhaustion had finally hit, so I'm glad it was just a half day.

WHOSE missing from this narrative?

This was a rather interesting experience.  I had never been to a workshop before.  I had never seen any 'LAWST-style' workshops, so I didn't have that to compare to.  I have worked with plenty of bright people before, but not some of the 'experts' of the world (even if they would reject that label...).  That is a little humbling.  Seeing Rob write is amazing.  Watching the speed with which Matt can break out an article is...well... something to be seen.  In fact, in the spirit of that, I have written this entire article straight through and will attempt to limit my editing.  Sorry, poor readers. :)

The group as a whole had some nice philosophical discussions, but no one got angry, and overall I think that helped make the content better.  Is the content useful?  I honestly don't know; I'm not an oracle, but I hope so.  Is the content done?  No, I hope this will be a living document and that others will get a chance to contribute to it and help it grow.  I hope they too can understand that their context and usage of a skill will be considered just as valuable as ours.

I would like to give special thanks to Matt, Erik and Simon for setting up this conference.  Also thanks to the AST and Doug Hoffman for feeding us.  Thanks to Hyland for hosting the conference.  For other perspectives, see the following blogs: here, here, here, and here.  One last piece I would be remiss to neglect to mention: at the airport afterwards I got to have a long chat with both Rob and Jon.  That was a great conversation which I really enjoyed.  I'm still considering some of the questions Rob posed to me.  Expect some future blog posts!

UPDATED 12/27: Added more blog post links.

Tuesday, December 3, 2013

Book Consideration: An Introduction to General Systems Thinking

I think I need to start this review with a little digression, which I think will be justified and is of importance.  I personally have a strong interest in the study of how we work, be that sociology, physiology, psychology, philosophy, law, biology, etc.  I have had less "interest" (i.e. time spent studying) in the just as interesting (i.e. fascinating) hard sciences, with the exception of computer science, in which I have invested some time.  That being said, I learned long ago that much of what applies in one subject can be applied to another.  I recall a story (urban legend?) of a man who told the dean of his college he could pass any written test.  He was given a test for dentistry even though he had studied philosophy (or something like that) and got a B on the test.

How does that have anything to do with General Systems Thinking (GST)?  Well, everything, in a way.  The idea of GST is that you can create heuristic rules around the concepts that apply to more than one system of thinking.  It could perhaps be summed up by xkcd.  One field is generally an "applied" version of another field.  Now if the book only talked about this, it would not have been a very impressive book.

Frankly, this is the hardest review I’ve had to write for a book in a long time.  I really would like to gush over it, to say it changed my life, because I could see it doing so.  On the other hand, it really didn’t teach me that much.  The concepts are hard work, as thinking at a high level of abstraction and with a great deal of rigour is not easy.  I genuinely enjoyed the book, and the author provided a great many multi-faceted examples to ensure the ideas were communicated well.  The questions at the end of each chapter were thought provoking, although sometimes they lacked enough context and expected a teacher to have the materials somewhere nearby.

I loved his willingness to note the problems with science and even with the field under study.  It makes the book feel much more like an honest discussion with a friend than some jerk trying to change my view by forcing his on me*.  I would like to take a quote of the author’s out of context, to note one of the places where he didn’t do this, but which I think was an innocent mistake:  "Try to cope with unfamiliar, complex phenomena, we try to…complete view… minimal view… independent view…."  The author gives an enumerated list of possible views a person might take in order to analyze something unfamiliar, yet he seems to be missing some possibilities.  For example, what about a diverse view, where you sample a complex data set at random in order to understand it?  I feel this is an honest mistake, as he makes an honest effort to build a persuasive case, including acknowledging the flaws.  There is also an ethereal quality some people have that causes one to trust them, and for me this author has it.

* To be fair, I’ve often been persuaded by said jerks for a short time, until I’ve had some time to clear my head and see just how manipulative their words are.

A few more side notes before I hit my last point.  It is really annoying that the laws he proposes aren’t collected anywhere in the book as a list.  Those laws as a list would have been useful to me, particularly with a little context.  Instead I have to keep referring back to find the exact definition of a given law.  I also found it funny how I kept reading "Brilligance" as "Brilliance", which is in a sense the author’s point (read the book and you'll get this)!  I have to say I found his "Cousins"/"Friends" argument (Pg 156) weak, as I can completely imagine grouping cousins together via relations on an individual basis.  That is to say, A has cousins B and C; B has cousins A and C; and C has cousins A and B.  Thus you can group those people together as cousins.  He never states why he feels that a person must be a cousin of themselves to make it valid.  Maybe he has a point, but it is unclear to me.

As anyone who talks with me often enough will know, I love quotes.  I find the small points of wisdom in them, sometimes with great value.  That being said, I loved this book’s use of quotes and felt it could in fact be highly quotable.  So for the rest of this review I’m going to try to cite quotations and novel ideas worth looking at:

  • "In short, we can learn about ourselves, which is really why all of us are playing this incredible game, call it poetry, beads, or, if you will, science" – Pg 143
  • "It will be objected…misrepresentations depending on over-simplifications… dilemma of the teacher: the teaching of facts and figures or the teaching of truth.  To convey a model, the teacher must reify and diagram and declare what cannot be seen at all.  The student… approximation of the truth, an approximation [t]he[y] will continue to revise all his [sic] life long. " - Pg 38, Karl Menninger
  • Pg 27, Robinson Jeffers, "The Answer" was amazing!
  • "If we want to learn anything, we mustn’t try to learn everything." - Pg 105, "The Lump Law"
    • (Boy am I guilty of this!)
  • "Things we see frequently are more frequent: 1. because there is some physical reason to favor certain states… . 2. because there is some mental reason… ." - Pg 100-101
  • "Laws should not depend on a particular choice of notation." – Pg 72
  • "…Grown-ups love numbers. When you tell them about a new friend, they never ask questions about what really matters. They never ask: 'What does his voice sound like?' 'What games does he like best?' 'Does he collect butterflies?' They ask: 'How old is he?' 'How many brothers does he have?' 'How much does he weigh?' 'How much money does his father make?' Only then do they think they know him." -Antoine de Saint Exupery, Pg 67
  • "Heuristic devices don’t tell you when to stop." – Pg 55
  • "A general law must have at least two specific applications… If you never say anything wrong, you never say anything." – Pg 42
  • "Method for detecting if something is inside or outside of a given line by counting the number of lines in two directions (up/left, down/right) and seeing if the count is even or odd" – Pg 145
  • While not specifically talked about, why is it that we can’t pass through glass (it is solid) but light can, yet light does not pass through everything, else we would never had shadows?
  • The author believes his system will fail to become popular, or will become diluted with new views, which will eventually reverse the creator's intent...  Then why do people create?  Why not just give up?  Does he only fear an evolution, or would a revolutionary new view be just as bad?  He claims people can only adapt (roughly) once in a given system before they become too attached to their system.  Why?  Can it be avoided, or would people who avoid it not have enough passion to get the first system in place?  Is this a case of people dying too soon to adapt again?
  • "With respect to a given transformation, there are those properties that are preserved by it and those that are not." – Pg 154
    • What of money to an item?  What of the happiness in having money?  Compare that to the happiness of spending it?  On others?  On yourself?  Are those things equal even though the property might remain?  What if the other converts items back to money (pays you back)?  Is the transformation of the first dollar equal to the last dollar in your pocket?
  • "A system is a collection of parts, no one of which can be changed." – Pg 162
  • "We cannot with certainty attribute observed constraint either to system or environment." – Pg 214
  • "The number of untested assumptions in science is staggering.  On any day we can open any one of dozens of newly arrived journals and find reports of "discoveries" that were made simply by relaxing a constraint…" – Pg 215
    • I find it interesting that the author assumes that the untested constraints are simply made by science because of ignorance of the constraint rather than by necessity of time or other possible variable(s).
  • "1. Why do I see what I see? 2. Why do things stay the same? 3. Why do things change?" – Systems Triumvirate, Pg 228
  • "...Science comprehends the thought of reality, but not reality itself; thought of life, not life itself.  That is its limit, its only really insuperable limit, because it is found on the very nature of thought, which is the only organ of science." – Bakunin, Pg 229

WHOSE - Part 1

I'm keeping this brief and unedited, as I don't have much time.  Like many people, I have been getting busier for the holidays, but for me part of it is going to WHOSE.  I will perhaps be delayed by a few weeks as I gather together material and prepare for some presentations for next year.  That being said, I hope to have some good material from WHOSE, and I will make sure to publish it as quickly as I can.  I do have a few older reviews I wrote for some classic books.  I may post them to keep you all entertained.  Be back soon.

Monday, November 25, 2013


  So recently my children's school underwent a renovation over the summer. They changed the carpet from institutional (a dreary mud-grey) to fun (black and white stripes with occasional blocks of bright colors). They painted all the cubby hangers the kids had (used to be varnished particle board, now it's bright elementary colors). They moved / changed 10 of the 12 teachers around to different grades. The front office got a complete remodel, including windows where walls used to be.
  My daughter came home the other day and said, "Dad, they made some big changes at school, and I'm not sure I like them."  So, practicing my Socratic style, I asked her why she didn't like the new changes.  "Well, they halved the salad bar, and put the silverware on the old half of the salad bar, so there aren't as many vegetables as there used to be.  Can I start taking lunch to school with more vegetables?"
  In her world, carpet, wall hangings, windows and teachers in grades didn't matter. What really mattered to her was how many vegetables were served at lunch.

  And that's what most people think about when they think of your software. They don't care how it was envisioned, backlogged, developed, tested or deployed. They care about how it directly affects them in their attempt to use your product. Think about it from there next time.

Monday, November 18, 2013

Word of the Week: Oracle

Oracle Test Definitions

Thanks to:
Isaac Howard and Wayne J. Earl who had a great deal to do with the editing and formulation of this article.
Like my previous words of the week on Heuristics and Algorithms, this is a complicated one.  According to the wiki,
An oracle is a mechanism used by software testers and software engineers for determining whether a test has passed or failed.
According to the wiki citation, this comes from BBST, which has some thoughts about what an Oracle is or isn't.  Specifically, it talks a lot about Oracle Heuristics, an interesting combination that Bach roughly describes as a way to get the right answer some of the time.  I don't have a problem with that, but then we go back into the BBST class and things get confusing.  Slide 92 of the 2010 BBST course says,
How can we know whether a program has passed or failed a test?  Oracles are heuristics 
Slide 94 says:
An oracle is a reference program. If you give the same inputs to the software under test and the oracle, you can tell whether the software under test passed by comparing its results to the oracle's. 
Later the slides go on to say that this definition is wrong.  They do so because of the claim that Oracles are heuristics.

Classic Problems With The Definitions

But how can this be so if Oracles know all?  They are the truth tellers.  Well, the problem in software is that Oracles are not absolute like in the stories.  They give you an answer, but the answer might be wrong.

For example, you might test Excel and compare it to a calculator.  You take the calculator and enter 2.1 * 1, getting back the value 2.  Perhaps the calculator is set up to produce integers, but when you compare it to Excel's output, you find that Excel gives back 2.1.  This appears to be a failure in Excel, but in reality it is a configuration issue.  The heuristic is in assuming your Oracle is right.  That might of course be a false assumption, or it might be right only in some circumstances.  Interestingly, one of the creators of BBST, Cem Kaner, has revised the definition of Oracle slightly in a posting about Oracles and automation,
...a software testing oracle is a tool that helps you decide whether the program passed your test.
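The calculator example can be sketched in code.  This is just an illustrative sketch, with names of my own invention; the "calculator" oracle is simulated with rounding to stand in for its integer configuration:

```java
// A minimal comparison-oracle sketch (hypothetical names, not from any framework).
// The "oracle" is just a second implementation whose answer we trust... heuristically.
public class ComparisonOracleSketch {

    // Our imperfect oracle: a "calculator" configured to return integers.
    static double calculatorMultiply(double a, double b) {
        return Math.rint(a * b); // rounds, like a calculator set to integer mode
    }

    // The system under test: full-precision multiplication, like Excel.
    static double sutMultiply(double a, double b) {
        return a * b;
    }

    // The comparison step: a mismatch suggests a failure in the SUT, but it
    // might just as easily be a misconfigured oracle, as in the 2.1 * 1 example.
    static boolean agrees(double a, double b) {
        return calculatorMultiply(a, b) == sutMultiply(a, b);
    }

    public static void main(String[] args) {
        System.out.println(agrees(2.0, 2.0)); // true: both say 4
        System.out.println(agrees(2.1, 1.0)); // false: oracle says 2, SUT says 2.1
    }
}
```

The mismatch on 2.1 * 1 is exactly the trap described above: the comparison reports a failure, but the defect is in the oracle's configuration, not in the SUT.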

Partial Oracle

While I don't want to get too far off track, I do want to note that Partial Oracles do exist, and from what I can tell, they are Oracles that tell you whether an answer is even reasonable.  For example, when you add two positive integers, you will always get a number larger than either of the separate addends.  1+1=2, and 1<2.  3+4=7, and 4<7.  In both cases the sum is larger than either of the two numbers, and this holds for ANY two positive integers.
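A minimal sketch of such a partial oracle, assuming the system under test is a simple addition routine (all names here are mine, purely illustrative):

```java
// A partial-oracle sketch: it cannot say the sum is *right*,
// only whether it is even plausible.
public class PartialOracleSketch {

    // The system under test: some addition implementation.
    static int add(int a, int b) {
        return a + b;
    }

    // Partial oracle: for two positive integers, the sum must exceed each addend.
    // A sum that passes this check can still be wrong (e.g. off by one).
    static boolean sumIsPlausible(int a, int b, int sum) {
        return sum > a && sum > b;
    }

    public static void main(String[] args) {
        System.out.println(sumIsPlausible(1, 1, add(1, 1))); // true: 2 > 1
        System.out.println(sumIsPlausible(3, 4, add(3, 4))); // true: 7 > 4
    }
}
```

Note what it does not catch: `add` could return 100 for 3+4 and the partial oracle would still be satisfied, which is exactly why it is only "partial".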

New Questions

Let me chart out the idea of an Oracle:
  1. Test: Tester runs test.  Example: Login with valid user.
    1. Result: Login takes 5 seconds and goes to an internal page.
  2. Request for Data: Make a request out to the Oracle; Was 5 seconds too long?
    1. Process: Oracle considers the answer.
    2. Data: Oracle generates the answer: Yes, 5 seconds is too long.
  3. Compare: Verify if Test's Result are acceptable.
  4. Output: Test's Results are not acceptable.
  5. React: Tester reacts to the result.  Maybe they fail the test.  Maybe they...
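The flow above could be sketched roughly as follows.  All of the names and the 3-second threshold are hypothetical; the point is that the "Oracle" here is nothing more than the piece that answers the question:

```java
// A sketch of the charted flow (all names and values are mine, purely illustrative).
public class OracleFlowSketch {

    // 2. Request for Data + Process + Data: the "oracle" answers the question.
    //    Here it is just a hard-coded expectation: anything over 3s is too long.
    static boolean loginTookTooLong(double seconds) {
        return seconds > 3.0;
    }

    public static void main(String[] args) {
        // 1. Test: run the test and observe the result.
        double observedLoginSeconds = 5.0; // Login took 5 seconds.

        // 3. Compare: check the test's result against the oracle's answer.
        boolean acceptable = !loginTookTooLong(observedLoginSeconds);

        // 4. Output / 5. React: the tester decides what to do with "fail".
        System.out.println(acceptable ? "pass" : "fail"); // prints "fail"
    }
}
```

Even in this toy, the picky questions below apply: is the oracle the method, the 3.0 constant, or the person who chose it?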
Now let's get picky.  Is the monitor (the display) that is beaming information to you an Oracle?  It shows the results you requested, and it is a tool.  While I noted the parts that are the Oracle, who is this Oracle?  If the Oracle is your mind, then what makes this different from testing?

My colleague Isaac noted that most definitions of a test include making observations and comparing them to expectations.  For example, Elisabeth Hendrickson said,
Testing is a process of gathering information by making observations and comparing them to expectations.
Does this make an Oracle simply part of a test?  Even to be able to come up with the question seems to indicate you might suspect an answer.  Is this too long?  Well, in asking that, one assumes you have a built in answer in your head.  Perhaps you are wrong, but that is part of what an Oracle can be.

Alternatively, maybe an Oracle is an external source, thus it has to be outside of the "Tester".  If that is the case, then can the Oracle be the System Under Test?  Imagine testing using two browsers at the same time doing the above test and the login time has a large difference between browsers.  Is the Oracle the browser or the SUT?

Let's take a different approach.  Let's say an automated process takes a screenshot 4 seconds after logging in.  You compare the screenshot to the result from the previous version, pixel by pixel.  If any pixels differ, the automation reports back a failure.  Where is the Oracle?  The request for data was to get a previous edition of the SUT in image form.  No processing occurred, so the Oracle can't be the processing, but perhaps the image itself is the Oracle.  Or is the Oracle the past edition of the site and its data?  Continuing on, the automation pulls out each pixel (is that the Oracle?) and then compares them.  But wait a minute... someone wrote the automation.  That someone thought to ask the question about comparing the images.  Are they the Oracle?  Since the author was the Tester (checker, whatever) the first time, capturing the first set of images, they saw what was captured and thus became a future Oracle.
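The screenshot comparison can be sketched with in-memory images standing in for the real screenshots.  This is my own illustrative code, not from any tool; the pixel loop is the part the automation performs, and where the "Oracle" lives remains the open question:

```java
import java.awt.image.BufferedImage;

// A sketch of the screenshot comparison described above, using small in-memory
// images instead of real screenshots.
public class PixelCompareSketch {

    static boolean sameImage(BufferedImage baseline, BufferedImage current) {
        if (baseline.getWidth() != current.getWidth()
                || baseline.getHeight() != current.getHeight()) {
            return false;
        }
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                if (baseline.getRGB(x, y) != current.getRGB(x, y)) {
                    return false; // report a failure on the first differing pixel
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage b = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        System.out.println(sameImage(a, b)); // true: both start out all-black
        b.setRGB(1, 1, 0xFFFFFF);            // change one pixel
        System.out.println(sameImage(a, b)); // false
    }
}
```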

Even if the Oracle is always an external source, is it always a tool?  Is a spec a tool?  Is another person (say a manager) a tool?  No, not that sort of tool.  Is a list of valid zip codes a tool or just data?

In case you are wondering, many of the examples given are real with only slight changes to simplify.

How Others Understand Oracles - BBST

Perhaps you feel this detailed "What is it?" questioning is pedantic or irrelevant.  Perhaps we all know what an Oracle is, and my attempt to define it is just making a mess.  To address that, I am going to do something a little different from what I have done in the past.  I'm going to open up my BBST experience a little, share how I answered the question Kaner wrote about, and then talk a little about it.  To be clear, this answer has the majority of its content pulled out, as it could be used as an exam question in the future:

Imagine writing a program that allows you to feed commands and data to Microsoft Excel and to Open Office Calc. You have this program complete, and it’s working. You have been asked to test a new version of Calc and told to automate all of your testing. What oracles would you use and what types of information would you expect to get from each?

1. Excel (Windows) – Verify that the most common version of Excel works like Calc does. Are the calculations the same? Do the formulas work the same? If you save a Calc file, can Excel open it? What about vice-versa?
a. I’m looking to see if they have similar calculations and similar functionality.

7. Stop watch – Does Calc perform the tasks at the same rough rate as Excel, Google docs? Is it a lot faster or a lot slower?
a. I’m looking to see if we are consistent with current user expectations of speed.
The responses I got in my peer reviews were interesting.  Please note I removed all names and rewrote the wording per the rules in BBST.  One person noted that item 7 would be categorized under performance and that item 1 was looking at consistency with comparable products.  Multiple reviewers felt I was looking at consistency, a heuristic Bach created.  What I find odd about that is the need to label the Oracle, when the Oracle (in my view at the time) was the tool, not the heuristic; therefore citing the heuristic of comparable products was not part of the question.  I got a complaint that I was not testing Excel or Google but Calc, yet the call of the question is about how I would use those Oracles.  One fair issue was that I should have noted I could have compared the old version to the new version using a stop watch, which I had missed.  However, I had cited Old Calc in my full document, so I think that was a relatively minor issue.

Since Oracles are tools, how can I not be implicitly testing Excel?  I kept hearing people say I should name my Oracles, yet to me I was naming them very clearly.  I got into several debates about whether something like Self-Verifying Data is in fact an Oracle (even though the article clearly has its own opinion on that)!  It seemed like everyone wanted to label the heuristic the Oracle, probably because of the "Heuristic Oracle" label in BBST.  While I do feel BBST failed to make clear what an Oracle is, it did make me think about Oracles a lot more.

Wrapping Up

I'm sorry if that felt a little ranty, but in talking about this subject, I want you to also think about what you see as an Oracle.  Oddly, Kaner himself cites Doug Hoffman with items I did not consider Oracles (such as Self-Verifying Data) when I started writing this article.  I think Kaner's own work defends his viewpoint, as he doesn't appear to apply his own rule (his definition) to the letter, but rather goes by the similarity of the items, a method humans appear to use frequently.

Truth be told, I'm not so sure Oracle should be a word we in the industry use at all.  Isaac does not seem to believe in Oracles anymore, and appears to feel the definition is broken, as it really cannot be separated from testing.  To me, many people seem to use it, and perhaps it can have value if we shrink down the role of the word.  So let me wrap this up with my attempt to patch the definition into something usable.

Oracle: One or more imperfect external sources (often tools) that provide data or are data to assist in determining the success or failure of a test.

For a not exactly correct but useful working definition, an Oracle is the value that is the expected result.

What do you think an Oracle is?  Are we missing critical details or other expert views?  Feel free to write us a note in the comments!

Thursday, November 7, 2013

@Testing with [Reflections] Part II

If you haven't already, I suggest you read about reflections before reading too deeply into the topic of annotations.  In case I failed to convince you, or in case it didn't totally sink in, reflections as I see it is a way for code to 'think' about code.  In this case we are only considering the "reflection" piece and not the "eval" piece that I spoke about previously.  Reflections supports some pretty fascinating ideas and can be used in multiple areas in different ways.  Annotations (Java) or Attributes (C#), in comparison, are code having data about code, which works hand in hand with reflections.


Let's start with one of the most common examples people in test would be exposed to:

[Test] //Java: @Test()
public void TestSomeFeature() { /*... */}

First, to be clear, the attribute (with the equivalent Java annotation in a comment) is on the first line.  For clarity's sake, for the rest of the article I'm going to say "annotation" to mean either attribute or annotation.  It declares that the method is in fact a "Test" which can be run by xUnit (NUnit, JUnit, TestNG).

The annotation does not in fact change anything about the method.  In theory, xUnit could use reflections to run ANY and ALL methods, including private methods, in an entire binary blob (jar, dll, etc.).  In order to filter out the framework and helper methods, these frameworks use annotations, requiring them to be attached so the runner knows which methods to run.  This by itself is a very useful feature.
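As a sketch of that filtering, here is a toy runner in Java.  The @Test annotation below is my own miniature, not JUnit's or TestNG's; it shows how reflection finds and invokes only the annotated methods:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// A toy version of what an xUnit runner does: reflect over all methods
// and invoke only the ones carrying the marker annotation.
public class MiniRunner {

    @Retention(RetentionPolicy.RUNTIME) // keep the annotation visible at run time
    @Target(ElementType.METHOD)
    @interface Test {}

    public static class SomeTests {
        @Test public void testSomeFeature() { System.out.println("ran testSomeFeature"); }
        public void helperMethod() { System.out.println("should never run"); }
    }

    public static void main(String[] args) throws Exception {
        SomeTests instance = new SomeTests();
        for (Method m : SomeTests.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) { // the filter xUnit needs
                m.invoke(instance);
            }
        }
    }
}
```

Without the `isAnnotationPresent` check, the loop really would run every method it found, helpers and all, which is exactly the problem annotations solve here.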

The Past Anew

Looking back at my example from my previous post, you can now see a few subtle changes which I have marked NEW:

class HighScore {
 String playerName;
 int score;
 Date created;
 int placement;
 String gameName;
 @CreateWith(Class = LevelNamer.class)//NEW
 String levelName;
 //...Who knows what else might belong here.
}

Again, the processing function could be subtly modified to support this change. The change is again marked NEW:

function testVariableSetup(Object class) {
 for each variable in class.Variables {
  if(variable.containsAnnotation(Exclude.class)) //NEW
   continue;//NEW don't process the variable.
  if(variable.containsAnnotation(CreateWith.class)) { //NEW
   variable.value = CreateNewInstance(variable.getAnnotation(CreateWith).Class).Value();//NEW CreateNewInstance must return the common interface.
   continue;//NEW the custom creator replaces the default random values.
  }
  if(variable.type == String) then variable.value = RandomString();
  if(variable.type == Number) then variable.value = RandomNumber();
  if(variable.type == Date) then variable.value = RandomDate();
 }
}

So what this code demonstrates is the ability to exclude fields in any given class by attaching some metadata, which any functionality can look at but doesn't have to. In the high score class we marked the created date variable as something we didn't want to set, maybe because the constructor sets the date and we want to check that first. The second thing we did was set a class to create the levelName. The levelName might have a custom requirement that it follow a certain format. A random String would not do for this, so we created an annotation that takes in a class which will generate the value.

Now we could have a different custom annotation for each and every custom value type, but that would defeat the purpose of making this as generic as possible. Instead, we use a defined pattern which can apply to several different variables. For example, gameName also has to follow a pattern, but one different from the levelName pattern. You could create another class called GameNamer, and as long as it followed the same design (had a method called "Value()" that returned a string), you could just use the CreateWith(Class=X) annotation and they would act the same. This means you would not need to add another case to the testVariableSetup method or even change it. In Java and C# the mechanism for this is a common ancestor, which can be either an interface or an abstract class. That is to say, both classes inherit from the same abstract class or implement the same interface. For the sake of completeness, and to help this make sense, I have included an updated pseudo code example below:

class HighScore {
 String playerName;
 int score;
 Date created;
 int placement;
 @CreateWith(Class = GameNamer.class) //NEW
 String gameName;
 @CreateWith(Class = LevelNamer.class) //NEW
 String levelName;
 //...Who knows what else might belong here.
}

// All NEW below:

interface ICreateWith { String Value(); }
class GameNamer implements ICreateWith { String Value() { return "Game # " + RandomNumber(); } }
class LevelNamer implements ICreateWith { String Value() { return "Level # " + RandomNumber(); } }

//In some class
class some { ICreateWith CreateNewInstance(Class class) { return (ICreateWith)class.new(); } }

TestNG - Complex but powerful

One last example is a little closer to real life and common in the testing world. Although I think the code might be a little too complex to get into here, I want to talk about a real life design and how it works in general. TestNG uses annotations with an interesting twist. Say you have a "Group", a label saying that this is part of a set of tests. Perhaps you have a Group called "smoke" for the smoke tests you want to run separate from all the others. TestNG might support filtering, but between TestNG and Maven and ... you decide you want to determine the filtering of tests at run time using a flag somewhere (database, environment variable, wherever) that says "run smoke only". During run time, TestNG calls an event saying "I'm about to run test X, here is all the annotation data about it. Would you like to change any of it?" At this point you can read the Group information about the test. If your flag says smoke only, you can then check the groups the test has. If the Group list does not have smoke in it, you set the test to enabled=false, changing the annotation's data at run time. TestNG calls this Annotation Transformations. I call it cool.

The weird part is that you are modifying, at run time, annotation data that is hard coded at compile time.  That is to say, annotation values cannot actually be changed at runtime*, but a copy of the instance of them can be.  That is what TestNG actually changes, from what I can tell.
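To make the transformation idea concrete without pulling in TestNG itself, here is a stdlib-only sketch of the same concept; the @Check annotation, the group names and the filtering method are all hypothetical stand-ins, not TestNG's actual API (TestNG's real hook is the IAnnotationTransformer interface):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GroupFilterDemo {
    // Hypothetical stand-in for TestNG's @Test(groups = ...).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Check {
        String[] groups() default {};
    }

    public static class SampleTests {
        @Check(groups = {"smoke"})
        public void login() {}

        @Check(groups = {"regression"})
        public void fullReport() {}
    }

    // Mimics the transformer idea: read each test's annotation data at
    // run time and keep only the tests whose groups contain the
    // requested one, e.g. "run smoke only".
    public static List<String> runnableTests(Class<?> testClass, String wantedGroup) {
        List<String> names = new ArrayList<String>();
        for (Method m : testClass.getDeclaredMethods()) {
            Check check = m.getAnnotation(Check.class);
            if (check != null && Arrays.asList(check.groups()).contains(wantedGroup)) {
                names.add(m.getName());
            }
        }
        return names;
    }
}
```

The real TestNG version goes one step further: instead of filtering outside the framework, it hands you the annotation instance and lets you flip enabled to false before the test runs.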

If you are reading this and saying, this topic is rather confusing, don't feel too badly. I know it is confusing. The TestNG part in particular is a bit mind bending.  And to be clear, I don't see myself as an expert.  There are way more complex ideas out there that just amaze me.

* This is from what I can tell.  Perhaps there are reflective properties to let you do this.  You can however override annotations through inheritance, but that is a more complex piece.

Friday, November 1, 2013

How do you know when you are right?

I have been thinking about this problem, the problem of correctness (which is what I mean by "right"), for a good number of years, but never formally.  This post was written a bit on a lark, so forgive the lack of intellectual rigour.  So what sorts of strategies do we have at our disposal:
  1. Faith - Assume another has all the knowledge and take it as The Truth(tm) without a good reason for confidence.
    1. Example: My fortune cookie said ... and while it has never been right, it will be this time.
  2. Personal History - Using your personal knowledge to judge the correctness. 
    1. Example: You've known your smart friend for years and he typically doesn't lead you astray, so you trust this to be true.
    2. Example: My teacher said the earth is round.
  3. History - If it ain't broken, don't fix it.
    1. Example: My grandfather worked as a miner, my father worked as a miner, therefore I should work as a miner.
  4. Senses - Observe and deduce.  
    1. Example: I see a black sky every night, so the sky must be black everywhere during the night.
  5. Logic - Attempt to determine rightness by applying a set of rules.
    1. Example: If I am floating, I am not on Earth.
  6. Scientific Method - Test it and test it until you believe you have a more accurate world view.
    1. Example: 5 / 6 times I tried to login with this username/password, it succeeded.  Therefore, this username / password must be correct.
  7. Probability - Attempt to assign probabilities that a given item is right.  This might be via the scientific method, your senses, etc.  
    1. Example: If I am floating, I have a 60% chance of being in space, a 39% chance of being on the vomit comet and a 1% chance of something outside of my experience.
  8. Research - Use the "Faith" principle, but take multiple sources.  Intentionally look for counter arguments.  Intentionally look for consensus.
    1. Example: According to Lou Adler and Venkat Rao, you should ask an interviewee to tell stories about their success.  Others, like Nick Corcodilos, note that different questions are useful for different requirements.
  9. Random / Sounds Good - Choose a value at random or seemingly at random and claim it is right.  This might be an intentional lie or it could be the first thing that 'popped' into your head.
    1. Example: Q: How many wheels does a typical truck have?  A: 16.  Q: How did you get the value?  Well I knew a little, but I basically guessed.
    2. Example: Q: Are you a better than average driver? A: Yes. (80-90% of the American population says yes, thus some are lying or 'randomly' choosing yes because it sounds good).
  10. Assume - Sometimes we just assume something is the truth without knowledge.  Cultural ideas follow this often.  
    1. Example: Barber shops are where you get your hair cut. (Not always; some are also brothels for example.)
  11. Null - I refuse to answer the question or I don't know.
    1. Example: Q: What is the smallest atomic unit in the universe?  A: I don't know.  (Even shades of this like, smaller than a truck or smaller than an atom could fall into this category).
  12. ???
I am wondering: what other ways do people "assign" the idea of "right" (as in correct) to a statement?  Do you find that you use several of these, and which of these strategies are the most helpful?  Care to add to the list?

One last question I have in mind with this topic is one I think Scott Adams hit reasonably well with BOCTAOE (But Of Course There Are Obvious Exceptions).  How much rigour do we need in any given statement to be clear?  Does the audience matter?  Does the % accuracy matter?  "Login works" might be right, but not right for all customers (or under all loads).  If I had to mention all exclusions of rightness in all my statements, I might end up needing a EULA just to let someone hear me (or read my works).  Last but not least, is our inability to recall the exact truth just a plain human flaw, making the concept of being 'right' really null in and of itself?  Should right really be "within an order of magnitude of the 'truth'"?

I'm not sure I know the answers to these questions, but I think there is a high probability that I have an opinion which may be within an order of magnitude of the truth; for now I'm going to say I don't know.

Thursday, October 24, 2013

Software Testing is in my Genes (maybe).

In possibly good news, we may now hire based upon a genetic test! I wonder how that will go wrong as I'm sure it will.  As a personal aside, I find that the glass contains what appears to be roughly 50% atmosphere and 50% liquid, but I have not tested it to validate those observations.

Monday, October 21, 2013

The case for an Automation Developer

Disclaimers: As is always true, context matters. Your organization or needs may vary. This is only based upon my experience in the hardware, B2B, ecommerce and financial industries. Given the number of types of businesses and situations, I assume you can either translate this to your situation or see how it doesn't translate to your situation.


Automation within this context is long living, long term testing somewhere between vertical integration testing (e.g., Unit testing including all layers) and system testing (including load testing).  These activities include some or all of the following activities:
  • Writing database queries to get system data and validate results.
  • Writing if statements to add logic, about things like the results, or changing the activities based upon the environment.
  • Creating complex analysis of results such as reporting those to an external system, rerunning failed tests, assigning like reasons for failure, etc.
  • Capturing data about the system state when a failure occurs, such as introspection of the logs to detect what happened in the system.
  • Providing feedback to the manual testers or build engineers in regards to the stability of the build, areas to investigate manually, etc.
  • Documenting the state of automation, including what has been automated and what hasn't been.
  • Creating complex datasets to test many variations of the system, investigating the internals of the system to see what areas can or should be automated.
  • Figuring out what should and shouldn't be automated.

Developer within this context is the person who can design complex systems.  They need to have a strong grasp on the current technology sets and be able to speak to other developers at roughly the same level.  They need to be able to take very rough high level ideas and translate them into working code.  They should be able to do or speak to some or all of the following activities:
  • Design
  • Understand OOP
    • Organization
  • Database
    • Design
    • Query
  • Refactor
  • Debug
  • Reflections
Automation Developer

You will notice that the two lists are somewhat similar in nature.  I tried to make the first set feel more operational and the second set to be a little more skills based, but in order to do those operations, you really have to have the skills of a developer.  In my experience, you need at least one developer-like person on a team of automators.  If you want automation to work, you have to have someone who can treat the automation as a software development project.  That also of course assumes your automation is in fact a software development project.  Some people only need 10 automated tests, or record-playback is good enough for their work.  For those sorts of organizations, a 'manual' tester (that is to say, a person who has little programming knowledge) is fine for those sorts of needs.

Automation Failures

I know of many stories of automation failure.  Many of the reasons revolve around expectations, leadership and communication.  As that is an issue everywhere, I don't want to consider those in too much depth other than to say a person who doesn't understand software development will have a hard time clearly stating what they can or can't do.

Technical reasons for failure involve things as simple as choosing the wrong tool to building the wrong infrastructure.  For example, if you are trying to build an automated test framework, have you got an organized structure defining the different elements and sub-elements?  These might be called "categories" and "pages", with multiple pages in a category and multiple web elements in a page.  How you organize the data is important.  Do you save the elements as variables, call getters or embed that in actions in the page?  Do actions in the page return other pages or is the flow more open?  What happens when the flow is changed based upon the user type?  Do you verify that the data made it into the database or just check the screen?  Are those verifications in the page layer or in a database layer?  Organization matters, and sticking to that organization or refactoring it as need be is a skill set most testers don't have initially.  This isn't the only technical development skill most testers don't have, but I think it illustrates the idea.  Maybe they can learn it, but if you have a team for automation, that team needs a leader.
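To illustrate those organization questions, here is one minimal page-object style sketch; the page names, element strings and login flow are invented for illustration, and there is no real UI driver behind them:

```java
public class PageObjectSketch {
    // A page holds its own elements and actions; an action that
    // navigates returns the page the user lands on, which keeps the
    // flow explicit in the type system.
    public static class LoginPage {
        // In real automation these would be locators for a UI driver;
        // here they are plain strings so the sketch stands alone.
        public final String userField = "login.user";
        public final String passwordField = "login.password";

        public HomePage loginAs(String user, String password) {
            // ...drive the UI here; on success the flow moves on:
            return new HomePage(user);
        }
    }

    public static class HomePage {
        public final String welcomeBanner = "home.banner";
        private final String user;

        public HomePage(String user) { this.user = user; }

        public String welcomeText() { return "Welcome, " + user; }
    }
}
```

The design choice being sketched: because loginAs returns a HomePage, a test that tries to use home-page elements before logging in simply won't compile, which is one way a framework enforces its own organization.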

Real Failure

These sorts of problems I talk about aren't new (Elisabeth Hendrickson from 1998) which is why I hesitate to enumerate the problems with much more detail.  The question is how have we handled such failures as a community?  Like I said, Elisabeth Hendrickson said in 1998 (1998! Seriously!):
Treat test automation like program development: design the architecture, use source control, and look for ways to create reusable functions.
 So if we knew this 15 years ago, then why have we as a community failed to do so?  I have seen suggestions that we should separate the activities into two camps, checking vs testing, with checking being a tool to assist in testing, but not actually testing.  This assumes that automation purely assists because it doesn't have the ability to exercise judgment.  This may be insightful in trying to denote roles, but it doesn't really tell you much about who should do the automating.  CDT doesn't help much either; it really only notes that it depends on external factors.

Often when automation fails, or at least seems to have limited value, who can suggest what we should do?  My assertion is that testers typically don't know enough about code to evaluate the situation other than to say "Your software is broken" (as that is what testers do for a living).  That developers tend not to want to test is typically noted when talking about developers doing testing.  Furthermore, what developer ever intentionally writes a bug (that is to say, we are often blind to our own bugs)?

A Solution?

I want to be clear, this is only one solution; there may be others, which is why the subheading starts with "A".  That being said, I think a mixed approach is reasonable.  What you want is a developer-like person leading the approach, doing the design and enforcing the code reviews.  They 'lead' the project's framework while the testers 'lead' the actual automated tests.  This allows for the best of both worlds.  The Automation Developer mostly treats the code as a software development project while the testers do what they do best, develop tests.  Furthermore, the testers then have buy-in in the project and they know what actually is tested.


Wednesday, October 16, 2013


I have recently been reading over some of Steve Yegge's old posts and they reminded me of a theme I wanted to cover.  There is an idea we call meta-cognition, which testers often use to defocus and focus, to occasionally come back up for air and look for what we might have missed.  It is an important part of our awareness.  We try to literally figure out what we don't know and transfer that into coherent questions or comments.  Part of what we do is reflect on the past, using a learned sub-conscious routine, and attempt to gather data.

In the same way, programming too has ways of doing this, in some cases and in some frames of reference.  This is the subject I wish to visit and consider in a few different ways.  In some languages this is called reflections, which uses a reification of typing to introspect on the code.  Other languages allow other styles of the same concept and call them 'eval' statements.  No matter the name, the basic idea is brilliant: some computer languages literally can consider things in a meta sense intelligently.


So lets consider an example. Here is the class under consideration:
class HighScore {
 String playerName;
 int score;
 Date created;
 int placement;
 String gameName;
 String levelName;
 //...Who knows what else might belong here.
}

First done poorly in pseudo code, here is a way to inject test variables for HighScore:
function testVariableSetup(HighScore highScore) {
 highScore.playerName = RandomString();
 highScore.gameName = RandomString();
 highScore.levelName = RandomString();
 highScore.score = RandomNumber();
 highScore.created = RandomDate();
 //... I got tired.
}

Now here is a more ideal version:
function testVariableSetup(Object class) {
 for each variable in class.Variables {
  if(variable.type == String) then variable.value = RandomString();
  if(variable.type == Number) then variable.value = RandomNumber();
  if(variable.type == Date) then variable.value = RandomDate();
 }
}

Now what happens when you add a new variable to your class?  For that matter, what happens when you have 2 or more classes you need to do this in?  The second version can be applied to anything that has Strings, Dates and Numbers.  Perhaps we are missing some types, like Booleans, but it doesn't take too much effort to cover the majority of the simple types.  Once you have that, you only have to pass in a generic object and it will magically set all fields.  Perhaps you want filtering, but that too is just another feature in the method.
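For anyone who wants to see that "ideal" version in real code, here is one possible Java take on it, assuming public fields and covering only a few types; the Sample class and the random value formats are just illustrations:

```java
import java.lang.reflect.Field;
import java.util.Date;
import java.util.Random;

public class TestVariableSetup {
    private static final Random random = new Random();

    // A small example class to fill in; any class with public fields works.
    public static class Sample {
        public String name;
        public int score;
        public Date created;
    }

    // Fills every public field of any object with a random value based
    // on its declared type, mirroring the pseudo code above.
    public static void fill(Object object) {
        try {
            for (Field f : object.getClass().getFields()) {
                if (f.getType() == String.class) {
                    f.set(object, "random-" + random.nextInt(10000));
                } else if (f.getType() == int.class) {
                    f.setInt(object, 1 + random.nextInt(10000));
                } else if (f.getType() == Date.class) {
                    f.set(object, new Date());
                }
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }
}
```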

The cool thing is, this can also be used to get all the fields without knowing what the fields are. In fact, this one is so simple, I am going to show a real life example, done in Java:

//import java.lang.reflect.Field;
//import java.util.*;

 public static List<String> getFieldNames(Object object) {
  List<String> names = new ArrayList<String>();
  for(Field f : object.getClass().getFields()) {
   names.add(f.getName());
  }
  return names;
 }

 public static Object getFieldValue(String fieldName, Object object) {
  try {
   Field f = object.getClass().getDeclaredField(fieldName);
   return f.get(object);
  } catch (Throwable t) {
   throw new Error(t);
  }
 }

 public static Map<String, Object> getFields(Object object) {
  HashMap<String, Object> map = new HashMap<String, Object>();
  for(String item : getFieldNames(object)) {
   map.put(item, getFieldValue(item, object));
  }
  return map;
 }

Let's first define the term "Field."  This is the Java term for a variable, be it public or private.  In this case, there is code to get all the field names, get any field value and get a map of field names to values.  This allows you to write really quick debug strings by simply reflecting any object automatically and spitting out name/value pairs.  Furthermore, you could make it filter out private variables or variables with a name like X, or get properties rather than fields.  Obviously this can be rather powerful.
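As a sketch of what such a quick debug string helper might look like (the Player class is invented for illustration, and the names are sorted because getFields() makes no ordering promises):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DebugStrings {
    // A made-up class purely to have something to reflect over.
    public static class Player {
        public String name = "Ada";
        public int score = 42;
    }

    // Reflects any object into sorted "field=value" pairs, giving a
    // one-line debug string without writing a toString() by hand.
    public static String debugString(Object object) {
        List<String> pairs = new ArrayList<String>();
        try {
            for (Field f : object.getClass().getFields()) {
                pairs.add(f.getName() + "=" + f.get(object));
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        Collections.sort(pairs);
        return String.join(", ", pairs);
    }
}
```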


Let me give one other consideration of how reflection-like features can work.  Consider the eval statement, a method of loading in code dynamically.  First, starting with a very simple JavaScript example, let me show you what eval can do:

  var x = 10;
  alert( eval('(x + 2) * 5'));

This would display an alert with the value 60. In fact, an eval can execute any amount of code, including very complex logic. This means you can generate logic using strings rather than hard code it.

While I believe the eval statement is rather slow (in some cases), it can be useful for dynamically generating code.  I'm not going to write out an exact example for this, but I want to give you an idea of a problem:

for(int a = 0; a!=10; a++) {
  for(int b = 0; b!=10; b++) {
    //for ... {

First of all, I do know you could use recursion to deal with this problem, but that's actually a hard solution to write, hard to follow and hard to debug. If you were in a production environment, maybe that would be the way to go for performance reasons, but for testing, performance is often not as critical. Now imagine if you had something that generated dynamic strings? I will again attempt to pseudo code an example:

CreateOpeningsFor(nVariables, startValue, endValue) {
  String opening = "for(a{0} = {1}; a{0}!={2}; a{0}++) {";
  String open = "";
  for(int i = 0; i!=nVariables; i++) {
    open = open + "\n" + String.Format(opening, i, startValue, endValue);
  }
  return open;
}

eval(CreateOpeningsFor(5, 0, 10) + CreateFunctionFor("test", "X", 5) + CreateCloseFor(5));
//TODO write CreateFunctionFor, CreateCloseFor...  
//Should look like this: String function = "{0}({1}{2});" String closing = "}";

While I skipped some details, it should be obvious to someone who programs that this can be completed. Is this a great method? Well, it does the same thing as the hard coded method, yet it is dynamically built, thus is easily changed. You can log the created function even and place it in code if you get worried about performance. Then if you need to change it, change the generator instead of the code. I don't believe this solves all problems, but it is a useful way of thinking about code. Another tool in the tool belt.
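Java has no eval, but the string-building half of the idea translates directly; here is one possible sketch of the generator (the method names are my own, and the output would still need a scripting engine or code generator to actually run):

```java
public class LoopGenerator {
    // Builds the opening lines for nVariables nested for loops, a Java
    // take on the CreateOpeningsFor pseudo code above.
    public static String createOpenings(int nVariables, int start, int end) {
        StringBuilder open = new StringBuilder();
        for (int i = 0; i != nVariables; i++) {
            open.append(String.format("for(int a%d = %d; a%d != %d; a%d++) {\n",
                    i, start, i, end, i));
        }
        return open.toString();
    }

    // Closes what createOpenings opened, one brace per loop.
    public static String createClosings(int nVariables) {
        StringBuilder close = new StringBuilder();
        for (int i = 0; i != nVariables; i++) {
            close.append("}\n");
        }
        return close.toString();
    }
}
```

Because the output is just a string, you can log it, inspect it, or paste it into real code later if performance ever becomes a concern, which is exactly the workflow described above.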

The next question you might ask is: do I in fact use these techniques? Yes, I find these tools to be invaluable for some problem sets. I have testing code that automatically reflects and generates lots of different values and applies them to various fields or variables. I have used it in debugging. I have used it to create easily filterable dictionaries so I could get all variables with names like X. It turns the inflexible type system into a helpful system. I have even used it in creating a simple report database system which used reflections to create insert/update statements where the code field names are the column names in the database. Is it right for every environment? No, of course not, but be warned, as a tool it can feel a little like the proverbial hammer which makes everything look like nails. You just have to keep in mind that it is often a good ad-hoc tool, but not always a production worthy tool without significant design consideration. That being said, after a few uses and misuses, most good automators can learn when to use it and when not to.

Another reasonable question is: what other techniques are useful and similar to this? One closely related tool would be regular expressions, a tool I don't wish to go in depth on as I feel like there is a ton of material on it. The other useful technique is known as annotations or attributes. These are used to define metadata about a field, method or class. As I think there are a lot more details to go over, I will try to write a second post on this topic in combination with reflections, as they are powerful together.

Tuesday, October 15, 2013

Interviewing, post 3

TLDR: brain dump.

This is my internal thoughts, presented for you to look at. (cause I want to remove my internal stigma of having to have perfect / point-filled blogs before I post.)

Little background: I used to interview people at a much larger company, where we had a team of roughly 50 testers / SDETs. This allowed me a certain amount of freedom in hiring people that might not perfectly fit any particular profile. That is, they could be higher risk (in my mind) in certain key areas, because they were offset by a large group of other people. Or we could move them to a team that offset any other risky areas while we saw if they panned out over a 2-3 month period, or were able to learn what we thought was necessary.
I now work for a company with a 4 person QA team, hiring a final person for this year to make it a team of 5. Personally I think this gives me significantly less leeway on whom we can hire. I have less wiggle room for potential issues. I don't have the cycles I would like to devote to training someone with only some potential. Basically, what I think I'm looking for and what I need to fill are more tightly coupled for this current position.

So, in interviewing people, I look for certain key talents. One of those talents is self-motivation / drive / passion. In an effort to attempt to figure out what I'm looking for in good people to hire, one of my employees asked me to define what I meant by motivation or drive.

Before looking it up in a dictionary:
Motivation: A reason to do a task.
Drive: Internal motivation, requiring little to no outside force(s). This can be synonymous with Self-motivated.
Passion: A strong drive that allows one to singularly focus on a task or set of tasks. Passion can be good and bad; bad when it doesn't allow one to defocus when necessary (OCD).

Dictionary (the definition I thought the most pertinent):
Motivation: a motivating force, stimulus, or influence : incentive
Drive: to press or force into an activity, course, or direction
Passion: a strong liking or desire for or devotion to some activity, object, or concept

What does all this mean to me? I think my ideas of what motivation / drive are…are pretty reasonable (perhaps self-motivated is the more appropriate word). Now the real question comes down to HOW do you find out if someone is self-motivated or driven?
I've been reading Hire with Your Head (1) recently to see how someone regarded as a great interviewer goes about it. Adler likes the idea of past proof of having done it. He asks, "What is your most significant accomplishment related to X?"
Personally, I'm not sure I want to just ask someone directly, "Are you motivated? Prove it." People can make up anything if they know what you are looking for.

Lately I've been saying something to the point of:
One of the key objectives for this position is the ability to write bug reports that are clear, concise, accurate and relevant. At a high level, what do you think about this and how would you go about this? What have you accomplished that's most similar to this?
One of the key objectives for this position is the conversion of component, functional, system and regression tests into automation. At a high level, what do you think about this and how would you go about this? What have you accomplished that's most similar to this?

And then digging into the details of the given answers to get specific details. I have found this seems to give me the data I need to determine "is this person self-motivated". But since I just changed up the interview questions I use to start taking this into account, I'll reserve judgment till later.


(1) Adler, Lou. Hire with Your Head: Using Performance-based Hiring to Build Great Teams. 3rd ed. Hoboken, N.J: John Wiley & Sons, 2007. Print.

Monday, October 7, 2013

Words of the Week: Heuristic [& Algorithm]

Isaac and I were debating the meaning of Heuristics the other day, trying to come to some common ground.  We ended up going into some interesting places, but it led to a good question about what they are and how they are used.  Let me start with my off the cuff definition using no wiki links, then we'll get into a more formal look.  A heuristic in my mind is a way of getting a not-always-right answer, but an answer one hopes is good enough.  In comparison, an algorithm will always provide a 'correct' answer.  The term correct meaning that the output is consistent given the input and will always provide the best answer it can.  This leads into the question of: can a computer give a non-algorithmic (heuristic) response?

Now when Isaac and I were talking through this, he noted that a computer always gives the same answer if you lock down all of the inputs.  Random.Int() uses a seed, and if you replay with that same seed it will do the same thing.  If changing the configuration or time of the computer has an effect, then that is an input too, and in theory you can lock it down as well.  If the number of CPU cycles for the process is an input, that too would be locked down.  Now given my informal definition, are all heuristics really algorithms, just with inputs that are hard to define?
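Isaac's point about seeds is easy to demonstrate; a small sketch (my own illustration, not from our conversation) showing that a locked-down seed replays exactly the same "random" sequence:

```java
import java.util.Random;

public class SeededRandomDemo {
    // With the seed locked down, "random" output is fully determined:
    // the same seed replays the same sequence every single time.
    public static int[] sequence(long seed, int length) {
        Random random = new Random(seed);
        int[] values = new int[length];
        for (int i = 0; i < length; i++) {
            values[i] = random.nextInt(100);
        }
        return values;
    }
}
```

Run sequence(42L, 5) twice and you get identical arrays; the randomness was only ever in the choice of seed.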

Let's flip this on its head.  What about people?  Isaac asked me, "What would you call that plant?"  I said, "Green" to his chagrin.  He said, "No, the name of the plant is?"  A co-worker interjected, "George."  Obviously it wasn't Isaac's day, but the point Isaac was going for was: if I didn't know the name and then he told it to me, would I then be able to give a more accurate answer?  Even if Isaac was wrong, I had some knowledge base to draw on and could use that to give a heuristically better answer, but I could still be wrong, given that the name might also be George.  The problem is...  What if we locked down all of my life experiences and genes and repeated the experiment?  Obviously it can't be done, but am I really an algorithm that is just a walking input/output device of large complexity?  We are hitting into the realm of determinism and free will.  Is everything an algorithm?  I don't really want to get too far into the weeds, but I believe a point will emerge from this.

Now it is time to look at more formal and test oriented definitions of Heuristics.  Wikipedia says,
In computer science, artificial intelligence, and mathematical optimization, a heuristic is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed.

 Bach and Kaner say,
A heuristic is a fallible method of solving a problem or making a decision.
So now that we know that, I want us to be able to contrast it to Algorithm, the other word that incidentally is being considered.  Again, let us consider wiki:
In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.  ... While there is no generally accepted formal definition of "algorithm," an informal definition could be "a set of rules that precisely defines a sequence of operations."
In both definitions, Heuristics acknowledge failure as a possibility.  On the other side, the Algorithm definition notes that there is no formal definition, and the only 'failure' question I noted that was somewhat on topic was whether an Algorithm needs to stop.  If an Algorithm does not care whether it gives back a correct value, just that it has a set of finite steps, then it too acknowledges that failure is allowed.  In theory, some might note that it should end eventually, depending on whether you think a program that has a halting state is an Algorithm, but this is the only question about outcome.  I suppose you could say an Algorithm can also fail, as success is not a required condition, only halting.  In all the definitions, there is some method for finding a result.  The only difference appears to be that Heuristics specifically acknowledge limits and stopping points and Algorithms don't.

So what is the difference between Heuristic and Algorithm?  One of the popular StackOverflow answers says:
  • An algorithm is typically deterministic and proven to yield an optimal result
  • A heuristic has no proof of correctness, often involves random elements, and may not yield optimal results.
So in this very formal world, an Algorithm requires mathematical proof of correctness (within a given context, such as assuming our universe's constraints).  Heuristics on the other hand need no such formal proof.  In that case, most code is in fact heuristic in nature and most of our testing is also heuristic in nature.  This starts to lead into the question of sapient testing vs checking, but still, I don't want to get into that yet.  Well, not much.  I do want to address one other quote from Bach,
There are two main issues: something that helps you solve a problem without being a guarantee. This immediately suggests the next issue: that heuristics must be applied sapiently (meaning with skill and care).
The idea that Heuristics require skill and care is an interesting one.  When I write an automated test or when I write a program, I use skill and care.  Am I using Heuristics in my development or is my Algorithm the Heuristic?  When I test, am I exploring a system using a Heuristic but when I write automation, the Heuristic of my exploration is lost after the writing of the test, and then it becomes something else, an Algorithm to formally prove if the two systems can interact the same way controlling for the majority of the inputs?  What happens when computers start getting vision and understand a little of the context?  Are they now sapient in a given context (meaning are they skilled and take care to manage the more important aspects compared to the less important aspects)?

I don't intend on going on in this questioning manner, but rather to hit you with a surprise left.  Sometimes words are so squirrelly that when one person attempts to pin them down, they end up creating an unintended chain of events.  They create just another version of the meaning of the word.  Next thing you know, no one really knows what the word means or what the difference between two words is.  I have done way more research on this than most do, and yet I don't think there is a good answer.  I too will attempt to put my finger in the dike, but I don't expect to stop the flow of definitions:

  • A Heuristic is an attempt to create a reasonable solution in a reasonable amount of time.  Heuristics are always Algorithmic in that they have a set of steps, even if those steps are not formal.
  • An Algorithm is a series of steps that, given control of all inputs, will consistently give back a result without necessarily considering other external factors, such as time or resources.  These steps have some formal rigor.
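To make the distinction concrete, here is a small sketch (in Python, my choice for the example; the problem and function names are mine, not from any source above) of both approaches applied to the same coin-change problem.  The algorithm is provably optimal given its inputs; the heuristic is fast and reasonable, with no guarantee.

```python
def change_algorithm(coins, amount):
    """Dynamic programming: provably returns the minimum coin count,
    at the cost of time and memory proportional to `amount`."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for total in range(1, amount + 1):
        for coin in coins:
            if coin <= total and best[total - coin] + 1 < best[total]:
                best[total] = best[total - coin] + 1
    return best[amount]

def change_heuristic(coins, amount):
    """Greedy: always grab the largest coin that still fits.  A reasonable
    answer in a reasonable amount of time, but not a guaranteed-best one."""
    count = 0
    for coin in sorted(coins, reverse=True):
        while coin <= amount:
            amount -= coin
            count += 1
    return count

# With coins {1, 3, 4} and amount 6, the algorithm finds 3+3 (2 coins),
# while the greedy heuristic takes 4+1+1 (3 coins): reasonable, not optimal.
```

Note that the heuristic here is itself a fixed set of steps, which fits the first bullet: it is algorithmic in form, it just trades the guarantee away for speed.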

Wednesday, October 2, 2013

Refactoring: Improving The Design Of Existing Code

Having read many books on the basics of programming, building up queries, and the basics of design, I have found that almost none really talk about how to deal with designing code outside abstract forms. Most books present design at a high level, possibly where a UML diagram is shown, revolving around most of the OOP connections. Some talk over how to use design patterns or create data structures of various types. All seem to be under the illusion that we as programmers can actually apply abstract concepts to real, everyday practice. Almost always they use things like animals or shapes or other "simple" (but not practical) OOP examples. They are always well designed, and seem relatively simple. In spite of that, I think I have learned how to do proper design to some degree by years of trial and error.

Refactoring, on the other hand, starts out with a highly simple, yet more real world, movie system, and then slowly but surely unwraps the design over 50 pages. It is one of the most wonderful sets of “this code works, but let us make it better” I have ever seen, and it is dead simple in implementation. The book starts with a few classes, including a method to render to a console. Then, blow by blow, the book shows differences via bolding of each change. The book talks through design choices, like how having a rendering method that contains what most would call “business logic” prevents you from easily having multiple methods of rendering (i.e., console and HTML) without duplicate code. They also make a somewhat convincing argument against temporary variables, although I am not 100% convinced. Their reasoning is that it is harder to refactor with temp variables, but sometimes temp variables (in my opinion) can provide clarity, not to mention they tend to show up in debugger watch windows. To be fair, later on the author notes adding back in temporary variables for clarity, but it is not emphasized nearly as much.
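The book's own examples are in Java and revolve around that movie-rental system; here is a minimal sketch in Python, with invented names and a made-up charge rule, of the kind of temp-variable change discussed above (the book calls it Replace Temp with Query):

```python
class Rental:
    """A toy stand-in for the book's movie-rental example (names invented)."""
    def __init__(self, days_rented, daily_rate):
        self.days_rented = days_rented
        self.daily_rate = daily_rate

    # Before: a temporary variable holds the intermediate result.
    def statement_before(self):
        this_amount = self.days_rented * self.daily_rate  # temp variable
        return "Amount owed: %.2f" % this_amount

    # After Replace Temp with Query: the temp becomes a method, so any
    # future rendering method (console, HTML, ...) can reuse the calculation.
    def charge(self):
        return self.days_rented * self.daily_rate

    def statement_after(self):
        return "Amount owed: %.2f" % self.charge()
```

The payoff is the one the book argues for: once charge() exists, an HTML statement can call it without duplicating the business logic, at the small cost (as noted above) of losing a variable you could watch in a debugger.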

As I continued through the book, a second and important point was brought up. Refactoring requires reasonable amounts of unit testing in order to ensure that you do not accidentally break the system with the redesign. The argument continues that all coding falls into two categories: one is creating new functionality and the other is refactoring. When creating the initial code, you need testing around it, which connects into TDD. The point is to be careful with refactors because they can cause instability, and to work slowly in the refactoring. They talk about constantly hitting the compile button just to be sure you have not screwed something up.

Sometimes it is comforting to know that I am not the only one who runs into some types of trouble. One of my favorite statements in the book is in chapter 7, where the author notes that after 10+ years of OOP, he still gets the placement of responsibilities in classes wrong. This to me is perhaps the entire key to refactoring. The point is, we as humans are flawed, and even if we are all flawed in unique ways, it is something we have in common. This book, in my opinion, is not ultimately about how to correctly design, but how to deal with the mess that you will inevitably make. This is so important I think it bears repeating: this book says that no one is smart enough to build the correct software the first time.  Instead, it says that you must bravely push on with your current understanding and then, when you finally do have something of a concept of what you are doing, go back and refactor the work that you just created. That is hard to put into practice, since you just got it working, and now you have to go tear it up and build again. The beauty of the refactor is that you don’t have to go it alone; you have techniques that allow you to make wise choices on what to change, and the changes should have no effect on the final product.

One final thing I think is worth mentioning. The way the book is laid out, you can easily re-visit any given refactoring technique that you didn’t “get” the first time, as it is fairly rationally structured, grouping refactors together but keeping each refactor individualized. It makes me wonder how many times they refactored the book before they felt they had it right?

Friday, September 27, 2013

Legislative Code III

In my first post, I considered how many analogies there are in software development, including legislation.  I noted how analogies help bridge an understanding gap, but only do so when we connect well with that analogy.  Then, in my previous post, I attempted to apply the analogy that software development is like legislators creating legislation.  I considered some example high level activities that occur in software development and then attempted to compare and contrast them to those of legislation.  I left it with the cliffhanger of 'so what?'  Here is my final posting on the subject.

So What?

In that last post, I tried to roughly describe what software testing is, but when I first did so, I wrote:
Writing Test Plans...Write Tests...
My co-author Isaac said, "I don't do those things!"  I said, "bull feather!", because as one of his employees, I know what he does.  He totally writes high level plans.  He did it with a 'war board' strategy a few months ago.  And what about that story for updating the tests we have documented?  He begrudgingly agreed he did so, but that he HATED those words as they bring up a very different idea in his head.  An idea of these extensive documents that go on and on.  He basically couldn't (easily) get out of his head his own idea of the connotations of those words.

Now, you might notice those words were edited some, because I too did not want someone to read that and apply something more detailed than I had intended.  The problem is, the activity, any way I might describe it, is either going to be too heavily attached to MY context or so generic as to be meaningless.  This goes back to my Context Driven Test post, however, I'm really not trying to talk about approaches too much.  So, what is someone to do?

Please allow me to go on a small tangent for a moment.  I have a pet theory that we, that is to say the human race, are story driven.  We tell each other stories all the time, and we are engaged by stories but not technical details.  Only a few of us can even deal with technical details, but almost everyone loves a ripping yarn.  That is why agile and people like Joel Spolsky tell you to try to tell stories, even for technical people.  I may write more on this later, but just think about it a little.

So how do I communicate to you, my audience?  By using a story driven system, without inserting my own language or context, I can avoid using words and concepts you already have attached meanings to (your context).  It is much easier for me to avoid those words using analogies than it is to struggle to shake off your preconceived notions about a particular word.  So, what if I tell a story with my analogy?  Let us consider an example:

When legislation is written, multiple parties participate.  At first, the city mayor notices a problem; there are too many stop lights being installed that are impeding the flow of traffic.  Like a business person, she sees a solution, which she proposes and brings to the attention of the people who would have to perform the fix, the local road development dept.  Maybe the solution is installing roundabouts in newer streets so that traffic need not stop.  The road development crew might create a detailed plan, and to validate that the solution is feasible they pass their plan to another dept.  In legislative terms, the solution might go to a feasibility study dept, and like QA, they test the solution, to see if this is even possible for the newer roads, given the budget, etc.  Computer simulations might be created to verify the fix, testing the traffic flow, verifying that the road ways will work.  Accountants are contacted, and the legal dept might get pulled in during the study.  Even if it gets past this point, you might raise the proposed solution to the citizens, much like deploying to beta, who may note that they don't want their yard taken up by a roundabout.  Perhaps the ordinance gets some modifications, going back through the process with the addition that it won't include residential areas where stop signs would better serve.  The mayor and her constituents are happy to see an improved road system....

If you are in an area where the local government doesn't do these sorts of activities, the story will likely not work.  So let's consider a very different story from a different story teller:

Well, when I create software, I think of the developers as Jedi, who when they have completed a feature,  pass it on to the Emperor of QA, who orders his team to start on order 66 and find every last issue in the feature without mercy.  Next thing you know, the developers are struggling to survive under the weight of all those bugs. 

Let's consider these two stories.  In the first case, I was attempting to describe a process and how that process worked using a comparable process.  My coworker would likely not have objected to that as an example of 'what development is like', as it would not have used words he disagreed with.  Now maybe he would have gotten caught up in the details and argued about how legislation really works, or pointed out flaws in the analogy (technical people tend to do this), but it would have been easier for us to not get stuck in the weeds.  Even if he did so, it would be more likely to be clarifying remarks rather than argument about what activities we really do.

In contrast, the other story is about feelings, how someone felt.  The process details are less than clear, but it doesn't matter (to the speaker).  If you know Star Wars, you will know the story teller sees QA as the bad guys, creating more issues in an unfair way.  Perhaps to a person who hasn't seen Star Wars this wouldn't be a useful story, but to someone who has, it would be easy enough to pick up the gist.  The problem is, without interconnecting the details of what the environment is like to the details of the story, it becomes unclear just how far I should take the analogy.  Is this a whiner or is there really an issue?  Is this QA's fault or management's?  Is this one person, the QA manager (or the story teller), or is it entire teams?

In this sense, an analogy is a tool, used to help create a picture and a conversation.  The risk with an analogy is that either the audience doesn't understand the analogy or that the reader takes it too far.  Analogies, some of which are referred to as parables, have been around since the dawn of story telling.  We preach morals, teach and inform with them.  Why?  Well, the reason is that when you learn something, you take what you already know and extend it slightly.  With that in mind, my attempt to consider software development like something else comes down to me attempting to extend my knowledge of one subject and make it like another, comparing the items, attempting to learn new truths.  This concept is sometimes referred to as "Systems Thinking."

Isaac, after looking at a draft of this, said he thought the TL/DR was "Tell stories."  While I won't disagree with that, I think a second bonus TL/DR is to keep your mind open and look for things you can learn from in areas you aren't a specialist in.  If you twisted my arm for a third TL/DR, I might add that this entire blog is my attempt to learn via the process of teaching.  So here goes:

TL/DR: Learn by teaching using stories.

Word of the Week: Manumatic

Before I go into the depths of what I mean, I should first talk about what the wiki defines manumatic as.  According to it, manumatic is a type of semi-automatic shifter used for vehicles.  This is not what I am talking about, even though it shares some of the same flavor.  What I am talking about is semi-automated testing (or is that semi-manual checking?).  Some testers like the term tool-assisted testing, and I can imagine a half dozen other terms like tool driven testing.  Whatever you want to call it, I tend to call it a manumatic process or manumatic test.

The idea is that you have a manual process that is cumbersome or difficult to do.  However, either some part of the test is hard to automate or the validation of the results requires human interpretation.  There are many different forms this can come in, and my attempt to define it may be missing some corner cases (feel free to QA me in the comments), but allow me to give some examples.

At a previous company I worked for, I had to find a way to validate that thousands of pages did not change in 'unexpected' ways, but unexpected was not exactly defined.  Unexpected included JavaScript errors, pictures that did not load, html rendering poorly and the like.  QA had no way of knowing that anything had in fact changed, so we had to look at the entire set every time, and these changes were made primarily in production to a set of pages even the person who made the change may not have known about.  How do you test this?  Well, you could go through every page every day and hope you notice any subtle changes that occur.  You could use layout bug detectors, such as the famous fighting layout bugs (which is awesome by the way), but that doesn't catch nearly all errors and certainly not subtle content changes.

We used a sort of custom screenshot comparison with the ability to shut off certain html elements in order to hide things like date/time displays.  We did use some custom layout bug detectors and did some small checking, but primarily the screenshots were our tool of choice.  Once the screenshots were done, we would manually look at them and determine which changes were acceptable and which were not.  This is a manumatic test, as the automation does do some testing, but a "pass" meant nothing changed (insofar as the screenshots were concerned), and finding a diff or change in the layout didn't always mean "fail".  We threw away the "test results", only keeping the screenshots.
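Our actual tool is long gone, but the core of the idea can be sketched in a few lines of Python.  Here simple pixel grids stand in for real screenshots (in practice you would capture images and use an image library): mask out the volatile spots like date/time displays, diff the rest, and hand anything that changed to a human.

```python
def mask(pixels, regions):
    """Zero out volatile regions (e.g. a date/time display) before comparing.
    `pixels` is a list of rows; `regions` is a list of (row, col) points."""
    masked = [row[:] for row in pixels]
    for r, c in regions:
        masked[r][c] = 0
    return masked

def diff_screenshots(old, new, volatile):
    """Return the coordinates that changed outside the masked regions.
    An empty result means 'nothing changed'; a non-empty one means
    'a human needs to look', not necessarily 'fail'."""
    old_m, new_m = mask(old, volatile), mask(new, volatile)
    return [(r, c)
            for r, row in enumerate(old_m)
            for c, pixel in enumerate(row)
            if new_m[r][c] != pixel]
```

The automation only answers "did anything change?"; deciding whether a change is acceptable stays with the tester, which is what makes the whole thing manumatic.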

In manual testing, we often need new logins.  It requires multiple sql calls and lots of data to create a new login, not to mention some verifications that other bits are created.  It is rather hard to do, but we wrote automation to do it.  So with a few edits, an automated 'test' was created that allows a user to fill in the few bits of data that usually matter and lets the automation safely create a user.  Since we have to maintain the automation already, this means every tester need not have the script on their own box and fight updates as the system changes.  This is a manumatic process.
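A stripped-down sketch of that kind of user-creation helper, using Python's sqlite3 and an invented two-table schema in place of our real system (the table and column names here are illustrative only):

```python
import sqlite3

def create_login(conn, username, email):
    """Create a login plus the related rows other features expect,
    then verify the related bits actually got created."""
    cur = conn.cursor()
    cur.execute("INSERT INTO users (username, email) VALUES (?, ?)",
                (username, email))
    user_id = cur.lastrowid
    # The "other bits": in this sketch, every login needs a preferences row.
    cur.execute("INSERT INTO preferences (user_id, theme) VALUES (?, ?)",
                (user_id, "default"))
    conn.commit()
    # Verification step: fail loudly if the related row is missing.
    cur.execute("SELECT COUNT(*) FROM preferences WHERE user_id = ?",
                (user_id,))
    assert cur.fetchone()[0] == 1, "preferences row was not created"
    return user_id

# An in-memory database stands in for the real one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, email TEXT);
    CREATE TABLE preferences (id INTEGER PRIMARY KEY, user_id INTEGER,
                              theme TEXT);
""")
```

The tester supplies only the bits that matter (username, email); the script handles the rest of the sql and the verification, so nobody keeps a private copy that rots as the schema changes.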

Let me give one more example.  We had a page that interacted with the database based upon certain preset conditions.  In order to validate the preset conditions, we needed to do lots of different queries, each of which was subtly connected to other tables.  Writing queries and context switching was a pain, so we wrote up a program to do the queries and print out easy to read HTML.  This is a manumatic process.
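The real queries belonged to that system, but the shape of the tool is simple to sketch in Python: run the canned queries, then render the rows as easy-to-scan HTML instead of making a tester context-switch between query windows.  (Everything here, titles and columns included, is made up for illustration.)

```python
def rows_to_html(title, headers, rows):
    """Render one query's results as a small HTML table."""
    html = ["<h2>%s</h2>" % title, "<table>"]
    html.append("<tr>" + "".join("<th>%s</th>" % h for h in headers) + "</tr>")
    for row in rows:
        html.append("<tr>" + "".join("<td>%s</td>" % v for v in row) + "</tr>")
    html.append("</table>")
    return "\n".join(html)

def report(queries):
    """`queries` maps a section title to (headers, rows) -- in the real tool
    these came from canned SQL against several subtly connected tables."""
    return "\n".join(rows_to_html(t, h, r) for t, (h, r) in queries.items())
```

The automation does the tedious gathering; judging whether the preset conditions are actually right remains a human job, reading the HTML.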

I honestly don't care what you call it; I just want to blur the lines between automated testing and manual testing, as I don't think they are as clear as some people make them out to be.

Legislative Code II

As I spoke about previously, I think that code might be comparable to legislation. In addition to that, I went on to note how many different analogies we have for the activity of software development. I noted how we tend to assume our own analogies are roughly about the same activity, but that might not be true. In fact, our entire social, economic, personal, and professional backgrounds might get in the way of our understanding of what an analogy is meant to say, yet we still use analogies. Finally, I noted that analogies are useful tools and should be used as such rather than as absolute ways of thinking.

So this time I want to actually talk about what software development is like to me.  I think there are multiple levels, and each one can be compared to other activities.  Let's try to divide up the activity into a couple of sub-activities:
  • Business Ideas - Creating a viable business plan, including high level Products.
  • Product Ideas - Converting an idea into something on the screen.  Often this is words describing the idea with some detail.  Sometimes it involves producing screen shots showing what the application would look like.
  • Writing Code - Converting the Product Ideas into logical steps, sometimes reorganizing those steps into sub steps.
  • Writing Test Code - Converting the expectation of what the software should do to a series of steps to see if it fits those expectations.
  • Create Plans - Converting a Product Idea into high level attack plans.
  • Create Tests - Converting Test Plans into a set of steps to test the product, sometimes reorganizing those steps into sub steps (e.g., a test).
  • Testing - Creating instances of either Written Tests or Test Code.  Some test code will cause more Code to be created (or deleted).
  • Shipping Bytes - Moving the Tested code into other environments.
  • Support - Making those Shipped Bytes work together, including sometimes modifying the data being used.
I tried to tie each of these activities together; however, often all the parts of the machine are moving, with different pieces in different pipelines.  A new business might be getting created while the current product is being updated, and a previously code complete feature is being tested while old bits are being supported.  I tried to represent this by capitalizing these activities as if they were Proper Names.  Now we can debate each and every one of these activities (and we should) and their limits, order, proper place, etc., but I'm not too interested in that today.  I just want a rough outline that most people agree is an example of software development.

Alright now, the question is, with all these activities, what analogies can make sense?  Well, let's "test" the legislative analogy.  What is legislation and what do legislators do anyway?  To take some quick quotes from those wiki articles:

A legislature is a kind of deliberative assembly with the power to pass, amend, and repeal laws. ... In most parliamentary systems... However, in presidential systems... In federations... Because members of legislatures usually sit together in a specific room to deliberate, seats in that room may be assigned exclusively to members of the legislature. 
- Legislature
(Another source of law is judge-made law or case law.)  ... Legislation can have many purposes: to regulate, to authorize, to proscribe, to provide (funds), to sanction, to grant, to declare or to restrict. Most large legislatures enact only a small fraction of the bills proposed in a given session. Whether a given bill will be proposed and enter into force is generally a matter of the legislative priorities of government.
- Legislation

So let me see: a bunch of guys create logical rules, which can sometimes be amended by another group interpreting the rules or sometimes excluding large parts of the rules.  They create these rules for all sorts of purposes, depending on what is required by the system they are in for the people they represent.  These rules are sometimes known as code.  Well, how well does that match?

A business comes up with ideas, like a government comes up with ideas, either from the populace or by situation, etc.  They start coming up with rough ideas of what this should look like, and a small number of those ideas are then written as code.  These "coders" all have their own desks and often have separate infrastructure to actually enforce the ideas by deploying their code.  In the US, this is called the 'executive' branch (or was it the ops team?).  In my experience, legislation often has a time for public comment, which is roughly comparable to test; however, it is never exactly like the production environment, so bugs occur.  Thus the legislature creates patches... er... amendments to fix it, or if they don't fix it, often the judicial branch modifies the system by pulling out bad code until the coders get time to fix it.

I don't want to stretch my analogy too far, nor give it too much force, but there does seem to be a lot of comparable things between the two systems.

"So what?" you say.  Great question... and we are out of time!  Seriously, I will address this in a third and final post which I will put up really soon.