Friday, September 27, 2013

Legislative Code III

In my first post, I considered how many analogies there are for software development, including legislation.  I noted how analogies help bridge an understanding gap, but only do so when we connect well with that analogy.  Then, in my previous post, I attempted to apply the analogy that software development is like legislators creating legislation.  I considered some example high-level activities that occur in software development and then attempted to compare and contrast them to those of legislation.  I left it with the cliffhanger of 'so what?'  Here is my final posting on the subject.

So What?

In that last post, I tried to roughly describe what software testing is, but when I first did so, I wrote:
Writing Test Plans...Write Tests...
My co-author Isaac said, "I don't do those things!"  I said, "bull feather!", because as one of his employees, I know what he does.  He totally writes high-level plans.  He did it with a 'war board' strategy a few months ago.  And what about that story for updating the tests we have documented?  He begrudgingly agreed he did those things, but said that he HATED those words, as they bring up a very different idea in his head: an idea of these extensive documents that go on and on.  He basically couldn't (easily) shake his own connotations for those words.

Now, you might notice those words were edited some, because I too did not want someone to read that and apply something more detailed than I had intended.  The problem is, the activity, any way I might describe it, is either going to be too heavily attached to MY context or so generic as to be meaningless.  This goes back to my Context Driven Test post; however, I'm really not trying to talk about approaches too much.  So, what is someone to do?

Please allow me to go on a small tangent for a moment.  I have a pet theory that we, that is to say, the human race, are story driven.  We tell each other stories all the time, and we are engaged by stories but not by technical details.  Only a few of us can even deal with technical details, but almost everyone loves a ripping yarn.  That is why agile and people like Joel Spolsky tell you to try to tell stories, even to technical people.  I may write more on this later, but just think about it a little.

So how do I communicate with you, my audience?  By using a story-driven system, without inserting my own language or context, and while avoiding words and concepts to which you have already attached meanings (your context).  It is much easier for me to avoid those words using analogies than it is to struggle to shake off your preconceived notions about a particular word.  So, what if I tell a story with my analogy?  Let us consider an example:

When legislation is written, multiple parties participate.  At first, the city mayor notices a problem; there are too many stop lights being installed that are impeding the flow of traffic.  Like a business person, she sees a solution, which she proposes and brings to the attention of the people who would have to perform the fix, the local road development dept.  Maybe the solution is installing roundabouts on newer streets so that traffic need not stop.  The road development crew might create a detailed plan, and to validate that the solution is feasible they pass their plan to another dept.  In legislative terms, the solution might go to a feasibility study dept, and like QA, they test the solution, to see if this is even possible for the newer roads, given the budget, etc.  Computer simulations might be created to verify the fix, testing the traffic flow, verifying that the roadways will work.  Accountants are contacted, and the legal dept might get pulled in during the study.  Even if it gets past this point, you might raise the proposed solution to the citizens, much like deploying to beta, who may note that they don't want their yard taken up by a roundabout.  Perhaps the ordinance gets some modifications, going back through the process with the addition that it won't include residential areas where stop signs would better serve.  The mayor and her constituents are happy to see an improved road system....

If you are in an area where the local government doesn't do these sorts of activities, the story will likely not work.  So let's consider a very different story from a different story teller:

Well, when I create software, I think of the developers as Jedi who, when they have completed a feature, pass it on to the Emperor of QA, who orders his team to execute Order 66 and find every last issue in the feature without mercy.  Next thing you know, the developers are struggling to survive under the weight of all those bugs.

Let's consider these two stories.  In the first case, I was attempting to describe a process and how that process worked using a comparable process.  My coworker would likely not have objected to that as an example of 'what development is like', as it would not have used words he disagreed with.  Now maybe he would have gotten caught up in the details and argued about how legislation really works, or pointed out flaws in the analogy (technical people tend to do this), but it would have been easier for us to not get stuck in the weeds.  Even if he did so, it would be more likely to be clarifying remarks rather than an argument about what activities we really do.

In contrast, the other story is about feelings, how someone felt.  The process details are less than clear, but it doesn't matter (to the speaker).  If you know Star Wars, you will know the storyteller sees QA as the bad guys, creating more issues in an unfair way.  Perhaps to a person who hasn't seen Star Wars, this wouldn't be a useful story, but to someone who has, it would be easy enough to pick up the gist.  The problem is, without interconnecting the details of what the environment is like to the details of the story, it becomes unclear just how far I should take the analogy.  Is this a whiner or is there really an issue?  Is this QA's fault or management's?  Is this one person, the QA manager (or the storyteller), or is it entire teams?

In this sense, an analogy is a tool, used to help create a picture and a conversation.  The risk with an analogy is that either the audience doesn't understand the analogy or the reader takes it too far.  Analogies, some of which are referred to as parables, have been around since the dawn of storytelling.  We preach morals, teach and inform with them.  Why?  Well, the reason is that when you learn something, you take what you already know and extend it slightly.  With that in mind, my attempt to consider software development like something else comes down to me attempting to extend my knowledge of one subject and make it like another, comparing the items, attempting to learn new truths.  This concept is sometimes referred to as "Systems Thinking."

Isaac, after looking at a draft of this, said he thought the TL/DR was "Tell stories."  While I won't disagree with that, I think a second bonus TL/DR is to keep your mind open and look for things you can learn from in areas you aren't a specialist in.  If you twisted my arm for a third TL/DR, I might add that this entire blog is my attempt to learn via the process of teaching.  So here goes:

TL/DR: Learn by teaching using stories.

Word of the Week: Manumatic

Before I go into the depths of what I mean, I should first talk about how the wiki defines manumatic.  According to that definition, a manumatic is a type of semi-automatic shifter used in vehicles.  This is not what I am talking about, even though it shares some of the same flavor.  What I am talking about is semi-automated testing (or is that semi-manual checking?).  Some testers like the term tool-assisted testing and I can imagine a half dozen other terms like tool-driven testing.  Whatever you want to call it, I tend to call it a manumatic process or manumatic test.

The idea is that you have a manual process that is cumbersome or difficult to do, yet either some part of the test is hard to automate or the validation of the results requires human interpretation.  There are many different forms this can come in, and my attempt to define it may be missing some corner cases (feel free to QA me in the comments), but allow me to give some examples.

At a previous company I worked for, I had to find a way to validate that thousands of pages did not change in 'unexpected' ways, but unexpected was not exactly defined.  Unexpected included JavaScript errors, pictures that did not load, poorly rendering HTML and the like.  QA had no way of knowing that anything had in fact changed, so we had to look at the entire set every time, and these changes were made primarily in production to a set of pages that even the person who made the change may not have known about.  How do you test this?  Well, you could go through every page every day and hope you notice any subtle changes that occur.  You could use layout bug detectors, such as the famous Fighting Layout Bugs (which is awesome, by the way), but that doesn't catch nearly all errors and certainly not subtle content changes.

We used a sort of custom screenshot comparison with the ability to shut off certain HTML elements in order to hide things like date/time displays.  We did use some custom layout bug detectors and did some small checking, but primarily the screenshots were our tool of choice.  Once the screenshots were done, we would manually look at them and determine which changes were acceptable and which were not.  This is a manumatic test, as the automation does do some testing, but a "pass" meant nothing changed (insofar as the screenshots were concerned), and finding a diff or change in the layout didn't always mean "fail".  We threw away the "test results", only keeping the screenshots.
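
To make the idea concrete, here is a minimal sketch of the kind of pixel-level comparison that drove this process.  This is not our actual tool; it assumes plain Java with ImageIO and hypothetical file paths, and the human still decides whether any reported difference matters.

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class ScreenshotDiff {

    /** Returns the number of pixels that differ between two same-sized screenshots. */
    static long countDifferingPixels(BufferedImage baseline, BufferedImage current) {
        if (baseline.getWidth() != current.getWidth()
                || baseline.getHeight() != current.getHeight()) {
            return Long.MAX_VALUE; // a size change is itself a difference worth a human look
        }
        long diffs = 0;
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                if (baseline.getRGB(x, y) != current.getRGB(x, y)) {
                    diffs++;
                }
            }
        }
        return diffs;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file names; in practice these come from a crawl of the whole page set.
        BufferedImage before = ImageIO.read(new File("baseline/home.png"));
        BufferedImage after = ImageIO.read(new File("current/home.png"));
        long diffs = countDifferingPixels(before, after);
        // A non-zero count is not automatically a "fail"; it is a flag for a person to review.
        System.out.println("Pixels changed: " + diffs);
    }
}
```

The "shut off certain HTML elements" step happens before the screenshot is taken (for example, hiding date/time widgets), which keeps volatile regions from flagging every single run.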

In manual testing, we often need new logins.  It requires multiple SQL calls and lots of data to create a new login, not to mention some verifications that other bits are created.  It is rather hard to do, but we wrote automation to do it.  So with a few edits, an automated 'test' was created that allows a user to fill in the few bits of data that usually matter and lets the automation safely create a user.  Since we have to maintain the automation already, this means every tester need not have the script on their own box and fight updates as the system changes.  This is a manumatic process.
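
A hedged sketch of what such a helper might look like, assuming JDBC and entirely made-up table and column names; the real version did more validation and more inserts, but the shape is the same: a tester supplies the few values that matter and the tool handles the rest.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CreateTestUser {

    // Table and column names here are illustrative, not the real schema.
    public static void createUser(String jdbcUrl, String username, String role) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false); // all-or-nothing: a half-created login is worse than none
            try (PreparedStatement insertLogin = conn.prepareStatement(
                         "INSERT INTO logins (username, password_hash) VALUES (?, ?)");
                 PreparedStatement insertProfile = conn.prepareStatement(
                         "INSERT INTO profiles (username, role) VALUES (?, ?)")) {
                insertLogin.setString(1, username);
                insertLogin.setString(2, "default-test-hash");
                insertLogin.executeUpdate();

                insertProfile.setString(1, username);
                insertProfile.setString(2, role);
                insertProfile.executeUpdate();

                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```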

Let me give one more example.  We had a page that interacted with the database based upon certain preset conditions.  In order to validate the preset conditions, we needed to do lots of different queries, each of which was subtly connected to other tables.  Writing queries and context switching was a pain, so we wrote up a program to do the queries and print out easy-to-read HTML.  This is a manumatic process.
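
Again only as a rough sketch (JDBC, with a placeholder connection string and query), the core of such a tool is just running the canned queries and dumping the rows as an HTML table for a human to read:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryReport {

    /** Runs one canned query and returns its result set as a simple HTML table. */
    static String toHtmlTable(Connection conn, String sql) throws SQLException {
        StringBuilder html = new StringBuilder("<table border=\"1\">\n");
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            ResultSetMetaData meta = rs.getMetaData();
            html.append("<tr>");
            for (int c = 1; c <= meta.getColumnCount(); c++) {
                html.append("<th>").append(meta.getColumnLabel(c)).append("</th>");
            }
            html.append("</tr>\n");
            while (rs.next()) {
                html.append("<tr>");
                for (int c = 1; c <= meta.getColumnCount(); c++) {
                    html.append("<td>").append(rs.getString(c)).append("</td>");
                }
                html.append("</tr>\n");
            }
        }
        html.append("</table>");
        return html.toString();
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder connection string and query; the real tool ran a whole list of them.
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db")) {
            System.out.println(toHtmlTable(conn, "SELECT * FROM preset_conditions"));
        }
    }
}
```

The tool does nothing clever; its whole value is saving the tester the query writing and context switching, while the interpretation of the output stays manual.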

I honestly don't care what you call it; I just want to blur the lines between automated testing and manual testing, as I don't think they are as clear as some people make them out to be.

Legislative Code II

As I spoke about previously, I think that code might be comparable to legislation. In addition to that, I went on to note how many different analogies we have for the activity of software development. I noted how we tend to make assumptions about our own analogies being roughly about the same activity, but that might not be true. In fact, our entire social, economic, personal, and professional backgrounds might get in the way of our understanding of what an analogy is meant to say, yet we still use analogies. Finally, I noted that analogies are useful tools and should be used as such rather than as absolute ways of thinking.

So this time I want to actually talk about what software development is like to me.  I think there are multiple levels, and each one can be compared to other activities.  Let's try to divide up the activity into a couple of sub-activities:
  • Business Ideas - Creating a viable business plan, including high level Products.
  • Product Ideas - Converting an idea into something on the screen.  Often this is words describing the idea with some detail.  Sometimes it involves producing screen shots showing what the application would look like.
  • Writing Code - Converting the Product Ideas into logical steps, sometimes reorganizing those steps into sub steps.
  • Writing Test Code - Converting the expectation of what the software should do to a series of steps to see if it fits those expectations.
  • Create Plans - Converting Product Ideas into high-level attack plans.
  • Create Tests - Converting Test Plans into a set of steps to test the product, sometimes reorganizing those steps into sub steps (E.G. A test).
  • Testing - Creating instances of either Written Tests or Test Code.  Some test code will cause more Code to be created (or deleted).
  • Shipping Bytes - Moving the Tested code into other environments.
  • Support - Making those Shipped Bytes work together, including sometimes modifying the data being used.
I tried to tie each of these activities together; however, often all the parts of the machine are moving, with different pieces in different pipelines.  A new business might be getting created while the current product is being updated, and a previously code-complete feature is being tested while old bits are being supported.  I tried to represent this by capitalizing these activities as if they were Proper Names.  Now we can debate about each and every one of these activities (and we should) and their limits, order, proper place, etc., but I'm not too interested in that today.  I just want a rough outline that most people agree is an example of software development.

Alright now, the question is, with all these activities, what analogies can make sense?  Well, let's "test" the legislative analogy.  What is legislation and what do legislators do anyway?  To take some quick quotes from those wiki articles:

A legislature is a kind of deliberative assembly with the power to pass, amend, and repeal laws. ... In most parliamentary systems... However, in presidential systems... In federations... Because members of legislatures usually sit together in a specific room to deliberate, seats in that room may be assigned exclusively to members of the legislature. 
- Legislature
(Another source of law is judge-made law or case law.)  ... Legislation can have many purposes: to regulate, to authorize, to proscribe, to provide (funds), to sanction, to grant, to declare or to restrict. Most large legislatures enact only a small fraction of the bills proposed in a given session. Whether a given bill will be proposed and enter into force is generally a matter of the legislative priorities of government.
- Legislation

So let me see: a bunch of guys create logical rules, which can sometimes be amended by another group by interpreting the rules or sometimes excluding large parts of the rules.  They create these rules for all sorts of purposes, depending on what is required by the system they are in for the people they represent.  These rules are sometimes known as code.  Well, how well does that match?

A business comes up with ideas, like a government comes up with ideas, either from the populace or by situation, etc.  They start coming up with rough ideas of what this should look like, and a small number of those ideas are then written as code.  These "coders" all have their own desk and often have separate infrastructure to actually enforce the ideas by deploying their code.  In the US, this is called the 'executive' branch (or was it the ops team?).  In my experience, legislation often has a time for public comment, which is roughly comparable to test; however, it is never exactly like the production environment, so bugs occur.  Thus the legislature creates patches... er... amendments to fix it, or if they don't fix it, often the judicial branch modifies the system by pulling out bad code until the coders get time to fix it.

I don't want to stretch my analogy too far, nor give it too much force, but there does seem to be a lot of comparable things between the two systems.

"So what?" you say.  Great question... and we are out of time!  Seriously, I will address this in a third and final post which I will put up really soon.

Friday, September 20, 2013

Where CDT fails: Rebuttal?

tldr: How do we bootstrap the next generation of testers?

So I decided to attempt to refute JCD's thoughts on CDT. I think JCD was trying to say, "How does one learn testing in the CDT community?" But in trying to refute the idea that CDT stuff is easy to find and everywhere "out there," it occurred to me to wonder: how much does one have to know to progress further? Is part of the journey of CDT the self-discovery of the path? When the student is ready, the master (in the form of classes, blogs, books) is out there for you to find?

Then I wrote a rebuttal anyways:

I consider myself a CDT practitioner, but it's been a long and arduous journey to get to where I am (as it has been for many of the people I've chatted with).

Can we as a community bootstrap the next generation quicker onto the correct path of sapient testing? I read a blog one time that said, "I can't tell you what to learn next, everyone is different." But that doesn't mean we can't bring a list to the table and say, "Start somewhere here." When you've progressed far enough with that, move to something in this more advanced list, etc.

As I've been trying to mentor two newb testers in the last couple of years, I've found it hard to just say, "well, read all this stuff," as one of them is not a read-a-book-learner type. I've been struggling with what order they should learn it in. Is there stuff that honestly shouldn't be touched upon until they have some serious experience under their belt? I've come to the conclusion that, as expected, it depends on the individual and their internal drive.

Beginner:
Coaching with Bach, Bolton or Charrett (http://www.associationforsoftwaretesting.org/about/membership/skype-coaching/)
Pick 1 for a month
{
  Session Based Test Management (http://www.satisfice.com/articles/sbtm.pdf)
  Thread Based Test Management (http://www.satisfice.com/blog/archives/503)
  x Based Test Management (http://christintesting.wordpress.com/2013/01/20/xbtm-harnessing-the-power-of-exploratory-testing/)
  Tours (http://www.developsense.com/blog/2009/04/of-testing-tours-and-dashboards/)
}
Rapid Software Testing (http://www.satisfice.com/info_rst.shtml)
Rapid Testing Intensive Online (http://www.satisfice.com/rapidtestintensives.shtml)
CAST Videos (http://www.youtube.com/user/TheAstVideos)
BBST Foundations, Bug Advocacy and Test Design (http://www.associationforsoftwaretesting.org/training/courses/, or if you just want to self-study http://bbst.info/)  
book: Lessons Learned in Software Testing (http://www.amazon.com/exec/obidos/ASIN/0471081124/satisinc)
book: How to Break Software (http://www.amazon.com/How-Break-Software-Practical-Testing/dp/0201796198/)
book: How to Break Web Software (http://www.amazon.com/How-Break-Web-Software-Applications/dp/0321369440/)
James Bach's blog (http://www.satisfice.com/blog/)
Michael Bolton's blog (http://www.developsense.com/)  


Whatever a non-beginner / non-advanced student is called:
Weekend Testing: America (http://weekendtesting.com/america)
Weekend Testing: Australia / New Zealand (http://weekendtesting.com/wtanz)
book: Testing Computer Software (http://www.amazon.com/Testing-Computer-Software-2nd-Edition/dp/0471358460/)
book: Agile Testing (http://www.amazon.com/Agile-Testing-Practical-Guide-Testers/dp/0321534468/)
book: Exploratory Software Testing (http://www.amazon.com/Exploratory-Software-Testing-Tricks-Techniques/dp/0321636414/)
An insane number of blogs, I can't list them all. However a good place to start is anyone who is published here… (http://www.associationforsoftwaretesting.org/blog/)


Advanced:
book: General Systems Thinking (http://www.amazon.com/Introduction-General-Systems-Thinking-Anniversary/dp/0932633498/)
Anything written by Jerry Weinberg
ISST webinars (http://www.commonsensetesting.org/events/) (Don't know where to classify these…as I haven't seen them yet)
book: Tools of Critical Thinking (http://www.amazon.com/Tools-Critical-Thinking-Metathoughts-Psychology/dp/0205260837/
Cem Kaner's blog (http://kaner.com/)

This list isn't all-inclusive; it's what I found or could remember off the top of my head in 17 minutes.

Now I will admit I've been following CDT off and on as a lurker for the last 5-6 years, pulling ideas from the community when I feel there is something for me to learn. So it might be a little unfair to say "off the top of my head in 17 minutes," as really I've been logging a lot of them into my subconscious as they come up.

Last, I want to include these pieces: ISTQB and ASQ. If you haven't personally looked into them, you need to. You need to understand what they are and how they 'certify' you. You can't diss someone or something without understanding it. Sometimes you need to know what isn't a good way to do something in order to recognize the bad stuff and to find a better way to do it.

Devil's Advocate: Where Context Driven Testing fails?

TL/DR: The community of Context Driven Testing is fractured and unclear about how to implement the activity of testing.

Lack of Clarity


I am still debating the title of this article, and will likely do so even after it is published.  The problem isn't Context Driven Testing itself; the problem is the implementations.  For a technical analogy, CDT is like an Interface, something that defines a general contract of how to do things but doesn't provide any meat on the bones of how to implement the philosophy.  The problem is that in order to learn how to implement all the various pieces of CDT, you must go searching all over the Internet trying to find all the data.  When it comes down to it, is that a failing of the Context Driven Testing philosophy, or of the Context Driven Testing community?  I don't really know what to blame, and as someone who identifies with being part of the QA community, I feel a need to point it out.  If only we had a sort of JIRA system.
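
To push that Interface analogy a bit further, here is a toy sketch (not any real framework's API): CDT reads like a contract with no implementations shipped, and every team ends up writing its own implementing class.

```java
// The "contract": CDT tells you what must be considered, not how to do it.
interface ContextDrivenTesting {
    // What matters here, for these people, on this project, right now?
    String analyzeContext(String project);

    // Pick an approach that fits that context; the interface is silent on which one.
    void testAccordingly(String context);
}

// Every practitioner ends up supplying their own implementation.
class MyTeamsApproach implements ContextDrivenTesting {
    @Override
    public String analyzeContext(String project) {
        return "young product, no customers yet, one tester"; // illustrative only
    }

    @Override
    public void testAccordingly(String context) {
        // Maybe session-based exploratory testing, maybe tours, maybe something else;
        // the contract above gives no guidance on this choice.
        System.out.println("Testing " + context + " the way we figured out by trial and error");
    }
}
```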

Don't get me wrong, I do think we have a lot of data to offer; I just think it is not laid out in a simple pathway.  There is no college-like course structure or even a choose-your-own-adventure system.  However, you might not believe me, so let's start at the beginning.  Context driven testing does say a few things, but it could be summed up in two words: "It depends."  It clearly labels itself an approach:
Context-driven testing is an approach, not a technique.
With that in mind, it becomes all too clear that CDT is not meant as a specific set of methods or patterns but rather the idea that you're smart, so you figure it out within your context.  It is a little like going to a counselor and saying "I'm angry all the time, what do I do?" and them only asking why you are angry, without offering any suggestions.  It might help, but it sure seems to be a roundabout method.

No Patterns?


I don't disagree with the idea that things depend on context, but I do sometimes wonder if there aren't some context-dependent patterns.  That is to say, I do wonder if there is a sort of mapping of context-to-effective-practices or if literally every situation requires different analysis.  In either case, the current answer context driven testing gives is to consider the situation and create tests based upon that.

This leaves us with a sort of mob-rules attempt at QA where you can do any activity and call that testing, as long as the product solves a problem.  Now I've peeled away my first layer of the onion, and I expect someone to immediately argue with me that my claim is untrue.  In fact, there are many, many sources of data, from James Bach to Cem Kaner to Michael Bolton, ad nauseam.  And you would be right; these are all CDT advocates, some of whom founded the ideas.

I can respect that idea, but now we are applying different people's ideas to the problem space using rough analogies, or giving examples by inserting their context and how they react.  That is fundamentally a scattershot approach, with variable quality between different bloggers and different blog posts, not to mention the fuzzy aspects of trying to apply someone else's context to your own situation.  If there isn't even a mapping of contexts to effective practices, how can you ever be expected to know what is a good method of testing?  Worse yet, even if these can be used as a map of context to ideas, there is no central repository of knowledge.  You have to search for these ideas and hope your picks turn out well.

Frameworks


But wait, what of the various frameworks given?  These frameworks were developed around the idea of CDT, like Session Based Test Management, Thread Based Test Management, and Tours, which are meant for the more detailed work.  I'm not saying those don't exist, but I don't think there is a very good way of knowing which is the right choice.  There isn't a particularly strong heuristic map, which means that someone coming into the industry just has to keep trying these until they find what works.

Tours are perhaps the most specific of the set, and perhaps the exception, as I have seen some built-in recommendations in the naming and descriptions.  The Money Tour is meant to check the features that the customers bought the tool for.  However, once you think a little about it, it seems like that is the useful area to test, so why worry about most of the other tours?  Cem Kaner talks of how researching which tours are useful is a bad option from a context driven perspective.  He asserts that you need to know your developers in order to know what sorts of errors might matter, along with liability.  You might also need to understand your objectives in order to consider which tour to do.

So let's consider the sorts of questions Kaner would want us to consider.  What does the company want?  Well, in many cases, I think the company looks to QA to say what it wants, or it says what it wants but really gets it wrong.  In reality, QA often pushes the agenda, particularly when the company is young in QA.  From my experience, young companies also don't have much money, so it might be reasonable to expect they would hire inexperienced QA, making for a terrible combination.  What would the customer want?  Often these young companies are creating their product with the hope of getting customers, so who knows?  Should I still use the Money Tour using what we expect our customers to care about?  Even if I did assume I understood my context, how do I evaluate the success of a given tour when I don't have much idea of what matters?

Lack of Specific Direction


Ultimately, I feel that Context Driven's layers of abstraction make it impossible to learn in any quick fashion without meandering around the Internet, reading lots of different opinions and trying lots of ideas until maybe, just maybe, you find a method that works.  Next thing you know, you change jobs with your new experiences and you now have "Best Practices" which you might claim come from the Context Driven Testing philosophy.

It's like someone saying you should use pairwise testing because it always helps pare down the parameter combinations.  Yet what parameter values you choose matters more than the strict number of tests you run, even compared with random sampling (which, based upon my reading of Bach's paper, only takes roughly 2x the tests of pairwise to reach equal coverage).  Sure, Context matters for the choice of the parameters, but can there not be some 'Good Practices' we can push, based upon some reasonably high-level contexts?  Like, oh, I don't know, say in an application that deals with customer purchasing, customer purchasing patterns might be a decent area to choose for your parameters?  But wait, what if your context....  *le sigh*
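
For the record, here is what pairwise buys you in the small, as a toy sketch with made-up parameters: three parameters with three values each have 27 exhaustive combinations, but the nine rows below cover every pair of values, and the check at the end verifies it.

```java
public class PairwiseExample {

    public static void main(String[] args) {
        // Hypothetical parameters for a purchasing app: payment type, customer type, shipping.
        String[][] values = {
            {"card", "paypal", "invoice"},
            {"new", "returning", "wholesale"},
            {"standard", "express", "pickup"}
        };

        // 9 rows (an orthogonal array) instead of 3 * 3 * 3 = 27 exhaustive combinations.
        int[][] tests = {
            {0, 0, 0}, {0, 1, 1}, {0, 2, 2},
            {1, 0, 1}, {1, 1, 2}, {1, 2, 0},
            {2, 0, 2}, {2, 1, 0}, {2, 2, 1}
        };

        for (int[] t : tests) {
            System.out.println(values[0][t[0]] + " / " + values[1][t[1]] + " / " + values[2][t[2]]);
        }

        // Verify: every pair of values across every pair of parameters appears in some row.
        boolean allPairsCovered = true;
        for (int p1 = 0; p1 < 3; p1++) {
            for (int p2 = p1 + 1; p2 < 3; p2++) {
                for (int v1 = 0; v1 < 3; v1++) {
                    for (int v2 = 0; v2 < 3; v2++) {
                        boolean found = false;
                        for (int[] t : tests) {
                            if (t[p1] == v1 && t[p2] == v2) {
                                found = true;
                                break;
                            }
                        }
                        allPairsCovered &= found;
                    }
                }
            }
        }
        System.out.println("All pairs covered: " + allPairsCovered); // prints true
    }
}
```

The point of the post stands, though: the table only helps if "payment type" and "customer type" were the right parameters to vary in the first place.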

Doctrine or Methodology?


As one final aside, I ponder if this might be not unlike a religious war based upon whether you want your bed made or not.  Some people really want everything neat and tidy, with QA having a very specific role, a specific set of guidelines that are neat and orderly.  On the other side you have the people who say that our role is so impossibly different based upon organization that we can just barely define the activity at a high level, but can't define what activities should be done.  Or if you'd rather, you can think of it as Linux, with its diverse set of implementations, compared to the more mono-culture of Windows or Apple.  Ultimately, I wonder if there is something in the middle, a sort of useful set of norms in certain settings that can be broken, but only when you have good reason.  As an example of what I'm interested in, let me quote from Kaner's tour post:
Rather than looking for a few “best” tours, I think it would be more interesting to develop a more diverse collection of tours that we can do, with more insight into what each can teach us.

NOTE: As an aside, I had my co-author, Isaac, read my post, and he noted I left out Rapid Software Testing, which was unintentional.  I simply haven't studied that form of CDT, which speaks to a big part of this post's point: there is no central repository of knowledge for CDT testing.

Tuesday, September 17, 2013

Introduction…a little late.

tldr: Red hair, chops, boots, attitude…

Currently I am JCD's manager and previously managed Veronique.  I have built multiple testing organizations, as well as doing my own test work.  My focus is on being a jack of all trades.  I can write automation, deal with C-levels, influence developers, run manual regression, create load tests and lead intelligent people.  I do an awesome job on all of them, but I do depend on my team of rockstars to deal with whatever technical challenge we come across.

Part of the reason that I started blogging is to increase my proficiency at writing. I'd like to extend my sphere of influence from merely the people that I've worked with or hired to something of a more epic scale. I require large goals in order to satisfy my small ego. I'm constantly amazed at most people's willingness to do the bare minimum and not improve themselves. Internally, I'm driven to do things the best that I can at the moment.

I started testing cause coding 8 hours a day sucked. Having the ability to code / test / research / choose my own priorities is critical to my success. I believe there is no "one true way", and I value results over ceremony, metrics, paper shuffling and acting busy.

I have found value in context driven development and while I don't require my people to follow it, I do expect them to be able to talk about it with some intelligence.  I hold high standards and tend to be a little brash in my language.  I expect people to be intelligent enough to research things they don't understand rather than needing to be spoon-fed links.

I have lived in my little valley for my entire life and have slowly learned from our diverse community of development scenarios….You may see me around as Isderf, my long time handle.

BLAH
Like JCD, I need to say my views do not represent any company I have worked for, past present or future.
/BLAH

Thanks to JCD for ghost writing a lot of this, as I hate talking about myself without several scotches…

Friday, September 13, 2013

My interviewing start and the changes I've made.


tldr: Change in how you interview is inevitable.

I started interviewing at my first job. It was a gig with a small company (<20 people) and I was the Engineering Intern. No, I still don't know what that meant; I just did everything anyone asked me to, from hardware to software to fetching donuts. I was included in interviews there, but not as a participant; I was asked to only be an observer. They were very traditional. What tech do you know? Do you know this specific tool? And not much more than that. If everyone got along with the person, and they had decent answers, they were hired.

I've moved through several companies now and moved from observing to influencing to being the final say about who gets hired. Initially, I started with the traditional questions, but they resulted in people that didn't always work out. Then I started to experiment with hiring. What questions should we ask? What answers should we expect? I'm grateful to the employer at the time who allowed me the freedom to experiment with how to judge which people to hire.

While this data doesn't cover all of my hiring, it is an 18 month period where I kept fairly detailed records while I was experimenting heavily with interviewing.

  - 250-300 phone screenings (don't remember an exact number, as there were untold tons of them)
  - 50 people made it to a physical interview
  - 1 person disappeared during the interview process (moved to Texas I found out)
  - 25 people made it to a second interview, 2 of those people jumped straight to 2nd interviews as they came from recommendations by people I trusted, or I had already directly worked with them.
  - 19 people got offers
  - 3 people declined
  - which means after interviewing ~300 people, we hired 16 people.
  - 4 of those people didn't work out.

I'm not giving you this data to tell you how bad or good I was at interviewing, but to allow you to see what the process was like for me. I had to phone screen every resume (300, ~150 hours), talk with 50 people once (minimum 25 hours), set up a team of 3 people to talk with another 25 (minimum 150 hours, I count prep time and debriefing time), then extend offers, then worry and wait. 325 hours to get 12 people that worked out. That's three quarters of a week to find each person.

What is the point of an interview? To find the most qualified candidate for a given job. Or at least, for now, that's the definition I'm going to use.

So at first I started with questions like those I had observed. Why do you want to leave your current employer? Do you know Agile? Do you know C#? Write me an algorithm in some language. Plus a host of other tech skill questions. Over the course of an interview I'd get a feel about someone from body language, terms and sentences they used, and then extrapolate information from the way they answered.  This system worked semi-okay for 1-2 years, but about that time I noticed that I really didn't care 'what' they answered, I wanted to hear 'how' they answered. I was just asking questions to get them talking, trying to get them to talk long enough to understand how they thought.

That's roughly when I started to change up the game. It took about 6 months of on and off introspection to come to the next level. Why was I using these questions? What was I really looking for in these people? After reading several books (Strengths Finder and Good to Great) I had a more solid idea of the traits I wanted to see in people: character, work ethic, intelligence, responsibility and values. Observing people I knew that were good testers, and people I knew that were bad, I clarified my definitions of those traits. Then I tried to design questions about those traits that I wanted to investigate in each interviewee. 

It's not that I don't ask tech questions anymore. Merely that tech questions are not my first-level weed-out questions. I have found over and over again that personality and mentality are the most important parts. It's nice if you have tech skills, but I can judge your ability to learn tech skills independent of your personality and mentality at a second interview. And I'm more likely to hire someone with the right mindset and little skills than I am someone with the wrong mindset but the right tech skills.

*****************
As an example of how I would change up interviews (this occurred just today):

As I was sitting in the bathroom, I noticed that this plastic piece had shimmied down the hinge pin and the lock was wiggly. I pulled out my swiss army knife and moved the plastic piece back into position and tightened the 2 screws holding the lock. After I did this it occurred to me, is that normal behavior? Do people just fix things when they notice them broken, or do they just complain to someone else?

So here's my new thought on how to change my current test interview: put something in the room that anyone would want to fix. Something easy to fix, but not so weird that someone in an interview wouldn't want to do it for fear of looking strange. A marker with a cap off is my first thought. Do they notice it? Do they fix it? Does that make them proactive?


So I brought this test to my crew of testers. They of course shot me down, like all good testers would. What does being proactive prove? That they have OCD?
Probably the best rebuttal of this came from Wayne: "talents and(sic) patterns of thought ... you(sic) can't determine a pattern from one event".

Then JCD piped in with "Is that as much cultural behavior(sic) as otherwise? Some people think it is better to be polite, or maybe that is some sort of test tool you will use later?"


I'll still be using the marker test in my next couple of interviews. However, I won't use the data I collect from it. It might end up being useless, it might be an indicator of something I wasn't looking for, who knows.

Lastly, I'd like to point out that this interview thing is a constantly evolving subject. Finding good people is difficult and making it easier is a goal of mine (short of just opening a training school). Lately, I've dived into blogs about hiring and have 2 new books by Lou Adler that I want to read on the subject. Stay tuned...


Online Books I

I'm being somewhat lazy with this, but I thought I'd list out a few online books I have read or at least partly read in the last few months.

The first is an interesting look into the world of WordPerfect.  While there are a large number of issues I have with the author's philosophy, I did appreciate his candor and it gave me some insight into management.

In part of the book, the author speaks of the training he was giving to managers who were both internal to the company and external.  From the picture the author paints, and from his point of view, he was the sole responsible 'owner' of the three running the company.  Often he seems to indicate he felt he was not teaching but rather allowing things to go every which way without enough order.  His goal, and one of his ideals or guiding principles, was to teach people how to act, and they would then do the right thing.  My personal read is that he seemed to have a very "Type A" personality.

At the point in the book from which this excerpt is taken, the company was growing explosively and WordPerfect was quickly becoming number one in the field.  Here is what he told his managers:
WordPerfect Corporation was not a platform for personal achievement, a career ladder to other opportunities, or a challenging opportunity for personal improvement. The company did not put the needs of the individual ahead of its own. The company was not concerned about an employee's personal feelings, except as they related to the company's well-being.

WordPerfect Corporation was not intended to be a social club for the unproductive. While other companies might condone many personal or social activities at the office, ours did not. Things like celebrating birthdays, throwing baby showers, collecting for gifts, selling Tupperware or Avon, managing sports tournaments, running betting pools, calling home to keep a romance alive or hand out chores to the children, gossiping or flirting with co-workers, getting a haircut, going to a medical or dental appointment, running to the cafeteria for a snack, coming in a little late or leaving a little early, taking Friday afternoon off, and griping about working conditions were all inappropriate when done on company time. Even though these activities were condoned by many businesses across the country, we felt there was no time for them at WordPerfect Corporation.

WordPerfect Corporation was also not an arena for political games. A good old boy network method of trading favors inside the company to get things done was frowned upon. Kissing up, back stabbing, and seeking for power and position were inappropriate. Making decisions by compromise, the politician's favorite tool, was not acceptable.

Finally, WordPerfect Corporation was not a "New Age" company. We were neither employee-owned nor a democracy. We were not primarily interested in focusing all our attention on either the employee or the customer. We did not feel it appropriate to check an employee's body fat or prescribe a diet or exercise program. We were not trying to stay in step with current business philosophies.
Now to be fair (in pulling this quote out of the air), he had never run a particularly big company before and he did get advice from the only place he knew to get advice from, his father.  It is a really interesting account and worth a glance at the very least.  This is not because I feel his advice to be particularly good, but because often people develop their management ideas on their own rather than gaining the knowledge by working their way up.

Since most companies grow from a small number of people, led by an owner/manager who might know how to run a small company or might have a great product but has not necessarily studied management, I felt I gained some insight into where their ideas of management come from.  In this case, religion, a large heap of work ethic and a father who had a great deal of influence seem to be the largest factors.  I'm not saying there isn't value in those things, just that when you say, "What were they thinking?" sometimes the answer is not all that complicated.  This also means that when you argue about things using logic, sometimes you will fail to convince someone to change, as their ideas come from fundamental bedrock concepts in their life.  In those cases, you probably can't get them to change and the answers are:

A. Accept it.
B. Attempt to get that person removed.
C. Change companies.

At WordPerfect, it sounds like the other two owners elected B.  For better or worse, WordPerfect declined, was sold and is now barely surviving.  I'm not so sure it is related to that choice, but who knows, maybe that sort of culture shock was too much at a delicate time.

On a very different topic, I thought I would talk about a technological 'ebook' I read (parts of) recently.  While I do 'know' JavaScript, I'm not an expert.  I write automation that is usually in Java or C#.  So when I have to go into the 'lower levels' of a browser and use JavaScript, I do find it handy to be able to look up the particular details of how it works.

Sometimes Google or Stack Overflow work well, but other times I need a more general look, and for those cases I have found Eloquent JavaScript to be of value to me.  I don't really have any useful quotes to pull from this book as I think each person will find different parts useful based upon their own level of knowledge in both JavaScript and programming in general.  If you need to study up on JavaScript and don't want to wait (or pay) for a book, this is a decent choice.  If you are interested in a paper variety then this isn't for you.

Tuesday, September 10, 2013

Word of the Week: Trilemma

Sometimes when reading, I run into interesting words.  In this case I found the word "trilemma", which, like a dilemma, is about a difficult choice.  The only difference is that tri literally means three options, whereas a dilemma, at least in common usage, covers an uncounted number of options.  I sometimes randomly look through wiki entries to see what additional data I might gain and to think critically about a subject (E.G. keeping the saw sharp).  If you want to read this, you will need to have at least glanced at that wiki link, as I will reference it multiple times.

In the wiki they seem to cite multiple different styles of usage.  The computer-oriented example they give is the very well-known "Good, Fast, Cheap, pick two" phrase in reference to computers.  This doesn't sound exactly like a trilemma to begin with, but rather a dilemma of some sort, where you can choose two out of a set of three.  Then, when you think more deeply about it, there are three groupings you can pick, each bad in its own way: "Fast-Cheap", "Fast-Good", "Good-Cheap."  This makes it a trilemma and now you can label that triangle.

Another style of this is the Apologetic trilemma, roughly simplified that Jesus is either a "Lunatic, Liar, or Lord."  I don't want to comment on the merits of the argument other than to note that it appears only like a trilemma because the list of options was artificially limited to three.  Where is Legend, Lithographic error or any of the other possible choices?

As the last "style" of trilemma I want to talk is the trilemma in law example given.  Basically it is 3 singular choices, each of which is direct and unique.  Either you can swear to something and lie, keep silent and go to jail or tell the truth and go to jail.  This is the easiest to identify and it doesn't appear there are any other options.  That means it really is a trilemma.

Ultimately, I just enjoy pulling out my thinking-error bingo cards when reading a wiki page, not because I think Wikipedia is intentionally misleading, but because sometimes the authors' perspectives are limited and concentrated on the work at hand.  QA is a second (or Nth) set of eyes to consider a topic.

Friday, September 6, 2013

A subject of the hiring process...

Independently of Isaac (as in, he didn't know I'd do this), I was writing a blog post on the hiring process.  However, now that he's done so first, I'm going to use his post as a thing to reply to, as I feel I have something to add.  In particular, I'm going to talk about the experiences I have had hiring, being hired and being semi-reinterviewed in the 10 years I worked near/with/for/against/whatever with Isaac.  I'm editing out some of the experiences to protect other people (who interviewed really badly) and to make this as much 'my personal view' as I can.  So let's start at the beginning...

The first time I met Isaac, he was leading a different team, in the same lab, working very different hours (testing in shifts), but I saw him once a week or so.  Now I don't know who came up with the idea, but they decided to do 'testing' for the testers in the lab to see where their strengths were and where their weaknesses were.  Each test lead wrote questions and tests were given to about 100 employees.  I believe I lost points on 3 questions.  The one question I recall being bitter about was "You have a screen with 8 widgets, 2 buttons (cancel, save), and each widget has 4 states.  How many tests would you run?"  I worked for a small look-and-feel group and so I answered "1".  I backed this up by the fact that I'd run one automated test that checks all of the UI positions of the elements, that all the elements were standard controls and that no text was truncated in English.  That is exactly how we were writing tests at the time.  I also noted that when more languages showed up, I'd write more tests, one additional for each language.  Needless to say, I was marked down on that answer because they wanted a total number of states you could test.  I note this because I think it is a mistake to attempt to "Grade" your employees without really considering what they do and their context.  In other contexts, my answer would have been different.  If I recall right, Isaac specifically said it was a stupid idea, and that if the leads didn't know their own people's strengths, that was a problem unto itself.

A few years later, I am looking for a new job and I interview at the first place I had gotten an interview opportunity with, and there sitting on the other side is Isaac.  We chatted about the old days for a few mins and what I had been doing since.  He mentioned another guy I had worked with at HP was working there.  At some point, he started in on the interview.  He asked me to test a login page, which was odd considering most people asked about testing a pen.  Isaac did take notes, and I did try to look at them, but I don't recall actually seeing anything but sort of set of check boxes with skills.  SQL came up at some point, and keeping in mind I had mostly been doing black box hardware/software testing, SQL was not something I had done day to day.  I explained this but noted I had played with a very light version of SQL for an open source project.  I noted it didn't support stored procedures, so I didn't know how they worked, etc., but I could do a simple select/insert/delete/update.  He had me demonstrate it and I think that was the end of the interview.

I won't go into the second interview other than to say it was a round robin with 5 guys in a 10x10 room with a table in it.  Yeah, not the most fun I've had and maybe worth a different blog post.  Oh, and I was hired.

Isaac continued to do all the first interviews, and so I didn't get to experience much of his technique or the people he said no to.  However, I can speak to the people we had a second interview with.  Keeping in mind that the average time between hires was 9-16 months, we did a lot of interviews.  I will mention only the style and some highlights.  We used a round robin technique and each of the QA Engineers would ask a question, with occasional follow-up questions.  Some people had signature questions, like "You have a 2 gallon bucket and a 5 gallon bucket and you need to get 3 gallons of water..." or randomly picking a phrase from the resume and asking about it in detail.  Isaac did not do those things as best I can recall.  Isaac mostly asked questions to see if he could find any value the person might be able to give.  Ask an automation question.  They fail.  Okay, what about critical thinking...?  What sorts of learning do they do?

We had almost all duds, and on occasion I do recall the "Why did you okay this person?" looks.  More often than not, the candidate was meh enough that I can see why Isaac hoped they might study up, but they didn't.  On rare occasion, we found someone who just bombed the second interview after doing well on the first with no reason for the change.  As interviews are just as much about 'keeping the wrong people out' as they are about 'getting the right person', we might have missed a person or two who was good but had a bad day.

I should note, I write this with little fear that someone will use these notes in an interview, as I would think it a good thing that they read blogs if they prepped that way.  I see all research as valid preparation for an interview, up to and including asking someone internal about the position.  That being said, I don't plan on giving all of our specific interview questions here, in part because I change up my questions based upon the candidate and how well they do in some area.  Sorry for the digression.

After the round robin, we would all think a little about it and sort of talk through the good and bad points.  If anyone said no, it was a no.  There were only one or two times when only one of us said no.  We did all take context of the position in mind, but we also all were aware that times change and the person we interviewed had to be able to adapt some.

Since then, I think the interview process has changed a little based upon the company and how practical it is.  That is to say, in a company with 30 testers, only 5 or so might actually interview the person.

I do want to make some additional comments, things that don't fit well into the narrative, but are equally useful.  I have on occasion looked at Isaac's interview notes, and sometimes I find interesting little bits I missed.  One of the big ones is 'temporal' observations, like "Hesitation" or "Long Pause", which he will note, showing that someone thought about their answer.  He also would write down weaknesses to observe whether they improve in the next interview.  I never get to see if that happens, so often when we consult about a person, we depend on Isaac's input of 'what improved'.  Sometimes we would consult Isaac before an interview about the person, but not always.  I'm not sure if the data is worth the bias or not.  Last but not least, sometimes LinkedIn or Google is consulted about a person to see what they want to share.  I have never used Facebook or any social data for an interview and, as far as I know, nor has Isaac.

In perhaps a part 2, I may go over how I conduct a second interview rather than the structure.

Thursday, September 5, 2013

Testing the hiring process for testers...

 tldr: Hiring is hard.

  Having now interviewed hundreds of people for testing positions, I can say, "hire the people you, or someone you trust, know." Learn to adjust who you trust based on interviews you do, and the people they recommend. Occasionally you'll get a diamond in the rough: you'll interview someone, walk out saying "please stay here, I'll be right back", and off you run to get an offer letter made quickly. More often than not you will walk out saying "another 30 minutes lost." It's all the people in between that are the hardest pieces to sort, people that are pluses on some things but minuses on others. Which pieces are more important? Can someone be a plus here and does that offset the minus over there enough to hire this person? If you're on the fence about a hire, say no, trust your instincts. I think this one piece has saved me more than anything else.

A good testing person is hard to identify.
   I've always been a 2-stage interview person. During the first interview, ask some personality questions, some company culture questions. See if they are even going to fit within your company / team. Then I dive into tech questions until I get 2-3 "I don't know" answers; this is where I end most first interviews. In the second interview I open with the 2-3 questions they missed; if they didn't research them or have a decent answer the second time, that's a major strike against them.
  I've also tried the more overt tactic of telling an interviewee who I thought was good enough for a second, "Here are two topics, I would like you to research one and we'll talk at your second interview next week." The first person I tried this on blew my mind with what she was capable of learning (over a weekend, due to a miscommunication with HR). She learned both topics, and to a depth that I'll admit I wasn't able to verify was correct. We hired her immediately, which is what might have set me up for a serious failure with the next person to get this same test. The second person came back the next week and gave me nothing. They didn't even have excuses for why they didn't learn anything. I was more than disappointed, I was dumbstruck. Here I had given this individual the perfect chance to shine and they had slapped me in the face. I stopped performing this overt tactic as it caused me too much stress to think that people like that existed.
  Over time the questions have evolved from technology to personality, intelligence, self-motivation and open-ended questions. I have written across the top of my interview sheet "Ask the question, then shut up, Isaac". I've had so many people impress me with their answers to simple questions, and just as many burn themselves when I just listen. Questions like:
  "Tell me about the most memorable bug you've found."
  "Describe a difficult co-worker and how you handled it."
  "What causes bugs?"
  "How have you improved your testing skills in the last 3-6 months?"
  "How do you prefer to communicate to developers / Product Owners / management?"
  "Describe a tool you use, and how you would improve that tool." 
  Now, none of these questions is better than the others; they all fit different people and different situations. Sometimes I have to ask a variety because someone is clearly not interested in one or two of them.

Hire for the position:
  If you are hiring for a senior-level position, the person who is going to fill it had better be able to be shown a problem, left alone for 1-3 months, and when you come back, *poof*, results. If you are hiring for a junior-level position, remember YOU are the one responsible for showing this person the ropes and responsibilities through a mentor; and that's you or someone you assign to do that.
  Don't forget individual team dynamics. Some teams are uber-quiet get-work-done types, bringing in a loud obnoxious likes-to-talk person probably isn't the best idea.

Test the process:
   Keep the resumes of everyone you hire, along with your notes on what they said when you interviewed them. That way, as time progresses and you find out who the really great testers are, you can look back and see what they answered for the questions you asked. The same goes for people who turned out poorly; you can look for patterns in how they answered. If your people really, really trust you, re-interview them with new questions and get a feel for how your rock-stars answer these types of questions. Do this with caution, as most people will freak out if their boss asks them to re-interview.

A team of interviewers is required:
  While I generally perform the first interview, no one gets through the gate without at least a second set of eyes. I try to get at least three other sets of eyes on a candidate, depending on position, seniority, skill set and any number of other attributes. 
  This sank in the first time I passed someone through a first interview, only to have a squad of people I trusted walk out of the second interview, none of them willing to look me in the eye. I knew immediately that they wanted to ask, "How the hell did this guy get this far?" Truth be told, I still think the guy was a decent candidate. However, with three current employees I trusted saying "No", I couldn't ignore them, because I had asked for their opinion.
  With a team of interviewers it's also handy to have a code phrase that instantly ends an interview. When the phrase is said, you allow the candidate to ask some questions so they feel comfortable, then end the interview. That way no one wastes time on someone who is a hell no. This practice is mainly for those who are uncomfortable saying, "Look, clearly you don't fit, thanks for coming in, goodbye".

Know when to break the rules:
  Yes, I've ignored my own rules as I've laid them out, but I've hired over 50 people now. I've learned when a position I'm filling can take a certain amount of risk on a new tester who is going to require lots of training. But that was always explained to the candidate while making them an offer. Yes, some of them got offended because they had "5 years experience" which amounted to nothing for me. Not because 5 years is nothing, but because their 5 years clearly wasn't spent on what I needed them to know.

  Apologies to all those people who have endured my interviews. Mainly to those who I have hired and then used as guinea pigs to find the next level of great testers.

Wednesday, September 4, 2013

Pex, a GTAC presentation

Having watched the presentation about Pex from GTAC 2013, I thought it was a rather interesting technology.  The basic idea is that you tell Pex to generate test cases for a particular method and it will find bugs in your implementation.  It tries to be relatively exhaustive within equivalence classes.  It really is only meant for unit testing, however I could imagine it being used to generate interesting test cases on a wider basis.

The interesting thing outside of the particular testing technology is that it can also be used as a game, a training tool or for interviewing.  In particular, you can take a previously created secret implementation and ask someone to re-implement the solution.  Since Pex knows how to test compiled .net code, it just finds interesting values based upon the secret implementation (i.e. the oracle) and then sees if the output the secret implementation creates equals the output your code generates for the same value.  If they are not equal, your implementation is wrong.  Since you can retry almost instantly, it becomes relatively easy to generate possible solutions until your implementation matches that of the secret implementation.  Having tried a good number of the challenges, I must say my current favourite is making English words plural.  While this isn't exactly how I generate automated test cases, it certainly made me think about other possible input values I should be testing.
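
To make the game concrete, here is a rough sketch of the idea in Java (Pex itself targets compiled .NET code and explores inputs far more cleverly than a fixed list; the method names and the "secret" rule below are entirely made up for illustration):

import java.util.Arrays;
import java.util.List;

public class OracleGameSketch {
    // Stand-in for the hidden reference implementation (the oracle).
    static String secretPluralize(String word) {
        return word.endsWith("s") ? word + "es" : word + "s";
    }

    // Stand-in for the player's attempted re-implementation.
    static String myPluralize(String word) {
        return word + "s";
    }

    public static void main(String[] args) {
        // Pex would generate these probe values itself; here they are hard-coded.
        List<String> probes = Arrays.asList("cat", "dog", "bus", "glass");
        for (String w : probes) {
            String expected = secretPluralize(w);
            String actual = myPluralize(w);
            if (!expected.equals(actual)) {
                System.out.println("Mismatch for '" + w + "': oracle says '"
                        + expected + "' but the attempt says '" + actual + "'");
            }
        }
    }
}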

Let me know in the comments if you solve it and, if you want, post your own solution.  I have a solution, but I'll resist posting it for now.  At some point in the future (maybe a few weeks from now), I'll post a comment with my solution.

Tuesday, September 3, 2013

Strings? Those are easy!

Let me start with The Problem: Create random Strings, including all UTF-8 characters.

These are not exactly problems but rather testing requirements. I need to be able to test my assumptions using some sort of oracle to determine the number of bytes in a particular element of a String. The tools I have are IntelliJ, Java and the Internet.

First things first, let's research UTF-8; what is it and how does it work? The wiki article is really useful and, having read Joel's blog entry on Unicode, I think I can define it in a simple, if slightly inaccurate, way. It is a superset of character sets using an addressing system that can go up to 6 bytes. Those other blogs can hit the deeper details.
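
Jumping ahead a little, here is one quick way to see that variable width in practice (the sample characters below are my own picks, not from those articles):

import java.nio.charset.StandardCharsets;

public class Utf8Widths {
    public static void main(String[] args) {
        // 'A' (U+0041), 'é' (U+00E9), '中' (U+4E2D) and U+1F600 encode to
        // 1, 2, 3 and 4 bytes respectively in UTF-8.
        String[] samples = {"A", "é", "中", new String(Character.toChars(0x1F600))};
        for (String s : samples) {
            System.out.println(s + " -> " + s.getBytes(StandardCharsets.UTF_8).length + " byte(s)");
        }
    }
}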

Now a bigger question is how does it work in Java? Well, that is also a complex set of technology which I thought I didn't need to know about. So I tried a simple little program (that I have simplified even more):

//generateString (the original, naive version -- kept as-is for the story below)
StringBuilder sb = new StringBuilder();
for (int i = 0; i != length; i++) {
  // Pick a random value in the configured range and cast it straight to a char.
  // This cast is the step that later turns out not to handle values above 65535.
  sb.append(Character.toString((char) (random.nextInt(characterRangeMax) + characterRangeMin)));
}
String testValue = sb.toString();

It seemed to work and I thought I was all good. Now granted I also was using a separate API for most of my random strings, but this was for cases where the API I had didn't work. However, like all software development, requirements change. I was asked to solve a new problem...

The Problem v2: Create four byte Unicode strings randomly.

The thing is, problems are rarely singular. Let me see what else might be a problem:
  1. How do I verify a string is in fact four bytes?
  2. How do I create mixed byte characters?
  3. How do I get the hex version of the character? For that matter, can I test with the Unicode (UTF-8 in this case) hex value?
Since I knew I could do a range and I had some idea of what the starting integer of the range should be, I thought this should be easy as pie. All I had to do was verify the byte length. I happened to know that URL encoding converts each byte of a non-ASCII character into a percent-encoded triple (%XX, up to %FF), so all I had to do was convert a single character and look at how long the URL-encoded string was: at three characters per byte, a four-byte character encodes to twelve characters. So here I go again, but this time I'm going to pseudo code it:

int length = 1;
// Start at 33 to skip ASCII's control characters.
for (int rangeMin = 33; rangeMin < Integer.MAX_VALUE; rangeMin++) {
  String s = generateString(length, rangeMin, rangeMin + 1);
  // A four-byte character URL-encodes to twelve characters (%XX four times).
  if (URLEncoder.encode(s).length() > 11) {
    System.out.println(s);
    return;
  }
}

Guess what... 15 mins and I got NOTHING. Gulp. Ummm....? I know I got Unicode characters...? I got all sorts of Chinese characters, so... why you no work? Ok, well I guess I best study Java's underlying Strings...

Having now studied Strings in Java for some time, I have come to appreciate some of the leakiness of the abstraction. It seems clear to me now that Strings are a compromise between keeping up with modern times and keeping compatibility. In particular, I would say Java seems to have sided with compatibility. So when you create a string, say "Hello World", what Java basically does behind the scenes is create an array of characters known as "char"s. These chars each point to a specific character using an integer value. The problem is chars go from 0-65535, but Unicode has roughly 110,000 characters. So if the number is too big, Java internally uses 2 chars (a surrogate pair), something my code did not handle. What I was doing was casting a number to a char, which means 65536 would become the value 0 (I believe; this is untested). Instead I wanted to do this:

String character = new String(Character.toChars(i));

This creates a set of chars that combine into a single displayed character. The good news is that it accomplishes what I desire; however, for my testing it does create a few interesting side effects. If you see the rough guide, you will start to see that a String's length doesn't even make sense any more, because it counts not what is displayed but the number of 'char' values in the array that makes up the string. Also, the open source library we use doesn't really take this into account. Or perhaps I should say it does, but not in the way you would likely think of it. They generate a set of chars and convert them into a string. From reading their code, it appears they don't take into account that the word 'length' has 3 different meanings, so when you ask for a String with a length of 10, they will provide you 10 chars, but it might only have 5 displayed characters, depending on the values it generates.
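
To make those different meanings of 'length' concrete, here is a minimal sketch (U+1F600, an emoji, is just an arbitrary example of a character above 65535):

import java.nio.charset.StandardCharsets;

public class LengthDemo {
    public static void main(String[] args) {
        // The naive cast wraps at 2^16, confirming the suspicion above.
        System.out.println((int) (char) 65536); // prints 0

        // Character.toChars builds the surrogate pair instead.
        String s = new String(Character.toChars(0x1F600));
        System.out.println(s.length());                                // 2 chars
        System.out.println(s.codePointCount(0, s.length()));           // 1 code point (one displayed character here)
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4 bytes in UTF-8
    }
}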

This means I have to create my own random generator, but that is a blog post for a future time. However, I still have a few things left on my initial list. Generating mixed character sets is relatively easy now. I can create 4-byte characters as long as my random number is high enough. I can test the number of bytes a character has via my URL-encoding method. I can generate characters via the convention "\uXXXX", which will create a Unicode character. This leaves only one question: how do I convert a character back into the hex-style string? To be honest, this was less important, and it turns out from the research I did (admittedly limited) that this is difficult. Since it is difficult and less important, I followed the 80-20 rule and skipped that step.

First Level Abstractions

The Situation:

When I was much younger, I had written a simple program that was designed to have a couple of text characters (as in 'people' as well as literal ascii letters) on the screen who moved upon a particular event.  I had another character whom you controlled using the keyboard.  The character you controlled, the good guy, was chased by the 'AI' of the other characters, the bad guys.  The characters would all chase down the good guy, but for some reason, if the good guy was in a certain location relative to the bad guy (say he was below and to the left of the bad guy), the bad guy failed to chase him.  The system used an x-coordinate and a y-coordinate to keep track of the location of each bad guy, and one set of coordinates for the single good guy on the screen.  Now I want to give a little pseudo code to give you an idea of what this looked like.

The Code:


if(badguy[1]_X<goodguy_X) {
  if(badguy[1]_Y>goodguyY) {
    //... do something that is a bug.
  }
}
if(badguy[2]_X<goodguy_X) {
  if(badguy[2]_Y>goodguyY) {
    //... do something that is a bug.
  }
}
if(badguy[3]_X<goodguy_X) {
  if(badguy[3]_Y>goodguyY) {
    //... do something that is a bug.
  }
}
if(badguy[4]_X<goodguy_X) {
  if(badguy[4]_Y>goodguyY) {
    //... do something that is a bug.
  }
}
if(badguy[5]_X<goodguy_X) {
  if(badguy[5]_Y>goodguyY) {
    //... do something that is a bug.
  }
}

What I Knew:

Now keep in mind, my game used loops and arrays.  My main game loop was something like: do { /* game */ } while(key!='q');

So clearly I had some understanding of loops, but I didn't get how loops and arrays could work together.  I understood that "I could just fix the code via find and replace."  However, when the code got complex, find and replace started to fail me.  I understood that I could in fact loop, but how would a do/while or while loop help me?  I thought for-loops were silly... why would I ever want to count up the way a for loop structure wants you to?  Heck, why would you use that confusing for(X;Y;Z) when you only need to evaluate a boolean at the end or beginning of a loop?

I think the problem ultimately stems from a lack of understanding of abstractions.  To me, I needed no abstractions; I had 5 concrete bad guys, each of whom had the same behaviour.  If I wanted a 6th bad guy, I would simply need a 6th copy and paste.  Arrays to me were a way of not having to write out the same variable 6 times, which was nice, but they were not a 'collection' of data but rather individual pieces of data, each unique unto itself.  The tie between arrays and for loops had not even occurred to me.  Then I went to fix a bug in my code and kept having to fix it over and over again...  and found I had only fixed 4 out of the 5 bad guys, and I thought...

 'There has to be a better way!'

Abstractions, I find, often come from that conclusion.  The fact that abstractions often mean writing more code and adding more complexity at first seemed odd to me, but it is not shocking anymore.  Ultimately, abstractions will save code (in that you don't repeat yourself) and will likely save maintenance, at the cost of complexity.  So let's consider my example again.  How was it written?
  1. Write the logic: if (badguy[1])  ...
  2. <copy> & <paste>
  3. replace [1] with [2]
  4. <copy> & <paste>
  5. replace [1] with [3]
  6. ... up to [5]
  7. test game
  8. debug game, and find error
  9. fix all 5 places that the error exists in
  10. test game and find bug still exists with 1 / 5 bad guys
  11. debug game and find error
  12. fix in the one place copy and paste failed
  13. test game
Now how should that code be written?

for(int i = 1 to 5) {
  if(badguy[i]_X<goodguy_X) {
    if(badguy[i]_Y>goodguyY) {
      //... do something that is a bug.
    }
  }
}

Once you have this really nice piece of code written out, you can see the abstraction.  We don't have 5 individual bad guys, each with his or her own set of logic, but rather 5 bad guys with 1 set of logic.  The bad guy count didn't change, but the logic now exists once. Every time you write code you are using abstractions, but when you avoid copying and pasting, you're probably adding at least one more layer. As a programmer, I don't even really see them as abstractions anymore, but they are there.  So what do we have now as far as steps go to create and fix that bug?
  1. Write the logic: if (badguy[1])  ...
  2. reconsider and then refactor the logic to make it a loop
  3. test game
  4. debug game, and find error
  5. fix one place that the error exists in
  6. test game

The Takeaway:

Clearly the abstraction made the steps easier, even if I had to think a little harder to get there.  However, the nice thing is, once you have this new tool in your tool box, you can start seeing patterns.  Duplication is a bad thing.  Let's consider my loop code again:

for(int i = 1 to 5) { /* Logic */ }

What happens when we want to change the number of bad guys?  Well, now we have to change at least 3 places: the badguy_x[] and badguy_y[] definitions and the "to 5" part of the for loop.  How do we fix that?  Well, maybe we make a BADGUYCOUNT constant.  Maybe we detect the size of the badguy_x array and assume x and y are always the same.  What about creating a badguy object?  What about ArrayLists?  It became rapidly clear that I could solve this multiple ways, but the solution wasn't the important part; the important part was that I started to be able to detect problems in my code before they really became a problem.  Later I learned the term for this was code smell.  Detecting code smells is something that can be taught to some degree, but ultimately I think it is something that is learned by doing.  So next time you're in your code and something keeps breaking, the thing you should start thinking is 'There has to be a better way!' and start searching for it.
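
As a sketch of a couple of those options combined (the names here are hypothetical, not from my original game), a badguy object plus a list keeps the count in exactly one place:

import java.util.ArrayList;
import java.util.List;

class BadGuy {
    int x, y;
    BadGuy(int x, int y) { this.x = x; this.y = y; }
}

class Game {
    List<BadGuy> badGuys = new ArrayList<>();
    int goodGuyX, goodGuyY;

    void chase() {
        // Adding a sixth bad guy is now one more add() call,
        // not another copy-and-paste of the chase logic.
        for (BadGuy b : badGuys) {
            if (b.x < goodGuyX && b.y > goodGuyY) {
                // ... the chase logic, written (and fixed) exactly once.
            }
        }
    }
}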

Testing Babies for Learning

According to Yahoo News, babies (ages 0-2) don't in fact learn via apps.  In fact, to quote from the article:
"Everything we know about brain research and child development points away from using screens to educate babies," said Susan Linn, the group's director. "The research shows that machines and screen media are a really ineffective way of teaching a baby language. What babies need for healthy brain development is active play, hands-on creative play and face-to-face" interaction.

The American Academy of Pediatrics discourages any electronic "screen time" for infants and toddlers under 2, while older children should be limited to one to two hours a day. It cites one study that found infant videos can delay language development, and warns that no studies have documented a benefit of early viewing.
What interests me in this is how you test whether a baby is learning anything from a screen.  While I admit I am reading into the article, the current research appears to be:
  1. Subject the baby to multiple hours of screen time per day*.
  2. Wait a few years to see if the baby does in fact learn anything.
* They might just be surveying parents, so then it is the parent who is doing it, but it amounts to the same thing.
So with that being said, let's look again at that article:
Linn's group alleges that the companies violate truth-in-advertising laws when they claim to "teach" babies skills. For example, Fisher-Price claims that its Laugh & Learn "Where's Puppy's Nose?" app can teach a baby about body parts and language, while its "Learning Letters Puppy" app educates babies on the alphabet and counting to 10. Open Solutions says its mobile apps offer a "new and innovative form of education" by allowing babies to "practice logic and motor skills."

"Given that there's no evidence that (mobile apps are) beneficial, and some evidence that it may actually be harmful, that's concerning," Linn said.
Granted, I am still reading into the article, but it appears they are saying that parents who use these apps aren't seeing any benefit.  The questions are:
  1. Is that due to parenting techniques (e.g. parents ignoring the babies by giving them an electronic device)?
  2. Is that something the manufacturer saw, or were they watching the babies to see how they would respond?
  3. Are the producers of the system responsible for any lack of learning that a particular baby might experience if they can show that another baby found it useful?  This is particularly interesting, as babies can't possibly know their own learning strategies, nor could the parents.
Once you start looking at this, it sure seems like evaluating either side's claims is nearly impossible.  I'm not a legal scholar, so I haven't got the legal knowledge to judge the manufacturer's responsibility, but I do hope for their sakes (and the children's) that they did in fact test the thing in some way, shape or form.  Since I can't evaluate the claims, this does leave me with one interesting exercise.  How would I test the claim?  As a software tester I don't often test children, so excuse any mistakes I might make.
  1. Look at typical measures of cognitive development for that locale.  Compare those to the children who will likely get the device/app.  If there are differences, look into why.
  2. Look at the typical usage pattern of the children.  Are the children who use it frequently better able to hook the nose to the puppy?  How about those who use it less frequently?  How about those who don't use it at all?
  3. Are the more frequent users using it 8 hours one day and then skipping 5 days or are they using it 2 hours each day?  What happens when a baby is given the device to use for 8 hours 6 days a week?
  4. Of course we can't ignore device reliability.  Does the app die after 1 hour of play or does it work for hours?  Does the device die after 1 hour of play (due to batteries) or does it last for many hours?
  5. Are parents who give the devices present and engaged when the devices are being used by the babies?
Obviously there are plenty more questions I could come up with, but by looking at those factors one could say who this might be successful for, if it would in fact be successful at all.  Next time you have an article that you are looking at, maybe you too can ask: how would I test that?

UPDATE:

It appears that additional studies have been done on children with non-interactive mediums, and these have consistently (as in, across multiple studies) found them problematic.  This does not mean digital/interactive devices suffer from the same issues, but I'm sure those studies are coming soon.

Legislative Code I

Having just been listening to NPR, I caught a certain word used in describing what the law provided. It provided a set of "requirements" to define certain health care facilities and what they need to provide in order to call themselves a nursing home, assisted living facility, etc. I found this to be an interesting word; it perked my ears up because, as a tester, the word requirements means something very important. It is what I use to create my tests. I am aware that this is both under- and over-generalized, but I think most testers who aren't James Bach will give me some leeway on that simplistic definition for my purposes[1]. This in turn got me thinking about Lawrence Lessig's book about code being equal to lawmakers' law.

Now multiple people have addressed the question of analogy in regards to software development. One example I could find was from one of my favorite authors, Jeff Atwood, who discussed whether software development is like manufacturing. There are many others I have found, such as a stimulating conversation piece on the manufacturing analogy. Then there is the opinion that software is [or isn't] like construction. Yet still others find it like writing. Even stranger (to me, a non-gardener), some find it comparable to gardening. Finally I will mention one last interesting analogy: software development is like handling a helicopter. I find that one particularly interesting, as I don't consider being a pilot a well-understood activity; however, I'm getting ahead of myself with that comment. Of course, I'm not the first to note how many of these variations of analogy exist.

I don't intend to debate too heavily in this particular post what software development is like, but it did make me wonder if software development and legislation are similar in nature. This is a topic I may explore in a future post.  However, it does make me wonder if simple experiences shade people's analogies to the point that we are not even talking about the same activity.

To go back to my initial analogy of code smithing (an intended word choice) being like legislation, much of our language is coloured by the law. For example, we talk about code, something that goes back at least to the BC era. Then we have these legal systems, which have completely different ideas of what the law is. If the concept of the law was at one point a single entity, it no longer is. It is now something very different: a divergent set of ideas about what code is.  Does the law apply to you if you are a person of great ability? Well, it might not, depending on where and when.  Now just think about what you think of as a system in software development.  Go on, I'll wait.  Done?  Now, why not look at the plethora of data just one wiki article has on it. Then there is the human factor. We often believe things that are simply untrue. As a simple example, keeping with the law, attractive people are convicted less often and receive shorter sentences, yet we overwhelmingly believe that they shouldn't (in at least one group in a particular society).

To wrap all of this up[2], we are biased by our own experiences and human nature into seeing some analogies as more valid than others. Be it cultural, community, personal or just plain human limitations, we have a hard time seeing around our own biases, if we can in fact do so. So when we disagree, part of what we should be doing is asking, "Why do you believe this?" Then look at the value they find in what they believe and compare it to the value you find in your own bias. If you find more value in your analogy, stick with it. If not, maybe it is time to research the field and find out what does make sense to you. Oh, one last thing. As James Bach said in regards to categorization (similar to analogy), "Of course. It's a heuristic. Any heuristic may fail...", I believe analogies to be heuristic tools, not enforcement mechanisms. I don't think a category should force someone to be stuck in a role. To paraphrase Kranzberg's laws of technology, tools are neither bad nor good; it is how you use them.

EDIT: Like a doubly linked list, this article now points to the second of three articles.

[1] I really do appreciate James's position that words matter.  It's just that sometimes words are meant to be vague.  As Saki put it, "A little inaccuracy sometimes saves a lot of explanations."  I do recognize that sometimes in-depth definitions are required to be clear about a subject.

[2] Clearly there are a lot of things I hit in this short article.  I hope to come back and hit each one in greater depth over time.  I just don't want to end up like Steve Yegge, whose posts are awesome yet super long.