Monday, December 15, 2014

More on No Testers

I sometimes look to see who is reading my articles, and if the incoming link looks interesting, I try to find the blog it came from, maybe even comment on it.  I particularly wanted to see if anyone had comments on my latest blog post about No Testers.  Well, I did that today.  I went to look at an article in Russian and found a comment by Maxim Shulga saying (translated by Google):

Detailed and very adequate answer to the same question to my http: //about98percentdone.blog ... The only pity is that uncomfortable read: white on black. Burns my eyes :)

Now normally I would just reply to such a comment by filling out the form or using my gmail credentials and writing back.  I might say something silly about how I know not everyone likes the white-on-black style, as Jeff Atwood pointed out years ago.

While Google Translate did translate the article, it didn't translate the comments or the mechanism for replying to a comment... using some developer tools I was able to establish the comments were powered by Disqus.  I am now going to chronicle my efforts to get an account and demonstrate how usability might have benefited from a tester, not to mention a bug-tracking system.  Before I go on, I want to say I have no business relationship with Disqus or any other discussion or forum/blog-related software outside of this blog.  I am not trying to pick on them; it just happens I was trying to get something done and they were blocking me.  They also happen to have no testers, or at least no one with a title of "QA" or "Tester" or anything outside of 'engineering'.  This just happens to be an interesting example that relates to my previous post.  I did not intentionally go looking for a company with no testers; I only discovered after I started writing this post that they have none.  I have contacted them about the issues noted here, and none of the issues relate directly to security or should go unpublished for ethical reasons.

What I Found


Item 1:

Disqus did not give me an English version of its UI for replying, even though my browser should be requesting English via the Accept-Language header.  I tried several browsers, including IE, which I verified was set to en-US.  I am not sure if this could be detected via any sort of metrics to figure out they should be looking at the browser language rather than some user/blog setting.  To me this is the sort of choice that happens when you don't think long and hard about localization.  But perhaps they know and plan on changing it, or maybe they think this is the right choice.
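Honoring the browser language is not hard in principle.  Here is a minimal sketch in Python (my choice of language; I have no idea what Disqus's stack looks like, and `pick_language` is an illustrative name, not anyone's real API) of choosing a UI language from the Accept-Language header rather than a blog-level setting:

```python
def pick_language(accept_language, supported, default="en"):
    """Return the best supported language for an Accept-Language header.

    Header format per RFC 7231, e.g. "ru,en-US;q=0.8".
    """
    choices = []
    for part in accept_language.split(","):
        pieces = part.strip().split(";")
        tag = pieces[0].strip().lower()
        if not tag:
            continue
        q = 1.0  # quality factor defaults to 1 when omitted
        for param in pieces[1:]:
            param = param.strip()
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    q = 0.0
        choices.append((q, tag))
    # Walk the tags from most to least preferred.
    for _, tag in sorted(choices, reverse=True):
        if tag in supported:
            return tag
        base = tag.split("-")[0]  # fall back from "en-us" to "en"
        if base in supported:
            return base
    return default
```

With this, a browser sending "ru,en-US;q=0.8" gets Russian when the site supports it, and an en-US browser falls back cleanly to plain "en" — exactly the behavior I was expecting and did not get.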

Items 2 & 3:

So I go to https://disqus.com/ and click sign up.  I am asked for an email AND a username.  I don't know what the username is used for in this context, and there is no useful description or even an icon to click for help.  So I enter an email, the username JCD, and a password, and get told "Username already exists."  So I try JC-D.  Nope: "Letters and numbers only please."  Okay, what about JÇD?  Ç is a perfectly ordinary Latin letter, but I just get "Letters and numbers only please."  In fact, the form rejects all sorts of characters that I would consider letters.  I didn't bother with much Unicode, but I imagine it would have the same sorts of issues.  Maybe this is intentional, but the error is not meaningful to me.  Worse yet, it excludes many people whose names are not just plain alpha characters.  My name has a hyphen in it, thus the JC-D.  The hyphen was excluded, which excludes me from using my real name.  Granted, the math models would say I don't count, as few people have hyphenated names.  Maybe I shouldn't care, maybe the name doesn't matter, but the username's usage is never clearly explained.  However, excluding Markus Gärtner because his name has a non-English character seems really wrong.
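To make the letters-and-numbers problem concrete, here is a small Python sketch (the pattern names are mine, and I am only guessing at the rule their error message implies) contrasting an ASCII-only check with a Unicode-aware one:

```python
import re

# What the error message suggests they enforce: ASCII letters/digits only.
ascii_only = re.compile(r"^[A-Za-z0-9]+$")

# A Unicode-aware alternative: letters of any script, digits,
# underscore, and hyphen.  \w is Unicode-aware for str in Python 3.
unicode_ok = re.compile(r"^[\w\-]+$")

assert ascii_only.match("JCD")
assert not ascii_only.match("JÇD")   # Ç is a letter, but not in A-Z
assert not ascii_only.match("JC-D")  # hyphenated names rejected
assert unicode_ok.match("JÇD")
assert unicode_ok.match("Gärtner")
assert unicode_ok.match("JC-D")
```

The second pattern is one line of work and would have let both Markus and me register under our real names; whether a broader character set causes other problems downstream is exactly the kind of question a tester would raise.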

Item 4:

I thought for a moment to use their Gmail integration.  I have used it to log in to Khan Academy and it worked fine.  So I tried, but it looked like they wanted me to have a Google+ account, which I intentionally don't have.  I am a bit odd, wanting both privacy and a voice.  I don't like giving away personal details and being tracked, even if I have professional opinions that I wish to voice.  My professional and personal lives mix, but only a little.  So that was a no-go for me.  Worse yet, I got a warning saying that OpenID2, the method Disqus uses to sign up, will no longer be supported as of early next year (April, I believe).  Clearly they have some updating to do.

Item 5:

I considered using a fake email address, but their terms of service were not on their sign-up page.  In fact, most of the links from their main page disappear on the sign-up page.  Maybe that is intentional.  Maybe it was A/B tested.  If so, awesome, but for me it was less than optimal.  I admit to not being the main use case, and perhaps that is one problem with testers.  We are not equal to users.

Item 6:

I go to report these issues, and the best I can find is a contact-support form.  Not saying support is a bad place to start, but the input field gives me room for about three sentences and a scroll bar.  If I wanted to tweet the error, that might work, but I had detailed points to give.  Only someone who is concerned with the customer would notice this, but I, a potential customer, did.

Analysis of Why the Issues Exist


I suppose the question is how we capture this sort of data, and whether we even care.  Maybe annoying your users is okay when you are a free product.  Maybe alienating users is fine when your metrics show few users try to use Unicode.  Perhaps that is what data scientists are useful for: deciding which problems matter?  Maybe a functional tester would have noticed these issues and brought them up?  Having the customer deal with the problems until you figure out whether something is a good idea is not an uncommon model, particularly if you own the market.  But keep in mind I never did get to make that comment.  Speaking in psychological terms, even with professional distancing, I will have a more negative view of their product, and it will take effort on their part to turn me around.  Even if they magically changed it all tomorrow, a potential customer like me might be long gone.  Perhaps with a billion users, it doesn't matter.

I personally don't feel this way, but maybe the Disqus team is not 'the right team', which is the argument for hiring 'the right team' that I have heard from the no-tester camp.  If they were, I'd not be writing this post with so many issues.  I think the language used by the no-tester side is unclear about what the 'right team' is, and perhaps about what they think testers do.  I am not sure that a mythical right team, with or without testers, will ever produce bug-free code, but there are certainly good and bad team mixtures.  It is rather difficult to evaluate their team dynamics having never met any of them.

Perhaps they are 'the right team', and I am just the wrong customer?  That is the other half of this particular no-tester argument: testers are not like customers, so use customers.  As a customer who thinks like a tester, maybe I'm not representative?  Then again, if I pull out my heuristics, I can compare this to other products that don't require you to sign up at all to write a comment.  That may not be a complete defense of comparing myself to a real customer, but it certainly gives credence to these issues.

Finally, it could be they do have QA/testers, but renamed to some other title.  That just made it harder for me to figure out whether they do testing and whether what was built was what was intended.  These are design choices, but it is unclear if anyone was questioning them.  Without someone in that role, the 'get it done' mentality comes into play, at least in some organizations.  Perhaps that happened here.

I am sure the reality of Disqus is way more complex than I have presented it, but I am an outsider.  I welcome any feedback from the company and will update this accordingly.  I also was not looking for this example case.  I was not planning on posting any more this year.  It just showed up and I thought it was interesting.  I'd love to hear from those who feel no testers is an appropriate choice and how they would interpret this.

To Maxim Shulga, I am sorry you don't like my black background with white text.  I will take your view under advisement if I ever try to re-theme this blog.  I hope at least the content is useful.  And next time, just leave a comment on my blog... this reply to your comment took way too long to write. :)

Friday, December 12, 2014

Thinking about No Testers

I have been attempting to hear out the various viewpoints of those who subscribe to the idea that Agile should move towards removing the role of tester.  I have yet to see anyone actually suggest eliminating testing completely, if that is even possible.  So let us unbox this idea and examine it, with as little bias towards keeping my own job as possible.  Here are the rough arguments I have seen and heard, in TL;DR (too long, didn't read) form:
  • Limited deploys (to say .01% of the users and deploy to more over time) with data scientists backing it will allow us to get things done faster with more quality.
    • Monitoring makes up for not having testers.
  • Hiring the right team and best developers will make up for the lack of testers.
    • Writing code that "just works" is possible without testing, given loose enough constraints (which leads to the next argument).
  • Since complete testing is impossible, bench testing + unit testing + ... is enough.
  • Quality cannot be tested in.  That is the design/code.
    • Testers make developers lazy.
  • It is just a rename to change the culture so that testers are treated as equals.
    • Testers should be under a dev/team manager, not in a silo.
  • It is a minor role change where testers now work in a more embedded fashion.
  • We hire too many testers who add inefficiencies as we have to wait <N time> before deploying.
    • We only hire testers to do a limited set of activities like security testing.
  • Testers do so many things the name/profession does not mean much of anything.
  • With the web we can deploy every <N time unit>, so if there are bugs we can patch them quickly.
  • As a customer I see those testers as an expensive waste.

That is sure a lot of different reasons, and I'm sure it is not an exhaustive list.  Now I shall enumerate the rough responses I have seen:

  • A little genuine anger and hurt that 'we' are unwanted.
  • A little genuine disgust that 'we' are going through the 80's again.
  • Denial ('this will never work')
  • It can work for some, but probably not all.  (This follows a very CDT 'it depends' point of view)
    • Not everything can easily be instrumented or limited-deployed.  For example, I have heard Xbox 360 games cost ~$10k+ to push an update.
    • In some software, having fewer bugs is more important than the cost of testing.
  • This is why we need a standard (I never heard this one, but it is an obvious response)
  • Concern about the question of testing as a role vs task.
  • Different customers have different needs they want to fulfill.
  • Good teams create defects too.
  • We need science to back up these claims.
  • Testing should move towards automation to help eliminate the bottlenecks that started this concern.
  • It is very difficult to catch your own mistaken assumptions.
    • If you need someone else to help catch mistakes, why not a specialist?

While I don't have a full set of links to where I have heard these back-and-forth viewpoints, I think it is an interesting subject to consider.  I am going to go through my bulleted list top down, starting with the "no testing" side of the world.  I believe I seriously considered limited deployment and letting the customer be the tester in a previous post.  In that same post I noted that not all teams can make up for a lack of testers, so again, I think those arguments have been addressed.

Complete Testing is Impossible & Quality Cannot be Tested in


The premise is true: complete testing is in fact impossible; I agree with that.  The conclusion that hiring testers is therefore not needed is the part I find interesting.  That quality of code and design cannot be tested in is also an interesting statement.  The idea that testers make developers lazy feels unsubstantiated, with possible correlation-causation issues (and I have seen no studies on the subject), so I am going to leave that part alone for now.

I write software outside of work, mostly open source projects, of which I have 5 to my name.  Most were projects where I was looking to solve a problem I had and wanted to give the result to the world.  In more than half of them I wrote no formal tests, no unit tests, and to be honest the code quality in some is questionable.  There are bugs I know how to work around in some cases, or bugs that never matter to me, and thus the quality for me is good enough.  I have had ~10k downloads across my various projects, and very few bugs have ever been filed.  Either most people used an app and then deleted it, or the quality was good enough for a free app.  I hired no testers, and as the sole dev I was the only one testing.  I think this is a +1 for the no-tester group: if you work on an open source app for yourself, no tester (maybe) required.  CDT would even agree with this, as it does depend.  Context matters.  I am sure other such contexts exist, even if I have not seen them.  What I have not seen is specifics around the contexts that might require no testers, with the exception of the Microsoft/Google-sized companies.

Now can you test code quality in?  If you are about to design the wrong thing and a tester brings that up, did the tester 'test' quality in?  Well, he didn't test any software, the no-tester side would say.  Sure, but he did test assumptions, which is often part of testing.  What about the feature the tester notes the competition has that you don't, discovered while comparing competitor product quality?  Suppose the dev knows exactly what to develop; can the tester still add value?  Well, I know I have made code suggestions that have gotten a developer thinking.  However, I could just as easily have been an XP developer sitting beside him as he coded rather than a tester.  What about TDD?  Is that testing quality in?  If I write tests first, I have to think about my design.  That is testing quality in, but again that is just the developer.  What if the tester and the developer co-design the TDD, or the tester recommends additional tests during development?  Again, this could be a developer, but a tester is often thinking about the problem differently.  So while I think the statement is false, if I look at its spirit, I think it might be true in some cases.  There are some really great developers who do think like testers, and they could do it.  However, that is only some developers.  There will always be junior developers, and there are always other mindset factors in play.

What is in a Name & Quick Deploys?


What's Montague? it is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What's in a name? that which we call a rose
By any other name would smell as sweet;
- Juliet, Romeo and Juliet
Does the fact that we are in an in-group called 'test', and some see developers as outsiders, matter?  Vice-versa is probably true too.  In fact, the other day Isaac and I had a small debate about correcting a tester on some very minor minutiae of deep mathematical and development theory.  To me, perhaps a developer who was neck-deep in building new languages would find such a correction helpful, but a tester doing some exploration around the subject didn't need it.  In fact, such a correction might make things more confusing to anyone without a deep understanding of the subject.  I myself am guilty of the insider/outsider mentality.  We all categorize things and use those categories to make choices.  Perhaps renaming us all as human beings would make things better.  If an organization is particularly combative, this might be a useful change.  That being said, there is also a group identity for people who see themselves as testers or QA.  There is a specific field of study around it, and that also adds value.  Also, by having a sub-group, the tension, even if minor, might help people grow.

This is a mushy, non-business-oriented view of the world, and perhaps those who are interested only in how fast things go today would prefer avoiding the conflict in order to get things done.  The fear of doing this, from a QA point of view, comes from the 1980s, when defects were a big deal.  Renaming people feels like going back to that era.  The argument back is that this isn't the 80s; we can ship software almost instantly.  The counter-argument is that some software can't just be shipped broken: work in many different areas must be 'mostly right' the first time.  Also, this grand new world people want to move into has not been vetted by time.  Wait a while and see if it is the panacea it is believed to be.  I recognize the value of renaming positions to improve team cohesion and believe it comes from a genuine desire to improve.  Just keep in mind what is lost and the risks in doing so.

I should note I have seen some of this implemented in organizations I have worked in.  Working under a single development manager, the manager would often ignore the advice given by the "QA" person.  Perhaps a full rename would have fixed things, but development often wants to move forward with the new shiny thing.  Having no testers is perhaps a new shiny, so development would be on board.  The people more aware of the risk of change are the QA and operations people.  This is a generalization, unsubstantiated by anything but my observation, but it feels like QA is now calling this a risk, and because it threatens our jobs we are told we are biased and so can be ignored.  Psychology and politics are complex things, and anything I write will not adequately cover the subject.  I don't object to people trying this; maybe it will work.  I still see it as a risk.  I hope some people take it and report back their observations.

Perhaps the final argument about us is that we are many different roles tied to one name.  This is a complexity that came up at WHOSE.  We talked less about testing than you might expect: communication, math, project management, and more.  We had a list of about 200 skills a tester might need.  We are a Swiss Army knife.  We are mortar for the projects being developed, and because every project is different, we have to fill different cracks in each project.  We happen to do that well because we see many different areas of a system.  We have to worry about security, the customers who use it, the code behind it, how time works, the business, and lots more.  That ignores the testing pieces, like what to test, when to test, and how to justify the bugs you find.  Maybe we could remove the justification piece, but sometimes the justification matters even to the developer, because you don't know if a bug is worth fixing or which fix to choose.  I think we can't help being many different things, because what we do is complicated.

Customers Don't Want QA?


While that is a lie in regard to me personally (I get excited when I see QA in the credits of video games), perhaps it is different for others.  I have written elsewhere about how users of mobile devices are accepting more defects.  Perhaps that is true in general.  We know there is such a thing as too much QA.  To go to the absurd: when QA bankrupts the company, it is clearly too much QA.  We also know there is such a thing as too little QA: when the company goes bankrupt from quality issues.  Perhaps regulatory constraints that shut a company down for lack of QA belong in that bucket too.  Those are the edge cases, and like a typical tester, I like starting at boundary cases.  Customers clearly want some form of quality, and as long as testers help enhance quality (e.g., value to someone who matters), testers are part of that equation.  Are they the only way to achieve quality?  Maybe not, but that involves either re-labeling (see above) or giving the task to others, be they the developers, the customers, or someone else.  Some customers like cheaper products with less quality, and that market does exist.  Having looked at the free games on mobile devices, my view is the quality is not nearly as good.  Then again, I want testers.  Maybe I'm biased.  Even if I were a developer for a company, I would like to have testers.

Should the ideas of no testers be explored?  I have no objection, but I would say that it should be done with caution and be carefully examined.  Which makes the companies who try it...well, testers.

Brief Comment on the Community


I have heard a lot of good responses, albeit limited in number.  I have also seen some anger and frustration.  Sometimes silly things are said in the heat of the moment, likely on both sides.  We also have some cultural elements that are jagged-edged.  These can create an atmosphere of conflict, which can be good, but often just causes shouting without much communication.  One of the problems is that most people don't have the time or energy to write long-winded, detailed posts like this.  We instead use Twitter and small slogans like #NoTesting.  I personally think Twitter as a method of communication is bad for the community, but I acknowledge it exists.  How can we improve the venue of these sorts of debates?  How can we improve our communication?  I don't pretend to have an answer, but maybe the community does.

UPDATE: I wrote a second piece around this topic.

Monday, December 1, 2014

Shorts: Ideas described in fewer words; Code Coverage, Metrics, Reading and More

Most times I write more 'essay' style articles, but Isaac and I have sometimes had small ideas we wanted to discuss but didn't feel like they were big enough to post on their own.  So I'm trying this out, with a series of small short ideas that might be valuable but are not too detailed.  Please feel free to comment on any of these shorts or on the idea of these smaller, less essay-style posts.  If you are really excited about a topic and ask interesting questions, I might try to follow it up with another essay-style post.

Code Coverage


Starting with a quote:
Recently my employer Rapita Systems released a tool demo in the form of a modified game of Tetris. Unlike "normal" Tetris, the goal is not to get a high score by clearing blocks, but rather to get a high code coverage score. To get the perfect score, you have to cause every part of the game's source code to execute. When a statement or a function executes during a test, we say it is "covered" by that test. - http://blog.jwhitham.org/2014/10/its-hard-to-test-software-even-simple.html

The interesting thing here is the idea of linking manual testing to code coverage.  While there are lots of different forms of coverage, and coverage has limits, I think it is an interesting way of looking at code coverage.  In particular, it might be interesting if it were ever integrated into a larger exploratory model.  Have I at least touched all the code changes since the last build?  Am I exploring the right areas?  Does my coverage line up with the unit-test coverage, and between the two, what are we missing?  This sort of tool would be interesting for those queries.  Granted, you wouldn't know if you covered all likely scenarios, much less did complete testing (which is impossible), but more knowledge in this case feels better than not knowing.  At the very least, this knowledge allows action, whereas plain code coverage from unit tests as a metric isn't often used in an actionable way.
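As a rough sketch of the idea, here is a toy line-coverage recorder built on Python's `sys.settrace` (a real setup would use coverage.py or the stdlib `trace` module; `triangle_type` is a made-up stand-in for the application being explored by hand):

```python
import sys

def record_coverage(func, *args):
    """Run func(*args), returning (result, set of executed line offsets)."""
    code = func.__code__
    lines = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            # Store offsets from the def line, so the numbers are stable.
            lines.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, lines

def triangle_type(a, b, c):          # offset 0
    if a == b == c:                  # offset 1
        return "equilateral"         # offset 2
    if a == b or b == c or a == c:   # offset 3
        return "isosceles"           # offset 4
    return "scalene"                 # offset 5

# Two "manual" probes of the app; think of each as an exploratory session.
_, covered_a = record_coverage(triangle_type, 3, 3, 3)
_, covered_b = record_coverage(triangle_type, 3, 4, 5)
missed = {1, 2, 3, 4, 5} - (covered_a | covered_b)
# `missed` still contains the isosceles branch -- a cue to explore further.
```

The interesting part is the last line: after a session, the set of never-executed lines is exactly the "where haven't I been yet?" question an exploratory tester would want answered.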

I wonder if anyone has done this sort of testing?  Did it work?  If you've tried this, please post a comment!

Mario’s Minus World


Do you recall playing Super Mario Bros. on the Nintendo Entertainment System ("NES")?  Those of you who do will have more appreciation for this short, but I will try to make it clear to all.  Super Mario Bros. made Nintendo's well-known series of Mario games famous.  In it, you are a character who travels through a series of 2D planes with obstacles, including bricks and turtles that bite, all to save a princess.  What is interesting is that fans have cataloged a large list of bugs for a game that came out in 1985.  Even more interesting, I recall trying to recreate one of those bugs in my childhood: the minus world bug, perhaps the most famous bug in gaming history.  The funny thing is, if a tester had found some of these bugs and they had been fixed, the tester would have tested value out rather than adding value in, at least for most customers.  I am not saying that we as testers should ignore bugs, but rather that one man's bug can in some cases be another man's feature.

How Little We Read


I try not to talk much about the blog in a blog post (as it is rather meta) or post stats, but I do find them interesting.  They give me some insight into what other testers care about.  My most-read blog post was about the Software Testing World Cup, with second place going to my book review of Exploratory Software Testing.  The STWC post got roughly 750 hits and EST got roughly 450.  Almost everyone has heard of Lessons Learned in Software Testing (sometimes called "the blue test book").  It is a masterpiece by Cem Kaner, James Bach and Bret Pettichord from 2002.  I just happened upon a stat that made me sad about how little we actually read books.  According to Bret Pettichord, "Lessons Learned in Software Testing...has sold 28,000 copies (2/09)".  28,000 copies?!?  In 7 years?!?  While the following assertion is not fully accurate and perhaps not fair, that suggests there are roughly 30k actively involved testers who consider the context-driven approach.  That means my most-read blog post reached roughly 3% of those testers.  Yes, the years are off; yes, I don't know if those purchased books were read, or if they were bought by companies and read by multiple people.  Lots of unknowns.  Still, it surprised me.  So, to the few testers I do reach: when was the last time you read a test-related book?  When are you going to read another?  Are books dead?

Metrics: People Are Complicated

Metrics are useful little buggers.  Humans like them.  I've been listening to Alan and Brent on the podcast AB Testing, and they have some firm opinions on how important it is to measure users who don't know they are (or how they are) being measured.  I also just read Jeff Atwood's post about how little we read when we do read (see my short above).  Part of that appears to be that those who want to contribute are so excited to get involved (or, in a pessimistic view, to spew out their ideology) that they fail to actually read what was written.  In Jeff Atwood's article, he points to a page that now exists only in the Internet Archive, but it had an interesting little quote.  For some context, the post was about a site meant to create a community of forum-style users, using points to encourage those users to write about various topics.

Members without any pre-existing friends on the site had little chance to earn points unless they literally campaigned for them in the comments, encouraging point whoring. Members with lots of friends on the site sat in unimpeachable positions on the scoreboards, encouraging elitism. People became stressed out that they were not earning enough points, and became frustrated because they had no direct control over their scores.
How is it that a metric, even a metric as meaningless as a score, stressed someone out?  Alan and Brent also talked about gamers buying games just to get Xbox gamer points, spending real money to earn points that don't matter.  Can that happen with more 'invisible' or 'opaque' metrics?  When I try to help my grandmother deal with Netflix over the phone, I run into the fact that they are running some 300 A/B tests.  What she sees and what I see sometimes varies, to my frustration.  Maybe it is the A/B testing, or maybe it is just a language and training barrier (e.g., is that flat square with text in it a button, or just styling, or a flyout, or a drop-down?).

Worse yet, for those who measure, these A/B tests don't explain why one variant is preferred over another.  Instead, that is a story we have to develop afterwards to explain the numbers.  In fact, I just told you several stories about how metrics are misused, but those stories were at least in part told by numbers.  On more theoretical grounds, let us consider a scenario.  Is it only mobile users at expert level who liked the B scenario, while users of tablets and computers prefer A?  Assuming you have enough data, you have to ask, 'does your data even show that?'  Knowing the device is easy in comparison, but which devices count as tablets?  How do you know someone is an expert?  Worse yet, what if two people share an account; which one is the expert?  Even if you provide sub-accounts (as Netflix does), not everyone uses one, and not consistently.  I'm not saying to ignore the metrics, just know that statistics are at best a proxy for the user's experience.

Monday, November 17, 2014

Introspection on why I left...

This is a story about a job long, long ago. 
In having gone through this, I developed a system of heuristic flags for evaluating the organization you work for. While these are context-sensitive, I tried to keep a little of the context in the flags to give the reader some idea of what I saw. Seeing these things as they happen is extremely difficult. When you are in the moment they can look like a compromise, yet remove yourself from the situation for 2-3 months and suddenly they become “OH MY GOD, I can’t believe they even asked us to do that!”
The reasons some of us persevered through lots and lots of these yellow and red flags are many. People don't like change, and interviewing is change. People like to see things through to the end; quitting feels like failing. I genuinely liked the people I worked with, or at least lots of them, but that doesn’t change the culture of the sociopaths in charge (Gervais Principle). Hindsight is 20/20; looking back, I should have quit way earlier, but recognizing the signs in real time is much more difficult than it seems when writing this down years later.

(the stuff in italics are the warning signs)

It all started slowly. That’s the best way to disguise failure. It’s how we can convince ourselves, “we're just learning"…
I had been working for this company about three months when it was decided that we needed to go Agile. We hired 2 product owners and did a couple of days of training. We made a backlog, estimated, planned a sprint and brought new team members on board. We didn’t have a scrum master, but we ‘were’ looking for one. Can’t have everything at once, we told ourselves.

Yellow Flag 1: Starting a process by ignoring pieces of the process before trying them is folly.

We started this grand adventure with a meeting where it was deemed that the biggest, oldest, crappiest, hard-to-read, difficult-to-modify code would be the starting point for the new hotness. Not ‘let’s rewrite it’, but take the ‘scroll of insanity’, as it was called, and use it to build on for the new hotness. But we sallied forth; new process, one bad decision won’t kill us, and we plodded along for a couple of months.
Then the QA manager had a life event and never recovered; he left the company, and the rest of this story is sans QA having a voice. We told ourselves we could work without a manager ‘for the team’.

Yellow Flag 2: Working without a direct supervisor for more than three months.

Don't get me wrong, I don't want to work for just any manager, and neither should you. But if a decent (notice NOT GOOD) manager doesn't show up within three months, I can think of two explanations: the company doesn't attract quality people, or they aren't making it their highest priority.

Soon our VP was walking through the building explaining the agile process in exquisite detail to other people and board members. Basically lying to their faces, in front of ours.

Red Flag 1: When management tells everyone, including you, "we are Agile," then proceeds to prevent you from being a self-directed team, doing backlog estimations, working on a single sprint at a time, or defining done. Now that I see this, I wonder if it was a sadistic way of daring us to challenge him.

At this point we had a new development architect team, and they had a working pre-alpha. Since this was a concurrent B2B web application, the VP decided we should load test it. I can handle a new challenge and was excited to get started.
The first round of testing showed that two concurrent users could crash the new hotness. I spent a week tweaking the load tests because clearly I was missing something. After I showed the development manager and the VP the results, I was called a liar, and the load portion of the testing was given to someone else. Two weeks later he had the same results I did, and the development manager admitted there might be an issue in our session management.
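For readers who have never seen a two-user crash, here is a minimal sketch of the bug class that "an issue in our session management" usually means. Everything here is hypothetical (I never saw that code); it just shows how keeping session state in one shared slot, instead of keyed per session, makes the app fall over the moment a second user shows up:

```python
class BrokenSessionManager:
    """Hypothetical: one shared slot holds 'the' current user,
    so any second login silently clobbers the first."""
    def __init__(self):
        self.current_user = None  # a single slot shared by everyone

    def login(self, user):
        self.current_user = user

    def whoami(self):
        return self.current_user


class FixedSessionManager:
    """Per-session storage: each session token maps to its own state."""
    def __init__(self):
        self.sessions = {}

    def login(self, token, user):
        self.sessions[token] = user

    def whoami(self, token):
        return self.sessions[token]


broken = BrokenSessionManager()
broken.login("alice")
broken.login("bob")           # a second "concurrent" user logs in...
print(broken.whoami())        # prints "bob" -- alice's request now sees bob's session

fixed = FixedSessionManager()
fixed.login("t1", "alice")
fixed.login("t2", "bob")
print(fixed.whoami("t1"))     # prints "alice" -- sessions no longer collide
```

Notice that it only takes two users to expose this; no 500-user stampede required, which is exactly what my load test kept reporting.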
Meanwhile, I'm off to functional automation. The automation is brittle and the third-party GUI libraries aren't helping. No hooks, dynamic IDs; I can't work with this. I of course had no manager to explain this to, but was informed, "It helps the developers move faster." Which just puts QA, without a voice, further behind…
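To make the dynamic-ID problem concrete, here is a tiny sketch. The page snippets, IDs, and the `data-role` attribute are all made up; the point is just that automation keyed to a generated ID dies on the next build, while a stable hook (the kind the developers refused to add) survives:

```python
import re

# Two builds of the "same" page: the GUI library regenerates widget IDs
# on every build, so an ID recorded yesterday matches nothing today.
page_v1 = '<button id="x-widget-4821" data-role="submit-order">Order</button>'
page_v2 = '<button id="x-widget-9377" data-role="submit-order">Order</button>'

def find_by_id(html, element_id):
    """Brittle: keyed to an auto-generated ID."""
    return re.search(f'id="{element_id}"', html) is not None

def find_by_role(html, role):
    """Sturdier: keyed to a stable hook the developers could have added."""
    return re.search(f'data-role="{role}"', html) is not None

# The ID captured against v1 silently stops matching in v2...
assert find_by_id(page_v1, "x-widget-4821")
assert not find_by_id(page_v2, "x-widget-4821")
# ...while the stable hook keeps working across rebuilds.
assert find_by_role(page_v1, "submit-order")
assert find_by_role(page_v2, "submit-order")
```

Multiply that first failure by every element on every screen and you have the maintenance nightmare I was describing.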

At this point the only thing keeping me at the company is the Warhammer Fantasy board between a developer and myself.
A co-worker and I start to share any job openings we find in an effort to get out, but due to the recession, there isn’t much.

As all this was happening, we moved to production with two active sprints. That in and of itself wasn't insane until you note that shortly we had three active sprints in production with only two sprint teams. At one point we had three active sprints in production being worked on with bugs, an active sprint in stage, one in QA and a sixth in development.

Yellow Flag 3: Having multiple sprints active per sprint team.
Red Flag 2: Having six different sprints actively worked on by two sprint teams.

We had a sprint board which was viewed as sacred ground: only the PO could touch, move, or write on it. At one point one of the testers called the board the joke that it was, and said that without the ability to move, edit or touch things, it was stupid. He said this to the PO's face. Within 10 minutes he was in the VP's office being told to apologize or he was fired.

Red Flag 3: Anyone who criticizes the process being threatened with their job.

So, we were fast approaching an imaginary deadline for an alpha version of our software, and we were behind, working OT and generally trying to get 'er done. This particular company prided itself on the two weeks over Christmas requiring only a skeleton crew, so most of us didn't mind the OT; in a couple of weeks most of us would get a respite and then come back refreshed. Then the VP came out and announced, "Everyone will be working over Christmas, no PTO except on the holidays themselves, and mandatory 50 hours every other week." Yes, they had canceled Christmas. For a date that only they believed in, never mind that most people were already doing 60 to 80 hours.

Yellow Flag 4 (approaching red): Management changing your work schedule for you.

Project dead. QA ignored. Automation a maintenance nightmare. VP unwilling to compromise. Five active sprints.

The day I gave my two weeks' notice I had to write a letter to quit, because I couldn't find the VP (who was my manager) for the entire day. He never did show up.

So he politely asked me not to inform people of my two-week deadline; he'd like to handle it. Of course I was more than willing to let him handle it in his own way. A week rolled by with no announcement, so I was unable to tell anyone. As a result, no one thought it important to learn and take over the tasks I was doing. Week two started with a scrum where people started to assign me work, and at this point I was tired of waiting and kind of upset. I told the entire team that most of that work needed to be someone else's because Friday was my last day. Everyone was upset, and yet my leaving still wasn't officially announced until one day before my last.

To top off a wonderful last week of trying hard to make sure people were prepped for me being gone, my exit interview consisted of HR telling me over the phone, "No, that's not why you're leaving." What do you say to that?

Red Flag 4: The process of handing off your knowledge is stymied, and no one in management cares why you are leaving. (Albeit this flag is more for the people still there.)

Monday, November 10, 2014

James Bach and Dr. James Whittaker: An attempt to understand viewpoints


Introduction, Intentions and Limits


I don't have political ties to many of the big thinkers of the industry.  I have seen James Bach speak, had him address my blog, addressed his blog, read his book, and I have even asked him a question at CAST, but I don't know him personally.  I have attempted to address Dr. Whittaker in his blog, but have never gotten a response, and I have never met him.  I have read Dr. Whittaker's (in)famous exploratory testing book, and my write-up of it is currently the second most popular post I have written.  I take that to mean I am not the only one with an interest in Whittaker's views, and I suspect I'm not the only one never to have met him.  In my consideration of both these individuals, I am not attempting to make it personal.  As you read this, please keep in mind that this is my attempt to understand their viewpoints.  This is an attempt to parse the words of each of them to gain insights.

I came back from CAST a few months ago, and while I'm not going to talk about CAST directly, I want to talk about a subject near and dear to my heart. Most of James Bach's talk was around the process of testing, but one piece was clearly controversial: Testing vs. Checking and the ideas around it. In talking with Bach, I now know I hold disagreements with some of his model. I mostly kept silent regarding this, except for my consideration of Heuristics and Algorithms, where I pondered whether what humans and computers do is equal, it just happening that humans are complex and currently unknown compared to a computer. This leads down the free-will question, which is unanswerable, and is why I didn't deep dive into that subject. I am still resisting that particular topic for now, but I may return to it at a later date.

Bach has interesting ways of subdividing his world, with the terms Testing and Checking, and with the word Test having meant something different in the past, even for Bach. Some day I will do a word of the week on Test, but that isn't today's subject either. In fact, I'm still dodging giving my detailed, full opinion on checking vs. testing, but be assured it will come up. Today's topic actually is my study of James Whittaker and what appears, in my view, to be James Bach's answer to that viewpoint. I chose these two in part because I was interested in why Whittaker left testing, and for Bach's part, he is a prolific exemplar of a testing professional for me to consider.  Bach has also addressed some areas that Whittaker has talked about, making him an easy-to-cite exemplar as well. This is not meant to be name calling; rather, I am trying to describe how I see these thinkers in test holding positions and viewpoints that are deeply held by many individuals. Certainly other people have variations of these viewpoints, and certainly I may be misreading parts of them. I struggle not to put words in people's mouths, but I have to admit some of this is speculation based upon the comments I've found that Whittaker made between roughly 2008 and 2014. Those are limited, and I have studied them over months, so my sourcing is limited. James Bach, on the other hand, is prolific, which means I might be missing data. With that said, on with the show!

Warning: This is a long-winded personal discovery post. You have been warned.

What Dr. Whittaker has said on Testing


In .NET Rocks 408 from 2009, Whittaker complained about only having guiding practices, with no solid knowledge, saying,
“I'll just be happy when testing works... because they have guiding practices, guiding theory....”   
Later he noted how more testers means, in his words, lazy developers:
“We claim in many groups a 1 : 1 dev/testing ratio [in Microsoft]... Google [has]... 1:10 tester ratio... 1:1, you think oh wow, that is a lot of testing, also a lot of lazy developers to be perfectly honest... At Google they have no choice but to be a little more careful in development.”
Whittaker wants to remove as much inventive thinking from the process as possible, saying things like “In Visual Studio 2010... much of that [change analysis] we will automate completely..." and
“What can you tell me about the new tools... we think you will be able to reproduce every bug.”  That is a fairly wide and amazing claim.  He even starts talking about intelligent code that will find bugs all by itself,  “Only if it is the right sort of unit testing. If it isn't finding actionable bugs... throw it out. If the test cases themselves were feature aware, they know, they know what bugs they are good at finding... this is the future I want.”  Why do you need to inject intelligent reasoning when all problems have been solved before, but are hard-coded to one particular program?  “I'm convinced, on this planet, everything that can be tested, has been tested.”  In fact, why not remove testing as an activity all together?  Just have the machine figure out what needs to be tested, even before you hit compile!  “We[test] need to be invited in to be a full fledged member... we have to get to the point where we [test] are contributing more. Can we get software testing to be more like spell check is in word.... this needs to be the focus of software testers. These late cycle heroics and these testers who are married to late cycle heroes are Christmas Tester Past. ”

Dr. Whittaker imagines a future where testers are more-or-less developers. They might have a few test skills, but basically, once the magic tools exist, developers should be solving these problems.  But of course he might have changed his mind since then?  I mean, that was in 2009!  What about modern times?

Quality is no less important, of course, but achieving it requires a different focus than in the past. Hiring a bunch of testers will ensure that you need those testers. Testers are a crutch. A self-fulfilling prophesy. The more you hire the more you will need. Testing is much less an actual role now as it is an activity that has blended into the other activities developers perform every day. You can continue to test like its 1999, but why would you? - March 25, 2014, http://www.stickyminds.com/interview/leadership-and-career-success-purpose-interview-james-whittaker
Nope, Dr. Whittaker's viewpoint has gotten even more extreme. Dr. Whittaker might say I am a developer, given the 'advanced' automation I write, but he certainly thinks most testers should just go away. He would rather have testers all fired, because [manual?] testers just enable developers to write bad code.  Why does he feel this way? Well, first of all, he's in a super-large company that hires a great deal of the best talent there is.  Microsoft has some amazing developers, and clearly they can write some amazing code, so maybe he sees them and assumes everyone else is like that.  Before he was at Microsoft, he worked for Google, so his perspective might be skewed.  Perhaps Microsoft and Google can get away with fewer or no testers.  Facebook seems to do it. He certainly seems to hint at it with his most recent blog post on the subject.  His infamous 'traditional test is dead' talk also seems to indicate that you need enough people to dogfood your product, which requires a large community of people willing to file bugs for you. That does not apply to everyone, but he might not realize that?  Dr. Whittaker's history has, as best I can tell, been mostly about bigger companies and teaching. Perhaps there are other reasons I have not been able to discover.  I'd love to hear from Dr. Whittaker!  Clearly James Bach has a reply...

James Bach and Automation


James Bach, a man who consults with and trains hundreds (or more?) of testers each year, also has an interesting vantage point.  He mostly approaches testing as an intelligent gathering of information through investigation.  He argues, in essence, that the investigation a programmer makes in order to write automated tests (i.e., "checks"), or the execution of those automated tests, is not equal to what a human being does.  All of Dr. Whittaker's magic tools make less sense in the mind of Bach.  Not that Bach doesn't want these tools, but he would argue that without an intelligent human, the tool provides no additional value.  He has gone so far as to start labeling automated testing as checking: not saying checks have no value, but that their scope is limited compared to a human's. I think Bach would even argue that an AI-oriented test method, load testing, model-driven testing and other "beyond human" testing could be done by a tester, given infinite amounts of time and/or humans. To be clear, James Bach seems not to view it as possible that there are tests that are beyond human, but I will address this later.  Therefore, I could simplify the argument to this:

if (Human.Testing > Computer.Testing) throw new DoNotOverloadWord("Computer Testing and Human Testing are not the same!");

Ironically, Dr. Whittaker might agree with that simplification if only you change the ">" to "<".  While I might agree with James Bach on his basic point, I think there are a few concerns.  One is that trying to change a word used in many different cultures, not just software testing, to something new just makes another standard.  He has added another definition to an already complicated word, making it even harder to understand. Worse yet, it seems that in defining the word that way, Bach has accidentally made his own standard for the word "test," which will compete with other standards. I don't know whether Bach wants to be a sort of de facto ISO standard or not, but that's how it might turn out.  I certainly see how his passionate arguments could sometimes be perceived as an attempt to be a sort of standard.  On the other hand, I also hate the answer "it depends on context" (without something to back it up), so at least when I read Bach's blog, I know what he means when he says checking.  Dr. Whittaker doesn't seem to put nearly as much rigor into his words, making his ideas a little more slippery.  I don't know that there is an ideal way of handling this problem.  Perhaps that is part of why schools of testing showed up, so you know whose standard definitions you use.

When you say "Test" which definition are you using?  The Bachian 2014 definition.  Oh, you've not moved on to the 2015 definition?  No, it's out now?

Back to the question of computers and whether they can go beyond humans in testing/checking.  I take it from this graphic that Bach feels a computer's ability to test/check cannot go beyond people's:


Machine Checking is inside Testing and James Bach defines Testing as:
Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc. 
So if you run a load test, the load that the machine generates allows you to observe the system under load, using machine-generated metrics like page load times or database CPU usage. The question is where the load test code belongs.  Is it 'checking' in that it navigates the pages?  I suppose it does blow up if it can't load a page, which is a check, but the primary point of a load test is not the functions of the page so much as the non-functional requirements. Is the data gathered by the load test a model?  If I have a graph of users against page load times, isn't that a model... created by a machine?  Bach does not seem to have a reply to this, other than to say that a machine embodies its script and thus is separate from humans. Whittaker, on the other hand, does not care as much about the distinction as he does about getting the technology good enough to put the data into engineers' hands without a middleman tester writing a specialized load test script. Better yet, in Whittaker's mind, get your users to do the testing by deploying the code.
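As a sketch of that point, the load script below "checks" almost nothing functionally, yet the table it emits is arguably a model. Everything here is hypothetical: the latency function is a stand-in for real HTTP fetches, with a made-up curve that degrades under load, because the shape of the output is what matters, not the numbers:

```python
import statistics

def simulate_page_load_ms(concurrent_users):
    """Stand-in for a real page fetch: hypothetical latency that
    degrades quadratically under load. A real load test would issue
    HTTP requests and measure wall-clock time instead."""
    base = 120.0  # ms, an assumed baseline
    return base * (1 + (concurrent_users / 50) ** 2)

def build_load_model(user_levels, samples=5):
    """The machine-generated 'model': concurrent users mapped to a
    typical (median) page load time at that level."""
    model = {}
    for users in user_levels:
        times = [simulate_page_load_ms(users) for _ in range(samples)]
        model[users] = statistics.median(times)
    return model

model = build_load_model([1, 10, 50, 100])
# The curve itself is the artifact a human then has to interpret:
for users, ms in model.items():
    print(f"{users:>4} users -> {ms:7.1f} ms median load time")
```

Whether producing that users-to-latency mapping counts as "modeling" (and thus testing) or merely "checking" is exactly the definitional question above; the machine built it, but a human still has to decide whether 600 ms at 100 users is a problem.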




Another issue is that while a test executes, it is limited to the data and the code written, even if you have a genetic algorithm for testing and use Google as a datasource.  These are powerful algorithms, and Google as a datasource outmatches my own knowledge base in a sense. While I agree this is perhaps different from a human, who is perhaps in some way more advanced than said algorithms, it can also be beyond human.  Perhaps because humans can reason and the algorithms are limited to the code written, there is a distinction to be made, which is fair enough, even if I am still not 100% convinced.  Is reasoning just another term for 'a process we don't fully understand,' while code is something we can?  Are 'tacit knowledge' and 'cultural understanding' really limited to humans, or will computers with enough data be able to generate similar models some day?  I admit human brains seem different; maybe we have a soul, maybe we have free will, maybe...  But to be fair, even with my imagined super-advanced testing system, I know 98% of automated testing systems are not attempting to approach these sorts of advancements.  Even if our automation systems aren't nearly as good as I think they can be, does that mean that testing vs. checking will always be a dichotomy, or will programs eventually get good enough that we can't tell the difference?  And if humans are always special and better, what about diminishing returns? Hand-washed clothing may be cleaner and possibly more water efficient, but a washing machine is preferred by nearly everyone in the modern world.  Perhaps all of this doesn't matter right now, but Dr. Whittaker seems to think we're there.  Maybe places like Microsoft are there?  Maybe "Tester" as a role makes less sense as more of that role is automated away, and maybe when you throw enough developers at creating the automation, you can have most of your testing automated?

So then another question comes to mind: if executing an "automated test" is checking, then when a human is writing the algorithm to do the 'checking,' is that testing?  In speaking with Bach, it is not, in his mind.  My brain thinking about the problem, writing code, creating models in my head about the system under 'test' (system under check?) is not testing, according to Bach.  A very interesting viewpoint, making my claim about this site being 98% about testing untrue in Bach's view.  Dr. Whittaker would seem to say that of course that is testing, even if I personally shouldn't be doing it, or at least should also be writing production code.

To be fair, Dr. Whittaker also argues that users are better than machines or testers.  This is a point both Whittaker and Bach might agree on, even if Bach might say only in some contexts.  But in saying that, I think Dr. Whittaker loses the point that users of a product can change products quickly and are fickle.  The mighty IE 6 failed, and clearly IE is no longer top dog.  Listening to users might have been part of that, but an insider advocating for the users also might have been helpful! Users often don't know what they want, and what they want changes quickly. We have known this for years. What makes Dr. Whittaker think that all products have a single user base that only wants one set of features? One man's bug is another man's feature.  For that matter, what about products that don't have an exploitable user base?  How do you dogfood software for shipping products (OMS) when it costs money every minute the software is broken?  How about we dogfood a pacemaker; that will work, right?  James Bach would note that quality is not a thing you can build into software but rather a sort of relationship.  I suspect that if you break the software too often, you harm the users' interest in that software and they will move on.

What can I see from here?


I very much appreciate that these giants of the industry allow me to stand on their shoulders and look out upon the sea of possibilities.  I don't know exactly where I stand in an 'official' sense, but I am sure that both of them hold good ideas, ideas that should be investigated... nay, should be tested.  To his credit, James Bach agrees and wants to continue to make his ideas better.  The problem is that with different contexts there are different needs.  I have pondered from time to time whether, if we could lock down context (e.g., e-commerce website, X-Y income, X-Y products sold, Z types of browsers), we could say something about which style of testing would make more sense.  In considering both arguments, I have to say that in my experience, automation can go beyond what a human can test, but the value provided is often different from that provided by manual testing.  On the other hand, the results often require human analysis, meaning that human judgment is still required. Like a load test that knocks over a system after throwing 500% of normal load at it: a human is required to do the analysis and render a value judgment.  I have also seen some development shops that have gone without testers. In at least one case, they seemed to have almost pulled it off on a superficial view, but once you dig in, I think it became clear that some people or places do need testers.  I don't think there is a way of giving detailed advice that is universal, and without some well-controlled studies, it may be impossible to tell when a particular methodology is superior in a given context.

While I can't speak to which method is best or which are superior to others, I can speak to what worked and what didn't for me in my context. Using only my own contexts and what I have seen, I can make a few broad statements.  Few people do automation well.  The most successful automation does not involve a UI, or when it does, it is part of a manumated process. Those who do automation well are treasures.  Few people do manual testing well, though more than do automation well.  These people are also treasures.  If you have only bench testing by the developers, adding either automated or manual testing (or checking) is likely to increase the quality of the product in small to medium size organizations.  I am not sure how to logically organize test/development for large organizations (more than 500 people in software development).

Not everything is easy to test with code.  Not everything is easy to test with people.  If you can't get people to do the testing, either it might not matter enough or you should automate it and accept the limits of checking.  If your automation keeps breaking, either your software is flaky or perhaps it shouldn't be automated (yet, if ever).  Testing is hard, and you need people who understand testing, be they developers or testers.  For that matter, you also need people who understand software development, the business, the customers and probably the accounting.  How you label these people may affect how they do the job, and maybe it matters, but what matters most is that the job is done as much as it needs to be done.

Ultimately, software development is hard.  Dr. Whittaker and James Bach come at the problems from very different places, but both want to improve the quality of software.  Even with their extremely different views, both have contributed towards their goals.  These extremes do cause some polarization, and sadly this post might sound negative towards both their views, but it is hard to find common ground between two extremes. On the other hand, Whittaker helped make Visual Studio 2008 and 2010 better at testing, while Bach has been working on perfecting RST!  Both of these men have worked towards their visions and have created useful artifacts from their nearly disparate visions. While I have disagreements with both sides, I am glad they were around, making my job of testing better and easier.

So if I haven't said it yet, thank you.

One last thing.  If Dr. James Whittaker or James Bach responds to this with any corrections, I will update this post.  I may be challenging my understanding of their ideas, but I am not trying to put words in their mouths.

Tuesday, November 4, 2014

Consultant's War

My co-contributor Isaac has been pondering the fact that we see more consultants blogging regularly, speaking their minds and controlling the things we talk about.  I don't care if it is Dorothy "Dot" Graham, Markus Gärtner, Matt Heusser, Michael Bolton, Michael Larsen (Correction: As Matt pointed out, I got my facts mixed up and Michael is not a consultant), Karen Nicole Johnson, James Bach or Rex Black.  There are dozens of consultants I could have listed (I mean, how could I have missed Doug Hoffman!), some of whom I have personally worked with.  So this is not meant to be a personal attack on any of them.  I realize I list more CDT folk than others, and while that is not intentional, I read less of the thought leaders from the other schools.  If you look, I bet most of the people you read either are or were consultants [In the interest of full disclosure, I have written a handful of articles and have been paid, but I am not nor have I ever been a consultant; with you reading this, that is at least one counter-example, but we'll get to that in a bit.].  If you made it here, there is little doubt you've read at least one of those authors' works at least once.  They are stars!  I mean that literally.  I found this image of James Bach just by searching his name:
From: http://qainsight.net/2008/04/08/james-bach-the-qa-hero/
I didn't even use image search to find that; it came up on the first page of my Google search results!  I respect James, he writes well and has interesting ideas, but I have no opinion on his actual testing skills, as by his own measure, I have not seen him perform.  Keeping that in mind, I have not seen any of the above consultants do any extended performances in testing, even if I have listened to many of them talk and gained insights from them.  They know how to communicate, and they are often awesome writers.  Much better than I am.  They all are senior in the sense that they have done testing for years.  Some of them disagree with others, making it often a judgment call, based on their written works, of who to listen to.  Some of it is the author's voice.  In my opinion, Karen Johnson is a much softer and gentler voice than James Bach.  But some of it is factual.  I have documented several debates between Rex Black and various other people I have listed.  Rex has substantially different ideas on how the world should work regarding testing.

ISO 29119


James Christie, another consultant, brought up standards at CAST 2014.  It was a good talk, and clearly he had some valid concerns about the standards that have been created.  I have talked about that earlier (I am not going to include as many 'justifying' links, as all my points and links are in the earlier article).  I am actually not terribly interested in the arguments; in reality, ISO 29119 is more of a placeholder than the actual point of interest.  I am more interested in discussing who is creating these debates.  In part we are in a war between consultants, and in part we are in a war of 'bed making.' Clearly the loudest voices are from those who have free time and money at stake. The pro-ISO side has taken the stance of doing as little communication as possible, because it is hard to have a debate when one side won't talk. It is a tactical choice, and it is very difficult to compromise when one side doesn't speak. Particularly when the standard is already in place, the pro-standards side doesn't have much to gain from the messy-bed side, who want the standard withdrawn.

Both groups of consultants have money to gain from this.  A de facto RST standard would make Bach a richer man than he already is.  Matt Heusser's LST does give at least some competition of ideas, but still, that isn't much more diversity. The standards would probably make the pro-standards consultants more money, as they can say, "as a creator of the standard, I know how to implement it."  Even if they made no more money from making the standard, they will be better known for having made it. Even if I assume that no one was doing this out of self-interest, the people best represented are the consultants and, to a lesser degree, the academics, not the practitioners who may feel the most impact.  Those leading the charge both for and against the standard are primarily consultants or recent ex-consultants. Clearly this is a war being waged by the people with the most time, and those are the consultants. Ask yourself: of those you have heard debate the issue, what percentage are just working for a company?

Granted, I am not a consultant, and it takes a LOT of effort to write these posts.  It isn't marketing for me, except possibly for future job hunting and the hope that I will help other people in the profession.  I know non-consultants are talking about it, but we don't have a lot of champions who aren't consultants.  Maybe most senior-level testers become consultants, perhaps due to disillusionment with testing at their companies.  Maybe that is why consultants fight so bitterly for and against things like standards.  Perhaps my assertion that money on the table is part of it is just idle speculation not really fit for print, but I can honestly believe it applies to a large majority of the consultants involved.

Bed: Do you make yours?


Then what is it that causes these differences of view?  Well let me go back to the bed making.  To quote Steve Yegge:
I had a high school English teacher who, on the last day of the semester, much to everyone's surprise, claimed she could tell just by looking at each of us whether we're a slob or a neat-freak. She also claimed (and I think I agree with this) that the dividing line between slob and neat-freak, the sure-fire indicator, is whether you make your bed each morning. Then she pointed at each of us in turn and announced "slob" or "neat-freak". Everyone agreed she had us pegged. (And most of us were slobs. Making beds is for chumps.)
That seems like a pretty good heuristic, and I think it is also a good analogy.  Often those who want more documentation, with things well understood before starting the work, and who feel more in control via documentation, are those who feel standards are a good idea.  They want their metaphorical beds made.  They like having lots of details written down, they like checklists, and they would rather make the bed than have a 'mess' left all day. A nice, neatly made bed feels good to them. Then there are the people who see that neat bed and think it is a waste of time at best. At worst, they think that someone will start making them make their bed too. Personally, I think "making beds is for chumps," just like Steve Yegge. Jeff Atwood would go even further and call it a religious viewpoint:
But software development is, and has always been, a religion. We band together into groups of people who believe the same things, with very little basis for proving any of those beliefs. Java versus .NET. Microsoft versus Google. Static languages versus Dynamic languages. We may kid ourselves into believing we're "computer scientists", but when was the last time you used a hypothesis and a control to prove anything? We're too busy solving customer problems in the chosen tool, unbeliever!
Clearly the consultants, many of whom I take valuable bits of data from, care about what they do, and they have tried to lock down their empirical knowledge into demonstrable truths.  But that doesn't mean they have 'the truth'. I know as testers we try to have controls, but I am not convinced we have testing down to a science.  That is why I feel we aren't ready for standards, though I also recognize the limits of my own knowledge.

For what it is worth, I tend to be against bed making; I think all this formal work and checklist making is rather pointless unless it fits what I am doing. Having one checklist to rule them all, with a disclaimer that YMMV and you should do only the bits you want, doesn't sound much like a standard, but those who like their beds nice and neat probably feel very happy when they come home at night. The 'truth' about the value of making your bed, the evidence that it is actually better, is less than clear. Maybe one day we will have solid evidence in a particular area that one method is better than another, but we aren't there yet, in my opinion. And still I don't make my bed.  In case you are wondering, I think this is probably one of the hardest questions we have in the industry:  what methods work best given a particular context, and what parts of the context matter most?

Wednesday, October 29, 2014

In consideration of ISO 29119

I have been aware of this debate ever since attending CAST 2014, but I've not been quick to sign.  I wanted to investigate and see what other viewpoints there might be.  To quote Everett Hughes, a sociologist:
“In return for access to their extraordinary knowledge in matters of great human importance, society has granted them a mandate for social control in their fields of specialization, a high degree of autonomy in their practice, and a license to determine who shall assume the mantle of professional authority. But in the current climate of criticism, controversy, and dissatisfaction, the bargain is coming unstuck. When the professions’ claim to extraordinary knowledge is so much in question, why should we continue to grant them extraordinary rights and privileges?”
This is a serious question, one for which I appreciate that a standard might seem like a correct method of 'proof' of our professional authority.  However, even if the standard is in fact a professional guide, I don't think anyone but practitioners can judge its value.  As someone who has had a keen interest in trying to understand this standard, and not wanting to judge too quickly, I have tried to get hold of a great deal of material about it.  I have engaged with some of those who disagree with my point of view, such as Professional Tester.  In fact, they published my response in their Oct. 2014 issue (albeit with some minor mistakes from my draft).  I looked into purchasing the standard.  I have looked for those who are pro-standard and what they've had to say about this movement, as well as what they have said in the past.

That being said, I'm not convinced we actually know what a standard should look like, much less whether this is the standard we need.  Maybe it is what we need, but I strongly doubt it.  I think we are likely many years or decades away from being able to claim we have truly repeatable practices oriented towards different contexts, assuming that is even possible.  I am unable to judge at present what the standard actually says, as the standard and the standard body's work is less than transparent.  I was forced to sign up with personal details in order to read documentation about the standard's creation, although that has since been changed (NOTE: the file name also changed, including a date, which makes it hard to know if anything else changed, as there was no change log as of 10/26/2014).  The standard requests I pay in a currency that I don't use.  I've signed the petition to withdraw the standard not because I know it is wrong, but because I can't tell, which makes it useless at best and dangerously wrong at worst.  What I can tell is that the standard's various authors' other documents around the standard contain what I consider to be confusing if not outright contradictory statements, making me doubt the actual standard.  That breaks the social bargain we professionals have that Everett Hughes so eloquently described.

Perhaps you could say that I should have been personally involved in the standard, and that is a valid complaint.  However, I'm not aware of anyone particularly reaching out to the AST, nor have I heard of it from any other group except for James Christie's talk in 2014.  I have attended both AST-sponsored and non-AST-sponsored conferences for years, so this isn't a case of willful ignorance.  This is my first chance to review the material and process, yet I have not found the process particularly open or transparent.  I see claims that there are no best practices alongside claims that the standard will create best practices.  I have found so many confusing statements by the standard's body that I must conclude the standard should be withdrawn until it can be thoroughly reviewed and modified, if it can even be modified into something useful.

Even ignoring past statements, the recent defense of the standard raises questions.  One of the easiest and most obvious to consider is who wrote the standard.  Then there is the question of who pays for its creation.  Well, clearly this was not just a labor of love, as Dr. Reid says the costs of development have to be passed on to the customers of the standard.  I should note, my blog makes me no money, and I am not a consultant, so I have little incentive to make money speaking about this.  I made no money writing my letter to the editor, and I certainly don't demand you pay to read my work.  I am not discounting the cost of writing the standard, just simply saying that if you plan on having your expenses paid by publishing your work, you are not simply doing this out of the kindness of your heart.  There is a lot of analysis that could be done on the defense of the standard alone, but that is outside the scope of this particular post.

One of the oddest and most compelling arguments for both sides is from Rex Black, in which he notes that about 98% of all testers won't care one way or another.  I think this is true, which makes the standards mostly not matter, but it also means that the silent 98% count on us to ensure these are the right standards, lest they become popular and that silent majority ends up forced into using them.  I fear that this non-involvement is also further evidence that we testers as a group are not acting like a profession.  It isn't that we don't claim to have "extraordinary knowledge" (Dr. Reid at least seems to argue we mostly agree on this knowledge), but rather that the majority of people don't feel any need to actively participate.  I realize this might be an argument for why we need a standard, to show the disinterested the 'right' way to test, but to me it indicates just how young our industry is, and that is why we aren't ready for a standard.

Even if the ISO body decides these documents should ultimately stand, the objections of the AST/Context-Driven community need to be noted in such a standard.  Furthermore, making the document open would go a long way toward allowing the community to discuss it beyond the smaller standards committee.  I recognize that ISO needs income to maintain itself and won't publish the documents for free for everyone, but certainly some sort of 'for individuals, not corporations' license could be used (and I don't mean the sort of nonsense Matt Heusser describes).  Finally, if this is an attempt to demonstrate our commitment to professional testing, then it needs to be accessible to our community.  The work needs to demonstrate its value rather than being buried away, inaccessible to those who would use it.

Tuesday, October 14, 2014

Noise to Signal: An Attempt to Understand What Smart People Have to Say

Where I've been


Frankly, I don't know how to start this thing.  Perhaps it belongs in my bedroom when I was in high school writing little programs.  Perhaps it belongs just a few years back.  I have and am constantly looking to, in some way, do better for whatever "better" is for me.  I don't pretend to have answers for everyone or even that there is a static answer for myself.

In my bedroom I had finally found an interesting productive task.  I was learning to program.  I would search for hours for that one little magic bean to get me moving along again.  To use a game metaphor, I was leveling up in my knowledge at a rapid pace.

At my first testing job, I did firmware and hardware testing.  I sat around, watching machines work and sometimes break down.  As there was a large hardware component, I quickly found there was little to learn in rapid fashion, so instead I leveled up my political and social abilities, talking with the leads and learning how the organization worked.  I became a lead, and saw that it too was a trap, with just more paper pushing, so I found a different area where I could do more software-oriented testing.  I learned about LDAP and Kerberos and, and, and...  I was on the path to leveling up.  I was still programming in my bedroom, still writing games, and later different applications I found a need for, or an interest in developing.

Skipping ahead a little, I started developing my own automation.  I built framework after framework, learning what things I didn't like and what I did.  I started teaching people.  I have mentored several people, sometimes more successfully than others.  I leveled up in different areas around testing and development.  I built my own testing tool, which admittedly has been neglected for years at this point, but I gained a lot of insight into the underbelly of an automation tool.  I even did market research to see if it was worth starting my own company.  I didn't, for reasons not within the scope of this blog, but it appears to have been a wise choice.

All of these things I did on the basis of my own judgment and learning.  Isaac might have been something of a mentor, and I did read a few books, but mostly I did it on my own.  The last few years I have been looking at where to go from 'here'.  I have done most of the interesting, difficult forms of automation; I have even worked towards pioneering my own methods and frameworks.  So in looking for what's next, I've been trying to learn from smart people and what they have to say, and not just within testing.

What do smart people say?


If you Google what smart people know or do, you are bound to find pithy pieces of generic advice.  Not to say there aren't hidden gems in there, but those gems are minor.  Frankly, in the world of wisdom, anything involving a list reads like someone who doesn't know what they are talking about but hopes their work gets hits.  Add exclamation marks for further effect, and it feels less like real data and more like an advertisement.  Let me give you some examples:
  1. Work Hard!
  2. Don't Quit, Even When It's Hard!
  3. Never Stop Trying!
  4. Ignore Everyone Else and Do What You Think Is Right!
  5. Don't Look Back!
These are all easily disqualifiable, being either too generic to be useful or outright falsifiable (e.g., ignoring your past makes you a future fool; another pithy statement, but sometimes true).  So seriously guys, how do I get better?

Well let me give you something that looks like a list but is rather a set of sources I have considered.  These are not quotes, but roughly the idea I took from them.
  • Don't set goals, use systems. - Scott Adams
  • Find things that don't scale in a linear fashion. - Scott Adams, Randy Pausch
  • Change is inevitable. - Robert Heinlein
  • Progress comes from people trying to solve a problem in a more effective/lazy way - Robert Heinlein, Paul Graham, Dr. Richard Hamming
  • Luck matters, but only a little - Too many to list
  • Knowledge and productivity compounds, the earlier or more you learn, the more you'll know later. - Dr. Richard Hamming, Steve Yegge
  • I have managed doubts in what I 'know', and that is okay. - Dr. Richard Hamming, James Christie
  • People's requests and errands take time, up to all of it. - Heinlein, Paul Graham
For those interested, I will include some links at the end of this blog for where I got some of my ideas.  I will stop there, because this isn't meant to be a final list of smart people's knowledge. Instead, I'm going to talk about the challenge Heinlein, Dr. Hamming, Paul Graham and Steve Yegge have all presented in some way or fashion.

Do you know what problems matter?  Why aren't you working on them?


After thinking about this mind-blowing set of questions, I ponder: what answer can I give?  I could give answers for my own company, but that feels too small.  I could give answers to the world's problems, but that is too big.  So maybe I can give answers around my industry, with a focus on testing:
  • Why do testers leave at year 5?  
  • Why do automation efforts fail so often (and is that related to people only staying in testing for 5 years or less)?  
  • How should we be teaching the next generation of testers?  Do we have accessible training designed to get people to the 5 year mark?  What about beyond that?
  • Why does only a small percentage of humanity appear useful to me (and am I in that percentage)?  
    • For that matter, is my definition of useful a good and valid definition, and would investigating that provide me with an interesting problem?  
  • How do we extract the most useful data in the world and provide it to the people who care?
  • How can I get machines to solve most basic problems?  
    • What do we do with those who were working before these problems were automated?
  • How do we effectively train the next generation so they don't repeat the same mistakes as the last generation?
  • When are particular methods of testing more or most productive?
  • How can we significantly improve testing without additional resources (E.G. Tools)?
  • How do we improve or prevent the conditions for sweatshop-like development companies?
  • How do we improve security in our software development?
This list was developed over one night and is not complete, but the important question to me is: am I working on them?  Any of them?  However, I didn't write this blog post for me.  The real question is...  Are you working on problems that matter?  And if not, why not?  If you are working on important problems, then let's talk!



[Some] Sources:
http://www.paulgraham.com/hs.html
http://www.paulgraham.com/hamming.html
http://www.cs.virginia.edu/~robins/YouAndYourResearch.html
http://www.amazon.com/How-Fail-Almost-Everything-Still/dp/1591846919/
http://steve-yegge.blogspot.com/2008/06/done-and-gets-things-smart.html
http://www.youtube.com/watch?v=ji5_MqicxSo
Edit, Additional Link: http://www.youtube.com/watch?v=vKmQW_Nkfk8

Thursday, October 2, 2014

Societal Norms, Software Development and Culture

Preamble

I don't think that everything we do and consider in regards to testing should actually revolve around testing.  That is to say, while testing is a primary consideration, it is just one of many.  For example, making our team as effective as possible might be a consideration.  Say that testing a particular feature would set back the sales group, who are trying to demo the product in the environment you are testing on.  In that case, not testing might in fact be an important activity in your testing career.  This post is simply about factors that, while very important in software development, are not limited to that subject.

I personally tend to read many different and varied development-related blogs, and you will find I write broad posts, posts that are not directly tied to testing but certainly can be applied to it.  This is one of those; you have been warned.  I have held this article for many months while I struggled with the words, which I am still not completely happy with, so I hope you forgive the age of the linked article.  In reading the post Games Girls Onions, a post specifically about game development, I found some interesting questions I have asked in my own career.  I think I can boil it down to two questions:

1. How do I deal with individuals?

2. How do I deal with groups of individuals?

This particular person found some of the interactions with her male co-workers to be sub-optimal, but the question is, did they intentionally ignore those questions, or were they trying to find the right balance between the group and the individual?  Furthermore, if that balance is wrong, is that a cultural thing or an issue with a particular person?  Last but certainly not least, if some particular external group dislikes what the culture does, is it the individual or the group that needs to change?  Let's try to dive into these questions.

In my personal dealings, I have found that each and every individual is different.  That is not newsworthy, of course, but it is worth thinking about.  I have dealt with thousands of different people, and each one of them had some subtle variation.  However, most high-level personality traits can be categorized, and those categories can play a part in how you deal with someone in a new situation.

Complexity


Consider this example: you know that over-zealous co-worker who cares a great deal about usability.  Even though you don't know for sure, you can guess that, like other testers you've dealt with in the past, they will enjoy working in an area they are passionate about.  Offering to trade your UI piece for their API piece might be a good idea if you don't have any particular passion for UI testing.  This of course assumes your boss is okay with it, which makes this at least a small group choice.  Since the last few times you juggled work around she was totally fine with it, you don't even bother to ask first and just do it.  Let's pause here and consider.  You chose to do the thing that made your co-worker happy while not bothering your boss, as they are likely busy.  On an individual basis, you made your co-worker happy.

You didn't consider the team, as you don't 'report' to the team.  Now in a culture that appreciates employee happiness and efficiency, this might work out fine.  In a culture that would rather have consistency and order, this choice may be a problem, even when no one is directly affected.  Your boss might respect your initiative, but still might chew you out because the group or culture doesn't embrace that style of work.  Maybe the guy you swapped with is disliked by one of the developers on the UI team (as he is too passionate), so the swap makes things worse for the team.  But who decides these things?  Human activities are really complicated, and to put blame on any individual is likely unfair.

Norms


Social norms show up in a particular culture to define what is or isn't acceptable, so that the group knows how to react.  I realize saying this might offend someone, but when you try to break social norms, expect the society to attempt to break you.  In the culture of development, there is an assumption that it is a male-dominated activity.  Like it or not, that is presently, in most countries, a social norm, and those who break that norm are more likely to feel social pressure.  Is it natural?  Norms are natural in human society; this particular norm is just cultural.  I recently heard in a BBC radio interview that one country (no citation, as I can't find the details) has about a 1:1 ratio of women to men in software development.  To consider a different culture and norm, Victorians expected a particular style of dress to fit into a particular class.  If you didn't fit into that dress, you failed their normative test and were looked down upon.  Now that norm is broken, and we can acceptably wear jeans.  On the other hand, in my culture, I can't wear slippers into work every day without being looked at strangely (if not fired, depending on which job I look back at).

Back to the article: the developer talks about her nice window office, a thing that is usually reserved for higher-level developers.  Maybe that isn't the case at her office, but to me that is automatically a status symbol.  She notes that one time a weird incident happened, where a drunk co-worker showed up and talked to her in a way she didn't like for 20 minutes.  She talks about being treated differently, because she is being classified by a social norm that she did not agree to.  The problem is, we as a society don't have sophisticated categories in our culture which model real human beings.  It is difficult to know exactly how to handle a female developer who accepts crude jokes and wants to be looked upon as socially confident, but then feels threatened by being alone with a drunken coworker.  It is not that I don't appreciate the dichotomy, how situations can be uncomfortable, nor how the details might be important in giving me a more solid opinion.  In culture, norms tend to be broad heuristics, and changing a culture and its norms takes a long time and requires us to make mistakes along the way.

Being a Fraud


In fact, right now, we have a very cold, unfeeling society.  Consider Kevin Fishburne's comment in that post,
From the mortgage industry to pizza joints it's been my experience that regardless of how good or bad your relationships appear at work once you walk out that door they're over. I liken being at work to acting in a play; you're not being you, but what you think you need to be at the time.
 I'm sure there are plenty of exceptions where great lifelong friendships are formed on the job but it hasn't happened to me, sometimes much to my surprise. Not sure if this is just me or an actual phenomenon, but if the latter it may have some bearing on why people act differently at work then they would elsewhere....
The idea that we all follow social norms, faking our way through the work experience, appears not to be particularly atypical.  The fact that the author feels the need to complain about these issues shows she is fighting at least one social norm.  As testers, it is not unreasonable for us, too, to want to test our current social situations and attempt to improve them.  One item I agree with the author on, but am not sure is a 'bug', is the feeling that you are a fraud.  To take a few quotes from the movie Art & Copy:
The frightening and most difficult thing about being what somebody calls a creative person is that you have absolutely no idea where any of your thoughts come from, really. And especially, you don't have any idea about where they're going to come from tomorrow. - Hal Riney

I think most of the creative people are so damn insecure that they want to think they know everything, but they know deep in their hearts they're just in deep trouble from the minute they get up in the morning. So if you can tell them "that's what you're supposed to be", that's kind of liberating. - Dan Wieden

When we are often not even genuine with ourselves, how can we be honest with others?  Dunbar's number (and similar estimates) suggests we can't care about more than about 300 people, with the number more likely around 150.  This isn't just other people; this is you too.  This isn't just you; this is other people as well.  Keep that in mind.  Since our capacity to care about humanity is so low, our ability to repair or change an entire culture is extremely limited.  Thus our social norms are not, and can't be, designed by any particular person for the masses.  On the other hand, I can only keep track of so many people's personal preferences.  When a workplace culture says something like, "Dirty jokes are okay at work" and the broader culture says "Dirty jokes are never okay when a woman is around," what do you do?  Perhaps your mind checks to see if you know the woman and decides based upon that?  Perhaps you test them with a joke?  What happens if that one joke gets you called in front of HR? I recall one man being written up for calling a group of testers 'test monkeys' (he too was a 'test monkey' in his own opinion), because it insulted one man's sensibilities.  You can't please everyone.

We Are All Unique


The alternative is to say we are all unique.  This idea of course runs up against Dunbar's number: I can't handle everyone, so I have to limit myself to a certain set of people and use categories for the rest.  Even if we are all unique, I can't know how to start with each unique personality.  I start to develop rules, like "this group seems to often act like X", and apply them, but that takes time.  I can't start fresh, knowing nothing, every time.

For example: does silence mean yes or no?

If you answered the question, you are wrong.  If you remained silent, you might also have been wrong (unless silence means "I don't know").  As an example of yes, consider the political process in the EU.  Wikipedia itself notes that silence doesn't always mean consent.  In one of my psychology textbooks back in college, I recall reading that in the Middle East a pilot attempted an unscheduled landing and asked for permission to land.  He got no response back and assumed that meant yes.  He got into some rather big trouble when he tried to land and they shot at him, because silence meant no.

If you attempt to make no assumptions ever, you have to relearn everything every time, and when reading others' writing, you can't even know if the person you are learning from is the same each time, as a persona does not equal a person (as Ben Franklin demonstrated).

In at least one example in the article, the author mentions how swearing is fine by her.  One of the commenters noted that she did not feel the same way.  If you can have people upset both ways, you can bet there is a third way.  Hell, even asking, which feels a little odd, might upset someone who assumes the matter is already culturally settled.  Note how I wrote 'hell' just now.  I wonder if I upset any readers or caused them to pause.

Impossibility to Adapt


If you can't adapt because everyone's situation is too dynamic and everything depends, what do you do?  Since we can't fully adapt and culture is impossible for one person to change alone, we must suffer, with those who fight to change the situation doing so slowly, making sacrifices for what they believe in.  I don't mean to sound harsh, but the only way to change a culture is to work really hard at it and often suffer slings and arrows.  Sometimes waiting for others to move on or die off helps, but that takes years.  Alternatively, one can leave or build their own culture.  Basically, no matter which option you take, expect that it will take a lot of work!