
Sunday, March 25, 2018

Code Camp Presentations

I presented for two hours at code camp earlier this month on the topics of blackbox and whitebox software testing.  The two topics were broken out so that anyone could attend one session or the other and get value out of it, but attending both was better.  Blackbox Testing was the first hour and Whitebox Testing was the second hour.  For those who did not attend my presentation, I should note you may be missing some context that I gave during my talk but was not recorded in the slides.

One important aspect not in my slides, but which I told the audience, was that my blackbox talk was heavily influenced by the BBST courses developed by Cem Kaner and the book Lessons Learned in Software Testing.  So if you have questions, please feel free to bring them up in the comments below.

Blackbox Testing Abstract

Blackbox Testing involves testing software without knowing the internals of the code. This is a survey presentation, covering a broad set of topics meant to expand your interests and provide self-study opportunities. We will cover: 
* Schools of Thought: Where we have been, why testing has changed
* Testing Missions: How to know what you should test 
* Testing Strategies: How to make your testing organized 
* Testing Tactics: How to make your testing better

This will include practical examples as well as theory. You do not need to attend the whitebox session to gain value from this talk. However, these presentations are meant as a pair and will not cover the same material.

Whitebox Testing Abstract

Whitebox Testing involves testing software with deep knowledge of the internals of the code. This is a survey presentation, covering a broad set of topics meant to expand your interests and provide self-study opportunities. We will cover: 
* Schools of Thought: Where we have been, why testing has changed. 
* Limits: Why you need both the light and dark sides. 
* Techniques: Static Analysis, Security Analysis, Unit, Integration and System Testing 
* Tooling to make that blackbox system more white. 
* Automation techniques, including both white and blackbox techniques.

Almost none of the content here will be a repeat of the Blackbox Testing presentation. While it is not required, it is recommended that you attend the Blackbox Testing presentation.

Friday, December 12, 2014

Thinking about No Testers

In attempting to hear out the various viewpoints of those who subscribe to the idea that Agile should move towards removing the role of tester, I have yet to see anyone who actually suggested eliminating testing completely, if that is even possible.  So let us unbox this idea and examine it, with as little bias towards keeping my own job as possible.  Here are the rough set of arguments I have seen/heard in TL;DR (too long, didn't read) form:
  • Limited deploys (to say .01% of the users and deploy to more over time) with data scientists backing it will allow us to get things done faster with more quality.
    • Monitoring makes up for not having testers.
  • Hiring the right team and best developers will make up for the lack of testers.
    • Writing code that "just works" is possible without testing, given loose enough constraints (which leads to the next argument).
  • Since complete testing is impossible, bench testing + unit testing + ... is enough.
  • Quality cannot be tested in; it is in the design/code.
    • Testers make developers lazy.
  • It is just a rename to change the culture so that testers are treated as equals.
    • Testers should be under a dev/team manager, not in a silo.
  • It is a minor role change where testers now work in a more embedded fashion.
  • We hire too many testers who add inefficiencies as we have to wait <N time> before deploying.
    • We only hire testers to do a limited set of activities like security testing.
  • Testers do so many things the name/profession does not mean much of anything.
  • With the web we can deploy every <N time unit>, so if there are bugs we can patch them quickly.
  • As a customer I see those testers as an expensive waste.

That is sure a lot of different reasons, and I'm sure it is not an exhaustive list.  Now I shall enumerate the rough responses I have seen:

  • A little genuine anger and hurt that 'we' are unwanted.
  • A little genuine disgust that 'we' are going through the 80's again.
  • Denial ('this will never work')
  • It can work for some, but probably not all.  (This follows a very CDT 'it depends' point of view)
    • Not everything can easily be instrumented/limited deployed.  For example, I have heard Xbox 360 games cost ~10+k to do an update.
    • In some software, having less bugs is more important than the cost of testing.
  • This is why we need a standard (I never heard this one, but it is an obvious response)
  • Concern about the question of testing as a role vs task.
  • Different customers have different needs they want to fulfill.
  • Good teams create defects too.
  • We need science to back up these claims.
  • Testing should move towards automation to help eliminate the bottlenecks that started this concern.
  • It is very difficult to catch your own mistaken assumptions.
    • If you need someone else to help catch mistakes, why not a specialist?
While I don't have a full set of links for where I have heard this back-and-forth of viewpoints, I think it is an interesting subject to consider.  I am going to go through my bulleted list top down, starting with the "no testing" side of the world.  I believe I have seriously considered the limited deployment and letting the customer be the tester in a previous post.  In that same post I noted that at least not all teams can make up for a lack of testers, so again, I think it has been addressed.

Complete Testing is Impossible & Quality Cannot be Tested in


The premise is true: complete testing is in fact impossible, and I do agree with that.  The conclusion that hiring testers is therefore not needed is the part I find interesting.  That quality of code and design cannot be tested in is also an interesting statement.  The idea that testers make developers lazy feels very unsubstantiated, with possible correlation-causation issues (and I have seen no studies on the subject), so I am going to leave that part alone for now.

I write software outside of work.  Mostly open source projects, of which I have 5 to my name.  Most of them were projects in which I was looking to solve a problem I had and wanted to give to the world.  In more than half of them I wrote no formal tests, no unit tests, and to be honest the code quality in some of them is questionable.  There are bugs I know how to work around in some cases, or bugs that never matter to me, and thus the quality for me is good enough.  I have had ~10k downloads from my various projects and very few bugs have ever been filed.  Either most people used it and then deleted it, or the quality was good enough for them for a free app.  I hired no testers, and as the sole dev I was the only one testing the app.  I think this is a +1 for the no-tester group: if you work on an open source app for yourself, no tester (maybe) is required.  CDT would even agree with this, as it does depend.  Context matters.  I am sure other contexts exist, even if I have not seen them.  What I have not seen is specifics around contexts that might require no testers, with the exception of the Microsoft/Google-sized companies.

Now can you test code quality in?  If you are about to design the wrong thing and a tester brings that up, did the tester 'test' code quality in?  Well, he didn't test any software, the no-tester side would say.  Sure, but he did test assumptions, which is often part of testing.  What about the feature the tester notes the competition has that you don't, which was discovered when comparing competitor product quality?  What if I suppose the dev knows exactly what to develop; can the tester add value?  Well, I know I have made code suggestions that have gotten a developer thinking.  However, I could have just as easily been an XP developer sitting beside him as he coded rather than a tester.  Well, what about TDD?  Is that testing quality in?  If I write tests first, I have to think about my design.  That is testing quality in, but again that is just the developer.  What if the tester and the developer co-design the TDD, or the tester recommends additional tests during development?  Again, this could be a developer, but a tester is often thinking about the problem differently.  So while I think the statement is false, if I look at it from the spirit, I think it might be true in some cases.  There are some really great developers who do think like testers, and they could do it.  However, I think that is only some of the developers.  There will always be junior developers, and there are always other mindset factors that play in.

 What is in a Name & Quick Deploys?


What's Montague? it is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What's in a name? that which we call a rose
By any other name would smell as sweet;
- Juliet, Romeo and Juliet
Does the fact that we are in an in-group called 'test' and some see developers as outsiders matter?  Vice versa is probably true too.  In fact, the other day Isaac and I had a small debate about correcting a tester on some very minor minutiae of deep mathematical and development theory.  To me, perhaps a developer who was neck deep in building new languages would find such a correction helpful, but a tester who was doing some exploration around the subject didn't need such a correction.  In fact, such a correction might make things more confusing to anyone who didn't have a deep understanding of the subject.  I myself am guilty of the insider/outsider mentality.  We all categorize things and use those categories to make choices.  Perhaps renaming us all as human beings would make things better.  If an organization is particularly combative, this might be a useful change.  That being said, there is also a group identity for people who see themselves as testers or QA.  There is a specific field of study around it, and that also adds value.  Also, by having a sub-group, the tension, even if minor, might possibly help people grow.

This is a mushy, non-business-oriented view of the world, and perhaps those who are just interested in how fast things can go today would prefer avoiding the conflict in order to get things done.  The fear of doing this from a QA point of view comes from the 1980s, when defects were a big deal.  Renaming people feels like going back to that era.  The argument back is that this isn't the 80s; we can ship software almost instantly.  The argument to this is that some software can't just be shipped with broken code.  Work in many different areas must be 'mostly right' the first time.  Also, this grand new world that people want to move into has not been vetted by time.  Wait a while and see if it is the panacea it is believed to be.  I recognize the value in renaming positions in order to improve team cohesion and believe it comes from a genuine desire to improve.  Just keep in mind what is lost and the risks in doing so.

I should note, I have seen some of this implemented before in organizations I have worked in.  Working under a single development manager, the development manager would often ignore the advice given by the "QA" person.  Perhaps a full rename would have fixed things, but often development wants to move forward with the new shiny thing.  Having no testers is perhaps a new shiny, and so development would be on board.  The people who are more aware of risk of change are the QA and operations people.  This is a generalization, unsubstantiated by anything but my observation, but it feels like QA is now calling this a risk and because it threatens our jobs we are being told we are biased, so we can be ignored.  Psychology and politics are complex things and anything I write will not adequately cover this subject.  I don't object to people trying this, maybe it will work.  I still see it as a risk.  I hope some people take it and report back their observations.

Perhaps the final argument about us is that we are many different roles tied to one name.  This is a complexity that came up at WHOSE.  We talked less about testing than you might expect.  Communication and Math and Project Management and ...  We had a list of about 200 skills a tester might need.  We are a Swiss Army knife.  We are mortar for the projects developed, and because every project is different, we have to fill in different cracks for each project.  We happen to do that best in some ways because we happen to see many different areas of a system.  We have to worry about security, the customers who use it, the code behind it, how time works, the business, and lots more.  That is ignoring the testing pieces, like what to test, when to test, and how to justify the bugs you find.  Maybe we could remove the justification piece, but sometimes the justification matters even to the developer, because you don't know if it is worth fixing or what fix to choose.  I think we can't help being many different things, because what we do is complicated.

Customers Don't Want QA?


While that is a lie in regards to me personally (I get excited when I see QA in the credits of video games), perhaps it is different for others.  I have written elsewhere about how users of mobile devices are accepting more defects.  Perhaps that is true in general.  We know there is such a thing as too much QA.  To go to the absurd, when QA bankrupts the company, it is clearly too much QA.  We also know there is too little QA, when the company goes bankrupt from quality issues.  Regulatory constraints that shut a company down for lack of QA would be another such boundary.  Those are the edge cases, and like a typical tester, I like starting at boundary cases.  Customers clearly want some form of quality, and as long as testers help enhance quality (e.g., value to someone who matters), testers are part of that equation.  Are they the only way to achieve quality?  Maybe not, but that involves either re-labeling (see above) or giving the task to others, be they the developers or the customers or someone else.  Some customers like cheaper products with less quality, and that market does exist.  Having looked at the free games that exist on mobile devices, my view is the quality is not nearly as good.  Then again, I want testers.  Maybe I'm biased.  Even if I were a developer for a company, I would like to have testers.

Should the ideas of no testers be explored?  I have no objection, but I would say that it should be done with caution and be carefully examined.  Which makes the companies who try it...well, testers.

 Brief Comment on the Community


I have heard a lot of good responses, albeit limited ones.  I have also seen some anger and frustration.  Sometimes silly things are said in the heat of the moment, likely from both sides.  We also have some cultural elements that are jagged-edged.  These can create an atmosphere of conflict, which can be good, but often just causes shouting without much communication.  One of the problems is most people don't have the time or energy to write long-winded, detailed posts like this.  We instead use Twitter and small slogans like #NoTesting.  I personally think Twitter as a method of communicating is bad for the community, but I acknowledge it exists.  How can we improve the venue of these sorts of debates?  How can we improve our communication?  I don't pretend to have an answer, but maybe the community does.

UPDATE: I wrote a second piece around this topic.

Thursday, January 16, 2014

Why can't anyone talk about frameworks?

In writing for WHOSE, I was dismayed at the total lack of valuable information regarding automation frameworks and developing them.  I could find some work on the frameworks with names (data driven, model driven and keyword driven), but almost nothing on how to design a framework.  I get that few people can claim to have written 5-10 frameworks like I have, but why is it we are stuck with only these 3 types of frameworks?

Let me define my terms a little (I feel like a word of the week might show up sometime soon for this).  An architecture is a concept, the boxes you write on a board that are connected by lines, the UML diagram or the concepts locked in someone's head.  Architecture never exists outside of the stuff of designs and isn't tied to anything, like a particular tool.  Frameworks on the other hand have real stuff behind them.  They have code, they do things.  They still aren't the tests, but they are the pieces that assist the test and are called by the test.  A test results datastore is framework, a file reading utility is framework, but the test along with its steps is not part of the framework.

Now let me talk about a few framework ideas I have had for the past 10 years.  Some of them are old and some are relatively recent.  I am going to pull from some of my presentations of old, but the ideas have at least been useful for one framework of mine, if not more.

Magic-Words


I'm sure I'm not the first one to come to this realization, but I have found no records of other automation engineers speaking of this before me.  I have heard the term DSL (Domain-Specific Language), which I think is generally too tied to keyword-driven testing, but it is a close and reasonable label.  The concept is to use the compiler and autocomplete to assist in your writing of tests against the framework.  Some people like keyword-driven frameworks, but in my past experience, they don't give compile-time checking, nor do they help you via autocomplete.  So I write code using a few magic words.  Example: Test.Steps.*, UI.Page.*, DBTest.Data, etc.  These few words are all organizational and allow a new user to 'discover' the functionality of the automation.  It also forces your automation to separate the testing from the framework.  A simple example of that can be given:

@Test()
public void aTestOfGoogleSearch() {
 Test.Browser.OpenBrowser("www.google.com");
 Test.Steps.GoogleHome.Search("test");
 Test.Steps.GoogleSearch.VerifySearch("test");
}

//Example of how Test might work in C#, in Java it would have to be a method.
public class TestBase { //All tests inherit this
  private TestFramework test = new TestFramework();
  public TestFramework Test { get { return test; } }
}

Clearly the steps are somewhere else while the test is local to what you can see.  The "Test.*" provides access to all the functionality and is the key to discoverability.

Reflection-Oriented Data Generation


I have spoken of reflections a lot, and I think reflections are a wonderful tool for solving data-generation-style problems.  Using annotations/attributes to tell each piece of data how to generate, what sorts of expectations there are (success, failure with exception X, etc.), filtering the values you allow to generate, and then picking a value and testing with it works great.  I have a talk later this year where I will go in depth on the subject, and I hope to have a solid code example to show.  I will certainly post that up when I have it, but for now I will hold off on that.

...

Okay, fine, I'll give you a little preview of what it would look like (using Java):

public class Address {

 @FieldData(classes=NameGenerator.class)
 private String name;
 @FieldData(classes=StateGenerator.class)
 private String state;
 //...

}
public class NameGenerator {

  public List<Data> Generate() {
   List<Data> d = new ArrayList<Data>();
   d.add(new Data("Joe", TestDetails.Positive));
   d.add(new Data(RandomString.Unicode(10), TestDetails.Unicode, TestDetails.Negative)); //Assume we don't support Unicode, shame on us.
   //TODO More test data to be added
   return d;
  }

}
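Since the preview only shows the annotation side, here is a hedged sketch of the other half: how reflection might walk those annotations and collect generated values.  The names (FieldData, Generate, Data) mirror the preview, but the consuming code and the simplified Data class are my illustration, not the actual framework.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

public class ReflectionDataDemo {

    // Hypothetical annotation naming the generator class(es) for a field.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface FieldData { Class<?>[] classes(); }

    // Simplified stand-in for the Data value-plus-tags holder.
    static class Data {
        final String value;
        final String[] tags;
        Data(String value, String... tags) { this.value = value; this.tags = tags; }
    }

    static class NameGenerator {
        public List<Data> Generate() {
            List<Data> d = new ArrayList<>();
            d.add(new Data("Joe", "Positive"));
            return d;
        }
    }

    static class Address {
        @FieldData(classes = NameGenerator.class)
        private String name;
    }

    // Walk a class's fields, instantiate each declared generator via
    // reflection, and collect the candidate values per field name.
    @SuppressWarnings("unchecked")
    static Map<String, List<Data>> generateFor(Class<?> type) throws Exception {
        Map<String, List<Data>> result = new HashMap<>();
        for (Field f : type.getDeclaredFields()) {
            FieldData fd = f.getAnnotation(FieldData.class);
            if (fd == null) continue;
            for (Class<?> gen : fd.classes()) {
                Object instance = gen.getDeclaredConstructor().newInstance();
                List<Data> values = (List<Data>) gen.getMethod("Generate").invoke(instance);
                result.computeIfAbsent(f.getName(), k -> new ArrayList<>()).addAll(values);
            }
        }
        return result;
    }
}
```

Calling generateFor(Address.class) yields a map from field name to candidate values, which a data-driven test runner could then iterate, pairing each value with its expectation tags.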

Details


Why is it that we, as engineers who love the details, fail to talk about them?  I get that we have time limits and I don't want to write a book for every blog post, but rarely do I see anyone outside of James McCaffrey, and sometimes Doug Hoffman, talk about the details.  Even if you don't have a framework or a huge set of code, why can't you talk about your minor innovations?  I come up with new and awesome ideas once in a while, but I come up with lots of little innovations all the time.

Let me give one example and maybe that will get your brain thinking.  Maybe you'll write a little blog on the idea and even link to it in the comments.  I once helped write a framework piece with my awesome co-author, Jeremy Reeder, to figure out the most likely reason a test would fail.  How?

Well, we took all the attributes we knew, mostly via reflections on the test, and put them into a big bag.  We knew all the words used in the test name, all the parameters passed in, the failures in the test, etc.  We would look at all the failing tests and see which ones had similar attributes.  Then we looked at the passing tests to see which pieces of evidence could 'disprove' the likeliness of a cause.

For example, say 10 tests failed, all 10 involving a Brazilian page, 7 of those touching checkout, and 5 of those ordering an item.  We would suspect the Brazilian language first, as it is the attribute common to the most failures.  However, if we had passing tests involving Brazilian, then that seems less likely, so we would check whether the checkout failures had any passing tests involving checkout.  If none had, we would say there was a good chance that checkout was broken and notify manual testers to investigate that part of the system first.  It worked really well and solved a lot of bugs quickly.

I do admit I am skipping some of the details in this example; for instance, we did consider variables in concert, so Brazilian tests that involved checkout might be considered together rather than just as separate variables.  But I hope this is enough that, if you wanted to, you could build your own solution.
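A minimal sketch of that elimination logic, treating attributes independently (the real framework also weighed variables in concert; every name here is my hypothetical illustration):

```java
import java.util.*;

public class FailureCauseGuesser {

    // Each test contributes a bag of attribute words (taken from its name,
    // parameters, pages touched, etc.). Count how often each attribute shows
    // up among the failures, then discard any attribute that also appears in
    // a passing test -- a pass is evidence 'disproving' that attribute as the
    // cause. What remains is ranked by how many failures share it.
    public static List<String> likelyCauses(Collection<Set<String>> failing,
                                            Collection<Set<String>> passing) {
        Map<String, Integer> counts = new HashMap<>();
        for (Set<String> attrs : failing) {
            for (String a : attrs) {
                counts.merge(a, 1, Integer::sum);
            }
        }
        // Any attribute seen on a passing test is exonerated.
        Set<String> exonerated = new HashSet<>();
        for (Set<String> attrs : passing) {
            exonerated.addAll(attrs);
        }
        counts.keySet().removeAll(exonerated);
        // Rank the surviving attributes by failure count, most suspicious first.
        List<String> ranked = new ArrayList<>(counts.keySet());
        ranked.sort((x, y) -> counts.get(y) - counts.get(x));
        return ranked;
    }
}
```

With the Brazilian/checkout example above, the Brazilian attribute would rank first unless a passing Brazilian test exonerated it, at which point checkout would bubble to the top of the suspect list.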

Now it's your turn.  Talk about your framework triumphs.  Blog about them and, if you want, put a link in the comments.

Tuesday, December 10, 2013

WHOSE up for a little skill?

A Brief Summary:


I went to WHOSE, a workshop with a mandate to create a list of skills, and am now back.  I want to briefly summarize my experiences.  The idea was generated by Matt Heusser to attempt to build a list of skills that could be learned by anyone interested in a particular skill in testing.  The list of skills was to be put in a modern format and presented to the AST board.  In part I did it because I felt the CDT community had too little organized detail on how to implement the activity of testing.

WHOSE worked on this thing?


I was not present for the first few hours, so I missed out on giving a talk.  When I got there I met my fellow WHOSE'rs:

Jess Lancaster, Jon Hagar, JCD (me), Nick Stefanski, Pete Walen, Rob Sabourin, David Hoppe, Chris George, Alessandra Moreria, Justin Rohrman, Matt Heusser (facilitator), Simon Peter Schrijver (facilitator), Erik Davis (facilitator).

It was a blur of new faces and people I had read from but had not gotten to meet before.  Later, Doug Hoffman showed up.  I was made late when my plane did not actually make it to the airport, having been rerouted.

WHOSE line is it anyway? (Day 1)


This being my first conference, my initial reaction was to keep silent and just observe.  The group had about 200 cards on a table with a bunch of skills semi-bucketed (skills that 'felt' similar).  The definition of a skill was still unknown to me, in spite of the fact that I had researched and considered this problem for hours.  I had also looked into how to model the information; I had considered the Dreyfus model and how it might be used.

Many of the blog posts I have written were in fact considerations of skills, such as my reflections post, to help prepare me.  I had debated what a skill is with my fellow testers, and even created a presentation, and now I was standing facing what felt like 200 or so skills.  How do you organize them?  Outside of that, what questions do I ask, and who do I ask?  Sometimes when a tester doesn't have a lot of answers, and there is no one obvious to ask who has the time, you just poke around, and that is what I did.  I created a few skills I saw were missing and possibly a few that might not be considered skills, or at least not as written.  For example, I wrote out Reflections and Tool Development, both of which I thought were reasonable, and XML, which I thought was questionable.  For some reason, as a young tester I found XML to be scary because I didn't understand the formatting, and so XML seemed to belong; yet did it?  Eventually the task moved to grouping, which seemed to happen while I was still a little behind.  Clearly my impromptu skills were a bit lacking.

WHOSE cards are these?


I wanted to take on the technical piece, since I feel that is the part where I can provide the most feedback.  I had written on the airplane a hierarchy of technical skills in the hope that the group could use it, but feared it might be too technical, too 'computer science'.  Having mentioned this, no one seemed enthused to combine the two lists, and I'm not sure if that is for better or for worse.  Having Object-Oriented Programming (OOP) without going into the depth of typing, generics, etc. and how that is useful in testing, much less automation or tool smithing, seems incomplete.  Coding as the only technical skill involving programming is clearly too small.  Is OOP a good stopping point?  I would have chosen more depth, but I also know there is a limit to what 15 or so people can get done in a few days.

We started through the cards and, skipping my list, I acted as a sort of wiki gatekeeper (a role I didn't much like) while other people researched the definition of each skill as well as resources for it.  Some people seemed interested in very formal definitions and resources, while others liked informal 'working' definitions and web blogs.  I mean no criticism of any particular viewpoint, but I tended towards the informal side.  We ended with a show and tell of our work, which seemed interesting.  One group had a lovely format.  Another group had extensive research, and a third group had lots of definitions completed.  I noted that if we moved each definition onto its own page, no gatekeeper would be required.  We closed up feeling a little dazed by the amount of work left.

WHOSE on first? (Day 2)


Rob S. had emailed Matt H. overnight suggesting we change the format a little.  Why not make these into a more CDT style?  Instead of having very formal works, move to a context-based definition set of skills.  That is to say, skills based upon stories of how we used a skill.  While Matt generated a proof of concept around this, we observed the formatting and tool usage.  Once it was understood, we started writing up skills based upon our interests.  We wrote and we wrote and we wrote.  The question of what skills belong where, and what is a skill, was pushed aside for a more formal editing process.  XML as a skill was removed, though the original list of 'skills' was saved in a deprecated area.

I wrote somewhere between 10-15 skills over the course of a day.  I know my skills as a writer were stretched that day.  I heard warnings about the flame wars and anger we might see from this venture.  I expect that; people in testing have a habit of finding bugs.  I still have lots of open questions on where this goes next.  I wonder how we will edit the skills.  I wonder a lot about how this process will be managed.  I wonder where this will be posted and how it will be viewed.  Those are still open questions.  Questions I hope to see resolved at some point.  After writing until our fingers bled, we finally went to dinner.  Much thanks to AST for bribing...er... treating us to dinner. :)

WHOSE going to finish this? (Day 3)


We as a group attempted to finish off by figuring out who will finish which skills.  I have a lot of skills still needing finishing.  I know others do too.  I signed up to help deal with the format-shifting question, so when this comes out it'll be readable in useful formats.  I appreciated Matt's openness in considering methodologies and talking through 'what is next'.  I may be slower to blog for a while as I work through my WHOSE backlog.  Truthfully, I was not as 'on top of it' the last day as I was the previous two days.  I think exhaustion had finally hit, so I'm glad it was just a half day.

WHOSE missing from this narrative?


This was a rather interesting experience.  I have never been to a workshop before.  I never saw any 'LAWST-Style' workshops before so I didn't have that to compare to.  I have worked with a bunch of bright people before, but not some of the 'experts' of the world (even if they would reject that label...).  That is a little humbling.  Seeing Rob write is amazing.  Watching the speed Matt can break out an article is...well... something to be seen.  In fact, in the spirit of that, I have written this entire article straight and will attempt to limit my editing.  Sorry, poor readers. :)  

The group as a whole had some nice philosophical discussions, but no one got angry, and overall I think that helped make the content better.  Is the content useful?  I honestly don't know, I'm not an oracle, but I hope so.  Is the content done?  No, I hope this to be a living document and that others will get a chance to help contribute to this and grow.  I hope they too can understand that their context and usage of a skill will be considered just as valuable as our context and usage of a skill.

I would like to make a special thanks to Matt, Erik and Simon for setting up this conference.  Also thanks to the AST and Doug Hoffman for feeding us.  Thanks to Hyland for hosting the conference.  For other perspectives, see the following blogs: here, here, here, and here.  One last piece I would be remiss to neglect to mention: at the airport afterwards I got to have a long chat with both Rob and Jon.  That was a great conversation which I really enjoyed.  I'm still considering some of the questions Rob posed to me.  Expect some future blog posts!

UPDATED 12/27: Added more blog post links.

Tuesday, December 3, 2013

WHOSE - Part 1

I'm keeping this brief and unedited, as I don't have much time.  Like many people, I have been getting busier for the holidays, but for me part of it is going to WHOSE.  I will perhaps be delayed by a few weeks as I gather together material and prepare for some presentations for next year.  That being said, I hope to have some good material from WHOSE, and I will make sure to publish it as quickly as I can.  I do have a few older reviews I wrote for some classic books.  I may post them to keep you all entertained.  Be back soon.