Showing posts with label CAST. Show all posts

Thursday, January 22, 2015

Leveling Up Your Testing Skills

Recently I gave a talk intended as a quick survey of some of the core areas of testing.  I certainly didn't hit everything, but I meant to give people tools to go exploring on their own.  I used a text editor for my presentation, because not everything needs PowerPoint.  I thought it might be useful to others, so I am recreating it here.  I have made some minor modifications, removing things that only made sense in the context of a live performance, and I added a few things based upon the discussion during the presentation.

 ----- Page 1

Survey:

Who has heard about AST?

How many of you have read or even skimmed a testing book in the last year?

How many of you have read a white paper?

How many of you have written ANY bug reports outside of professional obligations (your job, school, etc.)?

Has anyone tried to formally QA an article?

How many of you have heard about the “No Tester” movement?

How many of you have heard about ISO 29119?

 ----- Page 2

Purpose:

My purpose is not to teach you facts about testing, even if some of that happens, but rather to teach you which questions you didn't know to ask and to show you places you didn't know to look.

NOTE: I know I cite myself several times, and while there are other sources, few are as narrowly focused on the strategies I am going to talk about.

 ----- Page 3

Do you know the different ‘schools’ of testing?

Four Schools of Testing*: www.testingeducation.org/conference/wtst_pettichord_FSofST2.pdf


Analytical School - Code Coverage, Unit Testing


Factory School - Metrics, Traceability


Context-Driven School - Exploratory, Multidisciplinary based upon context


Quality Assurance School - Protect the user, Gatekeeper, Process-Oriented

* [Edit: Since this presentation, I have discovered an updated model including a 5th school, Agile.  See https://www.prismnet.com/~wazmo/papers/four_schools.pdf]
 ----- Page 4

I’m mostly going to speak from the CDT point of view…

(Aside: While I appreciate CDT's philosophy, be aware that if you approach the community online, some people participating in CDT's development can be passionate, challenging, or aggressive, and some are very interested in defining words.  Even if you are not interested in that, CDT is still worth looking into; just figure out whom you find value from and whom you don't.)
  
 ----- Page 5


Two of the largest "CDT" classes:

BBST: http://bbst.info/ ; http://www.testingeducation.org/BBST/ ; http://www.associationforsoftwaretesting.org/training/courses/ ; http://altom.training/bbst-foundations/

RST: http://www.satisfice.com/rst.pdf

(Both have free portions, BBST includes all the lectures for free)
  
 ----- Page 6-8


Models and Heuristics:

What are they? Read my work: Words of the Week: Heuristic [& Algorithm]: http://about98percentdone.blogspot.com/2013/10/words-of-week-heuristic-algorithm.html

“A Heuristic is an attempt to create a reasonable solution in a reasonable amount of time.  Heuristics are always Algorithmic in that they have a set of steps, even if those steps are not formal.” 

List of Heuristic Mnemonics: http://www.qualityperspectives.ca/resources_mnemonics.html

SFDIPOT - Structure, Function, Data, Integrations, Platform, Operations, Time

HICCUPPSF - History, Image, Comparable Product, Claims, User Expectations, Product, Purpose, Standards and Statutes, Familiar Problems
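To make the idea concrete, here is a minimal sketch (my own illustration, not from the talk or from the mnemonic's authors) of how a tester might encode SFDIPOT as a simple coverage checklist for session notes.  All function and variable names here are invented for demonstration.

```python
# Illustrative sketch: the SFDIPOT product-coverage mnemonic as a
# checklist, so a tester can see which dimensions a test session
# has not yet touched. Names are hypothetical, for demonstration only.

SFDIPOT = [
    "Structure", "Function", "Data", "Integrations",
    "Platform", "Operations", "Time",
]

def uncovered(touched):
    """Return the SFDIPOT dimensions a session has not yet touched."""
    touched_set = {t.capitalize() for t in touched}
    return [dim for dim in SFDIPOT if dim not in touched_set]

# Example session notes tagged with the dimensions they exercised:
session_notes = ["function", "data", "platform"]
print(uncovered(session_notes))
# ['Structure', 'Integrations', 'Operations', 'Time']
```

The point is not the code itself but the habit it represents: a heuristic like SFDIPOT gives you a repeatable set of prompts, even though deciding what each dimension means for your product remains a judgment call.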

Heuristic Test Strategy Model: http://www.satisfice.com/tools/htsm.pdf

How do you know when you are right? http://about98percentdone.blogspot.com/2013/11/how-do-you-know-when-you-are-right.html



Models:


Session-Based Test Management: http://www.satisfice.com/sbtm/index.shtml

Thread-Based Test Management: http://www.satisfice.com/blog/archives/503

Tours: http://www.developsense.com/blog/2009/04/of-testing-tours-and-dashboards
   
 ----- Page 9


Books (just a few):

An Introduction to General Systems Thinking: http://www.amazon.com/exec/obidos/ASIN/0932633498

Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design: http://www.amazon.com/Exploratory-Software-Testing-Tricks-Techniques/dp/0321636414

Lessons Learned in Software Testing: A Context-Driven Approach: http://www.amazon.com/Lessons-Learned-Software-Testing-Context-Driven/dp/0471081124/

Agile Testing: A Practical Guide for Testers and Agile Teams: http://www.amazon.com/Agile-Testing-Practical-Guide-Testers/dp/0321534468/
 
 ----- Page 10


Self-based learning

Go find a testing job for the weekends like:
http://www.utest.com/
http://sourceforge.net/p/forge/helpwanted/testers/
http://www.guru.com/d/jobs/
http://weekendtesting.com/america
   
 ----- Page 11


Where are you going?

http://blog.red-gate.com/wp-content/uploads/2014/02/Test-Engineering-Skills-v3.pdf

(Go take a look at this; it is an interesting list of skills)
   
 ----- Page 12 & 13 

Experts and active members of the community:

There are different people with different areas of expertise who are worth reading.  Not everyone does much automated testing, while other people are more interested in coaching or test design.  Look at where your strengths are and find people who will be able to supplement them.  Some of these people may in fact oppose my own views, but that too can be valuable; debate helps you understand your own point of view.  Here is an example or two….

James Bach* — Testing is an Intellectual Inquiry
James Christie* — Consultant, Professional Auditor and Tester, started the entire ISO 29119 discussion
Jon Bach* — Session-Based Test Management (with his brother)
Michael Bolton — Testing vs Checking
Huib Schoots — Dutch Testing Coach
Scott Barber — Load Test Guru
Rex Black — Proponent of the Factory School of Testing***
Harry Robinson — Model-Based Testing
Justin Rohrman* — Active in the community, board member of AST
Robert Sabourin** — Author of I Am a Bug (pictures by his 12-year-old daughter)
Jerry Weinberg — Systems thinking, consulting, author
Karen N. Johnson* — Professional Tester's Manifesto
Fiona Charles* — Author, Speaker
Matt Heusser* — Founder of Miagi-Do
Cem Kaner — Originator of CDT, BBST, legal concerns around testing



* I met these folks at CAST, and you too can meet these sorts of folks at CAST.
** I met Rob at WHOSE and enjoyed his writing there, but he tends to be more academic in my experience.
*** [edit: Since publication my understanding of Rex Black's ideas has evolved. Rex specifically rejects the concept of Schools and, as best I understand, would prefer I say he's a proponent of the strategy of Factory-style testing in some contexts.  I leave the original work as it stands for historical reasons.]

(Some of these people are accessible to talk with, some only speak through their work, some through conferences…)
    
 ----- Page 14

An Exemplar (with more details): 

Doug Hoffman

http://www.softwarequalitymethods.com/h-papers.html
(pay attention to the dates of the papers, some have updated versions)

He has written exhaustively about automation and testing.  While that isn't the only work he's done, it will give you a good idea of the depth you can go into with automation.  On the other hand, if you want to learn to code, Doug is probably not the man to visit.  He's interested in the theory behind automation.  He was one of the minds behind the ideas of High-Volume Test Automation and Exploratory Automation.

I wouldn’t go to Doug to find out the latest idea behind Selenium.  I would go look at Doug’s work to understand what sort of automation techniques might make sense to apply to a problem.
    
 ----- Page 15

Questions you need to ask (that I can’t answer):

How much time am I willing to dedicate to learning?
What sort of things do I want to learn?
How varied do I like my studying?

Is this a job or a profession?

Do you want to do something that matters?

Are you okay with having 10 years of experience that really only add up to 1 year of valuable experience?

 ----- Page 15-17

Questions you need to ask (that I can give hints on):

Where can I learn?
 - Blogs
 - BBST/RST and other classes (LST is another CDT class, RBCS is from the factory school)
 - Books
 - Co-workers (particularly challenges)
 - Conferences (CAST - https://www.youtube.com/playlist?list=PLQB4l9iafcelXpJnK6IyDsoFeEb1icqrl , Code Camp, Star West, Star East, Agile testing days, GTAC - https://developers.google.com/google-test-automation-conference/2014/stream , etc. ; Often recorded on Youtube)
 - White papers: http://about98percentdone.blogspot.com/2015/01/white-papers-and-who-owns-your-education.html

What should I learn?
 - Start by glancing at the foundations class.  See how much of it you know already.
 - Read blogs about what interests you.
        - Look up the people I talked about (I suggest Googling "<Name> Testing".  Almost all of them have blogs).
        - Look at the top blogs and see what they have to say: http://www.testbuffet.com/lists/top_114_software_testing_blogs_2014/page/1/
        - If those methods don’t work, try looking at the fire hose and see if there is an interesting topic: http://www.testingreferences.com/testingnews.php
 - Start your own blog and learn to write; here is mine: http://about98percentdone.blogspot.com
       - Feeling shy?  Why not start by writing comments.
 - Pick up one book.  Read it.  Don’t like the ones I listed?  Check out this extensive list: http://www.testingreferences.com/software_testing_bookstore.php
   - Write a review on your blog
 - QA a news article.  Write up bugs on failed assumptions.
      - http://about98percentdone.blogspot.com/2013/09/testing-babies-for-learning.html

Learn about your interests AND figure out how much you are gaining.  If your energy wanes, move to a different topic or take a break.

What else can I do?

- Figure out what matters and work on that: http://about98percentdone.blogspot.com/2014/10/noise-to-signal-attempt-to.html
- Don’t keep working in the same position for too long.  Changing jobs is healthy.
- If you aren’t much of a reader, but rather audio or video, try some of these: https://flowoftesting.wordpress.com/2014/09/02/podcasts-and-videos-for-testers/
- Learn from the history of testing: http://www.testingreferences.com/testinghistory.php
- Find some heroes and some villains and learn from them (villains will help you clarify your own views): http://about98percentdone.blogspot.com/2014/01/heroes-and-villains.html

Have fun.  If this doesn’t sound fun at all, go find something that does, even if it is a different career.

Tuesday, November 4, 2014

Consultant's War

My co-contributor Isaac has been pondering the fact that we see more consultants blogging regularly, speaking their minds and controlling the things we talk about.  I don't care if it is Dorothy "Dot" Graham, Markus Gärtner, Matt Heusser, Michael Bolton, Michael Larsen (Correction: As Matt pointed out, I got my facts mixed up and Michael is not a consultant), Karen Nicole Johnson, James Bach or Rex Black.  There are dozens of consultants I could have listed (I mean, how could I have missed Doug Hoffman!), some of whom I have personally worked with, so this is not meant to be a personal attack on any of them.  I realize I list more CDT folk than others; that is not intentional, but I read less of the thought leaders from the other schools.  If you look, I bet most of the people you read either are or were consultants.  [In the interest of full disclosure, I have written a handful of articles and been paid for them, but I am not, nor have I ever been, a consultant.  With you reading this, that is at least one counter-argument, but we'll get to that in a bit.]  If you made it here, there is little doubt you've read at least one of those authors' works at least once.  They are stars!  I mean that literally.  I found this image of James Bach just by searching his name:
From: http://qainsight.net/2008/04/08/james-bach-the-qa-hero/
I didn't even use image search to find that; it came up on the first page of my Google search results!  I respect James.  He writes well and has interesting ideas, but I have no opinion on his actual testing skills, as by his own measure, I have not seen him perform.  Keeping that in mind, I have not seen any of the above consultants do any extended performance of testing, even though I have listened to many of them talk and gained insights from them.  They know how to communicate, and they are often awesome writers.  Much better than I am.  They all are senior in the sense that they have done testing for years.  Some of them disagree with others, making the choice of whom to listen to often a judgment call based on their written works.  Some of it is the author's voice.  In my opinion, Karen Johnson has a much softer and gentler voice than James Bach.  But some of it is factual.  I have documented several debates between Rex Black and various other people I have listed.  Rex has substantially different ideas on how the world should work regarding testing.

 ISO 29119


James Christie, another consultant, brought up standards at CAST 2014.  It was a good talk, and he clearly had some valid concerns about the standards that have been created.  I have talked about that earlier (I am not going to include as many 'justifying' links, as all my points and links are in the earlier article).  I am actually not terribly interested in the arguments themselves; in reality, ISO 29119 is more of a placeholder than the actual point of interest.  I am more interested in discussing who is creating these debates.  In part we are in a war between consultants, and in part we are in a war of 'bed making'.  Clearly the loudest voices are from those who have free time and money at stake.  The pro-ISO side has taken the stance of doing as little communication as possible, because it is hard to have a debate when one side won't talk.  It is a tactical choice, and it is very difficult to compromise when one side doesn't speak, particularly since the standard is already in place and the pro-standards side doesn't have much to gain from the messy-bed side that wants the standard withdrawn.

Both groups of consultants have money to gain from it.  A de facto RST standard would make Bach a richer man than he already is.  Matt Heusser's LST does give at least some competition of ideas, but still that isn't much more diversity.  The standards would probably make the pro-standards consultants more money, as they can say, "As the creator of the standard, I know how to implement it."  Even if they made no more money from making the standard, they will be better known for having made it.  Even if I assume that no one is doing this out of self-interest, the people best represented are the consultants and, to a lesser degree, the academics, not the practitioners who may feel the most impact.  Those leading the charge both for and against the standard are primarily consultants or recent ex-consultants.  Clearly this is a war waged by the people with the most time, and those are the consultants.  Ask yourself: of those you have heard debate the issue, what percentage are just working for a company?

Granted, I am not a consultant, but it takes a LOT of effort to write these posts.  It isn't marketing for me, except possibly for future job hunting and the hope that I will help other people in the profession.  I know non-consultants are talking about it, but we don't have a lot of champions who aren't consultants.  Maybe most senior-level testers become consultants, perhaps due to disillusionment with testing at their companies.  Maybe that is why consultants fight so bitterly hard for and against things like standards.  Perhaps my assertion that money on the table is part of it is just idle speculation not really fit for print, but I can honestly believe it applies to a large majority of the consultants involved.

 Bed: Do you make yours?


Then what is it that causes these differences of view?  Well, let me go back to the bed making.  To quote Steve Yegge:
I had a high school English teacher who, on the last day of the semester, much to everyone's surprise, claimed she could tell just by looking at each of us whether we're a slob or a neat-freak. She also claimed (and I think I agree with this) that the dividing line between slob and neat-freak, the sure-fire indicator, is whether you make your bed each morning. Then she pointed at each of us in turn and announced "slob" or "neat-freak". Everyone agreed she had us pegged. (And most of us were slobs. Making beds is for chumps.)
That seems like a pretty good heuristic, and I think it is also a good analogy.  Often those who want more documentation, with things well understood before starting the work, and who feel more in control via documentation, are those who feel standards are a good idea.  They want their metaphorical beds made.  They like having lots of details written down, they like checklists, and they would rather make the bed than have a 'mess' left all day.  A nice, neatly made bed feels good to them.  Then there are the people who see that neat bed and think it is a waste of time at best.  At worst, they think that someone will start making them make their bed too.  Personally, I think "making beds is for chumps", just like Steve Yegge.  Jeff Atwood would go even further and call it a religious viewpoint:
But software development is, and has always been, a religion. We band together into groups of people who believe the same things, with very little basis for proving any of those beliefs. Java versus .NET. Microsoft versus Google. Static languages versus Dynamic languages. We may kid ourselves into believing we're "computer scientists", but when was the last time you used a hypothesis and a control to prove anything? We're too busy solving customer problems in the chosen tool, unbeliever!
Clearly the consultants, many of whom I take valuable bits of data from, care about what they do, and they have tried to lock down their empirical knowledge into demonstrable truths.  But that doesn't mean they have 'the truth'.  I know as testers we try to have controls, but I am not convinced we have testing down to a science.  That is why I feel we aren't ready for standards, but I also recognize the limits of my own knowledge.

For what it is worth, I tend to be against bed making; I think having all this formal work and making checklists is rather pointless unless they fit what I am doing.  Having one checklist to rule them all, with a disclaimer that YMMV and you should do the bits you want, doesn't sound much like a standard, but those who like their bed nice and neat probably feel very happy when they come home at night.  The 'truth' about the value of making your bed, the evidence that it is better, is less than clear.  Maybe one day we will have solid evidence in a particular area that a particular method is better than another, but in my opinion we aren't there yet.  But still, I don't make my bed.  In case you are wondering, I think this is probably one of the hardest questions we have in the industry: what methods work best given a particular context, and what parts of the context matter most?

Wednesday, October 29, 2014

In consideration of ISO 29119

I have been aware of this debate ever since attending CAST 2014, but I've not been quick to sign.  I wanted to investigate and see what other viewpoints there might be.  To quote Everett Hughes, a sociologist:
“In return for access to their extraordinary knowledge in matters of great human importance, society has granted them a mandate for social control in their fields of specialization, a high degree of autonomy in their practice, and a license to determine who shall assume the mantle of professional authority. But in the current climate of criticism, controversy, and dissatisfaction, the bargain is coming unstuck. When the professions’ claim to extraordinary knowledge is so much in question, why should we continue to grant them extraordinary rights and privileges?”
This is a serious question, one for which I appreciate that a standard might seem like a correct method of 'proof' of our professional authority.  However, even if the standard is in fact a professional guide, I don't think anyone but practitioners can judge its value.  As someone who has had a keen interest in trying to understand this standard, and not wanting to judge too quickly, I have tried to get hold of a great deal of material about it.  I have engaged with some of those who disagree with my point of view, such as Professional Tester; in fact, they published my response in their Oct. 2014 issue (albeit with some minor mistakes from my draft).  I looked into purchasing the standard.  I have looked for those who are pro-standard and what they have had to say about this movement, as well as what they have said in the past.

That being said, I'm not convinced we actually know what a standard should look like, much less whether this is the standard we need.  Maybe it is, but I strongly doubt it.  I think we are likely many years or decades away from even being able to claim we have true repeatable practices oriented towards different contexts, assuming that is possible.  I am unable to judge at present what the standard does say, as the standard and the standard body's work are less than transparent.  I was forced to sign up with personal details in order to read documentation about the standard's creation, although that has since been changed (NOTE: the file name also changed, including a date, which makes it hard to know if anything else changed, as there was no change log as of 10/26/2014).  The standard requests I pay in a currency that I don't use.  I've signed the petition to withdraw the standard not because I know it is wrong, but because I can't tell, which makes it useless at best and dangerously wrong at worst.  What I can tell is that the standard's various authors' other documents around the standard contain what I consider to be confusing if not outright contradictory statements, making me doubt the actual standard.  That breaks the social bargain we professionals have that Everett Hughes so eloquently described.

Perhaps you could say that I should have been personally involved in the standard, and that is a valid complaint.  However, I'm not aware of anyone particularly reaching out to the AST, nor had I heard of it from any other group before James Christie's talk in 2014.  I have attended both AST-sponsored and non-AST-sponsored conferences for years, so this isn't a case of willful ignorance.  This is my first chance to review the material and process, yet I have not found the process particularly open or transparent.  I see claims of no best practices, and claims that the standard will create best practices.  I have found so many confusing statements by the standards body that I must conclude the standard should be withdrawn until it can be thoroughly reviewed and modified, if it can even be modified into something useful.

Even ignoring past statements, the recent defense of the standard creates questions.  One of the easiest and most obvious to consider is who wrote the standard.  Then there is the question of who pays for its creation.  Clearly this was not just a labor of love, as Dr. Reid says the costs of development have to be passed on to the customers of the standard.  I should note, my blog makes me no money and I am not a consultant, so I have little incentive to make money speaking about this.  I made no money writing my letter to the editor, and I certainly don't demand you pay to read my work.  I am not discounting the cost of writing the standard; I am simply saying that if you plan on having your expenses paid by publishing your work, you are not doing this purely out of the kindness of your heart.  There is a lot of analysis that could be done on the defense of the standards alone, but that is outside the scope of this particular post.

One of the oddest and most compelling arguments for both sides is from Rex Black, in which he notes that about 98% of all testers won't care one way or another.  I think this is true, which makes the standards mostly not matter, but it also means that the silent 98% count on us to ensure these are the right standards, lest they become popular and that silent majority ends up forced into using them.  I fear that this non-involvement is also further evidence that we testers as a group are not acting like a profession.  It isn't that we don't claim to have "extraordinary knowledge" (Dr. Reid at least seems to argue we mostly agree on this knowledge), but rather that the majority of people don't feel any need to actively participate.  I realize this might be an argument for why we need a standard: to show the disinterested the 'right' way to test.  But to me it indicates just how young our industry is, and that shows why we aren't ready for a standard.

Even if the ISO body decides these documents should ultimately stand, the objections of the AST/Context-Driven community need to be noted in the standard.  Furthermore, making the document open would go a long way toward allowing the community to discuss it beyond the smaller standards committee.  I recognize that ISO needs income to maintain itself and won't publish the documents for free for everyone, but certainly some sort of 'for individuals, not corporations' license could be used (and I don't mean the sort of nonsense Matt Heusser describes).  Finally, if this is an attempt to demonstrate our commitment to professional testing, then it needs to be accessible to our community.  The work needs to demonstrate its value rather than being buried away, inaccessible to those who would use it.

Thursday, August 21, 2014

Words of the Week: Critique and Criticism

Preamble / Ramble


This is a first for me and Word of the Week, as I typically consider a word without much personal context, but this week I am going to include one. While at CAST this last week, I heard some valuable input regarding my blog. No matter how I get feedback, I try to take it and make it useful. I tend to be direct with my feedback, but for some, that can sound harsh, so this week I'm going to consider two different words. If you don't want to hear the backstory, just skip down to the next section. However, for those that care, I do want to give some context and clarity in my own words. I have now heard from multiple sources that they felt my writing reflected a negative viewpoint towards the Software Testing World Cup 2014. I think that is an inaccurate statement. I did have some issues with it, but I felt I was doing a deep-dive analysis of both the pros and the cons of the contest, up to the point I had reached. I assumed I was writing a report about my experience with the contest, going into both the good and the bad. I also wanted to talk about what I could gain from the contest and what could be done better. In my second part I wanted to consider what the judges had to say, so I could learn where to improve as well as note what additional feedback would be nice to have. I understand that this is a contest, but I think the more important piece is what can be learned from it. To be fair, perhaps the organizers didn't want what I had to say, as they actually asked for:

“At the moment we are very curious about your personal STWC experience. We would love to read blog posts or any other kind of written reports about your participation. It doesn’t matter if you write from your personal point of view or from the perspective of your whole team. We are interested in hearing how much fun you had during the competition, what you have learned from this experimental challenge, and what the difference is between testing while having fun together with friends and testing at work. We are also interested in how you communicated with your team members and what your team’s strategy was. Did you start testing right away or did your team follow a strategy? How did you create that strategy? What was your biggest challenge within your team, what kind of bugs were you able to detect and how did you like working with the Agile Manager tool?”

I took this to mean they were excited to receive any feedback regarding the design of their contest. They also wanted to hear what we did and why we did it. I heard that they were open to a critique of the process so that they could attempt to improve it. Furthermore, we were not told not to post scores, and we felt it was important that those who participated could compare scores and learn where they could improve.  In fact I encouraged others to post scores, but no one actually did so. We also thought it might add value to future judging to see a relatively unemotional reaction to the scores given. We anonymized the judges' scores, as it felt important not to provide any identifiable information. However, it seems that what I thought was a critique, others felt was criticism. This leads me to the two words, critique and criticism. Having re-read my content, I can see that it could be read as criticism, and while I won't take back my words, I will keep that in mind for the future. So on to the words of the week.

Back to our Regularly Scheduled Program


First, allow me to define these two words the way I think of them, without doing much research or using Google to define them. I think of criticism and critique as similar, but with a difference in tone and intent. To criticize someone is to give back negative statements, with the intent to hurt or harm, with no positive points and without any possible ways to improve, be they implicit or explicit. Calling someone stupid is to be critical of someone's intellectual faculties: it is likely intended to harm, has no positive points and does not hint at a way of improving. To critique is to provide feedback with the intent to improve a person or a person's work in some way. There is a mix of positive points in the comments as well as negative comments; gushing praise or all negative points is probably not a critique. Furthermore, at least some of the statements need to be actionable. Critique also feels very student/teacher-y. Criticism, on the other hand, seems like it refers to bullies and angry people shouting at each other. I want to capture this in a readable way for later usage:

Criticism:

  • Negative comments
  • Intended for harm/hurt
  • No method for improvement provided

Critique:

  • Mix of negative and positive comments
  • Intended to improve the content, person(s) or future
  • Provides methods for improvement

Gushing Praise:

  • Positive comments
  • Maybe intended for manipulation (even if meant to 'improve another person's day')


Now let’s see how I did. There are a lot of different people’s views on these words, so let me try to capture just a few of them.

Various versions of Critique and Criticism


According to the presently [August 20th, 2014] most highly rated Stack Exchange answer, the difference is zero. They point out how in the 1960s academics started to try to use critique as a form of analysis rather than a term meant to censure, but that didn't stick.  They cite several modern examples of mixed usage of the terms. In comparison, Paul Brians, the author of Common Errors in English Usage, says:
“Josh critiqued my backhand” means Josh evaluated your tennis technique but not necessarily that he found it lacking. “Josh criticized my backhand” means that he had a low opinion of it.
Clearly there is some difference of opinion on the meaning of the word. In Writing Alone, Writing Together: A Guide for Writers and Writing Groups by Judy Reeves, there is an interesting comparison of criticism and critique:

“[1]Criticism finds fault/Critique looks at structure
[2]Criticism looks for what's lacking/Critique finds what's working
[3]Criticism condemns what it doesn't understand/Critique asks for clarification
[4]Criticism is spoken with a cruel wit and sarcastic tongue/Critique's voice is kind, honest, and objective
[5]Criticism is negative/Critique is positive (even about what isn't working)
[6]Criticism is vague and general/Critique is concrete and specific
[7]Criticism has no sense of humor/Critique insists on laughter, too
[8]Criticism looks for flaws in the writer as well as the writing/Critique addresses only what is on the page”

(NOTE: I numbered the preceding quote for later convenience)

It is not completely clear some of the differences between criticism and critique, as I can find fault in the structure of a work, which leaves me in no man’s land. Wait… wait…

I want to analyze my last sentence using the rules listed above. By the second rule, I clearly have a criticism of the writing. The third rule says that if I had asked in the form of a question, if I had said “It appears not completely clear some of the differences…” and perhaps ended with a question mark, it would have been a critique rather than a criticism. That appears to mean the syntax matters more than the semantic content in the author’s opinion?* Oh, this gets us to rule four. While I made my statement objective and honest, it might not be considered kind. I can’t tell whether my statement would be seen as negative or positive. It certainly isn’t positive, so I’m going to read between the lines and call it criticism. I think my statement is concrete and specific, even if no exact example was given. However, it isn’t full of humor, nor is most of my writing.** The final dimension is about the author. With no reference to the author, I think I am addressing only what is on the page. Ultimately, this set of tools feels of limited value: it takes too much analysis, and when you land 5 of 8 on the criticism side, is it criticism? To the definition's credit, it does try to capture the concepts I stated earlier about providing some positive points, and providing negative points without beating the subject up. It adds to my attempted definition the idea that you should address the content rather than external influences, and that the critical comments should be concrete. However, it fails to address the possibility of simply gushing rather than capturing the areas of improvement.

Obviously my example is contrived, but look at the above paragraph as a whole (yes, I’m getting meta): I would call it a criticism using the rules Judy Reeves developed, yet it wasn’t intended to be. So it feels wrong in some way, or perhaps I’m simply wrong. In my mind, she is looking for more positive views rather than stressing the analysis. That is to say, the message is less important than the presentation. In my opinion, I’m critiquing the content, because I follow my own rules as well as the new rules I have developed.


* Yes, that was intentionally sarcastic. I hope you can appreciate the joke.
** Ignoring the above *’d item. It wasn’t that funny anyway.

To Sum Things Up


Some people don’t see a difference between criticism and critique. In more academic circles, it appears that a divide is seen, even if the divide is a little fuzzy. Some see it as a difference of purpose, some as a difference of the message and some as a difference of the content.  How do I see it?  Well, let me capture the attributes I think matter:


Criticism:

  • Negative comments
  • Intended for harm/hurt
  • No method for improvement provided
  • May focus more on the authors than the content
  • "Censure, attack, abuse and name call"

Critique:

  • Negative and positive comment mix
  • Intended to improve the content, person(s) or future
  • Provides methods for improvement
  • Addresses the content rather than the authors
  • Concrete examples
  • "Deep analysis of other people's work"

In reviewing my own work, I think it falls under the label critique, but I can see how others might read something into it.  It is hard to convey personality in a written critique; you can see that with my footnote joke above.  In going so meta, I intended to inject some levity into an otherwise highly intellectual process, but others might see it as 'unprofessional' or read my words as snarky.  When I had this article reviewed, Isaac said he thought it was a little on the snarky side, but that in his view it was just right to make the point.  Without getting my content reviewed, I am not sure I can defend against hurting people in a wide audience, and perhaps that is Judy's point: by being kind, you avoid the question completely.  Do you have a critique or feedback for me?  Feel free to post in the comments.

As a sort of postscript, I happened upon this site after I wrote this article.  I didn't want to shoehorn it in, yet felt it was of some value, so I put it here.

Monday, September 2, 2013

CAST2013 Attendance Report

tl;dr: go to CAST


What did I expect? A useful conference with a context-driven theme.  What did I get? Three and a half days of applicable tutorials, practical sessions and speaking time with more people than I can count, who are as passionate about testing intelligently as I am.
  Sunday night was dinner with a cadre of testers and speakers. We chatted about backgrounds and random testing topics over dinner. Everyone was friendly, from people I'd never met to people whose blogs I've read and who are icons in the testing world. We eventually migrated to a bar where Meike Mertsch displayed great patience in teaching me the game of Set: each card has four attributes, and three cards form a set only when each attribute is either all the same or all different across them. It took me longer than it should have, given my less-than-sober self. So when I got home I bought the game and thought I'd teach my 7- and 9-year-olds how to play, that way I could progress at a steady pace with them (and not feel too stupid :). However, after showing them two sets and the rules...off they went running the deck (aka they beat the snot out of me). So maybe I'll just play solitaire for awhile.
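That all-same-or-all-different rule is compact enough to sketch in code. Here is a minimal Python illustration (the tuple encoding of cards is my own assumption, not anything from the game's packaging): each card is a tuple of four attribute values, and three cards form a set exactly when no attribute has two alike and one different.

```python
def is_set(card_a, card_b, card_c):
    """Three Set cards form a valid set iff, for every attribute,
    the three values are all the same or all different."""
    for a, b, c in zip(card_a, card_b, card_c):
        # Exactly two distinct values means "two alike, one different",
        # which is the only disallowed pattern.
        if len({a, b, c}) == 2:
            return False
    return True

# Cards encoded as (color, shape, shading, count), each attribute 0-2.
print(is_set((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)))  # True: all differ
print(is_set((0, 0, 0, 0), (0, 1, 1, 1), (2, 2, 2, 2)))  # False: first attribute is two alike
```

My kids evidently internalized that check faster than I did.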
  Monday began with Lean Coffee directed by Peter Walen and Matt Heusser. This is where everyone comes together, places questions or topics they'd like to discuss on cards, and then votes on which topics they want to speak about. Five minutes is spent on each topic before being voted on for an extension of three minutes, or moving to the next topic. (More specifics can be had in Michael Larsen's blog)
  Monday's tutorial was by Anne-Marie Charrett, who educated us on "Coaching Testers". Anne-Marie explained, in her own words, her experience coaching testers, walking us through approaches and pitfalls that she (and others) have discovered over the course of educating other testers with their coaching methods. Probably the biggest takeaway came from a demonstration she gave with me. She handed me a blue Sharpie flipchart marker, and then asked me to describe a single test that I would perform on it. Needless to say, about 150 tests ran through my head and I was about to describe every test I would perform. Anne-Marie managed, over the next several minutes, to isolate me down to a single test I would perform, then tried to get from me how I knew that test passed. She managed at last to get a mediocre answer from me about how I would judge the passing of my test, "remove the cap from the marker". However, the real lesson I learned came when she stopped and explained how nervous she had been about that demo of a real-world coaching session. I've spent many sessions as a manager attempting to coach other testers in how to test, and I was always more nervous, more unable to focus, as the coach than I ever am as the student. Seeing someone who coaches on a regular basis have that many nerves about it reminded me that being the coach is as brave as being the student. Thank you Anne-Marie, you did brilliantly.
  That night was a quick dinner and back to the convention center for a round table (around rectangles) led by Heusser and Walen about SWOT, specifically around how to identify Tasks, Stakeholders, Assumptions and Constraints. This was followed by libations and the Agile Planning game run by Matt, where we promptly failed, but with integrity (aka fired). The game was informative with regard to how to manage a project, as I've never had to think of a project from that level (I might have to apologize to some project managers if I ever see them again :)
  Tuesday began with another Lean Coffee (see Larsen's blog). Then came Jon Bach's keynote (on YouTube) about why we need more argument. Not argument in the new-age sense of the word, with all its connotations of don't listen, just spout your opinion and see who can be the loudest, but in the age-old sense of debate amongst people: educated argument for the sake of understanding of the issue by both parties. He made an elegant argument that too many people have become complacent in their need for harmony and their unwillingness to appear to cause conflict or strife. At first I was bemused by this, because I tend to argue with 5-10 people I trust, but then the thought occurred to me: how much do I argue with people I don't trust? Am I limiting myself to people I trust in the hopes of not exposing myself to being vulnerable, better to say nothing than be proven a fool? My takeaway from this will be to attempt to expose myself to more argument with people I trust less.
  Doug Hoffman's "Exploratory Automated Testing" was up next, and I had such high hopes for this, as automation is what I've been doing for most of my testing career. It's not so much that I was disappointed in the talk, but that my crew and I have been doing what Hoffman suggested for the last couple of years. I was mostly hoping his talk would start where it ended. There are still some new ideas I have about how to verify logging with our tests: what all CAN we be checking, what metrics exist for monitoring while running our tests? Why aren't we running all possible test cases that we know how to generate? (it's merely clock cycles) This is probably the area we need to investigate the most next.
  Erik Davis then gave a report on his experience hiring testers in the Rust Belt. His area has a lack of techies, a lack of testing people and a lack of four-year-degree-educated people, so he has had to resort to hiring from other fields, selling testing to people and convincing them that testing is a good career / profession. I've found myself doing the same sell, not for lack of techies or of a pool of semi-qualified candidates, but because the people I want to be testers (thinkers, learners and self-motivated people) are rarely aware of the fun and challenging career that is testing, hiding in plain sight. My takeaway from this is to educate people on testing, from management down to the testers themselves. Not just about what testing is and how it can be fun, but why testing is a profession worthy of pursuing, akin to some of the most challenging engineering disciplines out there.
  After that was Michael Larsen and Summer QAMP. He described his early successes and his later failures. This talk was more interesting than I thought it would be as Michael is engaging and energetic. As well, his subject matter was part of the future of testing, a subject that has been heavy on my mind as of late.
  Last lecture of the night was Heather Tinkham, who talked about five unconventional traits of extraordinary testers. I was interested to hear this one because it was the subject of a talk I gave at a local code camp recently, albeit I didn't limit myself to five (though I should have). Heather focused on five traits, three of which I had already 'known' about. The last two I hadn't really thought about: the ability to be wrong, and that being okay; and Seeing Is Forgetting the Name of the Thing One Sees, which is a simple and complex idea all in one, and one I don't feel comfortable trying to explain yet, as I'm waiting for the book to arrive before I can digest it.
  Then came lightning talks and tester games, both of which were really fun. Later that night I had an engaging talk with Jon Bach about how his (and his brother's) experience with SBTM all started on the floor right above mine, when I worked for Adecco at HP. It's interesting to think of the implications of having been all of 50 feet from a better way to test, while I was downstairs struggling to figure out why the testing we were doing just felt wrong. All in all I've come full circle; the information is out there now and I feel I am on the right path.
  Wednesday began with another, albeit smaller, Lean Coffee, then breakfast with a host of people, where I expressed my interest in forming a Boise tester group. Several people gave me advice, such as: virtual meetings, meetup.com, make a compelling name, be by practitioners for practitioners, make it a social meeting, not just testers (what about TDD, Agile, Systems Engineering) and of course the "Pizza and Beer" requirement. My takeaway: I've already started to talk with several of the Test/QA managers here locally to find out how much interest there is. But I'll be starting regardless of interest; beer and pizza for just me will be interesting.
  Then came Dawn Haynes' keynote on "Introspective Retrospectives". Dawn's talk was so engaging that I neglected to take any notes, as I was busy trying to digest everything she said while nodding along in agreement. I remember specifically how important retrospectives are, even if all you are retro'ing is yourself. Watch it here for yourself. I'll be watching again soon, and taking notes this time.
  Next, Aaron Hodder blew my mind with his talk about mind maps as a practical tool for test planning and reporting. The visualization he used made even developers and managers walk by and say, "Oh hey, this is interesting, but what about this area here?" When was the last time anyone said, "Oh, this test plan is interesting?" or provided useful feedback on your test 'plan'? I know what he was trying to say, but what I took away was more generic: "How can we visualize stuff more easily?" Aaron found a way to make test planning and reporting visual. I'd love to see it taken up another notch: how do you visualize a website? Can you visualize test coverage of a website? Is there any use in that?
  Then came Carsten Feilberg's experiential session, "How do you solve problems?" I'm hesitant to talk about this yet, as discussing the experience would ruin it for future attendees. What I can say is, "Damn, this was awesome". If you have the chance to do an experiential exercise, I can't recommend it enough. Carsten did a most excellent job of making it not about him, as the speaker, but about what everyone else learned.
  Last official talk was the closing keynote from Scott Barber and Rob Sabourin. They provided some pretty funny and solid takeaways from the conference which will hopefully be on this channel soon.
  Later that night I attended the Quality Leader SIG panel, which started late but ended up being an interesting chance to hear other people's ideas on what 'leader' meant. This was followed by the Education SIG meeting, where I volunteered for several projects.



Takeaways:
  - Blog 2x / week, 1-2k words, for 3 years... (thanks Matt, who said this like a doctor telling me how to take ibuprofen)
  - Focus / De-focus, stepping from the high to the low level and vice versa, this is something I need to work on. (Thanks Jesse)
    - Oblique strategies deck, helps with de-focusing
  - Games for learning
    - Dice Game, Set, Zendo, Agile Planning (Thanks to all the many people who showed me)
  - Need to form a tester group in Boise (thanks to mainly Aaron)
  - Trying to figure out at what level Lean Coffee would work in Boise, i.e. should it be an internal company thing or should it be more of a high-level cross company thing??? (Thanks Peter and Matt)
  - If you complain about something, but don't do anything about it...you're just a whiner (so do something about it)



NOTES for newbies next year:
  - Be fearless
    - walk up and talk with people you don't know
    - introduce yourself
    - interject yourself into conversations
    - remember they are just people too, yes even the keynote speaker is a real live human
  - Go to everything you can
    - Lean Coffee in the morning
    - Dinner with a group of people who you just met
    - Bar scene at night (you don't have to drink)
  - Write down what you expect to get out of the conference before you go
    - This way you've thought about things before Lean Coffee
    - You should have questions for each of the speakers before they even start to talk
    - You'll be able to compare what you expected from what you got



Negatives about CAST2013:
  - Automation was talked about like the devil...
  - I was surprised that in every talk I attended (minus Doug's), automation was talked about as a bad thing. However, when I asked speakers after their talks about their views on automation, they would qualify that automation does have a place. It wasn't so much that I was disappointed in people's opinions, but that their opinion, when left unquestioned, was so strongly against automation.
  - There was also this underlying political thing with ISST, people mumbling about it, but not really wanting to talk about it openly. I'm not fully sure of what the thing was, but it was semi-distracting. Or maybe I'm just blissfully ignorant.
  - Most of the supplied stuff (red, yellow and green cards, tickets, stickers, etc.) was not explained until the start of Tuesday. For those of us who got them on Monday...it was not clear what to do with them.