Friday, September 27, 2013

Word of the Week: Manumatic

Before I go into the depths of what I mean, I should first talk about how the term is usually defined.  According to Wikipedia, a manumatic is a type of semi-automatic shifter used in vehicles.  That is not what I am talking about, even though it shares some of the same flavor.  What I am talking about is semi-automated testing (or is that semi-manual checking?).  Some testers like the term tool-assisted testing, and I can imagine a half dozen other terms like tool-driven testing.  Whatever you want to call it, I tend to call it a manumatic process or manumatic test.

The idea is that you have a manual process that is cumbersome or difficult to do, but either some part of the test is hard to automate or the validation of the results requires human interpretation.  This can take many different forms, and my attempt to define it may miss some corner cases (feel free to QA me in the comments), but allow me to give some examples.

At a previous company I worked for, I had to find a way to validate that thousands of pages did not change in 'unexpected' ways, where unexpected was not exactly defined.  Unexpected included JavaScript errors, pictures that did not load, poorly rendering HTML, and the like.  QA had no way of knowing whether anything had in fact changed, so we had to look at the entire set every time; the changes were made primarily in production, to a set of pages that even the person making the change might not have been able to enumerate.  How do you test this?  Well, you could go through every page every day and hope you notice any subtle changes.  You could use layout bug detectors, such as the famous Fighting Layout Bugs (which is awesome, by the way), but that doesn't catch nearly all errors, and certainly not subtle content changes.
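To make the idea concrete, here is a minimal sketch of the kind of objective check a machine can do on its own, assuming Selenium with Chrome (which can expose console logs via get_log, though some Selenium versions need logging preferences enabled).  The page list and URLs are hypothetical stand-ins:

```python
# A minimal sketch of fully-automatable checks: JS console errors and
# broken images.  Assumes Selenium + Chrome; PAGES is hypothetical.
from selenium import webdriver

PAGES = ["https://example.com/page1", "https://example.com/page2"]

driver = webdriver.Chrome()
for url in PAGES:
    driver.get(url)

    # JavaScript errors show up in Chrome's browser console log.
    # (Depending on your Selenium version, you may need to enable
    # the 'goog:loggingPrefs' capability first.)
    js_errors = [entry for entry in driver.get_log("browser")
                 if entry["level"] == "SEVERE"]

    # An <img> that failed to load reports naturalWidth == 0.
    broken_images = driver.execute_script(
        "return [...document.images]"
        ".filter(img => img.complete && img.naturalWidth === 0)"
        ".map(img => img.src);")

    if js_errors or broken_images:
        print(url, js_errors, broken_images)
driver.quit()
```

Checks like these are unambiguous, so automation can own them outright; the subtle content changes are where a human still has to look.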

We used a sort of custom screenshot comparison, with the ability to hide certain HTML elements such as date/time displays.  We did use some custom layout bug detectors and ran some smaller checks, but the screenshots were primarily our tool of choice.  Once the screenshots were taken, we would manually look through them and determine which changes were acceptable and which were not.  This is a manumatic test: the automation does do some testing, but a "pass" meant only that nothing changed (insofar as the screenshots were concerned), and finding a diff or change in the layout didn't always mean "fail".  We threw away the "test results", keeping only the screenshots.
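Here is a minimal sketch of that screenshot-comparison idea, assuming Selenium and Pillow; the volatile-element selectors, URLs, and file paths are all hypothetical, and our real tool was considerably more involved:

```python
# A minimal sketch of the screenshot-diff idea.  Assumes Selenium + Chrome
# and Pillow; selectors and paths are hypothetical stand-ins.
from selenium import webdriver
from PIL import Image, ImageChops

VOLATILE = ["#clock", ".ad-banner"]  # e.g. date/time displays

driver = webdriver.Chrome()
driver.get("https://example.com/some-page")

# Hide elements whose content legitimately changes between runs.
for selector in VOLATILE:
    driver.execute_script(
        "document.querySelectorAll(arguments[0])"
        ".forEach(el => el.style.visibility = 'hidden');", selector)

driver.save_screenshot("today.png")
driver.quit()

# Diff against a baseline captured at the same window size.  A non-empty
# bounding box means "something changed" -- a human still decides
# whether that is a pass or a fail.
baseline = Image.open("baseline.png").convert("RGB")
today = Image.open("today.png").convert("RGB")
diff = ImageChops.difference(baseline, today)
if diff.getbbox():
    diff.save("diff.png")  # hand this to a person for review
```

Note the division of labor: the machine finds differences, the human interprets them.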

In manual testing, we often need new logins.  Creating a new login requires multiple SQL calls and lots of data, not to mention verifying that certain related records get created.  It is rather hard to do by hand, so we wrote automation to do it.  With a few edits, an automated 'test' was created that lets a tester fill in the few bits of data that usually matter and lets the automation safely create the user.  Since we have to maintain the automation anyway, every tester need not keep the script on their own box and fight updates as the system changes.  This is a manumatic process.
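A sketch of what such a helper might look like, using sqlite3 as a stand-in for a real database driver; the table names, columns, and the "other bits" are hypothetical placeholders for whatever your schema actually requires:

```python
# A minimal sketch of automated test-user creation.  The schema
# (users, profiles, preferences) is entirely hypothetical.
import sqlite3  # stand-in for your real database driver
import uuid

def create_login(conn, username, email, role="tester"):
    """Create a login plus the related rows the application expects."""
    user_id = str(uuid.uuid4())
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO users (id, username, email, role) VALUES (?, ?, ?, ?)",
        (user_id, username, email, role))
    # The "other bits" -- related rows the application assumes exist.
    cur.execute("INSERT INTO profiles (user_id) VALUES (?)", (user_id,))
    cur.execute("INSERT INTO preferences (user_id) VALUES (?)", (user_id,))
    conn.commit()

    # Verify the related rows were actually created.
    cur.execute("SELECT COUNT(*) FROM profiles WHERE user_id = ?", (user_id,))
    assert cur.fetchone()[0] == 1, "profile row missing"
    return user_id
```

The tester supplies the few values that matter; everything else is defaulted and verified by the script.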

Let me give one more example.  We had a page that interacted with the database based upon certain preset conditions.  In order to validate the preset conditions, we needed to run lots of different queries, each of which was subtly connected to other tables.  Writing queries and context switching was a pain, so we wrote a program to run the queries and print out easy-to-read HTML.  This is a manumatic process.
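A minimal sketch of that kind of query-report tool, again with sqlite3 as a stand-in driver; the preset-condition queries are hypothetical:

```python
# A minimal sketch of a query-to-HTML report.  QUERIES is hypothetical;
# html.escape keeps the output safe to open in a browser.
import html
import sqlite3  # stand-in for your real database driver

QUERIES = {
    "Active presets": "SELECT id, name FROM presets WHERE active = 1",
    "Linked rules": "SELECT preset_id, rule FROM rules",
}

def report(conn):
    parts = ["<html><body>"]
    cur = conn.cursor()
    for title, sql in QUERIES.items():
        cur.execute(sql)
        cols = [d[0] for d in cur.description]
        parts.append(f"<h2>{html.escape(title)}</h2><table border='1'>")
        parts.append("<tr>" + "".join(
            f"<th>{html.escape(c)}</th>" for c in cols) + "</tr>")
        for row in cur.fetchall():
            parts.append("<tr>" + "".join(
                f"<td>{html.escape(str(v))}</td>" for v in row) + "</tr>")
        parts.append("</table>")
    parts.append("</body></html>")
    return "\n".join(parts)
```

The queries run automatically; reading the report and judging whether the preset conditions hold is still a human's job.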

I honestly don't care what you call it; I just want to blur the lines between automated testing and manual testing, as I don't think they are as clear as some people make them out to be.
