Automation
Automation within this context is long-lived, long-term testing that sits somewhere between vertical integration testing (e.g., unit testing that exercises all layers) and system testing (including load testing). This work includes some or all of the following activities:
- Writing database queries to get system data and validate results.
- Writing if statements to add logic, for example around the results or to change behavior depending on the environment (see the sketches after this list).
- Creating complex analysis of results, such as reporting them to an external system, rerunning failed tests, assigning likely reasons for failure, etc. (also sketched after this list).
- Capturing data about the system state when a failure occurs, such as inspecting the logs to detect what happened in the system.
- Providing feedback to the manual testers or build engineers in regard to the stability of the build, areas to investigate manually, etc.
- Documenting the state of automation, including what has been automated and what hasn't been.
- Creating complex datasets to test many variations of the system.
- Investigating the internals of the system to see what areas can or should be automated.
- Figuring out what should and shouldn't be automated.
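To make the first, second, and log-capture bullets above concrete, here is a rough sketch of my own (not tied to any particular system): it queries the database to validate a result, branches on the environment, and grabs recent log lines when the check fails. The table name, environment variables, and log path are all hypothetical.

```python
import os
import sqlite3  # stand-in for whatever database driver the system under test uses

LOG_PATH = os.environ.get("APP_LOG", "/var/log/app/app.log")  # hypothetical log location


def fetch_order_status(order_id):
    """Pull system data straight from the database rather than trusting the UI."""
    conn = sqlite3.connect(os.environ.get("TEST_DB", "test.db"))
    try:
        row = conn.execute(
            "SELECT status FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0] if row else None
    finally:
        conn.close()


def capture_log_tail(lines=50):
    """On failure, grab the last few log lines so a human can see what the system did."""
    try:
        with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
            return "".join(f.readlines()[-lines:])
    except OSError as exc:
        return f"<could not read {LOG_PATH}: {exc}>"


def validate_order(order_id):
    status = fetch_order_status(order_id)
    # Environment-dependent logic: a staging payment stub never settles orders,
    # so the expected end state differs from a production-like environment.
    expected = "PENDING" if os.environ.get("TEST_ENV") == "staging" else "COMPLETE"
    if status != expected:
        raise AssertionError(
            f"order {order_id}: expected {expected}, got {status}\n"
            f"--- last log lines ---\n{capture_log_tail()}"
        )
```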
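And for the results-analysis bullet, a second hypothetical sketch: rerun a failed check a limited number of times and assign a likely reason for each failure, something you could then feed into a report or an external system. The failure categories here are made up for illustration.

```python
import time


def run_with_retries(check, attempts=3, delay=2.0):
    """Run a no-argument check function, rerunning it on failure and recording why it failed."""
    failures = []
    for attempt in range(1, attempts + 1):
        try:
            check()
            return {"passed": True, "attempts": attempt, "failures": failures}
        except Exception as exc:  # broad on purpose: we want to classify anything
            failures.append(classify_failure(exc))
            if attempt < attempts:
                time.sleep(delay)
    return {"passed": False, "attempts": attempts, "failures": failures}


def classify_failure(exc):
    """Assign a likely reason for failure; feed this into reporting or rerun decisions."""
    if isinstance(exc, TimeoutError):
        return "environment: timeout (likely a slow or down environment)"
    if isinstance(exc, AssertionError):
        return f"product: assertion failed ({exc})"
    return f"unknown: {type(exc).__name__}: {exc}"
```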
A developer within this context is a person who can design complex systems. They need to have a strong grasp of current technology sets and be able to speak to other developers at roughly the same level. They need to be able to take very rough, high-level ideas and translate them into working code. They should be able to do, or speak to, some or all of the following:
- Design
- Understand OOP
- Organization
- Database
  - Design
  - Query
- Refactor
- Debug
- Reflection
You will notice that the two lists are somewhat similar in nature. I tried to make the first set feel more operational and the second set a little more skills-based, but in order to do those operations, you really have to have the skills of a developer. In my experience, you need at least one developer-like person on a team of automators. If you want automation to work, you have to have someone who can treat the automation as a software development project. That, of course, assumes your automation is in fact a software development project. Some people only need 10 automated tests, or record-playback is good enough for their work. For those sorts of organizations, a 'manual' tester (that is to say, a person who has little programming knowledge) is fine.
Automation Failures
I know of many stories of automation failure. Many of the reasons revolve around expectations, leadership, and communication. As those are issues everywhere, I don't want to consider them in much depth, other than to say that a person who doesn't understand software development will have a hard time clearly stating what they can or can't do.
Technical reasons for failure range from choosing the wrong tool to building the wrong infrastructure. For example, if you are trying to build an automated test framework, do you have an organized structure defining the different elements and sub-elements? These might be called "categories" and "pages", with multiple pages in a category and multiple web elements in a page. How you organize the data is important. Do you save the elements as variables, call getters, or embed them in actions in the page? Do actions in the page return other pages, or is the flow more open? What happens when the flow changes based upon the user type? Do you verify that the data made it into the database, or just check the screen? Are those verifications in the page layer or in a database layer? Organization matters, and sticking to that organization, or refactoring it as needed, is a skill set most testers don't have initially. This isn't the only technical development skill most testers lack, but I think it illustrates the idea. Maybe they can learn it, but if you have a team for automation, that team needs a leader.
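As one illustration of those organizational questions, here is a small page-object sketch of my own, assuming a Selenium-style driver; the page names and locators are hypothetical. Elements live in the page as locator data, actions embed the element access, and an action returns the page the user lands on next, which makes the expected flow explicit.

```python
class LoginPage:
    """One "page" in a "category" (e.g., authentication); its elements live here as locators."""
    USERNAME = ("id", "username")                     # elements kept as class-level locator data
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        # The action embeds the element access and returns the next page,
        # so the expected flow is visible in the return type.
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return DashboardPage(self.driver)


class DashboardPage:
    WELCOME_BANNER = ("id", "welcome")

    def __init__(self, driver):
        self.driver = driver

    def welcome_text(self):
        return self.driver.find_element(*self.WELCOME_BANNER).text
```

Whether actions return pages, whether verifications live in the page layer or in a separate database layer, and so on are exactly the decisions a developer-like lead needs to make and then hold the framework to.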
Real Failure
The sorts of problems I'm talking about aren't new (Elisabeth Hendrickson was writing about them back in 1998), which is why I hesitate to enumerate them in much more detail. The question is: how have we handled such failures as a community? Like I said, Elisabeth Hendrickson said in 1998 (1998! Seriously!):
"Treat test automation like program development: design the architecture, use source control, and look for ways to create reusable functions."

So if we knew this 15 years ago, then why have we as a community failed to do it? I have seen suggestions that we should separate the activities into two camps, checking vs. testing, with checking being a tool that assists testing but is not actually testing. This assumes that automation purely assists because it doesn't have the ability to exercise judgment. That may be insightful in trying to delineate roles, but it doesn't really tell you much about who should do the automating. CDT (context-driven testing) doesn't help much either; it really only notes that it depends on external factors.
When automation fails, or at least seems to have limited value, who can suggest what we should do? My assertion is that testers typically don't know enough about code to evaluate the situation other than to say "Your software is broken" (as that is what testers do for a living). Developers, meanwhile, tend not to want to test, as is typically noted when talking about developers doing testing. Furthermore, what developer ever intentionally writes a bug (that is to say, we are often blind to our own bugs)?
A Solution?
I want to be clear: this is only one solution, and there may be others, which is why the subheading starts with "A". That being said, I think a mixed approach is reasonable. What you want is a developer-like person leading the approach, doing the design, and enforcing the code reviews. They 'lead' the project's framework while the testers 'lead' the actual automated tests. This allows for the best of both worlds. The Automation Developer mostly writes code, treating it as a software development project, while the testers do what they do best: develop tests. Furthermore, the testers then have buy-in on the project, and they know what actually is tested.
Thoughts?
We've actually known about this problem for 20-plus years. A big part of the problem, aside from misconceptions, in my opinion/experience (I've been doing this since 1989), is that there is a constant churn of people involved with the work.
Meaning there is a constant "re-learning" of lessons by management and other people who do not have extensive experience with this line of work. Those of us who do have to keep 'teaching' the others, and at times you just get plain old sick and tired of it (I'm there right now, again).
The other problem is the 'selling' of the tools and practices by vendors & consulting companies who continually promote the "automagic" myth. Buy our tool and it will solve all your problems magically. The constant battle with this is what causes some of us to just give up. Mainly because they set the expectation with inexperienced/unknowing management, and once that is in place it is almost impossible to change.
The key thing is to get people to realize that 'automation' as 95% of people think of it is really 'automated test execution', and that it takes a lot of planning and work to get to the point where the test can be run via automated means. There are other types of 'automation' in testing and different tasks that can be driven by automated methods. But the key thing is, as you said, it needs to be looked at as a type of Software Development project. Otherwise it is utter mayhem and will quickly implode and fail.
Regards,
Jim Hazen
I do agree that the people part of the problem is a huge one too. I was ignoring it, not because I haven't seen it, but because I wanted to limit my scope. I also agree with you that vendors have been selling record-playback as the solution without actually making it work all that well. I actually think it *could* be better, but I don't think it can ever be the ultimate solution. That being said, I think the vendor's message is more of a problem than the tool.
Yes, most people do see automation as a holistic solution, and that requires a great deal of effort to build. Sometimes manumatic solutions are the way to go, or the software can be divided into a back end and a front end, with the back end automated while the front end is manually tested. But those require the people end of the equation to back you up -- you have to have management understand the complexity, or at least trust the automation team.
The reason for my post is that I see a number of dismissive 'manual' testers who don't feel automation serves any value other than eating up money. I don't disagree that it can have that effect, but it can also be done right (in my opinion). I wanted to present a case for how it can be done right. The second issue I have is that inexperienced testers start out trying to use record and playback, don't get why it doesn't work, and so assume automation is a poor replacement* for manual testing.
Thanks for the comments,
- JCD
* Since manual testers are often converted into automators, it appears they are treated as replacements. While it is true that automation is a poor replacement for manual testing, some don't then reflect and see that it might have been a good supplement to manual testing.