- Limited deploys (say, to 0.01% of users, then rolling out to more over time) with data scientists backing them will allow us to get things done faster and with higher quality (see the sketch after this list).
- Monitoring makes up for not having testers.
- Hiring the right team and best developers will make up for the lack of testers.
- Writing code that "just works" is possible without testing, given loose enough constraints (which leads to the next argument).
- Since complete testing is impossible, bench testing + unit testing + ... is enough.
- Quality cannot be tested in; it comes from the design and the code.
- Testers make developers lazy.
- It is just a rename to change the culture so that testers are treated as equals.
- Testers should be under a dev/team manager, not in a silo.
- It is a minor role change where testers now work in a more embedded fashion.
- We hire too many testers who add inefficiencies as we have to wait <N time> before deploying.
- We only hire testers to do a limited set of activities like security testing.
- Testers do so many things that the name/profession does not mean much of anything.
- With the web we can deploy every <N time unit>, so if there are bugs we can patch them quickly.
- As a customer I see those testers as an expensive waste.
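(A brief aside on that first argument: "limited deploys" usually means something like a canary or percentage rollout. Below is a minimal, purely illustrative sketch in Python; the function name, feature name, and user id are my own inventions, not taken from any particular tool.)

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically decide whether a user is in a limited rollout."""
    # Hash feature + user id so the same user always gets the same answer.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0.0, 1.0]
    return bucket < (percent / 100.0)

# Start at 0.01% of users; raise the percentage over time as monitoring
# and the data scientists confirm nothing alarming is happening.
print(in_rollout("user-42", "new-checkout", 0.01))
```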
That is certainly a lot of different reasons, and I'm sure it is not an exhaustive list. Now I shall enumerate the rough responses I have seen:
- A little genuine anger and hurt that 'we' are unwanted.
- A little genuine disgust that 'we' are going through the 80's again.
- Denial ('this will never work')
- It can work for some, but probably not all. (This follows a very context-driven testing (CDT) 'it depends' point of view.)
- Not everything can easily be instrumented or deployed in a limited fashion. For example, I have heard Xbox 360 games cost roughly $10k or more to push an update.
- In some software, having fewer bugs is more important than the cost of testing.
- This is why we need a standard (I never heard this one, but it is an obvious response)
- Concern about the question of testing as a role vs task.
- Different customers have different needs they want to fulfill.
- Good teams create defects too.
- We need science to back up these claims.
- Testing should move towards automation to help eliminate the bottlenecks that started this concern.
- It is very difficult to catch your own mistaken assumptions.
- If you need someone else to help catch mistakes, why not a specialist?
Complete Testing is Impossible & Quality Cannot be Tested in
The premise is true: complete testing is in fact impossible, and I do agree with that. The conclusion that hiring testers is therefore not needed is the part I find interesting. The claim that quality of code and design cannot be tested in is also an interesting statement. The idea that testers make developers lazy feels very unsubstantiated, with possible correlation-causation issues (and I have seen no studies on the subject), so I am going to leave that part alone for now.
I write software outside of work, mostly open source projects, of which I have 5 to my name. Most of them were projects where I was looking to solve a problem I had and wanted to give the result to the world. In more than half of them I wrote no formal tests and no unit tests, and to be honest the code quality in some of them is questionable. There are bugs I know how to work around in some cases, or bugs that never matter to me, and thus the quality for me is good enough. I have had ~10k downloads across my various projects and very few bugs have ever been filed. Either most people used it and then deleted it, or the quality was good enough for them for a free app. I hired no testers, and as the sole dev I was the only one testing the app. I think this is a +1 for the no-tester group: if you work on an open source app for yourself, no tester is (maybe) required. CDT would even agree with this, as it does depend. Context matters. I am sure other contexts exist, even if I have not seen them. What I have not seen is specifics around contexts that might require no testers, with the exception of the Microsoft/Google-sized companies.
Now can you test code quality in? If you are about to design the wrong thing and a tester brings that up, did the tester 'test' code quality in? Well, he didn't test any software, the no-tester side would say. Sure, but he did test assumptions, which is often part of testing. What about the feature the tester notes the competition has that you don't, which was discovered when comparing competitor product quality? Suppose the dev knows exactly what to develop; can the tester still add value? Well, I know I have made code suggestions that have gotten a developer thinking. However, I could have just as easily been an XP developer sitting beside him as he coded rather than a tester. Well, what about TDD? Is that testing quality in? If I write tests first, I have to think about my design. That is testing quality in, but again that is just the developer. What if the tester and the developer co-design the TDD, or the tester recommends additional tests during development? Again, this could be a developer, but a tester is often thinking about the problem differently. So while I think the statement is false, if I look at it from the spirit, I think it might be true in some cases. There are some really great developers who do think like testers, and they could do it. However, I think that is only some of the developers. There will always be junior developers, and there are always other mindset factors that play in.
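To make the TDD point concrete, here is a tiny, hypothetical illustration in Python using the standard unittest module (the shipping_cost function and its rates are mine, purely for the sake of example). Writing the tests first forces the design question of what to do with a negative weight before any production code exists.

```python
import unittest

# Production code, written *after* the tests below forced the question
# "what should happen for a negative weight?" to be answered up front.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg < 0:
        raise ValueError("weight cannot be negative")
    return 5.0 + 1.5 * weight_kg  # flat fee plus a per-kilogram rate

class ShippingCostTest(unittest.TestCase):
    def test_regular_parcel(self):
        self.assertAlmostEqual(shipping_cost(2.0), 8.0)

    def test_negative_weight_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1.0)

if __name__ == "__main__":
    unittest.main()
```

Whether a lone developer writes these tests or a tester co-designs them is exactly the question above.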
What is in a Name & Quick Deploys?
What's Montague? it is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What's in a name? that which we call a rose
By any other name would smell as sweet;
- Juliet, Romeo and Juliet
Does the fact that we are in an in-group called 'test' and some see developers as outsiders matter? Vice versa is probably true too. In fact, the other day Isaac and I had a small debate about correcting a tester on some very minor minutiae of deep mathematical and development theory. To me, perhaps a developer who was neck-deep in building new languages would find such a correction helpful, but a tester who was doing some exploration around the subject didn't need it. In fact such a correction might make things more confusing to anyone who didn't have a deep understanding of the subject. I myself am guilty of the insider/outsider mentality. We all categorize things and use those categories to make choices. Perhaps renaming us all as human beings would make things better. If an organization is particularly combative, this might be a useful change. That being said, there is also a group identity for people who see themselves as testers or QA. There is a specific field of study around it, and that also adds value. Also, by having a sub-group, the tension, even if minor, might possibly help people grow.
This is a mushy, non-business-oriented view of the world, and perhaps those who are just interested in how fast things go today would prefer avoiding the conflict in order to get things done. The fear of doing this, from a QA point of view, comes from the 1980s, when defects were a big deal. Renaming people feels like going back to that era. The argument back is that this isn't the 80s; we can ship software almost instantly. The counter-argument is that some software can't just be shipped with broken code. Work in many different areas must be 'mostly right' the first time. Also, this grand new world that people want to move into has not been vetted by time. Wait a while and see if it is the panacea it is believed to be. I recognize the value in renaming positions in order to improve team cohesion, and I believe it comes from a genuine desire to improve. Just keep in mind what is lost and the risks in doing so.
I should note that I have seen some of this implemented before in organizations I have worked in. Working under a single development manager, the development manager would often ignore the advice given by the "QA" person. Perhaps a full rename would have fixed things, but development often wants to move forward with the new shiny thing. Having no testers is perhaps a new shiny, and so development would be on board. The people who are more aware of the risks of change are the QA and operations people. This is a generalization, unsubstantiated by anything but my own observation, but it feels like QA is now calling this a risk, and because it threatens our jobs we are being told we are biased and so can be ignored. Psychology and politics are complex things and anything I write will not adequately cover this subject. I don't object to people trying this; maybe it will work. I still see it as a risk. I hope some people take it and report back their observations.
Perhaps the final argument about us is that we are many different roles tied to one name. This is a complexity that came up at WHOSE. We talked less about testing than you might expect. Communication and math and project management and... We had a list of about 200 skills a tester might need. We are a Swiss Army knife. We are mortar for the projects being developed, and because every project is different, we have to fill in different cracks for each project. We happen to do that well in some ways because we see many different areas of a system. We have to worry about security, the customers who use it, the code behind it, how time works, the business, and lots more. That is ignoring the testing pieces, like what to test, when to test, and how to justify the bugs you find. Maybe we could remove the justification piece, but sometimes the justification matters even to the developer, because you don't know if a bug is worth fixing or what fix to choose. I think we can't help being many different things because what we do is complicated.
Customers Don't Want QA?
While that is untrue in regard to me personally (I get excited when I see QA in the credits of video games), perhaps it is different for others. I have written elsewhere about how users of mobile devices are accepting more defects. Perhaps that is true in general. We know there is such a thing as too much QA. To go to the absurd, when QA bankrupts the company, it is clearly too much QA. We also know there is such a thing as too little QA, when the company goes bankrupt from quality issues. Perhaps regulatory constraints that shut a company down for lack of QA belong in that bucket too. Those are the edge cases, and like a typical tester, I like starting at boundary cases. Customers clearly want some form of quality, and as long as testers help enhance quality (i.e., value to someone who matters), testers are part of that equation. Are they the only way to achieve quality? Maybe not, but that involves either re-labeling (see above) or giving the task to others, be they the developers or the customers or someone else. Some customers like cheaper products with less quality, and that market does exist. Having looked at the free games on mobile devices, my view is that the quality is not nearly as good. Then again, I want testers. Maybe I'm biased. Even if I were a developer for a company, I would like to have testers.
Should the ideas of no testers be explored? I have no objection, but I would say that it should be done with caution and be carefully examined. Which makes the companies who try it...well, testers.
Brief Comment on the Community
I have heard a lot of good responses, albeit a limited number. I have also seen some anger and frustration. Sometimes silly things are said in the heat of the moment, likely from both sides. We also have some cultural elements that are jagged-edged. These can create an atmosphere of conflict, which can be good, but which often just causes shouting without much communication. One of the problems is that most people don't have the time or energy to write long-winded, detailed posts like this one. We instead use Twitter and small slogans like #NoTesting. I personally think Twitter as a method for communicating is bad for the community, but I acknowledge it exists. How can we improve the venue for these sorts of debates? How can we improve our communication? I don't pretend to have an answer, but maybe the community does.
UPDATE: I wrote a second piece around this topic.
From your next post http://about98percentdone.blogspot.ru/2014/12/more-on-no-testers.html I didn't understand: did you get the comments translated or not? There were 3 main ideas: we evidently need testers, but we need good, real testers, and therefore we need to pay them accordingly.
How do you find the idea that a tester's salary can be about the same as a developer's salary? We have quite a gap between those figures here in Russia. I'm pretty sure that it's not the main problem, but often all discussions revolve around this question.
I had to manually copy and paste the comments to get them translated rather than having Google's translate page do the translation. Disqus appears to load its comments via AJAX and I don't think Google gets that data to do the translation.
(This applies to both this post and the next post on No Testers)
I think for testers to add value they need to augment the other tools we have in our toolbox, including data science and unit testing. Need is a strong word and I don't mean to imply companies need us. They can choose to go another route. Disqus appears to do that and, to be fair, they do not appear to be failing as a company. Perhaps the best way to say it is that I think the No Tester camp overstates what can be done without having any testers.
As for the conclusion, that testers should be paid based upon their education and value, I don't disagree with that at all. If you ask Dr. Whittaker, he would say that while developers have stepped up their game, testers have not. I am not sure he's completely right, but that is a bit of the perception. Perceptions often equal reality for those who pay us. I can change my direct management's opinions by demonstrating my worth. In the past, one of the most popular arguments was that what we (testers) did was find bugs, and thus our worth was demonstrated by showing bug X would have cost Y rubles. The problem is that bugs became the metric behind QA in many organizations, and that became both our process and our plan. We find bugs. We find them and hand them back to the developer to fix. This would seem to go against the Agile Manifesto (http://agilemanifesto.org/ ), which is part of the No Tester world view (I think). Whittaker said in one of his books that he thought one good measurement of a QA engineer was to see how much better the team's developers did over time. I might even go so far as to say that part of QA's job is maintaining or improving the customer's perception of the product over time.
How do you measure that? Great question. I personally say you don't, or more correctly, I don't want to work at a place where capturing metrics matters more than getting good work done. I think metrics are often used to drive behavior and thus have unintended consequences. Instead I spend more time trying to find a company that fits my own views and staying with them. If they change too much, I go looking for a new home. One heuristic I have found valuable is to look for companies with fewer than roughly 200 employees. More than that and they are often institutional and need metrics to decide employee value.
I honestly don't know enough about Russian culture to hold a solid opinion on what cultural and economic factors are at play in Russia. Cell phone game apps are often of low quality and that appears to be considered okay by most users in the US, for example. Maybe Russians accept more bugs. There are certainly sensitive topics from Russia's history that might play a part in attitudes toward quality, but I would be lying if I said I know anything specific.
I would highly suggest you go look at BBST and the various missions of testing (http://www.testingeducation.org/BBST/foundations/BBSTFoundationsNov2010.pdf ; slide 69). In fact, I would suggest you either take BBST from AST or at least watch the videos and read the slides. They are really good and mind-expanding.
I am not sure I completely hit your question, so feel free to follow up. I realize this is a dense topic, but there are a lot of things to consider.
Well, I think I got my answer. And I agree with you.
Why am I interested in this question?
My team (only software developers) tries to write a lot of tests over the production code. And sometimes the testers don't find any critical issues during testing but, at the same time, we get those issues from our customers. Not often, but... And I hear the question "Why do we need testers in this case?"
Now I have more arguments :) Thanks
Let me pose you a simple question. What developer intentionally adds bugs to their code? When do we intentionally make mistakes? I think rarely is a good answer. So why do bugs happen? They are mistakes of one sort or another. One is that we made an assumption about the inputs or the outputs and that mistake caused a bug. That sort of bug is easily found in logs or by a unit test. Another common mistake is an assumption about what should be built. This is often the sort of mistake developers, given their mindset, tend to make, and it is often not caught without someone else looking at the product from a customer viewpoint. The third common mistake is the type where each piece of code looked reasonable, but when all the code was tied together (integrated) or when systems were tied together, something didn't work. This also is often not caught until a real user tries to use the system. Sometimes automation handles this well, sometimes not. I think those are the sorts of things you should use when arguing about hiring or not hiring testers. I am not saying these are the only common cases, but consider a developer's typical personality profile. In my experience they are usually optimistic and gung-ho to go build something. Testers tend to be more measured and careful. "That will never break," coming from a developer, often means you should file a bug now, as that developer has not learned to be careful enough.
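To illustrate that first category (a bad assumption about inputs), here is a small, hypothetical Python example; the function and the data are mine, purely for illustration. The developer assumed there would always be at least one order, and a unit test written by someone questioning that assumption exposes the gap.

```python
import unittest

def average_order_total(totals: list) -> float:
    # The original assumption was that `totals` always has at least one
    # entry; an empty list would raise ZeroDivisionError, so guard it.
    if not totals:
        return 0.0
    return sum(totals) / len(totals)

class AverageOrderTotalTest(unittest.TestCase):
    def test_typical_input(self):
        self.assertAlmostEqual(average_order_total([10.0, 20.0]), 15.0)

    def test_empty_input_assumption(self):
        # The kind of boundary case a second pair of eyes tends to ask about.
        self.assertEqual(average_order_total([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```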
- JCD
Blogspot comments don't allow me to "favorite" your comment :) So let me "+1" here