Moving beyond the hype in AI and machine learning?
Expectations are high, but applications have yet to come to fruition. We're talking about artificial intelligence (AI) and machine learning, as discussed in the 2020-2021 World Quality Report from Capgemini and Sogeti, in partnership with Micro Focus, published on November 5, 2020.
There's a general buzz of excitement at the potential for using AI and machine learning in quality assurance (QA) and testing, just as there was last year. Yet, while our WQR survey findings reveal some evidence of supervised learning being used as a core part of machine learning (ML) to make quality engineering smarter, we're not yet seeing the maturity required to produce visible results.
Several questions arise for those of us watching the evolution of AI and ML in quality assurance. For example, are we using them as tools to do something we already do, only better? Or to change what we do altogether? Or simply to carry on as we were, but using a machine instead of a human – in which case, where is the value?
Yet, despite these questions, the future looks bright for AI and ML in this area. Almost nine out of ten respondents (88%) in this year's WQR survey said that AI was now the strongest growth area of their test activities. Looking ahead, we see the primary goal as avoiding defects before they even occur. Think about it. The ability to prevent a defect without having to run tests in the first place. That's smart.
AI and ML use cases
Currently, though, use cases include things like automated root cause analysis, with 58% of this year's WQR survey respondents saying it was extremely or highly relevant. That said, we're inclined to think this is more of an aspiration than something they're actually applying at present.
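The report doesn't prescribe a particular implementation, but one common pattern behind automated root cause analysis is clustering similar test failures so that each group can be triaged once rather than failure by failure. Here is a minimal sketch of that idea, assuming failure messages are available as strings; the scikit-learn library, the sample logs, and all names are our illustrative choices, not anything from the report.

```python
# Minimal sketch: grouping test failures by likely root cause via log clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

failure_logs = [
    "TimeoutError: connection to payments-db timed out after 30s",
    "TimeoutError: connection to payments-db timed out after 30s (retry 2)",
    "AssertionError: expected status 200, got 500 from /checkout",
    "AssertionError: expected status 200, got 500 from /basket",
]

# Turn raw messages into TF-IDF vectors so similar failures sit close together.
vectors = TfidfVectorizer().fit_transform(failure_logs)

# Cluster the vectors; each cluster is one candidate root cause to investigate.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for log, label in zip(failure_logs, labels):
    print(f"cluster {label}: {log}")
```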
While new use cases for AI and ML are only now emerging, some organizations are ahead of the game. We cite one multi-national bank that has been using machine learning for analysis on customer usage, seeing which features are working best for people. That knowledge is then being fed back into the bank’s development strategy.
Elsewhere, we've seen organizations running analytics on production incidents and run-time application logs to conduct deep, intelligent what-if analysis, to predict future quality, and to prescribe the necessary development and testing activities.
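To make that concrete, a simple version of "predicting future quality" is a supervised model that scores each module's defect risk from features mined out of incident history and logs. The sketch below assumes such per-module features exist; the feature set, the toy data, and the choice of logistic regression are our assumptions, not the setups of the organizations mentioned.

```python
# Minimal sketch: predicting which modules are likely to fail next, from
# features derived from production incidents and run-time logs.
from sklearn.linear_model import LogisticRegression

# Per-module features: [incidents in last 90 days, error-log lines/day, recent code churn]
X_train = [
    [12, 340.0, 25],
    [1, 4.0, 2],
    [7, 120.0, 40],
    [0, 1.5, 1],
]
y_train = [1, 0, 1, 0]  # 1 = module had a post-release defect, 0 = it did not

model = LogisticRegression().fit(X_train, y_train)

# Score a module that hasn't failed yet; a high probability prescribes
# extra testing and hardening work before the next release.
risk = model.predict_proba([[5, 80.0, 30]])[0][1]
print(f"predicted defect risk: {risk:.2f}")
```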
And, of course, there's the use of AI for the generation and management of test data. Here we see it being used to identify gaps in test coverage compared with real user-experience patterns. It also supports regulatory compliance and the ethical use of data when it's used to create synthetic data, for example, to comply with GDPR data privacy rules.
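The GDPR angle is easy to illustrate: if test environments only ever see fabricated personal data, no real customer data has to leave production. One common way to do this in Python is the Faker library; the library choice and the record shape below are our illustrative assumptions, not tooling named in the report.

```python
# Minimal sketch: generating synthetic customer records for testing, so no
# real personal data is copied into test environments.
from faker import Faker

fake = Faker()
Faker.seed(42)  # seed for reproducible test fixtures

def synthetic_customer() -> dict:
    """Return one customer record with realistic but entirely fabricated values."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "iban": fake.iban(),
    }

test_customers = [synthetic_customer() for _ in range(3)]
for customer in test_customers:
    print(customer)
```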
Testing ‘of’ or ‘with’
In assessing the state of AI and machine learning in quality assurance, one question that keeps coming up is whether we are using it as a tool to aid QA and testing, or whether we're assessing the QA of the intelligent machine itself. There is a significant difference between the two. Assuring the quality of AI is particularly difficult, especially when the system is continually learning, because you don't know what the expected outcome is. And, as we point out in the report, there are challenges around achieving holistic coverage of AI systems – for instance, detecting bias in AI.
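The report doesn't name a technique for this missing-oracle problem, but one widely used answer is metamorphic testing: instead of asserting an exact expected output, you assert a relation that must hold between outputs for related inputs. Here is a minimal sketch under that assumption; the sentiment model is a stand-in, not a real system under test.

```python
# Minimal sketch: metamorphic testing, one common answer to the "no expected
# outcome" problem. We don't assert an exact output; we assert a relation
# that must hold between outputs for related inputs.

def sentiment_score(text: str) -> float:
    """Stand-in for an ML model under test; returns a score in [-1, 1]."""
    positive, negative = {"good", "great"}, {"bad", "awful"}
    words = text.lower().split()
    return (sum(w in positive for w in words)
            - sum(w in negative for w in words)) / max(len(words), 1)

def test_adding_praise_never_lowers_score():
    # Metamorphic relation: appending a positive phrase must not decrease the
    # score, whatever the (unknown) "correct" score for the input may be.
    base_inputs = ["the release was bad", "the release was good", "the release shipped"]
    for text in base_inputs:
        assert sentiment_score(text + " great work") >= sentiment_score(text), text

test_adding_praise_never_lowers_score()
print("metamorphic relation held for all inputs")
```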
In general, we expect the benefits to accrue initially when AI and ML are used as tools to aid QA – for example, to predict patterns, identify the behaviors of coders (not as 'big brother' as that might sound), and find indicators of good code and bad code.
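"Indicators of good code and bad code" can start from very simple static metrics before any learning is involved. The sketch below mines two such indicators with Python's standard-library ast module; the metrics, thresholds, and sample code are illustrative assumptions, and a real system would calibrate such indicators against defect history.

```python
# Minimal sketch: extracting simple per-function indicators that often
# correlate with defect-proneness, using only the standard library.
import ast

SOURCE = '''
def tidy(items):
    return sorted(set(items))

def messy(a, b, c, d, e, f):
    if a:
        if b:
            if c:
                return e or f
'''

def function_indicators(source: str) -> list:
    """Return per-function metrics (argument count, line count)."""
    indicators = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            indicators.append({
                "name": node.name,
                "args": len(node.args.args),
                "lines": node.end_lineno - node.lineno + 1,
            })
    return indicators

for metrics in function_indicators(SOURCE):
    flag = "review" if metrics["args"] > 4 or metrics["lines"] > 20 else "ok"
    print(metrics, "->", flag)
```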
Changing skills
Finally, as with any new or emerging technology, ensuring you have the skills to maximize its value is a challenge. In the case of AI, it's not just about what the technology can do, but about how it can be incorporated into the overall software development lifecycle. This is something to watch going forward. And it's interesting to note a divergence in how AI and ML change the skills needed from QA and test professionals across the countries covered by the report. For instance, the greatest overall area of need this year was identified as software development engineer in test (SDET) skills, mentioned by over a third (34%) of respondents. In the Netherlands, however, it was an issue for only 5% of respondents, while in the UK, Belgium, and Luxembourg, the figures were over 70%.
Get in touch
If you'd like to hear more about our findings relating to AI and machine learning in quality assurance, please get in touch.