What Google has to say on AI in Testing

This week, the Google test blog newsletter was about GTAC, the Google Test Automation Conference. I found one session on AI applied to testing particularly relevant:

Free Tests Are Better Than Free Bananas: Using Data Mining and Machine Learning To Automate Real-Time Production Monitoring (Celal Ziftci of Google)

The session was about Google’s assertion framework, which runs against their production logfiles in real time, checking for inconsistencies. Examples of meaningful assertions:

transaction.id > 0, transaction.date != null.

If any assertion fails, a notification is sent to a developer to take action. Usually a developer would have to design these assertions by hand, but now a tool assists with the job.
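To make the idea concrete, here is a minimal sketch in Python of what such a check over log records might look like. It is purely illustrative; the talk did not show Google’s implementation, and the field names and predicates are my own invention.

```python
# A minimal, hypothetical log-assertion checker (not Google's framework).
# Each assertion is a named predicate over a log record; any failures are
# collected so a developer can be notified.

def check_record(record, assertions):
    """Return the names of assertions that fail for one log record."""
    return [name for name, predicate in assertions.items()
            if not predicate(record)]

assertions = {
    "transaction.id > 0": lambda r: r.get("transaction_id", 0) > 0,
    "transaction.date != null": lambda r: r.get("transaction_date") is not None,
}

log_stream = [
    {"transaction_id": 42, "transaction_date": "2014-11-03"},
    {"transaction_id": 0, "transaction_date": None},  # should trigger both
]

for record in log_stream:
    failures = check_record(record, assertions)
    if failures:
        print(f"Notify developer: {failures} failed for {record}")
```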

The Daikon invariant detector identifies invariants: rules which are always true for a certain section of code. Although Daikon was designed to analyse code, Google has modified it to work on data such as logfiles. Daikon starts with a set of hypotheses about what your input will look like and, as you push data through it, eliminates those which prove to be false. The surviving rules can be used as assertions, thereby automatically generating test cases. These test cases still need a developer to determine their value and validity, however.
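As a toy illustration of that hypothesis-elimination idea (this is not the real Daikon tool, and the candidate invariants and records are made up), something like the following captures the spirit:

```python
# Toy Daikon-style invariant detection: start with candidate invariants for
# some fields and discard any that an observed record falsifies. Whatever
# survives becomes a candidate assertion for a developer to review.

candidate_invariants = {
    "transaction_id > 0": lambda r: r["transaction_id"] > 0,
    "transaction_id is even": lambda r: r["transaction_id"] % 2 == 0,
    "amount >= 0": lambda r: r["amount"] >= 0,
}

observed_records = [
    {"transaction_id": 7, "amount": 10.0},
    {"transaction_id": 12, "amount": 0.0},
    {"transaction_id": 3, "amount": 99.5},
]

surviving = dict(candidate_invariants)
for record in observed_records:
    falsified = [name for name, check in surviving.items() if not check(record)]
    for name in falsified:
        del surviving[name]  # hypothesis disproved by this record

print("Candidate assertions:", list(surviving))
# -> ['transaction_id > 0', 'amount >= 0']; a human still judges their value.
```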

The other technique they use is Association Rule Learning, which finds relationships between different data items, e.g.

when country is uk, time zone is +0

These, too, are added into the assertion framework to identify issues occurring in production.
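Again purely as an illustration (the talk does not describe the exact algorithm Google uses), a candidate rule like the one above can be scored by its confidence, the fraction of matching records for which it holds:

```python
# Sketch of scoring a simple association rule over log fields:
# "if field A has value x, then field B has value y".

records = [
    {"country": "uk", "timezone": "+0"},
    {"country": "uk", "timezone": "+0"},
    {"country": "de", "timezone": "+1"},
    {"country": "uk", "timezone": "+0"},
]

def rule_confidence(records, antecedent, consequent):
    """Confidence of the rule: if antecedent (field, value) then consequent."""
    a_field, a_value = antecedent
    c_field, c_value = consequent
    matching = [r for r in records if r.get(a_field) == a_value]
    if not matching:
        return 0.0
    return sum(r.get(c_field) == c_value for r in matching) / len(matching)

conf = rule_confidence(records, ("country", "uk"), ("timezone", "+0"))
print(f"confidence(country=uk -> timezone=+0) = {conf:.2f}")  # 1.00
```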

In this case, the work that developers used to do in defining test cases is now being done by machines. But human beings are still needed to decide whether the identified rules make sense and add value.

The AI system, at times, identifies trivial rules, but is also capable of identifying complex relationships that would be less obvious to humans.

Where are we?

When I first Google'd this topic, to my surprise, the top results were not touting test tools using AI to augment the test process, but research papers and books describing the theory of applying AI to software testing.

The progress we have recently made in artificial intelligence applications has fascinated me, particularly in areas touching on the potential creativity of machines. That is one of the highly experimental areas of AI, which still seems to be taking shape. Other applications of AI have been successful from the start: AI planners are routinely used to aid in the planning of complex systems, and data mining has become increasingly popular over time. It looks like AI applied to testing is one of those areas where it still proves rather difficult to replace a human being.

In a training course I attended this year, we discussed whether testers would soon be put out of work, replaced by machines. Since we started using agile development practices and test automation tools, it seems that we need fewer and fewer testers to produce the same amount of software. Perhaps the next step in this progression is for software development and testing to be carried out in part by machines. That could put people out of work, and not only in the software industry. This article from the Economist discusses the economic risk to jobs posed by intelligent computing. But luckily we still seem to have some years to go before we put ourselves out of business.

On Becoming Obsolete

An examination of artificial intelligence techniques applied to software testing.

The idea of computers testing software with little or no human intervention intrigues me. Software development (and other creative pursuits) is arguably an inefficient and unpredictable endeavour. It’s rather difficult to design a perfect system and produce it accurately in one step. We need plenty of back and forth, sharing ideas and improving them until we have something that could work. Then we need several iterations to translate that idea into code, probably losing something in the translation along the way. After this we must verify that we have achieved our goal, with more communication and rework. So there is something beautiful about cutting ourselves out of the loop and increasing computer independence, moving towards an efficient and natural process of software generation.

A forum discussion on LinkedIn prompted me to dig deeper into how close we are to replacing testers in software development. As a lead tester, I find it a fascinating challenge. Although we have formal methods of testing software, a lot of testing in the field is, I believe, led by gut instinct. It could be said that the best testers are the ones who can intuitively sniff out defects and weaknesses. But where does this intuition come from? And how can we replicate it in machines? Or will it come down to brute-force approaches with thousands of inputs and outputs to achieve the same goal?

In future postings, I intend to unpack what there is to know in this field and then speculate on where it could be headed and how it can be applied.
