May 21, 2018
People who inspect food on a conveyor belt are good at recognizing faulty products. However, the speed of a conveyor belt often makes it difficult to remove a faulty product by hand quickly enough. With the current state of play in robotics and AI, we can build good systems that take over this work from us 24/7. It’s a relatively boring job, so people generally don’t mind handing it over to the ‘bots’.
But how does testing with robots and artificial intelligence actually work? The fundamental starting point should be to invest in building smart robots or other intelligent solutions. Author Andrew Keen explains very clearly that we may be able to trust the code that is written (and which we can test), but the problem lies with the programmers and the platforms that are used. Can we trust them? Are we confident that the people building complex AI solutions haven’t embedded something harmful in their code, either intentionally or in error? The truth is that highly complex AI algorithms are too complex, and too extensive, to test exhaustively, so we simply have to trust in the good intent of the programmers. Indeed, a recent study shows that we have more fears about what humans will do with technology than the other way around.
There are plenty of AI platforms to choose from. IBM Watson, TensorFlow, Microsoft AI, Caffe, Apache Mahout, and NuPIC are just a few of the platforms available. They differ in their AI specialization: one platform might offer broad artificial intelligence in terms of algorithms and numerical algebra, while another focuses on a very specific area, such as deep-learning vision technology or the implementation of a theory like ‘hierarchical temporal memory’. Often, we have blind faith in the platforms we use. However, the non-deterministic behavior of artificial intelligence is difficult to get a grip on. By this I mean that ‘old school’ deterministic testing relies on a series of specific steps with specific inputs that lead to one specific answer. As we move towards greater use of AI in testing, we can no longer predict the outcome in a given situation. Why? Because the result a non-deterministic AI solution gives today, or tomorrow, may be different in two days’ time or further in the future, because it is constantly learning and adapting to a specific situation.
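To make that contrast concrete, here is a minimal sketch. The first check is classic deterministic testing: one input, one exact expected answer. The second checks a learning system, where the exact number may drift between retrainings, so the test asserts properties and tolerances instead of a fixed value. The `predict_delivery_days` function is a hypothetical stand-in, not a real API from any of the platforms named above.

```python
import math

# --- 'Old school' deterministic check: one input, one exact expected output ---
def add_vat(price: float) -> float:
    return round(price * 1.21, 2)

assert add_vat(100.0) == 121.0   # the same input always yields the same answer

# --- Checking a learning system: the exact value may drift between runs ---
# `predict_delivery_days` stands in for a hypothetical model that is
# periodically retrained, so yesterday's exact output is not guaranteed today.
def check_prediction(predict_delivery_days, order) -> None:
    prediction = predict_delivery_days(order)
    # Assert properties and tolerance bands rather than one specific answer.
    assert not math.isnan(prediction), "prediction must be a number"
    assert 0 <= prediction <= 30, "delivery estimate must stay in a plausible range"
```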
Confidence in robots can be built by initially having humans and technology perform activities together. A good example is testing the cockpit of a helicopter in a simulator scenario. Checking the functionality of all buttons, dials, and screens requires considerable domain knowledge. Leaving the start-up and ascent of a helicopter to a robot (arm) is a big step, so that test scenario might be better performed by a human. An endurance test over several hours of flight is another story: there it becomes practical to have an intelligent robotic arm monitor the cockpit and operate the buttons. Validating the results of this joint human-robot approach to testing (cobotics) will help to build confidence in the greater use of robots.
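A heavily simplified sketch of how such an endurance test might be divided: the robotic arm performs the repetitive operation and monitoring for hours, and anything outside the expected range is queued for a human test engineer to validate afterwards. The `cockpit` and `arm` interfaces, the indicator names, and the limits are purely illustrative assumptions, not a real simulator API.

```python
import time

# Illustrative cobotics endurance-test loop for a cockpit simulator.
EXPECTED_RANGES = {            # assumed indicator limits for the simulated flight
    "rotor_rpm": (350, 420),
    "oil_temp_c": (40, 110),
}

def endurance_test(cockpit, arm, duration_s=4 * 3600, interval_s=5):
    """Let the robotic arm cycle a control and log out-of-range readings."""
    anomalies = []
    end = time.time() + duration_s
    while time.time() < end:
        arm.press(cockpit.button("fuel_pump_toggle"))        # repetitive operation
        for name, (low, high) in EXPECTED_RANGES.items():
            value = cockpit.read(name)
            if not (low <= value <= high):
                anomalies.append((time.time(), name, value))  # for human validation
        time.sleep(interval_s)
    return anomalies
```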
5 ‘hops’ to digital testing
With the scene set for greater use of robotics and AI in testing, Sogeti’s book ‘Testing in the digital age – AI makes all the difference’ describes 5 ‘hops’ on the journey to creating a ‘robust and long-term strategy’. These hops (or stages) move us from the past (reactive testing) to now (test monitoring) and eventually on to the future (test forecasting).
The 5 hops are covered in considerable depth in sequential chapters of the Sogeti book, so the following is merely a flavor of what is discussed:
Already, we are seeing artificial intelligence being put to use in many more areas than test execution alone. For example, it is used to select test cases, monitor test results, smartly compose a test environment, or create test data. As the digital world grows exponentially around us, the opportunities to test smarter will grow in parallel. That is especially the case given the speed at which new technologies are implemented, sometimes too quickly. Testing the AI in new products is an important step in establishing confidence in the technology. This in turn will establish greater confidence in the use of intelligent robots that perform tests. As that confidence grows, we will see AI robots undertaking more of the testing, freeing up test engineers to run quality checks on the robots themselves in a cobotics model.
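As one example of the AI-assisted test case selection mentioned above, the sketch below ranks test cases by a failure probability learned from historical pass/fail results, so the most failure-prone cases run first. This is a generic scikit-learn illustration under invented data and feature names, not the approach described in the book.

```python
# Sketch of ML-based test case prioritization (illustrative data and features).
from sklearn.linear_model import LogisticRegression

# Features per historical run: [lines changed in covered code, failures in last 10 runs]
history_features = [[120, 3], [5, 0], [60, 1], [300, 6], [10, 0]]
history_failed   = [1, 0, 0, 1, 0]     # 1 = the test failed on that change

model = LogisticRegression().fit(history_features, history_failed)

# New change set: predict a failure probability for each candidate test case.
candidates = {"test_login": [80, 2], "test_export": [4, 0], "test_payment": [250, 5]}
ranked = sorted(candidates,
                key=lambda t: model.predict_proba([candidates[t]])[0][1],
                reverse=True)
print("Suggested execution order:", ranked)
```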
Senior Test Consultant High Tech