Can AI learn to become more human?

It could improve Australia’s national defence.

Humans are resilient and able to bounce back after facing adversity. That’s a trait that Professor Hoa Khanh Dam, from the University of Wollongong (UOW) School of Computing and Information Technology, is now trying to emulate in the form of Artificial Intelligence (AI) to protect Australia’s most sensitive infrastructure from cyber-attacks. 

Professor Dam has been focussing his research in software engineering and AI on improving Australia’s national defence.

While the headlines have been predicting the demise of humanity as the use of AI continues to rise, Professor Dam said the evolution of technology should not be viewed with alarm, but as the next step in the industrial revolution. 

“I think there is a lot of conversation about AI, which is similar to the conversations that were had back at the start of the industrial revolution,” he said. 

“People back then were worried about the new technology taking their jobs, but technology is there to support us, and AI will also support us.  

“Humans are still in the loop and AI is just one kind of technology that we can use to support our work, our jobs, and our daily lives - that is the way it should be used responsibly. 

“There are limitations with AI at the moment in the sense that it can only specialise in one particular task. It has limited reasoning or planning like a human. 

“Although it can learn by doing the task it is designed for over and over again, it is only specialised in that one task and does not have the capacity for the kind of reasoning and planning that humans do.”

Learning resilience  

However, the work researchers at UOW are presently doing is moving AI closer to being able to do just that. 

Professor Dam is leading a UOW team working with the Department of Defence to develop a wide range of AI technologies to build cyber resilience into our next-generation cyber defence systems and infrastructures. 

UOW’s Decision Systems Lab was established in 1998 and has long been at the forefront of AI engineering research. The lab comprises an interdisciplinary research team from the Faculty of Engineering and Information Sciences, and its researchers have developed AI applications across a range of industries, from software engineering to business, supply chain, healthcare and defence.

Team member and PhD student Geeta Mahala has worked on an innovative goal-reasoning technology that can be applied to cyber security for Defence and beyond.

Professor Hoa Khanh Dam and PhD student Geeta Mahala are building cyber resilience into our next-generation cyber defence systems and infrastructures.

Setting goals  

The project is the first application of intelligent agent technology, specifically the set of techniques commonly characterised as goal reasoning, to formulating extremely fast real-time responses to cyber security threats.

A goal is a specification of the intent of an intelligent agent. It provides a useful abstraction because a goal admits multiple means of realisation and intelligent agent technology allows rapid switching between goals as a cyber-threat evolves. 
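The idea that a goal admits multiple means of realisation, and that an agent can switch goals rapidly as a threat evolves, can be illustrated with a minimal sketch. This is not the team's actual system; the goal names, plans and matching rule are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal names an intent; each goal admits several plans (means of realisation)."""
    name: str
    plans: list  # (applicability-test, action) pairs, tried in order

@dataclass
class Agent:
    goals: list = field(default_factory=list)
    active: Goal = None

    def adopt(self, goal):
        self.goals.append(goal)

    def reconsider(self, threat):
        """Switch the active goal when the observed threat changes (hypothetical rule)."""
        for goal in self.goals:
            if goal.name == threat:
                self.active = goal
                return goal
        return None

    def act(self, state):
        """Execute the first plan whose applicability test holds in the current state."""
        if self.active is None:
            return None
        for applicable, action in self.active.plans:
            if applicable(state):
                return action
        return None

# Usage: two goals, each with two means of realisation
block = Goal("block-intrusion", [
    (lambda s: s["firewall_up"], "tighten-firewall-rules"),
    (lambda s: True, "isolate-host"),
])
restore = Goal("restore-service", [
    (lambda s: s["backup_ok"], "restore-from-backup"),
    (lambda s: True, "fail-over"),
])

agent = Agent()
agent.adopt(block)
agent.adopt(restore)

agent.reconsider("block-intrusion")
print(agent.act({"firewall_up": False, "backup_ok": True}))  # isolate-host
agent.reconsider("restore-service")  # the threat evolves: switch goals
print(agent.act({"firewall_up": False, "backup_ok": True}))  # restore-from-backup
```

The useful property is that the goal, not any single plan, is the unit the agent commits to: when the firewall is already down, the same goal is achieved by a different plan, and when the threat changes the agent swaps goals without replanning from scratch.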

The project was a collaboration between UOW, the University of South Australia and the University of Adelaide. The UOW team, led by the late Professor Aditya Ghose, focused on goal reasoning, while the other universities focused on the details of the cyber-threats and the software engineering aspects of the resulting system. 

“In this project, we worked on an AI goal-reasoning tactic that can be applied to cyber security for defence. Cyber threats are constantly evolving, and attackers are using new techniques, so we have tried to build an effective counter-attack system, with this goal-reasoning AI powering its agents,” Ms Mahala said.

“When humans have a goal, they find different ways of achieving it, and that is what this technology will also do. It has to determine what goal the cyber-attack is trying to achieve – turning off a firewall, introducing encryption, obtaining personal information – and then try to work around those attacks.

“The goals of attackers are constantly changing, and the AI technology has to work around new attack techniques and to figure out what the attackers’ current goal is.” 
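Figuring out an attacker's current goal from what has been observed so far is a goal-recognition problem. A minimal sketch of that idea, under the assumption that each candidate goal has a known signature of typical actions (the signatures here are invented for illustration, not drawn from the project):

```python
# Hypothetical goal-recognition sketch: infer the attacker's most likely
# current goal from the actions observed so far, by measuring overlap with
# known per-goal action signatures.

SIGNATURES = {
    "disable-firewall":  {"probe-ports", "modify-firewall-config"},
    "exfiltrate-data":   {"probe-ports", "read-database", "open-outbound-channel"},
    "deploy-ransomware": {"escalate-privileges", "encrypt-files"},
}

def infer_goal(observed):
    """Return the candidate goal whose signature best matches the observations."""
    def score(goal):
        sig = SIGNATURES[goal]
        return len(sig & observed) / len(sig)  # fraction of the signature seen
    return max(SIGNATURES, key=score)

print(infer_goal({"probe-ports", "read-database"}))  # exfiltrate-data
```

As attackers change techniques, the inferred goal changes with each new observation, which is what lets a defending agent re-target its own goals in response.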

Defending against cyber-attacks

Professor Dam said the Decision Systems Lab is focussing its research on engineering these goal-based AI systems with the intent of creating a technology that will allow AI to switch between and achieve different goals as cyber threats evolve. 

“Cyber agents will come up with different ways to attack us, so our systems have to be agile and resilient,” he said. 

“We can assume the cyber attacker can also use AI and adapt as well, so we have to develop our system so that it can co-evolve with, and self-adapt to, these different types of attack.

“Our systems have to stay one step ahead by looking far ahead, reasoning about different possibilities and self-improving. The research we have been doing is in the early stages, but as we explore this option we will come up with new ideas,” he said.

“At the end of the day these technologies can be deployed not just in defence settings but to protect our infrastructure across different domains in general.” 

However, not all of this AI technology is being used for defence capabilities or warding off cyber-attacks.

Professor Dam and his team are developing AI systems for industries outside of defence.

Real industry impact 

In a recent project, Professor Dam and his team developed an AI/IoT-powered system for supporting environmental management at Tram Chim National Park in Vietnam.  

The project was funded under the Department of Foreign Affairs and Trade’s Aus4Innovation grant.   

“This national park needed to monitor not just the water and air quality but to keep track of the animals that were using it, particularly the birds,” Professor Dam said. 

“We built an AI system where we deployed different types of sensors for water and air quality across different parts of the park and installed cameras in the area to monitor and keep track of the animals and birds. 

“Using AI technology, we could determine how many and what kind of birds were using the national park. We collected all the information in real-time and it was automatically processed by the AI system. 

“This technology is able to process, analyse and classify large volumes of data into metrics that give real-time insights into the national park's ecosystem health. It also allows the national park to collect and keep track of this information over time, so they can then look at the data and come up with adaptive environmental management strategies.” 
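The pipeline Professor Dam describes – sensor readings arriving in real time, folded into rolling metrics that classify ecosystem health – can be sketched minimally. The sensor names, window size and health thresholds below are illustrative assumptions, not the park's actual configuration.

```python
# Hypothetical sketch of a real-time sensor pipeline: each reading updates a
# rolling window per sensor type, and rolling averages are classified into a
# simple ecosystem-health status. All thresholds are illustrative.

from collections import defaultdict, deque

WINDOW = 100  # keep the most recent 100 readings per sensor type
readings = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(sensor_type, value):
    """Store a new reading and return the updated rolling average."""
    window = readings[sensor_type]
    window.append(value)
    return sum(window) / len(window)

def health_status(ph_avg, pm25_avg):
    """Classify ecosystem health from rolling metrics (illustrative thresholds)."""
    if 6.5 <= ph_avg <= 8.5 and pm25_avg < 25:
        return "healthy"
    return "needs-attention"

ph = ingest("water_ph", 7.1)
ph = ingest("water_ph", 7.3)
pm = ingest("air_pm25", 12.0)
print(health_status(ph, pm))  # healthy
```

Keeping the window in memory also gives the over-time record the quote mentions: the same stored readings can be aggregated later to inform adaptive management strategies.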

Professor Dam said the team is now looking at deploying a similar technology in national parks and other similar settings in Australia. 

“We are fascinated by the research we do and are always looking for practical settings where we can apply our ideas to generate real impact for industry and the broader community,” Professor Dam said.

“The collaborations with Defence and national parks are exciting for us. We can validate our ideas and co-develop solutions with different stakeholders. The work from our Decision Systems Lab focuses on engineering AI-enabled software systems, and it is very exciting to be able to work on these timely and emerging problems.”