Connecting trust, AI and public service values

Renee Leon


If you’re in the public service, how can you safely work with AI? In our newest Work with Purpose episode, AI researchers Professor Toni Erskine and Adjunct Professor Kate Conroy discuss ways to get started and pitfalls to avoid.

At least since the launch of ChatGPT, artificial intelligence (AI) has been a regular point of discussion in public sector workplaces around the globe. Whilst the technology offers many opportunities to optimise workflows and improve the delivery of public services in Australia, it also comes with many challenges around trust and data security.

Understand limitations

If the public service wants to use AI safely and productively, Professor Toni Erskine from the Australian National University says it all needs to start with a general understanding of how particular AI-enabled tools work and what their limitations are.

“For example, generative language models like ChatGPT, which have received a huge amount of attention recently, rely on statistical inference to string together a series of words, effectively predicting what is likely to come next.

“This tool is neither thinking nor reflecting on the best answer to a question nor searching for the truth, yet users sometimes, I think, assume otherwise, and that’s evident within the university setting. Our expectations need to be recalibrated appropriately.”

Train and elevate experts

Whilst a general level of understanding is important, Professor Erskine also says that working hand-in-hand with experts is crucial for successfully integrating AI into public sector work.

“We need specific people with high-level expertise and ongoing training operating AI-enabled tools, including interpreting recommendations made by AI-driven decision support systems.”

Citing research by Professor Jenny Davis, based at Vanderbilt University in the United States, Professor Erskine highlights the need for ongoing professional training as well as policies and technological design that place humans with specialist expertise at the core of ‘the loop’ in any setting.[i]

“[Professor Davis] wisely argues that we need to employ, train, and sustain expert human practitioners to the highest standard when it comes to any AI-driven system.”

Turn to public service values

Adjunct Professor Kate Conroy from Queensland University of Technology encourages the public sector to be proactive about increasing staff’s awareness of how AI tools operate, and of the interplay between the people in government, their stakeholders, and the public.

She recommends asking some critical questions:

“What does the public think about having public servants using artificial intelligence? What kinds of Australians, including those who are most disenfranchised or marginalised or already struggling to manage, are going to be helped by the use of artificial intelligence? Or is it going to be a domino effect of continuing good things go to those who have the funding and the privilege and those who don’t continue to get [fewer] services and are continually struggling to have their voices heard?”

She says that a good place to start when working with AI is public service values.

“In Queensland, for example, we have the Public Sector Ethics Act 1994, which actually has ethical principles that you should work to when you work for the public service.”

Explore options with open documents

As a next step, working with openly available documents can be a way to explore AI capabilities and find ways to improve policy in a time-efficient manner.

“One fun thing you can do that is compliant with existing generative AI documents in the public service, is [using] an open product like ChatGPT-4, and you can put an open document such as a little piece of legislation or an act into that chat, and you can ask it questions or you can say, ‘I’m thinking about how to develop better guidance for getting the Queenslander’s driver’s licence renewed, could you please look at this Public Sector Ethics Act and give me some suggestions for how I could provide guidance in accordance with these principles?’,” Adjunct Professor Conroy says.

“So, you don’t have any personal data or sensitive data or information about customers or any government information. You’re asking general questions about how to be a better public servant [from] these tools.”

Be aware of risks

Taking a values- and principles-centred approach when working with AI can greatly improve ways of working, according to Adjunct Professor Conroy. However, certain risks remain.

“If you don’t understand why you work the way you do or what your goals are and how that concords with your public service obligations, then AI is dangerous.

“And be aware that corporate and industry players who are trying to sell their products to government are not as sensitive to the specific ethical obligations of the public service. They don’t understand that relationship, particularly in Australia to Australians.”

“It is important we recognise we have hundreds of Indigenous languages in Australia that are really underknown, and we are unlikely to get the sensitivity of cultural communication to our Australian stakeholders if we depend on these generalist American-centric AI products in order to do our jobs.”