
How should the government regulate AI?

The federal government continues to navigate AI regulation, with different stakeholders pulling in different directions: AI companies are asking for assistance, the Productivity Commission is demanding a clear and unfettered right for AI companies to use others’ content, creators (backed by unlikely allies like News Corp) are demanding the opposite, and unions want the economy frozen in amber lest someone somewhere lose their job.

Meanwhile, enormous sums of money are being poured into AI infrastructure – with eye-watering energy demands – for a product whose business case is not yet proven, but which may eventually eclipse search engines in terms of monetization.

This isn’t a major policy issue for the government – unless it is foolish enough to bow to calls for “sovereign AI capability” and waste money on an “AussieGPT” (Richard Holden has nailed the risks of that kind of folly). An AI bubble may well be emerging, and if it bursts it could do serious damage to stock markets, perhaps even to financial markets more broadly, but that’s private money at stake. An AI company like OpenAI could be the next Google or Amazon, or the next WeWork. Let the market decide.


The government’s policy problem is that it is unclear what exactly the policy problem is. Not in the sense that the government should go looking for problems to solve (there are plenty of those), but in the sense that the economic, social, political and cultural impacts of AI are likely to be considerable – at least as large as those of social media and search engines, and possibly much larger if agentic AI becomes a major interface between individuals and the rest of the world, if generative AI is used to produce disinformation at scale, and if chatbots replace personal and professional relationships at the population level.

Given that we are still working out how to regulate social media long after it has caused society-wide damage (and produced some benefits), the ability of democratic governments (even those not beholden to big tech, as the Trump regime is) to respond effectively and in a timely way to the negative effects of AI looks weak indeed.

That concern led Mordy Bromberg, president of the Australian Law Reform Commission, to call in August for a process to proactively map the scope of the regulatory challenges posed by AI across the economy, rather than the piecemeal debate ahead of the productivity roundtable, in which vested industry interests pushed their cases for specific regulatory changes.

Bromberg’s appeal appears to have been in vain; there is no sign anyone in government is thinking about AI in that way. The magpie-minded Andrew Leigh spoke yesterday about the role of AI in what he calls the “progressive productivity agenda”, noting that far from AI killing off radiology as a profession, as some predicted a decade ago, demand for radiologists is soaring.

The example illustrates the impossibility of predicting the effects of AI (as a former enthusiast for the wonderful “interconnectedness” offered by social media, I have particular experience of being badly wrong about the impact of new media technology). There are smart, experienced people in tech with very different views to Leigh’s, who think a jobs hecatomb is coming as AI emerges from the labs and begins wreaking havoc on white-collar employment, disrupting labour markets on a massive scale with attendant effects on the financial system and the broader economy.

Even if such risks are small, for all anyone knows they are not insignificant. Perhaps Leigh’s AI future will arrive, lifting productivity, increasing demand for skills and improving outcomes. Perhaps it will be the opposite. That’s why governments need to think about AI in terms of risk management – not just of regulatory impacts, as per Bromberg, but of the potential for significant economic, political and social dislocation.

Although risk management is a fundamental part of bureaucratic management systems (or should be; auditor-general reports regularly suggest the public service rarely pursues its benefits), bureaucrats are used to dealing with known, predictable risks that can be prepared for and mitigated. The problem with AI policy is that the risks are unknown, in both scale and nature. And that is on top of a more traditional bureaucratic problem: the public service lacks the specialist expertise to properly address the technical issues involved.


One solution to this risk-management problem would be for the government to establish a relatively informal advisory panel of wise people to maintain a watching brief on AI impacts across the economy and society, report regularly on what it sees, and flag potentially significant issues for government. The panel would act as the vanguard of the bureaucratic process: once it identified what it believed to be an important issue requiring the government’s attention and perhaps action, bureaucrats could be assigned to investigate it.

The wise people would need to be experts from a variety of fields: economists, scientists and engineers who understand AI and its resource requirements, investors with a deep understanding of AI’s financing and infrastructure needs, and lawyers who understand the regulatory issues. Such people may be hard to find in the relatively shallow gene pool of Australian public life, but the search need not be limited to locals. The goal is for smart people to flag issues for the government more quickly than the bureaucracy can, and without the influence of vested interests. And it wouldn’t cost much.

It is a low-risk solution to a risk-management problem. Buying smart people’s time might cost a few million a year, but it could keep the government informed of emerging issues it needs to consider. It might even save us from a repeat of the social media experience, in which we are still trying to regulate effectively long after the damage has been done – except that this time the damage could be far greater.

What should Australia do to prepare for the rise of artificial intelligence?

We want to hear from you. To be considered for publication, write to us at letters@crikey.com.au. Please include your full name. We reserve the right to edit for length and clarity.
