Director’s Guide to AI Regulation in Australia
Does your organisation have frameworks in place to govern the use of AI?
The Australian Government has published its interim response to the 2023 consultation on safe and responsible AI. The response draws on extensive feedback from the public, academia, and business, highlighting both the benefits of AI technologies and the concerns they raise. The feedback underscores that AI systems and applications can significantly enhance wellbeing, quality of life, and economic growth. At the same time, there is concern that existing regulatory frameworks may not adequately address the risks posed by AI, particularly given the rapid advance of powerful new models.
The government is keen to ensure that AI used in high-risk settings, such as robot-assisted surgery or self-driving cars, is both safe and reliable, while allowing AI in low-risk settings to be used with minimal restrictions. To achieve this, it plans to implement measures focused on testing, transparency, and accountability to prevent harm in high-risk scenarios. It also intends to clarify and strengthen laws to protect citizens, work with international partners to support the safe development of AI, and maximise the benefits of the technology.
One significant implication for the many businesses operating in low-risk environments is that responsibility for governing the safe use of generative AI will largely rest with the businesses themselves. This shift in responsibility means boards must ensure their risk management frameworks extend to the assessment of new technologies.
AI leadership for boards and directors
For boards and directors to adequately assess and manage generative AI technologies, there are several steps they can take:
- Familiarising themselves with popular generative AI applications. Tools such as ChatGPT, Microsoft Copilot, GitHub Copilot, Gemini, Midjourney, and Mistral AI, as well as the large language models (LLMs) behind them, such as OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini, are leading the way in AI innovation. Understanding these tools and their capabilities is the first step toward effective governance and utilisation.
- Establishing strong AI governance principles. This includes confirming the company’s risk appetite for generative AI, developing a generative AI policy, and deploying AI guidelines across the business. These principles will provide a solid foundation for integrating AI technologies in a controlled and responsible manner.
- Assessing strategic generative AI opportunities. This involves defining the problems a generative AI tool will solve, understanding its impact on productivity and profit, and considering the implications for staff training and redeployment. By carefully evaluating these factors, businesses can make informed decisions about how to incorporate AI into their operations effectively.
- Upskilling the workforce in generative AI. This includes developing core skills among C-suite executives, enhancing digital literacy across the workforce, and supporting career development in an AI-driven world. As AI becomes more integrated into business processes, a knowledgeable and skilled workforce will be essential for maximising its potential.
Ensuring that humans remain ‘in the loop’ to oversee all generative AI activity is imperative. While AI can perform many tasks autonomously, human oversight is necessary to maintain ethical standards, safety, and reliability. This approach helps mitigate risk and ensures that AI technologies are used responsibly.
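As a concrete illustration of the pattern, here is a minimal sketch in Python of a human-in-the-loop review gate. The `generate_draft` function is a hypothetical stand-in for whichever generative AI tool a business has adopted; the point is that no AI output is released without an explicit, recorded human decision.

```python
# Minimal human-in-the-loop sketch: AI proposes, a named person decides,
# and every decision is recorded for the audit trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReviewRecord:
    """Audit-trail entry recording who reviewed which output, and when."""
    content: str
    reviewer: str
    approved: bool
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def generate_draft(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI tool or LLM API."""
    return f"AI-generated draft responding to: {prompt!r}"


def release_with_oversight(prompt: str, reviewer: str) -> ReviewRecord:
    """Hold AI output until a named human explicitly approves or rejects it."""
    draft = generate_draft(prompt)
    print(f"--- Draft for review ---\n{draft}\n------------------------")
    decision = input(f"{reviewer}, approve this output for release? [y/N] ")
    record = ReviewRecord(
        content=draft,
        reviewer=reviewer,
        approved=decision.strip().lower() == "y",
    )
    if record.approved:
        print("Approved: output released.")
    else:
        print("Rejected: output held back for human revision.")
    return record  # retained for the organisation's audit log


if __name__ == "__main__":
    release_with_oversight("Summarise our Q3 customer feedback",
                           reviewer="A. Director")
```

The design choice worth noting is the audit record: human oversight is only meaningful if decisions are attributable and reviewable after the fact, which is what a board’s risk framework will ultimately ask for.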
The government’s interim response highlights the importance of a balanced approach to AI regulation: one that maximises the benefits of AI while safeguarding against its risks. By implementing measures such as testing, transparency, and accountability, and by working internationally to support safe AI development, the government aims to create an environment where AI can thrive responsibly.
For businesses, this means taking proactive steps to manage AI technologies effectively. Boards and directors must familiarise themselves with the latest AI tools, establish strong governance principles, assess strategic opportunities, upskill their workforce, and ensure human oversight. By doing so, they will harness the power of AI to drive growth and innovation while maintaining the safety and reliability that stakeholders expect.
Get in touch
If you need assistance navigating and assessing new technologies or AI governance frameworks for your organisation, please get in touch. We’d love to help.