Safe and responsible Gen AI deployment
Generative Artificial Intelligence could add up to $115 billion annually to Australia's economy by 2030 by improving existing industries and creating new products and services, according to joint research conducted by the Tech Council of Australia (TCA) and Microsoft.
Most gains are expected from increased workforce productivity through automation and task augmentation. Generative AI could automate and augment 44 percent of Australian workers’ task hours, allowing them to focus on higher-value tasks. New jobs and businesses created by Generative AI could also drive growth. This opportunity, however, assumes that Gen AI can clear the trust hurdle.
Public trust in Gen AI is low
While Generative AI is forecast to grow our economy, there is low public trust in the safe and responsible design, development, deployment and use of AI systems. Users report inaccuracies in Large Language Model (LLM) inputs and outputs, poor-quality model training data, biased results, and a lack of transparency about how and when AI systems are being used.
A global survey conducted by the University of Queensland and KPMG found that three in five AI users are wary about trusting AI systems and that two-thirds of Australians agree that Australia has inadequate guardrails to make AI systems safe. Such distrust in Generative AI tools is likely to act as a handbrake on business adoption, public acceptance and economic growth.
Additionally, newer, very powerful “frontier” AI models pose specific risks because they can exceed the capabilities of earlier models and generate new content at speed. The rapid development and deployment of AI services risks outpacing legislative frameworks, raising concerns about transparency and oversight. There is particular public concern that AI systems are not being tested adequately and that little detailed information is available about how they work. This further erodes trust in these systems among both the public and businesses.
AI regulation could temper trust concerns while driving adoption
In response, governments around the world are sharpening their focus on ensuring that sufficient guardrails for AI development and deployment are put in place. This includes introducing obligations to test AI products before and after release; labelling AI systems in use and/or watermarking AI-generated content to drive transparency; and ensuring AI developers are accountable and liable for AI safety risks.
In Australia, the government recognises that many AI tools do not present risks that require a regulatory response. For instance, AI-driven automation and augmentation tools, such as Microsoft Copilot, applied to routine business processes like rostering or editing are unlikely to attract regulatory attention.
The government also recognises that there are gaps where existing laws do not sufficiently prevent harm in legitimate but high-risk areas, as well as in the development, deployment and use of frontier or general-purpose AI models.
In considering the right regulatory approach to implementing safety guardrails, the government’s fundamental aim will be to “help ensure that the development and deployment of AI systems in Australia in legitimate, but high-risk settings, is safe and can be relied upon, while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded”.
Regulation for high-risk AI settings
The government has adopted five principles to guide the creation of a suitable regulatory environment to ensure the safe development and deployment of AI systems in high-risk settings. The Australian Government intends to:
- Adopt a risk-based framework that will assess the level of risk posed by the use, deployment, or development of AI. Based on this assessment, it will impose obligations on developers and deployers of AI to prevent harm.
- Strive to strike a balanced and proportionate response to promoting innovation and competition while protecting community interests such as privacy, security, and public safety. It will avoid imposing unnecessary or disproportionate burdens on businesses, the community, and regulators.
- Engage collaboratively and transparently with experts from across the country and provide opportunities for public involvement in developing its approach to the safe and responsible use of AI. It will draw on technical expertise and ensure that its actions are clear and easy to understand for those developing, implementing, or using AI.
- Align its approach, as a trusted international partner, with the Bletchley Declaration and leverage its strong foundations and domestic capabilities to support global action to address AI risks. This includes addressing substantial risks to humanity from frontier AI, high-risk applications of AI, and near-term risks to individuals, institutions, and vulnerable populations.
- Adopt a community-first approach by prioritising the needs, abilities, and social context of people and communities when developing and implementing its regulatory approaches. This means ensuring that AI is designed, developed, and deployed with the well-being of all people in mind.
Apply strong AI governance in ‘low-risk’ environments
Since the Australian regulatory guardrails for AI will concentrate on high-risk settings such as autonomous cars and robotic surgery, boards and management teams overseeing less risky enterprises will be left to regulate the use of AI in their businesses. What should business leaders of low-risk environments do next?
In my view, there are five strategic imperatives that all business leaders must consider.
1. Understand Generative AI technologies
Board directors and executives must have a broad understanding of what Generative AI technologies are designed to do and their corresponding ‘use cases’. The two software applications that are likely to dominate the business landscape over the next 12 to 24 months are ChatGPT and Microsoft Copilot.
- ChatGPT: ChatGPT is a large language model (LLM) developed by OpenAI based on the Generative Pre-trained Transformer (GPT) architecture. It is designed to generate human-like text by predicting the next word in a sentence, given all the previous words (a minimal sketch of this prediction loop follows this list). ChatGPT is trained on a diverse range of internet text and can perform a variety of tasks, such as answering questions, writing essays, generating creative content and engaging in conversations. ChatGPT also poses several risks to users and businesses. For instance, it can inherit and amplify biases from its training data, leading to unfair or discriminatory outcomes. It can generate plausible but incorrect or misleading information, contributing to the spread of misinformation. It can process user inputs and potentially misuse sensitive information, raising data privacy concerns. It may also expose security vulnerabilities that could be exploited by malicious actors, resulting in unauthorised access or misuse.
- Microsoft Copilot: Microsoft Copilot is an AI-powered productivity tool designed to enhance the capabilities of Microsoft 365 applications, such as Word, Excel, PowerPoint, Outlook, and Teams. It leverages large language models, similar to ChatGPT, to assist users in creating documents, analysing data, generating presentations, drafting emails, and facilitating collaboration. Copilot aims to streamline workflows, boost efficiency, and enable more intuitive interactions with digital content. By understanding natural language queries and context, it provides relevant suggestions, automates repetitive tasks, and offers insights, allowing users to focus on higher-level creative and strategic activities. However, Copilot carries similar risks to ChatGPT, such as biased outputs, misinformation, data privacy breaches, and security threats that can harm its users and businesses. Additionally, Copilot has access to your organisation’s sensitive data within Microsoft 365. It can retrieve and create data, potentially including confidential information, and may access more data than it should, which can lead to data leaks or unauthorised exposure. Copilot’s actions may inadvertently violate data protection regulations, which could result in legal penalties and reputational damage. Finally, deploying Copilot requires integration with existing workflows and processes; poor integration can disrupt operations and cause inefficiencies.
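To make the “next word prediction” mechanism concrete, here is a minimal sketch of the autoregressive loop that underpins ChatGPT-style models. It assumes the open-source Hugging Face transformers library and the small GPT-2 model as a stand-in; ChatGPT itself runs on far larger proprietary models, but the loop is the same idea.

```python
# Minimal sketch of autoregressive "next word" prediction using the small,
# open GPT-2 model as a stand-in for larger proprietary models like ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Generative AI could add billions to the Australian"
for _ in range(10):  # extend the text by ten tokens
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # The model scores every possible next token given ALL the previous
    # words; greedy decoding simply picks the single most likely one.
    next_id = int(logits[0, -1].argmax())
    text += tokenizer.decode(next_id)

print(text)
```

Note that each pass feeds the entire text so far back into the model, which is one reason these systems are sensitive to everything a user types, including confidential or personal information.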
2. Establish strong AI governance principles
- Confirm risk appetite for Gen AI: Boards are becoming ‘stewards of data’ for their organisations, setting the tone for how data is treated throughout the business. This responsibility includes determining the company’s risk appetite for AI innovation. In high-appetite settings, a range of AI tools would be present across the organisation, with little or no senior management or board oversight. In low-appetite environments, stringent controls would be in place to measure the impact and success of each deployment.
- Develop a Gen AI policy: An AI policy defines the scope of Artificial Intelligence (AI) for the business and sets out the principles and rules that will govern the development, deployment, and use of AI systems. It aims to ensure that AI is safe, trustworthy, ethical, and operates to the benefit of the business and its stakeholders.
- Deploy Gen AI guidelines: While an AI policy reflects a business’s risk appetite towards AI development, AI guidelines provide practical direction on using AI in the business. These guidelines would confirm which AI tools the business approves and prohibits, as well as usage advice. They would also address whether the business permits the ingestion of Personally Identifiable Information (PII) into AI applications, set out training requirements, and provide a ‘how to’ guide on using AI applications.
- Consider a risk-based framework to assess innovative technologies: Governments around the world are mandating the adoption of risk-based frameworks to assess the safety of high-risk AI systems. In the US, the National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework to help organisations assess AI systems before and after they reach the market. In Europe and Australia, the ISO27001 Information Security Management System (ISMS) standard is being complemented by ISO/IEC 42001, a management system standard for AI. Given the influx of AI-driven applications likely to hit the market over the next decade, business leaders would do well to implement a risk-based framework such as ISO27001 in their organisations to consistently assess the opportunities and risks of AI systems (a simple illustrative risk register follows this list).
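To illustrate what a lightweight risk-based assessment might look like in practice, here is a minimal sketch of a risk register for AI tools. The categories, weights and thresholds are assumptions for demonstration only; they are not drawn from NIST or ISO guidance and would need tailoring to your business.

```python
# Illustrative risk register for AI tools. Weights and thresholds below are
# hypothetical assumptions, not NIST or ISO requirements.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    handles_pii: bool        # does the tool ingest personal information?
    customer_facing: bool    # do its outputs reach customers directly?
    human_in_the_loop: bool  # is every output reviewed before use?

    def risk_score(self) -> int:
        """Score 0-5; higher means more oversight is required."""
        score = 2 if self.handles_pii else 0
        score += 2 if self.customer_facing else 0
        score += 0 if self.human_in_the_loop else 1
        return score

    def risk_tier(self) -> str:
        s = self.risk_score()
        return "high" if s >= 4 else "medium" if s >= 2 else "low"

# Example: an internal drafting assistant versus a customer-facing chatbot.
for tool in (
    AIToolAssessment("Copilot for internal drafting", False, False, True),
    AIToolAssessment("Customer support chatbot", True, True, False),
):
    print(f"{tool.name}: {tool.risk_tier()} risk")
```

The value of even a simple register like this is consistency: every proposed AI deployment is scored against the same questions before approval.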
3. Quantitatively assess strategic Gen AI opportunities
While trust in AI remains low and adoption is still patchy, business leaders have time to evaluate the specific benefits and costs of an AI deployment. They should consider the specific problem the AI deployment will solve; its impact on productivity and profit; the potential competitive edge; associated incremental costs; as well as implications for staff training and redeployment. The simple model below illustrates the arithmetic.
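Here is a back-of-the-envelope sketch of that benefit-versus-cost calculation. Every figure is a hypothetical assumption for illustration; substitute your own staff numbers, wages and licence costs.

```python
# Back-of-the-envelope cost-benefit model for a Gen AI deployment.
# All figures are hypothetical assumptions for illustration only.
staff = 50                   # employees using the tool
hours_saved_per_week = 2.0   # assumed productivity gain per employee
hourly_cost = 60.0           # assumed fully loaded hourly cost (AUD)
weeks_per_year = 46          # working weeks after leave and holidays

annual_benefit = staff * hours_saved_per_week * hourly_cost * weeks_per_year

licence_per_user_per_month = 45.0  # assumed per-seat licence fee (AUD)
training_one_off = 20_000.0        # assumed year-one upskilling cost (AUD)

year_one_cost = staff * licence_per_user_per_month * 12 + training_one_off

print(f"Annual benefit: ${annual_benefit:,.0f}")   # $276,000
print(f"Year-one cost:  ${year_one_cost:,.0f}")    # $47,000
print(f"Year-one net:   ${annual_benefit - year_one_cost:,.0f}")
```

Even a rough model like this forces the key questions into the open: how many hours are really saved, by whom, and at what incremental cost.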
4. Invest in upskilling the workforce in Gen AI
According to the Tech Council of Australia, a failure to upskill the workforce in Gen AI would be a major handbrake on achieving Gen AI’s potential economic impact.
- Develop core C-suite skills in Gen AI: Because Gen AI is advancing so fast, it is hard for executives to know which ‘use cases’ represent current opportunities and which are no longer relevant. This knowledge gap not only hinders adoption but also creates risk if leaders invest without knowing how to use AI responsibly.
- Enhance digital literacy in the workforce: Australia faces a significant digital skills challenge, with three in five businesses surveyed saying their workers had insufficient or obsolete digital skills. This hinders both the use and creation of Gen AI tools. Workers need to learn more about the advantages and drawbacks of different Gen AI models to avoid misusing them.
- Support career development: When automation takes over simple tasks, junior employees have fewer opportunities to acquire the ‘tools of the trade’. Employers need to reconsider how they will help junior staff advance in their careers.
5. Ensure there are ‘humans in the loop’ overseeing all AI activity
- Reduce the room for Gen AI error: Although Gen AI models are evolving and incrementally reducing the room for error, human checking of staged outputs will still be required, especially in critical customer-facing business processes. This margin for error makes businesses more cautious about adoption where inaccurate or deceptive outputs could have serious productivity and reputational consequences.
- Scale of investment required to build industry-specific Gen AI solutions: Businesses often need tailored solutions. To use Gen AI models effectively, businesses will need to invest more resources in developing AI solutions that match their industry-specific requirements. With the explosive growth of ChatGPT and the hype surrounding Gen AI tapering off, and with adoption of corporate Gen AI tools such as Microsoft Copilot slower than expected, organisations have time to consider where best to deploy Gen AI solutions for competitive advantage. If ever there was an illustration of ‘hasten with caution’, the adoption of Gen AI is it.
AI Readiness Review
Given the complexity of AI technologies, it’s essential to assess your current readiness before implementation.
For example, Microsoft 365 Copilot can streamline workflows, such as drafting documents or generating meeting summaries. But before deploying this AI-powered tool, organisations must ensure their data is secure, prevent unauthorised data sharing, and implement proper governance frameworks.
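As one concrete example of what “preventing unauthorised data sharing” can involve, the sketch below flags files shared through anonymous links, which Copilot could otherwise surface to unintended readers. It is a minimal illustration, assuming a Microsoft Graph access token with the Files.Read.All permission; the token and drive ID are placeholders, and a real readiness review covers far more than this single check.

```python
# Minimal sketch: flag OneDrive/SharePoint files shared via anonymous
# ("anyone") links before enabling Copilot. TOKEN and DRIVE_ID are
# placeholders; acquire a real token via MSAL in practice.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"   # placeholder
DRIVE_ID = "<drive-id>"    # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

# List items at the root of the drive (a full audit would walk all folders).
items = requests.get(
    f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=headers
).json().get("value", [])

for item in items:
    # Inspect each item's sharing permissions for anonymous link scope.
    perms = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions",
        headers=headers,
    ).json().get("value", [])
    for perm in perms:
        if perm.get("link", {}).get("scope") == "anonymous":
            print(f"Anonymous sharing link found on: {item['name']}")
```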
This is where Veracity’s AI Readiness Review comes in. Our review evaluates your organisation’s ability to adopt AI solutions like Microsoft 365 Copilot, focusing on security, governance, and data privacy. Recommendations might include tightening access permissions across collaboration platforms, deploying data protection policies, or enhancing security features.
Get in touch
Whether it’s automating routine tasks or improving customer experiences, AI offers enormous potential. However, to unlock its full value, senior business leaders must develop a clear AI strategy and ensure their IT environment is prepared for AI technologies.
An effective AI strategy is more than just selecting the right tools; it is about preparing your business for seamless integration while maintaining security and governance.
If you need support assessing your readiness for AI adoption, please get in touch.