Found a new AI tool to use at work? Seek authorisation first.
You've found a new AI tool that will shave a couple of hours off a task you need to do. That sounds good, but before you rush ahead, have you checked that you have approval to use it?
Gen-AI tools like ChatGPT are commonly used for drafting emails, reports, and marketing content, while AI-powered analytics platforms help businesses process data with minimal effort. Other applications, such as the transcription tools Read.ai, Fireflies and Otter.ai, can take the hassle out of notetaking for meetings.
These innovations can make a huge difference to workplace efficiency and automate tedious tasks, but when employees introduce AI tools into a business environment without proper authorisation, they can create unintended risks.
Case Study
Sarah installs a notetaking Gen-AI tool
Consider the case of a well-intentioned employee, Sarah, who works in member services for a growing community organisation. Sarah often struggles to take comprehensive notes during meetings, so she decides to use an AI-powered transcription tool to automate the task. Without consulting IT or management, she installs the tool and integrates it with the company’s video conferencing platform, Zoom, allowing it to record and transcribe conversations for future reference.
While Sarah’s primary goal is efficiency, she unknowingly exposes sensitive member information to an unvetted third-party service. This data is stored on external servers, possibly violating data protection regulations and increasing the organisation’s liability should the information be misused or leaked.
Understanding the risks
Sarah’s case highlights the challenges that come with unauthorised AI adoption in the workplace. One of the primary concerns is data security. Many AI applications operate on cloud-based platforms, which means company, client and customer information may be transmitted outside of internal systems without proper encryption or compliance measures. Even when AI providers claim to follow security protocols, businesses must vet them carefully to ensure they align with legal and ethical standards.
Beyond data security, compliance and regulatory risks pose another challenge. Businesses operating in industries with strict privacy regulations, such as finance or healthcare, must ensure that AI tools meet compliance requirements. Unauthorised AI tools may inadvertently collect and store data in jurisdictions with weaker privacy laws, leading to potential breaches of contract and regulatory penalties. In Sarah’s case, her company could face serious consequences if a member discovers that their confidential discussions were recorded without explicit consent.
Operational disruptions are another risk. When employees install unapproved AI tools, they may unknowingly introduce compatibility issues or software conflicts. AI applications that interact with internal systems can lead to data inconsistencies, workflow inefficiencies, or even system vulnerabilities. IT teams are then left scrambling to identify and resolve issues that could have been avoided through proper oversight.
Authorising AI tools in the workplace
To prevent these risks, business leaders must implement clear policies governing AI adoption. Establishing a framework for AI approval ensures that new technologies align with company security, compliance, and operational requirements. Employees should be encouraged to propose AI solutions through a formal review process, allowing IT and management teams to assess security implications before implementation.
Education and awareness are also key. Employees like Sarah often have the best intentions, but they may not fully understand the risks. Training programs that focus on responsible AI use can help employees make more informed decisions.
Consider restricting unapproved third-party applications
With the assistance of their IT team, organisations can configure their IT platform to restrict unapproved third-party applications, ensuring that only vetted and approved AI tools can be installed and integrated. Configuring security settings within enterprise platforms, such as Microsoft 365, can help prevent unauthorised installations while still allowing flexibility for approved AI applications that support business objectives.
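As one illustrative sketch (not a complete rollout procedure), the Python snippet below shows how an administrator might use the Microsoft Graph API to turn off end-user consent to third-party applications in a Microsoft 365 tenant, so that new integrations, such as an unvetted AI notetaker, require admin review first. It assumes you have already obtained an admin access token with the Policy.ReadWrite.Authorization permission; the token placeholder and error handling are simplified for clarity.

```python
# Minimal sketch: disable end-user consent to third-party apps in a
# Microsoft 365 tenant via the Microsoft Graph authorizationPolicy resource.
# Assumes an admin access token with Policy.ReadWrite.Authorization.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"
ACCESS_TOKEN = "<admin-access-token>"  # placeholder: obtain via your own auth flow


def disable_user_app_consent(token: str) -> None:
    """Set the tenant's default user permission-grant policies to an empty
    list, which blocks users from consenting to new third-party apps."""
    response = requests.patch(
        GRAPH_URL,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        json={
            "defaultUserRolePermissions": {
                # An empty list means users cannot grant consent themselves.
                "permissionGrantPoliciesAssigned": []
            }
        },
        timeout=30,
    )
    response.raise_for_status()
    print("User consent to third-party apps is now disabled.")


if __name__ == "__main__":
    disable_user_app_consent(ACCESS_TOKEN)
```

Pairing a restriction like this with Microsoft's admin consent workflow gives employees a route to request approval for a tool they want, rather than being silently blocked, which keeps the review process visible and constructive.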
Conduct an AI Readiness Review
To ensure that AI tools align with business goals while mitigating risks, consider conducting an AI Readiness Review. This review helps businesses assess their existing AI capabilities, security infrastructure, and compliance posture. By taking a structured approach to AI integration, organisations can proactively identify vulnerabilities and establish the necessary safeguards before implementation.
Developing an AI strategy
Beyond policy enforcement, organisations should take a strategic approach to AI adoption. Developing a well-defined AI strategy ensures that AI tools are leveraged in ways that drive business value while maintaining security and compliance. Business leaders can read this guide on developing a generative AI strategy to learn how to balance innovation with responsible AI governance.
Get in touch
If you need assistance with AI policies and oversight, or the safe implementation of AI tools into your business, please get in touch. We’d love to help.