In the bustling tech hub of Toronto, the talent search platform Plum noticed the buzz surrounding ChatGPT, an advanced AI chatbot making waves across various industries. Recognizing the need for clear staff guidelines on using this generative AI technology, Plum sought direct input from the source itself. They collaborated with ChatGPT to draft a preliminary policy document, which laid the groundwork for their final policy.
Plum's CEO, Caitlin MacGregor, recalled the process, noting that while the initial draft was sound, there was room for customization to better suit their business needs. Drawing on insights from other startups and their own experiences, Plum crafted a comprehensive four-page policy that emphasized responsible AI usage among its employees.
Plum is just one of many Canadian companies taking proactive steps to formalize their approach to AI, spurred in part by government initiatives and the increasing reliance on AI technologies in the workplace. With the federal government releasing AI guidelines for the public sector, organizations, both large and small, have been prompted to adapt these guidelines to their specific contexts or develop their own policies from scratch.
The overarching goal of these policies is not to stifle innovation or limit AI usage but rather to empower employees to use AI tools responsibly. Niraj Bhargava, founder of Nuenergy.ai, emphasized the vast potential of AI for enhancing productivity but also highlighted the importance of establishing safeguards against potential risks. Finding the right balance between leveraging AI's capabilities and mitigating its inherent risks is paramount.
However, crafting AI policies is not a one-size-fits-all endeavor. Different organizations have unique needs and considerations based on their industry, clientele, and operational requirements. Bhargava stressed that AI policies must be tailored to each organization's specific circumstances. For instance, what may be acceptable AI usage for a tech company could differ significantly from what's appropriate for a hospital.
While specific policy details may vary, several common principles emerge across these guidelines. Firstly, there's a strong emphasis on protecting sensitive data. Companies caution against plugging client or proprietary information into AI systems, as the privacy and security of such data cannot always be guaranteed. Secondly, there's a recognition of the limitations of AI technology. Employees are advised to approach AI-generated content with a critical eye, as inaccuracies and errors can still occur.
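The data-protection principle above is sometimes operationalized as a pre-submission screening step: before a prompt leaves the organization, obviously sensitive strings are stripped out. The sketch below is purely illustrative and not drawn from any company's actual policy tooling; the patterns, labels, and `redact` helper are assumptions, and a real screening tool would need far broader coverage than a few regular expressions.

```python
import re

# Illustrative patterns only; real policy tooling would need much broader
# coverage (names, account numbers, addresses, proprietary terms, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    # Canadian Social Insurance Number format (three groups of three digits).
    "SIN": re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive-looking substrings with placeholder tags
    before the text is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, phone 416-555-0199."
print(redact(prompt))
# → Summarize the complaint from [EMAIL], phone [PHONE].
```

Note that pattern order matters here: the ten-digit PHONE pattern runs before the nine-digit SIN pattern, so a phone number is not partially consumed by the shorter match. Screening like this reduces, but does not eliminate, the risk the policies warn about, which is why the guidelines still tell employees not to submit client or proprietary information in the first place.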
Transparency is another key aspect addressed in many AI policies. Elissa Strome, executive director of the Pan-Canadian Artificial Intelligence Strategy at CIFAR, highlighted the importance of attributing AI-generated content appropriately. Just as one wouldn't claim someone else's work as their own, the same principle applies to content generated by AI. While transparency requirements may vary depending on the context, organizations generally advocate for disclosing the use of AI in tasks that involve data analysis or content creation.
Despite the growing awareness and adoption of AI policies, there's still room for improvement. Surveys reveal disparities in the implementation of AI guidelines across organizations. While some companies have well-defined policies in place, others lag behind, with either loosely defined guidelines or no policies at all. This underscores the need for greater consistency and adherence to best practices in AI governance.
At companies like Sun Life Financial Inc., where data privacy is paramount, employees are restricted from using external AI tools for work. However, internal AI tools that adhere to stringent data privacy policies are permitted. Laura Money, the company's Chief Information Officer, noted the importance of balancing innovation with data protection. To encourage employee engagement with AI technologies, Sun Life offers training programs to familiarize staff with AI principles and applications.
As AI technology continues to evolve rapidly, organizations must adapt their policies accordingly. Plum and CIFAR, for instance, have already recognized the need for regular policy reviews to keep pace with technological advancements. Looking ahead, Bhargava emphasizes the urgency for organizations to establish robust AI policies sooner rather than later, as the widespread adoption of AI shows no signs of slowing down.