In a recent online discussion, Brian Jackson, research director at Info-Tech Research Group, expressed surprise that IT professionals most commonly believe the Chief Information Officer (CIO) should bear sole responsibility for Artificial Intelligence (AI) initiatives. The second most common response to the question of who should be accountable for AI, he noted, was a shrug, suggesting a lack of consensus on the issue. The insights come from an Info-Tech survey of 894 respondents involved in or overseeing IT operations, conducted for the firm's 2024 Tech Trends report.
Jackson pointed to the early stage of AI implementation at many organizations as a likely reason for the uncertainty. Even so, he cautioned against assigning exclusive accountability to the CIO, arguing that business leaders must be involved if AI deployments are to drive the desired outcomes.
The survey also categorized organizations into "AI adopters," those investing or planning to invest in AI, and "AI skeptics," those with no plans to adopt AI until after 2024. Notably, only one in six AI adopters plans to form a committee for accountability, while one in ten will share accountability across multiple executives.
Jackson outlined three fundamental concepts for responsible AI implementation: Trustworthy AI, ensuring people understand how the system works; Explainable AI, being able to clarify model predictions and potential biases; and Transparent AI, effectively communicating the impact of AI-driven decisions. As AI increasingly becomes integral to customer value across industries, establishing regulatory guardrails grows in importance, according to Info-Tech.
Jackson cited businesses like OpenAI and Intuit, where AI is not merely a complementary feature but forms the core value proposition. He stressed that executives must be accountable for AI regulation, and that security by design is equally important. Despite increased investments in cybersecurity, organizations continue to face rising cyber threats, prompting Jackson to question an industry dynamic in which customers bear the costs of vendor-induced risks.
Looking ahead to 2024, Jackson predicted a shift in cybersecurity accountability, with the White House's National Cybersecurity Strategy likely to impose security mandates on technology developers. He urged prioritizing security by design when building AI models to avoid paying for vulnerabilities later.
Another critical aspect Jackson pointed out is digital sovereignty: he suggested organizations update their robots.txt file to control whether their website data is used to train AI models. He acknowledged, however, that further measures are necessary, especially as AI grows more capable of mimicking human work across domains. Artists, in particular, have sought to protect their intellectual property with tools like Glaze and Nightshade, which subtly alter images to thwart AI interpretation.
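As a brief illustration of the robots.txt approach Jackson describes, a site can disallow crawlers that collect data for AI training. The user-agent names below (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl) are ones publishers commonly block; note that compliance with robots.txt is voluntary on the crawler's part, so this is a signal rather than an enforcement mechanism.

```
# robots.txt at the site root
# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training (does not affect Search indexing)
User-agent: Google-Extended
Disallow: /

# Block Common Crawl, a frequent source of AI training data
User-agent: CCBot
Disallow: /

# Allow all other crawlers
User-agent: *
Allow: /
```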
In conclusion, as organizations navigate the complexities of AI implementation, inclusive accountability, security by design, and digital sovereignty emerge as key considerations in the evolving AI landscape.