A recent report has drawn attention to a significant surge in phishing incidents following the introduction of ChatGPT, sparking concerns about potential cybersecurity threats and ethical implications. The study, conducted by cybersecurity provider SlashNext, identified a staggering 1,265 percent increase in malicious phishing emails since the launch of ChatGPT, OpenAI's generative AI chatbot, a year ago.
The report emphasizes the uncertain trajectory of generative AI and highlights its rapid adoption among cybercriminals, which poses a tangible threat in the digital landscape. It acknowledges that cybersecurity vendors have introduced generative AI technologies of their own to counter these attacks, but underscores how much the threat environment has evolved since 2022.
Key findings from the study revealed that 68 percent of phishing emails were text-based Business Email Compromise (BEC) attacks. Mobile phishing also surged, with smishing (SMS phishing) accounting for 39 percent of mobile threats, while credential phishing rose by a substantial 967 percent.
For Don Delvy, CEO of D1OL: The Digital Athlete Engine, the report serves as a stark reminder of the potential pitfalls of a technology that he believes should have been handled more cautiously. Delvy emphasizes the need for a global discussion on the ethical implications and responsible use of AI, expressing concern that the unchecked advancement of the technology could lead to a scenario he terms "Terminal AI."
According to Delvy, Terminal AI represents the catastrophic potential of AI to trigger threats such as a nuclear crisis, governmental destabilization, and societal upheaval. He stresses the urgency of strategies to ensure the safe and ethical evolution of AI and avert such dire consequences.
To address these concerns and facilitate meaningful discourse, Delvy proposes five key steps:
1. Promoting transparency and open communication among AI developers, researchers, and the public.
2. Establishing clear ethical guidelines through collaboration between industry bodies and government agencies.
3. Integrating AI literacy into educational curricula to equip individuals with necessary knowledge and critical thinking skills.
4. Ensuring diversity within AI development teams so that the perspectives of a broad range of stakeholders are represented.
5. Prioritizing human-centered AI that enhances human capabilities without overshadowing them.
In an interview, Delvy expressed his reservations about deploying large language models (LLMs) on public clouds and voiced worries about the potential ramifications of the escalating phishing incidents linked to ChatGPT.
Referring to the recent employment dynamics at OpenAI, Delvy highlighted the importance of ethical leadership in the AI sector. He emphasized the need for industry leaders to champion transparency, accountability, and responsible innovation as AI continues its rapid evolution.