AI Vulnerabilities Affect Nigeria's Growing Adoption
NITDA warns of security flaws in AI models like GPT-4/5, revealing vulnerabilities common across major systems including Google's Gemini and Meta's LLaMA. These include hidden malicious instructions, prompt injection attacks, and memory poisoning that can alter model behavior. While patches exist, the core issue persists: AI systems are inherently difficult to fully secure due to their design as probabilistic pattern recognisers. Nigeria's rapid AI adoption in journalism, law, and business amplifies risks, as users often treat outputs as authoritative without verification. NITDA advises caution, emphasizing AI requires ongoing risk management, verification of outputs, and institutional safeguards rather than blind trust.