The increased adoption of Artificial Intelligence (AI) technology across industries such as content creation, education, healthcare, agriculture, and digital inclusion has had a revolutionary impact on Kenyan society, sparking a wave of new ideas and innovations.
One such innovation is Mzee.ai, a platform designed to preserve the wisdom of African elders by using AI to capture and share their expertise in fields like conservation agriculture, beekeeping, and sustainable fishing. AI holds enormous potential to advance the personal and professional development of Kenya's youth, and Mzee.ai embodies that potential.
However, it is also increasingly important to expose the technology's soft underbelly: the human rights and data privacy concerns surrounding the use and proliferation of AI across social, economic, and governance sectors.
Generative AI, a subset of AI, enables the automated creation of multimedia content such as text, images, video, and audio through tools such as OpenAI's DALL-E, ChatGPT, MidJourney, and Runway's Gen-2. While the ability to generate content quickly and effectively can have a lasting and profound impact in marketing, entertainment, and education, it also carries significant risks. These include the creation of fake or harmful content, deepfakes, and unauthorized data exploitation, all of which complicate efforts to implement strong governance measures that ensure AI benefits society without compromising safety or privacy.
Generative AI: Opportunities and Challenges
AI is a driver of economic growth, with applications in digital learning and mobile innovations like M-Pesa offering new avenues for development in Kenya. However, as highlighted in the KICTANet AI policy brief, it also introduces challenges such as algorithmic bias, privacy risks, and the generation of inappropriate or harmful content.
A growing concern is the misuse of generative AI to produce manipulated imagery, including deepfakes and explicit content involving children. Popular tools like OpenAI's DALL-E, ChatGPT, and MidJourney enable users to create highly detailed images and text from simple prompts, while platforms like Runway's Gen-2 facilitate AI-driven video creation. Although these tools include controls and rules designed to curb malicious use, such safeguards do not stop tech-savvy users from building unrestricted clones or circumventing the restrictions, exploiting AI platforms to generate harmful content and to produce material for phishing, vishing, and simple malware distributed across the internet. These concerns align with findings from a report by the Computer Society of Kenya.
Data Protection and Legal Frameworks
One key source that provides direction on how to regulate data privacy and the processing of personal information in the context of AI is Kenya's Data Protection Act (DPA) of 2019. It emphasizes the rights of data subjects, consent-based processing, and the accountability of data handlers. While the government has taken a positive direction in addressing data protection, it is important to note that the DPA does not fully account for the unique use cases and challenges posed by generative AI, particularly in how generative AI tools are developed and deployed in the public space.
This exposes a limitation in the regulatory landscape specific to the intersection between data protection and AI development. Current laws do not explicitly address the transparency and accountability required in AI systems or the responsibilities of AI developers when harmful content is produced or shared. According to KICTANet AI Policy Brief 2024, Kenya needs adaptive regulatory frameworks to keep pace with technological advancements and tackle emerging risks effectively.
Given the challenges of regulating generative AI, collaborative efforts between governments, the private sector, and civil society are critical. Kenya’s Youth IGF and KICTANet are working toward more inclusive policies, addressing ethical considerations like fairness, transparency, and privacy. The Kenya Youth IGF 2023 report highlights how the youth are key stakeholders in discussions on internet governance, advocating for policies that promote safe, inclusive digital spaces.
Moreover, integrating AI safety features into social media platforms is essential. As KICTANet recommends, partnerships between the public and private sectors can unlock critical datasets for AI safety applications. For example, telecom operators could collaborate with regulatory bodies to identify and block harmful content without compromising users.
AI as a Tool for Safety: Detection of Harmful Imagery
AI technology clearly carries inherent risks and pitfalls, but this does not preclude using it to build solutions that improve online safety. AI-based detection systems that identify and block harmful imagery can prevent its spread across platforms and provide key insights to drive policy discussions and preventive measures that help safeguard the internet. As highlighted in The Computer Society of Kenya's article "Generative AI: How to protect firms in evolving landscape", Gen-AI tools are crucial for cybersecurity, helping detect sophisticated threats and anomalies.
To enhance the effectiveness of these solutions, comprehensive databases of harmful imagery must be developed. Such databases, protected by strong encryption and privacy policies, could serve as a reference for AI tools to detect and respond to harmful content in real-time. AI algorithms should be trained using diverse datasets to avoid biases that could limit their ability to identify harmful content accurately.
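To illustrate how a reference database of this kind might be consulted, here is a minimal sketch of hash-based matching against known harmful imagery. It uses a toy difference hash (dHash) over a plain grayscale matrix to stay dependency-free; production systems rely on far more robust perceptual hashing (PhotoDNA-style fingerprints) and encrypted, access-controlled databases. All function names and the threshold value here are illustrative assumptions, not any platform's actual API.

```python
def dhash(pixels: list[list[int]]) -> int:
    """Toy difference hash: one bit per horizontal neighbour pair.

    A pixel brighter than its right-hand neighbour yields a 1 bit,
    so the hash captures the image's gradient structure rather than
    exact pixel values, making it robust to small edits.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_flagged(image_hash: int, known_hashes: set[int], threshold: int = 4) -> bool:
    """Flag an image whose hash is near any entry in the reference database."""
    return any(hamming(image_hash, h) <= threshold for h in known_hashes)

# Toy 4x5 grayscale "images": the second is a uniformly brightened
# copy of the first, so its gradient structure (and hash) is unchanged.
original = [[10, 60, 20, 80, 30],
            [90, 15, 70, 25, 85],
            [30, 75, 35, 95, 40],
            [50, 10, 55, 15, 60]]
altered = [[value + 5 for value in row] for row in original]

database = {dhash(original)}
print(is_flagged(dhash(altered), database))  # True: near-duplicate detected
```

The near-match threshold is what distinguishes this from exact-hash lookup: it lets the system catch lightly edited copies of known material, which is exactly why such databases must be trained and curated with diverse data to avoid the biases noted above.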
Conclusion
AI presents a unique and exciting opportunity for innovative solutions across various sectors, including healthcare, agriculture, and education. However, while technological advancements have accelerated recently, the regulatory landscape surrounding online safety, data protection, and human rights has yet to keep pace.
A multi-stakeholder approach is essential to address this gap, bringing together internet users, human rights organizations, the private sector, and government entities. This collaborative effort can help shape policies that benefit society while fostering economic growth in Kenya. Additionally, the development of AI tools capable of generating harmful content, such as AI-generated images, underscores the importance of enhancing online safety and understanding the challenges internet users face, particularly vulnerable populations. By prioritizing protections against harmful content, hackers, and scammers, we can create a safer online environment for everyone.
By Gerald Suleh, CyberSecurity Analyst, iLab Africa – IT Security