The Artificial Intelligence Bill, 2026
Proposed
Artificial Intelligence Commissioner Established
The Bill establishes the Office of the Artificial Intelligence Commissioner (Clause 4), an independent State Office responsible for overseeing the implementation and enforcement of the Act. This includes conducting risk assessments, performing conformity audits, and investigating complaints related to AI systems (Clause 10).
AI Systems Classified by Risk Level
Artificial intelligence systems will be classified into four risk categories: unacceptable, high, limited, and minimal risk (Clause 25). Systems deemed "unacceptable risk" are prohibited. The classification considers potential threats to health, safety, fundamental rights, the environment, or societal welfare.
Mandatory Safeguards for High-Risk AI
Providers and deployers of high-risk AI systems must conduct pre-deployment risk and human rights impact assessments, ensure transparency and explainability, maintain data records for five years, comply with data protection laws (including impact assessments), and incorporate measures for robustness, accuracy, and cybersecurity (Clause 26).
Significant Fines for AI Violations
Individuals or entities deploying prohibited AI systems, failing to conduct required assessments for high-risk systems, or violating transparency obligations face fines of up to KSh 5 million, imprisonment for up to two years, or both. Lesser offences carry fines of up to KSh 1 million, imprisonment for up to six months, or both (Clause 35).
Human-Centric AI and Job Impact Assessments
The Bill mandates that AI systems be designed to enhance human capabilities, support human involvement, and include human oversight in critical decisions (Clause 32). Additionally, providers and deployers of AI likely to impact employment must conduct workforce impact assessments, including potential job displacement, and implement mitigation measures like reskilling programs (Clause 33).
About This Bill
This Bill establishes a comprehensive framework for the regulation and governance of artificial intelligence (AI) in Kenya, creating an Office of the Artificial Intelligence Commissioner and an Advisory Committee. It introduces a risk-based classification system for AI, imposes specific obligations on providers and deployers of high-risk AI systems, and outlines penalties for non-compliance. The Bill aims to ensure ethical, transparent, and accountable AI use while fostering innovation and safeguarding human rights and public welfare.
Bill No.
Senate Bills No. 4
Gazette No.
Supplement No. 15 of 2026
Sponsor
Karen Nyamu
Background
The Artificial Intelligence Bill, 2026, aims to provide a comprehensive framework for the regulation and governance of artificial intelligence (AI) in Kenya. Its principal object is to ensure the ethical, transparent, and accountable use of AI, while simultaneously fostering innovation and safeguarding human rights, data protection, and public welfare. The Bill addresses gaps in existing laws, such as the Science, Technology and Innovation Act and the Data Protection Act (Cap 411C), and aligns with global standards such as the European Union Artificial Intelligence Act and Kenya's National Artificial Intelligence Strategy 2025-2030. The Bill also seeks to establish a dedicated Office of the Artificial Intelligence Commissioner to oversee AI risks, promote AI literacy, and advise county governments on AI integration in devolved sectors.
Key Provisions
The Bill establishes several key mechanisms for AI governance:
- Establishment of the Office of the Artificial Intelligence Commissioner: An independent State Office (Clause 4) responsible for the implementation and enforcement of the Act (Clause 10). The Commissioner will head the office, which will be a body corporate (Clause 4(5)). The Commissioner must hold a master's degree in a relevant field and have at least ten years of experience in AI governance, data protection, technology policy, or related fields, and in institutional management (Clause 6).
- Establishment of an Advisory Committee on Artificial Intelligence: This committee will advise the Commissioner on emerging trends, risks, opportunities, and policy matters, and facilitate stakeholder engagement and multidisciplinary research (Clauses 17 and 18). Its membership will include representatives from government, data protection, professional bodies, the Council of Governors, the private sector, and civil society, ensuring diversity and gender balance (Clause 17).
- Financial Provisions: The Office will be funded through monies allocated by the National Assembly, grants, gifts, donations, and other endowments (Clause 21). Annual estimates of revenue and expenditure will be prepared (Clause 22), and accounts will be audited in accordance with the Constitution and public finance management laws (Clause 23).
- Classification of Artificial Intelligence Systems: AI systems are to be classified by the Artificial Intelligence Commissioner based on the risk they pose to health, safety, fundamental rights, the environment, or societal welfare (Clause 25). Categories include unacceptable risk (prohibited), high risk (e.g., in healthcare, education, finance, security), limited risk, and minimal risk (Clause 25(2)).
- Transparency and Safeguards: AI providers and deployers must disclose the nature, purpose, and limitations of their systems, the extent of automated decision-making, and measures to mitigate bias (Clause 28(1)). Where an AI system makes automated decisions with significant legal effects, affected persons must be afforded safeguards, including the right to human intervention and the right to contest decisions, in compliance with the Data Protection Act (Clause 28(2)).
- Regulatory Sandboxes: The Commissioner will establish regulatory sandboxes for testing AI systems in a controlled environment, prescribing conditions for participation, including ethical, data protection, and risk monitoring safeguards (Clause 29). Priority will be given to innovations addressing national priorities and encouraging collaboration with county governments.
- Ethical Guidelines: The Commissioner will develop and publish ethical guidelines addressing prevention of bias, protection of privacy, human oversight, environmental sustainability, and prohibition of non-consensual use of personal images or likenesses in AI-generated content (Clause 30).
- AI Literacy and Human-Centric AI: The Commissioner will implement AI literacy programs at national and county levels (Clause 31). Persons designing or deploying AI systems must ensure they enhance rather than replace human capabilities, incorporate features supporting human involvement, and provide for human oversight in critical decisions (Clause 32).
- Workforce Impact Assessment: Providers or deployers of AI systems likely to impact employment must conduct workforce impact assessments, including potential job displacement, and implement mitigation measures like reskilling programs (Clause 33).
- Review of the Act: The Cabinet Secretary will cause a review of the Act every three years from its commencement to assess its effectiveness and suitability in addressing technological advancements, consulting with the Commissioner, Advisory Committee, stakeholders, and the public (Clause 37).
New Obligations
The Bill imposes significant new obligations, particularly for high-risk AI systems:
- For High-Risk AI System Providers/Deployers:
- Conduct pre-deployment risk and human rights impact assessments and implement mitigation measures (Clause 26(1)(a)-(b)).
- Ensure transparency, traceability, and explainability of decision-making processes (Clause 26(1)(c)).
- Maintain records of data inputs, training datasets, outputs, and performance metrics for at least five years (Clause 26(1)(d)).
- Comply with the Data Protection Act, including conducting data protection impact assessments where required (Clause 26(1)(e)).
- Incorporate measures for robustness, accuracy, and cybersecurity (Clause 26(1)(f)).
- Obtain explicit consent for AI-generated content involving a person's image, voice, or likeness, and clearly label AI-generated outputs (Clause 26(1)(g)).
- Submit annual compliance reports to the Artificial Intelligence Commissioner (Clause 28(3)).
- For All AI System Providers/Deployers:
- Design and deploy systems to enhance human capabilities and ensure human oversight (Clause 32).
- Conduct workforce impact assessments and implement reskilling programs for AI systems likely to impact employment (Clause 33).
- Public entities using AI must ensure compliance with this Act (Clause 34).
Penalties
The Bill outlines specific offences and associated penalties (Clause 35):
- Major Offences (Fine up to KSh 5 million or imprisonment up to 2 years, or both):
- Deploying or operating an AI system classified as an unacceptable risk (Clause 35(1)(a)).
- Deploying a high-risk AI system without conducting required risk assessment or implementing mitigation measures (Clause 35(1)(b)).
- Participating in a regulatory sandbox without adhering to prescribed conditions (Clause 35(1)(d)).
- Failing to conduct a required workforce impact assessment (Clause 35(1)(e)).
- Using an AI system in the public sector contrary to the Act, causing prejudice to public benefit or rights (Clause 35(1)(g)).
- Generating, deploying, or distributing AI-generated content (including synthetic media) using a person's image, voice, or likeness without explicit consent, where such content causes or is likely to cause harm, misinformation, defamation, or infringement of privacy (Clause 35(1)(i)).
- Minor Offences (Fine up to KSh 1 million or imprisonment up to 6 months, or both):
- Failing to comply with disclosure or transparency obligations (Clause 35(1)(c)).
- Contravening ethical guidelines resulting in bias, discrimination, or harm to individuals (Clause 35(1)(f)).
- Obstructing the Office of the Artificial Intelligence Commissioner, including providing false information or failing to submit required reports (Clause 35(1)(h)).
- Corporate Liability: Where an offence is committed by a body corporate, any director or officer with knowledge who did not exercise due diligence to ensure compliance will also be guilty (Clause 35(3)).
Transitional Provisions
The Bill does not contain explicit transitional provisions for existing AI systems or operations. However, Clause 37 mandates a review of the Act every three years from its commencement to determine its effectiveness and suitability in addressing technological advancements, involving consultations with the Commissioner, Advisory Committee, stakeholders, and the public.