This framework establishes the institutional structure, processes, and policies required for effective AI governance across the Minthar Holdings ecosystem. The framework ensures that AI development, deployment, and use are conducted responsibly, ethically, and in compliance with Saudi regulations and international standards. It aims to balance innovation with proactive risk management.
The Board of Directors of Minthar Holdings bears ultimate responsibility for AI governance. The Board designates a member responsible for AI who oversees the group's AI strategy and submits quarterly reports. The Board approves the Ethics Charter and key governance policies, and authorizes deployment of AI systems classified as "critical."
A multidisciplinary committee is formed comprising representatives from: technology and engineering, legal and compliance, risk management, business operations, and independent external experts. The committee meets monthly and issues decisions by majority vote. Its mandate includes: reviewing high-risk system deployment requests, adjudicating complex ethical cases, monitoring policy compliance, and updating frameworks as technology or regulations evolve.
The Chief AI Officer is responsible for the day-to-day execution of the governance framework: maintaining the AI systems registry, overseeing pre-deployment impact assessments, coordinating periodic audits, liaising with regulatory authorities, and reporting to the committee and Board.
Decision authority is distributed by system risk level: Critical level — requires Board approval based on committee recommendation; High level — requires AI Ethics Committee approval; Medium level — requires Chief AI Officer approval; Low level — delegated to department managers within approved policies.
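The risk-tiered decision matrix above can be sketched as a simple lookup. This is an illustrative sketch only: the risk-level keys and approver labels are paraphrased from the text, not official Minthar Holdings terminology, and `required_approver` is a hypothetical helper name.

```python
# Illustrative mapping of system risk level to the approving authority,
# as described in the decision matrix. Labels are paraphrased from the text.
APPROVAL_MATRIX = {
    "critical": "Board of Directors (on AI Ethics Committee recommendation)",
    "high": "AI Ethics Committee",
    "medium": "Chief AI Officer",
    "low": "Department manager (within approved policies)",
}

def required_approver(risk_level: str) -> str:
    """Return the authority that must approve deployment at this risk level."""
    try:
        return APPROVAL_MATRIX[risk_level.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}")

print(required_approver("High"))  # AI Ethics Committee
```

Encoding the matrix as data rather than branching logic keeps it auditable and easy to update when the framework is revised.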
Each stage of the AI model lifecycle is subject to specific governance controls: (1) Design — ethical and regulatory impact assessment; (2) Development & Training — data quality standards and bias testing; (3) Testing & Validation — comprehensive testing protocols with clear acceptance criteria; (4) Deployment — final review and documentation; (5) Monitoring — continuous performance and behavior indicators; (6) Decommissioning — safe decommissioning protocol with record preservation.
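One way to operationalize the stage gates above is a table of required controls per lifecycle stage. The stage and control identifiers below are illustrative paraphrases of the six stages in the text, and `outstanding_controls` is a hypothetical helper, not part of any mandated tooling.

```python
from enum import Enum

class Stage(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development_and_training"
    TESTING = "testing_and_validation"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    DECOMMISSIONING = "decommissioning"

# Controls per stage, paraphrased from the framework's lifecycle list.
LIFECYCLE_CONTROLS = {
    Stage.DESIGN: ["ethical_impact_assessment", "regulatory_impact_assessment"],
    Stage.DEVELOPMENT: ["data_quality_standards", "bias_testing"],
    Stage.TESTING: ["testing_protocols", "acceptance_criteria"],
    Stage.DEPLOYMENT: ["final_review", "deployment_documentation"],
    Stage.MONITORING: ["performance_indicators", "behavior_indicators"],
    Stage.DECOMMISSIONING: ["safe_decommissioning_protocol", "record_preservation"],
}

def outstanding_controls(stage: Stage, completed: list) -> list:
    """Controls still required before the stage gate can be passed."""
    done = set(completed)
    return [c for c in LIFECYCLE_CONTROLS[stage] if c not in done]
```

A stage gate passes only when `outstanding_controls` returns an empty list for that stage.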
Minthar Holdings maintains a centralized registry of all deployed AI systems containing: system name and purpose, technical and executive owner, risk classification, training data used, bias assessment results, deployment and review dates, and recorded incidents. The registry is continuously updated and subject to annual audit.
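The registry fields listed above map naturally onto a record schema. The sketch below assumes a minimal `AISystemRecord` shape with field names paraphrased from the text; the actual registry schema and review-scheduling rules are not specified in the framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the centralized AI systems registry (illustrative schema)."""
    name: str
    purpose: str
    technical_owner: str
    executive_owner: str
    risk_classification: str   # e.g. "critical" | "high" | "medium" | "low"
    training_data: str
    bias_assessment: str
    deployed_on: date
    next_review: date
    incidents: list = field(default_factory=list)

    def is_review_due(self, today: date) -> bool:
        """True when the record has reached or passed its scheduled review date."""
        return today >= self.next_review
```

Keeping incidents on the record itself makes the annual audit a pure traversal of the registry.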
Before integrating any third-party AI system: a due diligence assessment is conducted covering the vendor's ethical and security policies; the system is classified within the approved risk matrix; governance and liability terms are included in contracts; system performance is monitored periodically; audit rights are secured in service agreements.
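The pre-integration steps above amount to a gating checklist. The step identifiers below are illustrative shorthand for the five requirements in the text, and `integration_blockers` / `may_integrate` are hypothetical helper names.

```python
# Pre-integration checklist for third-party AI systems, paraphrased
# from the framework's five requirements.
THIRD_PARTY_CHECKLIST = [
    "due_diligence_assessment",   # vendor ethical & security policies
    "risk_classification",        # placed in the approved risk matrix
    "contract_governance_terms",  # governance & liability clauses
    "monitoring_plan",            # periodic performance monitoring
    "audit_rights",               # audit rights in service agreements
]

def integration_blockers(completed: list) -> list:
    """Checklist items not yet satisfied, in the order they appear."""
    done = set(completed)
    return [step for step in THIRD_PARTY_CHECKLIST if step not in done]

def may_integrate(completed: list) -> bool:
    """A third-party system may be integrated only when no blockers remain."""
    return not integration_blockers(completed)
```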
This framework aligns with: the AI Governance Framework issued by the Saudi Data & AI Authority (SDAIA); National Cybersecurity Authority (NCA) cybersecurity controls; international standard ISO/IEC 42001 for AI management systems; the NIST AI Risk Management Framework; and the principles of the EU AI Act (as a best-practice reference).
Comprehensive AI governance audits are conducted at least twice annually. They include: reviewing the effectiveness of controls and policies, evaluating recorded incidents and lessons learned, measuring compliance indicators, and benchmarking against global best practices. Audit findings are reported to the committee and Board with improvement recommendations.
Any material modification to deployed AI systems — whether to the model, training data, or scope of use — is subject to a formal change management process that includes: change impact assessment, approval from the competent authority per the decision matrix, pre-deployment testing, and documentation and registry updates.
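A change record for a material modification can be validated against the four required steps. This is a minimal sketch under the assumption that each step is evidenced by a flag in the record; the step names and helper functions are illustrative, not prescribed by the framework.

```python
# The four mandatory steps of the change management process, paraphrased
# from the framework.
CHANGE_STEPS = (
    "change_impact_assessment",
    "approval_per_decision_matrix",
    "pre_deployment_testing",
    "documentation_and_registry_update",
)

def change_gaps(record: dict) -> list:
    """Steps of the change process not yet evidenced in the change record."""
    return [step for step in CHANGE_STEPS if not record.get(step)]

def change_approved_for_release(record: dict) -> bool:
    """A material modification may ship only when every step is evidenced."""
    return not change_gaps(record)
```

In practice the approval step would route through the same risk-tiered decision matrix used for initial deployments.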