This policy establishes the systematic methodology for identifying, assessing, treating, and monitoring risks associated with AI systems across the Minthar Holdings ecosystem. It applies to all AI systems — whether internally developed or acquired from third parties — throughout their entire lifecycle.
We classify AI risks into six categories: (1) Model Risk — prediction errors, performance drift, hallucinations; (2) Data Risk — data bias, data leakage, data quality; (3) Operational Risk — system failures, integration failures, key-person dependency; (4) Reputational Risk — offensive outputs, loss of customer trust; (5) Legal & Regulatory Risk — regulatory violations, legal liability; (6) Ethical Risk — discrimination, privacy violations, societal harm.
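To make the taxonomy machine-readable in supporting tooling, it can be encoded directly; the following Python sketch is illustrative only, and the identifiers are our assumption rather than names mandated by this policy.

```python
from enum import Enum

class RiskCategory(Enum):
    """The six AI risk categories defined by this policy."""
    MODEL = "model"                # prediction errors, performance drift, hallucinations
    DATA = "data"                  # data bias, data leakage, data quality
    OPERATIONAL = "operational"    # system failures, integration failures, key-person dependency
    REPUTATIONAL = "reputational"  # offensive outputs, loss of customer trust
    LEGAL_REGULATORY = "legal"     # regulatory violations, legal liability
    ETHICAL = "ethical"            # discrimination, privacy violations, societal harm
```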
Each risk is evaluated along two axes: Likelihood (rare, possible, likely, near-certain) and Impact (limited, moderate, severe, catastrophic). The combination determines the severity level: Critical (requires immediate escalation and urgent action), High (requires a treatment plan within 48 hours), Medium (requires treatment within 30 days), Low (logged and monitored).
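As a minimal sketch of how the matrix could be automated, the enums below mirror the two axes; the numeric cut-points are illustrative assumptions, since the policy fixes the severity levels but not an exact scoring rule.

```python
from enum import IntEnum

class Likelihood(IntEnum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3
    NEAR_CERTAIN = 4

class Impact(IntEnum):
    LIMITED = 1
    MODERATE = 2
    SEVERE = 3
    CATASTROPHIC = 4

def severity(likelihood: Likelihood, impact: Impact) -> str:
    """Map a likelihood/impact pair to a severity level.

    The thresholds below are assumptions for illustration, not values
    fixed by this policy.
    """
    score = likelihood * impact  # product score in 1..16
    if score >= 12:
        return "Critical"  # immediate escalation and urgent action
    if score >= 8:
        return "High"      # treatment plan within 48 hours
    if score >= 4:
        return "Medium"    # treatment within 30 days
    return "Low"           # logged and monitored
```

For example, `severity(Likelihood.POSSIBLE, Impact.CATASTROPHIC)` scores 8 and returns "High", while `severity(Likelihood.LIKELY, Impact.CATASTROPHIC)` scores 12 and returns "Critical".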
Before any AI system is deployed, a comprehensive assessment is conducted covering: identifying use cases and affected parties; analyzing training data for quality, bias, and representativeness; testing model performance across diverse scenarios including edge cases; conducting a Data Protection Impact Assessment (DPIA) when processing personal data; reviewing regulatory compliance; and documenting residual risks and mitigation plans.
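One way to operationalize this gate in tooling is an assessment record whose fields mirror the steps above; the schema below is a hypothetical sketch, not a form prescribed by this policy.

```python
from dataclasses import dataclass, field

@dataclass
class PreDeploymentAssessment:
    """Hypothetical record of the pre-deployment assessment steps."""
    use_cases: list[str]
    affected_parties: list[str]
    data_reviewed: bool = False           # quality, bias, representativeness
    edge_cases_tested: bool = False       # diverse scenarios including edge cases
    processes_personal_data: bool = False
    dpia_completed: bool = False
    compliance_reviewed: bool = False
    residual_risks: list[str] = field(default_factory=list)

    def deployment_gate(self) -> bool:
        """True only when every mandatory step is complete.

        A DPIA is required only when personal data is processed.
        """
        dpia_ok = self.dpia_completed or not self.processes_personal_data
        return (self.data_reviewed and self.edge_cases_tested
                and dpia_ok and self.compliance_reviewed)
```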
Approved mitigation controls include: Technical controls — adversarial testing, drift monitoring, emergency kill switches, data encryption; Procedural controls — mandatory human review for high-impact decisions, separation of duties, escalation protocols; Organizational controls — clear policies, training, accountability; Contractual controls — liability terms with vendors and clients.
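Expressed as configuration, the four control families might look like the mapping below; the snake_case identifiers are illustrative shorthand for the controls listed above.

```python
# Illustrative registry of approved mitigation controls by family.
MITIGATION_CONTROLS: dict[str, list[str]] = {
    "technical": [
        "adversarial_testing",
        "drift_monitoring",
        "emergency_kill_switch",
        "data_encryption",
    ],
    "procedural": [
        "human_review_for_high_impact_decisions",
        "separation_of_duties",
        "escalation_protocols",
    ],
    "organizational": [
        "clear_policies",
        "training",
        "accountability",
    ],
    "contractual": [
        "vendor_liability_terms",
        "client_liability_terms",
    ],
}
```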
After deployment, all systems undergo continuous monitoring covering: model performance indicators (accuracy, recall, error rate); bias indicators across demographic groups; failure and incident rates; user feedback and complaints; changes in the operational or regulatory environment. The monitoring dashboard is presented to the Chief AI Officer weekly.
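Two of these indicators can be sketched as simple checks; the 5% drift tolerance and the parity-gap measure below are assumptions for illustration, and any production thresholds would be set under this policy's approval process.

```python
def accuracy_drifted(baseline: float, current: float,
                     tolerance: float = 0.05) -> bool:
    """Flag a drop in accuracy of more than `tolerance` below baseline."""
    return (baseline - current) > tolerance

def demographic_parity_gap(positive_rate_by_group: dict[str, float]) -> float:
    """Largest difference in positive-outcome rate across demographic groups.

    A gap near zero suggests parity; larger gaps warrant investigation.
    """
    rates = list(positive_rate_by_group.values())
    return max(rates) - min(rates)
```

For example, `demographic_parity_gap({"group_a": 0.62, "group_b": 0.55})` returns a gap of about 0.07, which the monitoring dashboard could surface alongside the accuracy and recall trends.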
When a risk materializes, escalation proceeds by severity level: Critical — immediate notification to the CEO and Board, activation of the incident response protocol, and consideration of immediate shutdown; High — notification to the Ethics Committee and the Chief AI Officer within 4 hours; Medium — notification to the Chief AI Officer within 24 hours; Low — logged and presented in periodic reports.
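These notification rules can be encoded as an escalation table so that routing is applied consistently; the dictionary below simply restates the rules above, with `timedelta(0)` standing in for "immediate".

```python
from datetime import timedelta

# Escalation routing table restating the notification rules above.
ESCALATION_RULES = {
    "Critical": {
        "notify": ["CEO", "Board"],
        "deadline": timedelta(0),  # immediate
        "actions": ["activate_incident_response", "consider_immediate_shutdown"],
    },
    "High": {
        "notify": ["Ethics Committee", "Chief AI Officer"],
        "deadline": timedelta(hours=4),
        "actions": [],
    },
    "Medium": {
        "notify": ["Chief AI Officer"],
        "deadline": timedelta(hours=24),
        "actions": [],
    },
    "Low": {
        "notify": [],  # logged and presented in periodic reports
        "deadline": None,
        "actions": ["log"],
    },
}
```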
We maintain a library of reference scenarios, updated annually, including: significant model performance drift affecting business decisions; leakage of training data containing personal information; discriminatory outputs affecting a protected class; a successful adversarial attack on a deployed model; and critical AI system failure during peak hours. Each scenario has an approved response plan.
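For traceability, the library could be kept as a registry pairing each scenario with its approved response plan; the identifiers and document paths below are hypothetical.

```python
# Hypothetical registry linking reference scenarios to approved response plans.
REFERENCE_SCENARIOS: dict[str, str] = {
    "model_performance_drift": "plans/drift_response",
    "training_data_pii_leak": "plans/data_leak_response",
    "discriminatory_output_protected_class": "plans/discrimination_response",
    "successful_adversarial_attack": "plans/adversarial_attack_response",
    "critical_failure_peak_hours": "plans/peak_outage_response",
}
```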
This policy aligns with: the NIST AI Risk Management Framework (AI RMF 1.0); ISO/IEC 23894 for AI risk management; the SDAIA risk governance framework; and National Cybersecurity Authority (NCA) controls. Alignment is reviewed upon release of updates to these frameworks.
The risk matrix and mitigation plans are reviewed every six months or upon material changes to systems or the regulatory environment. Annual risk simulation exercises are conducted to test response readiness. Review findings are reported to the AI Ethics Committee and Board of Directors.