This policy defines the rules and acceptable boundaries for using AI systems and tools within Minthar Holdings and its subsidiaries, and for clients and partners engaging with our AI-powered services. It applies to all employees, contractors, consultants, clients, and third parties who use or interact with Minthar's AI systems.
AI tools are authorized for the following uses:
- Data analysis and business reporting
- Automation of repetitive administrative tasks
- Decision support with human oversight
- Product and service development within the approved governance framework
- Research and development within protected sandbox environments
- Customer experience enhancement through intelligent assistance tools
- Marketing and educational content creation, with mandatory human review
The following uses are absolutely prohibited:
- Creating misleading content, or creating deepfakes of any kind without clear disclosure
- Using AI for mass surveillance or tracking of individuals without consent
- Developing or deploying autonomous weapons or systems that cause physical harm
- Manipulating human behavior through exploitative or deceptive methods
- Collecting personal data beyond the legal bases specified in the PDPL
- Any use that violates Saudi regulations, Islamic values, or public decency
The following uses require prior approval from the AI Ethics Committee:
- Automated decision-making systems affecting individuals
- Processing of sensitive data or children's data
- Deploying generative models in client-facing environments
- Integrating new third-party AI systems
- Cross-border data transfers for model training
- Use of facial recognition or biometric technologies
Every employee using AI tools must:
- Complete the mandatory responsible AI training program
- Review all AI-generated outputs before adoption or external sharing
- Report any unexpected behavior or questionable outputs through the designated reporting channel
- Refrain from entering confidential or personal data into external AI tools without authorization
- Maintain a clear record of AI usage in projects
Clients and partners using Minthar's AI-powered services must not:
- Attempt to extract or reverse-engineer models
- Use outputs for illegal or discriminatory purposes
- Resell or sublicense outputs as standalone AI services without a prior agreement
- Exceed the usage limits specified in the service contract
All AI systems are classified into four risk levels:
1. Critical: systems affecting health, safety, or fundamental rights; subject to the highest level of oversight.
2. High: systems affecting financial decisions, employment, or assessments; require committee approval.
3. Medium: internal process-improvement systems; require registration and monitoring.
4. Low: general assistance tools; subject to standard usage policies.
When using AI for content creation:
- All generated content must be reviewed by a human before publication.
- AI use must be disclosed in published educational and advisory content.
- Fully AI-generated content must not be attributed to a human author.
- The quality standards applied to human-created content apply equally to AI-generated content.
Violation of this policy constitutes a disciplinary offense that may result in:
- A formal written warning for a first, non-serious violation
- Suspension of AI tool access privileges
- Mandatory retraining
- Termination for serious or repeated violations
For clients, consequences may include service suspension or contract termination, with Minthar reserving the right to claim damages.
All parties are encouraged to report any suspected violation of this policy through:
- The dedicated AI ethics email
- The independent confidential reporting channel
- The direct manager or the compliance department
All reports are handled in full confidence, the reporter's identity is protected, and retaliation against reporters is prohibited.