AI Governance Trust Page

Acme AI

Verified (last verified: April 10, 2026)

Summary

This page describes how Acme AI approaches AI governance, model usage, and risk management. The documents linked below are shared publicly by the organization.

AI models in use

  • GPT-4o (OpenAI) | Risk level: Limited | Use case: Customer support drafting and internal knowledge retrieval
  • Claude 3.5 Sonnet (Anthropic) | Risk level: Minimal | Use case: Code review assistance and documentation generation
  • text-embedding-3-large (OpenAI) | Risk level: Minimal | Use case: Semantic search and knowledge base retrieval

Documents

  • AI Acceptable Use Policy

    Purpose

    This policy governs the acceptable use of artificial intelligence tools and systems at Acme AI. It applies to all employees, contractors, and third-party partners who interact with AI systems on behalf of the company.

    Permitted Uses

    • Customer support automation and routing
    • Code review assistance and developer productivity
    • Data analysis and business intelligence
    • Content drafting with mandatory human review

    Prohibited Uses

    • Automated hiring or firing decisions without human oversight
    • Processing of special category data (health, biometric) without explicit consent
    • Generating synthetic identities or deceptive content
    • Any use that violates applicable law or regulation

    Human Oversight

    All AI-assisted decisions that materially affect individuals must be reviewed by a qualified human before action is taken. Users may request human review of any automated decision.

    Data Handling

    No personal data is used to train third-party AI models without explicit data processing agreements. All AI providers are subject to our vendor assessment process.

    Governance

    This policy is owned by the AI Governance team and reviewed quarterly. Questions should be directed to compliance@acme-ai.example.com.

  • AI Data Processing Addendum

    Overview

    This addendum describes how Acme AI processes personal data in connection with AI systems, in compliance with GDPR Article 28 and the EU AI Act.

    Data Controllers and Processors

    Acme AI acts as a data controller for customer data. Our AI service providers (OpenAI, Anthropic) act as data processors under signed Data Processing Agreements.

    Categories of Data Processed

    • Pseudonymised usage logs for model performance monitoring
    • Customer-provided content for task completion (not retained for training)
    • Aggregated analytics with no individual identifiers
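    As an illustration of pseudonymising usage logs, direct identifiers can be replaced with keyed hashes so that logs stay joinable for performance monitoring without storing the raw identifier. This is a minimal sketch under assumed field names and key handling, not Acme AI's actual pipeline:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would come from a key vault,
# not source code.
PSEUDONYM_KEY = b"example-secret-key"


def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same user always maps to the same token, so usage logs remain
    joinable across sessions, but the raw identifier is never stored.
    """
    return hmac.new(PSEUDONYM_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()


# Example log entry with the identifier pseudonymised at write time.
log_entry = {
    "user": pseudonymise("alice@example.com"),
    "model": "gpt-4o",
    "latency_ms": 412,
}
```

    A keyed hash (rather than a plain hash) is used so that an attacker who obtains the logs cannot confirm a guessed identifier without also holding the key.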

    Legal Basis

    Processing is based on: (a) contract performance, (b) legitimate interests in product improvement, and (c) explicit consent where required.

    Data Retention

    AI interaction logs are retained for 90 days for debugging purposes, then permanently deleted. No personal data is retained by AI providers beyond the session.

    International Transfers

    Data may be processed in the United States under Standard Contractual Clauses (SCCs) approved by the European Commission.

    Your Rights

    Data subjects may request access, rectification, erasure, or portability of their data. Contact compliance@acme-ai.example.com.

  • AI Risk Assessment

    Executive Summary

    This document summarises Acme AI's assessment of AI-related risks in accordance with the EU AI Act and NIST AI RMF 1.0. Our AI systems are classified as limited risk under the EU AI Act framework.

    Risk Classification

    Our AI systems do not fall within the high-risk categories defined in Annex III of the EU AI Act. Primary use cases are productivity automation and customer support — not decisions affecting individuals' rights or safety.

    Identified Risks

    Accuracy & Reliability

    Likelihood: Medium | Impact: Medium
    AI-generated outputs may contain errors. Mitigated by mandatory human review for customer-facing content.

    Data Privacy

    Likelihood: Low | Impact: High
    Handled via Data Processing Agreements, data minimisation, and no-training clauses with all AI providers.

    Bias & Fairness

    Likelihood: Low | Impact: Medium
    Bias audits conducted quarterly. No AI system makes autonomous decisions affecting employment, credit, or access to services.

    Model Dependency

    Likelihood: Medium | Impact: Medium
    Multi-provider strategy reduces single-vendor dependency. Fallback procedures documented.

    Controls

    • Human-in-the-loop for all consequential outputs
    • Quarterly bias and fairness audits
    • Annual third-party AI security review
    • Incident response procedure with 72-hour notification SLA

    Review Schedule

    This assessment is reviewed annually and after any significant change to AI systems.

  • AI Model Cards

    Model Cards

    GPT-4o — OpenAI

    Use case: Customer support drafting, internal knowledge retrieval
    Risk level: Limited
    Data sent: Pseudonymised support tickets (no PII in prompts)
    DPA signed: Yes — OpenAI Data Processing Agreement (2024)
    Human oversight: All responses reviewed before sending to customers

    Claude 3.5 Sonnet — Anthropic

    Use case: Code review assistance, documentation generation
    Risk level: Minimal
    Data sent: Code snippets and technical documentation only
    DPA signed: Yes — Anthropic Commercial Terms with DPA
    Human oversight: Developer reviews all AI suggestions before merge

    Compliance Notes

    • No model is used for automated decision-making affecting individuals
    • All models operate under no-training agreements for customer data
    • Model versions are pinned and updated only after internal security review
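    Version pinning of the kind noted above can be sketched as a small registry that refuses to resolve anything but an approved, pinned identifier. The task names and version strings below are illustrative examples, not Acme AI's configuration:

```python
# Hypothetical registry of approved, pinned model versions.
# Only identifiers reviewed by security appear here; "latest"-style
# aliases are deliberately absent.
PINNED_MODELS = {
    "support_drafting": "gpt-4o-2024-08-06",
    "code_review": "claude-3-5-sonnet-20240620",
}


def resolve_model(task: str) -> str:
    """Return the pinned model version approved for a task.

    Raising on unknown tasks prevents code paths from silently falling
    back to an unreviewed model or a floating alias.
    """
    try:
        return PINNED_MODELS[task]
    except KeyError:
        raise ValueError(f"No pinned model approved for task: {task!r}")
```

    Updating a pin is then an explicit change to the registry, which is the point where the internal security review mentioned above can be enforced.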
Contact: compliance@acme-ai.example.com
Acme AI — AI Governance