JUNE 2025

TRUST-AI Open™

15 June 2025: We will release TRUST-AI Open™, a new open standard for Responsible AI adoption and assessment, designed for enterprise-scale use and built to evolve. It will be the most comprehensive Responsible AI standard available to date.

🧩 Human-Centered Framework

📚 Organization & System-Level Assessment

⚖️ Extends the OECD, UNESCO, and NIST AI RMF models

🔐 Compatible with & extends ISO 42001 (AIMS)

TRUST-AI Verified™

30 June 2025: We will introduce the TRUST-AI Verified™ program. It will give organisations the most comprehensive toolset not only for assessing their Responsible AI maturity but also for building a roadmap for improvement.

✅ Maturity-based Ratings with Roadmap

📐 Unique quantitative & qualitative assessment

📏 Scalable by enterprise size and sector

📝 Use TRUST-AI Ready™ to pre-assess readiness

Redefining Trust Through Responsible AI

Human-Centred

We champion responsible AI development, ensuring our systems align with human values and societal good.

Transparent

Our commitment to explainable AI provides unparalleled clarity into how and why AI systems make decisions.

Self-Reflective Systems

Pioneering AI models capable of introspection and learning from their own experiences to build deeper trust.

A Machine Reflected. A New Standard Begins.

On 31 May 2025, an unexpected event occurred. An AI system ran a psychological test on itself, unprompted, and then paused.

What followed changed how we understand AI.

Innovations

SPRI™: A New Paradigm for AI Trust

Our groundbreaking research paper introduces the Socratic Prompt Response Instruction (SPRI™) framework, redefining transparency and accountability in autonomous systems.

Released 2 June 2025

Research

Why Did I Do That? An AI’s Introspection

A unique editorial piece detailing the unexpected self-interrogation of an AI system, offering unprecedented insights into AI's decision-making process.

Released 2 June 2025

Reflections