TrustCon 2025: AI, Regulations & Human-AI Safety Insights

My Key Takeaways from TrustCon 2025

TrustCon 2025 was an incredible learning experience for me. It was truly validating to see so many of our strategies affirmed by shared insights from across the industry. As part of the Program Committee, I'm incredibly proud of the narrative we helped shape, highlighting an important turning point for the Trust & Safety industry.

Here, I've put together the key takeaways, covering how we can tackle evolving harms, navigate complex regulations, and boost human-AI collaboration for a safer digital future.

The Dual Nature of AI

AI is both the most powerful tool for scaling safety and the most significant vector for new, evolving harms.

Regulatory Complexity

A global shift from self-regulation to legally mandated frameworks creates a complex compliance web.

Human-AI Collaboration

The sophistication of AI demands a redefinition of human roles towards specialized oversight and ethical guidance.

The AI Paradox

Artificial Intelligence presents a fundamental paradox: it is simultaneously our greatest asset in defense and our most formidable challenge. This section explores the dual nature of AI as both a powerful tool for safety operations and a significant source of new, evolving threats.

AI's Role in T&S

AI's application in Trust & Safety is twofold: it enhances safety operations while simultaneously serving as a source of new threats.

Emerging AI-Driven Threats

Generative AI Risks

The AI Arms Race

Blurring Safety & Cybersecurity

The Regulatory Imperative

The global regulatory landscape for online platforms is expanding rapidly, creating a dense web of requirements that Trust & Safety teams must navigate. This section details the shift from reactive compliance to proactive strategy in response to key legislative frameworks.

EU Digital Services Act (DSA)

The DSA has far-reaching implications for content moderation and dispute settlement. It mandates greater transparency, requires clear explanations for moderation decisions, and grants users the right to appeal to independent, certified organizations (Article 21). This signifies a fundamental shift towards external accountability beyond internal platform mechanisms.

Human-AI Collaboration Effectiveness

As AI handles routine tasks, the role of the T&S professional is evolving. This section examines the shift towards "augmented intelligence" and the evolving workflows that optimize the balance between automation and human judgment.

The Shift to Augmented Intelligence

As AI capabilities continue to improve, the industry is moving towards a model of "augmented intelligence," in which AI enhances human capabilities to address evolving, high-stakes harms. AI enablement systems are increasingly integrated into moderation workflows, providing crucial context, prioritizing tasks, and supporting human decision-making. This is creating new ways of working in an AI-enabled ecosystem, optimizing the balance between automation and human judgment and shifting generalist moderation towards specialized human oversight.

1. Generalist Moderation
2. AI-Assisted Review
3. Specialized Oversight

This progression shows the evolution of T&S roles, moving from manual review to specialized human oversight of AI systems.
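To make the "AI-Assisted Review" stage concrete, here is a minimal sketch of how an AI-enabled triage step might route content: high-confidence harms are actioned automatically, ambiguous cases are queued for human specialists (riskiest first), and low-risk items are cleared. All names, thresholds, and data structures here are hypothetical illustrations, not any platform's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    """A hypothetical unit of content awaiting moderation."""
    content_id: str
    ai_risk_score: float  # model-estimated probability of harm, 0.0-1.0
    ai_labels: list = field(default_factory=list)  # e.g. ["spam", "fraud"]

def triage(items, auto_action_threshold=0.95, human_review_threshold=0.5):
    """Route items into three buckets: auto-actioned, human-review queue,
    and cleared. The human queue is sorted so specialists see the
    riskiest, most ambiguous cases first."""
    auto_actioned, human_queue, cleared = [], [], []
    for item in items:
        if item.ai_risk_score >= auto_action_threshold:
            auto_actioned.append(item)      # high-confidence: AI acts alone
        elif item.ai_risk_score >= human_review_threshold:
            human_queue.append(item)        # ambiguous: human judgment needed
        else:
            cleared.append(item)            # low risk: no action
    human_queue.sort(key=lambda i: i.ai_risk_score, reverse=True)
    return auto_actioned, human_queue, cleared
```

The key design point is that automation handles only the clear-cut ends of the distribution, while the middle band, where context and judgment matter most, is reserved for specialized human oversight.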

Key Skills for Trust and Safety in the Age of AI

The evolving landscape of AI-driven harms and the shift towards augmented intelligence demand a refined and expanded set of skills for Trust & Safety professionals.

Essential Competencies

  • AI Literacy: Understanding the strengths, limitations, and ethical implications of various AI and machine learning models.
  • AI Model Analysis & Bias Identification: Interpreting AI outputs, identifying algorithmic biases, and diagnosing AI failures.
  • Prompt Engineering: Crafting effective prompts that guide large language models toward accurate, efficient, and policy-aligned outputs.
  • Responsible AI Governance & Ethical Design: Navigating complex ethical dilemmas and ensuring AI systems align with human values.
  • Domain & Cross-Functional Expertise: Understanding advanced AI-driven threats that blur traditional functional silos, such as Cybersecurity & Financial Crime Analysis.
  • Human Oversight & Accountability: Maintaining effective control over increasingly autonomous AI systems.
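As one illustration of the Prompt Engineering competency above, a moderation prompt is often assembled from structured parts: a role, a policy summary, an explicit label set, and a required output format. The sketch below is a hypothetical example of that pattern; the policy names, labels, and wording are illustrative only, not a recommended production prompt.

```python
def build_moderation_prompt(policy_name, policy_summary, content_text):
    """Assemble a structured prompt asking a large language model to
    classify content against one named policy, with a fixed label set
    and an explicit, machine-parseable output format."""
    return (
        f"You are a content-policy classifier for the '{policy_name}' policy.\n"
        f"Policy summary: {policy_summary}\n\n"
        "Classify the content below as VIOLATING, BORDERLINE, or NON_VIOLATING.\n"
        "Respond with the label on the first line, then one sentence of rationale.\n\n"
        f'Content:\n"""{content_text}"""'
    )
```

Constraining the label set and output format in this way makes model responses easier to audit and route, which matters once prompts feed the kind of AI-assisted workflows described earlier.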

Workforce Wellbeing as a Strategic Imperative

The psychological toll on Trust & Safety professionals is a significant concern. This section highlights why prioritizing workforce wellbeing is not merely a benefit, but a strategic necessity for operational longevity and effectiveness.

Prioritizing Professional Resilience

Organizations must weave psychological safety, resilience, and full well-being support into every part of their strategy, training, and workflow design. This goes beyond standard programs to specifically address the unique stressors inherent in an AI-enabled environment.

  • Proactive Integration: Embedding wellbeing into strategy, training, policy, and workflow design.
  • Performance Driver: Linking wellbeing investment to improved accuracy, efficiency, and reduced attrition.

These are my personal observations and learnings from TrustCon 2025.

© 2025 www.trustandsafety.xyz