Join Aaron as he delves into Bluesky's "composable moderation" system, a cornerstone of their decentralized approach to Trust & Safety, reimagining content moderation by allowing users to create an open, stackable, customisable ecosystem of moderation services that prioritize user choice and community empowerment (a rough sketch of the "stackable" idea follows the points below).
· Examining the concept of composable moderation at Bluesky and the challenges of user-generated moderation
· Balancing platform-wide safety measures with user-customizable moderation filters
· The role of community-led moderation services in shaping online spaces
· Challenges and opportunities in scaling composable moderation
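To make the "stackable" idea concrete, the sketch below shows one way a client could combine labels from several user-chosen moderation services and apply the user's own preferences. It is a minimal illustration under assumed names (the `Label` shape, the example services and the preference map are hypothetical), not Bluesky's actual data model or API.

```python
from dataclasses import dataclass

# Hypothetical label a moderation service might attach to a post.
@dataclass(frozen=True)
class Label:
    value: str   # e.g. "spam", "graphic-media"
    source: str  # which moderation service emitted it

# Severity order used to pick the strictest action the user opted into.
ACTION_RANK = {"ignore": 0, "warn": 1, "hide": 2}

def resolve_action(labels: list[Label], prefs: dict[str, str]) -> str:
    """Stack labels from every subscribed service and return the
    strictest action the user's own preferences call for."""
    action = "ignore"
    for label in labels:
        wanted = prefs.get(label.value, "ignore")
        if ACTION_RANK[wanted] > ACTION_RANK[action]:
            action = wanted
    return action

# Two subscribed services label the same post; the user's preferences,
# not the platform, decide what their client actually does with it.
labels = [
    Label("spam", source="community-labeler.example"),
    Label("graphic-media", source="safety-labeler.example"),
]
prefs = {"spam": "hide", "graphic-media": "warn"}
print(resolve_action(labels, prefs))  # -> "hide"
```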
As misinformation tactics, deceptive behaviors, AI-generated content and covert influence operations evolve, T&S teams must develop agile policies that outpace bad actors, anticipating emerging threats and building resilience into trust and safety frameworks. Join Siobhan Oat-Judge to hear how TikTok is responding to some of the biggest threats online today to consumers, societies and democracies.
This panel will explore strategies for achieving effective content moderation at scale, balancing automation with human oversight, and addressing challenges such as bias, context-sensitivity, and user appeals.
Artificial intelligence has revolutionized content moderation, enabling faster detection of harmful content and streamlining decision-making processes. However, the reliance on AI also introduces challenges, such as biases, false positives, and the potential for over-reliance on automated systems. This session explores how organizations can harness AI's potential while addressing its limitations to create safer online environments.
Key discussion points include:
As online platforms grow, ensuring user safety while maintaining scalability is a critical challenge. This session explores how businesses can effectively scale content moderation processes to proactively address harmful content while building trust within their communities.
This panel discussion brings together industry leaders and experts to explore real-world applications of Safety by Design principles in Trust & Safety operations. Attendees will gain insights into how organizations are proactively integrating safety considerations into their product development lifecycle, from conception to implementation.
This session explores the intricate balance between engineering innovation, operational efficiency, and user safety in the Trust & Safety landscape. Developing robust T&S systems that scale effectively while prioritising user protection.
Join this session for a journey to develop an effective internal content moderation system. Aligning across Policy, Operations and Enforcement to tackle the complex challenge of content moderation at scale.
As reflected in the latest regulation, Child Safety is paramount for Trust & Safety professionals, but is our current approach suited to the evolving digital landscape? Join us for an insightful panel discussion that delves into tangible steps to protect children from modern online harms.
With the recent introduction of Ofcom’s Guidance on Women and Girls’ Online Safety (WAGOS), this panel discussion aims to have a comprehensive and concrete conversation around how we can build frameworks for addressing gendered harms and fostering safer online spaces. Sharing actionable recommendations and discussing the pivotal role of service providers in adopting these guidelines.
Exploring the delicate balance between empowering youth participation and implementing effective parental controls. This session will focus on creating safe, engaging environments for young gamers while respecting their autonomy and fostering responsible digital citizenship.
As online gaming experiences evolve and communities expand, so do the challenges of ensuring safe and inclusive experiences for players. The Trust & Safety Summit is excited to partner with Thriving in Games Group for a session bringing together industry leaders to discuss the new Digital Thriving Playbook, an operational framework for building healthy and inclusive online games, communities and beyond.
AI’s role in content moderation presents both transformative opportunities and significant challenges. While AI enhances efficiency and scalability, concerns around accuracy, bias, and ethical considerations remain at the forefront. This panel will explore how organizations can leverage AI responsibly to create safer online environments.
Maintaining a safe and enjoyable user experience across diverse platforms and game types presents unique challenges. Paul Snyder, Director of Player Safety Strategy at EA, shares their approach to implementing consistent content filtration and moderation strategies across gaming environments, working directly with game teams to identify gaps, obstacles and opportunities.
This session explores the critical importance of integrating Safety by Design principles into the earliest stages of development, to create safer, more trustworthy digital products and services from the ground up.
Developing a safe and inclusive platform from the ground up presents unique challenges, particularly in gaming and user-generated content. This session explores strategies to foster trust, balance resource constraints and reduce brand risks while building and scaling a streaming platform:
Join this session to learn how Replikant’s Chief Strategy Officer, Ladan Cher, is leading from the front to create safe, inclusive digital spaces in the evolving landscape of Trust & Safety for content creators. In a world where legislation is still catching up, Ladan shares the proactive approaches being taken.
Understanding and supporting young users’ behaviour online is critical to fostering a safer environment. This session will delve into why young people act as they do online, explore methods T&S leaders can use to encourage empathy and positive decision-making, and examine how to communicate enforcement decisions transparently, understanding and guiding young users while promoting healthy digital interactions and safety.
For years, hashing technology has helped platforms detect and remove known child sexual abuse material (CSAM), but the challenge of identifying new, unknown CSAM has persisted—until now. Recent advancements in AI are transforming child safety efforts, enabling platforms to proactively detect previously unreported CSAM at scale. This panel brings together experts from Thorn, Internet Watch Foundation and Hive to explore how AI classifiers are reshaping the fight against online child exploitation.
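As a rough illustration of the two-stage approach described above (hash matching against lists of known material, then an ML classifier for previously unseen material), the hedged sketch below uses placeholder hooks; `hash_fn`, `known_hashes` and `classify` stand in for real tooling from vendors or NGOs and are assumptions, not any named organisation's API.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Verdict:
    matched_known: bool      # hit against a hash list of known material
    classifier_score: float  # estimated probability for unseen material
    escalate: bool           # route to human review / reporting

def screen_image(image_bytes: bytes,
                 known_hashes: Set[str],
                 hash_fn: Callable[[bytes], str],
                 classify: Callable[[bytes], float],
                 threshold: float = 0.8) -> Verdict:
    """Stage 1: match against known-material hashes.
    Stage 2: score unseen content with a classifier and flag
    anything above the threshold for human review."""
    digest = hash_fn(image_bytes)
    if digest in known_hashes:
        return Verdict(True, 1.0, True)   # known material: always escalate
    score = classify(image_bytes)
    return Verdict(False, score, score >= threshold)
```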
It is imperative for the tech industry to work together, sharing insights and resources to prevent, detect, and address these critical threats to youth safety. Led by the Tech Coalition, this panel will bring together key stakeholders from across the industry to explore the role of collaboration in combating OCSEA, discussing both the challenges and the practical steps that companies can take collectively.
Our final panel discussion brings together leaders from across functions to explore strategies for breaking down silos and fostering collaboration across all aspects of T&S operations, creating a more integrated and effective approach to platform safety and integrity. Key discussion points include:
Trust & Safety needs to be embedded into your culture, but how do you bring everyone along for the journey? Articulating its value across different stakeholders—C-level executives, the public, end users, and operational teams—is a complex challenge. In this fireside chat, we’ll dive into strategies for communicating the impact of Trust & Safety and emphasizing design-centric approaches to make
safety repeatable, measurable, and scalable.