The Atlantic Council has produced a report from a diverse group of experts convened to develop an action-oriented agenda for future online spaces. The group included experts on trust and safety, advertising, gaming, civil rights, information security, community organizing, product design, Web3, national security, philanthropy and foreign affairs.

Some hard truths

There was a sobering consensus among the expert group about the challenges in mitigating online risks:

  • That which occurs in the offline world will occur online: “[m]alignancy migrates, and harms are not equally distributed across societies”. No technology can be expected to solve racism, sexism, ethnic hatred, intolerance, bigotry, or struggles for power.

  • Equally, without the natural inefficiencies of human-to-human communication and the social norms of the offline world, the online manifestations of the risks and harms we have learned to live with offline are set to increase at an exponential pace: “[o]nline spaces that do not acknowledge or plan for that reality consequently scale malignancy and marginalization”.

  • Accordingly, the starting proposition for developers, platforms and the policy makers who regulate them must be that choices made when creating or maintaining online spaces are not value-neutral, nor do new products and services enter into a neutral society.

  • We also need to be realistic that industry will likely continue to drive rapid online change while proving unable or unwilling to solve the core problems at hand.

  • Existing political and regulatory systems also cannot keep pace.

  • In any event, no consensus exists on what ‘good’ should look like in the digital world of today, let alone in the future.

Is the cavalry coming over the hill?

Glum as this is, the Atlantic Council’s report also saw opportunity:

A rare combination of regulatory sea change that will transform markets, landmarks in technological development, and newly consolidating expertise can open a window into a new and better future.

The expert group considered that, by developing more creative and aggressive strategies, industry, academics, community groups, philanthropies and governments can meet this moment more effectively.

The expert group’s optimism lies in the emerging industry expertise in trust and safety (T&S):

Thankfully, the knowledge needed to identify and build solutions has been developing steadily both inside companies and outside of them. Significant collective expertise now exists to illuminate not only where harms and risks can scale through existing and emerging technologies, but also where lessons learned can be applied proactively to construct safer, more trustworthy spaces.

Is that the cavalry or just the Big Four?

The expert group was concerned that, without the concerted effort the report goes on to recommend (see below), T&S expertise may falter or be misdirected.

First, the report notes that while an estimated 100,000 people globally work in T&S (which seems a low number in itself), most are poorly paid, low-skilled and overworked content moderators in developing economies. The Atlantic Council report says:

The next generation of T&S practitioners and experts should also come from a more diverse range of disciplines. This will help T&S respond to the diversity of challenges present in AI and metaversal technologies (such as decentralized and/or immersive environments), as well as the increasingly varied range of societal harms online platforms can exacerbate.

Second, while professionalisation is to be welcomed, civil society groups, independent researchers, academics and journalists will continue to lead the way in building collective understanding of how risks propagate via platforms, and are often the ‘first responders’ for disadvantaged or marginalised groups. However, these informal T&S monitors face a number of challenges:

  • In today’s polarised political environment, those who call out misinformation and disinformation can be subject to vitriolic pile-ons and threats.

  • There are “long-standing, at times catastrophic, power imbalances between the companies building online spaces and the communities impacted by them”. The onus continually rests on civil society, with its limited resources, to adapt to the operational needs of well-funded, empowered corporations. While some technology companies have improved their interfaces with civil society, this engagement mainly sits at the two extremes of big-picture policy issues and reporting ‘bad content’, rather than involving civil society in T&S issues during design and testing.

  • National security can be used as “a pretext to bar civil society, researchers, or journalists from accessing information regarding potentially rights-violating activities conducted in the name of cybersecurity”.

Third, while regulatory interventions such as the EU’s AI Act are welcome, the development of T&S expertise could become overly skewed towards audit functions (hence the Big Four):

[A]uditors, assessors, vendors, and advisers will represent a growing segment of the broader T&S services. This creates a very real risk that influence will consolidate even further within industry and its direct affiliates (i.e., auditing companies) in the Global North.

Lastly, while each developer, platform and corporate user will understandably focus on its own online business risks, it may not pay enough attention to the bigger picture of the wider social harms its technology is enabling: “individual services may not internalize all the social costs of harms occurring on their platform, and thus may not invest sufficiently in socially optimal T&S”.

New risks on the near horizon

While we continue to struggle with the risks of current online technology, bigger, more complex risks are coming with the next waves of technology, including:

  • Federated online spaces have “many of the same propensities for harmful misuse by malign actors as centralised social media platforms, while possessing few, if any, of the hard-won detection and moderation capabilities necessary to stop them”.

  • Extended reality (XR) can create totally new scenarios and possibilities for users, including unforeseen harms such as the offline impact of online physical or sexual violence. As the report puts it:

The neuroscience behind XR can lead to a blurring of what is or isn’t real, and as a result, the consequences of harmful or inappropriate behavior may be more acute.

Maybe the gamers have the answers

In a stinging rebuke of the tech sector’s insularity, the Atlantic Council report says “[t]he technology sector has long suffered from the presumption that its problems are novel, and that relevant knowledge must then be developed sui generis in bespoke, tech-centric settings”.

The report says that T&S practice could learn much from how other sectors and disciplines have addressed social harms, including from:

  • Cybersecurity: which has gone through a process of professional development, cross-industry collaboration, and technical and process standardisation. Probably the key lesson to be taken from the cybersecurity sector is that “a long-standing ‘blame-the-user’ narrative has begun to shift to a secure-by-design approach that emphasizes that primary responsibility for safeguarding users lies with platforms”.

  • Gaming: which has “long existed as multimedia interactive spaces that commingle real-time mixtures of audio, video, and text: one that will define online spaces more and more in the future”. This has required gaming developers to build new safety features to protect users in immersive environments, to monitor user conduct in privacy-respecting and less data-intensive ways, to grapple with the increased aperture of user-generated content as a threat, and to design games more positively around children so as to pre-emptively shape and encourage healthy and inclusive play patterns.

Recommendations

The Atlantic Council report calls for more investment by the private sector, government, universities and philanthropy in the development of T&S expertise, training and qualifications.

Several key recommendations stand out:

  • Effective T&S is as much a logistics challenge as a policy challenge: a matter of facilitating effective decision-making, undergirded by technology. The logistical aspects of T&S operations could benefit from the development of robust open tooling. For example, hash-matching tools that detect exact and near-exact matches of previously identified content, or toolkits that help build classifiers to assess new, previously unseen content or behavior, could be of widespread benefit (see the sketch after this list).

  • Developers need to get much better at sharing information about T&S risks, skills and tools with each other and with other stakeholders, including researchers and civil society.

  • Corporates need to break away from the perception that T&S investment is a cost centre rather than a value generator; that perception remains one of the greatest barriers to more widespread and consistent adoption of T&S practices and standards within companies.

  • In the ‘mad rush’ to invest in AI start-ups, venture capital needs to wake up to the risks that inadequate T&S expertise and culture in those start-ups pose to its investments.

  • The development of T&S expertise needs to be more globally inclusive: “[o]ne fundamental limitation of the current T&S field is how closely it hews to the culture, language, and incentives of US technology companies”.
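
To make the tooling recommendation concrete, here is a minimal sketch of how such matching can work: exact matching via a cryptographic digest, and near-exact matching via a locality-sensitive ‘simhash’ fingerprint compared by Hamming distance. Everything here (the function names, the index layout and the distance threshold) is an illustrative assumption rather than tooling described in the report; production systems use far more robust techniques, such as the PhotoDNA and PDQ perceptual hashes for images.

```python
# Minimal sketch of hash-based matching of previously identified content,
# assuming a simple text pipeline. Exact matches use a cryptographic digest;
# near-exact matches use a 64-bit simhash fingerprint compared by Hamming
# distance. Names, the index layout and the threshold are all illustrative.
import hashlib
import re

def exact_hash(text: str) -> str:
    """Cryptographic digest: any change to the input changes the hash."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def simhash(text: str, bits: int = 64) -> int:
    """Locality-sensitive fingerprint: similar inputs yield similar bits."""
    counts = [0] * bits
    for word in re.findall(r"[a-z0-9]+", text.lower()):  # crude normalisation
        h = int.from_bytes(hashlib.md5(word.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if counts[i] > 0)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A one-entry index of "previously identified" content.
known = "buy cheap pills now limited offer click here"
index = {"exact": exact_hash(known), "sim": simhash(known)}

candidate = "Buy cheap pills now LIMITED offer click here!!"
is_exact = exact_hash(candidate) == index["exact"]        # False: bytes differ
is_near = hamming(simhash(candidate), index["sim"]) <= 6  # True: same tokens
print(f"exact match: {is_exact}, near match: {is_near}")
```

The design point the sketch carries: a cryptographic digest fails on any perturbation, however trivial, while a locality-sensitive fingerprint absorbs small edits, which is why matching pipelines typically pair the two approaches.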

Conclusion

On first reading, the report has the feel of being written by committee, where everyone got something. However, the slew of recommendations (over 50) speaks to how big and complex the task of scaling online trust is. The report’s clear, compelling message is that we are on the precipice of a new digital era, and we have a short window in which to act.

Read more: Scaling trust on the web - Atlantic Council