Information Integrity by Design: The Missing Piece of Values-Aligned Tech

October 15, 2024

Published in Tech Policy Press.

Privacy, security, and inclusion are not natural elements of our technologies. They are largely a result of market demands — for safety and equity and belonging — precipitating movements (privacy by design, security by design, and product inclusion) that have pushed for these concepts to be viewed as defaults in the technologies on which we depend every day. New standards were created, best practices normalized, and awareness brought that transformed how we build technology and what we’ve come to expect of it. We must now think about the creation of an analogous framework for information integrity that seeks to incorporate these values from the earliest stages of development.

These movements were in some ways a response to technology’s “move fast and break things” mentality, which has proven untenable. It is clear that bolting on mitigations for unintended harms is insufficient. The democratization of the information ecosystem, and the more recent democratization of AI tools, is changing the information space, and we have an opportunity to make sure it changes for the better. The information environment has benefited greatly from technological innovations—supporting communities in times of crisis, elevating marginalized voices, and mobilizing global movements for racial justice and gender equality. However, in the current attention economy, inaccurate and hateful content designed to polarize users and provoke strong emotions is often the content that generates the most engagement. The fear, and too often the reality, is that algorithms inadvertently encourage and amplify mis- and disinformation and hate speech.

We cannot create an environment where our tools work harder for the malicious actors than for the people they seek to serve. This demands a new way of building and leveraging tools to combat information integrity issues.

Companies that want to future-proof themselves are realizing that prioritizing the impacts of their creations from the outset is simply good business. One area of impact, however, remains largely unaddressed, and it is becoming the most urgent of all, with the most significant liabilities: the impact of designed media systems on our information environment.

Information as a design process

We no longer arrive at a media source and simply consume it, as we did in the days of print or even television. Our relationship to information is now mediated by a middleman: the majority of internet users get their information through social media platforms and media apps. These platforms are complex, designed environments where the information itself is secondary to the systems of engagement around it.

Although the explosive success of platforms demonstrates a responsiveness to a market need, it is now evident the current model is not sustainable if we want to have a healthy information environment. The problem it introduces has become obvious: quantity of engagement is too often inversely related to quality of information. When what you have to say or share is less important than the ability to get a reaction, the noise almost always drowns out the signal. The results have been nothing short of destructive for society, and have become major liabilities for the platforms themselves, as well as affiliated businesses, leading to regulatory pressure, brand pressure, and mass user disillusionment.

What we often forget, however, is that these unintended, often divisive outcomes are largely the result of upstream design and process decisions. Allowing anonymous users to go viral is a design decision; providing zero in-line context about a user’s previous activity — whether they are a brand-new user or a high-volume spammer, whether they frequently break community rules, or whether they show zero balance in the sources they share — is a design decision; providing zero real-time feedback on the divisiveness of a post a user is about to publish — especially in the age of generative AI — is a design decision.

In just the same way that we would consider a poorly secured password flow a negligent cybersecurity decision, we should begin to view effortlessly toxic information environments through a similar lens. Just as with other “by design” movements, we can rely on shared definitions to understand the standards of this new movement.

In the context of democratic discourse, which is the foundation of any healthy society, information integrity takes on and goes beyond the traditional definition that originates in information security. In this new context, information integrity refers to:

  • Accuracy: correct or precise information, including fact-checking efforts and disinformation monitoring.

  • Consistency: steady access, lack of censorship.

  • Reliability: enabling sources of information that are reliable, independent, and transparent.

  • Fidelity: exactness with which information is copied, and understood by others as originally intended.

  • Safety: unlikely to be exposed to danger or injury; including digital safety and cybersecurity.

  • Transparency: the quality of work being done in an open way without secrets.

When these factors are considered in the design process rather than as a reaction to bad behavior or unintended confusion, and an organization's position relative to each factor is made public through proactive reporting, there is a better opportunity for positive, sustainable results. Prioritizing “information integrity by design” doesn’t mean systems won’t be attacked or exploited, but it does mean the skill level required to perpetrate a successful attack or confuse the information space will increase.


Visit Tech Policy Press to read the entire article.
