By Ashley Boyd

Ms. Boyd is the vice president of advocacy at the nonprofit Mozilla.

© 2021 The New York Times

TikTok made a big announcement last year. The company would open a Transparency and Accountability Center, giving the public a rare glimpse into how it works, including its algorithm. These A.I.-driven systems are usually black boxes, but TikTok was committed to “leading the way when it comes to being transparent,” it said, providing insight into how and why the algorithm recommends content to users.

The announcement sought to position TikTok as an outlier among peers — the rare tech platform that’s responsible and nontoxic. Facebook, Twitter and YouTube long ago lost the battle for public opinion, facing ire from consumers and lawmakers about A.I. systems that misinform, radicalize and polarize. But as a newer platform, TikTok has the potential to stake out a rosier reputation, even amid negative press about its privacy practices and connection to China.

Despite its posture as a transparent, trustworthy platform, however, TikTok suffers from some of the same afflictions as its peers do. In June, Mozilla reported that political ads, banned on TikTok, are stealthily infiltrating the platform and masquerading as organic content. It took a team of my colleagues conducting in-depth research with technical tools to expose this.

To its credit, TikTok has since spoken with my colleagues and taken steps to address this problem and provide transparency into who is paying for influence on the app. But big questions remain, like: Will the quest for transparency always be a game of cat and mouse between major tech platforms and underresourced, independent researchers? And if an imperfect TikTok is one of the more transparent platforms, what does that say about the state of trust and consumer agency online?

TikTok is not the only platform struggling to make meaningful transparency a reality. Without clear laws or norms to separate “meaningful” from “superficial” transparency, tech executives continually fail to follow through on voluntary public commitments. The result is a series of superficial transparency initiatives that achieve little or disappear quickly.

Consider Facebook’s Ad Library from 2018: After years of political actors’ abusing its ads platform, Facebook pledged to release a public archive of ads. But the library failed to meet most of the requests of the researchers Mozilla contacted. The tool was riddled with bugs, was missing vital information and had restrictive search limits, and Facebook didn’t engage with our suggested improvements.

More recently, after pressure from certain executives at the company, Facebook partly dismantled the team behind CrowdTangle, a tool that provides transparency into which public page posts on the platform receive the most engagement. Brian Boland, a former Facebook vice president and an internal advocate who pushed for more transparency during his time at the company, told The New York Times that Facebook “doesn’t want to make the data available for others to do the hard work and hold them accountable.” (A Facebook spokesperson said that the company prioritizes transparency and that the purpose of the reorganization of CrowdTangle was to better integrate it into the product team focused on transparency.)

And just last week, Facebook effectively shut down N.Y.U.’s Ad Observatory project, an initiative by third-party researchers that sought greater transparency into Facebook’s ad targeting. (Facebook said the researchers were violating the company’s terms of service.)

YouTube is also guilty of providing a fuzzy picture about its platform. For years, YouTube’s recommendation algorithm has amplified harmful content like health misinformation and political lies. Indeed, Mozilla published research in July that found that YouTube’s algorithm actively recommends content that violates its very own community guidelines. (A YouTube spokesperson said that the company is exploring new ways for outside researchers to study the company’s systems and that its public data shows that “consumption of harmful content coming from our recommendation systems is significantly below 1 percent.”)

Meanwhile, YouTube touts its transparency efforts, saying in 2019 that it “launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation,” which resulted in “a 70 percent average drop in watch time of this content coming from nonsubscribed recommendations in the United States.” However, without any way to verify these statistics, users have no real transparency.

Just as polluters green-wash their products by bedecking their packaging with green imagery, major tech platforms are opting for style, not substance.

Platforms like Facebook, YouTube and TikTok have good reasons to withhold more complete forms of transparency. More and more internet platforms are relying on A.I. systems to recommend and curate content. And it’s clear that these systems can have negative consequences, like misinforming voters, radicalizing the vulnerable and polarizing large portions of the country. Mozilla’s YouTube research proves this. And we’re not alone: The Anti-Defamation League, The Washington Post, The New York Times and The Wall Street Journal have come to similar conclusions.

The dark side of A.I. systems may be harmful to users, but those systems are a gold mine for platforms. Rabbit holes and outrageous content keep users watching, and thus consuming advertising. By allowing researchers and lawmakers to poke around in the systems, these companies are starting down the path toward regulations and public pressure for more trustworthy — but potentially less lucrative — A.I. The platforms are also opening themselves up to fierce criticism; the problem most likely goes deeper than we know. After all, the investigations so far have been based on limited data sets.

As tech companies master fake transparency, regulators and civil society at large must not fall for it. We need to call out style masquerading as substance. And then we need to go one step further. We need to outline what real transparency looks like, and demand it.

What does real transparency look like? First, it should apply to parts of the internet ecosystem that most affect consumers, like A.I.-powered ads and recommendations. In the case of political advertising, platforms should meet researchers’ baseline requests by introducing databases with all relevant information that are easy to search and navigate. In the case of recommendation algorithms, platforms should share crucial data like which videos are being recommended and why, and also build recommendation simulation tools for researchers.

Transparency must also be designed to benefit everyday users, not just researchers. People should be able to easily identify why specific content is being recommended to them or who paid for that political ad in their feed.

To achieve all this, we must enforce existing regulations, introduce new laws and mobilize a vocal consumer base. This year, the Federal Trade Commission signaled its authority and intention to continue to oversee the potential bias of A.I. systems in use. The Government Accountability Office has outlined what A.I. audits and third-party assessments might look like in practice. And Congress’s bipartisan interest in reining in major tech companies has begun to focus on transparency in some important ways: The Honest Ads Act, which was introduced in previous Congresses, would make online political ads as transparent as their TV and radio counterparts.

Meanwhile, consumers should ask companies whether and how products use A.I. technology. Why? Consumer expectations can push companies to voluntarily adopt transparency reporting and features. The increased uptake of encryption over the past several years is a good analogy. Once obscure, end-to-end encryption is now the reason consumers flock to messaging platforms like iMessage and Signal. And this trend has pushed other platforms, like Zoom, to work to adopt the feature.

As Big Tech companies exert ever more influence over our individual and collective lives, visibility into what they are doing and how they operate is more important than ever. We can’t afford to let transparency become a meaningless tagline — it’s one of the few levers for change in the public interest that we have left.
