A new research tool from Harvard University has laid bare the increasingly onerous nature of social media terms and conditions, revealing that these documents are not only harder to read but often strip users of their right to take platforms to court. The findings come as European capitals from Paris to Lisbon grapple with how to regulate the digital giants.
The so-called Transparency Hub, developed by Harvard's Berkman Klein Center, archives over 20,000 legal documents and tracks the terms of more than 300 platforms, including TikTok and Instagram. Its goal, according to Jonathan Zittrain, professor of international law at Harvard, is to make it easier for people to understand where their data is going and what rights they retain.
Readability in Decline
One of the hub's starkest findings is that privacy policies have become markedly less comprehensible over the past decade. Using the Flesch-Kincaid Grade Level readability metric, researchers analyzed documents from 2016 to 2025 and found that roughly 86 percent now demand a college-level reading ability. This trend makes it harder for ordinary users—especially younger ones—to grasp what they are agreeing to.
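The Flesch-Kincaid Grade Level the researchers relied on is a simple formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59, where the result approximates the U.S. school grade needed to follow the text. A minimal sketch of the computation, using a naive vowel-group heuristic for syllable counting (real readability tools use dictionary-based syllable counts, and the sample sentences here are invented for illustration):

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels as syllables.
    # Production tools use pronunciation dictionaries instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    Scores around 13 and above correspond to college-level reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)


simple = "We use your data. You agree to this."
legalese = ("Notwithstanding any contrary provision herein, the licensee "
            "irrevocably relinquishes all entitlement to equitable remediation.")
print(round(flesch_kincaid_grade(simple), 1),
      round(flesch_kincaid_grade(legalese), 1))
```

Long sentences packed with polysyllabic legal vocabulary drive the score sharply upward, which is exactly the pattern the Harvard researchers measured across a decade of privacy policies.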
The timing is significant. Across Europe, countries such as France, Portugal, Spain, and Denmark are debating restrictions on social media to protect minors from harmful use. The European Union itself is developing an age verification tool, as member states enact social media bans for younger users. Yet if the fine print remains impenetrable, even well-intentioned regulations may fall short.
Arbitration Over Justice
Another emerging pattern documented by the hub is a quiet shift away from public courts. Kevin Wrenn, a researcher at Boston University who used the Transparency Hub, noted that many platforms now require users to settle disputes through arbitration—a private process in which a neutral third party issues a binding decision. In most cases, Wrenn said, the companies themselves select the arbitrators, effectively removing the user's right to sue in a public forum.
This trend is particularly pronounced among AI platforms. Current terms for AI companies such as Anthropic and Perplexity include clauses that prohibit users from participating in class action lawsuits. Instead, individuals who suffer damages must bring legal action on their own, rather than joining forces with others before a judge or jury. Perplexity does allow users to opt out of these restrictions by sending a written notice to a support email within 30 days of first using the service—a narrow window that many may miss.
Such clauses have implications for European users, who might otherwise rely on collective redress mechanisms available under EU law. The shift to arbitration could undermine consumer protections that have been carefully built up over decades, from Brussels to Berlin.
As European policymakers push for greater transparency and accountability in the digital sphere, the Harvard findings serve as a reminder that the devil is in the details—details that most users never read. The Transparency Hub offers a way to track these changes over time, but it remains to be seen whether regulators will use that knowledge to rein in the fine print.