DWeb Digest

A Publication Exploring DWeb Ideas and Principles

[Cover image: a dimly lit print shop interior with printing machines, colorful supply bins, digital control panels, and a neon-lit "PRINT SHOP" sign.]
Issue 1 | May 2024

DWeb Digest: Inaugural Edition

Article 1

Editor’s Letter

I am thrilled to have had a chance to edit and shape this special magazine on decentralization for the Filecoin Foundation for the Decentralized Web. For many of us, increasing decentralization has always been an important focus and goal, but it hasn't always been clear to others why decentralization matters so much, or even what should be decentralized. The collection of articles in this magazine will help provide some perspective on these questions.

It kicks off with Danny O'Brien's exploration of "terminal values" and specifically the importance of "cognitive liberty." While this may not be where people expect a magazine on decentralization to start, I think it's the perfect (and somewhat inspiring) framing for putting everything that follows in perspective. This is followed by a very practical and very important example of this in play, as Adam Rose and Basile Simon from the Starling Lab explain how they've been able to document war crimes evidence via the blockchain. Mai Ishikawa Sutton's thoughtful article charts a useful path forward by comparing the two traditional "camps" in the decentralized web world, "Web3" and "DWeb," and discussing how each can learn from the other and collaborate to build a better world. Holmes Wilson follows that by tackling an important subject about decentralized services, and how there may be too much (sometimes hidden) centralization in today's open source world, limiting our current ability to control our own work and data. This fits well with Farzaneh Badiei's piece on other areas with "hidden" centralization, and smaller steps we can take toward making the world more decentralized.

Chris Riley's article builds on those to highlight another element often lost in the discussion of decentralization: how the data itself flows, rather than just where and how it's stored, and the importance of data portability and data transfer. His piece also touches on some of the policy and legal issues, which works nicely with pieces from Kurt Opsahl, reminding us how code is expression and needs to be free if the dream of the decentralized web is to be realized, and Kristin Smith, who analyzes the policy ecosystem around cryptocurrency — and whether it will enable innovation and freedom, or be co-opted by authoritarian governments. Naomi Brockwell puts in very real terms why everyone should be thinking about how decentralized data, which we can control ourselves, is a necessary element for protecting our own privacy. That fits well with Cory Doctorow's closing piece, which explores how the control large centralized platforms have over configuration settings has robbed us of autonomy, while making the internet less than it could and should be. And, finally, I have my own piece in this collection, exploring a framework for thinking about where and how decentralization makes sense, and can be transformative, both on the internet and in many other industries as well.

This is an incredibly important topic, which will have a major impact on society moving forward. The ideas presented in this magazine are important in understanding what kind of better world is well within our grasp, if we can just take the steps to get there. I want to thank the Filecoin Foundation for the Decentralized Web for asking me to be a part of this project, and all of the authors for their thought-provoking articles.
And, most of all, I’d like to thank you, reading this now, for exploring these ideas and hopefully building on them to create a better, more decentralized future.

Mike Masnick
Article 2

Terminal Values: Cognitive Liberty

If you build and market new technologies to a global audience, you may occasionally reflect on how the use of those technologies aligns with your personal values and our collective human rights. Or you may not. Be warned, however: avoiding the topic entirely can lead to some uncomfortable situations down the line.

As an activist at the Committee to Protect Journalists and the Electronic Frontier Foundation, my job in the 2010s was to make sure those uncomfortable situations happened as early as possible in the product lifecycle. I was a sort of traveling conscience salesman, knocking on the door of shiny new start-ups like Tumblr, or fast-moving, thing-breaking giants like Facebook, and then shoving my foot in the door as I brandished my credentials. I learned that people in those companies mostly thought that human rights violations were something that happened far away, so I would sit with their development teams and ask if they knew how incredibly popular they were in another country – like Tunisia, say, or the Philippines. This made them happy. Then I'd describe the kind of struggles human rights activists had in that place. They would look sad. Finally, I'd note some misfeature of their tooling that those activists had told me was screwing them over: how the site's login page was entirely unencrypted, say, and was being intercepted by the government or other malicious actors holed up in that country's infrastructure. That would usually make the team avoid eye contact with me entirely, but hopefully they would go back to their desks after my brown-bag talk and fix something. Anything. (The same misuses, vulnerabilities, and exploitations were happening under their noses in the United States, where they lived, but it would take a few more years before they would believe that.)

Since then, thanks to smarter activists than me from around the world, and more assiduous technologists at those companies, matters have improved. Your passwords are, I hope, encrypted in transit and at rest. Companies hear directly from those affected by their decisions around the world, as well as in their own home country. There is a far richer conversation across society on the ethical deployment of digital technology. But the reflections and doubts we struggle with have grown more complex, more dialectical even.

The psychologist Milton Rokeach contrasted the deeper goals of culture, which he termed "terminal values," with the methods we use to implement and maintain them, which were "instrumental values." In those more naive times, the human rights I would tout were blunt and absolute: defend free speech, protect privacy. Now, even digital rights activists collectively wonder: are those really our terminal values? Or do we ask these big tech companies to do these things in the pursuit of wider, more fundamental values? Perhaps we don't want our technology to be an engine of unbounded free expression and unstoppable privacy. Perhaps we hold those values contingent on their capability to help us achieve a more democratic society, or social equity or stability or prosperity or safety. After seeing up close what a poor job those tech giants have done of defending frankly any set of consistent values, fundamental or not, I've turned to a new job at the Filecoin Foundation for the Decentralized Web, where I work to bolster the ability of Internet users to create and use decentralized alternatives to those weary tech giants.
I sincerely believe decentralization can lead to better protections for the values and rights that we hold in common. But, as I foster and create and brandish these new technologies, I've found myself pausing to reflect too. Is decentralization a terminal value? If decentralizing tech — and distributing its powers more widely — fails to serve our more fundamental needs, should we fall back to those giant centralized systems, imperfect as they may be? Should we even hold back from supporting such wild new technologies, given where so many people believe the last wave of digital tech led us?

I do believe there are more fundamental digital values than speech, privacy, and decentralization. But there are not many, and they lie not so far from those needs. Rather than Rokeach's terminal values, which included "self-respect" and "inner harmony," and to which we might add such clear and pressing concerns as the fight against racism, poverty, and injustice, I think there is one fundamental terminal value that these digital rights ultimately — and intimately — defend and enhance. There's no established name that I know of for this concept that will bring it instantly to mind. The right of self-determination, from the human rights tradition, cuts close. Duke University's Nita A. Farahany's recent re-coinage of the term "cognitive liberty" in her new book "The Battle for Your Brain: Defending Your Right to Think Freely in the Age of Neurotechnology" is a brilliant framing and naming, and deserves to be widely adopted, especially given the dystopian technological applications she documents. But my route to this terminal value is a little different, and comes from an older, more optimistic tech tradition: one that still lies, sometimes deeply buried, behind the screens we use today.

The "PC" that perhaps still sits near you somewhere, if it hasn't shrunk into your laptop or your phone, has always stood for "personal computer". That name is an echo of a line of revolutionary 20th century thought: a profound ideological rebellion against the locked-down, timesharing, centralized mainframe ideologies that preceded it. The PC was always intended as a machine that augments individual abilities. That ambition has deep roots, from Vannevar Bush's 1945 essay "As We May Think" and Doug Engelbart's 1962 paper "Augmenting Human Intellect," through Ted Nelson's 1974 manifesto "Computer Lib" and Steve Jobs's 1980 "Bicycle For The Mind" campaign, to Sherry Turkle's 1984 book "The Second Self" and beyond. In this way of thinking about digital tech, the personal computer is an extension of your brain and its abilities. Its memory is there to help you remember; its processing power is there to help you think faster; its network connection is for you to reach out to others; its interfaces are there to connect more closely to you. It is yours in the same way as your hands belong to you, as your eyes, as your imagination.

Something has taken us from that tradition. The PC has inched closer to our faces, and under our skin. It has become ever more personal and intimate (do you sleep with your phone?). It has in many ways become more "user friendly." But it has also become much, much less user controlled. Its memory and processor now spend their time showing advertisements, enforcing copyright protection rules, and conducting sly surveillance of your habits, using systems that resist your ability to evade them.
That network connection is used to stream out your behavior to strangers, rather than to let you voluntarily choose with whom you communicate and about what. No matter how they ape the liberatory language of this tradition, many of us look at Neuralink or VR and see fundamentally alienating tech, controlled by others, leering into our personal space; foreign body horror rather than extensions of ourselves. Those on the cutting edge of technological adoption, like elderly or disabled people, know the profound difference between intimate tech that expands your personal autonomy and that which is limited and controlled by others. Many others who might think they have more freedom in what tech they adopt are feeling the walls close in too.

Farahany captures this growing risk in her book: of technology used to spy into your brain, or even worse, to reach in and manipulate it. We can understand from her examples that this is the minimum freedom we need if we are even to be able to address or defend or experience all those other terminal values. We can't fight injustice, we can't even see injustice, if we can't think about it. We can't see alternatives to the way we live if we do not have control of our own thoughts, our conscience, our free will. We can't revel in a free world when our minds are in shackles.

But to draw the perimeter of our freedom around our skulls is to mislay the potential alternative. Personal technology can be an extension of our mind. With personal computers of all shapes, under our conscious control, acting as our faithful agents, we can think faster, consider more options, and grow and work collectively in a more fulfilling way than was ever possible before. It's more than a goal for technology; it's the only version that leads to a positive vision of rights and values, as opposed to a slowly more closed-in, limited world. The personal technology that forms our "exocortex," as blogger Ben Houston called it, must possess the same free-wheeling, unbounded liberty as exists inside our heads.

This is why free expression and privacy are so fundamental in the world of technology, perhaps even more than in their historical context. It's not just about being able to say anything to anyone in the public square, or even keeping your messages to others private. It's about being able to speak privately to ourselves. When I talk about decentralizing tech, what I most think about is moving processing, storage, and control back toward the edge, back closer to the end-user and their control. Right now, if you wrote a note to yourself and put it on your personal computer's hard drive, you could say whatever you want, draw whatever you want. If it's in the cloud, you don't have that guarantee: even if you kept it in your own Google Drive or Dropbox, it could be found wanting, and deleted. I don't know about you, but my notes are my memory, most of the time. My search engine searches are as much me talking to myself about my worries and interests, in a way that I would rarely disclose even to my closest companion. And of course, AI conversations are becoming even more tightly linked to our explorations, our reflections, the intermediate steps of our imagination.

Ultimately, as a digital rights activist back in the 2010s, what I was doing was demeaning to me and to those I sought, clumsily, to represent.
I went on bended knee and begged the indulgence of centralized services, hoping to catch their sympathy or provoke their sense of shame, to extract a few temporary concessions that could make the tools they made a little bit more aligned with the desperate needs of their captive userbase. That's no way to defend inalienable rights. And those rights need to be more than defended: they need to expand to fit the challenges we now face. As Engelbart wrote, "[t]he complexity of the problems facing mankind is growing faster than our ability to solve them." As individuals and collectively, we need our own abilities to grow to match the challenges of the modern world.

The place where everything about human nature starts, and ends, is within our own consciousness. Personal computers give us the chance to expand that consciousness; but that means we need to expand the perimeter of our basic freedom to think. Our own consciousness cannot be rented from others, or temporarily conceded to us, with built-in police or backdoors or hidden ad men. We need to seize the means of computation, and that means ejecting all of these interlopers and relocating computation back into the personal domain we control: whether that's physically, or by using tools like encryption and zero-knowledge proofs to preserve our control when our data and processing power sit on others' hardware.

That's the pyramid of digital rights for me: a firm foundation of decentralized, user-controlled technology, giving us broader cognitive liberty, internal privacy, freedom of self-expression, and freedom of self-determination. On top of that solid ground, we can build a society that's free and fair. And then we can have the ability and freedom to self-reflect, to talk, and to plot our better shared future together, free at last in our digital environment.
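One small, concrete version of "using tools like encryption to preserve our control when our data sits on others' hardware" is to encrypt a note on a machine you control before any sync service ever sees it, so the host only stores ciphertext. The sketch below is illustrative only; it assumes the open source Python cryptography package and is not a tool named in this essay:

```python
# A minimal illustration: keep a note under your own control by encrypting it
# locally before it is ever synced to someone else's server.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The key stays on hardware you control; the cloud only ever sees ciphertext.
key = Fernet.generate_key()
f = Fernet(key)

note = "Things I am actually worried about this week...".encode("utf-8")
ciphertext = f.encrypt(note)       # safe to upload to a sync service
recovered = f.decrypt(ciphertext)  # only possible with the local key

assert recovered == note
```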

Danny O'Brien
Article 3

Creating Human Records that Stand the Test of Time

As business rivalries go, the story of Suen-nada and Ennum-Ashur was pretty routine. They both claimed ownership of valuable intellectual property, wrestled over control of accounts, and accused each other of theft. Their case went to court. Witnesses were present. Testimony was delivered and became crucial evidence. A fire burned down their Assyrian colony in 1836 B.C., but a record of their testimony survived and is now available for review at The Metropolitan Museum of Art in New York.

We know about the trial because, thousands of years ago, the world's most advanced technologists figured out how to use a crude implement to etch markings into stone and clay. This data storage breakthrough would be used to record crop yields, trades, weddings, births, deaths, wars, legends, and other data that was critical to evolving human civilization. While they are museum pieces today, at the time there was a logic to using hard materials for persistence. Even back then, this method wasn't the fastest way to record information (papyrus was invented at least a millennium before) but it would last and, most importantly, be difficult to alter. Modern technologists still find this approach important — and the efficiency tradeoffs familiar.

Indeed, the goals of documentation haven't changed much over the last few thousand years. However, as humanity's most important records are now digital, we are realizing that more than ever we need to find new ways to preserve records and ensure they are immutable. Threats to data integrity have certainly evolved. Flash drives have a short shelf life (often shorter than papyrus). Authoritarian rulers use social media to sow doubt. Even the human psyche is under attack: Is seeing still believing in the age of generative artificial intelligence? Evidence has always needed to stand the test of time and the test of scrutiny. Long-established concepts can still help, including decentralization and cryptography (which was likely around even in the days of Suen-nada and Ennum-Ashur). A related modern concept can also help: blockchains.

In the spring of 2022, Russian artillery shells ripped through the walls of several schools in Kharkiv, Ukraine. The whole region was under heavy fire from advancing Russian armed forces, but an intentional attack on civilian targets is a war crime. The United Nations specifically defines education as a human right. Turning sanctuaries of learning into a battlefield creates a vicious cycle of illiteracy and poverty. There are no statutes of limitations on war crimes. They can be — and mostly are — prosecuted decades later. That means evidence must be preserved. But in an active conflict there is risk of loss, tampering, and damage. While the direct attacks might be top of mind, digital evidence is particularly vulnerable to power grid failures and connectivity changes. If sitting on unmaintained technologies, it can also decay and degrade — think of hard drives failing over time.

Witnesses in Kharkiv shared an instinct with our ancestors from thousands of years ago: to document what happened. Their photos went onto sites like Telegram, but social media isn't the most reliable place to store critical records. Users can delete their posts or make their accounts private. A platform's CEO could take down content without any rationale. Starling Lab set out to authenticate and preserve these vulnerable assets.
Co-founded by the Stanford Department of Electrical Engineering and the USC Shoah Foundation, our team explores how web3 technologies and decentralized web principles can be applied in the fields of law, journalism, and history. We use open source tools and develop methodologies for the collection and verification of digital evidence. Our investigators made web archives of social media posts from Kharkiv, verified using state-of-the-art OSINT techniques. Decentralized storage deals preserve the collection on thousands of servers around the world, and the redundant recording of hashes and cryptographic signatures permits trustless inspection of the items and their audit log. We don't know what admissibility standards will look like when these incidents have their day in court. But we know that even an untrusted source could produce all the evidence, and prosecutors can quickly verify its authenticity by comparing it to registrations that we made on multiple blockchains.

Open source intelligence (like social media posts) may still be questioned, so Starling arranged for photographers to visit two of the schools. They used the context-rich capture app ProofMode, from the Guardian Project, to include corroborating metadata (including time, GPS coordinates, surrounding cell network, phone locale, etc.). These bundles were cryptographically sealed with the images, and their integrity proofs were registered to several blockchains for safekeeping. Starling Lab has since made a pair of submissions to the Office of the Prosecutor at the International Criminal Court, including an analysis of how these methodologies can establish the credibility of the evidence.

The challenge isn't only bringing our technology to a location, but also to a point in time. For a war crimes investigation focused on the Balkans, we authenticated photographs from original 30-year-old film slides. The same approach – using our Starling Framework of Capture/Store/Verify – has helped us to preserve testimonies of Holocaust survivors, store thousands of examples of Russian misinformation, document living conditions for the homeless in California, record promises by politicians about government surveillance, and save examples of climate change impacts in the Amazon.

Cryptography, decentralization, and blockchains are the tools we used to preserve these important records in humanity's collective memory. These projects have created immutable records to stand up against challenges from the wide-scale adoption of generative AI, sophisticated disinformation campaigns, and changing digital custody practices. Today's courts and other civic institutions must confront similar challenges that undermine trust in their own critical records. By embracing similar innovations, there's a chance for digital evidence to become as resilient as ever – the modern equivalent of being etched in stone.
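To make the verification step described above concrete: the sketch below recomputes a cryptographic fingerprint of a file (a SHA-256 hash, via Python's standard library) and compares it with the value registered at capture time. This is an illustration only, with a hypothetical file name; it is not Starling Lab's actual tooling or registration format:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# At capture time: compute the digest and register it somewhere tamper-evident
# (Starling writes such registrations to multiple public blockchains).
registered = fingerprint("school_photo.jpg")  # hypothetical file name

# At trial time, possibly years later: anyone holding a copy of the file can
# recompute the digest and compare it against the registered value.
assert fingerprint("school_photo.jpg") == registered
```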

Adam Rose & Basile Simon
Article 4

The Debate Over DWeb vs. Web3 & The Decentralized Elephant in the Room

This article started off as one that would delineate the difference between the terms "DWeb" (as in decentralized Web) and "Web3" (as in Web 3.0). It seemed like a useful exercise to tease out the communities that implicitly or explicitly support these terms. But the more I dug into it, the more I realized this wasn't that interesting in itself. The disagreement over these labels, and what is even included in the purview of each, felt like a distraction from what's actually being negotiated as people define and claim allegiance to DWeb or Web3. What matters in this discussion — in any discussion about technology, really — is who's designing it, who controls it, and who stands to benefit?

Decoding DWeb vs. Web3

Let's start with what the terms have in common: they both point to a present shift in networked technologies wherein some level of distributed ownership, control, or management is core to their operation. Those who associate with these movements champion user self-determination over the data and the rules that govern their platforms. By virtue of espousing "decentralization" as a core value, most projects that span the DWeb and Web3 ecosystems concern themselves with questions of shared ownership and governance. I'm fairly confident that how I've described these terms so far wouldn't be that contentious, but where it gets spicy is when you start to project ethical attributes onto these movements. Despite their similarities, it's undeniable that the terms now carry different meanings and serve distinct purposes; so much so that people are explicit about supporting or steering clear of projects associated with one or both of these words.

Web3: An Impending Web

Though it was popularized by Gavin Wood, co-founder of Ethereum and Polkadot, the term Web3 had already been used to describe a future semantic Web — one where all linked data is machine-readable, shareable, and reusable across the Web. Under the World Wide Web Consortium's definition of a "semantic Web," which was then synonymous with "Web 3.0," several web standards already existed that would enable this kind of "Web of data," including Extensible Markup Language (XML) and Web Ontology Language (OWL). Nowadays, though, Web3 has come to signal something more specific: protocols and platforms that involve blockchain and distributed ledger technologies, including cryptocurrencies. Based on an overview of some mainstream definitions, it has largely become a buzzy marketing term meant to signal that a project is part of a new phase in the evolution of the Web (even Twitter founder Jack Dorsey thinks it's a buzzword). The word itself points to its temporality — it's the next thing after "Web 2.0" — and is a sign of being part of an inevitable progression of the World Wide Web.

Reflecting on the term Web3, Evgeny Morozov points out that while its proponents evoke it as a revolutionary new phase of the web, they rarely (if ever) address the fundamental issues of power that made the old web toxic. He writes that many Web3 advocates are adversarial towards "Web 2.0" projects for their monopolistic control over user data. Yet despite Web3 products' core offering of enabling end-users to own their digital assets, most don't engage with the underlying political economy that fundamentally shapes the priorities and incentives of these tools.
For one, Evgeny notes that many of the VC investors who are salivating over the profitability of Web3 ventures are the same characters who were behind funding and shaping the most disastrous, centralized "Web 2.0" companies.

Taking a less critical stance, Bluesky CEO Jay Graber gives an elegant overview of the Web's phases of history – describing Web 1.0, Web 2.0, and Web3 as "the hosted web, the posted web, and the signed web" respectively. This breakdown is helpful for contextualizing the current wave of cryptographically timestamped global ledgers, aka blockchains, within a technological history of the Web. Jay's definition of Web3 is notably more expansive than what you normally hear, harkening back to its original definition of a semantic Web. She therefore includes not just blockchains, but any protocol that is "self-certifying," including older protocols like Git, PGP, and BitTorrent, as well as newer ones such as IPFS, Hypercore, and Secure Scuttlebutt (SSB). Jay is saying that it isn't exclusively new tech that holds the potential to unlock user sovereignty over data and identity, but that we can look to older protocols as well. While hers is a provocative and tidy definition, I'd be hard pressed to say that anyone else would include these other non-blockchain protocols under the umbrella of Web3.

What is telling, though, is that it's a purely technical definition, without any mention of the organizational or economic issues that plagued "Web 2.0." As Evgeny would point out, what's the narrative that this is playing into? Are the problems that plague the Web purely technical? If sufficiently decentralized, will these technologies fix all that ails us? There's a major oversight when one focuses solely on the technical affordances that were available at each stage of the Web, and points to them as fatal flaws. It strongly implies that all we need to do for this new Web to work in the long run is to just code our way to the right systems. Instead, the true issue at hand is: How do we architect those systems to reflect the complexities of diverse human interaction and need in distributed, digitally-mediated contexts?

[Figure: a Google Trends comparison of searches for Web3 (blue) and DWeb (red), captured April 30, 2023.]

DWeb: An Evolving Web

While "DWeb" is still one of the terms commonly used in this space, it's not used nearly as heavily as a marketing buzzword. For many, it's become more of a general adjective than a word describing a movement or a major shift in the evolution of the Web. One of the loudest advocates of this term is the Internet Archive, which has been hosting events and discussions on the Decentralized Web since 2016. Under the purview of its DWeb organizing work, the Internet Archive has included any kind of technological project that is decentralized across the technical stack — from community networks, federated social media platforms, and peer-to-peer protocols to even older, tried-and-tested protocols such as email and BitTorrent. Since 2020, the Internet Archive has doubled down on the values-driven core of its work through the DWeb Principles.

[Image: screenshot of a tweet by Nathan Schneider.]

I should disclose that I have been a core member of the Internet Archive's DWeb Projects team for the last few years.
As part of this work, through collaboration with several dozen stakeholders, I co-stewarded the process to define the five overarching principles: Technology for Human Agency, Distributed Benefits, Mutual Respect, Humanity, and Ecological Awareness. Our aim was to put a stake in the ground and affirm the values of those building alternative network infrastructure. Instead of merely being not centralized, we wanted to define what it was that we stood for.

Though some have pointed out that it sounds too much like dweeb, I find "DWeb" to be an incredibly useful umbrella to organize under. It seems to attract people who are interested in building a new Web (and many Webs) not only for the sake of profit, but also for the sake of addressing concrete challenges, especially those faced by the most marginalized communities. And while people do cite the ways the Web used to be more decentralized, the term is temporally ambiguous. It doesn't have the baggage of seeming like a new phase, nor is it under threat of having an expiration date. This creates room for the movement to evolve as we gain more allies and build a network of solidarity. Calling it a DWeb reminds us to continue wrestling with the question of what it is we're decentralizing.

Decentralization as Praxis

Too many self-defined Web3 projects — at least those that involve tokenomics — have been harmful. They're either scams, burn a grotesque amount of energy (particularly those based on Proof-of-Work), or require tons of hardware (leading to immense waste). Even still, several solidarity-minded people have written about the potential of cryptographic protocols to power new institutions when the current ones are failing us. Emmi Bevensee wrote a thoughtful, nuanced piece about the glimmer of hope that blockchain projects bring to redesigning social systems. Nathan Schneider explicitly says, "Crypto is a tool for designing institutions," and has been advocating for crypto projects to embed good policy into their code. And Alice Yuan Zhang urges us to seriously engage with the failures of these tools and ask who and what we are decentralizing:

Decentralization as praxis is rooted in direct action, striving to abolish capitalistic economics and supply chains which encode mass oppression into large-scale systems with many actors and minimal accountability.

Crypto-based protocols contain the potential for large-scale coordination that is unprecedented, including the ability to collectively monetize and compensate people for their labor. Which is all the more reason why we can't let such powerful tools for mass coordination become monopolized by those who merely want to amass wealth or control. We need to contend with the fact that it's just not feasible to expect ourselves to effectively tackle systemic challenges based on goodwill and volunteerism. Values-first, free and open source, and peer-to-peer projects struggle financially. They often don't compensate their contributors enough, if at all. Too many of them fizzle as they try to raise money through grants or crowdfunding efforts, while competing against similar projects that are injected with venture capital and whose sole fiduciary purpose is to make a profit for shareholders. So if I were to oversimplify the ideologies of DWeb and Web3 as "justice-oriented" and "profit-oriented," respectively, I would say that these movements have an incredible amount to learn from each other.
Though Web3 is aligned with venture capital, there's a staggering amount of thinking and experimentation happening around governance in this space. This is an outcome of decentralization being a core facet of these projects. Whatever their motivation, blockchain projects have unleashed people's imagination about what's possible when people own and control things together. When money is explicitly part of the equation, they're able to think more pragmatically about how to direct those resources and make things happen.

One project that's able to straddle both worlds is Open Collective, a software company that provides tools for groups and communities to manage their finances transparently and consentfully. Open Collective has remained purpose-driven, even as they've taken venture capital to grow the project. Now, they're exploring how to "exit to community" – to shift from a privately-owned company to one owned by its community of stakeholders. Despite the fact that their tooling isn't reliant on a blockchain to coordinate funds between people, Open Collective is arguably one of the most successful platforms for decentralization that exist. That's because they've remained committed to focusing their energy on decentralizing the thing that matters most: power.

Building new and shinier tools out of the same political and economic conditions will do nothing to fundamentally change the world. But "decentralization" is also not a value in itself, and it's not enough to build the kinds of technological networks we need to confront intensifying global crises. People need to continue to discuss, shape, and re-shape collective values. With those values held at their core, communities with common interests can collectively coordinate and embody them, first and foremost through how they govern themselves and treat one another. After all, the current internet is a reflection of shared values; the question is how we embed justice and mutual care into the Webs to come. Instead of expecting some technology to save us, we need to organize and save ourselves.

Mai Ishikawa Sutton
Article 5

Free Software’s Paradox: Losing While Winning and the Need for Decentralization

In today's digital landscape, free and open source software has become more ubiquitous than ever. Unfortunately, at the same time, it appears extraordinarily far from achieving its stated goals of giving users access to and control over code. This paradox is rooted in the fact that while free software has democratized access to "code at rest" and enabled efficient collaboration between software makers, the operational challenges of running code are now playing the same role restrictive licenses once did — creating lock-in and barriers to entry. Cloud companies such as AWS may not control the code for technology like Kubernetes, but they do control the ability to use it in practice at scale. To overcome this operations barrier and achieve its goals, free software must begin to make the operational power to use the code as accessible as the code itself. The path forward looks very different for enterprise software and end users, but for both, the decentralization of running code plays a vital role. Free software won't fulfill its goals until we can decentralize the role of the server.

Where Free and Open Source Software Began

In the beginning of the age of the personal computer, it was proprietary licenses that controlled access to code and software development tools, but as software grew in complexity that changed. Software began "eating the world," and to address the complexity of real-world use cases and planetary scale, software had to increase in complexity, and manage that complexity through increased efficiency. Free and open source software, whose aim is to ensure users' freedom, control, and sovereignty over code, emerged victorious as a tool for efficiency. Specifically, free and open source software let software projects collaborate globally on 90-95% of their codebase — sharing cost and risk — while focusing precious in-house engineering efforts on the 5-10% of their codebase essential to their core product and value proposition. Today, you will find almost no modern software products made purely from proprietary code. In terms of adoption and of democratizing access to software development, free software succeeded beyond its wildest dreams.

However, in terms of bringing users sovereignty there was a problem: the same increase in the complexity of software stacks and scaling requirements that compelled software companies to adopt free and open source software also created a dramatic increase in the complexity of deploying software. By the mid-2000s, GNU/Linux and the web stack had emerged as an off-the-shelf way to quickly deploy software on the web, to the world. But as complexity increased, deploying software on a GNU/Linux stack at scale became much more difficult, and companies like Amazon Web Services (AWS) entered the market, competing on operational excellence. Services like AWS and Heroku could ensure your company's basic Linux-based services were running, so your team could focus on code and product. In terms of technological sovereignty, this was a shift backward to the era of mainframes, where all code ran on hardware controlled by a few large providers. Dependence on cloud services for operations created a fundamental shift in the degree to which free software could succeed at its aim of giving users control and sovereignty. Free and open source licenses for code would no longer be enough.
Where Free and Open Source Software is Now

Ten years later, the ability to deploy free software-based apps to actual users is (for practical purposes, in most cases) entirely dependent on cloud services, because the knowledge, experience, and practices for how to do so are locked up within these organizations. Cloud services such as AWS, Microsoft Azure, and Google Cloud enjoy such significant market share not because they control the code for the technologies they deploy (tools such as Linux, Postgres, and Kubernetes are free and open source software) but because they control the ability to use it at scale. Most engineers don't even learn these tools directly anymore; they simply learn to turn to a cloud services company, purchase the correct service, and make it work for their needs. The power to operationalize code is creating the same lock-in and barriers to entry that software licenses once did.

Unraveling this problem will require distributing operational power with the free and open source code itself. This will require a mix of practices, from straightforward approaches like "GitOps" (including DevOps tools and scripts in one's Git repository) to more qualitative work like identifying and eliminating any barriers to becoming proficient in operationalizing free software tools. This shift will be essential in breaking free from the control of cloud companies and ensuring that organizations have the capacity to fully access and leverage free software.

The Solution for End Users: Decentralization

While solving the problem of distributing operational power is a complex undertaking for enterprise users, for end users it is much simpler: software must not require servers at all. Most users don't have servers, so any software that requires a server will inherently rob users of the ability to operationalize the code and achieve sovereignty. After all, if free software code requires something most users do not possess, users cannot operationalize that code; it will remain a mere proof of concept until some company operationalizes it, creating a relationship where users depend on that company. Almost no one runs their own email server despite abundant free software code; instead they use Gmail and depend on Google for that. Consequently, the key to solving this problem for end users lies not in GitOps or documentation but in shipping pure peer-to-peer applications that do not require any server or the knowledge of how to use one. This is not as difficult as it seems, and it has become much less difficult given recent improvements in the state of the art. The world has two decades of filesharing tools like BitTorrent, and over a decade of Bitcoin clients. Both are excellent examples of peer-to-peer applications that connect users without servers, delivering both code and the power to operationalize that code in a single package. My own project, Quiet, uses tools like libp2p and IPFS (developed for blockchain networks like Ethereum and Filecoin) to build a no-server-required alternative to team chat apps like Slack and Discord. Building fully peer-to-peer applications used to be seen as an unrealistic pipe dream, but no longer.

Decentralization: Where Free and Open Source Software Must Go

Decentralization can play a pivotal role in democratizing the operationalizing of software, both in the enterprise setting and for end users.
Although decentralization is not a silver bullet, decentralized tools like Ethereum and Filecoin combine code and operations in sophisticated, incentivized networks built on open-source code. These networks offer the potential to create a more level playing field in the world of software, allowing individuals and organizations to harness the full potential of free software. For example, Ethereum's smart contracts and decentralized applications (dApps) provide a framework for creating applications that are not dependent on a central authority. This enables businesses and end users to build and utilize software that is both transparent and secure. Similarly, Filecoin's decentralized storage platform allows users to store and retrieve data without relying on centralized services, giving users more control over their information.

By embracing decentralization, the free software movement can evolve to overcome the paradox of losing while winning. By incorporating the operational power to use the code within the code itself, free software will empower enterprises and end users to take full advantage of the code they have access to. Decentralized networks and platforms, such as Ethereum and Filecoin, can help usher in this new era of democratized software, breaking down barriers to entry and ensuring that everyone can benefit from the innovation and collaboration that free software enables. Peer-to-peer applications like my own project Quiet, built on similar building blocks but without the need for global blockchain networks or money, bring these tools to communication and social media.

Conclusion

The paradox of free software is a pressing issue that must be addressed if we are to fully realize the potential of open-source technology. By shifting the focus towards incorporating operational power within the code, and embracing decentralization as a means to democratize the operationalizing of software, the free software movement can overcome the current challenges it faces. This new era of free software will not only empower enterprises and end users, but will also foster a more open, inclusive, and innovative digital landscape for all. Decentralization is the next logical step for free software.
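To make the idea of shipping code together with the power to run it a little more concrete, here is a minimal, illustrative sketch of content addressing, one of the building blocks (IPFS) mentioned above: data is stored and retrieved by what it is rather than by which server holds it. It assumes a local Kubo (IPFS) daemon listening on its default RPC port and the Python requests package; it is not code from Quiet or any production system:

```python
import requests

API = "http://127.0.0.1:5001/api/v0"  # assumes a local Kubo (IPFS) daemon is running

# Add a file: the returned CID is derived from the content itself, so any node
# that has the data can serve it, and anyone can verify what they received.
resp = requests.post(f"{API}/add",
                     files={"file": ("note.txt", b"hello, distributed web")})
cid = resp.json()["Hash"]
print("content id:", cid)

# Read it back by content id rather than by server name.
data = requests.post(f"{API}/cat", params={"arg": cid})
print(data.text)
```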

Holmes Wilson
Article 6

Navigating Crypto Policy Around the Globe

In the wake of sustained market turmoil in the crypto ecosystem, federal regulators in the United States and relevant authorities abroad are reassessing the fundamental question: how much should we integrate or corral this emerging technology?  Much of the recent focus, at least since the collapse of FTX, has been on containment and often punitive action against the broader crypto ecosystem. In the United States, banking and financial markets regulators have clamped down on crypto through numerous enforcement actions, while the first evidence of a developer brain drain from the country is starting to materialize. In the European Union, the picture is also mixed, but bloc-wide legislation may create a new haven for crypto development. Authoritarian governments such as China have launched centrally-controlled competitors to crypto and attempted to outlaw decentralized infrastructure, leveraging the super-surveillance capabilities of central bank digital currencies (CBDC) to control and suppress their populations. These three divergent paths offer different visions for crypto’s global growth and the fundamental promises of decentralization. In the United States, the tension around supporting the growth of decentralized, peer-to-peer consumer services lies at an intersection between the country’s historical support for emerging technology and the need to preserve both government control, broadly, and the American financial system’s global power. Take the debate over whether it would be prudent and power-enhancing for the United States to develop and deploy a CBDC. Other countries, such as China with its digital yuan, have already developed and launched such technology, and while it may lower the barriers some consumers face when navigating the marketplace for goods and services, this technology has enhanced the power of that country’s surveillance capabilities to a remarkable degree.  An alternative option, one that might both support the U.S.-led global financial system and show that America remains committed to free markets and free enterprise, is developing a regulatory regime that would allow a dollar-denominated stablecoin to flourish. This regulatory open-mindedness would enhance U.S. soft power and global influence, particularly as other countries look to the United States for leadership and guidance on emerging technologies. Making the case to federal regulators and lawmakers that crypto technology can be a benefit to the United States, rather than a challenge to its soundness, is the fundamental challenge in promoting crypto policy in the country. The European Union has taken a more comprehensive perspective, pioneering the MiCA legislation, which creates a licensing regime for digital asset businesses. While the legislation places new requirements on crypto businesses, the level of comprehensiveness is a welcome sign that, at the very least, the EU has accepted that crypto is here for good. The MiCA legislation does not offer a complete embrace of decentralization’s core properties — far from it. It does, however, offer an implicit recommendation: Web3 is big enough and permanent enough that we feel the need to regulate it. Ironically, the EU may become a more attractive jurisdiction than the United States for decentralized technologies to flourish. The less-than-democratic governments of the world offer a far bleaker vision of the future of decentralization. 
Some authoritarian governments, seeking ever-greater control over their societies, have embraced the surveillance-enhancing aspects of CBDCs, restricting their citizens' financial freedom and personal liberty. In systems where a full CBDC is implemented, governments could financially exclude individuals or entire groups of people with the press of a button, leaving them with nothing. The recordability of open blockchain systems, a pioneering aspect of true peer-to-peer systems, would be deployed to eradicate rather than enhance a population’s trust and financial future. Needless to say, these governments view true decentralization as a threat to centralized state control. In China, and in other like-minded regimes, Web3 is akin to the introduction of the internet itself: a new technology that threatens to empower the average citizen. So where is the sweet spot for the future of Web3? It may be in the EU, where recently passed legislation outlines new rules but also provides a workable pathway for compliance and growth. But it could yet still be the United States, where recent punitive actions may be a short term spasm as the government looks to wrest the narrative of innovation from the cryptocurrency community and put the economy back in “safe hands.” Wherever the future of the cryptocurrency community lies, we will get there because those jurisdictions recognized some core ideas: trust in our institutions of government and business is broken, and we lack a coordinated strategy for the development of new digital-forward solutions to some of our core problems, including digital identity, accessible financial services, individual data sovereignty, privacy architecture, and better cybersecurity. Embracing the benefits of decentralization does not mean that a government has to relinquish control or drop any of its core competencies. It does mean, however, that leveraging those benefits can lead to a more open, more equitable, and more trusting community, worthy of the best visions for a 21st century society.

Kristin Smith
Article 7

Decentralization In All the Things

We're at an inflection point in the way we view society. We've been locked into industrial age views in an increasingly digital age. The economic and industrial policies of today are still tied to a world that existed over a century ago, and there are so many ways in which we can and should rethink them. This goes way beyond just planning for an ever more digital world: it means taking the lessons of what a digital world has taught us — including upending some antiquated thinking about scarcity — and applying them much more broadly to society. We are at a unique moment when we can re-envision an open society that works for everyone.

So much of our thinking about today's world is based on a mental model that effectively craves centralization. We're working off of a model that focuses on efficiency and profit maximization, one that automatically pushes towards centralization and what is, in effect, a dictatorial (benevolent or not) view of how society should be structured. As such, it should not be particularly surprising that we see vast consolidation and diminishing competition in the corporate world, or growing illiberalism and authoritarian control in the political world. Our own societal structures have demanded it, and those same structures make it feel as if there are few ways to alter the overall path, but that's mainly because we're viewing the issue through a very narrow prism.

Centralization has some benefits. It can lead to greater coordination and efficiency. It creates a much clearer chain of command and control. However, it also has downsides. Greater consolidation can certainly limit (or potentially stifle) competition and innovation. And the direction a project, company, or government takes becomes dependent on an individual or a very small group of powerful people. Sometimes they may lead things in a good direction, but there is a very real risk that they make bad, societally destructive decisions. Alternatively, they might make decisions that are more focused on retaining power and control than on benefiting the public.

Decentralization also comes with a mix of positives and negatives. Smaller, more decentralized projects can be more nimble, quicker to adapt and change. The fact that lots of smaller groups are trying out ideas allows for rapid experimentation with different approaches, often leading to faster iteration and innovation, driven by competition rather than sheer power and dominance. It also distributes power to the ends, decreasing the risk of abuse of power. But decentralization has its own challenges. It often removes the economies of scale and potentially limits the ability to make the huge investments that are necessary for major leaps forward. The lack of a single central structure can often lead to significant waste and errors. Sometimes it can lead to directionless or counterproductive meandering, or wasteful and duplicative efforts that could be more successful when combined.

Often, we see the pendulum swing between more centralized and more decentralized worlds. As things become too centralized, problems like limited competition and abuse of power make themselves clear, so we break things up and hope that a more decentralized world will result. And maybe it does, for a period of time. But then the focus returns to economies of scale and efficiencies, and things recentralize.
Rather than focusing on making the world more decentralized or more centralized as a whole, this article proposes a better approach: understanding how to determine which things should be centralized and which should be decentralized, and how the two can actually complement each other, such that the benefits of each are available while the negatives are minimized.

From interstate highways to the information superhighway

A key contribution to the economic revolution that powered the American economic engine in the second half of the 20th century was the interstate highway system. While it took nearly half a century of political fighting to get it done, the economic benefit to America has been massive. The system cost approximately half a trillion dollars to build, but studies have shown that every dollar spent on the interstate highway system has returned $6. By just about any measurement, as an investment in infrastructure, it has created massive positive returns for society. The interstate highway system opened up huge new opportunities for business in a wide variety of ways, by creating core infrastructure that allowed so many other businesses to exist and build on top of it. The highway system vastly cut down the time it took to travel across the US, opening up the ability to ship goods quickly and efficiently around the country. It enabled entirely new businesses, like UPS and FedEx, to thrive. It also opened up new opportunities for state and local governments to build off of the interstate system and create local roads and opportunities for different kinds of useful economic growth.

In some ways you could view the interstate highway system as the culmination of a massive piece of centralized planning. It required the power and will of the US government to build a singular interstate system. But what's most fascinating about how it worked is that it actually allowed far more decentralized actors to make the interstate highway system useful. This lesson is important: having centralized infrastructure that is open, and on which others can build in a decentralized manner, can open up tremendous possibilities.

And we see that same pattern in the internet. In some ways the internet is an even better example than the U.S. interstate highway system, because the internet did not require a huge centralized planning system to build the infrastructure, nor is the upkeep of the internet reliant on the same centralized system. Instead, it was built and created in a distributed manner, as an open system that anyone could build on, adapt, and contribute to. As a standardized open protocol, it enabled amazing decentralized benefits. The protocol allowed anyone to build on it and experiment. And out of that grew tremendous benefits, through open innovation. A consistent, standardized protocol allowed for widespread innovation through competition, a standardized infrastructure basis on which to build, and a singular ability to communicate across the different experiments. Out of this come the best benefits of both centralization (efficiency, economies of scale, enabling infrastructure) and decentralization: distributed power, adaptive and rapid innovation, and the ability to be more nimble and responsive to opportunities. This applies in other areas of the internet as well, including its network layer infrastructure.
At certain times and in certain regions, there have been experiments with wholesale open access and local loop unbundling projects, in which the core physical infrastructure (generally a fiber-optic buildout) is available to any provider that wants to offer customer-facing Internet Service Provider (ISP) services over it. In that scenario, you avoid the inefficiencies of needing to build multiple versions of the core infrastructure, with its high capital expenditure requirements, but still enable competition. Different ISPs can innovate and compete by offering different types of services with different features, but they can do so by leveraging the same core infrastructure. Here you see the basics of this model at work: the high capital expenditure effort becomes the core infrastructure, but that infrastructure is open for experimentation, where low marginal cost services can be built atop it.

In some places, such as Ammon, Idaho, this has created a world in which changing your broadband provider means going to a portal, reviewing a page with competing ISP service packages, and clicking on the one you want. No installation is needed. No new hardware is needed. The UK has implemented a similar framework, with some limitations, in which BT effectively became the central wholesale provider for a variety of competitors. More recently, BT spun off the division handling this, Openreach, as a separate company. This has created a world in which users in the UK have access to many more competitive broadband options than users elsewhere in Europe, and speeds have generally been faster than in other European countries. There were some concerns about the shift to fiber-based broadband, but in recent years, Openreach and others have been rapidly building out fiber networks to meet demand among users. Again, this further enables the benefits of both approaches. You don't need inefficient and wasteful overbuilds of the infrastructure, but you get greater competition, innovation, and nimbleness for consumers.

Time to swing the pendulum back

The keys to making this work are fairly straightforward: core infrastructure, preferably built on an open model or owned by no one as an open protocol, creates a standardized foundation. From there, you push the power to the ends, allowing lots of people to build on that foundation, enabling competition and innovation. These days many will point to the internet and highlight that it has moved away from this ideal. While the open internet protocol exists, some of the services on top — services that many people use and rely on — have become large and centralized. In some ways, the pendulum has swung away from the original decentralized aspects of the early internet. It's become slow, large, anti-competitive, and prone to abuse. But there remain opportunities to swing the pendulum back in the other direction. There are concerns about vast centralization (one search engine dominating the market, one social network on which much of the planet relies, etc.), but it doesn't need to remain that way. There are real opportunities to build for a future in which we go back to using open protocols as core infrastructure, while enabling the power to shift out towards the ends of the network, with encouragement for competition and innovation to make things more useful. This doesn't mean there won't be large players who are more successful than others, but if they're based on an open protocol it avoids the current lock-in problems, and creates powerful incentives for better behavior.
Email is a useful example. It is based on a series of open protocols, starting with the Simple Mail Transfer Protocol (SMTP), which was widely adopted. These days, the most popular email provider is Google, with its Gmail service. Some might argue that this shows that the more decentralized model described above has failed, but the details suggest otherwise — especially when compared to a fully proprietary stack, such as social media. Yes, Gmail has a large market share, but using Gmail does not cut you off from people using other email providers like Microsoft’s Outlook, Yahoo Mail, or a privacy-focused provider like Proton Mail. While it’s not technically easy, users can host their own email as well. They can all communicate with one another, and if you are using one service and feel it’s not serving your needs — or worse, has become untrustworthy — you can export your emails, move them to a different service, and still communicate with everyone else. In contrast, if you find Facebook untrustworthy and decide to leave, you will lose out on the conversations happening there with your friends and family. That’s a centralized silo over which Facebook’s corporate parent, Meta, has full control; it can even remove you entirely.

If you look at the development of Gmail, you can see the advantages. Even though it is owned by Google, and questions have been raised elsewhere about Google’s business models and practices, Gmail itself has remained fairly benign. In the early days it did run ads, some of which were based on keyword scanning of your emails. Many people found that an intrusion into what they felt should be private messages, so Google eventually moved away from that model, likely realizing that if people felt their privacy was at risk in Gmail, they could easily move to a competing service without losing access to anyone. Facebook, on the other hand, has the power to be much more aggressive in pushing its own decisions on users, even decisions that are questionable with regard to user privacy. Yes, users may abandon the platform over the long run (which finally appears to be happening to some degree), but it’s a much slower process, and while it’s happening, users who abandon Facebook have to live without the content and communications on Facebook that their friends and family rely on.

Towards a better decentralized future

It is, then, not difficult to envision a better world built on this model. Create the core infrastructure as a base. Make it a kind of open protocol. Enable others to build on that base to leverage the power of the standardized and connected infrastructure. Allow that experimentation and competition to drive new, different, and useful innovations. We could, for example, see social networks built on this model. There are many such experiments happening today, with the most successful current one being ActivityPub, the underlying protocol of the “Fediverse” that has enabled Mastodon, a social network with no central “owner,” but rather a series of individual social networks that federate, enabling cross-communication. This model has created some interesting opportunities, as different federated “instances” experiment with different approaches, different features, and different rules. But many of them can communicate with each other. Some choose not to federate with others, and some servers block other servers. It has created a whole new ecosystem of experimentation and learning that does not involve a centralized power that can be abused.
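To make that federation concrete, here is a minimal sketch, in Python using only the standard library, of how one server finds another user’s public profile: a WebFinger lookup resolves a handle to an ActivityPub actor document that any compliant server can read. The handle and domain below are placeholders, and some servers require signed requests for these fetches, so treat this purely as an illustration of the protocol flow rather than production code.

    import json
    import urllib.parse
    import urllib.request

    def fetch_actor(handle: str) -> dict:
        """Resolve a Fediverse handle like user@example.social to its ActivityPub actor document."""
        user, domain = handle.lstrip("@").split("@", 1)

        # Step 1: a WebFinger lookup (RFC 7033) tells us where the actor document lives.
        query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
        with urllib.request.urlopen(f"https://{domain}/.well-known/webfinger?{query}") as resp:
            webfinger = json.load(resp)

        # Step 2: pick the link that points at the ActivityPub representation of the account.
        actor_url = next(
            link["href"]
            for link in webfinger.get("links", [])
            if link.get("rel") == "self" and link.get("type") == "application/activity+json"
        )

        # Step 3: fetch the actor document itself, asking for ActivityPub JSON.
        req = urllib.request.Request(actor_url, headers={"Accept": "application/activity+json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        # Placeholder handle; any federating server that exposes WebFinger should answer.
        actor = fetch_actor("someone@example.social")
        print(actor.get("preferredUsername"), actor.get("inbox"))

Because every server speaks the same small, open protocols, no central operator has to approve this lookup; that is the structural difference from a closed platform’s internal social graph.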
And that’s just one experiment. There are many more being worked on as we speak, often creating models that are even more decentralized and may prove even more interesting in the long run.

Taking it back out of the internet

Early on in this piece we used the example of the interstate highway system, and how it acts as a kind of “protocol” that enables so much above and beyond it. You have local towns and cities that built their own roads and systems around the interstate highways. You have entrepreneurs and businesses that built up around the highways as well, and those who leveraged the highways to make other things possible, like the ability to ship goods across long distances quickly and efficiently. As we look at the power of this model, it’s worth considering what else it can apply to. Already we are seeing some rethinking of financial systems (with some potential pitfalls, but also many opportunities) as more decentralized monetary systems are built on open protocols. Many of the most interesting decentralized finance applications are coming out of the global majority regions, rather than the U.S. and Europe. Projects like Umoja.money are focused on building out a payments infrastructure that can work in “the hardest to reach communities on earth.” But it can apply elsewhere as well.

Healthcare and education, these days, are often held up as industries that have too long been stymied by the old ways of doing things, resistant to change, and where prices have been driven to unfortunate levels, sometimes blocking access for those who cannot afford them. (On the healthcare side, the direct-to-consumer cost issues are limited to the few countries, like the US, that do not have universal healthcare, but even in countries that do, there are frequent complaints that the system is less innovative and responsive to customers than it could be.) Indeed, in recent months, healthcare systems in both the UK and Canada have faced difficult challenges, commonly dubbed “healthcare crises,” as the systems are strained and under-resourced, often due to still-increasing costs coupled with a shortage of healthcare workers. So, merely having universal healthcare systems does not solve the underlying challenges of modern healthcare. This model presents new ways to think about these issues. Reframing the problem could lead to a world in which healthcare is revolutionized such that treatments (which have high upfront capital expenditure, but low marginal costs for each additional unit) could become a form of “open protocol.” As advancements in rapid manufacturing technologies become common, you could envision a world in which the chemical composition of life-saving drugs could be downloaded and “printed” on a home device. The information, the “recipe” for the medicine, could be part of the open protocol, but other services could be built up around it that enable better, more equitable access to medicines, bundled with other services. There are many forms this could take. For example, what today might be considered a “life insurance” company might find it beneficial to keep its customer base healthy for much longer. Suddenly, it might not be a “life insurance” provider but a holistic health provider, with every incentive to help you stay healthy and well by suggesting healthier foods and exercise plans, and by providing access to life-saving medicine as part of its holistic offering.
This type of model can work in countries with universal healthcare as well, where the issue now is reframing the setup of the systems in a manner that maximizes health benefits while minimizing the costs that are straining those systems. Coming up with ways to make medicines and treatments more widely available—creating open protocols, recipes, and instructions—could lead to an entirely different framework, in which the resources that today are used to fund many of these things can be focused more on core research and development, rather than on the cost of individual products and offerings.

Education can be rethought in the same manner. Today, most education is, for good reason, local and distributed, which is quite useful for enabling teachers to better understand their students. But it also means that the best teachers can only reach a tiny number of students at a time. Under this same model, one can envision a merged approach in which decentralized teachers make use of the best lesson plans, lectures, teaching aids, and tools, and bring them to children around the globe. Build up the core infrastructure, the basic building blocks of education from the best teachers anywhere, and allow distributed teachers to make use of that material. You can even create a more personalized learning environment this way, perhaps by flipping the traditional model of in-class lectures and at-home “homework.” Students could watch virtual lectures at home, and class time could then be better used for individual instruction, as the teacher works with students to make sure they understand what they have learned. These are just a few examples of how we can begin to rethink so many parts of the way the world works today, combining some of the best features of more centralized systems with the power of decentralization. Keep the centralization to an open, standardized core infrastructure, and allow that to be the hub on which innovation and experimentation can occur.

Photo of Mike Masnick
Mike Masnick
Read Article
Article 8

Recognizing Code as Speech is Vital

Strong protection for code as speech is critical to the development of software around the world, especially open-source projects essential for the distributed web, which rely on many authors and contributors to build, improve, and secure a codebase. These projects’ developers often consist of a community of volunteers, contributing to the project’s collective goal of building a better tool that aligns with their dream of designing a more equitable decentralized future.   If software regulations could easily impose restrictions on the developers, it might raise the daunting prospect of liability for contributing a pull request. This concern can be acute for decentralized projects, especially those providing privacy-enhancing technologies or open forums for communication.  Tools and services that are open and available to all, rejecting the compromise of a central entity to construct a walled garden around the space, can be used in ways that raise the authorities’ concerns. Without strong protections for code, governments would be tempted to reach the disfavored uses by going after the developers. Moreover, when the restriction on code prohibits publication or export to undemocratic countries, such as under a sanctions regime, this can deprive human rights defenders and pro-democracy opposition groups of the privacy, encryption and communications tools they need to fight for their rights. Respecting code as speech enables the spread of technologies upon which fundamental rights depend. While code is a relatively new form of speech, there is a rich history of recognizing that expression comes in many forms.  Expression may be delivered as oratory from the stage, sung to music, or posted as a broadsheet on the subway walls and in tenement halls.  And now, in the information age, perhaps the most critical avenue for expression comes in the form of computer code.  Our world embodies code in almost every aspect of modern life, expanding from stand-alone computers to smart devices and chips integrated into everything you see. Our lives have moved online, with critical tools for communication, commerce, advocacy and all forms of social interaction happening on the internet, mediated by software which defines how we can or can’t reach out into the world. For advocates of decentralization, publishing code allows their expression to amplify and effectuate the dream of a distributed web, making the essential building blocks of the future they want to see. Expressive code can be human readable source code, which is like any other literary work, or the resulting executable binaries, which operationalize the expression in the underlying source. This protection afforded by the recognition of code as speech will be fundamental to allowing new and exciting ideas the breathing space to thrive, especially ideas that a government may want to suppress, or software systems that a government may want to co-opt by compelling certain code that is more to its liking. To many it may now seem obvious that code should be protected from censorship or restrictions on distribution, but it was not immediately recognized by the law. Courts first recognized code as speech at the tail end of the last millennium in Bernstein v. U.S. Dep’t of State, a case brought by cryptographer Daniel J. Bernstein that challenged restrictions on the export of cryptography from the United States. Bernstein sought to publish an academic paper and, critically, the associated source code for Snuffle, an open-source encryption system. 
However, at the time, cryptography was thought of as a dual-use military technology, and U.S. regulations barred its export as if it were a tank or military-grade avionics, and this included publishing it for free on the web. That is, unless Bernstein registered for a license as though he were an arms dealer; even then, he would likely have been denied, because Snuffle offered more cryptographic protection than the U.S. would allow. With the help of Electronic Frontier Foundation attorneys, Bernstein’s case established that code was protected speech. Judge Patel explained why the First Amendment protects code, recognizing that there was “[n]o meaningful difference between computer language, particularly high-level languages …, and German or French … Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it. ... source code is speech.” A few years later, the Sixth Circuit Court of Appeals (one of the United States’ mid-level appellate courts) agreed, observing in Junger v. Daley that code, like a written musical score, “is an expressive means for the exchange of information and ideas.” Both courts recognized that the protected expression in code was not just found in the artfulness and advocacy of its human-readable source, but also embodied in the executable ones and zeros of the object code. Speech can be both expressive and functional — indeed, the functionality is often the fundamental point of the software’s expression. While the protection of the First Amendment is critical, it is not an absolute bar to the United States government regulating software. Rather, the U.S. Constitution requires that the government show that laws which purport to regulate expressive code pass judicial “scrutiny.” The court looks at the extent to which the restriction is based on the software’s communicative content, and balances that against the government’s asserted interest. The more the American government seeks to control content and expression, the less likely the regulation will be upheld. Where software is functional, like an executable binary, this is not a bar to the protection, but rather a factor to be considered in this scrutiny — for example, whether the regulation burdening the software is narrowly tailored, using the least restrictive means to achieve a compelling state interest. The protections for code as speech are not limited to the United States, though they are most firmly established there. Freedom of expression is broadly protected under international human rights law, and the arguments for why code must also be considered a medium of expression fit well into these treaties and conventions. EFF, for example, has analyzed how code is expression under the American Convention on Human Rights, an international human rights treaty that covers most of Latin America. Moving forward, we must recognize that our fundamental right to expression encompasses computer code under national and international law throughout the world, and use that to protect and preserve the ability of software developers to create a stronger, safer, and more democratic and equitable decentralized web to help foster a better future.

Photo of Kurt Opsahl
Kurt Opsahl
Read Article
Article 9

Reviving Internet Decentralization Without Relying on the "B word"!

In the end, the fate of every technology is centralization. But we can change that. We are nostalgic about decentralized virtual networks and services on the Internet. The beloved early Internet was decentralized with its Usenet and Slashdot, when control over content and conduct was in the hands of nodes distributed across communities. It included communities that were not necessarily defined by geographical or regional boundaries. Inspired by these autonomous virtual networks, I remember passionately (and naively) discussing how I was going to bring decentralized justice systems, through these virtual networks, to communities that did not have access to justice, and make justice more efficient and effective. My interlocutor was unamused: “But we made rendering justice effective and efficient through centralization!”

The theory that centralization is the answer to effectiveness and efficiency (and sometimes even fairness), embellished with all sorts of historical narratives (and what some call evidence), is dominant in our societies globally. No technology on its own can be inherently decentralized and change this dominant narrative. This is why decentralizing the Internet cannot be achieved with just decentralized design or the adoption of funky, hyped-up technologies that appear to be decentralized, because in the end, other factors will lead to their centralization. We see such centralization everywhere: even bitcoin mining (that beloved decentralized technology) is not that decentralized, and some miners have amassed quite a lot of power. We see the centralization and consolidation of digital services and Internet connectivity. For example, Cloudflare and Google are both dominant in the market for providing open Domain Name System resolvers (the technology that enables you to access websites and other digital services).

The design of the technology is not the only factor for a decentralized Internet. I go so far as to claim that there is no such thing as decentralized technology. Decentralization of power and decision-making can be enabled through technology, but in the end it is our governance and operational practices, and our regulatory approaches, that lead to centralization or decentralization. So how can we create a decentralized Internet, and what governance mechanisms do we need to actually make decentralized technology happen? Before answering this question, we first have to answer the question of where we want and need decentralization. Obviously not all centralization of technology is necessarily bad. It can bring more security and know-how in some instances (for example, larger platforms that provide hosting might have more capital to invest in maintaining security). Then where exactly do we need decentralization? I believe that when we talk about decentralization we usually talk about the decentralization of “decision makers,” which means that we want to take the power of decision-making out of the hands of the few. That way, no one person can make a decision about the fate of millions of others. That way, if one network is compromised, it does not violate the privacy and security of millions, sometimes the most vulnerable among us. I believe some simple but essential Internet infrastructure elements can be decentralized through collective action. They do not need “cutting edge” technologies that nobody has adopted or will adopt, no matter how much money we throw at them.
We can, through policies and collective action, restore decentralization in some critical parts of the Internet: the parts that are critical for our access to digital services. One such space is the operation of DNS resolvers, which is increasingly centralized. Imagine if we provided the capability for thousands of operators to run efficient and secure resolvers. Imagine there were so many of these operators, spread across so many jurisdictions, that no single intellectual property lawsuit could cut off access to web services by forcing the resolvers to block them. To decentralize the Internet, we should dream small. There is no such thing as inherently decentralized technology. While it may not be possible or even advisable to fully decentralize all aspects of the Internet, we can, through policies and collective action, restore decentralization in critical areas that enable indiscriminate access to digital services.
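Running or choosing a resolver is not exotic technology. As a small illustration, the sketch below (Python, assuming the third-party dnspython package is installed) asks two different resolvers for the same name, the kind of routine check a community resolver operator, or a user deciding whom to trust, might perform. The resolver addresses are placeholders: one large public resolver and one hypothetical community-run resolver.

    # A minimal sketch using the dnspython library (pip install dnspython).
    import dns.resolver

    # Placeholder resolver addresses: a large public resolver and a hypothetical
    # community-run one. Swap in whichever operators you actually trust.
    RESOLVERS = {
        "large public resolver": ["9.9.9.9"],
        "community resolver (hypothetical)": ["192.0.2.53"],
    }

    def lookup(name, nameservers):
        """Ask a specific set of resolvers for a name's A records."""
        resolver = dns.resolver.Resolver(configure=False)  # ignore the system's default resolver
        resolver.nameservers = nameservers
        answer = resolver.resolve(name, "A")
        return sorted(rr.address for rr in answer)

    if __name__ == "__main__":
        for label, servers in RESOLVERS.items():
            try:
                print(label, lookup("example.org", servers))
            except Exception as exc:  # e.g. the resolver is unreachable, or it blocks the name
                print(label, "failed:", exc)

The hard part is not the code; it is the collective action needed to make many trustworthy, well-run operators exist across many jurisdictions.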

Photo of Farzaneh Badiei
Farzaneh Badiei
Read Article
Article 10

Why We Need To Fight For Our Privacy

We all love good sci-fi films, such as those that show us some dystopian future surveillance state where the government monitors its citizens' every move. We eagerly follow the story, and enjoy the protagonist’s quest to restore privacy and freedom of speech, and bring down the centralized powers. We’re comforted by the belief that “it’s just a film — that could never happen here. Our freedoms are secure.” One look at history, however, shows us that sudden and complete loss of freedom is actually a very real threat. The construction of the Berlin Wall in 1961 separated East Berlin from West Berlin overnight, resulting in the immediate loss of freedom for over a million people, including freedom of travel, speech, assembly, association, and economic activity. Those trapped in East Berlin had to endure this situation for 28 years. The Iranian Revolution in 1979 led to the establishment of an Islamic republic and a rapid change in societal norms and personal freedoms, all within a single year. Women, in particular, lost many of their rights. In Hong Kong in 2020, China imposed its National Security Law, which granted authorities broad powers to crack down on dissent. The law resulted in the immediate suppression of speech, particularly political expression, leading to the arrest of pro-democracy activists, the disbandment of political opposition parties, and the closure of many independent media outlets.

The loss of freedom may also be happening where you are, but, much like the parable of the frog in slowly boiling water, you may not notice it. Massive technological shifts over the past few decades have improved society in all kinds of ways, but this new digital landscape has also granted governments and corporations unprecedented abilities to surveil individuals. We are monitored, both online and offline, through CCTV cameras, biometric identification systems, and data-tracking software. Our personal information is collected and analyzed to create detailed profiles of our behavior and preferences, and these are used to manipulate our choices and decisions. Governments go so far as to mandate the installation of spyware on citizens' devices to monitor their communications and online activities. When people feel that they are being watched or monitored, they're less likely to express their opinions or engage in activism. Without private communication, agitation and pushback against authoritarianism become impossible, out of fear of reprisal. As privacy advocate Juan Angel put it in his essay “Privacy: The Hill to Die on”: Life in the panopticon of absolute digital surveillance forces humans to become shells of themselves, subjects who self-censor their own thoughts, behaviors, and expressions even in private interactions. The internet, originally viewed as an instrument of liberation, now has omnipresent tracking weaved into its every corner, and is fast evolving into the most potent enabler of totalitarianism we’ve ever seen. It’s essential that we safeguard privacy in this digital world, because it’s crucial for preserving an open society. Yet, many people either don’t seem to notice the erosion of their privacy, or they don’t care. That’s because surveillance and censorship are often sold to us as essential tools for safeguarding our own well-being — necessary for protecting liberal values and ensuring that those in power can effectively catch the bad guys.
Many people are often eager to demonstrate their moral purity, so they champion this noble cause and proudly proclaim that they “have nothing to hide.” Such grandstanding blindly misses the fact that billions of people around the world do not enjoy the same rights as them, and surveillance and censorship are responsible for undermining their freedom and safety every day. The very privacy tools that are often criticized in the West for enabling criminal activity are crucial for individuals living under oppressive regimes. Compliance with the surveillance state is a luxury afforded only to those who are privileged enough to be shielded from the oppressive effects of this surveillance. Even if you are lucky enough to live in a country with relatively high human freedom, your rights may not be as secure as you believe. You are not immune to future political changes. The preservation of your individual rights is contingent upon your ability to question authority and challenge prevailing narratives. Privacy is crucial for this. Privacy, however, isn’t just about safeguarding against the potential rise of totalitarianism, or some catastrophic event that may or may not occur in our future — it's also essential for protecting ourselves from very real and constant threats in our present. We give away personal information without a second thought, to every company, doctor’s office, and online retailer — but they often don’t keep our information safe. Data breaches are constant, and often undetected, and reveal sensitive personal information that can have a devastating effect on our reputation and our financial well-being. Malicious actors routinely use this sensitive data for identity theft, with tens of millions of people falling victim each year in the US alone. This financially ruins many. Then there’s our real-time location data that is perpetually ingested by all kinds of services that we interact with. Cell providers are just one collector of this data, and they have a long history of selling it. If you’ve ever had a stalker, jilted ex-lover, or ruthless rival, then you’ll understand all too well why this is alarming. People also unexpectedly become targets every day: Perhaps you said something years ago online that’s suddenly resurfaced, or perhaps you’ve attracted attention because of a desirable social media handle. Only when it’s too late do most people realize how easy it is for someone to find their home address on the internet, and now the safety of their family is at risk. There’s also a more subtle danger that comes along with a lack of privacy, which many people miss. Consider that our daily activities are rapidly and increasingly transitioning into the digital world. Our interests, purchase histories, political affiliations, and activism are indiscriminately collected at all times. What is this data used for? Most obviously, it’s used by advertising and data broker companies to build comprehensive profiles of our preferences, habits, and beliefs. They either profit directly from this data, or they sell it to others. It’s a common instinct for most people to think that this data is harmless: “Why does it matter if a company knows my favorite color, and wants to sell me a better pair of shoes?” But it may not just be shoes that they’re selling. They may be targeting us with content in an attempt to influence our views and opinions, drive us to artificially inflated emotional states, and manipulate and control our feelings. 
Furthermore, this data doesn’t just stay with private companies — it’s often siphoned up by governments all over the world. Even if you trust your government not to misuse this data, and trust that there are no rogue employees in your government, a future government might not behave the same way. Regimes come and go, but this data is forever. It can be picked over at any time in the future, and we have no idea who might get access to it. All too often, there is little to no oversight or accountability about how this mass data being collected is actually utilized by governments. Snowden points out in his book Permanent Record: This system of near-universal surveillance was set up not just without our consent, but in a way that deliberately hid every aspect of its programs from our knowledge. At every step, the changing procedures and their consequences were kept from everyone, including most lawmakers. The current narrative pushed on us by those who would have us sacrifice our privacy is that privacy and security are at odds with each other. The opposite is true. A world without privacy is less secure. When journalists, whistleblowers, and activists cannot communicate without government surveillance, and share information with the public that is vital for holding our governments accountable, we are less secure. When we can’t openly express ourselves because we know we’re perpetually monitored and we fear reprisal for thoughts that go against accepted mainstream dogma, our future as an open society is less secure. When we can’t keep our personal information private, and instead must hand it over to countless entities that are unable to protect that information, our reputation and financial well-being is less secure. Surveillance is an instrument of power consistently wielded by totalitarian regimes. Think of the precedents that we’re setting for future generations – If we normalize a lack of privacy, we risk creating a future society that resembles our most terrifying dystopian fiction stories of today. Snowden once said: It is, in a dark way, psychologically reassuring to say, ‘Oh, everything is monitored and there's nothing I can do. I shouldn't bother.’ —The problem is that it's not true. The erosion of privacy is not inevitable, and we must fight in order to prevent it. We can make better choices in our lives that safeguard our privacy. We can push back against those who would take our privacy from us. But most importantly, we must start to care, and change the complacent culture around privacy. We must do this for our future because the stakes are too high.

Photo of Naomi Brockwell
Naomi Brockwell
Read Article
Article 11

Decentralization and Data Flows

Decentralized is the new disruptive; a large piece of the narrative power motivating enthusiasm around distributed ledger technologies (or blockchain, or Web3, depending on the author and context) is the potential for transforming power dynamics and resources as compared to existing ecosystems. But more is needed to realize the normative goals under the concept of decentralization than merely setting up servers in a different way. Notably, like the proverbial spice, the data must flow — both in and out, as users dictate. Where data flows are unduly limited, so too is decentralized infrastructure. Data portability reflects a particular way in which data can flow or be blocked, and designing policy around data portability to maximize the control of technology users and data subjects — including promoting reciprocity of transfers — ultimately promotes both effective data flow and meaningful decentralization. One large and well-studied category of obstacles to free data flows arises from the tension between the global internet and national law. Some of these issues arise from laws and regulations which fall clearly into the category of protectionism, such as Russia’s notorious data localization mandate. Others arise from differences in protections for fundamental rights, including the long-running tensions between the European Union and the United States over data protection. Despite years of investment in mechanisms for legitimate data transfers (such as the “Privacy Shield” processes), obstacles remain, including the Irish data protection authority’s response to Meta transferring data on European Users to the United States for processing in 2023. Separate from cross-border concerns, intranational considerations also impose (often highly legitimate and necessary) limitations on the free flow of data. In particular, privacy laws ensure that individuals maintain ultimate control over the use and transfer of their data in various ways, and these obligations generally supersede the value of free flow of (personal) data. Data portability, through public policy and tools, works at this intersection to ensure that users can transfer their data to the services of their choice; thus, the General Data Protection Regulation in the EU includes an explicit right to data portability. Here too, despite the best of intentions, things can go awry, notably where motivations other than protection of the subject of the data are given excess weight. Notably, the European Union’s forthcoming Data Act allows users to request certain data regarding their use of connected devices, but expressly prohibits users from sharing such data with entities designated as “gatekeepers.” Restrictions on user choice motivated by competition considerations, such as the Data Act’s gatekeeper language, would seem to be attempting to force a split between decentralization and data flows, effectively forcing data to pool in multiple places. The policy intends to motivate users to migrate from large platforms to small, and should they then wish to move back, or to a different large platform, they will be unable to do so. Like decentralization, the free flow of data is not an unequivocal good, and the design of the mechanisms of data flow contribute substantially to its proclivity for good or bad outcomes. 
Mark Nottingham’s IETF proposal notes that “not all centralization is avoidable, and in some cases, it is even desirable.” Mark’s characteristics of the kinds of centralization that should be regarded as harmful also broadly apply to those restrictions on data flows that should be concerning: a restriction on data flow “is most concerning when it is not broadly held to be necessary, when it has no checks, balances, or other mechanisms of accountability, when it selects 'favorites’ which are difficult (or impossible) to displace, and when it threatens to diminish the success factors that enable the Internet to thrive.” One of the foundational principles of the Data Transfer Initiative, established in the earliest days of development of the Data Transfer Project codebase, is reciprocity: services that let users transfer data in should also allow users to transfer their data out. As a baseline, data portability is in general a user right, so all services should allow users to download their data, regardless. But there is substantial value for both users and businesses in going beyond this minimum and actively facilitating direct transfers. Tools supported by DTI, such as Meta’s Transfer Your Information tool for Facebook, make it easy for users to transfer their photos and other personal data from one service to another, with safeguards to ensure the legitimacy of the request as well as its proper scope. For users, direct transfer technologies eliminate the need for potentially slow and costly downloads and uploads to devices that may not have adequate storage or processing power. Also, the use of adapters to translate between services minimizes inconsistencies in how data is stored and used between two different services. Businesses, in turn, benefit through improved user experience, trust, and sentiment, while reducing costs and technical challenges associated with importing data. Thus, for businesses seeking to benefit from these advantages through DTI-supported tools, reciprocity is expected. While the language and the execution of the principle may be oriented towards service providers, user interests are at its core. Reciprocity not only ensures that users can move their data to the services of their choice; it encourages users to experiment with new services, helping to make sure that if the user decides not to continue with the experiment and wants to move any new data they have created back to their original service provider, they are free to do so. Decentralization depends on data flows, and balancing the technology’s policies and public policy’s technicalities in a manner that keeps user interests at the core, including promoting reciprocity, offers the best path forward for promoting both data flows and decentralization.
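To make the adapter idea mentioned above concrete: the real Data Transfer Project codebase is a Java framework, but the underlying pattern is simple. Each service implements a small exporter or importer against a shared, neutral data model, so any exporter can feed any importer. The Python sketch below is a hypothetical, simplified illustration of that pattern; the service classes and field names are invented for this example and are not DTP’s actual interfaces.

    from dataclasses import dataclass
    from typing import Iterable, Protocol

    @dataclass
    class Photo:
        """A neutral, service-agnostic model that both sides agree on."""
        title: str
        taken_at: str   # ISO 8601 timestamp
        content: bytes

    class Exporter(Protocol):
        def export_photos(self) -> Iterable[Photo]: ...

    class Importer(Protocol):
        def import_photo(self, photo: Photo) -> None: ...

    class ServiceAExporter:
        """Hypothetical adapter: translates Service A's own records into the neutral model."""
        def __init__(self, records):
            self._records = records

        def export_photos(self):
            for rec in self._records:
                yield Photo(title=rec["caption"], taken_at=rec["timestamp"], content=rec["data"])

    class ServiceBImporter:
        """Hypothetical adapter: maps the neutral model onto Service B's upload format."""
        def __init__(self):
            self.uploaded = []

        def import_photo(self, photo):
            self.uploaded.append({"name": photo.title, "created": photo.taken_at, "bytes": photo.content})

    def transfer(source: Exporter, destination: Importer) -> int:
        """A direct transfer: items stream from one service to the other without a local download."""
        count = 0
        for photo in source.export_photos():
            destination.import_photo(photo)
            count += 1
        return count

    if __name__ == "__main__":
        source = ServiceAExporter([{"caption": "Beach", "timestamp": "2023-07-01T10:00:00Z", "data": b"..."}])
        destination = ServiceBImporter()
        print(transfer(source, destination), "photo(s) transferred")

Reciprocity, in this picture, is simply the expectation that a service willing to ship an importer also ships an exporter, so the arrow can point in either direction.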

Photo of Chris Riley
Chris Riley
Read Article
Article 12

Twiddler: Configurability for Me, But Not For Thee

Tracking Exposed is a scrappy European nonprofit that attempts to understand how online recommendation algorithms work. They combine data from volunteers who install a plugin with data acquired through “headless browsers” to attempt to reverse-engineer the principles that determine what you see when you visit or search Tiktok, Amazon, YouTube, Facebook or Pornhub. At first blush, that might seem like a motley collection of services, but they have one unifying principle: they are all “multi-sided” marketplaces in which advertisers, suppliers and customers are introduced to one another by a platform operator who takes a commission for facilitating their transactions. Amazon introduces sellers to buyers, helps the former ship to the latter, and places ads alongside search-results and product pages. Tiktok, Youtube and Pornhub all do the same, but with performers and media companies who are introduced to viewers and advertisers and whose ads are inserted at different points in the chain. Facebook brokers display of materials from a mix of professionals (artists, performers, media companies) and individuals (friends, family, members of an online community or interest group). This kind of “platform” business isn’t unusual. A big grocery chain sells its own products and products from third-party sellers, and does a brisk sideline in “co-op” — charging to place items at eye-height or in end-caps at the end of the aisles. But online platform businesses have a distinctly more abusive and sinister character. To a one, they follow the “enshittification” pattern: “first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.” Why are digital businesses more prone to this conduct than their brick-and-mortar cousins? One answer is tech exceptionalism: namely, that tech founders are wizards, uniquely evil and uniquely brilliant and thus able to pull off breathtakingly wicked acts of sorcery that keep us all in their thrall. “Tech exceptionalism” is a charge that is more usually leveled at technology boosters, but it can just as easily be aimed at technology critics, who commit the sin of criti-hype by credulously repeating tech barons’ claims of incredible prowess in their criticism: “Look at these evil sorcerers who have ‘hacked our dopamine loops’ and taken away our free will!” There’s another, simpler explanation for the enshittification of platform economics. Rather than trusting the self-serving narratives of the Prodigal Techbros who claim to have superhuman powers but promise that they have stopped using them for evil, we can adopt a more plausible worldview: that tech barons are ordinary mediocrities, no better and no worse than the monopolists that preceded them, and any differences come down to affordances in technology and regulation, not an especial wicked brilliance. Tech exceptionalism is a sin, but digital is different. The shell-games that platform owners play with surpluses, clawing them back from one group and temporarily allocating them to another, are not a unique feature of digital platforms — every business has dabbled with hiding costs from purchasers (think of “junk fees”) and shafting suppliers (e.g. “reverse factoring”). The difference lies in the ease with which these tricks can be tried and discarded. The faster the shells move in the shell-game, the harder it is to track the pea. 
If you’re an analog grocer changing the prices of eggs, you have to send minimum-wage teenagers racing around the store with pricing guns to sticker over the old prices. If you’re Amazonfresh, you just twiddle a dial on a digital control panel and all the prices are changed instantaneously. A platform operator can effortlessly change the distribution of surpluses in an instant, while suppliers and customers have to engage in minute, time-consuming and unreliable Platform Kremlinology just to detect these changes, much less understand them. There is nothing intrinsically wicked about two-sided marketplaces or other “intermediaries” who serve as brokers between consumers and suppliers. When I was a kid in Toronto, I frequently ran into Crad Kilodney, a notorious “street author” who wrote, printed, bound and sold his books all on his own. He sold his books from street-corners where he stood for long hours, wearing a sign that said “Very Famous Canadian Author — Buy My Books” or “Margaret Atwood” (Atwood later memorialized Kilodney by standing at one of his usual spots with a sign around her neck reading “No Name Canadian Author”). Kilodney was one-of-a-kind, and I can still quote many of his stories and poems from memory, but even he didn’t think that every writer should have to follow in his footsteps. There are plenty of writers with interesting things to say who are unwilling or unable to print, bind and sell their words directly to readers from a frozen street-corner. The problem isn’t the existence of intermediaries — it’s how much power the internet gives to intermediaries. That power starts with twiddling those sliders and knobs that change search results, pricing, recommendations and other rules of the platform. Online performers know this well. If you’re a Youtuber or a Tiktoker, you invest money and time into producing material for the platform, but you can’t know whether the platform will show it to anyone — even the subscribers who explicitly asked to see it! — until you hit publish. For an online creator, the platform is a boss who docks every paycheck and tells you that you’re being punished for breaking rules that your boss refuses to explain, lest you figure out how to violate them without him noticing. Part of Tracking Exposed’s remit is to unravel these secret rules so that creative workers can avoid their bosses’ hidden penalties. These secret rules were behind the #audiblegate scandal, where Amazon stole hundreds of millions of dollars from independent audiobook creators who used its Audible Content Exchange (ACX) platform to post their work. Amazon hid the fact that it was clawing back royalties, withholding payments, and flat-out lying about its royalty structure. The key to hiding these financial crimes from Amazon’s victims was velocity, the ability to change accounting practices from minute-to-minute or even second to second, allowing Amazon to stay one step ahead of the writers it stole from. It’s not just creative workers who get ripped off by digital platforms, of course. The “gig economy” is rife with these practices. Companies like Doordash want to criminalize tools that let drivers see how much a job will pay before they commit to it. Uber is a notorious twiddler of the driver-compensation knobs, exploiting the ease of changing pay structures to stay one step ahead of drivers. 
Sometimes, Uber overreaches and finds itself on the wrong end of a wage-theft investigation, but for every twiddle that draws a state Attorney General’s attention, there are dozens of smaller twiddles that slide under the radar. Twiddling allows platforms to rip off all kinds of suppliers — not just individual workers. For independent sellers, Amazon’s twiddling has piled junk fee upon junk fee, so that today, Amazon’s fees account for the majority of the price of goods on Amazon Marketplace. Advertisers and publishers are also on the wrong side of twiddling. The FTC’s lawsuit against Facebook and the DoJ’s antitrust case against Google are both full of eye-watering examples of high-speed shell-games where twiddling the knobs resulted in nearly undetectable frauds that ripped off both sides of the adtech market (publishers and advertisers) to the benefit of the tech companies. Twiddling is the means by which enshittification is accomplished. The early critique of Airbnb concerned how the company was converting every city’s rental housing stock to unlicensed hotel rooms, worsening the already dire worldwide housing crisis. Those concerns remain today, of course, but they’ve been joined by outrage over enshittifying twiddling, where homeowners are being hit by confusing compensation rules, and responding by imposing junk fees on renters. Undisciplined by competition or regulation, the platforms can’t keep their fingers off the knobs.

Remember when Facebook conducted its infamous voter turnout experiment? 61 million Facebook users were exposed to a stimulus the company predicted would increase voter turnout. The resulting controversy was an all-too-typical exercise in tech criticism, where both sides completely missed the point. Facebook’s defenders pointed out that this kind of experiment was a daily activity for Facebook’s knob-twiddlers, who adjusted the platform rules all the time. Rather than focusing on what a fucking nightmare it is for 3,000,000,000 people to be locked into having their social lives mediated by tech bros who couldn’t stop twiddling the knobs, the critics of the Facebook experiment focused on the result. It was textbook criti-hype. The Facebook experiment increased voter turnout by 280,000, which sounds like an impressive figure. But the effect size is only 0.4% (remember, the experimental group had 61 million users!). Rather than focusing on how badly Facebook’s ads perform (and how advertisers are getting overcharged), or how the company’s compulsive twiddling changes the rules constantly for tens of millions of users at a time, critics of the Facebook voter turnout experiment instead promoted Facebook’s ad-tech market by repeating Facebook’s hype around this unimpressive result. There’s a bitter irony in enshittification: the internet’s great promise was disintermediation, but the calcified, monopolized internet of “five giant websites, each filled with screenshots of the other four” is a place where intermediaries have taken over the entire supply chain. As Douglas Rushkoff puts it, the platforms have “gone meta” — rather than providing goods or services, they have devoted themselves to sitting between people who provide goods and services and people who want to consume them. It’s chokepoint capitalism, a market where the intermediaries have ceased serving as facilitators and now run the show.
The double irony is how the platforms seized power: by installing so many sliders and knobs in the back-end of their services that they can twiddle away any temporary advantage that business customers, advertisers or end users take for themselves. The early internet promised more than disintermediation — it also promised endless configurability, where users and technologists could install after-market code that altered the functioning of the services they relied on, seizing the means of computation to tilt the balance of power to their benefit. Technology remains intrinsically configurable, of course. The only kind of computer we know how to build is the universal, Turing complete Von Neumann machine, which can run all the software we know how to write. That’s how we got things like ad-blockers, the largest boycott in world history. The configurability of technology is why things like free and open software are politically important: in a technologically mediated society, control over the functions of the technology you rely on is control over every part of your life — your job, your education, your love life, your political engagement. While it remains technically possible to reconfigure the technologies that you rely on, doing so is now a legal minefield. “IP” has come to mean “any law that lets a company control the conduct of its competitors, critics or customers,” and that’s why “IP” is always at the heart of maneuvers to block platform users’ attempts to wrestle value away from the platforms. When Facebook wants to stop you from reading your friends’ posts without being spied on, it uses IP law. When Facebook wants to stop you from tracking paid political disinformation, it uses IP law. When Facebook wants to stop you tracking the use of Facebook in fomenting genocide, it uses IP law. When Facebook wants to stop you from re-ordering your feed to prioritize posts from your friends, it uses IP law. The platforms don’t just twiddle with every hour that God sends, they also hoard the twiddling — twiddling is for platform owners, not platform users. The enshittification of the internet has three interlocking causes: Platforms were able to create vertical monopolies by buying their competitors and suppliers, so users have nowhere to go; Platforms were able to block regulation that would give users more power, and encourage regulation that prevents new companies from entering the market and competing for users by giving them a better deal; Platforms were able to twiddle their own rules constantly, staying ahead of attempts by business customers (performers, media companies, marketplace sellers, advertisers) and end users to claim more value for themselves. To unwind enshittification, we need to throw all three of these mechanisms into reverse: Block future mergers and unwind existing mergers; Create and enforce strong privacy laws, labor protections, and other regulations that protect platform users from platform owners; Restore the right of users — including workers — to reconfigure the technology they use. Digital tools could be a labor organizer’s best friend. They could give users and device owners more flexibility and bargaining power than their offline predecessors. As has been the case since the Luddite uprisings, the most important question isn’t what the technology does, it’s who it does it for and who it does it to. The trick is to create rules that are both administratable and easy to comply with. One challenge for regulating platforms is that they are complex and opaque. 
To a first approximation, everyone who understands Facebook works for Facebook (this also used to be true of Twitter, but today it’s more likely that everyone who understands how Twitter works is a bitter ex-employee who is only too eager to puncture the company’s bullshit, which opens up some tantalizing regulatory possibilities). That means that when Facebook seems to be cheating, it will be hard to prove. It could take years to get to the bottom of seeming rule violations. For a rule to work effectively, it should be easy to figure out if it’s being obeyed. The other dimension to pay attention to is compliance costs. A regulation that is so expensive to comply with that it prevents small companies from entering the market does monopolists a favor by clearing the field of potential competitors before they can grow to be a threat. That’s what happened in 2019, when the EU proposed mandatory copyright filters aimed at preventing infringement on big platforms like Youtube and Facebook, as a way of shifting power from the platform operators to the media companies that relied on them. In the end, Youtube and Facebook supported the proposal. This may seem paradoxical, but it makes more sense once you realize that Youtube’s already spent $100,000,000 on its Content ID filter system, so any regulation that forces new companies to have enough money to build their own filters is a bargain. If the table stakes for hosting content in the EU starts at $100,000,000, Youtube and Facebook can sew up the market without worrying about upstarts coming along and offering a better deal to creators. Today, the EU’s filter rules — and other intermediary rules that assume the internet will always be dominated by a handful of giants, like rules requiring services to scan for harmful content, extremism and hate speech — present a significant challenge to the spread of the Fediverse, which seeks to replace giant, twiddle-addled multinational corporations with human-scale services run by small businesses, co-ops, volunteers, and nonprofits. Thankfully, operating a server is much safer in the USA, thanks in large part to Section 230 of the Communications Decency Act, which is often erroneously smeared as a gift to Big Tech, but which really protects the small-fry who are often a better deal for platforms users, and who are in any event unable to lock their users in when they want to offer a worse one (that’s why Mark Zuckerberg wants to get rid of Section 230). You may have heard that “Nathan,” the volunteer operator of mastodon.lol, a server with 12,000+ users, announced that he was shutting down his server because he doesn’t want to deal with the acrimony over the new Harry Potter game. This may seem like a serious problem with replacing Big Tech with small tech — what happens if you rely on a server whose owner turns out to have different interests from your own, leaving you stranded? This is a question that many Big Tech users have had to grapple with, of course, thanks to Twitter’s takeover by a mercurial, insecure manbaby who is bent on speedrunning the enshittification cycle. The reality is that mastodon.lol’s 12,000 users are much better situated than the 450,000,000 who were reliant on Twitter prior to the takeover. Mastodon is designed to prevent lock-in, and Mastodon users can easily export the list of people they follow, and the list of people who follow them, and import them onto a new server. 
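As a rough sketch of what that portability looks like in practice, the Python below reads the follow list that a Mastodon server lets you export (in recent versions, a CSV with an “Account address” column, though the exact format may vary) and re-follows each account on a new server through the lookup and follow endpoints of the standard REST API. Mastodon’s own settings page performs this import for you; the point of spelling it out is that there is no lock-in magic involved. The server URL, token, and filename are placeholders.

    import csv
    import json
    import urllib.parse
    import urllib.request

    NEW_SERVER = "https://new-server.example"   # placeholder: the server you are moving to
    ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"          # OAuth token for your account on that server

    def api(method, path, **params):
        """Tiny helper for authenticated calls to the new server's REST API."""
        url = NEW_SERVER + path
        data = None
        if method == "GET" and params:
            url += "?" + urllib.parse.urlencode(params)
        elif params:
            data = urllib.parse.urlencode(params).encode()
        req = urllib.request.Request(url, data=data, method=method,
                                     headers={"Authorization": "Bearer " + ACCESS_TOKEN})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def refollow_everyone(csv_path):
        """Re-follow every account listed in a follow-list CSV exported from the old server."""
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                handle = row["Account address"]   # e.g. someone@example.social
                account = api("GET", "/api/v1/accounts/lookup", acct=handle)
                api("POST", "/api/v1/accounts/" + account["id"] + "/follow")
                print("now following", handle)

    if __name__ == "__main__":
        refollow_everyone("following_accounts.csv")   # the file exported from the old server

Because the export format and the API are, in principle, the same on every server, the same few lines work whether you are leaving a 12,000-person instance or joining one.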
With just four steps, a Mastodon user — including a user of mastodon.lol — can leave a server and set up on a new one, and keep all the connections they depend on. This is so straightforward, so useful, so resistant to enshittification, such a great check against excessive twiddling, that we could even make it a regulation: If you operate a server, you have an obligation to give any user — including a user you kick off the server — their data, including the data they need to get set up on another server. That’s a rule that’s both easy to administer and easy to comply with. It’s easy to tell if the rule is being followed. If one of Nathan’s 12,000 mastodon.lol refugees claims that they haven’t been given their data, Nathan can disprove the claim by sending them a fresh copy of that data. That’s a rule that Nathan — and every other Mastodon server operator, small or large — can comply with, without being unduly burdened. All Nathan needs to do is not switch off the export function already built into Mastodon, and save users’ data for a reasonable amount of time (say, 12 months) after he winds down his service so that he can provide it to users who didn’t snag their data before he pulled the plug. This is a rule that could be imposed on big services just as readily as on small ones. If we ordered Twitter to allow users to move freely from Twitter to the Fediverse — either as part of a new regulation, or as a settlement in one of the many enforcement actions that have been triggered by Twitter’s reckless, lawless actions under Musk — we could easily tell whether Twitter was abiding by the rule. What’s more, adding support for an open standard — ActivityPub, which underpins Mastodon — to Twitter is a straightforward technical exercise. Enshrining this Freedom Of Exit into platform governance accomplishes many of the goals that our existing content regulations seek to attain. Rather than protecting users from hate speech or arbitrary disconnection by crisply defining the boundaries of both and building a corporate civil justice system to hear disputes, we could just let users leave when they disagree with the calls that companies make, and provide them with an easy way to set up somewhere else when a platform kicks them off. That is, rather than making platform owners better, or more responsible, we could just make them less important. The goal isn’t no intermediaries, it’s better ones, and easy movement from bad ones to better ones. The problem isn’t that platforms do some twiddling — that’s how they get better as well as how they get worse — but that platform users can’t twiddle back and if they can’t leave, they’ll get twiddled to death.

Photo of Cory Doctorow
Cory Doctorow
Read Article