Big Ideas Driving the Decentralized Web

We are at a fork in the road when it comes to building an internet that enhances trust, allows users to maintain privacy, incentivizes entrepreneurs to build safe and secure systems, and enables the preservation of the world’s most important information for future generations. At Filecoin Foundation for the Decentralized Web (FFDW), we believe that the internet must return to its original – more decentralized – architecture. Some call this direction “Web3”; some call it the decentralized web or “DWeb.”

The first internet protocols connected computers in a decentralized way, allowing users to share and collaborate directly without relying on a central service or machine. Out of all the net’s early protocols – including email and file transfer protocols – the World Wide Web quickly became the most popular. The web was originally based on the idea of interlinked documents. As web browsers became more sophisticated, websites began to offer more services, including hosted applications like Google’s Gmail and Facebook’s social feed. During this time, we also saw the rise of a few large, centralized companies that consolidated this environment into a handful of platforms. Today, the essential internet infrastructure needed for most web applications is controlled by three companies: Amazon, Facebook, and Google.

Now we’re entering an era of rising interest in alternatives to these giants. Technologists are building the next evolution of the internet, where users and creators can act independently. These changes will radically transform the internet as we now experience it, creating enormous potential for a better web.

Below we’ve outlined some of the big ideas that are shaping the future of the web.

[Image: three configurations of computer networks – a centralized network (every node connected to one point), a decentralized network (hub and spoke), and a distributed network (each node connected to its neighbors).]

Decentralization

From the 1980s through the early 2000s, the internet – and the early web built on top of it – operated on a decentralized network architecture. The basic idea behind the early web was simple: The system was designed to connect people directly, without a single overseer. It didn’t rely on any one server to transfer or route requests. Instead, this decentralized web ran on openly developed, interoperable standards that handled the routing and transfer – offering a level, competitive playing field for anyone to build novel tools and apps on top of those standards. Anyone could get online and publish whatever content they wanted without having to pay big companies for access to the internet’s fundamental services, either in money or by trading away their personal information. A website could be created easily and would gain popularity by being linked to by other websites, each maintained independently.

The decentralized architecture of the early web provided a free and open environment that encouraged creativity, innovation, and rapid growth. Because the decentralized web didn’t rely on central servers, it was also more private, more resilient, and relatively secure against hackers.

Centralization

The web of today is quite different from the early web. Today’s web is highly – though not entirely – centralized: its network architecture depends on a handful of central servers and service providers. This shift began when companies like Microsoft, Amazon, Google, and Facebook developed software and services that used their concentrated expertise and early-mover advantage to offer improvements in efficiency, speed, and usability over individually run services. Very quickly, these products captured market share and user attention, crowding out competitors and amassing power and capital in a way that put them far ahead of any new entrants. Users flocked to the more developed and better-funded services these companies offered, further reinforcing consolidation.

The upside? Billions of people acquired more or less free and astoundingly easy access to high-speed internet services in a way that the decentralized internet’s earliest adopters could never have imagined. Millions of creators and entrepreneurs took their first steps into global digital society with pre-packaged services and user-friendly tools. It’s difficult to conceive of the rapid commercialization of today’s always-online smartphone culture without the marketing and coordination of these giants.

The downside? Control over today’s internet is concentrated in the hands of a few for-profit power players. These huge, ultra-powerful network nodes collect our data, censor or control the information we get, and present vulnerable targets to hackers. On top of that, the centralized web is proving to be a disaster for competition and innovation.

Distributed

Distributed networks take decentralization one step further: instead of individual websites serving smaller, independent audiences, there is no centralization at all. Each node on the network works independently of the others, spreading computing resources and data use evenly across the network. When you look for data, you are routed to wherever it lives. When you post data, it can be replicated and spread across the whole network.

Distributed networks have high fault tolerance and reliability; if one node goes down, the rest of the network can absorb the workload with little to no disruption. Distributed networks can also offer low latency: because the network is spread over a wider geographical area, traffic can flow through the closest node rather than traveling to a central point of contact that might be much farther away. Distributed networks aren’t without downsides, however. The cost of setup and the burden of maintaining the necessary hardware and software can be very high. Without some element of centralization, network changes and decisions become more difficult. The network is a shared resource, which makes it difficult to incentivize nodes to take up the full burden, and hard to prevent “free riders” – users taking up more than their fair share of resources, like spammers or other malicious actors. The internet was originally envisaged as a distributed network, but it mostly settled into a decentralized – or, as we’ve seen, an increasingly centralized – form.
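
To make the fault-tolerance idea concrete, here is a minimal sketch in Python of a client that keeps several replicas of a file and simply falls back to the next node when one is unreachable. The node names and data layout are invented for illustration; this is the principle, not any particular network’s implementation.

```python
# A toy "network": each node is just a dictionary mapping names to bytes.
# Node names and the stored data are invented for illustration.
nodes = {"node-a": {}, "node-b": {}, "node-c": {}}

def put(name: str, data: bytes) -> None:
    """Replicate the data to every node in the network."""
    for store in nodes.values():
        store[name] = data

def get(name: str, offline: set = frozenset()) -> bytes:
    """Fetch the data from any node that is online and holds a copy."""
    for node, store in nodes.items():
        if node not in offline and name in store:
            return store[name]
    raise KeyError("no reachable node holds this content")

put("report.txt", b"an important document")
# Even with node-a unreachable, the remaining replicas still serve the data.
print(get("report.txt", offline={"node-a"}))
```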

Peer-to-Peer Networks

Also known as P2P networks, these are inherently distributed but can sometimes incorporate elements of centralization. They can be sorted into three categories: unstructured, structured, and hybrid; a simplified comparison of the first two appears in the sketch after this list.

  • Unstructured: Nodes have no specific organization or pattern of communication with one another. Though easier to build, unstructured P2P networks require significant computing power and memory, because each search query is flooded to as many nodes as possible.
  • Structured: Nodes follow a defined organization that makes communication and file searches more efficient. The downside? That structure brings a higher degree of centralization, along with more setup and maintenance overhead.
  • Hybrid: A hybrid network combines client-server architecture with P2P, typically using a centralized server to coordinate communication between nodes. This option aims for the best of both worlds: the efficiency and performance of a centralized system with the security and openness of a decentralized one.
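
To illustrate the difference between the first two categories, here is a minimal Python sketch with invented peer names and a deliberately simplified routing rule: an unstructured search floods the query to every peer, while a structured one computes which peer should hold a key and asks only that peer. Real structured networks use distributed hash tables with far more sophisticated routing, so treat this as an analogy.

```python
import hashlib

# Toy peers: each holds a dictionary of key -> value. Names are invented.
peers = {name: {} for name in ["p0", "p1", "p2", "p3"]}
peer_ids = sorted(peers)

def owner(key: str) -> str:
    """Structured rule: hash the key to pick exactly one responsible peer."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return peer_ids[digest % len(peer_ids)]

def store(key: str, value: str) -> None:
    """Place the value on the peer the structured rule assigns to this key."""
    peers[owner(key)][key] = value

def unstructured_lookup(key: str):
    """Flood: ask every peer in turn until one answers (many wasted queries)."""
    for data in peers.values():
        if key in data:
            return data[key]
    return None

def structured_lookup(key: str):
    """Ask only the peer that the hash points to (a single, targeted query)."""
    return peers[owner(key)].get(key)

store("song.mp3", "...file contents...")
print(unstructured_lookup("song.mp3"))  # found only after querying peers one by one
print(structured_lookup("song.mp3"))    # found with one query to the responsible peer
```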

Open Source

The key to innovation on the decentralized web is open-source software (also known, with some ideological differences, as free or libre software). To describe software as open source is to say that its source code is publicly available, may be shared freely, and is open to collaboration and improvement by the entire community. Open-source software isn’t managed by a single company or individual; no one seeks to maintain exclusive rights to program, debug, and distribute the code.

Open source works because it effectively crowdsources knowledge and expertise, drawing on an entire community of programmers to build projects together. This process cuts out development bottlenecks and streamlines production. It also makes projects on the web more democratic and open to innovation from anyone with a bright idea. If they disagree with the direction of a project, developers can “fork” open-source software – take an existing copy of the source and use it as the basis for a new project with new goals.

Protocols

Computers on the internet interact according to protocols, which strictly constrain what information is sent and how, so that recipients can process and respond to messages from senders. The architecture of these protocols determines how information traveling across the network is processed, stored, shared, and transmitted between computers, devices, systems, and, ultimately, users.
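
As a concrete example, here is a minimal sketch of one of the web’s core protocols at work: an HTTP/1.1 request written by hand and sent over a raw TCP socket in Python. The exact wording, ordering, and line endings are dictated by the protocol; if they were malformed, the server could not interpret the request. The host example.com is a placeholder reserved for documentation examples.

```python
import socket

HOST = "example.com"  # placeholder host used for documentation examples

# The protocol dictates the exact format: a request line, then headers,
# then a blank line, each terminated with CRLF ("\r\n").
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The first line of the reply is also protocol-defined, e.g. "HTTP/1.1 200 OK".
print(response.split(b"\r\n", 1)[0].decode())
```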

Nodes

A network node is a connection point – a device such as a computer, router, or switch – that can receive, send, or forward data to other points on the network. One example of a system built from such nodes is the InterPlanetary File System (IPFS), which lets users store and retrieve content across a network of voluntarily run nodes.
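
The idea that makes a network of independent nodes workable is content addressing: data is identified by a fingerprint of the data itself, so it can be fetched from whichever node happens to hold it and verified on arrival. The sketch below illustrates that idea with an ordinary SHA-256 hash; IPFS’s actual identifiers (CIDs) are more elaborate, so treat this as a simplified analogy rather than IPFS’s real format.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Name the data by a hash of its contents (simplified stand-in for a CID)."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, address: str) -> bool:
    """Any node can prove it returned the right bytes: re-hash and compare."""
    return content_address(data) == address

original = b"hello, decentralized web"
addr = content_address(original)

# A node that returns tampered bytes is caught immediately.
print(verify(original, addr))            # True
print(verify(b"tampered bytes", addr))   # False
```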

Decentralized Data Storage

With decentralized storage, data is stored across multiple locations, or nodes, run by individuals or organizations that share their extra disk space – either voluntarily or for explicit incentives such as monetary payment.

Filecoin is one example of a decentralized data storage system: it creates a market for storing files, with built-in economic incentives to ensure files are stored reliably over time. The Filecoin network provides a decentralized, efficient, and robust foundation for storing valuable information.
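
As a rough intuition for how a storage market ties incentives to reliable storage, here is a toy Python sketch: hypothetical providers quote prices, a client picks one, and payment for each period is released only if the provider can still produce data matching the original fingerprint. Filecoin’s real mechanism relies on cryptographic storage proofs and on-chain deals, so this is a loose analogy, not the actual protocol.

```python
import hashlib

# Hypothetical storage providers quoting a price per storage period (made-up numbers).
providers = {"provider-x": 5, "provider-y": 3, "provider-z": 4}
held_data = {}

def make_deal(data: bytes):
    """Pick the cheapest provider and hand it the data to store."""
    provider = min(providers, key=providers.get)
    held_data[provider] = data
    return provider, hashlib.sha256(data).hexdigest()

def settle_period(provider: str, fingerprint: str) -> int:
    """Release payment for the period only if the provider still holds matching data."""
    data = held_data.get(provider)
    if data is not None and hashlib.sha256(data).hexdigest() == fingerprint:
        return providers[provider]  # payment released
    return 0                        # data missing or altered: nothing paid

provider, fingerprint = make_deal(b"archive of a research dataset")
print(settle_period(provider, fingerprint))  # the provider is paid for this period
```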

There are plenty of other ideas driving the DWeb and Web3 movements – from blockchain and governance to identity and privacy. We’re hopeful that this explainer is a starting point for understanding how the internet works today, what a decentralized web means, and how it would work.
