TL;DR: The textbook model of two-sided network effects -- more buyers attract more sellers attract more buyers -- is obsolete for modern platforms. The platforms that win today run on algorithmic matching quality, not network density: a marketplace with 10 million sellers is worthless if the algorithm cannot surface the right one for a specific buyer at a specific moment. This transition from network-density to algorithmic-matching platforms fundamentally changes where defensibility accumulates.
The Crumbling Foundation: Two-Sided Market Theory
In 2003, Jean-Charles Rochet and Jean Tirole published a paper that became the canonical text for a generation of platform builders. Their model was clean, almost elegant in its simplicity: a platform sits between two distinct user groups, and the value to each group rises as the other group grows. More buyers attract more sellers. More sellers attract more buyers. A virtuous cycle. A flywheel. The rest is market dominance.
Thomas Eisenmann, Geoffrey Parker, and Marshall Van Alstyne extended this thinking into strategic frameworks that shaped how Silicon Valley conceived of platforms for two decades. Their insight — that platforms must solve a coordination problem between interdependent groups — was genuine. It explained eBay. It explained early Uber. It explained the credit card industry.
But intellectual models are not eternal truths. They are approximations that hold under certain conditions, and those conditions have changed so drastically that the original model now obscures more than it reveals.
The classical cross-side network effect function is typically modeled as:

$$U_b = \alpha\, N_s^{\beta}$$

where $U_b$ is utility for the buy-side, $N_s$ is the number of sell-side participants, $\alpha$ is the cross-side effect coefficient, and $\beta < 1$ captures diminishing returns. The two-sided network effect, as classically described, assumes that quantity on one side directly creates value for the other side. This assumption is fracturing. In most modern platforms, raw participant counts matter far less than the quality and speed of the match between them. A marketplace with ten million sellers is worthless if the algorithm cannot surface the right one for a specific buyer at a specific moment. A ride-hailing service with a hundred thousand drivers is useless if the dispatch system sends the wrong car to the wrong location.
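The diminishing-returns shape this formula describes can be checked numerically. The sketch below is a toy illustration, assuming the concave form $U_b = \alpha N_s^{\beta}$ with invented values for $\alpha$ and $\beta$, not an empirical calibration:

```python
# Illustrative sketch of diminishing cross-side returns, assuming the
# concave form U_b = alpha * N_s**beta with beta < 1.
# alpha and beta are made-up values, not empirical estimates.

def buyer_utility(n_sellers: float, alpha: float = 1.0, beta: float = 0.5) -> float:
    """Cross-side utility for the buy side as a function of seller count."""
    return alpha * n_sellers ** beta

# Marginal value of one more seller at different supply densities.
for n in (10, 1_000, 100_000):
    marginal = buyer_utility(n + 1) - buyer_utility(n)
    print(f"sellers={n:>7,}  marginal utility of one more seller: {marginal:.6f}")
```

The marginal utility of the next seller collapses as density grows, which is exactly why raw participant counts stop being the binding constraint.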
We are witnessing a structural transition: from network-density platforms to algorithmic-matching platforms. The implications for market structure, competitive strategy, regulation, and defensibility are not incremental — they are categorical.
Why the Two-Sided Model Is Obsolete
The Rochet-Tirole model rests on several assumptions that no longer hold in the current environment:
Assumption 1: Homogeneous participants. Classical two-sided theory treats each "side" as a relatively uniform group. Buyers are buyers. Sellers are sellers. But modern platforms deal with radical heterogeneity. On Uber Eats, a user searching for ramen at 11pm in Brooklyn has nothing in common with a user ordering office catering for 40 in Dallas at 10am. Treating them as the same "side" collapses meaningful distinctions into a useless abstraction.
Assumption 2: Cross-side externalities dominate. The theory assumes that value flows primarily across sides — from buyers to sellers and back. But on platforms like TikTok, the most powerful effects are within-side and algorithmic. A new creator on TikTok does not benefit because there are more viewers in some abstract aggregate sense. A new creator benefits because the recommendation algorithm can test their content against micro-segments of the audience and find a match. The algorithm is the network effect.
Assumption 3: More is better. In the classical model, adding participants to either side always increases total platform value. This is empirically false for most mature platforms. After a certain density threshold, adding more restaurants to a food delivery app does not make a user's experience meaningfully better — it may make it worse through choice overload. What makes it better is a smarter ranking of the restaurants already there.
Assumption 4: Platforms are intermediaries. The two-sided model imagines the platform as a relatively passive meeting point — a bazaar where buyers find sellers. Modern algorithmic platforms are active allocators. They do not merely connect; they decide. They choose what you see, when you see it, and what you never see at all. This is not intermediation. This is curation, allocation, and in many cases, market-making.
Table 1: Classical Two-Sided Market Assumptions vs. Current Realities
| Classical Assumption | Current Reality | Strategic Implication |
|---|---|---|
| Homogeneous user groups | Radical heterogeneity within each side | Segmented matching beats mass aggregation |
| Cross-side effects dominate | Algorithmic and within-side effects dominate | Investment in ML > investment in growth marketing |
| More participants = more value | Quality of matching > quantity of participants | Data moats beat scale moats |
| Platform as intermediary | Platform as active allocator | Algorithm design is the core strategic decision |
| Two distinct sides | Multiple overlapping participant types | Multi-sided optimization replaces bilateral thinking |
Algorithmic Matching as the New Network Effect
Here is the argument distilled to its essence: the algorithm is the new network effect.
In a traditional network effect, value compounds because each new user directly increases the utility of the network for existing users. More people on the telephone network means more people you can call. This is Metcalfe's Law territory — value proportional to the square of the number of connections.
In an algorithmic network effect, matching efficiency can be expressed as:

$$M(t) = f\big(N(t)\, q,\; d\big)$$

where $M(t)$ is matching efficiency, $N(t)$ is the number of interactions at time $t$, $q$ is the signal quality per interaction, $d$ is the model's parameter dimensionality, and $f$ is increasing in the accumulated signal $N(t)\, q$. Value compounds not because of user count, but because each interaction generates data that improves the algorithm's ability to match, recommend, or allocate. The feedback loop is: more interactions produce more data, better data trains a better model, a better model produces better matches, better matches produce more interactions. The user count is an input to this loop, but it is not the loop itself.
This distinction matters enormously for competitive dynamics. A traditional network effect creates a barrier that scales with user count — you cannot replicate a billion-user social graph overnight. An algorithmic network effect creates a barrier that scales with data quality and model sophistication — and these can sometimes be achieved with a fraction of the user base if the data is rich enough and the model architecture is superior.
There is a crossover point. Early on, traditional network effects are stronger — raw user count matters when you are trying to achieve basic liquidity. But once a platform reaches sufficient density, the algorithmic effect compounds faster and produces a wider moat. This is why TikTok, with fewer users than Facebook in 2020, was already producing higher engagement per user — its algorithmic matching was simply better.
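A toy simulation makes the crossover concrete. Every coefficient below is illustrative, chosen only to reproduce the qualitative shape: a cross-side network effect that saturates against an algorithmic effect that compounds on accumulated interaction data:

```python
# Toy model of the crossover between a saturating cross-side network
# effect and a data-compounding algorithmic effect. All coefficients
# are invented, chosen only to exhibit the qualitative shape.

def simulate(steps: int = 50):
    users, data = 100.0, 0.0
    history = []
    for t in range(steps):
        users *= 1.05                                # user base grows 5% per step
        data += users * 3.0                          # every user generates interaction data
        network_value = 1000 * users / (users + 500) # cross-side effect saturates with density
        algo_value = 2.0 * data ** 0.6               # matching quality compounds on data
        history.append((t, network_value, algo_value))
    return history

history = simulate()
crossover = next(t for t, nv, av in history if av > nv)
print(f"algorithmic value overtakes network value at step {crossover}")
```

At step 0 the network term dominates; by the end of the run the algorithmic term does, with the crossover occurring once accumulated data is large enough for the compounding term to win.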
The implications cascade. If algorithmic quality is the primary source of value, then:
- Talent in machine learning and data science is more strategically valuable than growth marketing.
- Data infrastructure investment yields higher returns than user acquisition spending past a certain threshold.
- A platform with fewer but more engaged users can generate more value than one with a massive but passive user base.
Data Network Effects vs. Traditional Network Effects
It is worth drawing the distinction precisely. Traditional network effects and data network effects are often conflated, but they operate through different mechanisms and produce different competitive structures.
Traditional (direct) network effects: Each additional user makes the product more valuable for every other user. The telephone. Fax machines. WhatsApp. The value is in the connections between users.
Traditional (cross-side) network effects: Each additional user on one side makes the product more valuable for users on the other side. eBay circa 2004. Visa's merchant-cardholder dynamics. The value is in the bilateral liquidity.
Data network effects: Each additional interaction produces data that improves the product for all users, regardless of whether those users are directly connected. Google Search. Netflix recommendations. Spotify Discover Weekly. The value is in the accumulated intelligence of the system.
The critical difference is that data network effects are mediated by a model. The raw data is useless without the algorithm that processes it. This means two platforms with identical datasets but different models will produce wildly different value. The model is not a commodity — it is the core differentiator.
The Multi-Sided Platform: Beyond Buyers and Sellers
The very language of "two-sided" markets has become a constraint on thinking. Consider Uber in 2026. The platform connects:
- Riders seeking transportation
- Drivers providing transportation
- Restaurants offering food through Uber Eats
- Couriers delivering that food
- Advertisers bidding for placement within the app
- Freight shippers and carriers through Uber Freight
- Regulators operating within compliance frameworks
- Autonomous vehicle operators through Uber's AV partnerships
- Financial service providers through Uber's payment and lending products
Calling this a "two-sided market" is like calling a symphony an argument between two instruments. The platform is an n-sided optimization problem where the algorithm must balance the utility functions of multiple, partially overlapping, sometimes conflicting participant groups simultaneously.
This multi-sided reality demands a fundamentally different analytical framework. The relevant question is no longer "which side should we subsidize?" (the classic two-sided pricing problem from Rochet and Tirole). The relevant questions are: Which participant interactions generate the most data value? Where are the cross-group synergies that the algorithm can compound? Which constraints (regulatory, supply, demand) are binding at any given moment?
Table 2: Platform Complexity — From Two-Sided to Multi-Sided
| Platform | Participant Types | Primary Matching Mechanism | Classical Model Fit |
|---|---|---|---|
| Uber (2015) | Riders, Drivers | GPS-based dispatch | Strong fit |
| Uber (2026) | Riders, Drivers, Restaurants, Couriers, Advertisers, AV Partners, Freight, Finance | Multi-objective ML allocation | Poor fit |
| TikTok | Creators, Viewers, Advertisers, Music Labels, Commerce Sellers | Content-graph recommendation | Very poor fit |
| Amazon | Buyers, Sellers, Advertisers, AWS Clients, Logistics Partners, Content Creators | Search ranking + personalization | Poor fit |
| Airbnb | Guests, Hosts, Experiences Providers, Advertisers, Local Governments | Trust-weighted matching + pricing ML | Moderate fit |
The Algorithmic Moat: Why Data Quality Beats User Quantity
For two decades, the dominant strategy playbook for platform companies was: grow fast, subsidize the harder side, reach critical mass, and then monetize. The logic was sound under two-sided theory — the network effect would create a self-reinforcing barrier. Whoever got big first would stay big.
This playbook is failing with increasing frequency. Consider: ClassPass was not the largest fitness marketplace by member count when it overtook competitors — it was the one whose algorithm best predicted which classes a user would attend and enjoy. Spotify does not have the most music (that distinction arguably belongs to platforms with more liberal upload policies) — it has the best personalization engine. Google did not win search because it had the most users first; it won because PageRank was a fundamentally better algorithm, and user data then compounded that advantage.
The "algorithmic moat" thesis states: defensibility in platform businesses now comes primarily from the quality and uniqueness of the data-algorithm feedback loop, not from the raw size of the user base.
This has three components:
1. Data uniqueness. Data that only your platform generates — proprietary behavioral signals, interaction patterns, outcome data — is worth more than commodity data that any competitor can acquire. Uber's knowledge of how specific riders behave in specific weather conditions at specific times is data no one else has. This is more defensible than the fact that Uber has millions of riders.
2. Model architecture. The same data fed into a superior model architecture produces disproportionately better outcomes. This is why platform companies are now hiring ML researchers the way they once hired growth hackers. The model is not a utility — it is the product.
3. Feedback loop velocity. How quickly new data is integrated into the model and affects real-time decisions determines how fast the moat compounds. A platform that retrains its recommendation model weekly will fall behind one that updates in real time, even if the weekly platform has more total data.
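The cadence argument can be made concrete with a toy model in which each retrain yields diminishing returns on the batch of data it sees. Under that assumption (the square-root term and all constants are invented for illustration), a platform retraining daily on less data outpaces one retraining weekly on three times as much:

```python
# Toy model of feedback loop velocity: model quality improves only at
# retrain time, with diminishing returns per batch. Constants invented.

def moat_after(days: int, retrain_every: int, daily_data: float) -> float:
    """Cumulative model quality when retraining every `retrain_every` days."""
    quality, buffer = 0.0, 0.0
    for day in range(1, days + 1):
        buffer += daily_data
        if day % retrain_every == 0:
            quality += buffer ** 0.5   # concave returns on each retrain batch
            buffer = 0.0
    return quality

# A platform with a daily cadence vs. one with 3x the data retrained weekly.
fast = moat_after(days=70, retrain_every=1, daily_data=1.0)
slow = moat_after(days=70, retrain_every=7, daily_data=3.0)
print(fast, slow)   # fast=70.0, slow≈45.8: cadence beats raw volume here
```

Because per-batch returns are concave, many small updates accumulate more quality than a few large ones, which is the mechanism behind the claim above.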
The Cold Start Problem, Reconsidered
Andrew Chen's articulation of the cold start problem — how do you create value on a platform before you have the users whose presence creates value? — was framed almost entirely within two-sided theory. Solve the chicken-and-egg problem, and the network effect takes over.
Under an algorithmic model, the cold start problem looks different. The question is not merely "how do I get users on both sides?" but "how do I generate enough data, fast enough, to train a matching algorithm that produces value before users churn?"
This reframing changes the solution set. Classical cold start solutions focused on single-player utility (make the product useful even without the network — Instagram as a photo filter app, OpenTable as a reservation management tool). Algorithmic cold start solutions focus on data bootstrapping: using synthetic data, transfer learning from adjacent domains, human-in-the-loop curation that generates training signal, or aggressive exploration strategies that sacrifice short-term match quality for long-term model improvement.
Consider how Spotify handled cold start for podcast recommendations. It did not wait for millions of podcast listening sessions to accumulate. It used collaborative filtering signals from music listening behavior — the hypothesis being that musical taste is predictive of podcast preference. This cross-domain transfer of data was an algorithmic cold start strategy, not a network-density strategy.
The new cold start playbook:
- Bootstrap with adjacent data. Use signals from related domains or public datasets to initialize the model.
- Design for data density. Build product mechanics that generate high-signal interactions early. Every tap, scroll, pause, and skip is training data.
- Accept short-term inefficiency for long-term learning. Run exploration strategies (show users things the model is uncertain about) even though it hurts immediate conversion metrics.
- Use human curation as a bridge. Editorial or expert curation generates implicit training data (what good looks like) while the algorithmic model matures.
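The exploration bullet can be sketched with a minimal epsilon-greedy loop, a standard bandit strategy for trading short-term conversion against learning. The item conversion rates below are invented:

```python
import random

# Minimal epsilon-greedy sketch of "accept short-term inefficiency for
# long-term learning": random exploration hurts immediate conversion but
# identifies the best item. True conversion rates are made up.

def run(epsilon: float, true_rates: list[float], rounds: int = 5000, seed: int = 0):
    rng = random.Random(seed)
    shows = [0] * len(true_rates)
    hits = [0] * len(true_rates)
    conversions = 0
    for _ in range(rounds):
        if rng.random() < epsilon:                 # explore: show a random item
            i = rng.randrange(len(true_rates))
        else:                                      # exploit: best observed rate so far
            i = max(range(len(true_rates)),
                    key=lambda j: hits[j] / shows[j] if shows[j] else 0.0)
        shows[i] += 1
        if rng.random() < true_rates[i]:
            hits[i] += 1
            conversions += 1
    best_guess = max(range(len(true_rates)),
                     key=lambda j: hits[j] / shows[j] if shows[j] else 0.0)
    return best_guess, conversions / rounds

rates = [0.02, 0.05, 0.11, 0.04]            # item 2 is actually best
print(run(epsilon=0.1, true_rates=rates))   # modest exploration finds item 2
```

With no exploration the loop can lock onto whichever item converts first; a small epsilon keeps generating training signal on uncertain items until the model's estimates separate.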
TikTok's Content Graph vs. the Social Graph
TikTok's ascent is the clearest empirical proof that algorithmic matching has supplanted classical network effects as the dominant force in platform competition.
Facebook and Instagram are built on the social graph — your connections determine your feed. The network effect is direct and interpersonal: the platform is more valuable because your friends are on it. This is a textbook network effect, and it created one of the most durable competitive positions in technology history.
TikTok is built on the content graph — an algorithmic representation of your interests, inferred from your behavior, matched against a universe of content. You do not need to follow anyone. You do not need to know anyone. The algorithm finds what you want before you know you want it.
The strategic consequences are severe for social-graph platforms:
Portability of value. On Facebook, your social graph is locked in — moving to a competitor means abandoning your connections. On TikTok, there is nothing to port. Your "graph" exists only as a model state inside TikTok's servers. But paradoxically, this makes TikTok more defensible, not less, because the value is in the model's accumulated understanding of you, which no competitor can replicate without your behavioral history on their platform.
Speed of value creation. A new user on Facebook needs to add friends, join groups, and build a social context before the feed becomes interesting. A new user on TikTok gets a compelling feed within minutes — sometimes within seconds — because the algorithm begins learning from the very first interaction. The time-to-value is orders of magnitude shorter.
Creator economics. On social-graph platforms, distribution is tied to follower count. Established creators have entrenched advantages. On TikTok, distribution is tied to content performance as judged by the algorithm. A first-time creator can reach millions if the content resonates. This makes TikTok more attractive to new creators, which increases content supply, which gives the algorithm more to work with — a different kind of flywheel, one driven by algorithmic curation rather than network density.
This is not merely a product design choice. It represents a fundamental shift in how platform value is structured. The social graph is a stock — it accumulates over time and creates switching costs through accumulated connections. The content graph is a flow — it generates value through continuous real-time matching, and its "switching cost" is the machine learning model's personalized understanding of each user.
The Decay of Cross-Side Network Effects
Classical cross-side network effects exhibit diminishing returns that the original theory underestimated. This decay follows a predictable pattern:
Phase 1: Liquidity hunger. When a marketplace is young, every additional participant on either side materially improves the experience. The first ten restaurants on a food delivery app are each enormously valuable to users. The network effect is strong and genuinely cross-side.
Phase 2: Sufficiency plateau. After reaching adequate supply density, the marginal value of additional participants drops sharply. The difference between having 200 restaurants and 250 restaurants in a mid-sized city is nearly invisible to the average user. The cross-side network effect flattens.
Phase 3: Algorithmic dominance. At maturity, the quality of the experience is almost entirely determined by the ranking, matching, and pricing algorithms. Whether a ride-hailing platform has 5,000 or 50,000 drivers in a metro area matters less than whether the dispatch algorithm can minimize wait times and route efficiently. The network effect has been absorbed into the algorithm.
Phase 4: Congestion and negative effects. Beyond a certain point, adding more supply can degrade the experience for existing participants. More sellers competing for the same buyer attention. More drivers fragmenting the same demand. More content creators making discovery harder. These are negative same-side effects that overwhelm the weakening cross-side effects.
This decay pattern explains a puzzle that classical theory cannot: why do well-funded marketplace startups with strong supply sometimes fail to build defensibility? The answer is that they overinvested in a network effect that decays and underinvested in an algorithmic capability that compounds.
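The four phases can be sketched as a concave liquidity term minus a congestion cost that grows with supply density. The functional forms and constants below are illustrative only; the point is the shape of the marginal value curve:

```python
# Toy model of the four-phase decay: concave cross-side liquidity minus
# a congestion cost growing linearly with supply. Constants invented.

def platform_value(suppliers: int) -> float:
    liquidity = 100 * suppliers ** 0.5   # phases 1-2: strong, then flattening
    congestion = 0.08 * suppliers        # phase 4: choice overload, demand fragmentation
    return liquidity - congestion

# Marginal value of one more supplier at increasing densities.
for n in (100, 10_000, 390_625, 1_000_000):
    marginal = platform_value(n + 1) - platform_value(n)
    print(f"suppliers={n:>9,}  marginal value of one more: {marginal:+.4f}")
```

In this toy parameterization the marginal value starts large, flattens, hits zero near 390,625 suppliers, and then turns negative: phase 3 giving way to phase 4.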
Winner-Take-Most Dynamics Are Changing
The classical prediction from two-sided market theory is winner-take-all, or at least winner-take-most. If network effects are the primary source of value, then the largest platform in a market should be unassailable — its network is bigger, therefore more valuable, therefore it attracts more users, therefore it gets bigger. A self-reinforcing monopoly.
This prediction has held in some markets (search, social networking pre-TikTok, mobile operating systems) but failed in others where theory predicted it should hold. The ride-hailing market was supposed to be winner-take-all. It is not. Lyft persists in the US. Bolt persists in Europe. Ola persists in India. Local players survive and sometimes thrive despite Uber's scale advantages.
Why? Because the network effect that matters in ride-hailing — algorithmic dispatch efficiency — is local, not global. Uber's data from Lagos does not make the algorithm better in London. And within a single city, a smaller competitor with a better local algorithm and sufficient driver density can provide an equivalent or superior experience.
This locality of algorithmic advantage fragments markets that should, under classical theory, consolidate. It also creates opportunities for category-specific challengers. A vertical marketplace focused exclusively on one domain — say, rare book sales — can build a more specialized algorithm than a horizontal marketplace, even with a tiny fraction of the users.
The new competitive structure is not winner-take-all but winner-take-most-in-each-algorithmic-niche. Dominance is fragmented along the lines of data specialization and model specificity, not along the lines of total user count.
Platform Envelopment Strategies
If algorithmic moats are local and domain-specific, how do dominant platforms maintain and extend their positions? Through platform envelopment — using their existing algorithmic capabilities, data assets, and user relationships to enter adjacent markets.
This strategy, first described by Eisenmann, Parker, and Van Alstyne, has evolved in the algorithmic age. Classical envelopment was about bundling — Microsoft using Windows to envelop the browser market, for instance. Algorithmic envelopment is about data leverage across domains.
When Uber moved from ride-hailing to food delivery, the key asset it carried was not its user base (which had to be re-engaged for a different use case) but its logistical intelligence — models for driver allocation, route optimization, demand prediction, and surge pricing. These models, trained on ride-hailing data, transferred partially to food delivery. The algorithmic capabilities were the actual vector of envelopment.
Similarly, Amazon's move from e-commerce to advertising was not primarily about cross-selling to a captive audience. It was about applying its product recommendation and user behavior models — trained on purchase data — to ad targeting. The algorithm was the weapon of envelopment, not the user base.
This reframing has implications for antitrust analysis. If the competitive threat of platform dominance comes from algorithmic capabilities rather than user lock-in, then regulators need to think about data portability and model interoperability, not just user switching costs.
Table 3: Algorithmic Envelopment — How Platform Capabilities Transfer Across Domains
| Platform | Origin Domain | Enveloped Domain | Algorithmic Asset Transferred |
|---|---|---|---|
| Uber | Ride-hailing | Food delivery, Freight, Finance | Dispatch optimization, demand prediction, route planning |
| Amazon | E-commerce | Advertising, Cloud, Logistics-as-a-Service | Recommendation engine, purchase prediction, supply chain ML |
| Google | Search | Advertising, Maps, Cloud AI, Commerce | Intent modeling, ranking algorithms, NLP |
| ByteDance | Short video (TikTok) | E-commerce (TikTok Shop), Search, Education | Content recommendation, engagement prediction, creator matching |
| Apple | Hardware | Services, Advertising, Finance | Privacy-centric user modeling, on-device ML, App Store curation |
Regulatory Implications: DMA, Antitrust, and the Algorithmic Question
The European Union's Digital Markets Act (DMA), which took full effect in 2024, represents the most significant regulatory intervention in platform markets since antitrust action against Microsoft in the early 2000s. But the DMA was designed with two-sided market theory in mind — its core obligations focus on interoperability, data portability, and preventing self-preferencing. These are important provisions, but they address the wrong competitive bottleneck if the real source of dominance is algorithmic.
Consider the DMA's data portability requirements. Allowing users to export their data from a platform addresses the switching cost created by social graph lock-in. But it does not address the switching cost created by the platform's trained model. You can port your data, but you cannot port the algorithm's understanding of you. The model state — the accumulated patterns learned from your interactions — is the real lock-in, and it is not portable in any meaningful sense.
This creates a regulatory gap. The most powerful anticompetitive advantage of dominant platforms — their algorithmic intelligence — is invisible to regulatory frameworks designed around user data and market access.
Three regulatory directions are worth considering:
1. Algorithmic transparency. Requiring platforms to disclose the objectives and key parameters of their matching and ranking algorithms. Not the source code (which is legitimately proprietary), but the optimization targets. What is the algorithm maximizing? Engagement? Revenue? User satisfaction? Some weighted combination? This transparency would enable both regulatory oversight and informed user choice.
2. Data dividend models. Recognizing that users collectively create the training data that makes algorithms valuable, and structuring compensation accordingly. This is economically tricky but philosophically sound — if the algorithm is the moat, and the algorithm is trained on user data, then users are co-creators of the moat.
3. Algorithmic interoperability. The most radical option: requiring dominant platforms to expose their algorithmic capabilities through standardized APIs, allowing third-party services to use the same matching and ranking infrastructure. This would be the algorithmic equivalent of requiring a phone company to let competitors use its network.
Building Defensibility in the Algorithmic Age
If the preceding analysis is correct, the strategic playbook for platform builders must change. Here is what defensibility looks like in the algorithmic age:
1. Proprietary data generation. Design product interactions that generate data no competitor can acquire. Every feature should be evaluated not just for user value but for the training signal it produces. Airbnb's review system is not just a trust mechanism — it generates structured data on host quality, guest preferences, and experience outcomes that feed the matching algorithm.
2. Feedback loop acceleration. Compress the time between data generation and model improvement. Real-time learning systems are more defensible than batch-trained ones, because the moat compounds faster. A food delivery platform that updates estimated delivery times based on current traffic data is building defensibility every minute of every day.
3. Multi-sided optimization. Platforms that optimize across more participant types simultaneously create thicker algorithmic moats. Each additional "side" generates new interaction types, new data signals, and new optimization dimensions that a simpler competitor cannot replicate. This is why the most defensible platforms are not two-sided but five-, six-, or seven-sided.
4. Domain depth over domain breadth. The niche algorithm advantage means that going deep in a specific domain can be more defensible than going broad across many domains. A platform that understands the nuances of, say, B2B chemical procurement — the regulatory constraints, the supply chain timing, the quality specifications — has a moat that a horizontal marketplace cannot easily breach.
5. Model architecture as intellectual property. The specific way a platform structures its machine learning pipeline — feature engineering, model architecture, training procedures, serving infrastructure — is a form of intellectual property that is more defensible than patents. It is harder to copy because it is deeply entangled with the specific data the platform generates.
The Algorithmic Marketplace Framework
Drawing these threads together, I propose the Algorithmic Marketplace Framework (AMF) as a replacement for the classical two-sided market model. The AMF evaluates platform strength along five dimensions:
Dimension 1: Data Signal Density — How much usable training signal does each user interaction generate? High-signal platforms (where every interaction reveals preferences, intent, and outcomes) compound faster than low-signal platforms (where interactions are sparse or ambiguous).
Dimension 2: Algorithmic Cycle Time — How quickly does new data improve the matching algorithm and reach production? Measured in minutes (real-time learning), hours (near-real-time), days (daily retraining), or weeks/months (periodic batch training). Shorter cycle times mean faster compounding.
Dimension 3: Side Multiplicity — How many distinct participant types does the platform serve, and how many cross-group interaction types generate data? More sides create richer data and harder-to-replicate optimization problems.
Dimension 4: Data Uniqueness — What fraction of the platform's training data is proprietary (generated only through platform interactions) vs. commodity (available from public or purchasable sources)? Higher proprietary fractions mean deeper moats.
Dimension 5: Model Transferability — How well do the platform's algorithmic capabilities transfer to adjacent domains? High transferability enables envelopment strategies; low transferability means defensibility is domain-locked but potentially deeper within that domain.
This framework does not replace competitive analysis — it structures it. For any platform, plotting its position on these five dimensions reveals where its defensibility is strong, where it is weak, and where competitors or regulators might attack.
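As a sketch of how such an analysis might be operationalized, here is a hypothetical AMF scorer. The dimension names come from the framework above; the 0-5 scale, the unweighted mean, and the example values are all invented, not assessments of any real platform:

```python
from dataclasses import dataclass

# Hypothetical scoring sketch for the five AMF dimensions.
# Scale, aggregation, and example values are invented for illustration.

@dataclass
class AMFProfile:
    signal_density: float      # usable training signal per interaction
    cycle_time: float          # higher = faster data-to-production loop
    side_multiplicity: float   # distinct participant types served
    data_uniqueness: float     # proprietary fraction of training data
    transferability: float     # reuse of models in adjacent domains

    def moat_score(self) -> float:
        """Unweighted mean across the five dimensions (0-5 scale)."""
        dims = (self.signal_density, self.cycle_time, self.side_multiplicity,
                self.data_uniqueness, self.transferability)
        return sum(dims) / len(dims)

    def weakest_dimension(self) -> str:
        """The dimension a competitor or regulator would attack first."""
        dims = vars(self)
        return min(dims, key=dims.get)

# Invented example: strong loop velocity, weak cross-domain transfer.
p = AMFProfile(signal_density=4.5, cycle_time=5.0, side_multiplicity=3.0,
               data_uniqueness=4.0, transferability=2.0)
print(p.moat_score(), p.weakest_dimension())
```

Even this crude version surfaces the framework's main use: an aggregate score says where defensibility stands overall, while the weakest dimension says where it is exposed.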
Conclusion: The Model Is the Moat
The intellectual model that has governed platform strategy for two decades — attract both sides, reach critical mass, let the network effect do the rest — is not wrong so much as it is incomplete. It describes the first chapter of platform competition. We are now in the second chapter, and the rules are different.
In this chapter, the algorithm is the primary source of value creation, competitive advantage, and defensibility. User count still matters, but as an input to data generation, not as a direct source of network effects. The platforms that win will not be the ones with the most participants. They will be the ones whose algorithms best understand what each participant needs and can deliver it with the least friction.
The transition from network-density platforms to algorithmic-matching platforms is not a prediction about the future. It is a description of the present that most strategic frameworks have not yet caught up to. The Rochet-Tirole model was built for a world of credit cards and newspapers. We live in a world of content graphs and real-time dispatch optimization. The theory must evolve, or it will mislead.
For founders building platforms: your moat is your model. Invest accordingly.
For incumbents defending positions: your user base is a depreciating asset if your algorithm is not improving. The network effect that got you here will not keep you here.
For regulators assessing market power: the data is the evidence, but the algorithm is the weapon. Look at the model, not just the market share.
For economists updating the theory: Rochet and Tirole gave us the grammar. Now we need the vocabulary for a world where the platform is not a bazaar but a brain.
Further Reading
- Two-Sided Markets on Wikipedia — Rochet and Tirole's framework
- TikTok Algorithm — Content graph vs social graph
- EU Digital Markets Act — Platform regulation
References

- Rochet, J.C. and Tirole, J. (2003). "Platform Competition in Two-Sided Markets." Journal of the European Economic Association, 1(4), 990-1029.
- Eisenmann, T., Parker, G., and Van Alstyne, M. (2006). "Strategies for Two-Sided Markets." Harvard Business Review, 84(10), 92-101.
- Parker, G., Van Alstyne, M., and Choudary, S.P. (2016). Platform Revolution: How Networked Markets Are Transforming the Economy. W.W. Norton.
- Chen, A. (2021). The Cold Start Problem: How to Start and Scale Network Effects. Harper Business.
- Hagiu, A. and Wright, J. (2015). "Multi-Sided Platforms." International Journal of Industrial Organization, 43, 162-174.
- Cusumano, M., Gawer, A., and Yoffie, D. (2019). The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power. Harper Business.
- Evans, D.S. and Schmalensee, R. (2016). Matchmakers: The New Economics of Multisided Platforms. Harvard Business Review Press.
- Zhu, F. and Iansiti, M. (2019). "Why Some Platforms Thrive and Others Don't." Harvard Business Review, 97(1), 118-125.
- European Commission (2022). Digital Markets Act. Regulation (EU) 2022/1925.
- Acemoglu, D. and Restrepo, P. (2019). "The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand." Cambridge Journal of Regions, Economy and Society, 12(1), 25-35.
- Gregory, R.W., Henfridsson, O., Kaganer, E., and Kyriakou, H. (2021). "The Role of Artificial Intelligence and Data Network Effects for Creating User Value." Academy of Management Review, 46(3), 534-551.
- Varian, H. (2019). "Artificial Intelligence, Economics, and Industrial Organization." In The Economics of Artificial Intelligence, University of Chicago Press.