TL;DR: Most companies claiming a "data moat" do not have one. A real data network effect requires three conditions: the data must be proprietary (not replicable), the model must improve with more data at a rate that matters, and the improved model must create value users cannot get elsewhere. Most e-commerce companies fail the first condition -- their data advantages are temporary and diminish logarithmically with scale.
The Moat That Isn't
Every pitch deck in 2025 featured the same slide. A flywheel diagram. Arrows pointing in circles. The word "data" somewhere near the center. The implied promise: once we collect enough information about our users, no competitor can touch us.
This is, in most cases, a comfortable fiction.
The concept of a data moat, where proprietary information compounds into an unassailable competitive position, is real. It exists. But it exists in far fewer places than investor presentations would suggest. The distance between "we have a lot of data" and "our data creates a structural advantage that compounds over time" is roughly the distance between owning a library card and being a novelist. Possession of the raw material is not the thing.
What follows is an attempt to separate the genuine article from the imitation. To identify the specific conditions under which data creates real network effects in e-commerce, and, equally important, the far more common conditions under which it does not.
Traditional Network Effects vs. Data Network Effects
Traditional network effects are well understood, though as we have explored in our analysis of multi-sided algorithmic marketplaces, the classical model is fracturing. The telephone becomes more valuable as more people own telephones. Facebook becomes more useful as more of your friends join. The mechanism is direct: each additional participant increases the value of the network for every existing participant. Metcalfe's law gives us the mathematics (value proportional to the square of connected users), and while the precise exponent is debated, the direction is not.
Data network effects operate through a different mechanism entirely. The claim is that each additional user generates data, that data improves a product (usually through machine learning), and the improved product attracts more users. Formally, the data network effect function can be expressed as:
$$V(n) = g\big(q(d_1, d_2, \ldots, d_n)\big)$$

where $V(n)$ is product value with $n$ users, each contributing data $d_i$; $q$ maps the pooled data to model quality; and $g$ maps model quality to user-perceived value. The value doesn't flow directly between users. It flows through an intermediary: the model.
This distinction matters more than most analysis acknowledges.
In a traditional network effect, the relationship between users and value is super-linear. In a data network effect, it is sub-linear, and this is the inconvenient truth that separates durable moats from temporary head starts.
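The contrast can be sketched numerically. In the toy model below, the constants (`lam`, `g`, and the saturation form itself) are illustrative assumptions, not estimates of any real platform:

```python
import math

def metcalfe_value(n: int, c: float = 1.0) -> float:
    """Traditional network effect: value grows roughly with n^2 (super-linear)."""
    return c * n * n

def data_network_value(n: int, p_max: float = 1.0,
                       lam: float = 1e-5, g: float = 100.0) -> float:
    """Data network effect: value flows through model quality,
    which saturates as pooled data grows (sub-linear)."""
    model_quality = p_max * (1 - math.exp(-lam * n))  # plateaus at p_max
    return g * model_quality

# Doubling users quadruples Metcalfe value at any scale, but barely
# moves a data-driven product once the model nears its quality ceiling.
for n in (10_000, 100_000, 1_000_000):
    ratio_metcalfe = metcalfe_value(2 * n) / metcalfe_value(n)
    ratio_data = data_network_value(2 * n) / data_network_value(n)
    print(n, round(ratio_metcalfe, 2), round(ratio_data, 2))
```

The point of the sketch is the shape, not the numbers: the Metcalfe ratio stays at 4x forever, while the data-driven ratio collapses toward 1x as the model saturates.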
Gregory, Henfridsson, Kaganer, and Kyriakou (2021) formalized this distinction in their study of data-driven competitive advantages. Their central finding deserves more attention than it receives: data network effects are contingent, not automatic. They depend on specific structural conditions that most companies do not satisfy.
Most e-commerce companies satisfy the first condition weakly, the second condition temporarily, and the third condition not at all.
The Data Flywheel: A Beautiful Theory
The data flywheel is the canonical narrative of data network effects. It goes like this:
- Users interact with your product
- Interactions generate behavioral data
- Behavioral data trains machine learning models
- Better models create a better user experience
- Better experience attracts and retains more users
- Return to step 1
On a whiteboard, this is irresistible. In practice, every link in this chain contains a hidden assumption that frequently fails.
The link between steps 2 and 3 assumes that more data actually improves model performance at the current margin. The link between steps 3 and 4 assumes that model improvement translates into a user-perceivable difference. The link between steps 4 and 5 assumes that the user-perceivable difference is large enough relative to other switching factors (price, brand, convenience, habit) to actually change behavior.
The failure rate at each link compounds. If each link has a 60% chance of functioning as theorized, and that is generous for most e-commerce businesses, the probability of the full flywheel operating is 0.6^5, roughly 7.8%. This is not a modeling exercise. It is an explanation for why most data flywheel claims collapse under scrutiny.
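The arithmetic is easy to check, granting the simplifying assumption that the links fail independently:

```python
# Five links in the flywheel (1->2, 2->3, 3->4, 4->5, 5->1), each with
# an assumed 60% chance of functioning as theorized.
p_link = 0.6
n_links = 5
p_full_flywheel = p_link ** n_links
print(f"{p_full_flywheel:.4f}")  # 0.0778, i.e. roughly 7.8%
```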
The Data Plateau: Where Most Flywheels Stall
The most important concept in data network effects is one that rarely appears in the optimistic version of the story: diminishing returns.
Machine learning models do not improve linearly with data volume. They follow a characteristic curve: steep initial gains that flatten dramatically as data scales. This is not a bug; it is a mathematical property of statistical learning.
Consider a recommendation system for an e-commerce platform. The first 10,000 purchase events might improve recommendation accuracy from random (call it 2% click-through) to meaningful (8%). The next 100,000 events might push it to 12%. The next million to 14%. The next ten million to 14.8%.
The "data plateau" is the inflection point where marginal data stops producing meaningful model improvement. The diminishing-returns pattern follows a saturating curve:

$$P(D) = P_{\max}\left(1 - e^{-\lambda D}\right)$$

where $P(D)$ is model performance at data volume $D$, $P_{\max}$ is the asymptotic performance ceiling, and $\lambda$ controls how quickly returns diminish. The marginal improvement $dP/dD = \lambda P_{\max} e^{-\lambda D}$ approaches zero well before most companies exhaust their data collection capacity.
For most e-commerce applications, this point arrives far sooner than companies expect, or admit.
This has a counterintuitive implication. Data advantages are often strongest for mid-stage companies competing against early-stage entrants, and weakest for large incumbents competing against well-funded challengers. The challenger does not need to match the incumbent's data volume. They need only to reach the plateau themselves, and in many categories, the plateau is achievable with a few million transactions.
When Proprietary Data Actually Matters
If most data advantages erode at scale, where do genuine, durable data moats exist? The answer is domain-specific, and three categories stand out.
1. Recommendation Systems with Long-Tail Catalogs
When a catalog contains millions of items, many with thin interaction histories, proprietary data on obscure items retains value far longer than data on popular ones. The head of the distribution (bestsellers, trending products) is well-served by any moderately sized dataset. But the long tail, where the majority of catalog diversity lives, remains data-starved even at massive scale.
This is where Amazon's recommendation system draws real advantage -- and where the platform's information extraction becomes most strategically valuable. Not from knowing that people who buy popular thriller novels also buy other popular thriller novels; any bookstore knows this. But from knowing the purchase affinities between a niche polymer chemistry textbook and a specific set of lab equipment. That signal exists nowhere else, because the co-purchase events only happen at Amazon's scale and breadth.
2. Search Ranking in Specialized Domains
Generic product search is a largely solved problem. But search in domains with ambiguous intent, such as fashion ("something for a summer wedding, not too formal"), home improvement ("fix a wobbly fence gate"), or medical supplies, benefits enormously from proprietary click-through and conversion data tied to specific queries.
The key distinction: the data must connect queries to outcomes, not just queries to clicks. A company that knows which search results led to purchases and returns has a qualitatively different signal than one that only knows which results were clicked.
3. Fraud Detection and Trust Systems
Fraud detection is perhaps the purest example of a durable data moat. Fraudulent patterns are adversarial; they change specifically in response to detection. This means historical fraud data has compounding value: each detected pattern teaches the model something that new entrants cannot learn without first suffering the fraud.
PayPal's fraud detection system processes signals from hundreds of millions of accounts across dozens of countries. A new payment processor starting today would need to absorb billions in fraud losses to accumulate a comparable training set, and by that time, the fraud patterns would have evolved again.
Amazon's Data Moat: A Decomposition
Amazon is the most frequently cited example of a data moat in e-commerce. It is also the most frequently misunderstood. The moat is real, but it is not monolithic; it is a composite of several distinct data advantages, each with different characteristics.
Component 1: Cross-Category Purchase Graphs. Amazon sees the correlations between purchases in seemingly unrelated categories: camera equipment and hiking gear, baby monitors and sleep supplements, specific tool brands and truck accessories. This purchase graph, spanning hundreds of millions of customers across millions of SKUs, is genuinely unreplicable through any means other than operating a similarly broad marketplace for a similarly long period.
Component 2: Search-Intent Mapping. Billions of searches with associated click, cart, and purchase outcomes create a mapping between natural language intent and product satisfaction that no isolated dataset can approximate. This is particularly valuable for ambiguous queries where intent is not obvious from keywords alone.
Component 3: Seller Performance Data. Amazon maintains detailed reliability metrics on millions of third-party sellers: fulfillment speed, defect rates, customer service quality, return rates by category. This dataset directly affects the Buy Box algorithm and is valuable precisely because it is adversarial: sellers who underperform are penalized based on data they cannot observe or manipulate.
Component 4: Logistics Optimization Data. Historical shipping, warehousing, and demand data at the SKU-location level enables Amazon's delivery speed. This is less a data network effect and more a data scale advantage: it improves with volume but does not exhibit the circular reinforcement of a true flywheel.
Notice what this decomposition reveals: the components with the highest competitive impact are not always the hardest to replicate, and the components that are hardest to replicate are not always the most impactful. The moat is strongest at the intersection of cross-category purchase data and search-intent mapping, and weakest in areas like pricing history and basic review data, where competitors or third-party aggregators can assemble comparable datasets.
The Data Half-Life Problem
Data is not a durable good. It decays. And the rate of decay varies dramatically by type, a fact that most data moat analyses ignore entirely.
Behavioral data in e-commerce has a half-life. User preferences shift. Fashion cycles turn over. Product catalogs rotate. The purchase patterns learned from 2023 data may be actively misleading in 2026. A recommendation model trained on three-year-old clickstream data is not just stale; it is wrong in ways that actively harm the user experience.
This creates a structural problem for data moat claims. If the useful life of behavioral data is 12-24 months, then a data moat must be continuously replenished. The advantage belongs not to whoever has the most historical data, but to whoever has the freshest relevant data at sufficient volume. This is a subtly different, and much harder, position to defend.
Consider the half-lives of different data types in e-commerce:
- Session clickstream data: 2-4 weeks useful life. Click patterns reflect seasonal intent, trending products, and transient browsing contexts. Last month's clickstream is noise.
- Purchase history: 6-18 months useful life, depending on category. Grocery purchase patterns are more stable than fashion.
- Return and sizing data: 2-4 years useful life. Body measurements and fit preferences change slowly. This is part of why size-prediction models generate durable advantages.
- Fraud patterns: 3-6 months for specific tactics, but the meta-patterns (which account types are targeted, which payment vectors are exploited) persist for 1-3 years.
- Product taxonomy and attribute data: 3-5 years. The relationships between product attributes (that a certain fabric weight implies a certain use case, for instance) change slowly.
The implication is clear: any data moat assessment must discount historical data by its type-specific decay rate. A company with ten years of clickstream data does not have ten years of advantage. It has, at best, 18 months, and that only if collection velocity has remained high.
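One way to make that discounting concrete is exponential decay at a type-specific half-life. In the sketch below, the half-life constants are midpoints of the ranges above, and the decay model itself is a simplifying assumption (real decay is messier than a clean exponential):

```python
# Illustrative half-lives in months, taken as midpoints of the
# type-specific ranges discussed above.
HALF_LIFE_MONTHS = {
    "clickstream": 0.75,      # ~2-4 weeks
    "purchase_history": 12,   # 6-18 months, category-dependent
    "sizing": 36,             # 2-4 years
    "taxonomy": 48,           # 3-5 years
}

def effective_value(initial_value: float, age_months: float,
                    data_type: str) -> float:
    """Discount a dataset's training value by exponential decay
    at the type-specific half-life."""
    half_life = HALF_LIFE_MONTHS[data_type]
    return initial_value * 0.5 ** (age_months / half_life)

# Ten-year-old clickstream is effectively worthless; taxonomy data
# from the same period retains meaningful value.
print(round(effective_value(100, 120, "clickstream"), 6))
print(round(effective_value(100, 120, "taxonomy"), 2))
```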
Stitch Fix: Anatomy of a Real Data Network Effect
Stitch Fix is the most instructive case study in e-commerce data network effects, not because it is the most successful company, but because it is the most legible example of the mechanism working as theorized.
The core of Stitch Fix's model is a feedback loop between human stylists and machine learning algorithms. Customers receive curated boxes of clothing. They keep what they like and return what they do not. Each keep/return decision, accompanied by explicit feedback ("too long," "wrong color," "love the fabric"), trains the recommendation system.
Here is what makes Stitch Fix's data genuinely proprietary and defensible:
1. The data is generated through a unique interaction model. There is no equivalent public dataset of "personal styling feedback linked to specific garment attributes and body measurements." This data does not exist in any form that competitors can purchase, scrape, or assemble from other sources.
2. Marginal data improves performance in visible ways. Because the domain (personal fit and style) is highly idiosyncratic and long-tail by nature, additional data points on individual customers produce measurable improvement in recommendation accuracy. The learning curve per customer is steep for the first 3-5 Fix cycles, and the improvement is directly perceptible to the customer.
3. Improvement drives retention. Customers whose Fix quality improves over time retain at significantly higher rates. Stitch Fix reported that customers who received 5+ Fixes had retention rates roughly 2.5x those of customers with only 1-2 Fixes. The flywheel spins because each interaction genuinely makes the next one better in a way the customer can feel.
But Stitch Fix also illustrates the limitations. The company's data moat was strongest in its core personal styling use case and weakest in adjacent categories (home goods, accessories) where the data was thinner and the personalization signal weaker. The moat did not transfer automatically across product categories; it had to be rebuilt in each domain.
The lesson from Stitch Fix is precise: data network effects are real, but they are domain-specific, interaction-model-dependent, and require that the feedback loop closes in a way the customer experiences directly. Strip any of these three properties, and the moat vanishes.
Why Most "AI Companies" Have No Real Data Moat
The proliferation of "AI-powered" e-commerce companies has created a credibility gap. Everyone claims machine learning. Everyone claims data advantages. Almost nobody has them.
Here is a diagnostic framework. A company claiming a data moat must answer three questions affirmatively, with evidence:
Question 1: Is your training data genuinely proprietary?
If the same behavioral signals can be reconstructed from publicly available data (reviews, product listings, price histories), third-party data providers (credit card transaction data, web scraping services), or a competitor with reasonable funding and 18-24 months of operation, then the data is not proprietary in any meaningful sense. It is merely a head start, and head starts are not moats. Real defensibility comes from switching costs that compound independently of data advantages.
Question 2: Does your model performance measurably improve with the data you are currently collecting?
Not "did it improve historically" but: is it improving now, at the current margin? Many companies passed through a phase where data drove rapid model improvement and now sit firmly on the plateau, collecting data that changes nothing about model behavior. The moat existed in 2021 and evaporated by 2024, but the narrative persists because nobody updates the pitch deck.
Question 3: Do users choose your product over alternatives specifically because of data-driven features?
If users choose you for price, brand recognition, convenience, catalog breadth, or any factor unrelated to the ML-driven experience, then the data flywheel is disconnected from the growth engine. You have data, and you have growth, but one is not causing the other.
Most "AI e-commerce companies" fail Question 1: their data is reconstructible. Of those that pass Question 1, most fail Question 2: they have hit the plateau. And of the rare survivors of both, many still fail Question 3: users do not actually choose them for their AI.
Quality vs. Quantity: The Misunderstood Debate
The data moat conversation is dominated by volume. How many users. How many transactions. How many petabytes. This fixation on quantity obscures the more important variable: data quality.
Quality, in this context, has a precise meaning. It is the ratio of signal to noise in a dataset with respect to a specific prediction task. High-quality data is data that, per unit, teaches the model more than average. Low-quality data is data that, per unit, teaches less.
Consider two e-commerce companies, each processing 10 million transactions per month:
Company A sells commodity goods: phone chargers, paper towels, batteries. Purchase decisions are driven overwhelmingly by price and availability. The behavioral data is low-signal: knowing that a customer bought the cheapest USB-C cable tells you almost nothing about their next purchase. Model performance plateaus early because there is little to learn from undifferentiated purchase behavior.
Company B sells specialty goods: artisanal food, curated fashion, niche hobby equipment. Purchase decisions involve taste, expertise, and personal preference. Each transaction carries rich signal about the customer's preferences, knowledge level, and aesthetic sensibility. Model performance continues improving because each data point genuinely constrains the preference space.
At equal transaction volumes, Company B's data is worth an order of magnitude more for training recommendation models. The moat is not in the number of transactions. It is in the information density per transaction.
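Information density can be approximated, crudely, with Shannon entropy over the purchase distribution: the more concentrated the purchases, the less each new transaction teaches. A toy comparison, with both purchase logs fabricated for illustration:

```python
import math
from collections import Counter

def entropy_per_event(purchases: list[str]) -> float:
    """Shannon entropy (bits) of the purchase distribution: a rough
    proxy for how much each transaction constrains preferences."""
    counts = Counter(purchases)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Company A: everyone buys the same few commodity SKUs.
commodity = ["usb_cable"] * 70 + ["paper_towels"] * 20 + ["batteries"] * 10
# Company B: purchases spread across a long tail of specialty items.
specialty = [f"item_{i}" for i in range(100)]

print(round(entropy_per_event(commodity), 2))  # low signal per event
print(round(entropy_per_event(specialty), 2))  # high signal per event
```

Entropy is only a proxy (it ignores, for example, how predictive each purchase is of the next), but it captures the core asymmetry: at equal volume, the diffuse catalog carries several times more bits per transaction.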
This principle, that data moats depend on information density, not volume, explains several otherwise puzzling observations. It explains why Netflix's recommendation system remains strong despite having far fewer users than YouTube. It explains why Spotify's Discover Weekly creates more loyalty than Apple Music's personalization despite comparable catalog sizes. And it explains why many high-volume e-commerce platforms with massive datasets produce mediocre recommendations: they have quantity without quality, volume without signal.
Cold Start Strategies for Building Data Advantages
If data moats are real but rare, how does a new entrant begin building one? The cold start problem, having no data when data is precisely what you need, is the central strategic challenge.
Five approaches, in descending order of effectiveness:
1. Interaction-First Design
Design the core product experience to generate the data you need as a byproduct of normal use. Stitch Fix's keep/return feedback loop is the canonical example. The customer is not doing extra work to provide data; the act of deciding what to keep is the interaction, and it generates exactly the signal the model needs.
The key constraint: the interaction must be intrinsically valuable to the user even before the model is good. If the product only works well after sufficient data accumulates, you have a chicken-and-egg problem. If the product works adequately without data and improves with it, you have a viable cold start strategy.
2. Complementary Data Acquisition
Acquire a dataset adjacent to your target domain and use transfer learning to bootstrap model performance. A fashion recommendation startup might acquire a vintage clothing archive with rich attribute labels, use it to train a visual similarity model, and then fine-tune on actual user behavior as it accumulates.
This does not create a moat; the acquired data is, by definition, available to others. But it accelerates the path to the plateau, after which the proprietary behavioral data begins to accumulate.
3. Explicit Preference Elicitation
Ask users directly. The "style quiz" that many fashion companies employ is an explicit preference elicitation mechanism. It generates labeled data ("I prefer this over that") without requiring transaction history.
The tradeoff is obvious: explicit data collection creates friction. Every question you ask is a potential abandonment point. The art is in designing elicitation that feels like a service (helping me find what I want) rather than a tax (answering questions so your model gets better).
4. Synthetic Data Generation
Use generative models to create synthetic training data that approximates real user behavior. This approach has improved dramatically with modern generative AI but remains limited: synthetic data can help with coverage (filling in sparse regions of the data space) but cannot replace real behavioral signal for preference modeling.
5. Data Partnerships and Consortiums
Pool data across non-competing entities. A group of regional e-commerce companies might share anonymized purchase data to collectively train recommendation models that none could train individually.
The practical barriers are significant (privacy regulations, competitive concerns, technical integration costs), but where they can be overcome, consortiums offer a path for smaller players to reach the plateau collectively.
The Data Moat Assessment Matrix
After analyzing the conditions for real data network effects, the failure modes of the data flywheel, and the dynamics of data decay, we can construct a practical assessment framework. The Data Moat Assessment Matrix (DMAM) evaluates a company's claimed data advantage across five dimensions, each scored on a 1-5 scale.
Scoring interpretation:
- Weighted score 4.0-5.0: Genuine data moat. Rare. Examples: Amazon's long-tail recommendations, PayPal's fraud detection, Stitch Fix's personal styling.
- Weighted score 2.5-3.9: Data advantage, not a moat. Provides a temporary head start that erodes under competitive pressure. Most well-funded e-commerce companies land here.
- Weighted score 1.0-2.4: No meaningful data advantage. The data claim is marketing, not strategy. The majority of self-described "AI-powered" e-commerce companies score in this range.
The matrix forces honesty. Scoring requires specific evidence for each dimension, not aspiration, not projection, but current measurable reality. A company that scores itself honestly and lands below 2.5 has not failed. It has learned something valuable: that its competitive advantage must be built on something other than data.
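A sketch of the scoring mechanics. The dimension names and weights below are hypothetical, reconstructed loosely from the conditions discussed in this article; the matrix's actual dimensions and weights are not reproduced in the text above:

```python
# Hypothetical DMAM dimensions and weights (illustrative only).
DIMENSIONS = {
    "proprietary_generation": 0.30,  # can the data be replicated elsewhere?
    "marginal_improvement": 0.25,    # does new data still move the model?
    "user_perceived_value": 0.20,    # do users feel the difference?
    "data_half_life": 0.15,          # how fast does the data decay?
    "information_density": 0.10,     # signal per data point
}

def dmam_score(scores: dict[str, int]) -> float:
    """Weighted average of per-dimension scores on a 1-5 scale."""
    return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

def interpret(score: float) -> str:
    if score >= 4.0:
        return "genuine data moat"
    if score >= 2.5:
        return "data advantage, not a moat"
    return "no meaningful data advantage"

example = {"proprietary_generation": 3, "marginal_improvement": 2,
           "user_perceived_value": 3, "data_half_life": 2,
           "information_density": 4}
score = dmam_score(example)
print(round(score, 2), "->", interpret(score))  # 2.7 -> data advantage, not a moat
```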
When Data Advantages Erode
Data moats are not permanent. They erode through several distinct mechanisms, and understanding these mechanisms is as important as understanding how moats form.
Mechanism 1: Plateau Convergence
When multiple competitors reach the performance plateau simultaneously, the data advantage among them collapses to near-zero. This has already occurred in basic e-commerce product recommendations. Any platform with a few million monthly active users has sufficient data to train a recommendation model that performs within a few percentage points of the best in the industry. The plateau is a great equalizer -- which is why winner-take-most dynamics in vertical SaaS so rarely produce true monopolies.
Mechanism 2: Foundation Model Displacement
Large pre-trained models (language models, vision models, multimodal models) encode general world knowledge that can substitute for domain-specific training data. A startup in 2026 can fine-tune a foundation model on a modest proprietary dataset and achieve performance that would have required orders of magnitude more data in 2020. This compresses the data advantage timeline: what took Amazon a decade to learn from behavioral data, a well-resourced competitor might approximate in months by combining a foundation model with targeted data collection.
Mechanism 3: Data Market Commoditization
As data brokerage and aggregation mature, previously proprietary signals become purchasable. Credit card transaction data, mobile location data, web browsing behavior, social media sentiment: all of these are available, at a price, to any company willing to pay. Each data source that transitions from proprietary to commodity erodes the moats of companies that relied on its exclusivity.
Mechanism 4: Regulatory Disruption
Privacy regulation (GDPR, CCPA, and their successors) constrains how data can be collected, stored, and used. But the constraint is not evenly distributed. Regulations disproportionately affect companies whose moats depend on broad behavioral surveillance. Companies whose data advantages come from explicit, consensual, in-product interactions (the Stitch Fix model) are more resilient to regulatory change than those whose advantages come from tracking users across contexts.
Mechanism 5: Architectural Innovation
Sometimes the moat does not erode; it becomes irrelevant. A new approach to the problem makes the old data unnecessary. Collaborative filtering was the dominant recommendation paradigm for a decade. The emergence of deep learning-based approaches, and now large language model-based approaches, changed which data matters and how much of it is needed. An incumbent with the world's best collaborative filtering dataset may find that advantage meaningless when the industry moves to a different architecture.
Conclusion: The Honest Assessment
Data network effects are genuine economic phenomena. They create real competitive advantages in specific, identifiable circumstances. But the circumstances are narrow, the advantages are temporally bounded, and the gap between a real data moat and a data marketing story is vast.
The companies with authentic data moats share three properties. Their data is generated through interaction models that cannot be replicated without building a similar business. Their models demonstrably improve at the current data margin in ways that users experience directly. And their data types have half-lives long enough to compound before they decay.
For everyone else, which is most of the e-commerce industry, the honest move is to stop claiming a data moat and start building one, or to acknowledge that competitive advantage lies elsewhere. Price. Logistics. Brand. Curation. Service. These are not lesser advantages simply because they lack the techno-mystique of "AI-powered data flywheels."
Spinoza wrote that the purpose of philosophy is not to mock, lament, or condemn human actions, but to understand them. The same ethic applies to competitive analysis. The purpose of studying data network effects is not to validate pitch deck narratives or to dismiss them reflexively, but to understand, with precision and honesty, when they are real, when they are not, and what to do in either case.
The companies that will build durable advantages in e-commerce over the next decade are not necessarily those with the most data. They are those with the clearest understanding of which data matters, why it matters, and when it will stop mattering. That understanding is, itself, a kind of moat, one that no dataset can substitute for and no competitor can scrape.
Further Reading
- Stitch Fix Algorithms Tour: how data network effects work in practice
- Network Effects on Wikipedia: the economic theory
References
- Gregory, R.W., Henfridsson, O., Kaganer, E., & Kyriakou, H. (2021). "The Role of Artificial Intelligence and Data Network Effects for Creating User Value." Academy of Management Review, 46(3), 534-551.
- Varian, H. (2019). "Artificial Intelligence, Economics, and Industrial Organization." NBER Working Paper No. 24839.
- Hagiu, A. & Wright, J. (2020). "When Data Creates Competitive Advantage." Harvard Business Review, January-February 2020.
- Bajari, P., Chernozhukov, V., Hortacsu, A., & Suzuki, J. (2019). "The Impact of Big Data on Firm Performance: An Empirical Investigation." AEA Papers and Proceedings, 109, 33-37.
- Farboodi, M. & Veldkamp, L. (2021). "A Growth Model of the Data Economy." NBER Working Paper No. 28427.
- Stitch Fix, Inc. (2021-2024). Annual Reports and SEC Filings. Algorithmic personalization and data strategy disclosures.
- Claussen, J., Peukert, C., & Sen, A. (2019). "The Editor vs. the Algorithm: Returns to Data and Externalities in Online News." Working Paper.
- Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
- Cennamo, C. & Santaló, J. (2019). "Platform Competition: Strategic Trade-offs in Platform Markets." Strategic Management Journal, 40(11), 1752-1788.
- Zhu, F. & Iansiti, M. (2019). "Why Some Platforms Thrive and Others Don't." Harvard Business Review, January-February 2019.
The Conversation
4 replies
the three-conditions framing is the cleanest i've seen, but id argue most e-commerce companies fail the SECOND one, not the first. proprietary data is easy, everyone has it. what almost nobody has is data whose marginal value per user stays positive past the 80th percentile. after a certain volume, another search query about sneakers tells you essentially nothing new
I ran growth on a category at MELI for 3 years and the data-moat story was always more complicated than the pitch decks suggested. Our recommendation models had genuine compounding returns, but only within a single country/vertical, the LatAm cross-border data was so messy it actively hurt the US-market model when we tried to pool it. Data moats are more local than people assume.
Hajek and Parkes have a 2022 paper on 'data-driven network effects' that formalizes some of what youre describing, they show that without supply-side heterogeneity, the network-effect curve is logarithmic rather than exponential. worth citing if you do a follow-up. tbh the claim that data network effects are 'exponential' is the single most oversold concept in the moat literature right now.
the thing nobody acknowledges: the real moat isn't the data, it's the labeled data. we have billions of transactions at klarna but the fraud labels are sparse and expensive to generate. the company that figures out cheap labeling at scale has the actual moat.