MQMC 2026 brings together international scholars presenting cutting-edge research in quantitative marketing,
platform economics, and AI-driven markets.
Below we share abstracts from confirmed speakers.
This page will be updated on a rolling basis as additional speakers and abstracts are confirmed.
Hebrew University
Business School
Coauthor: David Schweidel (Emory University's Goizueta Business School)
Abstract: What if each of us truly owned our own data?
Far beyond privacy, data ownership raises fundamental questions about control and value creation.
What would it mean for marketing if firms did not have to rely on dominant intermediaries such as Google or Facebook to reach consumers? And what if individuals could decide whether—and at what price—their data are used not only for advertising, but also to train and power AI engines?
Consumer data are a core input into firms’ marketing decisions, yet today’s data markets are dominated by large intermediaries that aggregate and monetize data to maximize their own profits.
Firms depend on these intermediaries for access, while consumers—who generate the data—have little control and are rarely compensated.
This talk presents a blockchain-based data marketplace that enables direct trade of data between buyers and sellers at market value.
By eliminating data brokers, this approach restores consumer control over data use and monetization while offering firms an alternative path to acquiring data for marketing and AI applications.
We discuss the limitations of current data markets, introduce a proof-of-concept research platform for studying data marketplaces, and outline key open research questions needed to prepare marketing theory and practice for a future of consumer-centered data markets.
National University
of Singapore
Coauthors: Shuang Zheng (Renmin University of China), Xin Ye (Arizona State University), Liang Shen (University of Connecticut)
Abstract: Generative search is an emerging search paradigm that integrates Generative AI (GenAI) into traditional search engines. Increasingly adopted by platforms worldwide, this format presents consumers with a GenAI-generated response to their query prior to displaying conventional search results. Unlike keyword-based search, which relies on consumers’ ability to translate underlying needs into effective keyword queries, generative search allows consumers to directly express their underlying intentions in natural language and receive responses that guide subsequent exploration. This shift moves the consumer–search engine interaction upstream in the search process, to the stage of original problem formulation. By bringing this upper-funnel interaction, namely, the mapping from problem formulation to keyword articulation, onto the platform, generative search makes a previously unobservable stage of consumer search behavior empirically observable. However, it remains empirically unknown whether generative search can increase consumer purchases, and how consumer search behavior changes when search can begin from expressed intent rather than keywords. Using a large-scale field experiment conducted on Meituan, a leading Chinese technology platform, this paper provides the first empirical evidence on the effectiveness of generative search. We find that generative search significantly increases consumer purchases. Further analysis reveals that generative search facilitates more effective and diverse keyword queries and reduces exploratory browsing and clicking while concentrating evaluation within relevant categories and merchants, which helps explain the observed increase in purchases. These findings provide empirical insights for future consumer search models that allow search to originate at the intention-expressing stage.
New Economic School
Abstract: We present the first large-scale field experiment on integrating generative AI into a national judicial system. In partnership with Pakistan's judiciary, we developed JudgeGPT, a custom-built generative AI assistant designed specifically for Pakistan's trial courts, and randomized 1,559 judges into one of three arms: (i) access to JudgeGPT with targeted training tailored to the tool; (ii) access to JudgeGPT with placebo training in technology and law; and (iii) a control group that received the placebo training but no access to the AI assistant. Treated judges are more likely to use generative AI and to support its broader adoption in the judiciary. Using an LLM-based, lawyer-validated measure of opinion quality, we find that JudgeGPT improves writing quality when paired with targeted training, but lowers it when provided without such training. Administrative records also point to sizable productivity gains: a one-standard-deviation increase in the share of judges assigned to JudgeGPT plus targeted training raises case resolutions by roughly 1,100 cases per district-year. Consistent with this pattern, usage data from JudgeGPT suggest that targeted training shifts judges away from open-ended legal search and toward more structured writing tasks, where the tool is likely more useful. We find little evidence that either treatment amplifies gender or ethnic bias in case outcomes or judicial language. Overall, the results suggest that generative AI can raise public-sector productivity, but that these gains may depend on targeted training that directs use toward tasks for which the tool is better suited.
Chinese University
of Hong Kong (CUHK)
Business School
Coauthors: Jieteng Chen, Jiwoong Shin
Abstract: Generative AI has transformed content creation, enhancing efficiency and scalability across media platforms. However, it also introduces substantial risks, particularly the spread of misinformation that can undermine consumer trust and platform credibility. In response, platforms deploy detection algorithms to distinguish AI-generated from human-created content, but these systems face inherent trade-offs: aggressive detection lowers false negatives (failing to detect AI-generated content) but raises false positives (misclassifying human-created content), discouraging truthful creators. Conversely, conservative detection protects creators but weakens the informational value of labels, eroding consumer trust. We develop a model in which a platform sets the detection threshold, consumers infer credibility from labels when deciding whether to engage, and creators choose whether to adopt AI and how much effort to exert to create content. A central insight is that equilibrium structure shifts across regimes as the threshold changes. At low thresholds, consumers trust human labels and partially engage with AI-labeled content, disciplining AI misuse and boosting engagement. At high thresholds, this inference breaks down, AI adoption rises, and both trust and engagement collapse. Thus, the platform’s optimal detection strategy balances these forces, choosing a threshold that preserves label credibility while aligning creator incentives with consumer trust. Our analysis shows how detection policy shapes content creation, consumer inference, and overall welfare in two-sided content markets.
University
of Wisconsin–Madison
Abstract: Algorithms are often blamed for creating political echo chambers. This paper studies Telegram—a large decentralized social-media platform with no recommendation feed or discovery algorithm—to show that similar polarization emerges even without algorithmic curation. Channels that share one another’s posts form linguistically and ideologically cohesive communities, and users’ limited search ability makes exposure highly like-minded. High search costs give channels market power over their audiences, while the demand for ideologically consistent content reinforces segmentation. A Hotelling-style simulation calibrated to the data shows that these forces alone can reproduce the observed polarization. Moreover, when users care about both ideology and content quality, algorithmic recommendations can generate more diverse exposure than decentralized discovery. Polarization, therefore, is an equilibrium outcome of market design rather than a technological artifact.
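The search-cost mechanism this abstract describes can be illustrated with a minimal, self-contained simulation. All positions, counts, and radii below are invented for illustration and are not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Hotelling-style setup: users and channels sit on a
# one-dimensional ideology line [0, 1].
users = rng.uniform(0, 1, size=5000)
channels = rng.uniform(0, 1, size=50)

def mean_exposure_spread(search_radius):
    """Average ideological spread of the channels each user can find.

    A small radius stands in for high search costs: users only
    discover channels close to their own position.
    """
    spreads = []
    for x in users:
        reachable = channels[np.abs(channels - x) <= search_radius]
        if len(reachable) >= 2:
            spreads.append(reachable.std())
    return float(np.mean(spreads))

# Higher search costs (smaller radius) -> less diverse exposure.
narrow = mean_exposure_spread(0.05)
wide = mean_exposure_spread(0.5)
assert narrow < wide
```

Even this toy version reproduces the qualitative pattern: when discovery is costly, each user's reachable set of channels is ideologically narrow, so like-minded exposure arises without any recommendation algorithm.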
Hong Kong University
of Science and Technology
Coauthors: Wenbo Wang and Zijun (June) Shi (Hong Kong University of Science and Technology)
Abstract: Social media platforms often confine users to information silos, creating narrow loops of reinforced interests that limit exposure to diverse perspectives and exacerbate societal divisions. This study introduces a metric to systematically quantify this phenomenon through two dimensions: intrapersonal variety, reflecting the breadth of individual topic engagement, and interpersonal difference, capturing the divergence of user consumption from population norms. Focusing on TikTok, we combine demographic surveys with a field experiment wherein participants record their browsing behavior. We analyze video topics to compute this information silo index, identify silo formation patterns, and correlate them with user characteristics. Furthermore, we test experimental interventions by providing users with tailored feedback on their content consumption habits. Our results demonstrate the intensity of information silos, assess user awareness, and establish the efficacy of feedback mechanisms in disrupting these patterns to foster broader content engagement.
Peking University’s Guanghua School of Management
Coauthors: Qiao Liu (Peking University), Xiaohan Yang (Chinese University of Hong Kong)
Abstract: Digital platforms often subsidize transactions when entering a new market, yet the economic returns to these subsidies remain poorly understood. This paper studies the launch of “Shan Gou”, a food-delivery service introduced by the e-commerce platform Taobao in mid-2025, which was accompanied by a large-scale consumer subsidy program. Using platform-level transaction data from 100,000 consumers and exploiting the staggered adoption of subsidy usage, we estimate the impact of subsidies on consumer spending and platform activity. We find that the subsidies in the food-delivery category not only increase spending within that category but also generate positive spillovers to non-food-delivery categories. Further analysis shows that this cross-category demand spillover is driven by increased platform engagement induced by the subsidy program. Our findings illustrate how category-specific subsidies can support platform expansion and provide a rationale for the high levels of subsidy observed in practice.
Chinese University
of Hong Kong (CUHK)
Business School
Abstract: A large-scale randomized field experiment on a two-sided platform shows that providing a recall option in consumer search significantly increases the probability of a successful match and, counterintuitively, leads to higher final transaction prices. These findings challenge the well-established result in the consumer search literature that recall has no impact on search outcomes in an infinite-horizon single-agent setting. We develop a theoretical model of consumer search in which consumers are uncertain about the price distribution and update their beliefs in a Bayesian manner. The model predicts that recall can increase both match rates and accepted prices as consumers re-evaluate past offers in light of updated beliefs. An incentive-aligned online laboratory experiment further corroborates these findings and the underlying learning mechanism in a controlled setting, showing that recall reduces reliance on outside options.
University of Colorado Boulder, Leeds School of Business
Coauthor: Daria Dzyabura (New Economic School, Moscow School of Management Skolkovo)
Abstract: Digital platforms increasingly allow consumers to save items for later use, such as adding movies to watchlists, songs to playlists, or recipes to digital boxes. We conceptualize this behavior as anticipatory collection: saving an item for potential future consumption before knowing which context will be realized. Unlike purchases, which maximize utility for a specific occasion, or consideration sets, which screen alternatives for a single imminent decision, anticipatory collection builds reusable sets of items that will be valuable in at least one future context.
We formalize this process with a max-over-contexts choice model, where each context has its own utility function and an item enters the collection if it performs well in any one of them. This structure introduces new estimation challenges because the utility function is non-linear and heterogeneity arises both in which contexts consumers anticipate and in how they evaluate items within those contexts. We develop iterative estimation procedures, validate them in simulation, and apply the model to recipe collections from AllRecipes.com. The model significantly outperforms purchase-based benchmarks and uncovers latent contexts that explain how consumers collect for diverse future occasions. Our findings establish anticipatory collection as a distinct consumer decision process, advance methodological tools for modeling it, and open new research directions.
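The max-over-contexts collection rule can be sketched in a few lines. The utilities, threshold, and item/context counts below are hypothetical, purely to show how the rule differs from an average-utility (purchase-style) rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy max-over-contexts setup: each row of `utility` is an item,
# each column a future context the consumer anticipates
# (e.g., weeknight dinner, party, holiday).
n_items, n_contexts = 8, 3
utility = rng.normal(0, 1, size=(n_items, n_contexts))
threshold = 0.5  # hypothetical collection threshold

# An item is saved if it performs well in ANY anticipated context:
# the max over contexts, not a weighted average.
collected = utility.max(axis=1) > threshold

# Contrast with a purchase-style rule that averages across contexts.
purchased = utility.mean(axis=1) > threshold

# Since max >= mean, the collection rule is weakly more inclusive.
assert collected.sum() >= purchased.sum()
```

The non-linearity that complicates estimation is visible here: the max operator makes the collection decision depend on the single best context, so marginal changes in the other contexts' utilities can have no effect at all.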
MIT Sloan School
of Management
Abstract: Why do people report probabilities like 0.2, 0.5, or 0.8 rather than, say, 0.317? Survey evidence documents widespread heaping at round numbers in subjective probability elicitation, yet existing work treats this as measurement noise or satisficing. We argue it is neither noise nor laziness, but the outcome of rational optimization under two kinds of cost: forming a precise internal belief is cognitively demanding, and so is expressing it with many decimal places. We model a sender who first refines her belief through hierarchical binary partitioning and then rounds it to a decimal grid, with convex costs on both dimensions. The optimal precision pair is finite and well-defined, and reported probabilities heap on round decimals as a natural consequence. A further result is that refinement has sharply diminishing returns once internal resolution becomes finer than what the decimal grid can express; at that point, thinking harder simply cannot improve what one says. Introducing response time as the resource governing both dimensions ties deliberation directly to reporting precision. The proposed framework bridges rational inattention and bounded communication.
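A stylized version of this two-cost trade-off can be written down directly. The cost weights and loss terms below are invented for illustration (simple quantization-error proxies with convex effort costs), not the paper's specification:

```python
import numpy as np

# Hypothetical convex cost weights on thinking and reporting effort.
c_think, c_report = 0.002, 0.004

def total_cost(k, d):
    """Expected loss for a belief refined by k binary partitions and
    reported on a 10^-d decimal grid, plus both effort costs."""
    internal_err = (2.0 ** -k) ** 2 / 12   # residual belief vagueness
    grid_err = (10.0 ** -d) ** 2 / 12      # rounding to the grid
    return internal_err + grid_err + c_think * k**2 + c_report * d**2

# Exhaustive search over precision pairs: the optimum is finite.
grid = [(k, d) for k in range(1, 12) for d in range(1, 6)]
k_star, d_star = min(grid, key=lambda kd: total_cost(*kd))

# Diminishing returns: once 2^-k is finer than 10^-d, extra
# partitioning only adds thinking cost without improving the report.
```

With these illustrative weights the optimum is an interior point with a coarse decimal grid, so reports heap on round values — the qualitative pattern the abstract describes.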
New Economic School,
Moscow School
of Management Skolkovo
Coauthor: Ville Satopää (INSEAD)
Abstract: We develop a statistical framework that formalizes Daniel Kahneman’s taxonomy of judgmental noise—level, pattern, and occasion—and makes its components estimable from data. The model decomposes forecast errors into systematic miscalibration in baseline judgments (level bias), systematic mis-weighting of information (pattern bias), idiosyncratic differences across forecasters in these tendencies (level and pattern noise), and within-forecaster inconsistency across judgments (occasion noise), each linked to observable characteristics of forecasters, interventions, and events. Applying the framework to tens of thousands of real-world predictions, we find that nearly half of total forecasting error arises from occasion noise, while systematic level and pattern biases account for most of the remainder. Cross-forecaster heterogeneity contributes comparatively little. Cognitive reflection and numeracy reduce occasion noise and bias, whereas extraversion and openness amplify them. Structured collaboration and probabilistic training mitigate all major sources of error simultaneously. Together, these results provide quantitative evidence for the behavioral origins of noise in forecasting and clarify which psychological and institutional factors most effectively mitigate it.
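The level/occasion split at the core of this decomposition can be illustrated on simulated forecast errors. The variances and sample sizes below are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized error model: error = shared level bias
#   + forecaster-specific deviation (level noise)
#   + judgment-specific inconsistency (occasion noise).
n_forecasters, n_events = 200, 50
level_bias = 0.3
level_noise = rng.normal(0, 0.5, size=(n_forecasters, 1))
occasion_noise = rng.normal(0, 1.0, size=(n_forecasters, n_events))
errors = level_bias + level_noise + occasion_noise

# ANOVA-style recovery of the components from the error panel:
est_bias = errors.mean()                      # systematic level bias
forecaster_means = errors.mean(axis=1, keepdims=True)
est_level_var = forecaster_means.var()        # cross-forecaster noise
est_occasion_var = (errors - forecaster_means).var()  # within-forecaster
```

The within-forecaster residual variance recovers the occasion-noise component, while the spread of forecaster means (less a small occasion-noise correction) recovers level noise — the same logic that lets the paper attribute shares of total error to each source.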
Avito
Abstract: The Russian advertising market underwent a dramatic structural transformation following the exit of many multinational brands in 2022. This talk examines how the composition of advertisers, categories, and media investments changed in response to this shock, drawing on large-scale industry data and market analysis. The departure of international brands led to a sharp decline in the number of foreign advertisers and a substantial reallocation of advertising budgets across domestic companies, localized brands, and new entrants from Asia.
I discuss how firms adapted their marketing strategies under these conditions, including brand localization, rebranding, indirect media placement through digital channels, and shifts in category-level competition. The talk also examines the evolving structure of media demand, the implications for television and digital advertising markets, and scenarios for the potential return of multinational brands.
These developments provide a rare real-world setting for understanding how large exogenous shocks reshape advertising markets, brand competition, and media allocation decisions.
Hebrew University
Coauthors: Daria Dzyabura (New Economic School), Renana Peres (Hebrew University Business School)
Abstract: We study the impact of advertising cessation (“going dark”) on brand performance in FMCG markets, exploiting the large-scale and abrupt withdrawal of advertising by foreign brands in Russia following the 2022 sanctions. Using panel data from a major Russian e-commerce platform spanning 2021–2024, we analyze sales and market share dynamics across five product categories. Employing fixed-effects regressions and exponential decay models, we document economically meaningful declines in market share following advertising halts, with an average decrease of 1.37 percentage points.
The effects of going dark are highly heterogeneous across categories. High-churn categories, such as diapers, experience rapid and severe declines (e.g., Pampers: −62%), whereas more habitual, “sticky” categories, such as detergents and deodorants, exhibit substantially greater resilience. We further show that domestic and Asian brands capitalize on the exit of foreign advertisers, gaining market share, and that rebranding strategies partially mitigate losses for some affected brands.
We further document that some dominant brands, such as Rexona in deodorants, do not experience meaningful market share declines after going dark, suggesting that sustained advertising investment over long horizons can create durable brand equity that continues to support performance even in the absence of ongoing advertising.
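The exponential-decay dynamics used in this analysis can be sketched as follows; the decay rate, long-run floor, and initial share below are hypothetical, not estimates from the paper:

```python
import numpy as np

# Stylized market-share path after an advertising halt: share decays
# from its pre-halt level toward a long-run floor sustained by habit.
floor, initial, lam = 0.10, 0.25, 0.15  # illustrative parameters
months = np.arange(0, 24)
share = floor + (initial - floor) * np.exp(-lam * months)

# Recover the decay rate with a log-linear fit on (share - floor):
# log(share - floor) = log(initial - floor) - lam * t.
slope, intercept = np.polyfit(months, np.log(share - floor), 1)
lam_hat = -slope
```

In this framing, high-churn categories correspond to a large decay rate and a low floor, while habitual "sticky" categories correspond to a small decay rate or a floor close to the pre-halt share.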
École Polytechnique Fédérale de Lausanne
Coauthors: Argyro Katsifou and Ralf W. Seifert (École Polytechnique Fédérale de Lausanne)
Abstract: Assortment planning has attracted considerable attention from practitioners and academics due to its direct impact on retailers’ commercial success. In this paper we focus on the increasingly popular retail practice of using combined product assortments, with both “standard” and more fashionable, short-lived “variable” products, to build store traffic among heterogeneous “loyal” and “non-loyal” customers and to increase sales through the potential cross-selling effect. Formulating assortment planning as a bilevel optimization problem, we focus on decision-dependent uncertainties: the retailer’s binary decision about product inclusion influences the distribution of the product’s demand. Furthermore, our model accounts for customers’ optimal purchase quantities, which depend on budget constraints limiting the basket a customer is able to purchase.
In this work, we propose iterative heuristics using optimal quantization of demand and customers’ budget distributions to determine the total assortment and the inventory level per product. These heuristics provide lower bounds on the optimal value. We compare them to other existing lower bounds and formulate upper bounds via linear (LP) and semidefinite (SDP) relaxations, both to evaluate the heuristics’ performance and to solve high-dimensional cases efficiently. For managerial insights, we compare the proposed approach with three assortment planning strategies: (1) the retailer does not carry variable products; (2) the retailer ignores the cross-selling effect; and (3) the maximum space allocated to each product is fixed. Our results suggest that a variable assortment boosts the retailer’s profits provided the cross-selling effect is not neglected when deciding product quantities.
Hong Kong University
of Science and Technology
Coauthors: Xinyu Cao (Hong Kong University of Science and Technology), Xingyu Fu (University of New South Wales, Sydney)
Abstract: Firms often rely on commissioned sales agents to sell a product line, but these agents' incentives may not fully align with the firms'. We formalize this agency tension and show that it is first-order: with an uninformed agent who controls both pricing and persuasion, a two-version product line cannot be sustained in equilibrium. The agent's personal payoff biases effort and pricing toward the easy-to-sell, high-quality product, leading to product line collapse. Strikingly, even an infinitesimal sales delegation may trigger this collapse. We then propose three managerial remedies. First, empowering the agent with customer information can facilitate product line implementation, though incentive misalignment persists and the relaxed (first-best) outcome is not attained in general. Second, limiting the agent's pricing discretion restores managerial control and can sometimes sustain a product line. Third, offering product-specific commission rates, penalizing sales of high-end products while rewarding those of low-end ones, can induce product line implementation in equilibrium. Together, these results offer guidance for salesforce management: firms should view information access, pricing authority, and commission structure as complementary levers to align selling incentives with product line strategy.
Chinese University
of Hong Kong (CUHK)
Business School
Abstract: Influencer marketing has experienced rapid growth in recent years. In the video game industry alone, approximately 40,000 YouTube influencers are actively promoting various games. Given that multiple influencers are often engaged simultaneously in typical promotional campaigns, and the collective impact of these promotions (e.g., sales outcomes) is only observed at an aggregate level, understanding and optimizing the selection of influencers is crucial. However, this task is challenging due to the need to accurately predict how campaign outcomes respond to the promotional efforts of participating influencers. Two primary challenges arise in this context. First, this response model is high-dimensional and nonlinear, characterized by numerous main effects and interactions due to the large number of influencers involved. Second, the number of observed campaign outcomes is often limited, particularly for short-lived products like video games, complicating reliable estimation. Existing methods face a trade-off: some effectively manage high dimensionality but struggle to capture nonlinear relationships (e.g., LASSO), while others provide flexible nonlinear modeling but perform poorly with a small number of observed outcomes (e.g., many machine-learning approaches). In this paper, we propose a Bayesian Additive Regression Tree with structural priors (BARTwSP) approach to simultaneously address these challenges. BARTwSP integrates tree-based machine learning techniques, whose additive aggregation of trees can approximate complex nonlinear relationships well, with a Bayesian framework that imposes a strong prior on tree depth to ensure a shallow tree structure, thereby reducing overfitting and enhancing inference with small samples. By incorporating structural prior knowledge, we can further regulate variable-splitting probabilities, addressing the “small N, large P” problem.
Using a Monte Carlo simulation that replicates these real-world challenges, we demonstrate that the BARTwSP model predicts outcomes more accurately, achieving at least an 18% reduction in MSE relative to benchmark models, including LASSO and other machine-learning approaches, in high-dimensional (large P) settings with limited observations (small N). The results further indicate that this advantage is linked to the BARTwSP model's superior ability to select the relevant variables used in the data generation process, which is characterized by prominent between-group and within-group sparsity. We illustrate the benefits of our proposed model using data from a leading global video game developer, generating structural prior knowledge with large language models (LLMs). Our findings show that, first, the BARTwSP model predicts daily active users (DAU) more accurately, achieving the lowest MSE in this real-world setting. Second, leveraging the estimated response function, we optimize the selection of influencers subject to influencer-specific costs or an overall campaign budget. The proposed model can increase DAU by 21% under the current budget constraints, and it selects only 48% of the original influencers but asks some of them to post more frequently. Lastly, using an endogeneity-corrected version of the model, we quantify the effects of individual influencers to better understand the factors driving the optimization results.
Chinese University
of Hong Kong (CUHK)
Business School
Abstract: We propose a novel framework to estimate heterogeneous treatment effects in network-based quasi-experiments, addressing the challenge of interference without requiring prior assumptions on spillover mechanisms. Our approach leverages a dual neural network architecture combined with graph convolutional networks to process full non-Euclidean graph structures. We establish formal theoretical identification conditions and provide functional approximation guarantees for the neural estimator. Validated via Monte Carlo simulations and applied to a massive online multiplayer game with over 140,000 users, our method reveals that reaching level milestones increases daily transactions by 45–63%. Crucially, we uncover significant heterogeneity based on network position: socially isolated players exhibit stronger responses to content unlocks than well-connected players. These findings extend causal deep learning to networked settings and offer actionable metrics for personalized player retention strategies.
Hong Kong University
of Science and Technology
Coauthors: Jincheng Fang and Shijie Han (Hong Kong University of Science and Technology)
Abstract: Video content such as short-form social media clips and livestream commerce has become central to contemporary digital marketing, serving not just as visual media but as a strategic vehicle for narratives, persuasion, and product demonstration. Standard Large Multimodal Models (LMMs) are limited in capturing these marketing-relevant elements because prompt-based approaches often miss persuasive logic and produce factually weak outputs. To address this gap, we develop a Reflexive Agentic Measurement Framework tailored to marketing video analytics. The framework involves nine AI agents operating in two phases. The first phase constructs agent behavior: iteratively discovering latent narratives and persuasive cues and then autonomously generating context-aware coding prompts. The second phase generates and verifies the content analysis: decomposing marketing videos into discrete propositions tied to key marketing elements and validating each element against the raw footage to ensure factual reliability. Applying this framework to 5,000 livestream commerce videos, we show that it delivers richer and more accurate descriptions of marketing strategies—such as dynamic screen layouts, product effect demonstrations, and narrative framing—than standard prompt-based methods and human-crafted baselines. Our approach also achieves this at a low marginal cost ($0.005 per video minute), offering a scalable methodology for automated marketing video coding. By enabling systematic measurement of previously elusive strategic elements in marketing videos, this research advances measurement tools for digital marketing and supports data-driven investigations into the drivers of video marketing effectiveness.