4.1 Descriptive Overview
The systematic search and screening process yielded 35 studies for inclusion. Table 1 presents the full distribution of included studies by year, journal, methodology, and geographic context. The temporal distribution reveals a field in rapid expansion: only one study was published in 2020, two appeared in 2023, and nine in 2024, whereas 20 studies were published in 2025 and three in 2026. This concentration means that over 91% of the empirical base has emerged within the most recent three years, reflecting both the novelty and the urgency of the topic.
Table 1. Summary of Included Studies (N = 35)
In terms of publication outlets, the included studies span 19 distinct journals. The Journal of Retailing and Consumer Services contributes the largest share (n = 6), followed by the Journal of Advertising (n = 4), the Journal of Theoretical and Applied Electronic Commerce Research (n = 4), the Journal of Interactive Advertising (n = 3), and the Journal of Business Research (n = 3). The remaining 15 studies appear across outlets in advertising, hospitality, consumer marketing, and information systems, reflecting the interdisciplinary character of the research domain.
Experiments constitute the dominant methodology (n = 23, approximately two-thirds of the corpus), followed by survey designs (n = 5), mixed-methods studies (n = 2), conceptual and review papers (n = 2), and one prior systematic literature review. The heavy reliance on experimental designs reflects the field's orientation toward causal inference regarding disclosure and authorship manipulations. Geographically, most studies draw on Western samples (primarily the United States and Europe), with limited representation from East Asia (China, South Korea) and only one explicitly cross-national investigation. The Global South remains almost entirely absent from the evidence base.
4.2 Theme 1: The Disclosure Dilemma
AI disclosure -- the explicit labeling of content as machine-generated -- is the single most frequently examined antecedent in the corpus, investigated in nine of the included studies. The prevailing pattern is clear: disclosure activates consumer skepticism and erodes trust-related outcomes. Bui (2025) demonstrates that revealing AI involvement in prosocial advertising reduces ad credibility and purchase intentions through persuasion knowledge activation. Similarly, Koning and Voorveld (2025) find that disclosure diminishes trust in both advertisements and the organizations behind them, suggesting that the reputational cost extends beyond the individual message.
This negative disclosure effect replicates across varied marketing contexts. Qiu et al. (2025) report that AI disclosure inhibits purchase intentions specifically in cause-related marketing campaigns, where consumers appear to hold heightened expectations of human sincerity. In service advertising, the intangibility of the offering intensifies the problem: Grigsby et al. (2025) find that although 75% of consumers report favoring disclosure as a matter of principle, the same disclosures reduce their trust in the advertised service. Bui (2025) extends this pattern by showing that disclosure diminishes perceived advertising value, which in turn suppresses purchase intentions. In the context of AI-generated sponsored vlogs, Liu, Lian, and Osman (2025a) similarly demonstrate that AI disclosure affects perceived content quality and online shopping intentions. Wortel et al. (2024) confirm these findings in social media contexts, documenting how AI disclosures on Instagram advertisements trigger persuasion knowledge and attribution processes that shift consumer attitudes downward.
Nevertheless, the evidence is not uniformly negative. Kirkby et al. (2023) report that AI disclosure did not significantly affect brand attitudes or perceived authenticity, a notable exception within the corpus. One possible explanation is that brand strength moderates the relationship: when established brands disclose AI involvement, consumers may perceive the disclosure as a signal of competence rather than deception. Wu et al. (2024) add further nuance, demonstrating that disclosure effects on word-of-mouth intentions vary depending on whether the AI was deployed for creative versus analytical tasks. Consumers penalize AI authorship more severely when the task is perceived as requiring human creativity and emotional investment.
Building on these findings, a synthesis across the corpus suggests that the disclosure-trust relationship follows a conditional pattern rather than a simple negative main effect. The direction and magnitude of the effect depend on content domain, product category, and disclosure framing, a set of moderating conditions explored further in Section 4.4. Notably, the evidence indicates that the format of disclosure itself matters. A bare label stating "Generated by AI" may activate persuasion knowledge more readily than a brand-voice disclosure framing such as "Our team used AI tools to enhance this content," which signals organizational honesty while preserving attributions of human oversight. The null finding reported by Kirkby et al. (2023) is consistent with this interpretation, as their experimental stimuli employed a brand-voice transparency approach rather than a stark algorithmic label. This distinction between label-only and brand-voice disclosure formats represents a potentially important boundary condition that future research should examine directly.
4.3 Theme 2: Authenticity as Central Mediator
Perceived authenticity emerges as the most consistently identified mechanism through which AI-generated content influences downstream consumer responses. Seven studies in the corpus position authenticity as a formal mediator, and several additional studies measure it as a dependent variable. The pattern across these investigations points toward a dominant causal chain: awareness of AI authorship reduces perceived authenticity, which in turn diminishes trust, brand attitudes, and behavioral intentions.
Bruns and Meissner (2024) provide foundational evidence through three experiments showing that generative AI content creation diminishes perceived brand authenticity, which mediates negative follower reactions on social media. Kirk and Givi (2025) extend this work across seven experiments, identifying a dual-process pathway in which AI authorship both reduces perceived authenticity (a cognitive evaluation) and triggers moral disgust (an affective response), with each pathway independently suppressing attitudes, word-of-mouth intentions, and brand loyalty. The identification of moral disgust as a parallel mediator is distinctive within the corpus and connects the AI-content literature to moral psychology.
This pattern extends to specific industry contexts. In the restaurant industry, AI disclosure undermines perceived brand authenticity, which mediates subsequent declines in brand image and consumer behavior (Ali et al., 2025). Zhang and Hur (2025) replicate the authenticity-mediation pathway in visual advertising, showing that AI-generated images with disclosure erode authenticity perceptions and, through them, brand trust. Consumer reactions to perceived ChatGPT usage in online reviews follow the same logic: reviews suspected of AI generation are rated as less authentic, less trustworthy, and less useful (Amos & Zhang, 2024). Even when the level of AI involvement varies, the authenticity mechanism persists; one study finds that higher degrees of AI involvement progressively diminish perceived authenticity, creativity, and emotional depth, reducing content engagement (Liu, Lian, & Osman, 2025a).
Beyond authenticity, the corpus identifies several related mediating constructs that operate through parallel psychological pathways. Perceived humanness serves as a mediator in luxury advertising, where artificial creativity influences consumer trust and perceived humanness simultaneously (Jung et al., 2025). Social presence mediates the effects of anthropomorphic design cues on purchase intention and trust in generative AI marketing communication (Wang et al., 2025). Emotional trust functions as a mediator in hospitality contexts, where the mere use of the term "artificial intelligence" in product descriptions lowers emotional trust and suppresses purchase intentions (Cicek et al., 2024). Perceived risk operates as a complementary pathway on e-commerce platforms, where trust and risk jointly determine purchase behavior in response to AI-generated content (Yu et al., 2025).
Taken together, the mediator evidence answers RQ2 with considerable consistency. The dominant pathway runs from AI authorship awareness through reduced perceived authenticity to lower trust and diminished behavioral intentions. Secondary pathways through perceived humanness, social presence, moral disgust, and emotional trust enrich the picture but have each been documented in only one or two studies, indicating avenues for replication.
4.4 Theme 3: Moderating Conditions
Findings across the corpus are heterogeneous, signaling that the effects of AI-generated content on consumer trust are not uniform. Thirteen studies identify explicit moderating variables, which can be organized into three categories: content-level, consumer-level, and contextual moderators.
At the content level, the type of appeal and the nature of the task emerge as important boundary conditions. Chen et al. (2024) demonstrate that emotional versus rational advertising appeals yield different consumer responses to AI-generated ads, with emotional appeals intensifying negative reactions because they invoke expectations of human warmth and sincerity. Wu et al. (2024) further show that consumers evaluate AI involvement differently when the task is creative (e.g., copywriting, design) versus analytical (e.g., data-driven product recommendations). AI disclosure reduces word-of-mouth intentions for creative tasks but not for analytical ones, suggesting that perceived appropriateness of AI use shapes consumer tolerance. The degree of AI involvement also matters: fully AI-generated content elicits stronger negative reactions than AI-assisted content, where human oversight is perceived to remain (Liu, Lian, & Osman, 2025a). Product category introduces additional variation, with services (particularly intangible offerings in hospitality) generating heightened trust concerns relative to tangible goods (Grigsby et al., 2025; Cicek et al., 2024).
Beyond these binary comparisons, consumer reactions appear to differ not merely between the poles of "AI-generated" and "human-created" but along a graduated spectrum of perceived AI involvement. This spectrum runs from minor AI assistance, through substantive human-AI collaboration, to fully autonomous AI generation. Liu, Lian, and Osman (2025a) provide initial evidence that increasing AI involvement progressively diminishes authenticity perceptions. Taken alongside the task-type findings of Wu et al. (2024), the pattern is consistent with a threshold model: below a certain level of perceived AI contribution, the disclosure penalty appears negligible, but once involvement crosses a critical boundary, consumer reactions shift sharply rather than scaling in proportion. Identifying where this threshold lies, and how it varies across content types and consumer segments, remains an open empirical question with substantial practical significance.
At the consumer level, individual differences moderate the AI content-trust relationship in several ways. AI literacy shapes how consumers process information from AI-generated sponsored vlogs, moderating the link between information adoption and purchase intention (Liu, Osman, et al., 2025b). Consumers with higher AI literacy may evaluate AI-generated content more critically yet also more accurately, producing a more balanced rather than uniformly negative response. Pre-existing attitudes toward AI in general influence brand evaluations when brands disclose their use of the technology (Pierre, 2025). Self-efficacy operates as an additional consumer-level moderator, interacting with appeal type to alter trust and perceived humanness judgments (Chen et al., 2024). Generational differences further segment the audience: Millennials and Gen Z demonstrate greater receptivity to AI-driven virtual influencers and the content they produce (Su, 2025).
At the contextual level, culture, brand transparency strategy, and anthropomorphic design cues shape the strength and direction of AI content effects. The sole cross-national study in the corpus reveals significant differences in how consumers across countries respond to AI-generated advertising, pointing toward cultural dimensions (such as uncertainty avoidance and technology acceptance norms) as underexplored explanatory factors (Nguyen et al., 2026). Brand transparency framing moderates the relationship between AI use and consumer perceptions, with transparent communication about AI involvement partially offsetting negative reactions (Pierre, 2025). Anthropomorphism emerges as a particularly promising buffering mechanism: two studies show that endowing AI-generated content or its source with human-like characteristics enhances social presence, perceived authenticity, and trust (Lee & Kim, 2024; Wang et al., 2025). However, a qualitative comparative analysis cautions that such anthropomorphic strategies must be balanced against ethical transparency requirements to avoid perceptions of manipulation (Du Plessis, 2025).
4.5 Theme 4: Downstream Outcomes
The 35 studies examine a broad range of behavioral and attitudinal outcomes of AI-generated marketing content, with purchase intention and brand trust measured most frequently. Purchase intention appears as a dependent variable in 21 studies, making it the single most examined outcome. Brand trust follows closely (n = 18), and advertising attitude is measured in 12 studies. Israfilzade (2025) provides direct experimental evidence that AI-generated advertisements reduce both consumer trust and purchase intent relative to human-created counterparts, reinforcing the centrality of these two outcome variables. This concentration reflects the field's applied orientation toward predicting consumer behavior in marketplace contexts.
Beyond these primary outcomes, several studies trace the effects of AI-generated content to word-of-mouth and sharing intentions. Kirk and Givi (2025) show that AI authorship reduces willingness to share and recommend marketing content, an effect that flows through both the authenticity and moral disgust pathways. Wu et al. (2024) replicate the word-of-mouth finding but demonstrate that the effect is conditional on task type. Brand attitude and brand image are measured as outcomes in studies spanning product branding (Kirkby et al., 2023), cross-national advertising (Nguyen et al., 2026), and restaurant marketing (Ali et al., 2025), with the general pattern showing negative or null effects of AI disclosure on these constructs.
Several studies extend the outcome space beyond traditional marketing metrics. Consumer engagement behaviors -- including clicks, likes, and follows -- decline when AI involvement is disclosed (Bruns & Meissner, 2024; Liu, Lian, & Osman, 2025a). Ethical judgments represent a distinct outcome category: Li et al. (2024) examine how AI identity disclosure influences consumer unethical behavior through social judgment processes, while Kirk and Givi (2025) document moral disgust as both a mediator and an outcome in its own right. Customer experience quality is diminished when AI-driven personalization triggers distrust (Hardcastle et al., 2025), and donation intentions decline in AI-generated charitable advertising (Arango et al., 2023). The pattern that emerges across these diverse outcomes supports a trust-centered causal chain: AI awareness feeds into authenticity and credibility judgments, which shape trust, which in turn determines behavioral intentions across purchasing, sharing, engaging, and donating contexts.
4.6 Tensions and Contradictions
Three unresolved tensions run through the corpus and resist straightforward synthesis, though a closer examination of boundary conditions offers partial resolution for each.
The first is the disclosure paradox. Consumers consistently report preferring transparency about AI involvement -- Grigsby et al. (2025) find that 75% of respondents favor disclosure -- yet the same disclosures reduce their trust and purchase intentions. This paradox places brands in a double bind that intensifies as regulatory mandates such as the EU AI Act make disclosure increasingly compulsory. However, the paradox becomes easier to address once disclosure is disaggregated by format. The studies reporting strong negative effects predominantly employ label-only disclosure conditions (a simple statement that content was "AI-generated" or "created by AI"), a format that functions as a persuasion knowledge trigger. In contrast, the null finding of Kirkby et al. (2023) involved a brand-voice transparency framing in which the brand proactively communicated its use of AI tools. This distinction suggests that the disclosure paradox may dissolve when the analysis moves from a binary disclosure-versus-nondisclosure framework to a more granular examination of how disclosure is framed. Label-only disclosure activates skepticism; brand-voice disclosure may instead signal organizational honesty.
A second tension concerns the inconsistency between null and negative disclosure effects. Kirkby et al. (2023) find no significant impact of AI disclosure on brand attitudes or perceived authenticity, a result that contrasts with the majority of studies reporting negative effects (Bui, 2025; Qiu et al., 2025; Zhang & Hur, 2025). The divergence appears to depend on at least three factors operating simultaneously. Brand strength may serve as a buffer: established brands with pre-existing authenticity capital can absorb disclosure risk that would damage less familiar brands. Content type further conditions the effect, with negative results concentrated in emotional, prosocial, and cause-related contexts where expectations of human sincerity are highest. Disclosure format, as discussed above, adds a third layer. These factors likely interact, such that a strong brand disclosing AI assistance in informational content through a brand-voice frame may face negligible backlash, while an unfamiliar brand disclosing full AI authorship of emotional content faces compounding penalties. The corpus does not yet contain factorial designs that test these interactions jointly, but the pattern of available results is consistent with this multi-layered conditional account.
A third tension involves authenticity restoration through anthropomorphism. Two studies demonstrate that human-like cues can partially restore perceived authenticity and trust when content is AI-generated (Lee & Kim, 2024; Wang et al., 2025). Yet Du Plessis (2025) warns that such strategies risk crossing into manipulation, potentially producing a delayed backlash when consumers recognize the anthropomorphic framing as itself inauthentic. The tension may be explained by a nonlinear relationship between humanlikeness and consumer acceptance. At moderate levels, anthropomorphic cues enhance social presence, warmth, and perceived sincerity without triggering skepticism. At excessive levels, the gap between the apparent humanness of the communication and the known machine origin may produce uncanny valley discomfort or, worse, perceptions of deliberate deception. The field has not yet established where the tipping point lies, nor how it shifts across product categories and consumer segments, but the available evidence indicates that anthropomorphism is a balancing problem rather than a simple buffering strategy.
4.7 Conceptual Framework
Figure 2 presents an integrative framework that synthesizes the findings across all 35 included studies into an input-process-output model. On the left side, antecedents are organized into three tiers: content-level factors (AI authorship, disclosure presence, content type, AI involvement level, and appeal type), consumer-level factors (AI literacy, general AI attitudes, self-efficacy, and generational cohort), and contextual factors (product category, culture, and regulatory environment).

Figure 2. Integrative Input-Process-Output Framework of Consumer Trust Responses to AI-Generated Marketing Content
These antecedents feed into a set of mediating mechanisms positioned at the center of the framework. Perceived authenticity occupies the primary mediating role, supported by empirical evidence across seven studies. The mediating mechanisms operate through three distinguishable classes: cognitive pathways, including persuasion knowledge activation and source credibility assessment; affective pathways, including moral disgust and emotional trust; and perceptual pathways, including perceived authenticity, perceived humanness, and social presence. These classes are not mutually exclusive -- in many consumer encounters with AI-generated content, cognitive, affective, and perceptual processes operate simultaneously, with boundary conditions determining which pathway dominates. On the right side, outcomes span trust in advertisements and brands, purchase intention, word-of-mouth, advertising attitude, brand image, engagement behavior, and ethical judgments.
Moderating conditions, including product category, culture, brand transparency framing, anthropomorphism cues, task type, and marketing context, are positioned to influence both the antecedent-to-mediator and mediator-to-outcome linkages. The framework highlights that the pathway with the strongest cumulative support runs from AI disclosure through reduced perceived authenticity to diminished brand trust and suppressed purchase intentions. Pathways through moral disgust and perceived humanness carry preliminary support and represent high-priority targets for future investigation, as discussed in Section 6.