    AI Is Rewriting Corporate Comms: ~25% of Press Releases Are Now AI‑Generated (2025)

    Tony Yan · October 3, 2025 · 5 min read · Corporate

    Updated on 2025-10-03

    Corporate communications just crossed a threshold: multiple 2025 analyses indicate that nearly a quarter of corporate press releases are likely drafted by AI. That number is big enough to change newsroom triage, distribution strategies, and brand trust—yet it’s also nuanced by methodology and editing realities. Here’s what the data actually says, why it matters, and how to adapt your workflows without sacrificing quality or credibility.

    What the “~25%” figure really means

    In October 2025, researchers summarized cross‑platform findings showing that “since the launch of ChatGPT, nearly a quarter of corporate press releases” on major U.S. wires were AI‑generated. The summary notes that the detector performs best on large corpora and may undercount heavily human‑edited texts, implying the estimate is conservative, according to the 2025 EurekAlert research overview. The same synthesis observes that AI‑flagged content spiked after late 2022 and that science/tech categories saw higher rates by late 2023.

    A companion write‑up confirms the study appeared in the Cell Press journal Patterns and places the broader average across multiple document types in the high teens, while corporate press releases cluster near a quarter, per the Phys.org 2025 summary of the Patterns publication. Important caveats: these are classifier‑based estimates, not disclosures; precision/recall metrics and industry breakdowns are still emerging publicly. Treat the 25% as a directional benchmark, not a universal constant across sectors or regions.
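
    To see why the headline number is directional rather than exact, consider how a classifier's error rates shift a prevalence estimate. The sketch below applies the standard Rogan-Gladen correction with purely hypothetical sensitivity and specificity values (the study's actual detector metrics are still emerging publicly); it only illustrates how the implied share moves as accuracy assumptions change.

    ```python
    # Illustrative only: hypothetical detector error rates, not figures from the Patterns study.
    # Rogan-Gladen correction: true_share = (observed + specificity - 1) / (sensitivity + specificity - 1)

    def corrected_share(observed: float, sensitivity: float, specificity: float) -> float:
        """Adjust a classifier-based prevalence estimate for imperfect sensitivity/specificity."""
        return (observed + specificity - 1.0) / (sensitivity + specificity - 1.0)

    observed = 0.25  # share of releases flagged as AI-generated (the headline figure, rounded)

    # Lower sensitivity (missed, heavily human-edited AI drafts) pushes the implied share up;
    # lower specificity (false positives on human-written text) pulls it down.
    for sens, spec in [(0.95, 0.98), (0.80, 0.98), (0.95, 0.90)]:
        share = corrected_share(observed, sens, spec)
        print(f"sensitivity={sens:.2f} specificity={spec:.2f} -> implied share={share:.1%}")
    ```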

    Adoption inside PR teams: broad, uneven, and maturing

    A 2025 field study of 719 practitioners, conducted February–May by PRWeek and Boston University, points to high perceived innovation and uneven organizational readiness, offering a clearer picture of how AI is actually used in comms teams. See the PRWeek & Boston University 2025 AI in PR Survey for toplines and methodology; the authors report strong enthusiasm for experimentation but weaker infrastructure and governance, which tracks with what many in‑house and agency teams are experiencing.

    Zooming out, enterprise investment and accessibility trends help explain the speed of adoption. The Stanford HAI AI Index 2025 reports global generative‑AI investment of roughly $33.9B in 2024 alongside steep inference‑cost declines—both drivers of AI‑assisted drafting becoming a default option for busy comms teams.

    Bottom line: usage is widespread, but maturity varies. Many teams rely on AI for first drafts, brainstorming, and editing assistance while building (or backfilling) governance, review, and disclosure practices.

    Why this matters now: quality, trust, and sameness risk

    AI can accelerate drafting, but it also increases the risk of:

    • Sameness and “workslop”: generic wording that blends into the wire and gets ignored by editors.
    • Over‑confident claims: subtle hallucinations or misplaced certainty without primary sources.
    • Weak citations: missing or broken links to verifiable data.

    Newsrooms are already strained by volume, and AI search systems add another layer. As the Columbia Journalism Review has documented, many AI search engines struggle to consistently attribute sources, which can distort how releases and their underlying evidence propagate. See the CJR Tow Center analysis on AI search citation quality (2024–2025) for context on how poor attribution complicates downstream pickup and trust.

    The practical takeaway: to earn attention and withstand scrutiny, AI‑assisted releases need stronger sourcing, clearer claims, and visible editorial provenance.

    Measurement and monitoring: don’t just ship—instrument

    If AI helps you draft faster, your job shifts from writing alone to writing plus measurement. You need to know:

    • Which outlets and journalists picked up your release—and the quality of that pickup.
    • Whether “answer engines” (ChatGPT, Perplexity, Google AI Overview) are citing your release or your primary sources—and how sentiment is trending.
    • How these signals change over time versus benchmarks.

    A practical way to operationalize this is to track citations and sentiment across major answer engines in parallel with traditional media monitoring. Tools that monitor AI‑surface mention visibility and tone can reduce manual effort. For example, Geneo supports tracking brand visibility across ChatGPT, Perplexity, and Google AI Overview alongside sentiment over time. Disclosure: Geneo is our product.
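
    Independent of which tool you use, the underlying instrumentation can be as simple as a dated audit log of answer‑engine checks for your priority queries. The sketch below assumes a flat CSV layout; the column names, file path, and example values are hypothetical placeholders, not a vendor or Geneo format.

    ```python
    # Minimal sketch of an answer-engine audit log, assuming a flat CSV layout.
    # Column names, file path, and example values are hypothetical.
    import csv
    import os
    from dataclasses import dataclass, asdict, fields
    from datetime import date

    @dataclass
    class AnswerEngineCheck:
        checked_on: str        # ISO date of the audit
        engine: str            # e.g. "ChatGPT", "Perplexity", "Google AI Overview"
        query: str             # the priority query you asked
        brand_mentioned: bool  # did the answer mention your brand at all?
        cited_url: str         # URL the answer cited, empty if none
        cites_primary: bool    # does the citation point at your release or primary data?
        sentiment: int         # -1 negative, 0 neutral, +1 positive (your own rubric)

    LOG_PATH = "answer_engine_audit.csv"  # hypothetical path

    def append_check(check: AnswerEngineCheck, path: str = LOG_PATH) -> None:
        """Append one audit row, writing a header the first time the file is created."""
        write_header = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(AnswerEngineCheck)])
            if write_header:
                writer.writeheader()
            writer.writerow(asdict(check))

    # Example row: a single manual spot-check after distribution (values are invented).
    append_check(AnswerEngineCheck(
        checked_on=date.today().isoformat(),
        engine="Perplexity",
        query="ACME Q3 product launch announcement",
        brand_mentioned=True,
        cited_url="https://example.com/newsroom/acme-q3-launch",
        cites_primary=True,
        sentiment=1,
    ))
    ```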

    Sentiment and citation examples

    To move beyond “did we get coverage?”, establish a small KPI set:

    • Citation integrity: Is the AI answer citing your release or primary data correctly?
    • Sentiment directionality: Net‑positive vs. net‑negative shifts after distribution.
    • Visibility share: How often your brand appears in AI answers for critical queries versus peers.

    For a practical look at cross‑query scoring and sentiment breakdowns, see how an internal report analyzes competitive mentions and tone in a consumer category via this sample report on luxury smart watch brands 2025. Adapt the measurement approach (not the subject matter) to your priority keyword space.
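
    As one way to turn that KPI set into numbers, the sketch below reads the hypothetical audit log from the earlier example and reduces each KPI to a simple ratio per reporting period. It is a starting point under those assumptions, not the scoring methodology of the sample report; running the same calculation on peer‑brand audit rows yields the comparative visibility share.

    ```python
    # Sketch: compute the three KPIs from the hypothetical audit log sketched earlier.
    import csv
    from statistics import mean

    def kpis(path: str = "answer_engine_audit.csv") -> dict:
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        mentioned = [r for r in rows if r["brand_mentioned"] == "True"]
        return {
            # Citation integrity: of the answers that mention you, how many cite your release or primary data?
            "citation_integrity": sum(r["cites_primary"] == "True" for r in mentioned) / max(len(mentioned), 1),
            # Sentiment directionality: average of the -1/0/+1 scores across brand mentions.
            "net_sentiment": mean(int(r["sentiment"]) for r in mentioned) if mentioned else 0.0,
            # Visibility share: fraction of audited priority queries where the brand appears at all.
            "visibility_share": len(mentioned) / max(len(rows), 1),
        }

    print(kpis())
    ```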

    Governance and disclosure: set rules before you scale

    Legal requirements around AI disclosures vary by jurisdiction, and most wire services emphasize human editorial oversight without publishing AI‑specific governance rules. For example, the PR Newswire AI solutions announcement (2024) frames AI as an assistive layer, not a substitute for human creative and editorial review, reinforcing the need for human editorial control.

    Consider codifying the following in your comms policy (coordinate with legal/compliance):

    • When AI may be used: drafting, brainstorming, copyediting; prohibited for legal statements or regulated claims without SME sign‑off.
    • Disclosure options: e.g., “This release was drafted with assistance from generative AI and fully reviewed by our editorial team.” Use in sensitive contexts, high‑risk sectors, or when stakeholders expect transparency.
    • Record‑keeping: retain the claims log, prompts, drafts, and approvals for auditability.
    • Source standards: every factual claim must have a primary source link in the claims log (see the entry sketch after this list).
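
    A minimal sketch of one claims‑log entry appears below. The field names are illustrative (not a regulatory or wire‑service requirement) and the links are placeholders; the point is that every claim carries its primary source, reviewer, and approval trail.

    ```python
    # Minimal sketch of a single claims-log entry; field names and values are illustrative.
    claim_entry = {
        "claim": "Nearly a quarter of corporate press releases on major U.S. wires are AI-generated.",
        "source_title": "EurekAlert research overview (2025)",
        "source_url": "https://example.org/placeholder-primary-source",  # paste the real primary link
        "author_or_publisher": "Patterns study authors",
        "published_date": "2025-10",
        "geography": "United States",
        "sample_notes": "Classifier-based estimate; may undercount heavily human-edited texts",
        "ai_assisted_draft": True,                                  # was generative AI used for this passage?
        "prompts_and_drafts": "link to retained prompts and drafts (placeholder)",
        "reviewed_by": "SME and legal sign-off",
        "approved_on": "2025-10-02",
    }
    ```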

    A safe, human‑in‑the‑loop workflow you can implement today

    Use this vendor‑neutral sequence to keep quality high while benefiting from speed:

    1. AI first draft with specificity: Provide structured inputs (audience, objectives, data tables, quotes) and forbid new facts.
    2. Subject‑matter review: SMEs validate technical accuracy and add nuance or proprietary context.
    3. Claims log and primary sources: For every statistic or assertion, paste the source title and URL; require author/date; note geography and sample size when relevant.
    4. Hallucination and plagiarism checks: Run a second AI or manual check to flag unverifiable statements and duplicated phrasing.
    5. Legal/compliance pass: Ensure regulated or forward‑looking claims follow policy; refine any disclosure language.
    6. Style and readability edit: Align to house style; cut boilerplate and generic adjectives; add clear, attributable quotes.
    7. Distribution with tracking: UTM‑tag owned assets (see the tagging sketch after this list); prepare query lists for answer‑engine audits.
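
    For step 7, a small helper like the one below keeps UTM tagging consistent across channels so downstream pickup can be attributed to a specific release. The parameter names follow standard UTM conventions; the URL, channel names, and campaign value are hypothetical.

    ```python
    # Sketch of step 7: tag owned assets so pickup can be attributed to a specific release.
    from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

    def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
        """Append UTM parameters to a URL, preserving any existing query string."""
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))
        query.update({"utm_source": source, "utm_medium": medium, "utm_campaign": campaign})
        return urlunparse(parts._replace(query=urlencode(query)))

    # One tagged link per distribution channel for the same release (values are hypothetical).
    release_url = "https://example.com/newsroom/product-launch"
    for channel in ["newswire", "linkedin", "investor-email"]:
        print(add_utm(release_url, source=channel, medium="press-release", campaign="2025-q4-launch"))
    ```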

    Timeliness: how often to refresh this benchmark and your process

    This trend is moving quickly—treat your policy and metrics as living documents.

    • Fact‑check cadence: Weekly for two weeks post‑publication, then biweekly through Q4 2025.
    • Watch list: release of the Patterns paper’s full methodology (title/DOI, classifier metrics), any wire‑service policy updates, and new adoption studies.
    • Visibility audits: Re‑run your answer‑engine checks monthly on priority queries, noting changes in citation patterns or sentiment. If AI answers misattribute your news, add corrective steps (more explicit source links, embeddable data visualizations, or journalist notes with primary materials).

    What to do next

    • Benchmark: Take one flagship topic and measure pickup quality, AI‑answer citations, and sentiment for 30 days post‑launch; compare with a pre‑AI drafting baseline.
    • Tighten governance: Adopt the workflow and disclosure options above; involve legal early for sensitive announcements.
    • Invest in measurement: Whether you build internal dashboards or use specialized tools, treat AI‑surface visibility as a first‑class metric alongside traditional coverage. Teams often pair internal analytics with platforms like Geneo to keep tabs on AI‑answer visibility and sentiment over time.

    Change‑log

    • 2025-10-03: Added the 2025 EurekAlert synthesis on AI‑written press releases; included Phys.org Patterns summary; incorporated PRWeek/BU 2025 survey context; added Stanford HAI AI Index 2025 for macro trends; referenced CJR Tow Center’s analysis of AI search citation issues; noted PR Newswire editorial stance on human oversight.
