AI Governance · Policy & Ethics Report 2025
AI Persona Mimicry, Ghostwriting
& Information Manipulation
Global Regulatory Trends & Policy Ethics
EU AI Act · China GB 45438-2025 · NO FAKES Act · Hiroshima AI Process — The International Contest Over the Human-AI Boundary
AI has crossed the threshold from text generation into persona mimicry — cloning individual writing styles, voices, and likenesses. Who owns your digital twin? Can a deceased person’s AI replica be sold? Six primary sources trace how regulators worldwide are racing to answer.
Contents
- Prologue: When AI Pretends to Be Human
- The Ethics of AI Mimicry — Cyber Pseudepigraphy, Cognitive Atrophy & the Digital Twin
- Deepfakes & FIMI — The Geopolitics of AI-Generated Disinformation
- Transparency Obligations for AI-Generated Content — EU, China & US Compared
- AI Ghostwriting in the Professions — Law, Journalism & Academia
Prologue: When AI Pretends to Be Human
Artificial intelligence’s capacity to generate text has surpassed mere summarization and translation, entering the domain of “persona mimicry” — replicating specific individuals’ writing styles and modes of reasoning. This advance has thrown into sharp relief the ethical and legal challenges of “AI ghostwriting” and the manipulation of public opinion through fabricated digital identities.
A 2025 study found that LLM use was associated with lower critical-thinking scores, and research published via Cornell University in 2025 suggested that heavy reliance on large language models may contribute to cognitive atrophy and reduced neural plasticity as memory-related neural networks go underused. According to Stanford University’s 2025 AI Index, global private investment in generative AI reached US$33.9 billion in 2024 — over 20% of all AI-related private investment, an 8.5× increase from 2022.
Central Thesis
AI persona mimicry, ghostwriting, and information manipulation are not merely technical problems — they are a fundamental challenge to human agency itself. The regulatory race between the EU, China, and the United States is the political answer to a single question: who is responsible for the integrity of the information space?
Chapter 01 — Ethics
The Ethics of AI Mimicry — Cyber Pseudepigraphy, Cognitive Atrophy & the Digital Twin
1.1 Cyber Pseudepigraphy and the Hollowing of Authorship
When AI communicates while pretending to be human, it violates individual autonomy and corrodes social trust. In education and research, AI-assisted dishonesty has emerged as a novel form distinct from traditional plagiarism — termed cyber pseudepigraphy: having AI generate content without engaging one’s own thinking, then presenting the output as evidence of one’s intellectual capacity. Whereas traditional plagiarism appropriates another person’s work, AI generation produces text that “nobody wrote — including the person presenting it,” hollowing out the very concept of authorship.
1.2 The Digital Twin: Legal Status and the Risk of Being Propertized
Jennifer Rothman, Nicholas F. Gallicchio Professor of Law at the University of Pennsylvania and an international authority on the right of publicity, has recently focused her scholarship on “the ways intellectual property law is employed to turn people into a form of property.” As AI enables the creation of human digital twins (HDTs) that fully replicate an individual’s appearance, voice, and personality, she warns that Congress is on the verge of a serious mistake.
The Pennsylvania Gazette — Feature (Dec 2025)
Who Will Own Your Digital Twin?
thepenngazette.com/who-will-own-your-digital-twin/
Chapter 02 — Disinformation
Deepfakes & FIMI — The Geopolitics of AI-Generated Disinformation
The European Parliamentary Research Service’s December 2025 briefing warns that generative AI has dramatically accelerated foreign information manipulation and interference (FIMI). Since Russia’s full-scale invasion of Ukraine, pro-Russian disinformation campaigns powered by generative AI have surged. In March 2022, a deepfake video of President Zelensky appearing to order soldiers to surrender spread across Twitter, Facebook, YouTube, and VKontakte.
EPRS Briefing — European Parliament (Dec 2025)
Information Manipulation in the Age of Generative Artificial Intelligence
europarl.europa.eu — PE 779.259
Frontiers in Artificial Intelligence — Peer-Reviewed (2025)
AI-Driven Disinformation: Policy Recommendations for Democratic Resilience
frontiersin.org — 10.3389/frai.2025.1569115
2.1 Operation Overload and the Industrialization of Impersonation
Between January and March 2025, the Institute for Strategic Dialogue (ISD) found that a Russia-aligned campaign, “Operation Overload,” impersonated over 80 different organizations — exploiting their logos, fonts, and staff voices. One video used the Wall Street Journal’s branding to falsely claim that USAID funded “LGBTQ+ values among kids in Ukraine.” The technique weaponizes the credibility of established institutions themselves.
Chapter 03 — Global Regulation
Transparency Obligations for AI-Generated Content — EU, China & US Compared
3.1 EU AI Act Article 50: Layered Transparency Obligations
The EU AI Act requires that where an AI system interacts directly with humans, users must be informed they are dealing with AI. Deployers generating deepfakes or content touching on matters of public interest must disclose this fact explicitly. Providers of GPAI (general-purpose AI) models are required to publish summaries of training data content and comply with copyright law.
EU AI Act — Key Issue 5
Transparency Obligations — Chapter IV
euaiact.com/key-issue/5
3.2 China: GB 45438-2025 Mandatory Labeling
China’s “Measures for Labeling AI-Generated Content” (GB 45438-2025), effective September 2025, imposes the most granular labeling requirements currently in force anywhere.
3.3 United States: State-Level Vanguard, Federal Vacuum
No comprehensive federal AI law currently exists in the United States. California’s AI Transparency Act (SB 942, as amended by AB 853) requires generative AI providers with over one million monthly users to offer both latent and manifest disclosures that content is AI-generated. Texas’s Responsible AI Governance Act (TRAIGA) prohibits systems that incite self-harm, facilitate unlawful discrimination, or generate illegal deepfakes.
Anecdotes.ai — Regulatory Overview (2025)
AI Regulations in 2025: US, EU, UK, Japan, China & More
anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more
3.4 Three-Pole Regulatory Comparison
| Dimension | EU AI Act | China GB 45438-2025 | US Executive Order 14110 |
|---|---|---|---|
| Core Philosophy | Fundamental rights protection & risk management | Social stability & national security | Innovation promotion & safety assurance |
| Text Disclosure | Required only for content touching on public interest | Mandatory explicit labeling on all AI text | Mandatory for federal agencies; guidelines for private sector |
| Metadata Obligation | Machine-readable format required | Service provider info embedded; mandatory | NIST developing standards |
| Enforcement Body | AI Office & national authorities | Cyberspace Administration of China (CAC) | Federal agencies (limited binding force) |
Chapter 04 — Professional Ethics
AI Ghostwriting in the Professions — Law, Journalism & Academia
The impact of AI acting as a professional ghostwriter extends beyond efficiency gains — it is restructuring the architecture of professional responsibility and trust. In the legal profession, the risk of AI-generated false citations being filed in court has become a reality. The California State Bar’s 2023 guidelines prohibit the unreviewed use of AI output and require attorneys to verify all content and take personal responsibility under their own name.
Law
Mandatory Human Oversight
- Unreviewed use of AI output is prohibited
- Attorneys must verify all content and sign off personally
- Confidentiality rule (Rule 1.6): strict security checks before inputting client data into AI
- False AI-generated citations have already been filed in court
Journalism
The Human-in-the-Loop Principle
- AP: direct article writing by AI is prohibited; AI use is restricted to backend tasks (summaries, metadata tagging)
- Indonesia Press Council Regulation No. 1 (2025): strict editorial control even in AI-assisted reporting
- Final writing and editorial judgment reserved for humans
Chapter 05 — Postmortem Rights
Digital Twins of the Dead — The NO FAKES Act & Postmortem Privacy
As AI technology makes the “resurrection” of deceased individuals possible, the question of how far the law should extend to protect a person’s identity has become urgently contested. At the federal level in the US, the NO FAKES Act has been proposed to address the unauthorized use of deceased persons’ likenesses and voices. The bill would recognize the right to control one’s digital replica as a universal right, extending not just to celebrities but to all citizens.
WFU Journal of Law & Policy
Deepfakes of the Dead: Applying Postmortem Publicity Law to AI Digital Replicas
wfujournaloflawandpolicy.org — posthumous publicity & AI
Chapter 06 — Future Outlook
2026–2030: Agentic AI, the Hiroshima Process & Three Pillars of Governance
As AI evolves into “agentic” systems capable of autonomously executing code, signing contracts, and completing transactions, regulatory focus is shifting from static content generation to dynamic decision-making and action. Reinterpretation of agency law is already underway to determine whether liability for a harmful AI-negotiated contract falls on the user or the developer.
Scenario 1
Unpredictable Advanced AI
Highly capable open-source models proliferate; serious harm from accidents and misuse emerges. AI literacy becomes a core organizational governance metric (UK GO-Science 2030 scenarios).
Scenario 2
Labor Market Disruption
Automation spreads from clerical to skilled trades. The ability to audit AI output transcends mere skill and becomes the heart of competitive and ethical governance.
HAIP
Hiroshima AI Process
The G7-led international code of conduct. The HAIP Reporting Framework enables cross-border AI safety assessment and mutual verification — advancing global transparency.
Technical Transparency
Digital signatures and metadata embedding that identify the origin of content — as seen in China’s strict labeling rules and NIST’s standardization work — must be established as an international de facto standard.
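As a concrete illustration of what “machine-readable” provenance labeling involves, the sketch below attaches signed metadata to AI-generated text and verifies it. This is a minimal sketch only: the field names are assumptions, not taken from GB 45438-2025 or any NIST or C2PA schema, and an HMAC stands in for the public-key digital signature a real deployment would use.

```python
import hashlib
import hmac
import json

def label_content(text: str, provider: str, secret_key: bytes) -> dict:
    """Attach machine-readable provenance metadata to AI-generated text.
    Field names here are illustrative, not drawn from any standard."""
    metadata = {
        "generator": provider,          # service-provider identifier
        "ai_generated": True,           # explicit AI-generation flag
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    # HMAC stands in for a real digital signature (e.g. Ed25519 in production)
    metadata["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_label(text: str, metadata: dict, secret_key: bytes) -> bool:
    """Check that the content matches its label and the label is untampered."""
    claimed = dict(metadata)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(text.encode()).hexdigest())
```

The point of the two-part check is that both the label and the content it describes must survive intact: altering either the text or the metadata breaks verification, which is what makes such labels auditable across platforms.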
Clarity of Legal Liability
In an era of agentic AI, remedies must go beyond after-the-fact damages to include “Law-Following AI” (LFAI) built into the design phase, and mandatory retention of AI action logs to ensure post-hoc traceability.
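The “mandatory retention of AI action logs” idea above amounts to a tamper-evident audit trail. One common construction is a hash chain: each log entry commits to the previous one, so any after-the-fact edit is detectable. The sketch below is an assumption-laden illustration of that technique, not a reflection of any statute or standard; field names are hypothetical.

```python
import hashlib
import json

class ActionLog:
    """Append-only, hash-chained log of agent actions for post-hoc traceability."""

    def __init__(self):
        self.entries = []

    def record(self, agent: str, action: str, detail: dict) -> dict:
        """Append an entry that commits to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "agent": agent,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Walk the chain; any edited or reordered entry breaks verification."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

A real system would additionally anchor the chain externally (e.g. periodic signed checkpoints) so the operator cannot silently rebuild the whole log, but the chaining step is the core of the traceability guarantee.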
Redefining Human Agency
The expansion of AI into professional domains paradoxically raises the value of distinctly human capacities: critical thinking and ethical judgment. The ultimate defense of information integrity still resides in human sincerity and the social mechanisms that verify it.
Guarding the Integrity of the Information Space
The advance of AI persona mimicry and ghostwriting imposes a new cognitive burden on every information recipient: the permanent obligation to suspect AI’s presence. Will AI become a trustworthy companion, or a systemic deceiver? The answer depends on the outcome of the regulatory and ethical debates currently in progress.
Over the next five to ten years, AI will become an indispensable partner in human communication. The final line of defense for information integrity will still reside, then as now, in human sincerity and the social mechanisms built to verify it.
References & Primary Sources
- EPRS, “Information manipulation in the age of generative artificial intelligence,” PE 779.259, Dec. 2025 — europarl.europa.eu
- Frontiers in Artificial Intelligence, “AI-driven disinformation: policy recommendations for democratic resilience,” 2025 — frontiersin.org
- WFU Journal of Law & Policy, “Deepfakes of the Dead: Applying Postmortem Publicity Law to AI Digital Replicas” — wfujournaloflawandpolicy.org
- The Pennsylvania Gazette, “Who Will Own Your Digital Twin?” Dec. 2025 — thepenngazette.com
- EU AI Act — Key Issue 5: Transparency Obligations — euaiact.com/key-issue/5
- Anecdotes.ai, “AI Regulations in 2025: US, EU, UK, Japan, China & More” — anecdotes.ai
- Stanford University, AI Index 2025 — generative AI private investment data
- Cornell University, LLM use and cognitive atrophy research (2025)
- China GB 45438-2025, Measures for Labeling AI-Generated Content (effective Sept. 2025)
- G7 Hiroshima AI Process (HAIP) — International Code of Conduct for Advanced AI Systems
