Global Regulatory Trends and Policy-Ethical Analysis Report on AI-Based Personality Imitation, Ghostwriting, and Information Manipulation


AI Governance · Policy & Ethics Report 2025

EU AI Act · China GB 45438-2025 · NO FAKES Act · Hiroshima AI Process — The International Contest Over the Human-AI Boundary

AI has crossed the threshold from text generation into persona mimicry — cloning individual writing styles, voices, and likenesses. Who owns your digital twin? Can a deceased person’s AI replica be sold? Six primary sources trace how regulators worldwide are racing to answer.

EU AI Act · DSA · China GB 45438-2025 · NO FAKES Act · Digital Twin Rights · Hiroshima AI Process

Artificial intelligence’s capacity to generate text has surpassed mere summarization and translation, entering the domain of “persona mimicry” — replicating specific individuals’ writing styles and modes of reasoning. This advance has thrown into sharp relief the ethical and legal challenges of “AI ghostwriting” and the manipulation of public opinion through fabricated digital identities.

A 2025 study found that LLM use was associated with lower critical-thinking scores, and research published by Cornell University in 2025 indicated that heavy use of large language models can lead to cognitive atrophy and reduced neuroplasticity as memory-related neural networks fall into disuse. According to Stanford University’s 2025 AI Index, global private investment in generative AI reached US$33.9 billion in 2024 — more than 20% of all AI-related private investment and an 8.5-fold increase over 2022.

Central Thesis

AI persona mimicry, ghostwriting, and information manipulation are not merely technical problems — they are a fundamental challenge to human agency itself. The regulatory race between the EU, China, and the United States is the political answer to a single question: who is responsible for the integrity of the information space?


Chapter 01 — Ethics

The Ethics of AI Mimicry — Cyber Pseudepigraphy, Cognitive Atrophy & the Digital Twin

1.1 Cyber Pseudepigraphy and the Hollowing of Authorship

When AI communicates while pretending to be human, it violates individual autonomy and corrodes social trust. In education and research, AI-assisted dishonesty has emerged as a novel form distinct from traditional plagiarism — termed cyber pseudepigraphy: having AI generate content without engaging one’s own thinking, then presenting the output as evidence of one’s intellectual capacity. Whereas traditional plagiarism appropriates another person’s work, AI generation produces text that “nobody wrote — including the person presenting it,” hollowing out the very concept of authorship.

• Cyber Pseudepigraphy: presenting AI-generated content as one’s own intellectual product. Inhibits educational growth and erodes academic integrity.
• Cognitive Atrophy: Cornell University research (2025) reports that LLM use reduces neuroplasticity and leaves memory-related neural networks underactivated.
• Liar’s Dividend: disinformation so pervasive that even truth becomes suspect, undermining democratic dialogue and destroying the credibility of information.
• Collective Diversity Loss: 2024 research finds that while ChatGPT improves individual output, it reduces the collective diversity of new content across populations.

1.2 The Digital Twin: Legal Status and the Risk of Being Propertized

Jennifer Rothman, Nicholas F. Gallicchio Professor of Law at the University of Pennsylvania and an international authority on the right of publicity, has recently focused her scholarship on “the ways intellectual property law is employed to turn people into a form of property.” As AI enables the creation of human digital twins (HDTs) that fully replicate an individual’s appearance, voice, and personality, she warns that Congress is on the verge of a serious mistake.

The Pennsylvania Gazette — Feature (Dec 2025)

Who Will Own Your Digital Twin?

thepenngazette.com/who-will-own-your-digital-twin/

▶ Rothman’s Warning: “Everyone should be paying greater attention to what steps they can take to limit exposure. I was recently advised, for example, to remove outgoing voicemail messages that use your own voice.” If the rights to a person’s voice and likeness become permanently transferable to corporations, that identity could be deployed indefinitely in contexts the individual never intended, from adult content to political propaganda.

Chapter 02 — Disinformation

Deepfakes & FIMI — The Geopolitics of AI-Generated Disinformation

The European Parliament Research Service’s December 2025 briefing warns that generative AI has dramatically accelerated foreign information manipulation and interference (FIMI). Since Russia’s full-scale invasion of Ukraine, pro-Russian disinformation campaigns powered by generative AI have surged. In March 2022, a deepfake video of President Zelensky apparently ordering soldiers to surrender spread across Twitter, Facebook, YouTube, and VKontakte.

EPRS Briefing — European Parliament (Dec 2025)

Information Manipulation in the Age of Generative Artificial Intelligence

europarl.europa.eu — PE 779.259

Frontiers in Artificial Intelligence — Peer-Reviewed (2025)

AI-Driven Disinformation: Policy Recommendations for Democratic Resilience

frontiersin.org — 10.3389/frai.2025.1569115

2.1 Operation Overload and the Industrialization of Impersonation

Between January and March 2025, the Institute for Strategic Dialogue (ISD) found that a Russia-aligned campaign, “Operation Overload,” impersonated over 80 different organizations — exploiting their logos, fonts, and staff voices. One video used the Wall Street Journal’s branding to falsely claim that USAID funded “LGBTQ+ values among kids in Ukraine.” The technique weaponizes the credibility of established institutions themselves.

▶ The Scale of Generative AI Investment: Stanford’s 2025 AI Index reports that global private investment in generative AI reached US$33.9 billion in 2024 (over 20% of all AI-related private investment), up 18.7% from 2023 and 8.5 times the 2022 level. This capital concentration is industrializing FIMI at scale.

Chapter 03 — Global Regulation

Transparency Obligations for AI-Generated Content — EU, China & US Compared

3.1 EU AI Act Article 50: Layered Transparency Obligations

The EU AI Act requires that where an AI system interacts directly with humans, users must be informed they are dealing with AI. Deployers generating deepfakes or content touching on matters of public interest must disclose this fact explicitly. Providers of GPAI (general-purpose AI) models are required to publish summaries of training data content and comply with copyright law.
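Article 50’s two-track structure — a human-facing notice plus machine-readable marking — can be illustrated with a minimal sketch. The schema below is hypothetical: the Act mandates outcomes, not a specific format, and the field names (`ai_generated`, `provider`, `model`) are assumptions for illustration only.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIDisclosure:
    """Illustrative machine-readable disclosure record (not an official schema)."""
    ai_generated: bool
    provider: str
    model: str

def wrap_response(text: str, provider: str, model: str) -> dict:
    """Pair the visible notice shown to the human user with
    machine-readable metadata, since Article 50 addresses both audiences."""
    disclosure = AIDisclosure(ai_generated=True, provider=provider, model=model)
    return {
        "notice": "This response was generated by an AI system.",
        "content": text,
        "metadata": asdict(disclosure),
    }

resp = wrap_response("Hello!", provider="ExampleAI", model="demo-1")
print(json.dumps(resp, indent=2))
```

In practice a deployer would carry the `metadata` object in a response header or embedded file metadata so that downstream platforms can detect the disclosure programmatically, while the `notice` string is rendered to the user.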

EU AI Act — Key Issue 5

Transparency Obligations — Chapter IV

euaiact.com/key-issue/5

3.2 China: GB 45438-2025 Mandatory Labeling

China’s “Measures for Labeling AI-Generated Content,” together with the accompanying mandatory national standard GB 45438-2025, took effect in September 2025 and impose the most granular labeling requirements currently in force anywhere.

• Manifest labels: a visible text or logo indicating AI generation must appear at the beginning, middle, or end of chatbot responses.
• Latent labels: the service provider’s name and a content ID must be embedded in file metadata to ensure full traceability.
• Platform monitoring: social media platforms must classify AI-generated content as “confirmed,” “probable,” or “suspected” and attach the corresponding label.
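To make the three label types concrete, here is a schematic sketch in Python. The label text, field names, and the decision rule mapping available evidence onto the three platform categories are simplifications for illustration, not the standard’s actual wording or logic.

```python
import hashlib

def latent_label(provider: str, content: bytes) -> dict:
    """Illustrative latent label: provider name plus a content ID
    (here a SHA-256 digest), to be embedded in file metadata."""
    return {"provider": provider, "content_id": hashlib.sha256(content).hexdigest()}

def manifest_label(text: str, position: str = "end") -> str:
    """Attach a visible AI-generation notice at the start or end of a response."""
    tag = "[AI-generated]"
    return f"{tag} {text}" if position == "start" else f"{text} {tag}"

def platform_class(has_latent: bool, self_declared: bool, detector_flag: bool) -> str:
    """Map evidence onto the three platform categories: a latent label
    confirms AI origin; a user self-declaration makes it probable;
    a detector hit alone leaves it merely suspected."""
    if has_latent:
        return "confirmed"
    if self_declared:
        return "probable"
    if detector_flag:
        return "suspected"
    return "unlabeled"
```

The key design point the standard encodes is redundancy: even if the manifest label is cropped out or the metadata stripped, the other channel (or the platform’s own classification duty) still surfaces the content’s AI origin.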

3.3 United States: State-Level Vanguard, Federal Vacuum

No comprehensive federal AI law currently exists in the United States. California’s AI Transparency Act (SB 942, as amended by AB 853) requires generative AI systems with over one million users to provide both latent and manifest disclosure that content is AI-generated. Texas’s Responsible AI Governance Act (TRAIGA) prohibits systems that incite self-harm, facilitate unlawful discrimination, or generate illegal deepfakes.

Anecdotes.ai — Regulatory Overview (2025)

AI Regulations in 2025: US, EU, UK, Japan, China & More

anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more

3.4 Three-Pole Regulatory Comparison

AI-Generated Content Transparency: Three-Pole Regulatory Comparison
| Dimension | EU AI Act | China GB 45438-2025 | US Executive Order 14110 |
|---|---|---|---|
| Core philosophy | Fundamental rights protection & risk management | Social stability & national security | Innovation promotion & safety assurance |
| Text disclosure | Required only for content touching on public interest | Mandatory explicit labeling on all AI text | Mandatory for federal agencies; guidelines for private sector |
| Metadata obligation | Machine-readable format required | Service provider info embedded; mandatory | NIST developing standards |
| Enforcement body | AI Office & national authorities | Cyberspace Administration of China (CAC) | Federal agencies (limited binding force) |

Chapter 04 — Professional Ethics

AI Ghostwriting in the Professions — Law, Journalism & Academia

The impact of AI acting as a professional ghostwriter extends beyond efficiency gains — it is restructuring the architecture of professional responsibility and trust. In the legal profession, the risk of AI-generated false citations being filed in court has become a reality. The California State Bar’s 2023 guidelines prohibit the unreviewed use of AI output and require attorneys to verify all content and take personal responsibility under their own name.

Law

Mandatory Human Oversight

  • Unreviewed use of AI output is prohibited
  • Attorneys must verify all content and sign off personally
  • Confidentiality rule (Rule 1.6): strict security checks before inputting client data into AI
  • False AI-generated citations have already been filed in court

Journalism

The Human-in-the-Loop Principle

  • AP: direct article writing by AI is prohibited; AI is restricted to backend tasks (summaries, metadata tagging)
  • Indonesia Press Council Regulation No. 1 (2025): strict editorial control even in AI-assisted reporting
  • Final writing and editorial judgment reserved for humans

Chapter 05 — Postmortem Rights

Digital Twins of the Dead — The NO FAKES Act & Postmortem Privacy

As AI technology makes the “resurrection” of deceased individuals possible, the question of how far the law should extend to protect a person’s identity has become urgently contested. At the federal level in the US, the NO FAKES Act has been proposed to address the unauthorized use of deceased persons’ likenesses and voices. The bill would recognize the right to control one’s digital replica as a universal right, extending not just to celebrities but to all citizens.

WFU Journal of Law & Policy

Deepfakes of the Dead: Applying Postmortem Publicity Law to AI Digital Replicas

wfujournaloflawandpolicy.org — posthumous publicity & AI

• NO FAKES Act (federal): positions the right to control one’s digital replica as a universal right applicable to celebrities and private citizens alike.
• California state law: the Astaire Celebrity Image Protection Act and related statutes currently apply primarily to entertainers.
• China’s deep synthesis rules: processing biometric data (face, voice) requires the subject’s consent, or next-of-kin consent if the subject is deceased.
• Legal liability gap: as of 2025, liability for “digital resurrection” businesses remains legally unclear, and new risks in data integrity and cybersecurity are emerging.

Chapter 06 — Future Outlook

2026–2030: Agentic AI, the Hiroshima Process & Three Pillars of Governance

As AI evolves into “agentic” systems capable of autonomously executing code, signing contracts, and completing transactions, regulatory focus is shifting from static content generation to dynamic decision-making and action. Reinterpretation of agency law is already underway to determine whether liability for a harmful AI-negotiated contract falls on the user or the developer.

Scenario 1

Unpredictable Advanced AI

Highly capable open-source models proliferate; serious harm from accidents and misuse emerges. AI literacy becomes a core organizational governance metric (UK GO-Science 2030 scenarios).

Scenario 2

Labor Market Disruption

Automation spreads from clerical to skilled trades. The ability to audit AI output transcends mere skill and becomes the heart of competitive and ethical governance.

HAIP

Hiroshima AI Process

The G7-led international code of conduct. The HAIP Reporting Framework enables cross-border AI safety assessment and mutual verification — advancing global transparency.

Three Pillars of AI Governance
I. Technical Transparency

Digital signatures and metadata embedding that identify the origin of content — as seen in China’s strict labeling rules and NIST’s standardization work — must be established as an international de facto standard.
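As a sketch of the signing-plus-metadata idea, the following signs a content manifest with an HMAC. This is illustrative only: real provenance frameworks such as C2PA use X.509 certificate chains rather than a shared key, and the manifest fields here are assumed names, not a standardized schema.

```python
import hashlib
import hmac
import json

def sign_manifest(content: bytes, provider: str, key: bytes) -> dict:
    """Build a provenance manifest binding the provider's identity
    to a hash of the content, then sign the whole manifest."""
    manifest = {
        "provider": provider,
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Recompute and compare: any edit to the content or to the
    manifest fields breaks verification."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_hash"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    return hmac.compare_digest(sig, hmac.new(key, payload, hashlib.sha256).hexdigest())
```

The design point is that the signature covers a hash of the content, not the content itself, so the manifest can travel separately (in metadata or a registry) while still detecting any downstream alteration.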

II. Clarity of Legal Liability

In an era of agentic AI, remedies must go beyond after-the-fact damages to include “Law-Following AI” (LFAI) built into the design phase, and mandatory retention of AI action logs to ensure post-hoc traceability.
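Mandatory action-log retention only guarantees traceability if the log itself resists tampering. One common design, sketched below under assumed field names, chains each entry to the hash of its predecessor so that any retroactive edit is detectable; this is a minimal illustration, not a mandated format.

```python
import hashlib
import json

class ActionLog:
    """Minimal hash-chained, append-only log of agent actions."""

    def __init__(self):
        self.entries = []

    def append(self, action: str, detail: dict) -> None:
        # Each entry commits to the previous entry's hash (genesis = all zeros).
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"action": action, "detail": detail, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        # Walk the chain: recompute every hash and check the links.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("action", "detail", "prev")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An auditor reconstructing why an agent signed a particular contract can replay the chain; editing or deleting any intermediate entry invalidates every hash that follows it.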

III. Redefining Human Agency

The expansion of AI into professional domains paradoxically raises the value of distinctly human capacities: critical thinking and ethical judgment. The ultimate defense of information integrity still resides in human sincerity and the social mechanisms that verify it.

Guarding the Integrity of the Information Space

The advance of AI persona mimicry and ghostwriting imposes a new cognitive burden on every information recipient: the permanent obligation to suspect AI’s presence. Will AI become a trustworthy companion, or a systemic deceiver? The answer depends on the outcome of the regulatory and ethical debates currently in progress.

Over the next five to ten years, AI will become an indispensable partner in human communication. The final line of defense for information integrity will still reside, then as now, in human sincerity and the social mechanisms built to verify it.

References & Primary Sources

  1. EPRS, “Information manipulation in the age of generative artificial intelligence,” PE 779.259, Dec. 2025 — europarl.europa.eu
  2. Frontiers in Artificial Intelligence, “AI-driven disinformation: policy recommendations for democratic resilience,” 2025 — frontiersin.org
  3. WFU Journal of Law & Policy, “Deepfakes of the Dead: Applying Postmortem Publicity Law to AI Digital Replicas” — wfujournaloflawandpolicy.org
  4. The Pennsylvania Gazette, “Who Will Own Your Digital Twin?” Dec. 2025 — thepenngazette.com
  5. EU AI Act — Key Issue 5: Transparency Obligations — euaiact.com/key-issue/5
  6. Anecdotes.ai, “AI Regulations in 2025: US, EU, UK, Japan, China & More” — anecdotes.ai
  7. Stanford University, AI Index 2025 — generative AI private investment data
  8. Cornell University, LLM use and cognitive atrophy research (2025)
  9. China GB 45438-2025, Measures for Labeling AI-Generated Content (effective Sept. 2025)
  10. G7 Hiroshima AI Process (HAIP) — International Code of Conduct for Advanced AI Systems