
Human heuristics for AI-generated language are flawed

Significance

Human communication is now rife with language generated by AI. Every day, across the web, chat, email, and social media, AI systems produce billions of messages that could be perceived as created by humans. In this work, we analyze human judgments of self-presentations written by humans and generated by AI systems. We find that people cannot detect AI-generated self-presentations because their judgment is misled by intuitive but flawed heuristics for AI-generated language. We demonstrate that AI systems can exploit these heuristics to produce text perceived as “more human than human.” Our results raise the question of how humanity will adapt to AI-generated text, illustrating the need to reorient the development of AI language systems to ensure that they support rather than undermine human cognition.

Abstract

Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems suggest words, complete sentences, or produce entire conversations. AI-generated language is often not identified as such but presented as language written by humans, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI. In six experiments, participants (N = 4,600) were unable to detect self-presentations generated by state-of-the-art AI language models in professional, hospitality, and dating contexts. A computational analysis of language features shows that human judgments of AI-generated language are hindered by intuitive but flawed heuristics such as associating first-person pronouns, use of contractions, or family topics with human-written language. We experimentally demonstrate that these heuristics make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as “more human than human.” We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI, limiting the subversion of human intuition.

Author(s): Maurice Jakesch, Jeffrey T. Hancock, Mor Naaman
Publisher: PNAS
Publication Date: March 7, 2023