
Linguistic Markers of Inherently False AI Communication and Intentionally False Human Communication: Evidence From Hotel Reviews

Abstract

To the human eye, AI-generated outputs of large language models have increasingly become indistinguishable from human-generated outputs. Therefore, to determine the linguistic properties that separate AI-generated text from human-generated text, we used a state-of-the-art chatbot, ChatGPT, to write hotel reviews and compared them to human-generated counterparts across content (emotion), style (analytic writing, adjectives), and structural features (readability). Results suggested AI-generated text had a more analytic style and was more affective, more descriptive, and less readable than human-generated text. Classification accuracies of AI-generated versus human-generated texts were over 80%, far exceeding chance (∼50%). Here, we argue AI-generated text is inherently false when communicating about personal experiences that are typical of humans and differs from intentionally false human-generated text at the language level. Implications for AI-mediated communication and deception research are discussed.
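The abstract reports that a classifier trained on content, style, and structural features separates AI-generated from human-generated reviews at over 80% accuracy. The sketch below is not the authors' pipeline; it is a minimal illustration, under assumed feature definitions (crude proxies for affect, descriptiveness, and readability), of how such a feature-based classification could be set up with scikit-learn.

```python
# Minimal sketch (assumed, not the paper's actual method): classify reviews as
# AI- vs human-generated from a few hand-rolled linguistic features that loosely
# mirror the content/style/structure categories named in the abstract.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def features(text: str) -> list[float]:
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    avg_word_len = sum(len(w) for w in words) / n_words        # structural proxy
    avg_sent_len = n_words / max(len(sentences), 1)            # readability proxy
    affect = sum(w.lower() in {"amazing", "wonderful", "terrible",
                               "love", "hate", "perfect"}
                 for w in words) / n_words                     # emotion proxy
    descriptive = sum(w.lower().endswith(("ful", "ous", "ive", "able"))
                      for w in words) / n_words                # crude adjective proxy
    return [avg_word_len, avg_sent_len, affect, descriptive]


def classify(reviews: list[str], labels: list[int]) -> float:
    """Mean cross-validated accuracy; chance is ~50% for balanced classes.
    labels: 1 = AI-generated, 0 = human-generated."""
    X = np.array([features(r) for r in reviews])
    y = np.array(labels)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()
```

With a balanced corpus of labeled reviews, `classify(reviews, labels)` returns an accuracy that can be compared against the ~50% chance baseline, analogous in spirit to the 80%+ figure reported in the abstract.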

Authors
David M. Markowitz
Jeffrey T. Hancock
Jeremy N. Bailenson
Journal
Journal of Language and Social Psychology
Publication Date
September 11, 2023