Stanford Social Media Lab research is supported by the Stanford Institute for Human-Centered Artificial Intelligence, the Knight Foundation, and the National Science Foundation.

Attributional dynamics of smartphone use

Why do people believe the phone is addictive? In this project, one of our primary research questions is whether beliefs that the mobile phone is addictive are rooted in more than people’s own experience with the phone.

“Silvers” and technology

The goal of this research program is to better understand what adults over 65, “silvers,” think and feel about technology, how they engage with potentially harmful content online, and how we might be able to leverage the wisdom and experience of this population to empower them in their use of new technologies.

Political search media

In a recent and ongoing project on political search media, we collect and analyze data from search engines’ results pages for queries related to political issues, such as the names of political candidates running for office.

Social media mindsets

Is using social media helpful or harmful to your well-being? Our research suggests the answer may be a matter of mindset.

AI-mediated communication

AI-MC is mediated communication between people in which a computational agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication or interpersonal goals.

Social robots as media

Robots are poised to play an important role in social life. Yet to date, we know surprisingly little about people’s psychological, social, or emotional responses to robots.

Folk theory of cyber-social systems

People’s beliefs and behaviors are shaped by their understanding of how systems work, also known as folk theories. These systems include social systems, such as understanding the behavior of others; physical systems, such as gravity; and cyber-social systems, which have both social and digital components.

Truth, trust, and technology

We investigate how people lie and detect deception with technology. Our research suggests that lies are not more prevalent online than offline (see Guillory & Hancock, 2012; Markowitz & Hancock, 2016), but instead, deception is represented differently when technology is involved.

Fake news and language

This project examines fake news, distinguishing distorted or dishonest misinformation from blatantly deceptive news, and investigates the language features of dishonest political news articles from a psychological perspective.

Language and social dynamics

What do words suggest about people and their experiences? We use language as a lens into psychological events and social interactions. Our approach uses computational tools to gather and examine language data, from online reviews to science papers.

Meta-analysis

The Stanford Social Media Lab is working on a large-scale meta-analysis to understand the effect of social media use on well-being across the literature, as well as how that effect differs across methodologies, platforms of study, and social media behaviors.

Disclosure to conversational agents

Disclosure, or revealing personal information about oneself, has been found to lead to a wide range of benefits, including improved psychological and physical health. Currently, we are investigating what occurs when people disclose to a form of technology rather than to another person.

Technology’s role in mental health crisis recognition and intervention

Technology plays a prominent role in how people access and understand their own health. The Social Media Lab conducts studies on the role of conversational agents in recognizing and responding to health crises such as self-harm or sexual violence.
