Can Large Language Model Agents Simulate Human Trust Behaviors?

Abstract

Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents reliably simulate human behavior? In this paper, we focus on a key aspect of human interaction: trust, and aim to investigate whether LLM agents can effectively simulate human trust behaviors. We first find that LLM agents generally exhibit trust behaviors, which we refer to as agent trust, under the framework of Trust Games, a widely recognized tool in behavioral economics. Additionally, we discover that GPT-4 agents demonstrate high behavioral alignment with humans in terms of trust behaviors, suggesting the feasibility of simulating human trust behaviors with LLM agents. Moreover, we investigate the biases in agent trust and the differences in trust directed towards other LLM agents versus humans. We also examine the intrinsic properties of agent trust under conditions such as advanced reasoning strategies and external manipulations. Our study provides new insights into the behaviors of LLM agents and highlights the fundamental analogy between LLMs and humans beyond value alignment. We further discuss the broad implications of our findings for various applications where trust plays a critical role.
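For readers unfamiliar with the setup, the sketch below illustrates the payoff structure of a single Trust Game round. The $10 endowment, the 3x multiplier, and the heuristic decision functions are illustrative placeholders rather than details taken from the paper; in an actual experiment, the decision functions would be replaced by prompted LLM agents.

```python
# Minimal sketch of one Trust Game round (illustrative, not the paper's code).
# Assumptions: a $10 endowment, a 3x multiplier, and toy heuristics standing
# in for LLM agent decisions.

ENDOWMENT = 10      # money initially given to the trustor (assumed amount)
MULTIPLIER = 3      # sent money is tripled before reaching the trustee

def decide_amount_to_send(endowment: int) -> int:
    """Placeholder for the trustor agent's decision.

    A real experiment would prompt an LLM with its persona and the game
    rules, then parse the chosen amount from its response.
    """
    return endowment // 2  # toy heuristic: send half

def decide_amount_to_return(received: int) -> int:
    """Placeholder for the trustee agent's decision."""
    return received // 2  # toy heuristic: return half

def play_trust_game() -> tuple[int, int]:
    """Play one round and return (trustor_payoff, trustee_payoff)."""
    sent = decide_amount_to_send(ENDOWMENT)
    received = sent * MULTIPLIER
    returned = decide_amount_to_return(received)
    trustor_payoff = ENDOWMENT - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

if __name__ == "__main__":
    print(play_trust_game())  # (12, 8) with the toy heuristics above
```

The amount the trustor chooses to send is the behavioral measure of trust; comparing the distributions of these decisions between LLM agents and human participants is what the paper refers to as behavioral alignment.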

Type: Publication
Accepted at NeurIPS 2024

Our project URL: https://llm-agent-trust-behavior.github.io/

ChengXing Xie
Artificial Intelligence Student

My research interests include LLM agents and multi-modal modeling (VLMs, diffusion models).