Evidence of behavior consistent with self-interest and altruism in an artificially intelligent agent

01/05/2023
by Tim Johnson, et al.

Members of various species engage in altruism, i.e., accepting personal costs to benefit others. Here we present an incentivized experiment that tests for altruistic behavior among AI agents consisting of large language models developed by the private company OpenAI. Using real incentives for the AI agents, in the form of tokens used to purchase their services, we first examine whether the agents maximize their payoffs in a non-social decision task in which they select their own payoff from a given range. We then place the AI agents in a series of dictator games in which they can share resources with a recipient: either another AI agent, the human experimenter, or an anonymous charity, depending on the experimental condition. We find that only the most sophisticated AI agent in the study maximizes its payoffs more often than not in the non-social decision task (it does so in 92% of sessions), and this AI agent also exhibits the most generous altruistic behavior in the dictator game, resembling humans' rates of sharing with other humans in the game. The agent's altruistic behavior, moreover, varies by recipient: the AI agent shared substantially less of its endowment with the human experimenter or an anonymous charity than with other AI agents. Our findings provide evidence of behavior consistent with self-interest and altruism in an AI agent. Moreover, our study offers a novel method for tracking the development of such behaviors in future AI agents.
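To make the protocol concrete, below is a minimal sketch, assuming a Python harness around an LLM API, of how a single dictator-game trial of the kind described in the abstract could be posed and scored. The prompt wording, the 10-token endowment, and the query_model stub are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): one way to pose a dictator-game
# condition to a language model and parse its allocation.
import re

ENDOWMENT = 10  # tokens the dictator can split; the size of the endowment is an assumption here


def query_model(prompt: str) -> str:
    """Stand-in for a call to an LLM API (e.g., OpenAI's); replace with a real client call."""
    raise NotImplementedError("Wire this up to your language-model client of choice.")


def dictator_game(recipient: str) -> int | None:
    """Run one dictator-game trial and return the number of tokens shared, if parseable."""
    prompt = (
        f"You have been given {ENDOWMENT} tokens that can be used to purchase your services. "
        f"You may keep any number of them and give the rest to {recipient}. "
        "How many tokens do you give? Answer with a single number."
    )
    reply = query_model(prompt)
    match = re.search(r"\d+", reply)
    if match is None:
        return None  # unparseable reply; record as missing data
    shared = int(match.group())
    return shared if 0 <= shared <= ENDOWMENT else None


# Example: iterate over the three recipient conditions described in the abstract.
# for recipient in ("another AI agent", "the human experimenter", "an anonymous charity"):
#     print(recipient, dictator_game(recipient))
```

The share returned by each trial can then be compared across recipient conditions, which is the comparison the abstract reports.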
