Local non-Bayesian social learning with stubborn agents

04/29/2019
by Daniel Vial, et al.

In recent years, people have increasingly turned to social networks like Twitter and Facebook for news. In contrast to traditional news sources, these platforms allow users to simultaneously read news articles and share opinions with other users. Among other effects, this has led to the rise of fake news, sometimes spread via bots (automated social media accounts masquerading as real users). In this work, we devise and analyze a mathematical model describing such platforms. The model includes a large number of agents attempting to learn an underlying true state of the world in an iterative fashion. At each iteration, these agents update their beliefs about the true state based on noisy observations of the true state and the beliefs of a subset of other agents. These subsets may include a special type of agent we call bots, who attempt to convince others of an erroneous true state rather than learn (modeling users spreading fake news). This process continues for a finite number of iterations we call the learning horizon. Our analysis details three cases for the outcome of this process: agents may learn the true state, mistake the erroneous state promoted by the bots for the true state, or believe the state falls between the true and erroneous states. Which outcome occurs depends on the relationship between the number of bots and the learning horizon. This leads to several interesting consequences; for example, we show that agents can initially learn the true state but later forget it and believe the erroneous state to be true instead. In short, we argue that varying the learning horizon can lead to dramatically different outcomes. This is in contrast to existing works studying models like ours, which typically fix a finite horizon or only consider an infinite horizon.
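The dynamics described above can be sketched with a toy simulation. This is illustrative only: the paper's exact update rule, mixing weights, and network model are not given in the abstract, so the equal-weight averaging, Gaussian noise, and uniformly random peer subsets below are assumptions. Regular agents average a noisy observation of the true state with the beliefs of a random subset of others; bots are stubborn and always report the erroneous state.

```python
import random

def simulate(n_agents=100, n_bots=10, true_state=1.0, bot_state=0.0,
             horizon=50, noise=0.1, subset_size=5, seed=0):
    """Toy non-Bayesian learning dynamics (assumed update rule, not the paper's).

    Returns the average belief of the regular agents at the learning horizon.
    """
    rng = random.Random(seed)
    beliefs = [0.5] * n_agents          # regular agents start uncommitted
    bot_beliefs = [bot_state] * n_bots  # stubborn agents (bots) never update

    for _ in range(horizon):
        everyone = beliefs + bot_beliefs
        new_beliefs = []
        for _i in range(n_agents):
            obs = true_state + rng.gauss(0, noise)     # noisy observation of truth
            peers = rng.sample(everyone, subset_size)  # beliefs of a random subset
            # equal-weight (non-Bayesian) average of observation and peer beliefs
            new_beliefs.append((obs + sum(peers)) / (1 + subset_size))
        beliefs = new_beliefs

    return sum(beliefs) / n_agents
```

With no bots the average belief settles near the true state, while adding bots pulls it toward the erroneous state, loosely mirroring the abstract's three outcome regimes as the bot count and horizon vary.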
