This paper studies the evolution of the distribution of opinions in a population of individuals that contains two distinct subgroups of highly committed, well-connected opinion leaders endowed with strong convincing power. Each individual, located at a vertex of a directed graph, is characterized by her name, the list of people she interacts with, her level of awareness, and her opinion. Temporal evolutions under different local rules are compared in order to determine the conditions under which the formation of strongly polarized subgroups, each adopting the opinion of one of the two groups of opinion leaders, is favored.
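As a rough illustration of this kind of setup, the Python sketch below places two committed leader groups on a random directed interaction structure and lets ordinary individuals follow a weighted local majority. The graph construction, the leaders' weight (convincing_power) and all numerical values are assumptions for illustration, not the rules of the paper.

```python
# Minimal sketch (not the authors' code) of opinion dynamics with two groups of
# committed, well-connected opinion leaders on a random directed interaction structure.
import random

N = 200                      # ordinary individuals
n_leaders = 10               # size of each committed leader group
convincing_power = 3.0       # leaders' influence weight relative to ordinary agents

agents = list(range(N + 2 * n_leaders))
opinion = {a: random.choice([-1, +1]) for a in agents}
leaders_plus = set(range(N, N + n_leaders))                    # committed to +1
leaders_minus = set(range(N + n_leaders, N + 2 * n_leaders))   # committed to -1
for a in leaders_plus:
    opinion[a] = +1
for a in leaders_minus:
    opinion[a] = -1

# Directed interaction lists: each ordinary agent listens to a few random others,
# and every ordinary agent also listens to every leader (leaders are well connected).
listens_to = {a: random.sample(agents, 5) + list(leaders_plus | leaders_minus)
              for a in range(N)}

def step():
    """One update: a random ordinary agent adopts the weighted majority opinion
    of the people she listens to (leaders never change their mind)."""
    a = random.randrange(N)
    score = 0.0
    for b in listens_to[a]:
        w = convincing_power if b >= N else 1.0
        score += w * opinion[b]
    if score != 0:
        opinion[a] = 1 if score > 0 else -1

for _ in range(20000):
    step()
print("fraction holding +1:", sum(1 for a in range(N) if opinion[a] == 1) / N)
```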
In this paper we investigate a model (based on the idea of outflow dynamics) in which only conformity and anticonformity can lead to opinion change. We show that for a low level of anticonformity consensus is still reachable, but spontaneous reorientations between the two types of consensus ("all say yes" or "all say no") appear.
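A minimal Python sketch of outflow (Sznajd-type) dynamics with a small admixture of anticonformity is given below; the ring topology, the pair-update rule and the parameter p are illustrative assumptions rather than the exact published model.

```python
# Sketch only: outflow (Sznajd-type) dynamics on a ring where a pair of agreeing
# neighbours convinces its outer neighbours (conformity), except that with
# probability p the outer neighbours anticonform instead.
import random

N, p = 100, 0.01                    # p = level of anticonformity (assumed name)
s = [random.choice([-1, 1]) for _ in range(N)]

def step():
    i = random.randrange(N)
    j = (i + 1) % N
    if s[i] == s[j]:                # agreeing pair acts on its outer neighbours
        target = s[i]
        if random.random() < p:     # anticonformity: take the opposite opinion
            target = -target
        s[i - 1] = target
        s[(j + 1) % N] = target

magnetization = []
for t in range(200000):
    step()
    if t % 1000 == 0:
        magnetization.append(sum(s) / N)
print(magnetization[-5:])           # jumps between +1 and -1 signal reorientations
```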
We study the introduction of lexical innovations into a community of language users. Lexical innovations, i.e. new terms added to people's vocabulary, play an important role in the process of language evolution. Nowadays, information spreads through a variety of networks, including, among others, online and offline social networks and the World Wide Web. The entire system, comprising networks of different nature, can be represented as a multi-layer network. In this context, the diffusion of lexical innovations occurs in a peculiar fashion. In particular, a lexical innovation can undergo three different processes: its original meaning can be accepted; its meaning can be changed or misunderstood (e.g. when not properly explained), so that more than one meaning emerges in the population; lastly, in the case of a loan word, it can be translated into the population's language (i.e. defining a new lexical innovation or using a synonym) or into a dialect spoken by part of the population. Therefore, lexical innovations cannot be treated simply as information. We develop a model for analyzing this scenario using a multi-layer network comprising a social network and a media network. The latter represents the set of all information systems of a society, e.g. television, the World Wide Web and radio. Furthermore, we introduce temporal directed edges between the nodes of these two networks: at each time step, nodes of the media network can be connected to randomly chosen nodes of the social network and vice versa. In doing so, information spreads through the whole system, and people can share a lexical innovation with their neighbors or, in the event they work as reporters, by using media nodes. Lastly, we use the concept of the "linguistic sign" to model lexical innovations, showing its fundamental role in the study of these dynamics. Numerous numerical simulations have been performed to analyze the proposed model and its outcomes.
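The following Python sketch is only meant to convey the flavor of such a two-layer diffusion, with the lexical innovation represented as a (form, meaning) pair; the layer sizes, the misunderstanding probability p_mis and the reporter mechanism are all assumed for illustration and are not the paper's specification.

```python
# Illustrative two-layer spreading of a lexical innovation modelled as a
# "linguistic sign" (form, meaning).
import random

N_social, N_media = 300, 10
p_mis = 0.05                                  # chance the meaning drifts in transmission
reporters = set(random.sample(range(N_social), 15))
media = range(N_social, N_social + N_media)   # media nodes get their own index range

sign = {0: ("selfie", "self-portrait photo")} # node -> (form, meaning); one seeded innovator

def transmit(src, dst):
    """Pass the linguistic sign on; the form survives, the meaning may drift."""
    if src in sign and dst not in sign:
        form, meaning = sign[src]
        if random.random() < p_mis:
            meaning += " (variant)"
        sign[dst] = (form, meaning)

for t in range(300):
    # social layer: random word-of-mouth contacts
    for _ in range(N_social):
        transmit(random.randrange(N_social), random.randrange(N_social))
    # temporal directed inter-layer edges, redrawn at every time step:
    # reporters feed the media layer, media nodes broadcast to random social nodes
    for m in media:
        transmit(random.choice(sorted(reporters)), m)
        for dst in random.sample(range(N_social), 5):
            transmit(m, dst)

adopters = [n for n in sign if n < N_social]
meanings = {m for _, m in sign.values()}
print("adopters:", len(adopters), " distinct meanings in circulation:", len(meanings))
```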
A model of opinion dynamics with two types of agents as social actors is presented, using the Ising thermodynamic model as the dynamical template. The agents are either opportunists, who live at fixed sites and interact with their neighbors, or fanatics/missionaries, who move randomly from site to site in pursuit of converting agents of the opposite opinion with the help of opportunists. Here, the moving agents act as an external influence on the opportunists, converting them to the opposite opinion. It is shown by numerical simulations that such opinion-formation dynamics may explain some details of consensus formation even when one of the opinions is held by a minority. Regardless of the distribution of opinions, societies of different sizes exhibit different opinion-formation behavior and time scales. In order to understand the general behavior, scaling relations obtained by comparing opinion-formation processes in societies with varying population and varying numbers of randomly moving agents are studied. For the proposed model two types of scaling relations are observed. In fixed-size societies, increasing the number of randomly moving agents gives a scaling relation for the time scale of the opinion-formation process. The second type of scaling relation is due to size-dependent information propagation in finite but large systems, namely finite-size scaling.
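A condensed Python sketch of this kind of dynamics is shown below: opportunists on a ring follow their local field, while randomly hopping fanatics add a bias toward their fixed opinion. The one-dimensional topology, the field strength and all parameter values are illustrative assumptions, not the authors' setup.

```python
# Rough sketch: opportunists align with their local majority (zero-temperature,
# Ising-like dynamics), while M "fanatics" hop to random sites and act as a local
# field pushing their fixed opinion.
import random

N, M = 500, 10
spin = [random.choice([-1, 1]) for _ in range(N)]
fanatic_site = [random.randrange(N) for _ in range(M)]   # all fanatics push +1 here
FANATIC_OPINION = +1
FIELD = 2.0                                              # strength of fanatic influence

def local_field(i):
    h = spin[i - 1] + spin[(i + 1) % N]                  # neighbour influence on a ring
    h += FIELD * sum(1 for site in fanatic_site if site == i) * FANATIC_OPINION
    return h

for t in range(200000):
    i = random.randrange(N)
    h = local_field(i)
    if h != 0:
        spin[i] = 1 if h > 0 else -1                     # opportunist follows the field
    k = random.randrange(M)                              # fanatics perform a random walk
    fanatic_site[k] = random.randrange(N)

print("final magnetization:", sum(spin) / N)
```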
The Levy–Levy–Solomon (LLS) model [M. Levy, H. Levy and S. Solomon, Econ. Lett. 45, 103 (1994)] is one of the most influential agent-based economic market models, and it has been discussed and analyzed in several publications. In particular, Lux and Zschischang [E. Zschischang and T. Lux, Physica A: Stat. Mech. Appl. 291, 563 (2001)] have shown that the model exhibits finite-size effects. In this study, we extend existing work in several directions. First, we present simulations which reveal the finite-size effects of the model. Second, we shed light on the origin of these finite-size effects. Furthermore, we demonstrate the sensitivity of the LLS model with respect to random numbers; in particular, we conclude that a low-quality pseudo-random number generator has a huge impact on the simulation results. Finally, we study the impact of the stopping criteria in the market clearance mechanism of the LLS model.
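The point about pseudo-random numbers can be illustrated with a toy experiment of the following kind. This is not the LLS model itself: a simplified multiplicative-wealth simulation is fed once with NumPy's default PCG64 generator and once with the notoriously poor RANDU linear congruential generator, and the summary statistics can then be compared.

```python
# Toy illustration of how one might compare a low-quality generator (RANDU) with a
# modern one (NumPy's PCG64) on the same simulation; not the LLS model.
import numpy as np

def randu(n, seed=1):
    """RANDU: a historically poor linear congruential generator."""
    out, x = np.empty(n), seed
    for i in range(n):
        x = (65539 * x) % 2**31
        out[i] = x / 2**31
    return out

def simulate(uniforms, n_agents=1000, n_steps=200):
    """Each agent's wealth is multiplied each step by a random factor from the stream."""
    u = uniforms(n_agents * n_steps).reshape(n_steps, n_agents)
    wealth = np.ones(n_agents)
    for step in range(n_steps):
        wealth *= 0.95 + 0.1 * u[step]     # multiplicative factor in [0.95, 1.05]
    return wealth

rng = np.random.default_rng(1)
w_good = simulate(lambda n: rng.random(n))
w_bad = simulate(lambda n: randu(n))
print("PCG64 mean/max :", w_good.mean(), w_good.max())
print("RANDU mean/max :", w_bad.mean(), w_bad.max())
```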
In this paper, we provide an analytical framework for investigating the efficiency of a consensus-based model for tackling global optimization problems. This work justifies the optimization algorithm in the mean-field sense by showing convergence to the global minimizer for a large class of functions. Theoretical results on consensus estimates are then illustrated by numerical simulations in which variants of the method, including nonlinear diffusion, are introduced.
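A minimal sketch of a consensus-based optimization iteration of the kind analyzed here is given below; the test function, the time step and the coefficients lam, sigma and alpha are illustrative choices, not values taken from the paper.

```python
# Minimal consensus-based optimization (CBO) sketch with a Gibbs-weighted consensus
# point, drift towards it, and scaled isotropic diffusion.
import numpy as np

def rastrigin(x):                          # a standard multimodal test function
    return 10 * x.shape[1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=1)

rng = np.random.default_rng(0)
N, d = 200, 2                              # particles, dimension
lam, sigma, alpha, dt = 1.0, 0.7, 50.0, 0.05
X = rng.uniform(-4, 4, size=(N, d))

for _ in range(400):
    f = rastrigin(X)
    w = np.exp(-alpha * (f - f.min()))     # Gibbs-type weights (shifted for stability)
    v = (w[:, None] * X).sum(axis=0) / w.sum()          # weighted consensus point
    diff = X - v
    noise = rng.standard_normal((N, d))
    # drift towards the consensus point plus diffusion scaled by the distance to it
    X = (X - lam * dt * diff
         + sigma * np.sqrt(dt) * np.linalg.norm(diff, axis=1, keepdims=True) * noise)

print("consensus point:", v, "  f(v) =", rastrigin(v[None, :])[0])
```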
We model, simulate and control the guiding problem for a herd of evaders under the action of repulsive drivers. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region of Euclidean space. The numerical simulation of such models quickly becomes unfeasible for a large number of interacting agents, as the number of interactions grows as O(N²) for N agents. To reduce the computational cost to O(N), we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. First, the considered time interval is divided into a number of subintervals. In each subinterval, the RBM randomly divides the set of particles into small subsets (batches), and only the interactions inside each batch are considered. Due to the averaging effect, the RBM approximation converges to the exact dynamics in the L²-expectation norm as the length of the subintervals goes to zero. For this approximated dynamics, the corresponding optimal control can be computed efficiently using classical gradient descent. The resulting control is not optimal for the original system, but for the reduced RBM model. We therefore adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. This leads to a semi-feedback control strategy, in which the control is applied to the original system only for a short time interval, after which the optimal control for the next time interval is computed from the state of the (controlled) original dynamics. Through numerical experiments we show that the combination of RBM and MPC leads to a significant reduction of the computational cost while preserving the capacity to control the overall dynamics.
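The batching idea can be sketched in a few lines of Python; the repulsive kernel, step sizes and batch size below are assumptions, and the optimal control/MPC layer of the paper is omitted, leaving only the O(N) batched dynamics.

```python
# Schematic Random Batch Method (RBM) integrator for pairwise-interacting agents.
import numpy as np

rng = np.random.default_rng(0)
N, d, dt, batch_size, T = 200, 2, 0.01, 4, 2.0
X = rng.normal(size=(N, d))

def kernel(diff):
    """Repulsive interaction decaying with distance (assumed form)."""
    r2 = np.sum(diff**2, axis=-1, keepdims=True) + 1e-6
    return diff / r2

t = 0.0
while t < T:
    # randomly partition the agents into small batches for this subinterval
    perm = rng.permutation(N)
    for start in range(0, N, batch_size):
        batch = perm[start:start + batch_size]
        diffs = X[batch][:, None, :] - X[batch][None, :, :]
        # each agent only feels the members of its own batch (O(N) work per step overall);
        # dividing by (batch size - 1) mimics the 1/(N - 1) normalization of the full model
        force = kernel(diffs).sum(axis=1) / (len(batch) - 1)
        X[batch] += dt * force
    t += dt

print("spread of the crowd after RBM integration:", X.std(axis=0))
```

In the full method described above, the driver controls would then be optimized on this cheap surrogate and re-computed on a receding horizon from the state of the true (controlled) dynamics.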
In this paper, we adopt an agent-based approach to model collective market dynamics when interactions between the agents in a market are significant. Our model has two special features. First, social groups are formed in a random cluster process which, we believe, mimics the actual formation of social circles. Second, the process is shown to have an equilibrium distribution which gives the highest probability to the market configurations with the maximum total likelihood. With this model we are able to capture some key characteristics of the collective dynamics that emerge from agent interactions, including, for example, a heavy-tailed distribution of market returns.
The practice of detecting power laws and scaling behaviors in economics and finance has gained momentum in the last few years, due to the increased use of concepts and methods first developed in statistical physics. Some disappointment has emerged in the economics profession, however, regarding the models proposed so far to theoretically explain these phenomena. In this paper we aim to address this criticism, showing that scaling behaviors can naturally emerge in a multiagent system with optimizing interacting units characterized by financial fragility.
Although in many social sciences there is a radical division between studies based on quantitative (e.g. statistical) and qualitative (e.g. ethnographic) methodologies and their associated epistemological commitments, agent-based simulation fits into neither camp, and should be capable of modelling both quantitative and qualitative data. Nevertheless, most agent-based models (ABMs) are founded on quantitative data. This paper explores some of the methodological and practical problems involved in basing an ABM on qualitative participant observation and proposes some advice for modellers.
A model is developed to study the effectiveness of innovation and its impact on structure creation in agent-based societies. The abstract model that is developed is easily adapted to any particular field. In an interacting environment, the agents receive something from the environment (the other agents) in exchange for their effort and pay the environment a certain amount of value for the fulfillment of their needs or for the very price of existence in that environment. This is coded by two bit strings, and the dynamics of the exchange is based on the matching of these strings to those of the other agents. Innovation corresponds to the agents adapting their bit strings to improve some utility function.
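A purely illustrative Python sketch of such a bit-string exchange is given below; the string length, the existence cost and the innovation rule are assumptions chosen only to make the mechanism concrete.

```python
# Illustrative bit-string mechanism: each agent carries a "need" string and an "offer"
# string, payoff comes from how well offers match the needs of others, and innovation
# is an adaptation of the offer string that is kept only if it improves the match.
import random

L, N, p_innovate = 16, 50, 0.1
agents = [{"need": [random.randint(0, 1) for _ in range(L)],
           "offer": [random.randint(0, 1) for _ in range(L)],
           "value": 0.0} for _ in range(N)]

def match(offer, need):
    """Fraction of bits where the offer fulfils the need."""
    return sum(o == n for o, n in zip(offer, need)) / L

for t in range(2000):
    a, b = random.sample(range(N), 2)
    # a receives something from the environment (agent b) and pays a fixed existence cost
    agents[a]["value"] += match(agents[b]["offer"], agents[a]["need"]) - 0.5
    # innovation: the provider occasionally tries to adapt one bit of its offer string
    if random.random() < p_innovate:
        i = random.randrange(L)
        trial = agents[b]["offer"][:]
        trial[i] = 1 - trial[i]
        if match(trial, agents[a]["need"]) > match(agents[b]["offer"], agents[a]["need"]):
            agents[b]["offer"] = trial      # keep innovations that improve the match

best = max(agents, key=lambda ag: ag["value"])
print("best accumulated value:", round(best["value"], 2))
```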
Since the start of the financial crisis in 2007, the debate on the proper level of leverage of financial institutions has been flourishing. This paper addresses this crucial issue within the Eurace artificial economy by considering the effects that different choices of capital adequacy ratios for banks have on the main economic indicators. The study also gives us the opportunity to examine the outcomes of the Eurace model so as to discuss the nature of endogenous money, contributing to a debate that has grown stronger over the last two decades. A set of 40-year-long simulations has been performed and examined in the short (first five years), medium (the following 15 years) and long (the last 20 years) run. Results point out a non-trivial dependence of real economic variables such as the gross domestic product (GDP), the unemployment rate and the aggregate capital stock on banks' capital adequacy ratios; this dependence operates through the credit channel and varies significantly according to the chosen evaluation horizon. In general, while boosting the economy in the short run, regulations allowing for a high leverage of the banking system tend to be depressing in the medium and long run. Results also point out that the stock of money is driven by the demand for loans, thereby supporting the theory of the endogenous nature of credit money.
Given the economy's complex behavior and sudden transitions, as evidenced in the 2007–2008 crisis, agent-based models are widely considered a promising alternative to current macroeconomic research dominated by DSGE models, whose failure is commonly interpreted as a failure to incorporate heterogeneous interacting agents. This paper explains that complex behavior and sudden transitions also arise from the economy's financial structure as reflected in its balance sheets, not just from heterogeneous interacting agents. It introduces "flow-of-funds" or "accounting" models, which were pre-eminent in successful anticipations of the recent crisis. In illustration, a simple balance-sheet model of the economy is developed to demonstrate that non-linear behavior and sudden transitions may arise from the economy's balance-sheet structure, even without any micro-foundations. The paper concludes by discussing one recent example of combining flow-of-funds and agent-based models, which appears a promising avenue for future research.
In this paper, we report on the theoretical foundations, empirical context and technical implementation of an agent-based modeling (ABM) framework that uses a high-performance computing (HPC) approach to investigate human population dynamics on a global scale and on evolutionary time scales. The ABM-HPC framework provides an in silico testbed to explore how short-term/small-scale patterns of individual human behavior and long-term/large-scale patterns of environmental change act together to influence human dispersal, survival and extinction scenarios. These topics are currently at the center of the Neanderthal debate, i.e., the question of why Neanderthals died out during the Late Pleistocene while modern humans dispersed over the entire globe. To tackle this and similar questions, simulations typically adopt one of two opposing approaches: top-down (equation-based) or bottom-up (agent-based) models of population dynamics. We propose HPC technology as an essential computational tool to bridge the gap between these approaches. Using the numerical simulation of worldwide human dispersals as an example, we show that integrating different levels of the model hierarchy into an ABM-HPC simulation framework provides new insights into emergent properties of the model and into the potential and limitations of agent-based versus continuum models.
The evolution of unconditional cooperation is one of the fundamental problems in science. We propose a new solution to this puzzle, treating the issue with an evolutionary model in which agents play the Prisoner's Dilemma on signed networks. The topology is allowed to co-evolve with the relational signs as well as with the agent strategies. We introduce a strategy that is conditional on the emotional content embedded in the network signs and show that this strategy acts as a catalyst, creating favorable conditions for the spread of unconditional cooperation. In line with the literature, we found evidence that the evolution of cooperation most likely occurs in networks with relatively high chances of rewiring and with low likelihood of strategy adoption. While a low likelihood of rewiring enhances cooperation, a very high likelihood seems to limit its diffusion. Furthermore, unlike in unsigned networks, cooperation becomes more prevalent in denser topologies.
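The co-evolutionary mechanism can be sketched as follows (an illustrative toy version, not the published implementation): strategy "E" stands in for the sign-conditional, emotion-based strategy, and the payoff values, the sign-update rule and the rewiring and adoption probabilities are assumptions.

```python
# Condensed sketch: Prisoner's Dilemma agents on a signed network whose ties,
# signs and strategies co-evolve.
import random

N, T, R, P, S = 60, 5, 3, 1, 0     # payoff matrix: T > R > P > S
p_rewire, p_adopt = 0.3, 0.1
strategies = {i: random.choice(["C", "D", "E"]) for i in range(N)}
edges = {}                          # (i, j) with i < j  ->  sign (+1 / -1)
for i in range(N):
    for j in random.sample(range(N), 4):
        if i != j:
            edges[(min(i, j), max(i, j))] = random.choice([1, -1])

def action(i, sign):
    s = strategies[i]
    if s == "E":                    # emotional strategy: cooperate over positive ties only
        return "C" if sign > 0 else "D"
    return s

def payoff(a, b):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

for step in range(5000):
    (i, j), sign = random.choice(list(edges.items()))
    ai, aj = action(i, sign), action(j, sign)
    # a defection sours the tie, mutual cooperation repairs it
    edges[(i, j)] = -1 if "D" in (ai, aj) else 1
    if sign < 0 and random.random() < p_rewire:
        # rewiring away from a negative tie towards a fresh, positive one
        del edges[(i, j)]
        k = random.choice([x for x in range(N) if x != i])
        edges[(min(i, k), max(i, k))] = 1
    elif random.random() < p_adopt and payoff(aj, ai) > payoff(ai, aj):
        strategies[i] = strategies[j]   # adopt the strategy of the better-off partner

coop = sum(1 for s in strategies.values() if s in ("C", "E")) / N
print("share of (conditionally) cooperative agents:", round(coop, 2))
```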
An analytical treatment of a simple opinion model with contrarian behavior is presented. The focus is on the stationary dynamics of the model and in particular on the effect of inhomogeneities in the interaction topology on the stationary behavior. We start from a micro-level Markov chain description of the model. Markov chain aggregation is then used to derive a macro chain for the complete graph as well as a meso-level description for the two-community graph composed of two (weakly) coupled sub-communities. In both cases, a detailed understanding of the model behavior is possible using Markov chain tools. More importantly, however, this setting provides an analytical scenario to study the discrepancy between the homogeneous mixing case and the model on a slightly more complex topology. We show that memory effects are introduced at the macro-level when we aggregate over agent attributes without sensitivity to the microscopic details and quantify these effects using concepts from information theory. In this way, the method facilitates the analysis of the relation between microscopic processes and their aggregation to a macroscopic level of description and informs about the complexity of a system introduced by heterogeneous interaction relations.
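For the complete graph, the macro-level chain can be written down explicitly over a single variable k, the number of agents holding opinion 1. The sketch below builds this macro transition matrix for a simplified contrarian update rule (imitate a randomly chosen other agent, or do the opposite with probability p) and extracts its stationary distribution; the specific rule and the parameter values are assumptions, not necessarily those analyzed in the paper.

```python
# Macro-level Markov chain of a contrarian voter model on the complete graph:
# the macro state is k = number of agents holding opinion 1.
import numpy as np

N, p = 50, 0.02                     # agents, contrarian rate (assumed values)
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    # pick an agent uniformly, then another agent uniformly among the rest;
    # conform with probability 1 - p, anticonform with probability p
    up = (N - k) / N * ((1 - p) * k / (N - 1) + p * (N - k - 1) / (N - 1))
    down = k / N * ((1 - p) * (N - k) / (N - 1) + p * (k - 1) / (N - 1))
    P[k, min(k + 1, N)] += up
    P[k, max(k - 1, 0)] += down
    P[k, k] += 1 - up - down

# stationary distribution of the macro chain (left eigenvector for eigenvalue 1)
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()
print("stationary probability mass at the two consensus states:", pi[0] + pi[-1])
```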
For agent-based models, in particular the Voter Model (VM), a general framework of aggregation is developed which exploits the symmetries of the agent network. Depending on the symmetry group Aut_ω(N) of the weighted agent network, certain ensembles of agent configurations can be interchanged without affecting the dynamical properties of the VM. These configurations can be aggregated into the same macro state, and the dynamical process projected onto these states is, contrary to the general case, still a Markov chain. The method facilitates the analysis of the relation between microscopic processes and their aggregation to a macroscopic level of description and informs about the complexity that heterogeneous interaction relations introduce into a system. In some cases the macro chain is solvable.
Because the dynamics of complex systems results from both decisive local events and reinforced global effects, the prediction of such systems cannot do without a genuine multilevel approach. This paper proposes to found such an approach on information theory. Starting from a complete microscopic description of the system dynamics, we look for observables of the current state that allow us to efficiently predict future observables. Using the framework of the information bottleneck (IB) method, we relate optimality to two aspects: the complexity and the predictive capacity of the retained measurement. Then, with a focus on agent-based models (ABMs), we analyze the solution space of the resulting optimization problem in a generic fashion. We show that, when dealing with a class of feasible measurements that are consistent with the agent structure, this solution space has interesting algebraic properties that can be exploited to efficiently solve the problem. We then present results of this general framework for the voter model (VM) with several topologies and show that, especially when predicting the state of some sub-part of the system, multilevel measurements turn out to be the optimal predictors.
The largely dominant meritocratic paradigm of highly competitive Western cultures is rooted in the belief that success is due mainly, if not exclusively, to personal qualities such as talent, intelligence, skill, smartness, effort, willfulness, hard work or risk taking. Sometimes we are willing to admit that a certain degree of luck could also play a role in achieving significant success, but, as a matter of fact, it is rather common to underestimate the importance of external forces in individual success stories. It is very well known that intelligence (or, more generally, talent and personal qualities) exhibits a Gaussian distribution among the population, whereas the distribution of wealth, often considered a proxy of success, typically follows a power law (Pareto law), with a large majority of poor people and a very small number of billionaires. Such a discrepancy between a normal distribution of inputs, with a typical scale (the average talent or intelligence), and the scale-invariant distribution of outputs suggests that some hidden ingredient is at work behind the scenes. In this paper, we suggest that this ingredient is simply randomness. In particular, our simple agent-based model shows that, while it is true that some degree of talent is necessary to be successful in life, the most talented people almost never reach the highest peaks of success, being overtaken by individuals of average talent who are considerably luckier. As far as we know, this counterintuitive result, although implicitly suggested between the lines in a vast body of literature, is quantified here for the first time. It sheds new light on the effectiveness of assessing merit on the basis of the level of success reached, and it underlines the risks of distributing excessive honors or resources to people who, at the end of the day, could simply have been luckier than others. We also compare several policy hypotheses to show the most efficient strategies for the public funding of research, aiming to improve meritocracy, diversity of ideas and innovation.
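The core mechanism lends itself to a compact re-implementation. The sketch below follows the published description of the rules (Gaussian talent, lucky and unlucky events over 80 semestral steps), but the exact parameter values used here should be read as assumptions; with these settings the richest agent is usually not the most talented one, in line with the paper's main claim.

```python
# Schematic Talent-versus-Luck run: a lucky event doubles an agent's capital only if
# the agent is talented enough to exploit it, an unlucky event halves it.
import numpy as np

rng = np.random.default_rng(42)
N, steps = 1000, 80                                   # agents, semestral steps (40 years)
talent = np.clip(rng.normal(0.6, 0.1, N), 0, 1)       # Gaussian talent in [0, 1]
capital = np.full(N, 10.0)                            # equal initial capital
p_lucky, p_unlucky = 0.025, 0.025                     # assumed event probabilities

for _ in range(steps):
    r = rng.random(N)
    lucky = r < p_lucky
    unlucky = (r >= p_lucky) & (r < p_lucky + p_unlucky)
    exploited = lucky & (rng.random(N) < talent)      # talent gates the lucky payoff
    capital[exploited] *= 2.0
    capital[unlucky] /= 2.0

richest = np.argmax(capital)
print("most talented agent:", talent.argmax(), " talent =", round(talent.max(), 3))
print("richest agent:      ", richest, " talent =", round(talent[richest], 3),
      " capital =", round(capital[richest], 1))
```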
This paper further investigates the Talent versus Luck (TvL) model described by Pluchino et al. [Talent versus luck: The role of randomness in success and failure, Adv. Complex Syst. 21 (2018) 1850014], which models the relationship between 'talent' and 'luck' and their impact on an individual's career. It is shown that the model is very sensitive both to random sampling and to the choice of values for the input parameters: running the model repeatedly with the same set of input parameters gives a range of output values of over 50% of the mean value. The sensitivity of the model to its inputs is analyzed using a variance-based approach built on Sobol sequences of quasi-random numbers. When the model is used to look at the talent associated with the individual who holds the maximum capital over a model run, the choice of standard deviation for the talent distribution contributes 67% of the model variability. When investigating the maximum amount of capital returned by the model, the probability of a lucky event at any given epoch has the largest impact on the model, almost three times more than any other individual parameter. Consequently, when analyzing the model results one must keep in mind the impact that even small changes in the input parameters can have on the model output.
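A variance-based analysis of this type can be sketched with a Sobol design and the standard Saltelli/Jansen estimators, as below. The stripped-down stand-in for the TvL model, the parameter ranges and the sample size are assumptions, and the resulting index estimates are noisy because the toy model is itself stochastic; the paper's own implementation may differ.

```python
# Sketch of a variance-based (Sobol) sensitivity analysis of a simplified TvL run,
# using SciPy's quasi-Monte Carlo Sobol sampler and standard index estimators.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

def max_capital(talent_std, p_lucky, p_unlucky, N=500, steps=80):
    """Largest final capital of a simplified Talent-versus-Luck run."""
    talent = np.clip(rng.normal(0.6, talent_std, N), 0, 1)
    capital = np.full(N, 10.0)
    for _ in range(steps):
        r = rng.random(N)
        lucky = (r < p_lucky) & (rng.random(N) < talent)
        unlucky = (r >= p_lucky) & (r < p_lucky + p_unlucky)
        capital[lucky] *= 2.0
        capital[unlucky] /= 2.0
    return capital.max()

names = ["talent_std", "p_lucky", "p_unlucky"]
lo = np.array([0.05, 0.01, 0.01])
hi = np.array([0.20, 0.05, 0.05])
ndim, m = 3, 7                                 # 2**7 = 128 base sample points

# one 2*ndim-dimensional Sobol design split into the A and B matrices
sampler = qmc.Sobol(d=2 * ndim, scramble=True, seed=1)
AB = qmc.scale(sampler.random_base2(m), np.tile(lo, 2), np.tile(hi, 2))
A, B = AB[:, :ndim], AB[:, ndim:]

def run(X):
    return np.array([max_capital(*row) for row in X])

fA, fB = run(A), run(B)
var = np.var(np.concatenate([fA, fB]))
for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # A with column i taken from B
    fABi = run(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var       # first-order index (Saltelli estimator)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var # total index (Jansen estimator)
    print(f"{name:10s}  S1 ~= {S1:5.2f}   ST ~= {ST:5.2f}")
```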