Sitabhra Sinha is Professor of Theoretical Physics and Dean of Computational Biology at the Institute of Mathematical Sciences (IMSc), Chennai, and was earlier also adjunct faculty at the National Institute of Advanced Studies (NIAS), Bangalore, and the Department of Computer Science, Indian Institute of Technology, Kharagpur. He did his PhD at the Indian Statistical Institute, Kolkata, and postdoctoral research at the Indian Institute of Science, Bangalore, and the Weill Medical College of Cornell University, New York City, joining the faculty of IMSc in 2002. His research falls broadly under complex systems, nonlinear dynamics and statistical physics, with applications to systems biology, epidemiology, economic and social sciences, and computational linguistics.
Understanding how cooperation can emerge in a population whose individual
members are only interested in maximizing their personal well-being is one
of the fundamental problems in evolutionary biology and social sciences.
The ever-present temptation to defect (thereby avoiding the cost of
cooperating) while enjoying the benefits of the cooperative acts of
others appears to make it unlikely that cooperation will persist, even if
it somehow arises occasionally by chance. Yet cooperation occurs
widely in nature, ranging from communities of micro-organisms, cellular
aggregates and synthetic ecologies to primate societies.
The conventional theoretical approach to the problem, based on analysis of
games such as the Prisoner's Dilemma (PD), suggests that rational
individuals will not cooperate even in situations where mutual cooperation
may result in a better outcome for all. This incompatibility between
individual rationality and collective benefit lies at the heart of the
puzzle of evolution of cooperation, as illustrated by PD and similar games.
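The standard Nash argument can be made concrete with a small numerical sketch (the payoff values below are the conventional illustrative choices, not taken from the work described here): for any fixed choice by the opponent, defection yields a strictly higher payoff, so two Nash-rational players end up at mutual defection even though mutual cooperation pays both of them more.

```python
# Illustrative Prisoner's Dilemma payoffs for the row player,
# satisfying T > R > P > S: T=5 (temptation), R=3 (reward),
# P=1 (punishment), S=0 (sucker's payoff).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation: reward R
    ("C", "D"): 0,  # cooperate against a defector: sucker's payoff S
    ("D", "C"): 5,  # defect against a cooperator: temptation T
    ("D", "D"): 1,  # mutual defection: punishment P
}

def best_response(opponent_move):
    """Move that maximizes own payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, opponent_move)])

# Defection is a dominant strategy: it is the best response to either move,
# so (D, D) is the unique Nash equilibrium, even though both players would
# prefer (C, C), since PAYOFF[("C","C")] = 3 > PAYOFF[("D","D")] = 1.
print(best_response("C"), best_response("D"))  # -> D D
```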
We have recently shown that this apparent incompatibility is due to an
inconsistency in the standard Nash framework for analyzing non-cooperative
games and have proposed a new paradigm: that of the co-action equilibrium.
As in the Nash solution, agents know that others are just as rational as
they are; taking this into account leads them to realize that the others
will independently adopt the same strategy, in contrast to the idea of
unilateral deviation that is central to Nash equilibrium thinking. The co-action
equilibrium results in radically different collective outcomes (compared to
Nash) for games representing social dilemmas, with relatively "nicer"
strategies being chosen by rational selfish individuals. In particular, the
dilemma of PD gets resolved within this framework, suggesting that
cooperation can evolve in nature as the rational outcome even for selfish
agents, without recourse to additional mechanisms (such as
reciprocity or reputation) to promote it. When extended to an iterated
situation, we show that even in the absence of initial symmetry among
agents, their behavior can converge to cooperation as a result of repeated
interactions. In particular, the co-action solution for the iterated PD
between two players corresponds to a win-stay, lose-shift behavioral rule,
thereby providing a rational basis for this Pavlovian strategy.
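The symmetric logic of the co-action solution can be sketched numerically (a simplified illustration with the same toy payoffs as above, not the full treatment in the work described): since each rational agent concludes that the other will independently arrive at the same strategy, a player needs only to optimize the common probability of cooperating, and in the PD this selects full mutual cooperation.

```python
# Co-action reasoning in a symmetric 2-player PD with toy payoffs
# T=5 > R=3 > P=1 > S=0: both players are assumed to adopt the SAME
# mixed strategy p = Prob(cooperate), and each picks the p that
# maximizes the resulting (common) expected payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def coaction_choice(grid_size=100):
    """Common cooperation probability maximizing each player's
    expected payoff, searched on a uniform grid over [0, 1]."""
    def expected(p):
        return (p * p * PAYOFF[("C", "C")]
                + p * (1 - p) * PAYOFF[("C", "D")]
                + (1 - p) * p * PAYOFF[("D", "C")]
                + (1 - p) ** 2 * PAYOFF[("D", "D")])
    grid = [i / grid_size for i in range(grid_size + 1)]
    return max(grid, key=expected)

# For these payoffs the expected payoff -p**2 + 3p + 1 is increasing
# on [0, 1], so the co-action choice is full cooperation.
print(coaction_choice())  # -> 1.0
```

For other payoff values the co-action optimum can be an interior (mixed) strategy; the grid search above handles that case too.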
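The win-stay, lose-shift (Pavlov) rule mentioned above can be illustrated with a minimal iterated-PD simulation, assuming the conventional reading of the rule (repeat the last move after a "good" payoff, T or R, and switch after a "bad" one, P or S); the payoffs and the asymmetric starting state are illustrative choices, not from the original work.

```python
# Win-stay, lose-shift (Pavlov) in an iterated PD with toy payoffs
# T=5 > R=3 > P=1 > S=0, starting from an asymmetric state (C, D).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
GOOD = {3, 5}  # reward R and temptation T count as "winning"

def wsls(move, payoff):
    """Win-stay, lose-shift update for a single player."""
    if payoff in GOOD:
        return move                       # win -> stay
    return "D" if move == "C" else "C"    # lose -> shift

def play(m1, m2, rounds=6):
    """Iterate the PD, both players using the WSLS rule."""
    history = []
    for _ in range(rounds):
        history.append((m1, m2))
        p1, p2 = PAYOFF[(m1, m2)], PAYOFF[(m2, m1)]
        m1, m2 = wsls(m1, p1), wsls(m2, p2)
    return history

# Even from the asymmetric start (C, D), play passes through (D, D)
# and then locks into mutual cooperation (C, C) thereafter.
print(play("C", "D"))
# -> [('C','D'), ('D','D'), ('C','C'), ('C','C'), ('C','C'), ('C','C')]
```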