Markov strategy

In game theory, a Markov strategy{{Cite web |date=2017-02-06 |title=First Links in the Markov Chain |url=https://www.americanscientist.org/article/first-links-in-the-markov-chain |access-date=2017-02-06 |website=American Scientist |language=en}} is a strategy that depends only on the current state of the game, rather than the full history of past actions. The state summarizes all relevant past information needed for decision-making. For example, in a repeated game, the state could be the outcome of the most recent round or any summary statistic that captures the strategic situation or recent sequence of play.{{cite book |last=Fudenberg |first=Drew |title=Game Theory |publisher=The MIT Press |year=1995 |isbn=0-262-06141-4 |location=Cambridge, MA |pages=501–40}}
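The repeated-game example above can be sketched in code. Below is a minimal illustration (not from any cited source) using the iterated prisoner's dilemma, where the state is simply the opponent's action in the previous round; tit-for-tat is then a Markov strategy because its choice depends only on that state. All names and the round count are illustrative assumptions.

```python
# Minimal sketch of a Markov strategy in the repeated prisoner's dilemma.
# The "state" is the opponent's action in the previous round, so the
# strategy ignores all earlier history (names here are illustrative).

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(state):
    """Markov strategy: mirror the opponent's last action.
    `state` is the opponent's previous action (None in the first round)."""
    return COOPERATE if state is None else state

def always_defect(state):
    """Another (trivially) Markov strategy: ignore the state entirely."""
    return DEFECT

def play(strategy_a, strategy_b, rounds):
    """Simulate a repeated game; each player sees only the current state."""
    state_a = state_b = None  # opponent's last action is the only "memory"
    history = []
    for _ in range(rounds):
        a = strategy_a(state_a)
        b = strategy_b(state_b)
        history.append((a, b))
        state_a, state_b = b, a  # next state = opponent's current action
    return history

history = play(tit_for_tat, always_defect, 4)
# Tit-for-tat cooperates first, then mirrors the defector:
# [('C', 'D'), ('D', 'D'), ('D', 'D'), ('D', 'D')]
```

A history-dependent strategy, by contrast, would need the full list of past rounds as input; restricting attention to functions of the current state is exactly what makes a strategy "Markov".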

A profile of Markov strategies forms a Markov perfect equilibrium if it induces a Nash equilibrium of the continuation game starting from every possible state. Markov strategies are widely used in dynamic and stochastic games, where the state evolves over time according to probabilistic rules.

Although the concept is named after Andrey Markov due to its reliance on the Markov property{{Cite web |last=Sack |first=Harald |date=2022-06-14 |title=Andrey Markov and the Markov Chains |url=http://scihi.org/andrey-markov/ |access-date=2017-11-23 |website=SciHi Blog |language=en-US}}—the idea that only the current state matters—the strategy concept itself was developed much later in the context of dynamic game theory.

References

{{reflist}}

{{Game theory}}

{{DEFAULTSORT:Markov Strategy}}

[[Category:Strategy (game theory)]]

{{Gametheory-stub}}