Hybrid learning in stochastic games and its applications in network security

by Q. Zhu, H. Tembine, T. Basar
Book chapter · Year: 2013

Bibliography

Q. Zhu, H. Tembine, and T. Basar. Hybrid learning in stochastic games and its applications in network security. In F. L. Lewis and D. Liu (Eds.), Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, Series on Computational Intelligence, IEEE Press/Wiley, 2013, chapter 14, pp. 305–329.

Abstract

We consider in this chapter a class of two-player nonzero-sum stochastic games with incomplete information, inspired by recent applications of game theory in network security. We develop fully distributed reinforcement learning algorithms that require each player to have only a minimal amount of information about the other player. At each time, each player can be in an active mode or in a sleep mode. If a player is in an active mode, she updates her strategy and her estimates of unknown quantities using a specific pure or hybrid learning pattern. The players’ intelligence and rationality are captured by a weighted linear combination of different learning patterns. We use stochastic approximation techniques to show that, under appropriate conditions, the pure or hybrid learning schemes with random updates can be studied through their deterministic ordinary differential equation (ODE) counterparts. Convergence to state-independent equilibria is analyzed for special classes of games, namely games with two actions and potential games. The results are applied to network security games between an intruder and an administrator, where the noncooperative behaviors are well characterized by the features of distributed hybrid learning.
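For intuition, the stochastic approximation argument mentioned in the abstract studies updates of the form x(t+1) = x(t) + mu(t)·[f(x(t)) + noise], with diminishing step sizes mu(t), whose long-run behavior tracks the deterministic ODE dx/dt = f(x). The sketch below is a minimal illustration of that idea, not the chapter's actual algorithm: a single learner with two actions is active with some probability each round, estimates per-action payoffs, and moves its mixed strategy toward a weighted linear combination of two learning targets (a Boltzmann-Gibbs softmax and a best reply). The payoff matrix A, the activity probability, the weight lam, and the temperature tau are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical payoff matrix for the learner (rows: own action, cols: opponent action).
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

    strategy = np.array([0.5, 0.5])  # mixed strategy over the two actions
    q = np.zeros(2)                  # running payoff estimates per action
    lam = 0.7                        # assumed weight blending the two learning patterns
    tau = 0.1                        # assumed softmax temperature

    for t in range(1, 5001):
        if rng.random() < 0.8:       # active mode with assumed probability 0.8; else sleep
            a = rng.choice(2, p=strategy)
            b = rng.choice(2)        # stand-in opponent playing uniformly at random
            r = A[a, b]              # realized payoff; only r is used, not b itself
            mu = 1.0 / t             # diminishing step size (stochastic approximation)
            q[a] += mu * (r - q[a])  # shared ingredient: per-action payoff estimation
            # Target 1: Boltzmann-Gibbs (softmax) response to the current estimates.
            bg = np.exp(q / tau)
            bg /= bg.sum()
            # Target 2: best reply to the current estimates (one-hot on the argmax).
            br = np.eye(2)[np.argmax(q)]
            # Hybrid update: move the strategy toward a weighted linear combination
            # of the two targets, then renormalize onto the simplex.
            target = lam * bg + (1.0 - lam) * br
            strategy = np.clip(strategy + mu * (target - strategy), 1e-9, None)
            strategy /= strategy.sum()

    print("learned strategy:", strategy)

With diminishing steps, the iterates hover around the trajectory of the mean-field ODE driven by the blended target, which is the kind of deterministic counterpart the chapter analyzes for convergence.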


ISBN: 9781118453988

Keywords

Strategic learning, Hybrid systems, Game theory, Network security