D**N
An excellent introduction
As a subfield of artificial intelligence, reinforcement learning has shown great success from both a theoretical and a practical viewpoint. Given its numerous applications in finance, network engineering, robot toys, and games, it is clear that this learning paradigm shows even greater promise for future developments. The authors summarize the foundations of reinforcement learning, some of which come from their own work over the last decade.

The authors define reinforcement learning as learning how to map situations to actions so as to maximize a numerical reward. The machine engaged in reinforcement learning discovers on its own which actions optimize the reward by trying them out. It is this ability to learn from experience that distinguishes it from a machine engaged in supervised learning, for in the latter labeled examples are needed to guide the machine to the proper concept or knowledge. The authors emphasize the "exploration-exploitation" tradeoff that reinforcement-learning machines have to deal with as they interact with the environment.

For the authors, a reinforcement learning system consists of a `policy', a `reward function', a `value function', and a `model' of the environment. A policy is a mapping from the states of the environment perceived by the machine to the actions to be taken in those states. The reward function maps each perceived state of the environment to a number (the reward). A value function specifies what is good for the machine over the long run. A model, as the name implies, is a representation of the behavior of the environment. The authors emphasize that all of the reinforcement learning methods discussed in the book are concerned with the estimation of value functions, but they point out that other techniques are available for solving reinforcement learning problems, such as genetic algorithms and simulated annealing.

The authors use dynamic programming, Monte Carlo simulation, and temporal-difference learning to solve the reinforcement learning problem, but they emphasize that none of these methods is a free lunch. An entire chapter is devoted to each of them, however, giving the reader a good overview of the weaknesses and strengths of each approach. The differences between them usually boil down to issues of performance rather than accuracy in the generated solutions. Temporal-difference learning, in fact, is viewed in the book as a combination of Monte Carlo and dynamic programming techniques and, in the opinion of this reviewer, has produced some of the most impressive successes for applications based on reinforcement learning. One of these is TD-Gammon, developed to play backgammon, which is also discussed in the book.

The authors emphasize that these three main strategies for solving reinforcement learning problems are not mutually exclusive. Instead, each of them can be used alongside the others, and they devote a few chapters of the book to illustrating how this "unified" approach can be advantageous for reinforcement learning problems. They do this with explicit algorithms and not just philosophical discussion.
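To make the value-function idea concrete, here is a minimal TD(0) sketch on a toy five-state random-walk chain. This is my own illustration, not code from the book; the state layout, step size, and reward scheme are assumptions. After each transition, the value estimate of the previous state is nudged toward a bootstrapped target built from the observed reward and the estimate of the next state.

```python
import random

# Toy random-walk chain (illustrative assumption): states 0..4,
# episodes start in state 2 and terminate off either end.
# Reward is +1 for exiting on the right, 0 otherwise.
N_STATES = 5
ALPHA, GAMMA = 0.1, 1.0

V = [0.5] * N_STATES  # value estimates for the non-terminal states

def step(s):
    """Take a random step left or right; return (next_state, reward, done)."""
    s_next = s + random.choice([-1, 1])
    if s_next < 0:
        return None, 0.0, True   # exited on the left
    if s_next >= N_STATES:
        return None, 1.0, True   # exited on the right
    return s_next, 0.0, False

for episode in range(1000):
    s, done = 2, False
    while not done:
        s_next, r, done = step(s)
        target = r if done else r + GAMMA * V[s_next]
        V[s] += ALPHA * (target - V[s])  # TD(0): move estimate toward bootstrapped target
        s = s_next

print(V)  # estimates drift toward roughly [1/6, 2/6, 3/6, 4/6, 5/6]
```

This is the kind of value-function estimation all three method families aim at; a Monte Carlo variant would instead wait until the episode ends and use the full observed return as the target.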
These discussions of the different methods are very interesting and illustrate beautifully the idea that there is no "free lunch" among the various algorithms involved in reinforcement learning.

In the last chapter of the book the authors survey some of the more successful applications of reinforcement learning, one of them already mentioned. Another is the `acrobot', a two-link, underactuated robot that models to some extent the motion of a gymnast on a high bar. The acrobot is controlled with the goal of swinging its tip above the first joint, with appropriate rewards given until this goal is reached. The authors use the `Sarsa' learning algorithm, developed earlier in the book, to solve this reinforcement learning problem. The acrobot is an example of the current intense interest in machine learning of physical motion and in intelligent control theory.

Another example discussed in this chapter deals with the problem of elevator dispatching, which the authors include as an example of a problem that cannot be dealt with efficiently by dynamic programming. This problem is studied with Q-learning and a neural network trained by backpropagation.

The authors also treat a problem of great importance in the cellular phone industry, namely dynamic channel allocation. This problem is formulated as a semi-Markov decision problem, and reinforcement learning techniques are used to minimize the probability of blocking a call. Reinforcement learning has become very important of late in the communications industry, as well as in queuing networks.
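Since the review mentions Sarsa for the acrobot, here is a minimal tabular, epsilon-greedy Sarsa sketch. The acrobot itself needs a physics simulator and function approximation, so the `env_reset`/`env_step` hooks and all constants below are hypothetical stand-ins; only the shape of the on-policy update follows the standard algorithm.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = [0, 1, 2]       # e.g. torque in {-1, 0, +1}, as in an acrobot-style task

Q = defaultdict(float)    # Q[(state, action)] action-value estimates

def epsilon_greedy(state):
    """Pick a random action with probability EPSILON, otherwise the greedy one."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_episode(env_reset, env_step):
    """One episode of tabular Sarsa against hypothetical env_reset()/env_step(s, a) hooks."""
    s = env_reset()
    a = epsilon_greedy(s)
    done = False
    while not done:
        s_next, r, done = env_step(s, a)
        a_next = epsilon_greedy(s_next) if not done else None
        # Sarsa target uses the action actually chosen next (on-policy),
        # unlike Q-learning, which would use the greedy action instead.
        target = r if done else r + GAMMA * Q[(s_next, a_next)]
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s, a = s_next, a_next
```

Swapping the target to `max(Q[(s_next, a)] for a in ACTIONS)` would turn this into the Q-learning update the review mentions for the elevator-dispatching example.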
J**.
The book.
Its "The Reinforcement Learning Book"! But seriously it is super useful for understanding classic forms of control.
A**R
Good book. Good store.
It's a very good book and the sellers are very nice. They answered all the questions I had and tried to resolve all my concerns. I really appreciate their considerate service. The book's quality is also very good. It's the best book on reinforcement learning. I like it so much!
M**G
a must-read CS book
The basic reference to an important area of computer science. A must-read for all undergraduates and those new to CS or AI.
B**G
This book has all the "whats", all the "whys" and all the "hows"!
I have taken many online courses about supervised learning, but study material for RL is severely lacking, let alone high-quality material that lets you follow and learn this topic in a systematic way. This book is the solution: you can learn not only the nitty-gritty details of the mathematical justification, all the "whys", but also the "hows"; the pseudocode is the part I enjoy the most. Shangtong Zhang, who helped replicate all the experiments, has ALL, I mean ALL, the experiments implemented in Python, which you can easily find on GitHub. As of this post, Sutton also has the complete draft of Nov 5, 2017 publicly available online, which integrates much of the recent progress like deep learning, AlphaGo, etc. You can easily spend hundreds of hours swimming in the details if you want to; thanks to this book, you can also use it as a reference.
A**R
This is a very readable introduction to reinforcement learning, ...
This is a very readable introduction to reinforcement learning, and it spends a lot of time going over examples to give you an intuitive feel for what's going on. Anybody who wants to get into this field should start here.
R**S
It's the first edition (no matter what the page says)
I knew the photo of the book cover was from the first edition, but the page said "second edition", so I decided to order one, hoping to get the second edition. Naturally, the first edition was delivered, and I'm returning it.
M**N
Very good introduction
Very good introduction and answers the question of "how do I actually represent error in an online learning system where I do not know what the right answer is?"
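The book's answer to that question is the temporal-difference error: the estimate of the next state's value stands in for the unknown "right answer." A minimal sketch of that idea (the function name, discount factor, and numbers below are mine, not the book's):

```python
GAMMA = 0.9  # assumed discount factor for illustration

def td_error(reward, v_current, v_next, terminal=False):
    """One-step TD error: how far the current estimate is from the
    bootstrapped target r + gamma * V(s'), with no true label needed."""
    target = reward if terminal else reward + GAMMA * v_next
    return target - v_current

# Example: current estimate is 0.4, we observe reward 0 and land in a state valued 0.7
delta = td_error(0.0, 0.4, 0.7)   # 0.9 * 0.7 - 0.4 = 0.23
print(delta)
```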