
Handbook of Learning and Approximate Dynamic Programming by Jennie Si (English)

Description: FREE SHIPPING UK WIDE

Handbook of Learning and Approximate Dynamic Programming by Jennie Si, Andrew G. Barto, Warren B. Powell, and Don Wunsch

ADP, or approximate dynamic programming, has gone by many different names, including reinforcement learning (RL), adaptive critics (AC), and neuro-dynamic programming (NDP). The dynamic programming approach to decision and control problems involving nonlinear dynamic systems provides the optimal solution in any stochastic or uncertain environment.

FORMAT: Hardcover
LANGUAGE: English
CONDITION: Brand New

Publisher Description

A complete resource on approximate dynamic programming (ADP), including on-line simulation code:
- Provides a tutorial that readers can use to start implementing the learning algorithms provided in the book
- Includes ideas, directions, and recent results on current research issues, and addresses applications where ADP has been successfully implemented
- The contributors are leading researchers in the field

Back Cover

Approximate dynamic programming solves decision and control problems. While advances in science and engineering have enabled us to design and build complex systems, how to control and optimize them remains a challenge. This was made clear, for example, by the major power outage across dozens of cities in the Eastern United States and Canada in August 2003. Learning and approximate dynamic programming (ADP) is emerging as one of the most promising mathematical and computational approaches to solving nonlinear, large-scale, dynamic control problems under uncertainty. It draws heavily both on rigorous mathematics and on biological inspiration and parallels, and it helps unify new developments across many disciplines.

The foundations of learning and approximate dynamic programming have evolved from several fields: optimal control, artificial intelligence (reinforcement learning), operations research (dynamic programming), and stochastic approximation methods (neural networks). Applications of these methods span engineering, economics, business, and computer science. In this volume, leading experts in the field summarize the latest research in areas including:
- Reinforcement learning and its relationship to supervised learning
- Model-based adaptive critic designs
- Direct neural dynamic programming
- Hierarchical decision-making
- Multistage stochastic linear programming for resource allocation problems
- Concurrency, multiagency, and partial observability
- Backpropagation through time and derivative adaptive critics
- Applications of approximate dynamic programming and reinforcement learning in control-constrained agile missiles; power systems; heating, ventilation, and air conditioning; helicopter flight control; transportation; and more
Author Biography

JENNIE SI is Professor of Electrical Engineering at Arizona State University, Tempe, AZ. She is director of the Intelligent Systems Laboratory, which focuses on the analysis and design of learning and adaptive systems. In addition to her own publications, she is an Associate Editor for IEEE Transactions on Neural Networks, and a past Associate Editor for IEEE Transactions on Automatic Control and IEEE Transactions on Semiconductor Manufacturing. She was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming.

ANDREW G. BARTO is Professor of Computer Science at the University of Massachusetts, Amherst. He is co-director of the Autonomous Learning Laboratory, which carries out interdisciplinary research on machine learning and modeling of biological learning. He is a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts and was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. He currently serves as an associate editor of Neural Computation.

WARREN B. POWELL is Professor of Operations Research and Financial Engineering at Princeton University. He is director of CASTLE Laboratory, which focuses on real-time optimization of complex dynamic systems arising in transportation and logistics.

DONALD C. WUNSCH is the Mary K. Finley Missouri Distinguished Professor in the Electrical and Computer Engineering Department at the University of Missouri, Rolla. He heads the Applied Computational Intelligence Laboratory, holds a joint appointment in Computer Science, and is President-Elect of the International Neural Networks Society.

Table of Contents

Foreword
1. ADP: goals, opportunities and principles

Part I: Overview
2. Reinforcement learning and its relationship to supervised learning
3. Model-based adaptive critic designs
4. Guidance in the use of adaptive critics for control
5. Direct neural dynamic programming
6. The linear programming approach to approximate dynamic programming
7. Reinforcement learning in large, high-dimensional state spaces
8. Hierarchical decision making

Part II: Technical advances
9. Improved temporal difference methods with linear function approximation
10. Approximate dynamic programming for high-dimensional resource allocation problems
11. Hierarchical approaches to concurrency, multiagency, and partial observability
12. Learning and optimization - from a system theoretic perspective
13. Robust reinforcement learning using integral-quadratic constraints
14. Supervised actor-critic reinforcement learning
15. BPTT and DAC - a common framework for comparison

Part III: Applications
16. Near-optimal control via reinforcement learning
17. Multiobjective control problems by reinforcement learning
18. Adaptive critic based neural network for control-constrained agile missile
19. Applications of approximate dynamic programming in power systems control
20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings
21. Helicopter flight control using direct neural dynamic programming
22. Toward dynamic stochastic optimal power flow
23. Control, optimization, security, and self-healing of benchmark power systems

Review

"...highly recommended to researchers, graduate students, engineers, and scientists..." (E-STREAMS, February 2006)

"Clearly, this book is useful for researchers who do or want to do research on ADP." (IIE Transactions - Quality & Reliability Engineering, February 2006)

"...I would like to congratulate the editors, for putting together this wonderful collection of research contributions." (Computing Reviews.com, March 18, 2005)
Feature

A complete resource on approximate dynamic programming, including:
- A tutorial that readers can use to start implementing the learning algorithms provided in the book, with simulation code posted on-line
- Ideas, directions, and recent results on current research issues
- An interdisciplinary approach, with controls, operations research, machine learning, and neural networks researchers collaborating on the same topic from different perspectives
- The state of the art in ADP

Details

ISBN: 047166054X
Pages: 672
Series: IEEE Press Series on Computational Intelligence
Year: 2004
ISBN-10: 047166054X
ISBN-13: 9780471660545
Place of Publication: New York
Country of Publication: United States
DEWEY: 519.703
Format: Hardcover
Language: English
Edited by: Don Wunsch
Short Title: HANDBK OF LEARNING & APPROXIMA
Media: Book
Publisher: John Wiley & Sons Inc
Edition: 1st
DOI: 10.1604/9780471660545
Series Number: 2
UK Release Date: 2004-08-10
AU Release Date: 2004-07-19
NZ Release Date: 2004-07-19
Author: Don Wunsch
Publication Date: 2004-08-10
Imprint: Wiley-IEEE Press
Illustrations: Drawings: 40 B&W, 0 Color; Tables: 40 B&W, 0 Color
Audience: Professional & Vocational
US Release Date: 2004-08-10

We've got this

At The Nile, if you're looking for it, we've got it. With fast shipping, low prices, friendly service and well over a million items, you're bound to find what you want, at a price you'll love!

30 DAY RETURN POLICY: No questions asked, 30 day returns!
FREE DELIVERY: No matter where you are in the UK, delivery is free.
SECURE PAYMENT: Peace of mind by paying through PayPal and eBay Buyer Protection.

TheNile_Item_ID: 1296834

Price: 223.59 GBP

Location: London

End Time: 2024-11-11T22:11:53.000Z

Shipping Cost: 6.77 GBP

Product Images

Handbook of Learning and Approximate Dynamic Programming by Jennie Si (English)

Item Specifics

Return postage will be paid by: Buyer

Returns Accepted: Returns Accepted

After receiving the item, your buyer should cancel the purchase within: 30 days

Return policy details:

ISBN-13: 9780471660545

Book Title: Handbook of Learning and Approximate Dynamic Programming

Item Height: 247 mm

Item Width: 157 mm

Series: IEEE Press Series on Computational Intelligence

Author: Warren B. Powell, Don Wunsch, Andrew G. Barto, Jennie Si

Publication Name: Handbook of Learning and Approximate Dynamic Programming

Format: Hardcover

Language: English

Publisher: John Wiley & Sons Inc

Subject: Engineering & Technology, Computer Science

Publication Year: 2004

Type: Textbook

Item Weight: 1110 g

Number of Pages: 672

Recommended

Handbook of Mathematical Tables and Formulas
$5.40

The Beginner's Handbook of Woodcarving: With Project Patterns for Li - VERY GOOD
$4.48

Routledge Handbook of Surveillance Studies (Routledg...
$14.99

Handbook of Psychological Assessment - Hardcover By Groth-Marnat, Gary - GOOD
$3.97

Games People Play: The Basic Handbook of Transactional Analysis. - GOOD
$4.42

Handbook of Human Performance Technology: Improving Individual and...
$6.38

The Complete Handbook of Pro Basketball 1971-1972 Edition PB Illus - Jim O'B...
$39.99

A Handbook of Chinese Healing Herbs - Paperback By Reid, Daniel - VERY GOOD
$4.39

The Complete Handbook of Novel Writing - Hardcover By Leder, Meg - GOOD
$4.39

The Grief Recovery Handbook, 20th Anniversary Expanded Edition: The - VERY GOOD
$4.45