Abstract
Obstacle avoidance is a very challenging task for an autonomous underwater vehicle (AUV) exploring an unknown underwater environment. Successful control in such cases may be achieved using model-based classical control techniques such as PID and MPC, but these require an accurate mathematical model of the AUV and may fail due to parametric uncertainties, disturbances, or plant-model mismatch. On the other hand, a model-free reinforcement learning (RL) algorithm can be designed using the actual behavior of the AUV plant in an unknown environment, and the learned control is not affected by model uncertainties the way a classical control approach is. Unlike model-based control, a model-free RL-based controller does not need to be manually retuned as the environment changes. A standard one-step Q-learning based control can be utilized for obstacle avoidance, but it tends to explore all possible actions at a given state, which may increase the number of collisions. Hence, a modified Q-learning based control approach is proposed to deal with these problems in an unknown environment. Furthermore, function approximation with a neural network (NN) is employed to overcome the continuous-state and large state-space problems that arise in RL-based controller design. The proposed modified Q-learning algorithm is validated through MATLAB simulations by comparing it with the standard Q-learning algorithm for single-obstacle avoidance. The same algorithm is also applied to multiple-obstacle avoidance problems.
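For illustration only, the sketch below shows the standard one-step Q-learning update that the abstract contrasts with the proposed modified version, combined with a small neural-network function approximator over a continuous state. It is a minimal Python/NumPy sketch, not the authors' MATLAB implementation; the state and action dimensions, network size, hyperparameters, and epsilon-greedy exploration scheme are all assumptions.

```python
import numpy as np

# Minimal sketch of one-step Q-learning with a small neural-network
# function approximator. Assumed setup (not from the paper): a continuous
# state vector such as [x, y, heading, range-to-obstacle] and a small set
# of discretised steering/thrust commands.

STATE_DIM = 4      # assumed state dimension
N_ACTIONS = 5      # assumed number of discrete actions
HIDDEN = 32        # assumed hidden-layer width
ALPHA = 1e-3       # learning rate (assumed)
GAMMA = 0.95       # discount factor (assumed)
EPSILON = 0.1      # exploration rate; the modified scheme would restrict exploration

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(s):
    """Forward pass: approximate Q(s, a) for all actions."""
    h = np.tanh(s @ W1 + b1)
    return h @ W2 + b2, h

def select_action(s):
    """Epsilon-greedy action selection over the approximated Q-values."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    q, _ = q_values(s)
    return int(np.argmax(q))

def td_update(s, a, r, s_next, done):
    """One-step Q-learning target and a gradient step on the chosen action."""
    global W1, b1, W2, b2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r if done else r + GAMMA * np.max(q_next)
    td_error = target - q[a]
    # Gradient of 0.5 * td_error**2 w.r.t. the network weights,
    # treating the target as a constant and updating only the chosen action.
    grad_out = np.zeros(N_ACTIONS)
    grad_out[a] = -td_error
    grad_W2 = np.outer(h, grad_out)
    grad_b2 = grad_out
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)
    grad_W1 = np.outer(s, grad_h)
    grad_b1 = grad_h
    W2 -= ALPHA * grad_W2; b2 -= ALPHA * grad_b2
    W1 -= ALPHA * grad_W1; b1 -= ALPHA * grad_b1
    return td_error
```

In a full obstacle-avoidance controller, `td_update` would be called once per control step with a reward that, for example, penalizes collisions and rewards progress toward the goal; the modified algorithm in the paper differs from this standard update mainly in how exploration is handled to reduce collisions during learning.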
Article Source
Type: Journal article
Authors: Prashant Bhopale, Faruk Kazi, Navdeep Singh
Source: Journal of Marine Science and Application, 2019, Issue 02
Year: 2019
Category: Engineering Science and Technology II
Specialty: Shipbuilding industry
Affiliation: Electrical Engineering Department, Veermata Jijabai Technological Institute
Funding: Supported by the Centre of Excellence (CoE) in Complex and Nonlinear Dynamical Systems (CNDS), through TEQIP-II, VJTI, Mumbai, India
Classification number: U674.941
Pages: 228-238
Total pages: 11
File size: 2542K
Downloads: 75