This paper proposes an efficient method, based on reinforcement learning, to be used as a ship controller in fast-time simulators within restricted channels. The controller must operate the rudder in a realistic manner, in both timing and angle variation, so as to approximate human piloting. The method is well suited to scenarios where no previous navigation data is available; during training, it takes into account both the effect of environmental conditions and the presence of curves in the channel. We resort to an asynchronous distributed version of the reinforcement learning algorithm Deep Q Network (DQN), handling channel segments as separate episodes and including curvature information as context variables (thus departing from most work in the literature). We tested our proposal in the channel of Porto Sudeste, on the southern Brazilian coast, under realistic environment scenarios where wind and current incidence varies along the channel. The method keeps a simple representation and can be applied to any port channel configuration that respects local technical regulations.
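The core ideas in the abstract — treating each channel segment as a separate episode and appending the segment's curvature to the state as a context variable — can be sketched as follows. This is a minimal illustration only: the state layout, action set, toy dynamics, and the use of linear Q-learning in place of the paper's asynchronous distributed DQN are all assumptions for the sake of a runnable example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discretized rudder commands (degrees); the real action
# set and rates are constrained by realistic rudder operation.
ACTIONS = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])

# Assumed state: [cross-track error, heading error, yaw rate, curvature].
# Curvature is the *context variable*: fixed within a segment/episode.
STATE_DIM = 4

# Linear Q-function (one weight vector per action) as a stand-in
# for the paper's deep Q network.
W = np.zeros((len(ACTIONS), STATE_DIM))

def q_values(state):
    return W @ state

def step(state, a_idx):
    """Toy surrogate dynamics: the rudder steers heading error, curvature
    acts as a disturbance pushing the ship off track. Purely illustrative."""
    cte, hdg, yaw, curv = state
    hdg_new = np.clip(hdg + 0.05 * ACTIONS[a_idx] / 20.0 - 0.1 * curv, -1.0, 1.0)
    cte_new = np.clip(cte + 0.1 * hdg_new, -2.0, 2.0)
    reward = -(cte_new**2 + 0.1 * hdg_new**2)  # penalize deviation from track
    return np.array([cte_new, hdg_new, 0.0, curv]), reward

GAMMA, ALPHA, EPS = 0.95, 0.01, 0.1

def train_episode(curvature, steps=50):
    """One channel segment treated as an episode; curvature enters the
    state as context, so one policy covers straight and curved stretches."""
    s = np.array([rng.normal(0, 0.5), rng.normal(0, 0.2), 0.0, curvature])
    ret = 0.0
    for _ in range(steps):
        # epsilon-greedy action selection
        a = (int(rng.integers(len(ACTIONS))) if rng.random() < EPS
             else int(np.argmax(q_values(s))))
        s2, r = step(s, a)
        # semi-gradient Q-learning update on the chosen action's weights
        td = r + GAMMA * np.max(q_values(s2)) - q_values(s)[a]
        W[a] += ALPHA * td * s
        s, ret = s2, ret + r
    return ret

# Train over segments with different curvatures (straight, port, starboard).
for seg_curvature in [0.0, 0.3, -0.3]:
    for _ in range(200):
        train_episode(seg_curvature)
```

Because curvature is part of the state rather than baked into separate per-segment policies, the learned controller can generalize across channel geometries, which is what lets the method transfer to other port channel configurations.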
|Article number||9163360|
|Number of pages||15|
|Status||Published - 2020|
|Published externally||Yes|