
Informatics


Development of an imitation learning method for a neural network system of mobile robot’s movement on example of the maze solving

https://doi.org/10.37661/1816-0301-2024-21-3-48-62

Abstract

Objectives. To develop a new method for training a mobile robot control system to execute a maze-solving algorithm, based on reinforcement learning and the right-hand algorithm.
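For orientation, the right-hand algorithm referred to here is the classic wall-follower rule: keep a hand on the right wall, turning right when possible, otherwise going straight, then left, and turning around only in a dead end. Below is a minimal sketch in MATLAB; the grid-maze encoding, indexing, and all function names are illustrative assumptions, not the authors' implementation.

```matlab
% A minimal sketch of the right-hand (wall-follower) rule on a grid maze.
% The maze encoding, indexing, and function names are illustrative
% assumptions; a closed outer wall is assumed so indexing stays in bounds.
function [pos, heading] = rightHandStep(maze, pos, heading)
    % maze    - logical matrix, true where a cell is a wall
    % pos     - [row, col] cell occupied by the agent
    % heading - 1..4 for north, east, south, west
    right = mod(heading, 4) + 1;            % direction to the agent's right
    left  = mod(heading + 2, 4) + 1;        % direction to the agent's left
    if ~isWall(maze, pos, right)
        heading = right;                    % right-hand side free: turn right
    elseif ~isWall(maze, pos, heading)
        % straight ahead is free: keep the current heading
    elseif ~isWall(maze, pos, left)
        heading = left;                     % right and front blocked: turn left
    else
        heading = mod(heading + 1, 4) + 1;  % dead end: turn around
    end
    pos = pos + cellStep(heading);          % advance one cell
end

function blocked = isWall(maze, pos, dir)
    next = pos + cellStep(dir);             % neighbouring cell in direction dir
    blocked = maze(next(1), next(2));
end

function d = cellStep(dir)
    offsets = [-1 0; 0 1; 1 0; 0 -1];       % row/col steps for N, E, S, W
    d = offsets(dir, :);
end
```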

Methods. The work uses computer modeling in the MATLAB/Simulink environment.

Results. A new method is proposed for training a mobile robot control system capable of implementing the right-hand algorithm for finding an exit from a maze. The method is based on two interacting agents: the first directly implements the search algorithm and looks for an exit from the maze, while the second follows it and learns by imitation. The expert agent, implementing a discrete algorithm for moving through the maze, makes precise discrete steps and moves almost independently of the second agent; its only limitation is its speed, which is directly proportional to the distance between the agents. The second, student agent tries to reduce the distance to the first agent by trial and error. Learning was implemented with the reinforcement learning method used in imitation mode, for which a corresponding reward function was developed that keeps the robot's center of mass in the center of the corridor and, when necessary, turns the robot to follow the expert agent. The agents move around a virtual test ground consisting of branching corridors wide enough to allow various movement maneuvers.
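As a sketch of how such a reward might be composed, the following MATLAB fragment combines a centering penalty with a gap-closing bonus. The weights, signal names, and the exact form of the terms are assumptions for illustration; the paper's published reward function is not reproduced here.

```matlab
% A minimal sketch of the reward shaping described above. The weights,
% signal names, and the gap-closing form are assumptions for illustration,
% not the reward function published in the paper.
function r = studentReward(latOffset, distExpert, distExpertPrev)
    % latOffset      - offset of the robot's center of mass from the
    %                  corridor centerline, m
    % distExpert     - current distance to the expert agent, m
    % distExpertPrev - the same distance on the previous control step, m
    wCenter = 1.0;                          % assumed centering weight
    wFollow = 2.0;                          % assumed following weight
    r = -wCenter * latOffset^2 ...          % keep the center of mass centered
        + wFollow * (distExpertPrev - distExpert);  % reward closing the gap
end
```

Read literally, the expert's speed limitation described above could then be modeled as vExpert = kSpeed * distExpert, with kSpeed an assumed gain, coupling the expert's pace to the student's progress.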

Conclusion. It was shown that, thanks to the proposed imitation learning method, the student agent is able not only to adopt the required behavior patterns from the expert agent (searching for an exit from a previously unknown maze using the right-hand algorithm), but also to independently acquire new ones (changing speed on turns, bypassing small dead-end corridors) that positively influence the performance of the assigned task.

About the Authors

T. Yu. Kim
The United Institute of Informatics Problems of the National Academy of Sciences of Belarus
Belarus

Tatyana Yu. Kim, Junior Researcher, Laboratory of Robotic Systems No. 116

6 Surganova St., Minsk 220012



R. A. Prakapovich
The United Institute of Informatics Problems of the National Academy of Sciences of Belarus
Belarus

Ryhor A. Prakapovich, Ph. D. (Eng.), Assoc. Prof.

6 Surganova St., Minsk 220012



References

1. Mustafa K. A. A., Botteghi N., Sirmacek B., Poel M., Stramigioli S. Towards continuous control for mobile robot navigation: A reinforcement learning and SLAM based approach. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2019, vol. 42, pp. 857–863. https://doi.org/10.5194/isprs-archives-XLII-2-W13-857-2019

2. Truong X. T., Ngo T. D. Toward socially aware robot navigation in dynamic and crowded environments: A proactive social motion model. IEEE Transactions on Automation Science and Engineering, 2017, vol. 14, no. 4, pp. 1743–1760. https://doi.org/10.1109/TASE.2017.2731371

3. Mnih V., Kavukcuoglu K., Silver D., Graves A., Antonoglou I., …, Riedmiller M. Playing Atari with Deep Reinforcement Learning, 2013. Available at: https://doi.org/10.48550/arXiv.1312.5602 (accessed 20.06.2024).

4. Silver D., Huang A., Maddison C. J., Guez A., Sifre L., …, Hassabis D. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, vol. 529, no. 7587, pp. 484–489.

5. Andrychowicz M., Baker B., Chociej M., Józefowicz R., McGrew B., …, Zaremba W. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 2020, vol. 39, no. 1, pp. 3–20. https://doi.org/10.1177/0278364919887447

6. Heess N., Dhruva T. B., Sriram S., Lemmon J., Merel J., …, Silver D. Emergence of Locomotion Behaviours in Rich Environments, 2017. Available at: https://doi.org/10.48550/arXiv.1707.02286 (accessed 20.06.2024).

7. Brummelen J. V., O'Brien M., Gruyer D., Najjaran H. Autonomous vehicle perception: The technology of today and tomorrow. Transportation Research Part C: Emerging Technologies, 2018, no. 86, pp. 384–406. https://doi.org/10.1016/j.trc.2018.02.012

8. Huang W., Braghin F., Wang Z. Learning to drive via Apprenticeship Learning and Deep Reinforcement Learning, 2020, pp. 1–7. Available at: https://doi.org/10.48550/arXiv.2001.03864 (accessed 20.06.2024).

9. Nageshrao S., Rahman Y., Ivanovic V., Jankovic M., Tseng E., …, Filev D. Robust AI driving strategy for autonomous vehicles. AI-enabled Technologies for Autonomous and Connected Vehicles. Springer, 2022, pp. 161–212.

10. Yeong D. J., Velasco-Hernandez G., Barry J., Walsh J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors, 2021, vol. 21, iss. 6, p. 2140. https://doi.org/10.3390/s21062140

11. Kweon J., Kim K., Lee Ch. Deep reinforcement learning for guidewire navigation in coronary artery phantom. IEEE Access, 2021, vol. 9, pp. 166409–166422. https://doi.org/10.1109/ACCESS.2021.3135277

12. Osa T., Pajarinen J., Neumann G., Bagnell J. A., Abbeel P., Peters J. An Algorithmic Perspective on Imitation Learning. Boston, Now Publishers Inc., 2018, 188 p.

13. Lonza A. Reinforcement Learning Algorithms with Python. Packt Publishing, 2019, 366 p.

14. Chella A. Imitation learning and anchoring through conceptual spaces. Applied Artificial Intelligence, 2007, vol. 21, pp. 343–359.

15. Kim T., Prakapovich R. Automatic tuning of the motion control system of a mobile robot along a trajectory based on the reinforcement learning method. Communications in Computer and Information Science. Springer, Cham, 2022, vol. 1562, pp. 234–244. https://doi.org/10.1007/978-3-030-98883-8_17

16. Sutton R. S., Barto A. G. Reinforcement Learning: An Introduction, 2nd edition. London, England, The MIT Press, 2014, 352 p.

17. Watkins C. J. C. H., Dayan P. Q-learning. Machine Learning, 1992, vol. 8, iss. 3–4, pp. 279–292.

18. Duan J. M., Chen Q. L. Prior knowledge based Q-learning path planning algorithm. Electronics Optics & Control, 2019, vol. 26, iss. 9, pp. 29–33.

19. Sutton R. S., Barto A. G. Reinforcement Learning: An Introduction, 2nd edition. London, England, The MIT Press, 2014, 338 p.

20. Rossi F., Nardelli M., Cardellini V. Horizontal and vertical scaling of container-based applications using reinforcement learning. 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), Milan, Italy, 8–13 July 2019. Milan, 2019, pp. 329–338. https://doi.org/10.1109/CLOUD.2019.00061

21. Strehl A. L., Li L., Wiewiora E., Langford J., Littman M. L. PAC model-free reinforcement learning. ICML'06: Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, Pennsylvania, USA, 25–29 June 2006. Pittsburgh, 2006, pp. 881–888. https://doi.org/10.1145/1143844.114395

22. Ravichandiran S. Deep Reinforcement Learning with Python, 2nd edition. Packt Publishing, 2020, 760 p.

23. Yu Ch., Ren G. Supervised-actor-critic reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units. BMC Medical Informatics and Decision Making, 2020, no. 20 (S3), pp. 1–8. https://doi.org/10.1186/s12911-020-1120-5

24. Zheng B., Verma S., Zhou J., Tsang I., Chen F. Imitation learning: progress, taxonomies and challenges. IEEE Transactions on Neural Networks and Learning Systems, 2022, pp. 1–22. Available at: https://arxiv.org/abs/2106.12177 (accessed 20.06.2024).

25. Kim T. Yu., Prakapovich R. A., Lobatiy A. A. Forced motion control of a mobile robot. Informatika [Informatics], 2022, vol. 19, no. 3, pp. 86–100 (In Russ.). https://doi.org/10.37661/1816-0301-2022-19-3-86-100



For citations:


Kim T.Yu., Prakapovich R.A. Development of an imitation learning method for a neural network system of mobile robot’s movement on example of the maze solving. Informatics. 2024;21(3):48-62. (In Russ.) https://doi.org/10.37661/1816-0301-2024-21-3-48-62



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1816-0301 (Print)
ISSN 2617-6963 (Online)