1. Hedderich M. A., Lange L., Adel H., Strötgen J., Klakow D. A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios, 2020. Available at: https://arxiv.org/abs/2010.12309 (accessed 12.10.2021).
2. Dai A. M., Le Q. V. Semi-supervised sequence learning. Proceedings of the 28th International Conference on Neural Information Processing Systems, 2015, vol. 2, pp. 3079-3087.
3. Anastasopoulos A., Cattelan A., Dou Z.-Y., Federico M., Federmann C., …, Tur S. TICO-19: the translation initiative for COVID-19. Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020. December 2020. Available at: https://aclanthology.org/2020.nlpcovid19-2.5 (accessed 12.10.2021). https://doi.org/10.18653/v1/2020.nlpcovid19-2.5
4. Spangher A., Peng N., May J., Ferrara E. Enabling low-resource transfer learning across COVID-19 corpora by combining event-extraction and co-training. Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020. July 2020. Available at: https://aclanthology.org/2020.nlpcovid19-acl.4 (accessed 12.10.2021).
5. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., …, Polosukhin I. Attention is all you need. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, USA, 4-9 December 2017. Long Beach, 2017, pp. 6000-6010.
6. Kachkou D. I. Language modeling and bidirectional coders representations: an overview of key technologies. Informatika [Informatics], 2020, vol. 17, no. 4, pp. 61−72 (In Russ.). https://doi.org/10.37661/1816-0301-2020-17-4-61-72
7. Baevski A., Edunov S., Liu Y., Zettlemoyer L., Auli M. Cloze-driven pretraining of self-attention networks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3-7 November 2019. Hong Kong, 2019, pp. 5360-5369. https://doi.org/10.18653/v1/D19-1539
8. Liu Y., Ott M., Goyal N., Du J., Joshi M., …, Stoyanov V. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019. Available at: https://arxiv.org/abs/1907.11692 (accessed 12.10.2021).
9. Zamyatin K., Pasanen A., Saarikivi Ya. Kak i zachem sohranyat’ yazyki narodov Rossii. How and Why to Save the Languages of the Peoples of Russia. Helsinki, 2012, 181 p. (In Russ.).
10. Meisel J. M. First and Second Language Acquisition (Cambridge Textbooks in Linguistics). Cambridge University Press, 2011, 318 p.
11. Clark E. V. First Language Acquisition. 2nd ed., Cambridge University Press, 2009, 490 p.
12. Luriya A. R. Yazyk i soznanie. Language and Consciousness. In Homskaya E. D. (ed.). Moscow, Izdatel'stvo Moskovskogo universiteta, 1979, 320 p. (In Russ.).
13. Burlak S. A. Proishozhdenie yazyka. Fakty, issledovaniya, gipotezy. The Origin of Language. Facts, Studies, Hypotheses. Moscow, Alpina digital, 2019, 609 p. (In Russ.).
14. Nemov R. S. Obshchaya psihologiya. General Psychology. Vol. 2, book 4. Rech’. Psihicheskie sostoyaniya: uchebnik i praktikum dlya akademicheskogo bakalavriata. Speech. Psychological States: Textbook and Workshop for Academic Bachelor Students. 6th ed., Moscow, Yurait, 2017, 243 p. (In Russ.).
15. Evans V. The Language Myth: Why Language Is Not an Instinct. Cambridge University Press, 2014, 314 p.
16. Peirce C. S. Collected Papers of Charles Sanders Peirce, Volumes I and II: Principles of Philosophy and Elements of Logic. Belknap Press, 1932, vol. II, 535 p.
17. Winograd T. Programma, ponimayushchaya estestvennyj yazyk. Understanding Natural Language. Moscow, Mir, 1976, 296 p. (In Russ.).
18. Antol S., Agrawal A., Lu J., Mitchell M., Batra D., …, Parikh D. VQA: visual question answering. IEEE International Conference on Computer Vision (ICCV). Santiago, Chile, 2015, pp. 2425-2433. https://doi.org/10.1109/ICCV.2015.279
19. Das A., Datta S., Gkioxari G., Lee S., Parikh D., Batra D. Embodied question answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18-23 June 2018. Salt Lake City, 2018, pp. 1-10.
20. Luketina J., Nardelli N., Farquhar G., Foerster J., Andreas J., …, Rocktäschel T. A survey of reinforcement learning informed by natural language. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10-16 August 2019. Macao, 2019, pp. 6309-6317. https://doi.org/10.24963/ijcai.2019/880
21. Janner M., Narasimhan K., Barzilay R. Representation learning for grounded spatial reasoning. Transactions of the Association for Computational Linguistics, 2018, vol. 6, pp. 49-61. https://doi.org/10.1162/tacl_a_00004
22. Côté M.-A., Kádár Á., Yuan X., Kybartas B., Barnes T., …, Trischler A. TextWorld: A learning environment for text-based games. Computer Games. CGW 2018. Communications in Computer and Information Science, Cham, Springer, 2018, vol. 1017, pp. 41-75. https://doi.org/10.1007/978-3-030-24337-1_3
23. Arora S., Doshi P. A survey of inverse reinforcement learning: Challenges, methods and progress. Artificial Intelligence, 2021, vol. 297. Available at: https://arxiv.org/abs/1806.06877 (accessed 12.10.2021). https://doi.org/10.1016/j.artint.2021.103500
24. Silver D., Hubert T., Schrittwieser J., Antonoglou I., Lai M., …, Hassabis D. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 2018, vol. 362, no. 6419, pp. 1140-1144. https://doi.org/10.1126/science.aar6404
25. Freudenthal D., Alishahi A. Computational models of language development. Encyclopedia of Language Development. In Brooks P. J., Kempe V. (eds.). 1st ed., SAGE Publications Inc., 2014, pp. 92-96.
26. Fazly A., Alishahi A., Stevenson S. A probabilistic computational model of cross‐situational word learning. Cognitive Science, 2010, vol. 34, iss. 6, pp. 1017-1063. https://doi.org/10.1111/j.1551-6709.2010.01104.x
27. Christiansen M. H., Chater N. Connectionist natural language processing: the state of the art. Cognitive Science, 1999, vol. 23, iss. 4, pp. 417-437. https://doi.org/10.1207/s15516709cog2304_2
28. Buttery P. J. Computational models for first language acquisition. Technical Report UCAM-CL-TR-675, University of Cambridge, 2006. Available at: https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-675.pdf (accessed 21.03.2021).
29. MacWhinney B. The CHILDES Project: Tools for Analyzing Talk: Transcription Format and Programs. 3rd ed., Lawrence Erlbaum Associates Publishers, 2000.
30. Jones G., Gobet F., Pine J. M. A process model of children’s early verb use. Proceedings of the 22nd Annual Conference of the Cognitive Science Society, Philadelphia, PA, 13-15 August 2000. Philadelphia, 2000, pp. 723-728.
31. Alishahi A. Computational Modeling of Human Language Acquisition. Morgan & Claypool, 2010, 107 p.
32. Andersen E. S., Dunlea A., Kekelis L. The impact of input: language acquisition in the visually impaired. First Language, 1993, vol. 13, no. 37, pp. 23-49. https://doi.org/10.1177/014272379301303703
33. Vlasov V., Mosig J. E. M., Nichol A. Dialogue Transformers, 2019. Available at: https://arxiv.org/abs/1910.00486 (accessed 12.10.2021).
34. Andreev A. V., Mitrofanova O. A., Sokolov K. V. Vvedenie v formal’nuyu semantiku: uchebnoe posobie. Introduction to Formal Semantics: Handbook. Saint-Petersburg, Saint-Petersburg State University, 2014, 88 p. (In Russ.).
35. Goddard C. The search for the shared semantic core of all languages. In Goddard C., Wierzbicka A. (eds.). Meaning and Universal Grammar - Theory and Empirical Findings. Amsterdam, John Benjamins, 2002, vol. I, pp. 5-40.
36. Barnes J. Evidentials in the Tuyuca verb. International Journal of American Linguistics, 1984, vol. 50, no. 3, pp. 255-271.