Extrapolating training of neural networks

Abstract

An approach to training neural networks is presented. Its essence is that the knowledge contained in one network is used to generalize input signals corresponding to classes unknown to that network, so that they can be learned by another neural network with a simpler architecture. The paper considers the possibility of using the output signal of a trained handwriting recognition system on images that are presented to it but are absent from its original training set of symbols. This training is performed in order to generalize the reaction to those unknown classes and then extrapolate it into the uniquely interpreted output of another system during that system's training. Just as a person, while studying, is able to perceive increasingly complex concepts and to learn new knowledge faster the more information has already been acquired, while retaining in memory what was learned earlier, the approach makes it possible to use the result of input signal generalization from an already trained system to acquire new knowledge in a shorter time. It also makes it possible to increase recognition accuracy without repeating the entire training cycle, and therefore without changing the knowledge previously acquired by the network. The presented approach can be used to optimize the training of recognition systems, to increase the accuracy of already trained systems, and to retrain them, or train them additionally on new classes, without the need to retrain on the original training set.
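The abstract does not spell out an implementation, so the following is only a minimal sketch of one plausible reading of the scheme, assuming it resembles response-based teacher-student transfer: a frozen, already trained "teacher" recognizer emits output signals for images of classes absent from its training set, and a simpler "student" network is trained to reproduce those generalized reactions. The names Student, extrapolating_train, and unknown_loader are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of extrapolating training as teacher-student transfer.
# Assumption: the "reaction" to unknown classes is the teacher's raw output
# vector, which the simpler student learns to reproduce without true labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Student(nn.Module):
    """A deliberately simpler architecture than the trained teacher."""
    def __init__(self, n_outputs: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 64),  # 28x28 grayscale symbols assumed
            nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, x):
        return self.net(x)

def extrapolating_train(teacher: nn.Module, student: nn.Module,
                        unknown_loader, epochs: int = 5):
    """Train `student` on the teacher's generalized outputs for images
    whose classes were absent from the teacher's original training set."""
    teacher.eval()  # the teacher's previously acquired knowledge is frozen
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(epochs):
        for images, _ in unknown_loader:  # true class labels are unused
            with torch.no_grad():
                target = teacher(images)  # generalized output signal
            loss = F.mse_loss(student(images), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Because the student learns from the teacher's outputs rather than from ground-truth labels, the teacher's weights stay untouched, which matches the abstract's claim that accuracy can be improved, and new classes absorbed, without repeating the entire original training cycle.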

About the Authors

Ya. A. Bury
Belarusian State University of Informatics and Radioelectronics, Minsk, Belarus
Assistant at the Department of Electronic Computing Machines


D. I. Samal
Belarusian State University of Informatics and Radioelectronics, Minsk, Belarus
Cand. Sci. (Eng.), Associate Professor at the Department of Electronic Computing Machines



For citations:

Bury Ya.A., Samal D.I. Extrapolating training of neural networks. Informatics. 2019;16(1):86–92. (In Russ.)

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1816-0301 (Print)
ISSN 2617-6963 (Online)