
Informatics

Vol 20, No 1 (2023)
Full issue PDF available in Russian.

INFORMATION PROTECTION AND SYSTEM RELIABILITY 

7-26
Abstract

Objectives. The problem of constructing a new class of arbiter-type physically unclonable functions (APUF) is solved. The new class is based on the difference in delay times at the inputs of numerous modifications of the basic element, achieved both by increasing the number of inputs and by changing the topology of their connection. This approach makes it possible to build two-dimensional physically unclonable functions (2D-APUF) in which, unlike the classical APUF, the challenge generated for each basic element selects a pair of paths not from two possible paths but from a larger number of them. The relevance of this study is associated with the active development of physical cryptography. The work pursues the following goals: construction of the basic elements of the APUF and their modifications, and development of a methodology for constructing 2D-APUF.

Methods. Methods of synthesis and analysis of digital devices are used, including those based on programmable logic devices, together with the basics of Boolean algebra and circuit design.

Results. It is shown that the classical APUF uses a standard basic element performing two functions, namely the path-pair selection function Select and the path switching function Switch, whose joint use makes it possible to achieve high performance. First of all, this concerns the stability of APUF functioning, which is characterized by a small number of challenges for which the response randomly takes one of the two possible values, 0 or 1. Modifications of the basic element in terms of the implementation of its Select and Switch functions are proposed. New structures of the basic element are presented in which its implementation is modified, including an increase in the number of path pairs of the basic element from which one pair is selected by the challenge, and changes in the configuration of their switching. The use of various basic elements makes it possible to improve the main characteristics of APUF and to break the regularity of their structure, which has been the main reason for breaking APUFs through machine learning.

Conclusion. The proposed approach to the construction of physically unclonable 2D-APUF functions, based on the difference in signal delays through the basic element, has shown its efficiency and promise. The effect of improving the characteristics of such PUFs has been experimentally confirmed, with a noticeable improvement in the stability of their functioning. It seems promising to further develop the ideas of constructing two-dimensional physically unclonable functions of the arbiter type, and to experimentally study their characteristics and their resistance to various types of attacks, including attacks using machine learning.
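The delay-race principle behind the arbiter PUFs discussed above can be illustrated with a minimal simulation sketch. The stage model, Gaussian delays, and all function names here are illustrative assumptions, not the authors' implementation:

```python
import random

def make_apuf(n_stages, seed=0):
    """Create per-stage delay differences for a simulated classical APUF.

    Each stage contributes one of two delay differences depending on the
    challenge bit (straight vs. switched path configuration)."""
    rng = random.Random(seed)
    # (delta_straight, delta_switched) per stage, in arbitrary time units
    return [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n_stages)]

def response(stages, challenge):
    """Accumulate the delay difference along the selected paths; the arbiter
    outputs 1 if the top signal wins the race, otherwise 0."""
    diff = 0.0
    for (d_straight, d_switched), bit in zip(stages, challenge):
        if bit:
            # a switched stage swaps the lines, inverting the running lead
            diff = -diff + d_switched
        else:
            diff = diff + d_straight
    return 1 if diff > 0 else 0

puf = make_apuf(64)
rng = random.Random(42)
challenge = [rng.randint(0, 1) for _ in range(64)]
r = response(puf, challenge)
```

Because the simulated delays are fixed per device, the same challenge always yields the same response, while a device with a different seed behaves as a distinct physical instance.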

BIOINFORMATICS 

27-39
Abstract

Objectives. To estimate effect sizes in quasi-experimental studies.

Methods. Methods of the theory of estimation, methods of mathematical statistics.

Results. Estimation of the effect size on an ordinal scale; estimation of the effect size on a binary scale in the case of oppositely directed effects in groups, in quasi-experimental studies using the "differences in differences" analytical method.

Conclusion. The paper considers approaches to assessing absolute and standardized effect sizes in experimental and quasi-experimental studies. A brief review of the estimators of absolute and standardized effect sizes for quantitative and binary study variables is provided. An applied approach is proposed for assessing the effect sizes of a binary variable in the case of oppositely directed effects in groups within quasi-experimental studies using the "differences in differences" analytical method. An example of the assessment of absolute and standardized effect sizes of quantitative and binary variables in quasi-experimental studies in clinical epidemiology is considered.
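The two estimators mentioned in this abstract can be sketched in a few lines: a standardized effect size for a quantitative variable (Cohen's d with pooled standard deviation) and an absolute "differences in differences" effect for a binary variable expressed as proportions. The numbers are invented toy data, not the paper's example:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized effect size for two quantitative samples (pooled SD)."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def did_binary(p_treat_pre, p_treat_post, p_ctrl_pre, p_ctrl_post):
    """Differences-in-differences on proportions of a binary outcome:
    the treatment-group change minus the control-group change."""
    return (p_treat_post - p_treat_pre) - (p_ctrl_post - p_ctrl_pre)

# toy proportions of an adverse outcome before/after an intervention
effect = did_binary(0.30, 0.18, 0.29, 0.27)   # absolute effect: -0.10
d = cohens_d([5.1, 4.8, 5.5, 5.0], [4.2, 4.0, 4.5, 4.1])
```

Subtracting the control-group change removes the common time trend, which is the core idea of the quasi-experimental "differences in differences" design.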

INTELLIGENT SYSTEMS 

40-54
Abstract

Objectives. The main goal is to improve person re-identification accuracy in distributed video surveillance systems.

Methods. Machine learning methods are applied.

Results. A technology for two-stage training of convolutional neural networks (CNN) is presented. It is characterized by the use of image augmentation at the preliminary stage and fine-tuning of the weight coefficients on the original training image set. At the first stage, training is carried out on augmented data; at the second stage, fine-tuning of the CNN is performed on the original images, which minimizes the losses and increases model efficiency. The use of different data at the different training stages prevents the CNN from memorizing training examples, thereby avoiding overfitting.

The proposed method of expanding the training sample differs in that it combines a cyclic shift of image pixels, exclusion of a color channel, and replacement of an image fragment with a reduced copy of another image. This augmentation method provides a wide variety of training data, which increases the robustness of the CNN to occlusions, illumination changes, low image resolution, and dependence on the location of features.
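The three transforms named above could be combined roughly as follows. The fragment size, downscale factor, and function names are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def augment(img, other, rng):
    """Combine the three described transforms on an HxWx3 uint8 image:
    cyclic pixel shift, exclusion of one color channel, and replacement
    of a random fragment with a reduced copy of another image."""
    h, w, _ = img.shape
    # cyclic shift: pixels wrap around both axes
    out = np.roll(img, shift=(int(rng.integers(h)), int(rng.integers(w))),
                  axis=(0, 1))
    out[:, :, int(rng.integers(3))] = 0          # color channel exclusion
    fh, fw = h // 4, w // 4                      # fragment size (assumed)
    small = other[::4, ::4][:fh, :fw]            # naive downscale by striding
    y, x = int(rng.integers(h - fh)), int(rng.integers(w - fw))
    out[y:y + fh, x:x + fw] = small              # paste reduced copy
    return out

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
b = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
aug = augment(a, b, rng)
```

Each call produces a new random combination, so repeated epochs over the same originals see a steadily varying training stream.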

Conclusion. The use of the two-stage training technology and the proposed data augmentation method made it possible to increase person re-identification accuracy for different CNNs and datasets: by 4–21 % in the Rank1 metric, by 10–31 % in mAP, and by 39–60 % in mINP.

55-74
Abstract

Objectives. The studied problem of loan classification is particularly important for financial institutions, which must efficiently allocate monetary assets between entities as part of providing financial services. It is therefore more important than ever for financial institutions to be able to identify reliable borrowers as accurately as possible, and machine learning is one of the tools for making such decisions. The aim of this work is to analyze the possibility of the efficient use of logistic regression for solving the loan classification task.

Methods. Based on the logistic regression algorithm and historical data on issued loans, the following metrics are calculated: the cost function, Accuracy, Precision, Recall, and the F1 score. Polynomial regression and principal component analysis are used to determine the optimal set of input data for the logistic regression algorithm under study.

Results. The impact of data normalization on the final result is estimated; the optimal regularization parameter for this problem is determined; the impact of the balance of target values is assessed; the optimal decision threshold for the logistic regression algorithm is calculated; the effect of extending the input indicators by filling in missing values and using polynomials of different degrees is considered; and the existing set of input indicators is analyzed for redundancy.

Conclusion. The research results confirm that applying the logistic regression algorithm to loan classification problems is appropriate. The use of this algorithm makes it possible to quickly obtain a working loan classification tool.
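The pipeline described in this abstract (logistic regression plus Accuracy/Precision/Recall on loan data) can be sketched with plain batch gradient descent. The toy single-feature dataset, learning rate, and L2 term are illustrative assumptions:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000, l2=0.0):
    """Batch gradient descent for logistic regression with an optional
    L2 regularization parameter, as studied in the paper."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            e = p - yi                      # gradient of the cross-entropy cost
            for j in range(d):
                gw[j] += e * xi[j]
            gb += e
        w = [wj - lr * (gwj / n + l2 * wj) for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x, threshold=0.5):
    """Classify by comparing the predicted probability with a threshold."""
    p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))
    return 1 if p >= threshold else 0

# toy "loans": one normalized feature (e.g. income), label 1 = repaid
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(X, y)
preds = [predict(w, b, xi) for xi in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
tp = sum(p == 1 and t == 1 for p, t in zip(preds, y))
precision = tp / max(sum(preds), 1)
recall = tp / sum(y)
```

Varying `threshold` trades Precision against Recall, which is exactly the decision-boundary tuning the abstract refers to.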

LOGICAL DESIGN 

75-90
Abstract

Objectives. The problem of low-power state assignment for partial states of a parallel automaton is considered. The objective of the paper is to investigate the possibilities of using decomposition in the state assignment of partial states in order to decrease the dimension of the task.

Methods. The parallel automaton is decomposed into a net of sequential automata whose states are then encoded with ternary vectors. The assignment method searches for a maximal cut in a weighted graph that represents pairs of states connected by transitions. The edge weights of the graph are values related to the probabilities of transitions.

Results. A method for constructing a net of sequential automata that realizes a given parallel automaton is described. The probabilities of transitions between sets are calculated by solving a system of linear equations according to the Chapman–Kolmogorov method. The values of the inner variables assigned to the states of every component sequential automaton are obtained from two-block partitions of its set of states, determined by cuts of the corresponding transition graph.

Conclusion. Applying parallel automaton decomposition makes it possible to decrease the dimension of the laborious state assignment problem. The proposed method is intended for application in computer-aided design systems for discrete devices.
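The maximal-cut step described in this abstract can be sketched for a small transition graph by exhaustive search; each state's side of the cut then supplies one bit of its code. The states, weights, and exhaustive strategy are illustrative assumptions (a real tool would use a heuristic for larger graphs):

```python
from itertools import product

def max_cut(nodes, weights):
    """Exhaustive maximum cut of a small weighted transition graph.
    weights[(u, v)] reflects the transition probability between states u, v.
    Returns the crossing weight and the best two-block partition."""
    best, best_side = -1.0, None
    for bits in product([0, 1], repeat=len(nodes)):
        side = dict(zip(nodes, bits))
        w = sum(wt for (u, v), wt in weights.items() if side[u] != side[v])
        if w > best:
            best, best_side = w, side
    return best, best_side

# toy component automaton: a 4-state transition cycle with edge weights
# derived from transition probabilities
states = ["s0", "s1", "s2", "s3"]
weights = {("s0", "s1"): 0.4, ("s1", "s2"): 0.3,
           ("s2", "s3"): 0.2, ("s0", "s3"): 0.1}
value, side = max_cut(states, weights)
# side[s] gives one encoding bit per state; repeating the procedure on the
# induced subgraphs yields further bits of the two-block partitions
```

Putting the heaviest (most probable) transitions across the cut is what reduces switching activity, and hence power, in the encoded automaton.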

91-101
Abstract

Objectives. Currently, electronic control devices are increasingly being introduced into various household and industrial products. Microcontrollers in a wide variety of configurations are widely used as such devices. An alternative approach can be proposed in which a control device with a standard structure is synthesized from typical integrated circuits and implements a Boolean function describing the required control actions.

The purpose of the work is to investigate the possibility of implementing Boolean functions using devices with a standard structure whose design is based on a discrete automaton model.

Methods. The original Boolean function to be implemented is given in disjunctive normal form. A binary decision diagram (BDD), optimized by the number of vertices, is built for this function; on its basis, a transition graph of a synchronous Moore automaton with abstract states is formed. Then, after the state encoding step, input information for programming the matrix memory of the read-only memory (ROM) is generated from the automaton's transition table.

Results. A device implementing a Boolean function on the basis of an automaton model is synthesized from typical microcircuits. The main component is the ROM which, according to the standard structure of the device, is supplemented by a shift register, a state register, a flip-flop, and three selectors of the initial and the two final states.

Conclusion. The process of designing a device with a standard structure that implements a Boolean function thus comes down to programming the ROM matrix memory on the basis of an automaton transition table. The use of a reprogrammable ROM makes it possible to change the functionality of the device while keeping the previous circuit implementation. The disadvantage of such a device, as with devices implemented on the basis of microcontrollers, is its low speed; the advantage is the possibility of using it in various products and devices, primarily for household purposes, that do not require a high-speed response to changes of the input signal.
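The evaluation scheme described above (shift input bits in, follow ROM transitions until a final state) can be modeled with a dictionary standing in for the ROM transition table. The function, node names, and encoding are hypothetical, chosen only to show the principle:

```python
# BDD for f(x1, x2, x3) = x1*x2 + x3, stored as a ROM-like transition table.
# One node per level so that exactly one input bit is consumed per step;
# "0" and "1" are the two final states.
ROM = {
    ("n1", 0): "n2a", ("n1", 1): "n2b",   # level 1 tests x1
    ("n2a", 0): "n3", ("n2a", 1): "n3",   # x1=0: result is x3, x2 irrelevant
    ("n2b", 0): "n3", ("n2b", 1): "1",    # x1=1: x2=1 already gives f=1
    ("n3", 0): "0",  ("n3", 1): "1",      # level 3 tests x3
}

def evaluate(rom, start, bits):
    """Shift the input bits in one by one and follow the ROM transitions
    until one of the two final states ("0" or "1") is reached."""
    state = start
    for bit in bits:
        if state in ("0", "1"):
            break
        state = rom[(state, bit)]
    return int(state)

result = evaluate(ROM, "n1", [1, 0, 1])   # f(1, 0, 1) = 1
```

Reprogramming the dictionary (i.e. the ROM contents) changes the implemented function while the surrounding "circuit" stays the same, mirroring the reprogrammable-ROM advantage noted in the conclusion.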

INFORMATION TECHNOLOGY 

102-112
Abstract

Objectives. The problem of developing a method for calculating small-sized spectral features that increases the efficiency of existing machine learning systems for analyzing and classifying voice signals is solved.

Methods. Spectral features are extracted using a generative approach in which a discrete Fourier spectrum is calculated for a sequence of samples generated by an autoregressive model of the input voice signal. The generated sequence processed by the discrete Fourier transform accounts for the periodicity of the transform and thereby increases the accuracy of the spectral estimation of the analyzed signal.

Results. A generative method for calculating spectral features intended for use in machine learning systems for the analysis and classification of voice signals is proposed and described. An experimental analysis of the accuracy and stability of the spectrum representation of a test signal with a known spectral composition has been carried out using spectral envelopes. The envelopes were calculated using the proposed generative method and using the discrete Fourier transform with different analysis windows (a rectangular window and a Hann window). The analysis showed that the spectral envelopes obtained using the proposed method represent the spectrum of the test signal more accurately according to the minimum squared error criterion. A comparison of the effectiveness of voice signal classification with the proposed features and with features based on mel-frequency cepstral coefficients is carried out. A diagnostic system for amyotrophic lateral sclerosis was used as a basic test system to evaluate the effectiveness of the proposed approach in practice.

Conclusion. The obtained experimental results showed a significant increase in classification accuracy when using the proposed approach for calculating features, compared with features based on mel-frequency cepstral coefficients.
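The generative idea in this abstract (fit an autoregressive model to a frame, generate a sequence from it, and take the Fourier spectrum of the generated sequence) can be sketched as follows. The model order, sequence lengths, feature count, and least-squares fitting method are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def ar_spectrum_features(signal, order=8, n_gen=256, n_features=16):
    """Generative spectral sketch: fit an autoregressive (AR) model to the
    frame by least squares, generate a synthetic sequence from it, and take
    the magnitude of its discrete Fourier spectrum as a small feature vector."""
    x = np.asarray(signal, dtype=float)
    # least-squares system  x[t] = sum_k a[k] * x[t-k],  k = 1..order
    rows = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    # generate a sequence from the model, seeded with the first samples
    gen = list(x[:order])
    for _ in range(n_gen - order):
        gen.append(float(np.dot(a, gen[-order:][::-1])))
    spectrum = np.abs(np.fft.rfft(gen))
    return spectrum[:n_features]

# test signal with a known spectral composition: two sinusoids
t = np.arange(200)
frame = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.12 * t)
feats = ar_spectrum_features(frame)
```

Because the AR model extrapolates the frame smoothly, the generated sequence avoids the discontinuity at the frame boundary that causes spectral leakage in a plain windowed DFT.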

SCIENTISTS OF BELARUS 



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1816-0301 (Print)
ISSN 2617-6963 (Online)