Informatics

Vol 18, No 1 (2021)

LOGICAL DESIGN 

7-24
Abstract

The paper presents the results of research into the efficiency of applying programs for minimizing functional descriptions of combinational logic blocks included in digital device designs implemented on FPGAs. The programs are intended for joint and separate minimization of systems of functions in the class of disjunctive normal forms (DNF) and for minimization of multilevel representations of completely specified Boolean functions based on Shannon expansion with the detection of equal and inverse cofactors. The graphical form of such representations is widely known as binary decision diagrams (BDD). For technology mapping, a program for "enlarging" the obtained Shannon expansion formulas was applied so that each formula depends on a limited number k of input variables and can be implemented on a single LUT-k, a programmable FPGA unit with k input variables. It is shown that preliminary logic minimization performed by the domestic programs improves the design results of foreign CAD systems such as Leonardo Spectrum (Mentor Graphics), ISE (Integrated System Environment) Design Suite and Vivado (Xilinx). The experiments were performed for the Virtex-II PRO, Virtex-5 and Artix-7 FPGA families (Xilinx) on standard sets of industrial benchmark examples that define both DNF systems of Boolean functions and systems represented as interconnected logical equations.
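
For orientation only: the Shannon expansion f = ¬x·f0 ∨ x·f1 splits a function into its negative and positive cofactors f0 and f1, and detecting equal or inverse cofactors is what lets the expansion tree share or invert subfunctions. A minimal Python sketch under these textbook definitions (the helper names are invented for illustration and are not the paper's programs):

    # Shannon expansion: f = ~x_i & f0  |  x_i & f1,
    # where f0/f1 fix variable x_i to 0/1 (negative/positive cofactor).
    from itertools import product

    def cofactors(truth_table, i):
        """Split a truth table (dict: input tuple -> 0/1) into the two
        cofactors obtained by fixing variable i to 0 and to 1."""
        f0, f1 = {}, {}
        for bits, val in truth_table.items():
            rest = bits[:i] + bits[i + 1:]
            (f0 if bits[i] == 0 else f1)[rest] = val
        return f0, f1

    def classify(f0, f1):
        """Detect equal or inverse cofactors, the cases that allow
        sharing a subfunction or reusing it through an inverter."""
        if f0 == f1:
            return "equal"
        if all(f0[k] != f1[k] for k in f0):
            return "inverse"
        return "distinct"

    # Example: f(x0, x1) = x0 XOR x1 has inverse cofactors w.r.t. x0.
    f = {bits: bits[0] ^ bits[1] for bits in product((0, 1), repeat=2)}
    print(classify(*cofactors(f, 0)))  # -> "inverse"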

25-42
Abstract

The urgency of the problem of testing the storage devices of modern computer systems is shown. Mathematical models of their faults and the methods used to test the most complex cases with classical march tests are investigated. Passive neighborhood pattern sensitive faults (PNPSFk) are singled out, in which arbitrary k out of N memory cells participate, where k ≪ N and N is the memory capacity in bits. For these faults, analytical expressions are given for the minimum and maximum fault coverage achievable within march tests. The concept of a primitive is defined, which describes, in terms of march test elements, the conditions for activating and detecting PNPSFk faults of storage devices. Examples of march tests with maximum fault coverage, as well as march tests with the minimum time complexity of 18N, are given. The efficiency of a single application of tests such as MATS++, March C− and March PS is investigated for different numbers k ≤ 9 of memory cells involved in a PNPSFk fault. The applicability of multiple testing with variable address sequences is substantiated, and the use of random address sequences is proposed. Analytical expressions are given for the coverage of complex PNPSFk faults depending on the multiplicity of the test. In addition, estimates of the mean multiplicity of the MATS++, March C− and March PS tests that ensures the detection of all k·2^k PNPSFk faults are given; they are obtained on the basis of a mathematical model describing the coupon collector's problem. The validity of the analytical estimates is shown experimentally, and the high efficiency of PNPSFk fault detection by tests of the March PS type is confirmed.
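
As background on the multiplicity estimate: in the coupon collector's problem, the expected number of independent uniform trials needed to observe all n distinct outcomes is n·H_n, where H_n is the n-th harmonic number. A minimal sketch of that estimate (illustrative only; the paper's model may differ in detail):

    # Coupon collector: expected trials to see all n outcomes is n * H_n,
    # with H_n = 1 + 1/2 + ... + 1/n.
    def expected_multiplicity(n: int) -> float:
        return n * sum(1.0 / i for i in range(1, n + 1))

    # E.g. covering all 2^9 = 512 background patterns of k = 9 cells
    # with random address sequences:
    print(expected_multiplicity(2 ** 9))  # ~ 3490 trials on average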

SIGNAL, IMAGE, SPEECH, TEXT PROCESSING AND PATTERN RECOGNITION 

43-60
Abstract

One of the promising areas for the development and implementation of artificial intelligence is the automatic detection and tracking of moving objects in video sequences. The paper presents a formalization of the detection and tracking of one and many objects in video. The following metrics are considered: the quality of detection of tracked objects, the accuracy of determining an object's location in a frame, the trajectory of movement, and the accuracy of tracking multiple objects. Based on this generalization, an algorithm for tracking people has been developed that uses the tracking-by-detection method and convolutional neural networks to detect people and form features. The neural network features are included in a composite descriptor that also contains geometric and color features to describe each detected person in the frame. The results of experiments based on the considered criteria are presented, and it is experimentally confirmed that improving the detector makes it possible to increase the accuracy of object tracking. Examples of frames of processed video sequences with visualization of human movement trajectories are given.
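
The abstract does not fix a formula for location accuracy; a common choice in tracking evaluations is the intersection over union (IoU) of predicted and ground-truth bounding boxes. A generic sketch, offered only as an example of such a metric:

    # IoU of two axis-aligned boxes given as (x1, y1, x2, y2): the ratio of
    # the overlap area to the combined area, 1.0 for a perfect localization.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143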

61-71
Abstract

When classifiers are applied in real applications, data imbalance often occurs: the number of elements of one class is greater than that of another. The article examines estimates of classification results for this type of data. The paper answers three questions: which term is a more accurate translation of the phrase "confusion matrix", how the data in this matrix should preferably be represented, and which functions are better used to evaluate classification results from such a matrix. The paper demonstrates on real data that the popular accuracy function cannot correctly estimate classification errors for imbalanced data. It is also impossible to compare the values of this function calculated from matrices with absolute quantitative classification results and from matrices normalized by class. If the data are imbalanced, the accuracy calculated from a confusion matrix with normalized values will usually be lower, since it is calculated by a different formula. The same conclusion holds for most of the classification accuracy functions used in the literature for estimating classification results. It is shown that confusion matrices are better represented with absolute values of the distribution of objects by class rather than relative ones, since absolute values give an idea of the amount of data tested for each class and of their imbalance. When constructing classifiers, it is recommended to evaluate errors with functions that do not depend on the data imbalance, which allows one to expect more correct classification results on real data.
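
To make the point about accuracy concrete (synthetic numbers, not the article's data): on an imbalanced confusion matrix in absolute counts, plain accuracy stays high even when the minority class is almost entirely misclassified, whereas a function built from per-class recalls, such as balanced accuracy, exposes the failure.

    import numpy as np

    # Confusion matrix in absolute counts: rows = true class, cols = predicted.
    # 990 majority-class objects, 10 minority-class objects.
    cm = np.array([[985,  5],
                   [  9,  1]])

    accuracy = np.trace(cm) / cm.sum()               # (985 + 1) / 1000 = 0.986
    per_class_recall = np.diag(cm) / cm.sum(axis=1)  # [0.995, 0.100]
    balanced_accuracy = per_class_recall.mean()      # ≈ 0.547

    print(accuracy, balanced_accuracy)  # 0.986 hides the failing minority class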

72-83
Abstract

The researcher should choose spectrum recording modes that achieve the highest accuracy of spectral measurements in remote sensing systems. When registering a signal from aircraft, which provide maximum coverage of the studied area, it is important to obtain a signal with the maximum signal-to-noise ratio in a minimum time, since the accumulation of spectrum samples for averaging is impossible. The paper presents experimental results of determining the noise components (readout noise, photon and electronic shot noise, pattern noise) for the monochrome uncooled CCD line detector Toshiba TCD1304DG (CCD: charge-coupled device) under various conditions of spectrum registration: detector temperature and exposure. The obtained dependences of the noise components make it possible to estimate the noise level for known registration conditions. An algorithm for processing CCD data based on an adaptive Wiener filter is proposed to increase the signal-to-noise ratio by using a priori information about the statistical parameters of the noise components. This approach has increased the signal-to-noise ratio of sky spectral brightness by 4–9 dB, depending on the exposure time. The practical application of the algorithm has reduced the uncertainty of the vegetation index NDVI by a factor of 1.5 when recording vegetation reflection spectra from an aircraft in the nadir measurement geometry.
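
Two of the quantities named above have standard forms; the sketch below is generic, not the paper's calibrated pipeline. SciPy's wiener applies an adaptive Wiener filter that accepts an a priori noise power estimate, and NDVI = (NIR − RED) / (NIR + RED).

    import numpy as np
    from scipy.signal import wiener

    rng = np.random.default_rng(0)
    clean = np.sin(np.linspace(0, 4 * np.pi, 1000)) + 2.0  # stand-in spectrum
    noisy = clean + rng.normal(0, 0.2, clean.shape)        # additive noise

    # Adaptive Wiener filter; `noise` is the a priori noise power estimate.
    filtered = wiener(noisy, mysize=15, noise=0.2 ** 2)

    def ndvi(nir, red):
        """Normalized difference vegetation index."""
        return (nir - red) / (nir + red)

    print(ndvi(0.45, 0.08))  # ≈ 0.70, typical of healthy vegetation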

INFORMATION TECHNOLOGY 

84-95
Abstract

The reliability of computer-based information systems is largely determined by the reliability of the developed application software, and the failure rate of its computer programs is considered an indicator of that reliability. To determine the expected reliability of application software planned for development (before the program code is written), a model is proposed that uses some parameters of the future computer program, data on the influence of various factors on its reliability, and subsequent testing of the program. The model takes into account the application domain of the software and the performance of the computer processor. The process of obtaining the model parameters is analyzed. The proposed model makes it possible to determine the predicted failure rate of the planned application program, and then the reliability of the computer-based information system as a whole. If necessary, measures can be developed to ensure the required level of reliability of the computer-based information system.
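
The abstract does not reproduce the model's formulas. As background only, the textbook exponential reliability model relates a constant failure rate λ to the probability of failure-free operation over a mission time t by R(t) = exp(−λt):

    import math

    def reliability(failure_rate: float, t_hours: float) -> float:
        """Probability of failure-free operation over t_hours under a
        constant failure rate (exponential reliability model)."""
        return math.exp(-failure_rate * t_hours)

    # E.g. a program with lambda = 1e-4 failures/hour over 1000 hours:
    print(reliability(1e-4, 1000.0))  # ≈ 0.905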

96-104
Abstract

This paper proposes a new approach to the creation and use of a rotational-hybrid model for organizing a modern educational process, integrating educational, information and communication, testing, management, rotational and other technologies. A visual and mathematical model, a diagram of the information and educational system, and an algorithm for the rotation of trainees are presented. An approach is proposed for constructing the optimal way to determine the maximum assimilation of disciplines by students in each set of studied disciplines and for forming its most acceptable subset. The program-algorithmic implementation of the rotational-hybrid model in the form of a universal teaching-testing electronic learning tool is shown. The mathematical basis of the rotational-hybrid model is set theory together with graph models, which compare favorably with other mathematical apparatus in their visual clarity and in a matrix form of representation that is easily processed on a computer. The effectiveness of the developed rotational-hybrid model and its algorithmic implementation is illustrated by the developed teaching-testing electronic learning tool, introduced into the educational process at the Department of Information Systems and Technologies of the Institute of Information Technologies of the Belarusian State University of Informatics and Radioelectronics.
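
Purely as an illustration of the matrix form mentioned above (hypothetical numbers, not the paper's model): a weighted student-discipline graph can be stored as a matrix and queried for the discipline assimilated best on average.

    import numpy as np

    # Rows = students, columns = disciplines, entries = assimilation in [0, 1].
    A = np.array([[0.9, 0.4, 0.7],
                  [0.6, 0.8, 0.5],
                  [0.7, 0.9, 0.6]])

    means = A.mean(axis=0)        # per-discipline mean assimilation
    print(means, means.argmax())  # [0.733 0.7 0.6] -> discipline 0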

BIOINFORMATICS 

105-122
Abstract

Progress in biomolecular technology is directly related to the development of effective methods and algorithms for processing the large amounts of information produced by modern high-throughput experimental equipment. A priority task is the development of promising computational tools for the analysis and interpretation of biophysical information using big data methods and computer models. An integrated approach to processing large datasets, based on data analysis methods and simulation modelling, is proposed. The approach makes it possible to determine the parameters of biophysical and optical processes occurring in complex biomolecular systems. The idea of the integrated approach is to use simulation modelling of the biophysical processes occurring in the object of study, to compare the simulated data with the most relevant experimental data selected by dimension reduction methods, and to determine the characteristics of the investigated processes using data analysis algorithms. The application of the developed approach to the study of biomolecular systems in fluorescence spectroscopy experiments is considered. The effectiveness of the algorithms was verified by analyzing simulated and experimental data representing systems of molecules and proteins. The use of such complex analysis increases the efficiency of studying biophysical systems in the analysis of big data.
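
As a toy illustration of the simulate-and-compare idea in a fluorescence setting (the mono-exponential decay model and all parameters are assumptions, not the paper's): simulate a decay, add noise, and recover the lifetime by least-squares fitting.

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, amplitude, lifetime):
        """Mono-exponential fluorescence decay model."""
        return amplitude * np.exp(-t / lifetime)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 20, 200)                      # time, ns
    observed = decay(t, 1.0, 4.0) + rng.normal(0, 0.02, t.shape)

    # Recover (amplitude, lifetime) by fitting the model to the "experiment".
    (amp, tau), _ = curve_fit(decay, t, observed, p0=(0.5, 1.0))
    print(f"fitted lifetime ≈ {tau:.2f} ns")         # ≈ 4.0 ns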



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1816-0301 (Print)
ISSN 2617-6963 (Online)