
Vestnik NSU. Series: Information Technologies

Vol 22, No 2 (2024)
5-19
Abstract

Expert probability estimates are used in decision-making: they allow risks to be assessed and are especially useful when objective data are limited or unavailable. At the same time, the estimates may contradict one another, which complicates analysis and decision-making. This paper describes an algorithm that checks the consistency of expert estimates given as probability intervals. One section is devoted to the development of an evaluation method for a new formula based on the original evaluation system. The article also describes the user interface that provides interaction with the developed algorithms.
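For illustration only, the following Python sketch shows one common consistency condition for interval probability estimates over mutually exclusive, exhaustive events; the function name, the specific condition, and the sample intervals are assumptions and do not reproduce the algorithm from the paper.

def intervals_consistent(intervals):
    """intervals: list of (low, high) expert bounds for mutually exclusive, exhaustive events."""
    for low, high in intervals:
        if not (0.0 <= low <= high <= 1.0):
            return False  # each bound must itself be a valid probability
    sum_low = sum(low for low, _ in intervals)
    sum_high = sum(high for _, high in intervals)
    # A probability distribution fitting all intervals exists only if the lower
    # bounds do not exceed 1 in total and the upper bounds reach at least 1.
    return sum_low <= 1.0 <= sum_high

if __name__ == "__main__":
    print(intervals_consistent([(0.2, 0.4), (0.3, 0.5), (0.2, 0.6)]))  # True
    print(intervals_consistent([(0.5, 0.6), (0.5, 0.7)]))              # False: lower bounds sum to 1.0+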

20-32
Abstract

Modern megalopolises face challenges in managing energy resources effectively and minimizing the impact of human activities on the environment. One significant aspect of anthropogenic impact is the heat flow generated by buildings, transport, and industry. This article presents a set of programs for assessing the anthropogenic heat flow (AHF) caused by heat loss from buildings during the heating season. The study is based on a spatial geometric model of the city constructed from OpenStreetMap data. The thermophysical properties of enclosing structures and the specific characteristics of thermal energy consumption are taken from building codes. The proposed method includes the stages of building modeling, filtering, supplementing the information from Yandex Maps and the State Information System of Housing and Communal Services, eliminating collisions, assigning specific characteristics, and calculating the AHF. The method is implemented as scripts for the Rhinoceros platform, known for its wide functionality, and its visual programming environment Grasshopper. The proposed approach makes it possible to analyze and visualize anthropogenic heat flow in cities effectively, which is a key step in developing strategies for sustainable energy management and reducing negative environmental impacts.
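As a rough illustration of the per-building heat-loss estimate described in the abstract, the Python sketch below computes transmission losses from envelope area, an assumed thermal transmittance (U-value), and seasonal degree-hours; all numbers and field names are made-up placeholders, not values from the paper or from building codes.

def transmission_heat_loss_kwh(area_m2, u_value, degree_hours):
    """Q = U * A * degree-hours, converted from Wh to kWh."""
    return u_value * area_m2 * degree_hours / 1000.0

building = {
    "wall_area_m2": 850.0,   # from the geometric city model (e.g. OSM footprints), assumed value
    "roof_area_m2": 300.0,
    "u_wall": 0.35,          # W/(m^2*K), assumed envelope transmittance
    "u_roof": 0.25,
}
heating_degree_hours = 120_000.0  # assumed (T_in - T_out) integrated over the heating season

ahf_kwh = (
    transmission_heat_loss_kwh(building["wall_area_m2"], building["u_wall"], heating_degree_hours)
    + transmission_heat_loss_kwh(building["roof_area_m2"], building["u_roof"], heating_degree_hours)
)
print(f"Seasonal anthropogenic heat flow estimate: {ahf_kwh:.0f} kWh")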

33-43
Abstract

The article discusses a method of digitally signing images that does not use metadata or additional files. The object to be signed is the array of 8x8 pixel blocks, with the discrete cosine transform applied, that makes up a JPEG file. Any known signature algorithm can be used, for example RSA.

The resulting digital signature is converted from binary form into an image. The conversion encodes every few bits of the signature as one of the 64 basis functions of the discrete cosine transform; the basis function is then turned into an image by the inverse discrete cosine transform. The signature, encoded as an image, is attached to the right of the original image to form a signed image.

The digital signature is resistant to JPEG compression within adjustable limits. Compression resistance is achieved as follows. Since the values in the blocks can change during compression, quantization is applied, reducing the precision of the values. It is designed so that the quantized values of the compressed and uncompressed images coincide, which makes it possible to verify the validity of the digital signature. Quantization also includes a parity check of the resulting value to avoid misinterpreting values under strong compression.

The quantization step is not applied to the part of the image containing the digital signature, because this part consists of basis functions of the discrete cosine transform: even under strong compression, the corresponding basis function retains a large coefficient and is interpreted unambiguously.
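The compression-tolerant quantization idea can be illustrated with a hedged Python sketch: coefficients are snapped to a coarse grid at signing time, and verification flags values that land on a quantization level of the wrong parity. The step size and the specific parity rule are guesses for illustration, not the authors' exact scheme.

def embed_quantize(value, step=32):
    """At signing time, snap a DCT coefficient to the nearest even multiple of `step`,
    so moderate JPEG re-compression cannot move it off its quantization level."""
    return round(value / (2 * step)) * 2 * step

def verify_quantize(value, step=32):
    """At verification time, snap to the nearest multiple of `step` and check parity:
    an odd level means the value drifted too far and would otherwise be misread."""
    quantized = round(value / step) * step
    return quantized, (quantized // step) % 2 == 0

# Example: a coefficient embedded at 128 survives a small drift (to 120), while a
# large drift (to 90) is flagged by the parity check instead of being misread.
print(embed_quantize(100))   # 128
print(verify_quantize(120))  # (128, True)
print(verify_quantize(90))   # (96, False)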

44-56
Abstract

A variant of an algorithm has been developed for the automated recovery of the numerical values of a graphically represented chromatograph signal function when studying the component composition of heavy oil feedstock samples. The problem the method addresses is the poor adaptation of chromatographs to the oil industry: oil is a natural raw material that is not chemically pure, so not all numerical characteristics of the components contained in a sample are recorded during a chromatographic study. In the current workflow, the values are read from the chromatogram manually. The developed method takes as input the images of oil chromatograms obtained in the laboratory in their original black-and-white colour scheme. Its output is an array of numerical coordinate values reconstructed with a step of one pixel. The reconstruction error of the method is much smaller than the threshold set by the petrochemical laboratory. In addition to automating this task, the array of obtained coordinate values was vectorized so that the vector could be used as input to a Transformer model for predicting the redistribution of hydrocarbon components of heavy oil under the influence of catalysts. As a result of this change in input representation, both the time required to obtain a prediction and the training time were reduced several-fold, while the average prediction error decreased.
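A simplified Python sketch of recovering curve ordinates from a black-and-white raster column by column, one value per pixel column, as the abstract describes; the synthetic image, baseline convention, and scaling are assumptions rather than details of the published method.

import numpy as np

def trace_from_binary_image(img, baseline_row=None):
    """img: 2-D boolean array, True where a trace pixel is dark.
    Returns one ordinate per column (NaN where the trace is missing)."""
    h, w = img.shape
    baseline_row = h - 1 if baseline_row is None else baseline_row
    values = np.full(w, np.nan)
    for col in range(w):
        rows = np.flatnonzero(img[:, col])
        if rows.size:
            # take the topmost dark pixel and measure its height above the baseline
            values[col] = baseline_row - rows.min()
    return values

# Synthetic example instead of a real laboratory scan
h, w = 100, 200
img = np.zeros((h, w), dtype=bool)
x = np.arange(w)
peak = 80 * np.exp(-((x - 100) ** 2) / (2 * 15 ** 2))  # a single Gaussian-like peak
img[h - 1 - peak.astype(int), x] = True
print(trace_from_binary_image(img)[95:105])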

57-67
Abstract

Modern software engineers use many tools to speed up development. Many of them work in integrated development environments (IDEs), which provide services such as text editors, debuggers, and even intelligent code completion. This paper is devoted to developing a model that predicts code completion variants for program source code. To improve the accuracy of the model, we combined Markov chains built with different ways of computing the current context of the program: a linear way and an AST-based way. The linear way of computing the context analyzes the tokenized representation of the source code, while the second method uses the representation of the source code as an abstract syntax tree. Combining the models preserves more semantic information about the code and also adds the ability to adapt to individual code-writing style. To compare the models, a new dataset was created specifically for the Pascal language. A detailed comparison of the working mechanisms, as well as of the prediction accuracy on the collected data, is given. The proposed model showed sufficiently high prediction accuracy at minimal computational cost, which allows it to be used in integrated development environments.
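The combination of two Markov-chain predictors can be sketched in Python as follows; this is an illustrative mixture of a token-context model and an AST-context model with a fixed weight, not the authors' implementation, and the toy Pascal-like training pairs are invented.

from collections import Counter, defaultdict

def train_markov(samples):
    """samples: iterable of (context, next_token) pairs."""
    model = defaultdict(Counter)
    for context, nxt in samples:
        model[context][nxt] += 1
    return model

def predict(model, context):
    counts = model.get(context, Counter())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()} if total else {}

def combine(p_linear, p_ast, w=0.5):
    """Weighted mixture of the two next-token distributions."""
    toks = set(p_linear) | set(p_ast)
    return {t: w * p_linear.get(t, 0.0) + (1 - w) * p_ast.get(t, 0.0) for t in toks}

# Toy data: linear context = last two tokens, AST context = kind of the enclosing node.
linear = train_markov([(("if", "x"), ">"), (("if", "x"), "="), (("if", "x"), ">")])
ast = train_markov([("IfCondition", ">"), ("IfCondition", "<")])
mixed = combine(predict(linear, ("if", "x")), predict(ast, "IfCondition"))
print(max(mixed, key=mixed.get))  # '>'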

68-78
Abstract

A study of several neural networks of different architectures for determining gas concentrations from spectra obtained with an optical emission gas analyzer, which measures the spectrum of electromagnetic radiation emitted by gases excited by an electric discharge, is presented. The neural networks are trained on data from an optical spectroscopy laboratory and are able to predict gas concentrations from spectra at high speed. The research covered deep neural network architectures with convolutional and recurrent layers: convolutional layers extract features of the spectra, while recurrent layers take the sequential structure of the data into account. The quality of each neural network is evaluated by the coefficient of determination R2, and the networks are compared by the RMSE between the predicted and actual gas concentrations.
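A hedged PyTorch sketch of the kind of convolutional-plus-recurrent regressor the abstract refers to, together with the R2 and RMSE metrics used for comparison; the layer sizes, the number of gas channels, and the random data are illustrative assumptions, not the architecture from the paper.

import torch
import torch.nn as nn

class SpectrumRegressor(nn.Module):
    def __init__(self, n_gases=3, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(                          # convolutional feature extractor
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.rnn = nn.GRU(16, hidden, batch_first=True)     # models the sequential structure of the spectrum
        self.head = nn.Linear(hidden, n_gases)              # one concentration per gas

    def forward(self, x):                                   # x: (batch, 1, spectrum_length)
        feats = self.conv(x).transpose(1, 2)                # (batch, steps, channels)
        _, h = self.rnn(feats)
        return self.head(h[-1])

def rmse(pred, target):
    return torch.sqrt(torch.mean((pred - target) ** 2))

def r2(pred, target):
    ss_res = torch.sum((target - pred) ** 2)
    ss_tot = torch.sum((target - target.mean(dim=0)) ** 2)
    return 1.0 - ss_res / ss_tot

spectra = torch.randn(8, 1, 1024)   # fake spectra, stand-in for laboratory data
true_conc = torch.rand(8, 3)
pred = SpectrumRegressor()(spectra)
print(rmse(pred, true_conc).item(), r2(pred, true_conc).item())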



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1818-7900 (Print)
ISSN 2410-0420 (Online)