
Vestnik NSU. Series: Information Technologies

Vol 17, No 1 (2019)
Full issue in Russian.
5-17
Abstract
The project on the automation of work with poetic texts, carried out at the Institute of Computational Technologies SB RAS, comprises a set of studies related to the analysis of poetic texts. Each component of the project belongs to one of three levels of text analysis: structural, semantic, or pragmatic. Structural analysis of a poetic text involves determining its metro-rhythmic characteristics. At the semantic level, research focuses on extracting semantic structures from poetic texts. The pragmatic level covers the automatic identification of high-level characteristics of a poetic text, such as genre and style. This paper describes the design and implementation of an information system for presenting the results of the analysis of poetic texts. At the design stage, the tasks to be solved by the information system are formulated, along with requirements prioritized according to the needs of the overall project. The system combines heterogeneous information about the results of poetic-text analysis obtained at each level of representation. Based on the needs of potential users, the external elements interacting with the system are described. A test interface for accessing the information system's storage has been developed. The information system will significantly simplify research on poetic texts.
18-27
Abstract
Building on a generalized algorithm for secure information exchange in wireless security systems and a secure control device for a robot group, the authors have developed a generalized algorithm for secure information exchange with a complicated message authentication code for wireless security systems. The algorithm relies on a programmable read-only memory in the control unit that stores a table of unique code sequences assigned to each controlled object. Protection against imitation of transmitted commands is provided by appending to each command the XOR of the initial value of the first pseudo-random sequence (PRS-1) with the unique identification data of the controlled object. With appropriate adaptation, the algorithm also makes it difficult for an outside observer to determine which controlled object is being checked at a given moment, while simultaneously performing an individual authenticity check of the required controlled object and verifying the availability of all controlled objects within communication range. The developed algorithm with a complicated message authentication code can find application in various wireless security systems that require protection of alarm and service messages transmitted over a radio channel from unauthorized access.
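The XOR-based imitation protection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ID table, field widths, and function names are assumptions, and the PRS-1 generator itself is not specified in the abstract, so its initial value is simply modeled as a shared random number.

```python
import secrets

# Hypothetical table of unique code sequences, one per controlled object,
# as would be stored in the control unit's programmable read-only memory.
OBJECT_IDS = {
    "sensor-01": 0x3A5C,
    "sensor-02": 0x9D21,
}

def protect_command(command: int, prs1_init: int, object_id: int) -> tuple:
    """Append the XOR of the PRS-1 initial value and the object's unique ID."""
    tag = prs1_init ^ object_id
    return command, tag

def verify_command(command: int, tag: int, prs1_init: int, object_id: int) -> bool:
    """Receiver recomputes the tag from its own PRS-1 value and ID table."""
    return tag == (prs1_init ^ object_id)

prs1_init = secrets.randbits(16)  # shared between transmitter and receiver
cmd, tag = protect_command(0x01, prs1_init, OBJECT_IDS["sensor-01"])
assert verify_command(cmd, tag, prs1_init, OBJECT_IDS["sensor-01"])
assert not verify_command(cmd, tag, prs1_init, OBJECT_IDS["sensor-02"])
```

Because the tag depends on both the secret PRS-1 value and the per-object ID, replaying it against a different object fails verification, which is the imitation-protection property the abstract claims.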
28-41
Abstract
The problem of timely warning the population about the danger of a tsunami remains relevant in coastal areas around the world. Although tsunamis are quite rare, their consequences can be disastrous. To assess the scale of possible damage, information about the height and speed of the wave at the coastline is needed. Scientists in Akademgorodok have for many years been developing and implementing tsunami-wave modeling algorithms used to calculate tsunami parameters near the coastline. The main purpose of this article is to integrate existing implementations of these algorithms, written in different programming languages (C++, C, Python, Kotlin, Fortran, etc.), into a single system. The system accepts seismic event data as input and calculates tsunami parameters at the protected coastline. The article considers the main stages of the system: monitoring seismic activity, extracting data from DART (Deep-ocean Assessment and Reporting of Tsunamis) stations, filtering out tidal components, and modeling tsunami waves, together with the intermediate steps of data preprocessing and transformation. Special attention is paid to the system architecture: the interaction between modules is designed so that new implementations of the algorithms can be easily integrated into the system.
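The staged, pluggable architecture described above can be sketched as a simple pipeline. Everything here is an assumption for illustration: the stage names, the dict-based event record, and the placeholder numbers stand in for the real C++/C/Python/Kotlin/Fortran implementations, which would be wrapped behind the same per-stage interface.

```python
# Each stage consumes and returns the same event record, so a new
# implementation of any stage can be swapped in without touching the rest.

def monitor_seismic_activity(event):
    event["magnitude_ok"] = event["magnitude"] >= 7.0
    return event

def fetch_dart_data(event):
    event["dart_series"] = [0.01, 0.03, 0.02]  # placeholder sea-level readings
    return event

def filter_tides(event):
    # Toy de-tiding: subtract the mean level from the series.
    mean = sum(event["dart_series"]) / len(event["dart_series"])
    event["detided"] = [x - mean for x in event["dart_series"]]
    return event

def model_tsunami(event):
    # Placeholder for the wave-modeling stage (illustrative scaling only).
    event["coastal_wave_height_m"] = max(event["detided"]) * 100
    return event

PIPELINE = [monitor_seismic_activity, fetch_dart_data, filter_tides, model_tsunami]

def run(event):
    for stage in PIPELINE:
        event = stage(event)
    return event

result = run({"magnitude": 7.4})
```

Swapping in a different modeling code then amounts to replacing one entry of `PIPELINE`, which mirrors the integration goal stated in the abstract.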
42-52
Abstract
Among the many geophysical research methods, pulsed neutron gamma-ray logging stands out as one of the most promising, due to its ability to determine the composition of rocks. However, interpreting the results of such surveys is difficult. Gamma-ray signals are recorded during pulsed neutron gamma-ray logging, and the gamma-ray spectra obtained after pre-processing of the signal are noisy and highly fluctuating, which hampers visual analysis of the results. Visual analysis, alongside automatic analysis, is important when working with logging data: it helps to prevent the transfer of false results and the operation of defective devices. This paper describes an algorithm that reduces fluctuation by taking into account the statistical dependence of neighboring elements of the spectrum, thereby improving the visualization of the results. Since the signal spectra are random variables, smoothing functions, namely B-splines, were chosen from among the various spectrum-processing methods considered. After applying different B-spline parameter settings to the gamma-ray spectra, a quadratic uniform B-spline with a knot interval of 7 showed the best results in improving visualization. Thus, an additional pre-processing algorithm has been developed that reduces the fluctuation of gamma-ray spectra and improves the visualization of the results of pulsed neutron gamma-ray logging.
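The effect of B-spline smoothing on a noisy spectrum can be illustrated with a simplified sketch. Note the simplification: the paper fits a quadratic B-spline with a knot interval of 7, whereas this stand-in convolves the spectrum with a sampled quadratic B-spline kernel, which likewise averages neighboring channels with B-spline weights; the example spectrum is invented.

```python
# Quadratic B-spline basis sampled at offsets -1, 0, +1.
KERNEL = [0.125, 0.75, 0.125]

def smooth_spectrum(spectrum):
    """Convolve a spectrum with the quadratic B-spline kernel, clipping at edges."""
    half = len(KERNEL) // 2
    out = []
    for i in range(len(spectrum)):
        acc, wsum = 0.0, 0.0
        for k, w in enumerate(KERNEL):
            j = i + k - half
            if 0 <= j < len(spectrum):  # renormalize at the spectrum edges
                acc += w * spectrum[j]
                wsum += w
        out.append(acc / wsum)
    return out

noisy = [10, 14, 9, 15, 11, 13, 10]   # invented channel counts
smoothed = smooth_spectrum(noisy)
```

Each output channel is a convex combination of its neighbors, so the smoothed spectrum stays within the range of the original while the channel-to-channel fluctuation is damped.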
53-60
Abstract
The paper presents a new way to obtain evidence with a mobile device: an Android application that allows the user to rapidly start video transmission to a cloud service. For quick launch, a widget is available that can be placed on the home screen. Tapping the widget starts video recording, which is uploaded to the cloud service as it is recorded. The latter is essential for the safety of data already captured, even if the device is subsequently damaged. To save as much data as possible in unexpected situations leading to the loss of the phone, a small fragment duration of one second was chosen. Upon completion of the recording, the user can assemble the whole clip from its fragments, downloading missing fragments from the cloud service if necessary. The practical value of the application is that it records events reliably and quickly, and the collected video material can be useful both in litigation and for self-study. The paper analyzes existing solutions; each is found to have disadvantages that prevent its use for video evidence retrieval. An architecture for the application based on the MVC design pattern with six modules is proposed, the implementation is described, and the developed UI is presented.
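The clip-assembly step, recombining one-second fragments and pulling any locally lost fragments back from the cloud, can be sketched as follows. This is a hypothetical illustration: the fragment indexing, the dict-based local cache and cloud store, and `fetch_fragment` are assumptions, not the application's actual API.

```python
def fetch_fragment(cloud, index):
    """Stand-in for a cloud download; returns the fragment's bytes."""
    return cloud[index]

def assemble_clip(local, cloud, total):
    """Concatenate fragments 0..total-1, fetching any missing ones from the cloud."""
    clip = b""
    for i in range(total):
        fragment = local.get(i)
        if fragment is None:  # lost locally, e.g. the phone was damaged
            fragment = fetch_fragment(cloud, i)
        clip += fragment
    return clip

cloud = {0: b"frag0", 1: b"frag1", 2: b"frag2"}  # every 1-second piece was uploaded
local = {0: b"frag0", 2: b"frag2"}                # fragment 1 is missing on the device
clip = assemble_clip(local, cloud, 3)
```

Because every one-second piece is uploaded as soon as it is recorded, the cloud copy bounds the data loss to at most the fragment in flight when the device fails.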
61-71
Abstract
This article describes the automation of hypothesis testing with heterogeneous EEG data obtained using different equipment (Neuroscan and Brain Products). EEG data are recorded from the head surface of the subjects with a cap carrying 118-128 electrodes (channels). During the experiment each subject undergoes various tests (trials), and the EEG data are divided into parts according to the trials. The connection between electrodes and head is unreliable, and some channels may be lost because of weak signal. The approach includes unification of EEG recordings, restoration of lost channels, 3D reconstruction of brain activity, and mediation analysis. Raw data are rewritten in a unified channel order, and unmatched channels are excluded. Individual corrupted channels are restored by spherical spline interpolation. 3D localization of brain activity is based on solving the inverse problem; localization is performed in the functional areas of the cerebral cortex according to the Talairach atlas using the minimum-norm (MN) method. Reconstruction is carried out separately for each trial and for each of the five standard frequency ranges. The results are recorded in the voxel-oriented NIfTI format. A multi-level mediation analysis is then carried out; the coordinates of the discovered clusters are compared with the brain map and serve as a basis for interpreting and verifying neurophysiological hypotheses. The approach was implemented as a set of MATLAB scripts using the EEGLAB, NeuroElf, AlphaSim, SPM8, and Mediation Toolbox libraries and the sLORETA software package. The created tools have been tested in practice in the processing of neurophysiological experiments on social interaction, and the scripts can be used to test a wide class of neurophysiological hypotheses.
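The channel-restoration step can be illustrated with a deliberately simplified stand-in. The pipeline uses spherical spline interpolation; here, for brevity only, a corrupted channel is reconstructed by inverse-distance weighting of the remaining good channels. The electrode names, positions, and signals are all hypothetical.

```python
import math

# Hypothetical unit-sphere electrode positions (x, y, z).
POSITIONS = {"Fz": (0.0, 0.7, 0.7), "Cz": (0.0, 0.0, 1.0), "Pz": (0.0, -0.7, 0.7)}

def restore_channel(target, good_signals):
    """Estimate a lost channel as an inverse-distance-weighted mean of good ones."""
    tpos = POSITIONS[target]
    n_samples = len(next(iter(good_signals.values())))
    num = [0.0] * n_samples
    den = 0.0
    for name, signal in good_signals.items():
        w = 1.0 / math.dist(tpos, POSITIONS[name])  # closer electrodes weigh more
        den += w
        for i, v in enumerate(signal):
            num[i] += w * v
    return [v / den for v in num]

# Cz is lost; Fz and Pz (equidistant from Cz here) are used to restore it.
restored = restore_channel("Cz", {"Fz": [1.0, 2.0], "Pz": [3.0, 4.0]})
```

A production implementation would instead fit the spherical spline basis over all good electrodes, as EEGLAB does, but the interface (lost channel in, interpolated signal out) is the same.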
72-81
Abstract
The article is devoted to the development of an ontological model core in the form of a software system customizable for a specific subject domain. The work is based on the model-theoretic approach to knowledge representation. Knowledge in the system is represented using fuzzy models and fragments of atomic diagrams of algebraic systems. The architecture of the software system is modular: base modules implement the functionality necessary for any ontological model, for example consistency checking of stored knowledge, and the functionality can be extended by creating and adding new modules to the system. The paper provides an overview of existing software solutions for developing ontologies and ontological models, describes the structure of the core of the ontological model, and presents the architecture of the software solution.
82-89
Abstract
The Particle-in-Cell (PIC) method is widely used for plasma simulation, and GPUs appear to be the most efficient way to run it. In this work we propose a technique that speeds up one of the most time-consuming operations in the GPU implementation of the PIC method: particle reordering, i.e. the redistribution of particles between cells, which is performed after the pusher step. The reordering operation provides the data locality that is the key performance issue of the PIC method. We propose to divide the reordering into two stages. First, the particles that are going to leave a particular cell are gathered into buffer arrays, with one array per neighbor cell (26 in the 3D case). Second, each neighbor cell copies the particles from the relevant array into its own particle array. Since the second stage runs in 26 threads independently, with no synchronization or waiting and with no critical sections, semaphores, mutexes, or atomic operations, it reduces the reordering time more than tenfold compared to the straightforward reordering algorithm. Furthermore, the 26 buffer arrays can be eliminated: particles are simply labeled instead of being moved to a buffer array, which retains all the advantages without wasting memory on buffers.
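The two-stage, label-based reordering can be sketched on a toy 1D grid. This is an illustration of the idea rather than the authors' CUDA code: the list-based particle storage, the `dest` label, and the `cell_of` mapping are simplifications, and in the real GPU version the second stage would run as one independent thread per cell with no synchronization.

```python
def label_leavers(cells, cell_of):
    # Stage 1: label each particle with its destination cell instead of
    # copying it into a per-neighbor buffer array.
    for particles in cells:
        for p in particles:
            p["dest"] = cell_of(p["x"])

def gather(cells):
    # Stage 2: each cell independently pulls in the particles labeled for it.
    # No cell writes into another cell's array, so no locks or atomics are needed.
    new_cells = [[] for _ in cells]
    for particles in cells:
        for p in particles:
            new_cells[p["dest"]].append(p)
    return new_cells

cell_of = lambda x: min(int(x), 2)  # three unit-width cells: [0,1), [1,2), [2,3)
cells = [[{"x": 1.2}], [{"x": 1.5}], [{"x": 0.3}]]  # two particles are misplaced
label_leavers(cells, cell_of)
cells = gather(cells)
```

Because every write in stage 2 targets the gathering cell's own array, the per-cell loops are embarrassingly parallel, which is the property the measured speedup rests on.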
90-100
Abstract
The paper describes the structure of the information-analytical system "Ixodes", which works with a collection of Ixodidae ticks from different biotopes, namely from the territories of the Altai, Siberia, and the Far East. Variants of analyzing the genetic diversity of the ticks and of the pathogens they transmit are shown, using statistical methods and circular and bar charts (histograms). The implemented algorithms are described: they analyze pathogen genetic sequences based on the L-gram approach and on methods for partitioning a phylogenetic tree into groups of close sequences. For the initial processing of a set of genomes, multiple sequence alignment and the neighbor-joining method for building a phylogenetic tree are used. The presented algorithms and methods have been applied to the problem of genotyping the tick-borne encephalitis virus, and the results of testing the phylogenetic tree partitioning methods, together with their comparative analysis, are presented. The architecture of the information-analytical system for analyzing sets of genomes is described. The system assists in the analysis and classification of sets of genomes, in particular in analyzing genotypes within a single species of living organisms, with methods aimed at isolating subtle differences between genomes of similar structure.
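An L-gram comparison of two genetic sequences can be sketched as follows, assuming "L-gram" denotes fixed-length substrings of a sequence (the usual n-gram approach); the example sequences and the Jaccard distance are illustrative and are not necessarily the system's exact metric.

```python
def l_grams(seq, l=3):
    """Set of all length-l substrings (L-grams) of a sequence."""
    return {seq[i:i + l] for i in range(len(seq) - l + 1)}

def l_gram_distance(a, b, l=3):
    """Jaccard distance between the L-gram sets of two sequences."""
    ga, gb = l_grams(a, l), l_grams(b, l)
    return 1.0 - len(ga & gb) / len(ga | gb)

# Two near-identical toy sequences vs. an unrelated one.
d_close = l_gram_distance("ACGTACGTAC", "ACGTACGTAG")
d_far = l_gram_distance("ACGTACGTAC", "TTTTGGGGCC")
```

A distance of this kind, computed pairwise, is the sort of input from which a neighbor-joining tree can then be built and partitioned into groups of close sequences.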


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1818-7900 (Print)
ISSN 2410-0420 (Online)