In this article, mathematical, numerical, and computer modeling of several well-known two-dimensional chaotic signal generators based on modular arithmetic, presented in [4], was carried out using the programs E&F Chaos, Past, Fractan, and Eviews Student Version Lite in combination, and the properties of the obtained chaotic signals were evaluated using methods of nonlinear dynamics (time and spectral diagrams, BDS statistics, the Hurst exponent). The research showed that the time and spectral diagrams obtained for the studied generators have a complex noise-like appearance similar to white noise. The obtained range of BDS-statistic values corresponds to white noise on one interval and to persistent processes (black noise) on another, and the obtained range of Hurst exponent values is also close to that of white noise. These results show that two-dimensional chaotic signal generators based on modular arithmetic can behave like white noise and can have more pronounced chaotic properties than the classical chaotic signal generators from which they are derived. The results complement and expand knowledge about such generators and open up broad prospects for their use in various practical applications.
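The specific generators from [4] are not reproduced here. As a minimal illustration of the kind of analysis described, the Python sketch below iterates one classical two-dimensional map built on mod-1 arithmetic (the Arnold cat map, x_{n+1} = (x_n + y_n) mod 1, y_{n+1} = (x_n + 2y_n) mod 1) and computes a crude rescaled-range (R/S) estimate of the Hurst exponent; both the choice of map and the estimator are assumptions for illustration, not the authors' generators or tooling.

```python
import numpy as np

def cat_map(n, x0=0.3, y0=0.7):
    """Iterate the Arnold cat map: a 2D chaotic map based on mod-1 arithmetic."""
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = x0, y0
    for i in range(1, n):
        x[i] = (x[i - 1] + y[i - 1]) % 1.0
        y[i] = (x[i - 1] + 2.0 * y[i - 1]) % 1.0
    return x, y

def hurst_rs(series, min_chunk=16):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    n = len(series)
    sizes = [s for s in (2 ** k for k in range(4, 20)) if min_chunk <= s <= n // 4]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = series[start:start + s]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()
            sd = chunk.std()
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_s, log_rs, 1)[0]  # slope of log(R/S) vs log(size) ~ H

x, _ = cat_map(20000)
print(f"Hurst exponent estimate: {hurst_rs(x):.3f}")  # ~0.5 indicates white-noise-like behavior
```

An estimate near 0.5 is the white-noise signature the abstract refers to; values approaching 1 would indicate persistent (black-noise) behavior.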
The article discusses the analysis of information security risks when the total damage from computer incidents must be taken into account. The mathematical model of incident occurrence is a discrete-time model in which incidents in an information system occur at random discrete moments of time. Incidents include both unintentional events, such as malfunctions and violations of operating rules, and intentional ones, such as computer attacks and unauthorized access attempts. Under the risk-based approach, we assume that each incident is accompanied by damage of a fixed magnitude, and that the use of protective measures reduces the probability of incidents. Incident response can be configured according to one of two main scenarios. In the first, when an incident occurs, the corresponding damage is compared with its maximum allowable value. If the damage, regardless of the damage caused by other incidents, does not exceed the specified threshold, the information system continues to operate normally; otherwise, the security policy is reviewed, additional protective measures are introduced, and other steps are taken to improve the security of the information system. In the second scenario, the damages from successive incidents are summed, and the sum is compared with the maximum allowable total damage. If, at the next incident, the total damage from all incidents so far does not exceed the set maximum, the information system continues to operate normally; otherwise, it is concluded that additional protection measures are required. Within the framework of these models, the risks of information security violations are assessed; in particular, the probability distribution of the time of safe operation of the information system is found. As an illustration of the approach, predictive models are constructed for the number of unauthorized transactions with accounts of legal entities and the number of unauthorized transactions using payment cards. The models are based on real incident data and use a previously developed forecasting method based on a continuous approximating function.
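The distribution of the time of safe operation under the two scenarios can also be approximated by simulation. The sketch below is a toy Monte Carlo illustration, not the paper's analytical model: incidents occur at each discrete step with probability p, each incident causes a random damage, and the per-incident and cumulative threshold rules are compared; p, the exponential damage distribution, and the threshold dmax are assumed parameters.

```python
import random

def safe_time(p=0.05, dmax=3.0, cumulative=False, max_steps=100000):
    """Simulate discrete time steps until damage exceeds the allowed threshold dmax.
    cumulative=False: each incident's own damage is compared with dmax.
    cumulative=True: damages from successive incidents are summed before comparison."""
    total = 0.0
    for t in range(1, max_steps + 1):
        if random.random() < p:               # an incident occurs at step t
            damage = random.expovariate(1.0)  # assumed exponential damage size
            total += damage
            if (total if cumulative else damage) > dmax:
                return t                      # threshold exceeded: revise the security policy
    return max_steps

runs = 10000
for mode in (False, True):
    times = [safe_time(cumulative=mode) for _ in range(runs)]
    label = "cumulative" if mode else "per-incident"
    print(f"{label}: mean safe operation time = {sum(times) / runs:.1f} steps")
```

As expected, the cumulative rule triggers a policy review much earlier than the per-incident rule for the same threshold, which is why the two scenarios lead to different safe-operation-time distributions.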
Expert evaluations are used everywhere and for a wide range of problems, but a set of expert evaluations is often inconsistent. In this paper, we propose an algorithm for checking evaluative expert knowledge for consistency. The algorithm not only answers whether the data are consistent but also visualizes the original data as a tree; if the entered expert evaluations are inconsistent, the constructed tree highlights the edges where the inconsistency occurs. The algorithm also offers two alternative ways of resolving evaluation inconsistencies. The article describes a software system developed on the basis of this algorithm.
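The abstract does not detail the algorithm itself. One common setting for such checks is pairwise comparative evaluations ("A is k times B"), and the sketch below shows one plausible interpretation under that assumption: a spanning tree is built over the comparison graph, scores implied by the tree are propagated, and any judgment whose stated ratio contradicts the tree-implied ratio is flagged for highlighting. The graph representation, log-ratio semantics, and tolerance are all illustrative assumptions, not the authors' method.

```python
import math

# Hypothetical pairwise expert judgments: (a, b, r) means "a is r times b".
judgments = [("A", "B", 2.0), ("B", "C", 3.0), ("A", "C", 10.0)]  # 2*3 != 10 -> inconsistent

def check_consistency(judgments, tol=0.05):
    """Build a spanning tree over the comparison graph, propagate log-scores,
    then flag judgments whose stated ratio contradicts the tree-implied one."""
    graph = {}
    for a, b, r in judgments:
        graph.setdefault(a, []).append((b, -math.log(r)))  # moving a -> b lowers the score
        graph.setdefault(b, []).append((a, math.log(r)))
    score, tree_edges = {}, []
    for root in graph:                     # traverse every connected component
        if root in score:
            continue
        score[root] = 0.0
        stack = [root]
        while stack:
            u = stack.pop()
            for v, w in graph[u]:
                if v not in score:
                    score[v] = score[u] + w
                    tree_edges.append((u, v))
                    stack.append(v)
    bad = []
    for a, b, r in judgments:              # non-tree edges may reveal contradictions
        if abs((score[a] - score[b]) - math.log(r)) > tol:
            bad.append((a, b, r))
    return tree_edges, bad

tree, bad = check_consistency(judgments)
print("spanning tree edges:", tree)
print("inconsistent judgments to highlight:", bad)
```

On the sample data the tree is built from the A-B and A-C judgments, and the B-C judgment is flagged because the tree implies a ratio of 5 rather than the stated 3.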
There are domains, such as medicine and finance, where all data transformations must be transparent and interpretable. Dimension reduction is an important part of a preprocessing pipeline, but current algorithms for it are not transparent. In this work, we propose a genetic algorithm for transparent dimension reduction of numerical data. The algorithm constructs features in the form of expression trees built from a subset of the numerical features of the source data and common arithmetic operations. It is designed to maximize quality in binary classification tasks and to generate features explainable by a human, which is achieved by using human-interpretable operations in feature construction. Data transformed by the algorithm can also be used in visual analysis. A multi-criteria dynamic fitness function is provided to build features with high diversity.
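The sketch below illustrates the core idea, not the authors' implementation: candidate features are random expression trees whose leaves are source columns and whose nodes are arithmetic operations, fitness is the classification quality of the constructed feature, and a simple evolutionary loop keeps the best trees. The single-objective fitness and the toy selection scheme are simplifications of the multi-criteria dynamic fitness function described above.

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

OPS = {"+": np.add, "-": np.subtract, "*": np.multiply}

def random_tree(n_cols, depth=2):
    """Grow a random expression tree: leaves are column indices, nodes are ops."""
    if depth == 0 or random.random() < 0.3:
        return random.randrange(n_cols)               # leaf: a source feature
    op = random.choice(list(OPS))
    return (op, random_tree(n_cols, depth - 1), random_tree(n_cols, depth - 1))

def evaluate(tree, X):
    """Compute the constructed feature for every row of X."""
    if isinstance(tree, int):
        return X[:, tree]
    op, left, right = tree
    return OPS[op](evaluate(left, X), evaluate(right, X))

def to_str(tree):
    """Human-readable form of the tree -- the point of 'transparent' reduction."""
    if isinstance(tree, int):
        return f"x{tree}"
    op, l, r = tree
    return f"({to_str(l)} {op} {to_str(r)})"

def fitness(tree, X, y):
    """Single-criterion stand-in: CV accuracy of a linear model on the one feature."""
    f = evaluate(tree, X).reshape(-1, 1)
    return cross_val_score(LogisticRegression(), f, y, cv=3).mean()

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
pop = [random_tree(X.shape[1]) for _ in range(30)]
for _ in range(10):                                   # simple elitist generational loop
    pop.sort(key=lambda t: fitness(t, X, y), reverse=True)
    pop = pop[:10] + [random_tree(X.shape[1]) for _ in range(20)]
best = max(pop, key=lambda t: fitness(t, X, y))
print("best feature:", to_str(best), "fitness:", round(fitness(best, X, y), 3))
```

Because each surviving feature prints as an arithmetic expression over the original columns, a domain expert can audit exactly how the reduced representation was obtained.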
Software system developers must respond quickly to failures in order to avoid reputational and financial losses for their customers, so it is important to detect behavioral anomalies in the operation of software systems in a timely manner. Various tools for automatic monitoring of systems are being actively developed, but logs remain the main tool for analyzing failures. Logs contain information about the operation of the system at various points of execution. Modern systems often have a distributed microservice architecture, which significantly complicates log analysis: logs are collected centrally from different microservices, forming a huge stream of information that is very difficult to analyze manually. However, the problem of identifying the logs related to a specific request to the system is solved by distributed tracing, which opens up wide opportunities for automatic analysis. Many solutions for detecting anomalies in logs already exist, but they do not take advantage of distributed tracing. The article is devoted to the problem of detecting behavioral anomalies in the operation of distributed software systems based on automatic analysis of log traces. The solution combines several machine learning methods: log traces are preprocessed and cleaned using process mining techniques; log messages are then vectorized and clustered; after that, a long short-term memory (LSTM) network is used to detect deviations in the sequences of processed logs. As a result of the work, a prototype of the anomaly detection system was developed and tested.
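The sketch below shows the core of such a pipeline on toy data: log messages are vectorized and clustered into event types, an LSTM is trained to predict the next event in a trace, and a trace is flagged as anomalous when an actual event falls outside the model's top predictions. TF-IDF, KMeans, and the PyTorch model here are assumed stand-ins for the paper's components, and the process-mining cleaning step is omitted.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy log messages and traces (sequences of message indices) -- stand-ins for real data.
messages = ["user login ok", "open db connection", "query executed",
            "close db connection", "user logout ok", "fatal error in db driver"]
traces = [[0, 1, 2, 3, 4]] * 50 + [[0, 1, 5, 3, 4]]   # last trace is anomalous

# Step 1: vectorize log messages and cluster them into event types.
vecs = TfidfVectorizer().fit_transform(messages)
k = 6
event_of = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vecs)

# Step 2: an LSTM that predicts the next event type from the prefix of a trace.
class NextEvent(nn.Module):
    def __init__(self, k, dim=16):
        super().__init__()
        self.emb = nn.Embedding(k, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, k)
    def forward(self, seq):
        h, _ = self.lstm(self.emb(seq))
        return self.out(h[:, -1])

model = NextEvent(k)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
train = [[int(event_of[m]) for m in tr] for tr in traces[:50]]  # normal traces only
for _ in range(30):
    for tr in train:
        for i in range(2, len(tr)):
            x, y = torch.tensor([tr[:i]]), torch.tensor([tr[i]])
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

# Step 3: flag a trace as anomalous if any actual event is outside the top-2 predictions.
def is_anomaly(tr, top=2):
    ids = [int(event_of[m]) for m in tr]
    for i in range(2, len(ids)):
        scores = model(torch.tensor([ids[:i]]))[0]
        if ids[i] not in scores.topk(top).indices.tolist():
            return True
    return False

print("normal trace anomalous? ", is_anomaly(traces[0]))
print("faulty trace anomalous? ", is_anomaly(traces[-1]))
```

Training on normal traces only, as here, means the model learns the expected event order, so a deviating trace (the injected "fatal error" event) fails the top-k check while normal traces pass.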