<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">intechngu</journal-id><journal-title-group><journal-title xml:lang="ru">Вестник НГУ. Серия: Информационные технологии</journal-title><trans-title-group xml:lang="en"><trans-title>Vestnik NSU. Series: Information Technologies</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">1818-7900</issn><issn pub-type="epub">2410-0420</issn><publisher><publisher-name>НГУ</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.25205/1818-7900-2023-21-1-46-61</article-id><article-id custom-type="elpub" pub-id-type="custom">intechngu-222</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>Статьи</subject></subj-group></article-categories><title-group><article-title>Прозрачное уменьшение размерности с помощью генетического алгоритма</article-title><trans-title-group xml:lang="en"><trans-title>Transparent Reduction of Dimension with Genetic Algorithm</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-4334-5725</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Радеев</surname><given-names>Н. А.</given-names></name><name name-style="western" xml:lang="en"><surname>Radeev</surname><given-names>N. A.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Никита Андреевич Радеев, магистрант</p><p>факультет информационных технологий</p><p>кафедра общей информатики</p><p>Новосибирск</p></bio><bio xml:lang="en"><p>Nikita A. 
Radeev, master's student</p><p>faculty of Information Technologies</p><p>General Informatics Department</p><p>Novosibirsk</p></bio><email xlink:type="simple">n.radeev@g.nsu.ru</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Новосибирский государственный университет<country>Россия</country></aff><aff xml:lang="en">Novosibirsk State University<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2023</year></pub-date><pub-date pub-type="epub"><day>28</day><month>08</month><year>2023</year></pub-date><volume>21</volume><issue>1</issue><fpage>46</fpage><lpage>61</lpage><permissions><copyright-statement>Copyright &#x00A9; Радеев Н.А., 2023</copyright-statement><copyright-year>2023</copyright-year><copyright-holder xml:lang="ru">Радеев Н.А.</copyright-holder><copyright-holder xml:lang="en">Radeev N.A.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://intechngu.elpub.ru/jour/article/view/222">https://intechngu.elpub.ru/jour/article/view/222</self-uri><abstract><p>Существуют предметные области, где все преобразования данных должны быть прозрачными и объяснимыми (например, медицина и финансы). Уменьшение размерности данных является важной частью предварительной обработки данных, но алгоритмы для него в настоящее время не являются прозрачными. В данной работе мы предлагаем генетический алгоритм для прозрачного уменьшения размерности числовых табличных данных. Алгоритм строит признаки в виде деревьев выражений на основе подмножества числовых признаков из исходных данных и обычных арифметических операций. 
Он спроектирован так, чтобы стремиться к достижению максимального качества в задачах бинарной классификации и генерировать признаки, объяснимые человеком, что достигается за счет использования в построении признаков операций, понятных человеку. Кроме того, преобразованные алгоритмом данные могут быть использованы в визуальном анализе, если уменьшить размерность до двух. В алгоритме используется многокритериальная динамическая фитнес-функция, предназначенная для построения признаков с высоким разнообразием.</p></abstract><trans-abstract xml:lang="en"><p>There are domains where all transformations of data must be transparent and interpretable (for example, medicine and finance). Dimension reduction is an important part of a preprocessing pipeline, but current algorithms for it are not transparent. In this work, we propose a genetic algorithm for transparent dimension reduction of numerical tabular data. The algorithm constructs features in the form of expression trees based on a subset of numerical features from the source data and common arithmetic operations. It is designed to maximize quality in binary classification tasks and to generate human-explainable features, which is achieved by using human-interpretable operations in feature construction. In addition, data transformed by the algorithm can be used in visual analysis if the dimension is reduced to two. 
The algorithm uses a multi-criteria dynamic fitness function designed to build features with high diversity.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>генетический алгоритм</kwd><kwd>уменьшение размерности</kwd><kwd>построение признаков</kwd><kwd>интерпретируемость</kwd><kwd>объяснимый ИИ</kwd><kwd>символьная регрессия</kwd></kwd-group><kwd-group xml:lang="en"><kwd>genetic algorithm</kwd><kwd>dimension reduction</kwd><kwd>feature construction</kwd><kwd>interpretability</kwd><kwd>explainable AI</kwd><kwd>symbolic regression</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">M. H. ur Rehman, C. S. Liew, A. Abbas, P. P. Jayaraman, T. Y. Wah, S. U. Khan. Big Data Reduction Methods: A Survey. Data Sci. Eng., 2016, vol. 1, no. 4, p. 265–284, DOI: 10.1007/s41019-016-0022-0.</mixed-citation><mixed-citation xml:lang="en">M. H. ur Rehman, C. S. Liew, A. Abbas, P. P. Jayaraman, T. Y. Wah, S. U. Khan. Big Data Reduction Methods: A Survey. Data Sci. Eng., 2016, vol. 1, no. 4, p. 265–284, DOI: 10.1007/s41019-016-0022-0.</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">C. H. Yoon, R. Torrance, N. Scheinerman. Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? J. Med. Ethics, 2022, vol. 48, no. 9, p. 581–585, DOI: 10.1136/medethics-2020-107102.</mixed-citation><mixed-citation xml:lang="en">C. H. Yoon, R. Torrance, N. Scheinerman. Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? J. Med. Ethics, 2022, vol. 48, no. 9, p. 581–585, DOI: 10.1136/medethics-2020-107102.</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">P. Linardatos, V. Papastefanopoulos, S. Kotsiantis. Explainable AI: A Review of Machine Learning Interpretability Methods. 
Entropy, 2021, vol. 23, no. 1, Art. no. 1, DOI: 10.3390/e23010018.</mixed-citation><mixed-citation xml:lang="en">P. Linardatos, V. Papastefanopoulos, S. Kotsiantis. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 2021, vol. 23, no. 1, Art. no. 1, DOI: 10.3390/e23010018.</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Izenman. Introduction to manifold learning. Wiley Interdiscip. Rev. Comput. Stat., 2012, vol. 4, DOI: 10.1002/wics.1222.</mixed-citation><mixed-citation xml:lang="en">Izenman. Introduction to manifold learning. Wiley Interdiscip. Rev. Comput. Stat., 2012, vol. 4, DOI: 10.1002/wics.1222.</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">H. Han, W. Li, J. Wang, G. Qin, X. Qin. Enhance Explainability of Manifold Learning. Neurocomputing, 2022, vol. 500, DOI: 10.1016/j.neucom.2022.05.119.</mixed-citation><mixed-citation xml:lang="en">H. Han, W. Li, J. Wang, G. Qin, X. Qin. Enhance Explainability of Manifold Learning. Neurocomputing, 2022, vol. 500, DOI: 10.1016/j.neucom.2022.05.119.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">D. Elizondo, R. Birkenhead, M. Gámez, N. Rubio, E. Alfaro-Cortés. Linear separability and classification complexity. Expert Syst. Appl., 2012, vol. 39, p. 7796–7807, DOI: 10.1016/j.eswa.2012.01.090.</mixed-citation><mixed-citation xml:lang="en">D. Elizondo, R. Birkenhead, M. Gámez, N. Rubio, E. Alfaro-Cortés. Linear separability and classification complexity. Expert Syst. Appl., 2012, vol. 39, p. 7796–7807, DOI: 10.1016/j.eswa.2012.01.090.</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">J. Koza, R. Poli. Genetic Programming. in Search Methodologies, 2005, p. 127–164. 
DOI: 10.1007/0-387-28356-0_5.</mixed-citation><mixed-citation xml:lang="en">J. Koza, R. Poli. Genetic Programming. in Search Methodologies, 2005, p. 127–164. DOI: 10.1007/0-387-28356-0_5.</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">U.-M. O’Reilly, E. Hemberg. Genetic programming: a tutorial introduction. 2021, p. 453. DOI: 10.1145/3449726.3461394.</mixed-citation><mixed-citation xml:lang="en">U.-M. O’Reilly, E. Hemberg. Genetic programming: a tutorial introduction. 2021, p. 453. DOI: 10.1145/3449726.3461394.</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Vasuki. Genetic Programming. 2020, p. 61–76. DOI: 10.1201/9780429289071-5.</mixed-citation><mixed-citation xml:lang="en">Vasuki. Genetic Programming. 2020, p. 61–76. DOI: 10.1201/9780429289071-5.</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">L. Kallel, B. Naudts, C. Reeves. Properties of Fitness Functions and Search Landscapes. 2000, DOI: 10.1007/978-3-662-04448-3_8.</mixed-citation><mixed-citation xml:lang="en">L. Kallel, B. Naudts, C. Reeves. Properties of Fitness Functions and Search Landscapes. 2000, DOI: 10.1007/978-3-662-04448-3_8.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">M. Schmidt, H. Lipson. Distilling Free-Form Natural Laws from Experimental Data. Science, 2009, vol. 324, no. 5923, p. 81–85, DOI: 10.1126/science.1165893.</mixed-citation><mixed-citation xml:lang="en">M. Schmidt, H. Lipson. Distilling Free-Form Natural Laws from Experimental Data. Science, 2009, vol. 324, no. 5923, p. 81–85, DOI: 10.1126/science.1165893.</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">W. La Cava et al. 
Contemporary Symbolic Regression Methods and their Relative Performance. in Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 2021, vol. 1.</mixed-citation><mixed-citation xml:lang="en">W. La Cava et al. Contemporary Symbolic Regression Methods and their Relative Performance. in Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 2021, vol. 1.</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">L. Sotto, P. Kaufmann, T. Atkinson, R. Kalkreuth, M. Basgalupp. Graph representations in genetic programming. Genet. Program. Evolvable Mach., 2021, vol. 22, DOI: 10.1007/s10710-021-09413-9.</mixed-citation><mixed-citation xml:lang="en">L. Sotto, P. Kaufmann, T. Atkinson, R. Kalkreuth, M. Basgalupp. Graph representations in genetic programming. Genet. Program. Evolvable Mach., 2021, vol. 22, DOI: 10.1007/s10710-021-09413-9.</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">P. Krtolica, P. Stanimirovic. Reverse Polish notation method. Int. J. Comput. Math., 2004, vol. IJCM, p. 273–284, DOI: 10.1080/00207160410001660826.</mixed-citation><mixed-citation xml:lang="en">P. Krtolica, P. Stanimirovic. Reverse Polish notation method. Int. J. Comput. Math., 2004, vol. IJCM, p. 273–284, DOI: 10.1080/00207160410001660826.</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">C. Ferreira. Gene Expression Programming: a New Adaptive Algorithm for Solving Problems. arXiv, 2001. DOI: 10.48550/arXiv.cs/0102027.</mixed-citation><mixed-citation xml:lang="en">C. Ferreira. Gene Expression Programming: a New Adaptive Algorithm for Solving Problems. arXiv, 2001. 
DOI: 10.48550/arXiv.cs/0102027.</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">C. Ferreira. Gene Expression Programming in Problem Solving. in Soft Computing and Industry: Recent Applications, Eds. London: Springer, 2002, p. 635–653. DOI: 10.1007/978-1-4471-0123-9_54.</mixed-citation><mixed-citation xml:lang="en">C. Ferreira. Gene Expression Programming in Problem Solving. in Soft Computing and Industry: Recent Applications, Eds. London: Springer, 2002, p. 635–653. DOI: 10.1007/978-1-4471-0123-9_54.</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Jolliffe. Principal Component Analysis. Springer: Berlin, Germany, 1986, vol. 87, p. 41–64, DOI: 10.1007/b98835.</mixed-citation><mixed-citation xml:lang="en">Jolliffe. Principal Component Analysis. Springer: Berlin, Germany, 1986, vol. 87, p. 41–64, DOI: 10.1007/b98835.</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">L. van der Maaten, G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, vol. 9, p. 2579–2605.</mixed-citation><mixed-citation xml:lang="en">L. van der Maaten, G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, vol. 9, p. 2579–2605.</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">B. Hosseini, B. Hammer. Interpretable Discriminative Dimensionality Reduction and Feature Selection on the Manifold. arXiv, arXiv:1909.09218, 2019. DOI: 10.48550/arXiv.1909.09218.</mixed-citation><mixed-citation xml:lang="en">B. Hosseini, B. Hammer. Interpretable Discriminative Dimensionality Reduction and Feature Selection on the Manifold. 
arXiv, arXiv:1909.09218, 2019. DOI: 10.48550/arXiv.1909.09218.</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">T. Uriot, M. Virgolin, T. Alderliesten, P. Bosman. On genetic programming representations and fitness functions for interpretable dimensionality reduction. arXiv, arXiv:2203.00528, 2022. DOI: 10.48550/arXiv.2203.00528.</mixed-citation><mixed-citation xml:lang="en">T. Uriot, M. Virgolin, T. Alderliesten, P. Bosman. On genetic programming representations and fitness functions for interpretable dimensionality reduction. arXiv, arXiv:2203.00528, 2022. DOI: 10.48550/arXiv.2203.00528.</mixed-citation></citation-alternatives></ref><ref id="cit21"><label>21</label><citation-alternatives><mixed-citation xml:lang="ru">Lensen, M. Zhang, B. Xue. Multi-Objective Genetic Programming for Manifold Learning: Balancing Quality and Dimensionality. Genet. Program. Evolvable Mach., 2020, vol. 21, no. 3, p. 399–431, DOI: 10.1007/s10710-020-09375-4.</mixed-citation><mixed-citation xml:lang="en">Lensen, M. Zhang, B. Xue. Multi-Objective Genetic Programming for Manifold Learning: Balancing Quality and Dimensionality. Genet. Program. Evolvable Mach., 2020, vol. 21, no. 3, p. 399–431, DOI: 10.1007/s10710-020-09375-4.</mixed-citation></citation-alternatives></ref><ref id="cit22"><label>22</label><citation-alternatives><mixed-citation xml:lang="ru">M. Virgolin, T. Alderliesten, P. A. N. Bosman. On Explaining Machine Learning Models by Evolving Crucial and Compact Features. Swarm Evol. Comput., 2020, vol. 53, p. 100640, DOI: 10.1016/j.swevo.2019.100640.</mixed-citation><mixed-citation xml:lang="en">M. Virgolin, T. Alderliesten, P. A. N. Bosman. On Explaining Machine Learning Models by Evolving Crucial and Compact Features. Swarm Evol. Comput., 2020, vol. 53, p. 
100640, DOI: 10.1016/j.swevo.2019.100640.</mixed-citation></citation-alternatives></ref><ref id="cit23"><label>23</label><citation-alternatives><mixed-citation xml:lang="ru">Lensen, B. Xue, M. Zhang. Can Genetic Programming Do Manifold Learning Too? in Genetic Programming, Cham, 2019, p. 114–130. DOI: 10.1007/978-3-030-16670-0_8.</mixed-citation><mixed-citation xml:lang="en">Lensen, B. Xue, M. Zhang. Can Genetic Programming Do Manifold Learning Too? in Genetic Programming, Cham, 2019, p. 114–130. DOI: 10.1007/978-3-030-16670-0_8.</mixed-citation></citation-alternatives></ref><ref id="cit24"><label>24</label><citation-alternatives><mixed-citation xml:lang="ru">Q. Zhang, H. Li. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput., 2007, vol. 11, no. 6, p. 712–731, DOI: 10.1109/TEVC.2007.892759.</mixed-citation><mixed-citation xml:lang="en">Q. Zhang, H. Li. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput., 2007, vol. 11, no. 6, p. 712–731, DOI: 10.1109/TEVC.2007.892759.</mixed-citation></citation-alternatives></ref><ref id="cit25"><label>25</label><citation-alternatives><mixed-citation xml:lang="ru">S. Roweis, L. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 2000, vol. 290, p. 2323–2326, DOI: 10.1126/science.290.5500.2323.</mixed-citation><mixed-citation xml:lang="en">S. Roweis, L. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 2000, vol. 290, p. 2323–2326, DOI: 10.1126/science.290.5500.2323.</mixed-citation></citation-alternatives></ref><ref id="cit26"><label>26</label><citation-alternatives><mixed-citation xml:lang="ru">B. K. Tripathy, S. Anveshrithaa, S. Ghela. Multidimensional Scaling (MDS). 2021, p. 41–51. DOI: 10.1201/9781003190554-6.</mixed-citation><mixed-citation xml:lang="en">B. K. Tripathy, S. Anveshrithaa, S. Ghela. Multidimensional Scaling (MDS). 2021, p. 41–51. 
DOI: 10.1201/9781003190554-6.</mixed-citation></citation-alternatives></ref><ref id="cit27"><label>27</label><citation-alternatives><mixed-citation xml:lang="ru">L. McInnes, J. Healy, J. Melville. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv, Sep. 17, 2020. DOI: 10.48550/arXiv.1802.03426.</mixed-citation><mixed-citation xml:lang="en">L. McInnes, J. Healy, J. Melville. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv, Sep. 17, 2020. DOI: 10.48550/arXiv.1802.03426.</mixed-citation></citation-alternatives></ref><ref id="cit28"><label>28</label><citation-alternatives><mixed-citation xml:lang="ru">C. Cortes, V. Vapnik. Support-vector networks. Mach. Learn., 1995, vol. 20, p. 273–297.</mixed-citation><mixed-citation xml:lang="en">C. Cortes, V. Vapnik. Support-vector networks. Mach. Learn., 1995, vol. 20, p. 273–297.</mixed-citation></citation-alternatives></ref><ref id="cit29"><label>29</label><citation-alternatives><mixed-citation xml:lang="ru">Vanschoren, J. N. van Rijn, B. Bischl, L. Torgo. OpenML: networked science in machine learning. ACM SIGKDD Explor. Newsl., 2014, vol. 15, no. 2, p. 49–60, DOI: 10.1145/2641190.2641198.</mixed-citation><mixed-citation xml:lang="en">Vanschoren, J. N. van Rijn, B. Bischl, L. Torgo. OpenML: networked science in machine learning. ACM SIGKDD Explor. Newsl., 2014, vol. 15, no. 2, p. 49–60, DOI: 10.1145/2641190.2641198.</mixed-citation></citation-alternatives></ref><ref id="cit30"><label>30</label><citation-alternatives><mixed-citation xml:lang="ru">S. Moro, P. Cortez, R. Laureano. Using Data Mining for Bank Direct Marketing: An Application of the CRISP-DM Methodology. 2011.</mixed-citation><mixed-citation xml:lang="en">S. Moro, P. Cortez, R. Laureano. Using Data Mining for Bank Direct Marketing: An Application of the CRISP-DM Methodology. 
2011.</mixed-citation></citation-alternatives></ref><ref id="cit31"><label>31</label><citation-alternatives><mixed-citation xml:lang="ru">Yeh, K.-J. Yang, T.-M. Ting. Knowledge discovery on RFM model using Bernoulli sequence. Expert Syst. Appl., 2009, vol. 36, p. 5866–5871, DOI: 10.1016/j.eswa.2008.07.018.</mixed-citation><mixed-citation xml:lang="en">Yeh, K.-J. Yang, T.-M. Ting. Knowledge discovery on RFM model using Bernoulli sequence. Expert Syst. Appl., 2009, vol. 36, p. 5866–5871, DOI: 10.1016/j.eswa.2008.07.018.</mixed-citation></citation-alternatives></ref><ref id="cit32"><label>32</label><citation-alternatives><mixed-citation xml:lang="ru">Bennett, O. L. Mangasarian. Robust Linear Programming Discrimination Of Two Linearly Inseparable Sets. Optim. Methods Softw., 1992, vol. 1, DOI: 10.1080/10556789208805504.</mixed-citation><mixed-citation xml:lang="en">Bennett, O. L. Mangasarian. Robust Linear Programming Discrimination Of Two Linearly Inseparable Sets. Optim. Methods Softw., 1992, vol. 1, DOI: 10.1080/10556789208805504.</mixed-citation></citation-alternatives></ref><ref id="cit33"><label>33</label><citation-alternatives><mixed-citation xml:lang="ru">D. Dua, C. Graff. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences, 2019. [Online]. Available: http://archive.ics.uci.edu/ml</mixed-citation><mixed-citation xml:lang="en">D. Dua, C. Graff. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences, 2019. [Online]. Available: http://archive.ics.uci.edu/ml</mixed-citation></citation-alternatives></ref><ref id="cit34"><label>34</label><citation-alternatives><mixed-citation xml:lang="ru">V. Sigillito, S. Wing, L. Hutton, K. Baker. Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Tech. Dig. Appl. Phys. Lab., 1989, vol. 10.</mixed-citation><mixed-citation xml:lang="en">V. Sigillito, S. Wing, L. Hutton, K. Baker. 
Classification of radar returns from the ionosphere using neural networks. Johns Hopkins APL Tech. Dig. Appl. Phys. Lab., 1989, vol. 10.</mixed-citation></citation-alternatives></ref><ref id="cit35"><label>35</label><citation-alternatives><mixed-citation xml:lang="ru">Guyon, S. Gunn, A. Ben-Hur, G. Dror. Result Analysis of the NIPS 2003 Feature Selection Challenge, vol. 17. 2004.</mixed-citation><mixed-citation xml:lang="en">Guyon, S. Gunn, A. Ben-Hur, G. Dror. Result Analysis of the NIPS 2003 Feature Selection Challenge, vol. 17. 2004.</mixed-citation></citation-alternatives></ref><ref id="cit36"><label>36</label><citation-alternatives><mixed-citation xml:lang="ru">R. P. Gorman, T. Sejnowski. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Netw., 1988, vol. 1, p. 75–89, DOI: 10.1016/0893-6080(88)90023-8.</mixed-citation><mixed-citation xml:lang="en">R. P. Gorman, T. Sejnowski. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Netw., 1988, vol. 1, p. 75–89, DOI: 10.1016/0893-6080(88)90023-8.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The authors declare that there are no conflicts of interest present.</p></fn></fn-group></back></article>
