Computer systems and information technologies https://csitjournal.khmnu.edu.ua/index.php/csit <div class="additional_content"> <p><strong>ISSN</strong> 2710-0766 (print)</p> <p><strong>ISSN</strong> 2710-0774 (online)</p> <p><strong>Published</strong> since 2020.</p> <p><strong>Publisher:</strong> <a title="Khmelnytskyi National University" href="https://www.khnu.km.ua" target="_blank" rel="noopener">Khmelnytskyi National University (Ukraine)</a></p> <p><strong>Frequency:</strong> 4 times a year</p> <p><strong>Manuscript language:</strong> English</p> <p><strong>Editors:</strong> <a href="http://ki.khnu.km.ua/team/govorushhenko-tetyana/" target="_blank" rel="noopener">T. Hovorushchenko (Ukraine, Khmelnytskyi)</a></p> <p><strong>Certificate of state registration of print media:</strong> Series КВ № 24512-14452Р (20.07.2020).</p> <p><strong>Registration in the Higher Attestation Commission of Ukraine:</strong> in progress</p> <p><strong>License terms:</strong> authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" target="_blank" rel="noopener">Creative Commons Attribution 4.0 International License (CC BY 4.0)</a> that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.</p> <p><strong>Open-access statement:</strong> the journal Computer Systems and Information Technologies provides immediate <a href="https://en.wikipedia.org/wiki/Open_access" target="_blank" rel="noopener">open access</a> to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge. Full-text access to the journal's scientific articles is provided on the official website in the <a href="https://csitjournal.khmnu.edu.ua/index.php/csit/issue/archive" target="_blank" rel="noopener">Archives</a> section.</p> <p><strong>Address:</strong> International scientific journal “Computer Systems and Information Technologies”, Khmelnytskyi National University, Institutskaia str.
11, Khmelnytskyi, 29016, Ukraine.</p> <p><strong>Tel.:</strong> +380951122544.</p> <p><strong>E-mail:</strong> <a href="mailto:csit.khnu@gmail.com">csit.khnu@gmail.com</a>.</p> <p><strong>Website:</strong> <a href="http://csitjournal.khmnu.edu.ua" target="_blank" rel="noopener">http://csitjournal.khmnu.edu.ua</a>.</p> </div> Khmelnytskyi National University en-US Computer systems and information technologies 2710-0766 CONSTRUCTION OF A MATHEMATICAL MODEL FOR FINDING A DANCE STUDIO IN THE FORM OF A LOGICAL NETWORK USING FINITE PREDICATE ALGEBRA https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/322 <p><em>The article is devoted to the research and implementation of methods and tools of finite predicate algebra for conducting a systematic analysis of the subject area, exemplified by the formalization of the task of finding a dance studio based on selected parameters. Specifically, the process of choosing a studio depends on a number of parameters: the type of subscription based on the number of sessions, the groups, the specific dance style, the professionalism of the instructor, the location and proximity to certain types of transport, and the price. The goal of the work is to increase the speed of knowledge processing in the task of finding the optimal subscription by decomposing the initial multi-parameter relationship into a composition of binary ones. The methodology is based on the tools and methods of finite predicate algebra. The application of predicate decomposition in the method of constructing logical networks enables parallel knowledge processing, thereby increasing query processing speed, while formalization through finite predicates provides universality in describing any subject area. Thus, the complex multi-parameter relationship was decomposed into a composition of binary relations, described in the language of predicate algebra, based on a detailed analysis of the subject area and further decompositions. The scientific novelty lies in the constructed mathematical model of the task of finding a dance studio, represented as a predicate depending on ten variables. This predicate is characterized by the composition of thirteen binary predicates, which are presented in the article as bipartite graphs and formulas of the corresponding predicates. The predicate of the model is a composition of all the constructed binary predicates. The practical significance is determined by the logical network built on the basis of the mathematical model, which allows transitioning from a "many-to-many" relationship to "one-to-one" relationships and parallelizing the information processing. The result of the work is the constructed logical network for the task of finding the optimal dance studio subscription based on specific input parameters, which facilitates the solution of synthesis, analysis, and comparison tasks.</em></p> Iryna VECHIRSKA Anna VECHIRSKA Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 6 14 10.31891/csit-2024-4-1 THE PERFORMANCE OF CONVOLUTIONAL NEURAL NETWORKS USING AN ACCELERATOR https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/344 <p><em>The effectiveness of convolutional neural networks (CNNs) has been demonstrated across various fields, including computer vision, natural language processing, medical imaging, and autonomous systems. However, achieving high performance in CNNs is not only a matter of model design but also of optimizing the training and inference processes.</em></p>
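<p><em>As a concrete reference point for the deployment path discussed in this abstract, the following is a minimal sketch of running a quantized TensorFlow Lite model on a Coral Edge TPU with the tflite_runtime package; the model file name and the zero-filled input are placeholders, not artifacts of this paper.</em></p>
<pre><code class="language-python">
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a model compiled for the Edge TPU and attach the Edge TPU delegate.
# "model_edgetpu.tflite" is a hypothetical path, not from the paper.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one uint8 tensor matching the model's expected input shape.
image = np.zeros(input_details[0]["shape"], dtype=np.uint8)  # placeholder input
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
</code></pre>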
<p><em>Using accelerators like the Google Coral TPU provides significant improvements in both computational efficiency and overall model performance. This paper focuses on the integration of the Coral TPU to enhance CNN performance by speeding up computations, reducing latency, and enabling real-time deployment.</em></p> <p><em>Training deep learning models, particularly CNNs, is computationally intensive. Traditional CPUs or GPUs can take hours or even days to train large networks on complex data. The accelerator offloads these intensive tasks, allowing the host machine to focus on other operations and making training more efficient. This enables researchers to experiment with multiple architectures and hyperparameters within shorter cycles, thereby improving the model's accuracy and robustness.</em></p> <p><em>CNNs are widely deployed in edge computing scenarios where real-time predictions are critical, such as in robotics, autonomous vehicles, and smart surveillance systems. Unlike traditional cloud-based solutions, where models are executed remotely and suffer from network delays, the Coral TPU ensures low-latency predictions directly on the device, making it ideal for time-sensitive applications.</em></p> <p><em>Another key advantage of using accelerators like the Coral TPU is the ability to efficiently handle optimized and lightweight models. These optimized models are well-suited to the Coral TPU’s architecture, allowing developers to deploy high-performing networks even on resource-constrained devices. The TPU’s ability to handle quantized models with minimal loss in accuracy further enhances the CNN’s practical usability across various domains.</em></p> <p><em>The Coral TPU is designed to minimize power consumption, making it an ideal solution for battery-powered or energy-constrained devices. This energy efficiency ensures that CNNs can run continuously on devices like drones, IoT sensors, or mobile platforms without exhausting their power supply. Additionally, the scalability of the TPU makes it easy to deploy multiple accelerators in parallel, further improving throughput for applications that require processing high volumes of data, such as real-time video analysis.</em></p> <p><em>The Coral TPU also facilitates on-device learning, where models can be incrementally updated based on new data without requiring a full retraining session. This feature is particularly useful in dynamic environments, such as autonomous vehicles or security systems, where the model needs to adapt quickly to new conditions. With the TPU handling the computational workload, CNNs can be fine-tuned on the device, ensuring they remain accurate and responsive over time.</em></p> Tymur ISAIEV Tetiana KYSIL Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 15 21 10.31891/csit-2024-4-2 QUALITY MODEL OF MEDIA SYSTEMS WITH INFOGRAPHIC DATA https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/331 <p><em>The study of the quality of development of media systems that incorporate large volumes of infographic data is a highly relevant task, as the increasing amount of information demands new approaches to its presentation that ensure rapid and efficient perception.
This paper is dedicated to analyzing the factors influencing the quality of media systems development and constructing a model of prioritized factor influence, which will serve as the foundation for further research into predictive quality evaluation.</em></p> <p><em>The article employs graph theory tools and systems analysis methods, specifically the mathematical hierarchy modeling method. Based on expert evaluation, a set of factors influencing the quality of media systems development has been identified, including the target audience, content, interactivity, layout, prototype, typography, and data visualization. The influences and dependencies between these factors have been visualized using a directed graph. The priorities of the factors were determined through the method of mathematical hierarchy modeling, which involves the formation of a binary factor reachability matrix and the construction of iterative tables. These iterative tables contain information on the ordinal number of the factor in the set, the subset of reachable vertices, the subset of predecessor vertices, and the intersection of the subsets. It was found that the highest rank belongs to the factors “target audience” and “content”, while the lowest rank was assigned to the “typography” factor. Based on the data obtained during the iteration process, a model of prioritized factor influence on the quality of development of media systems with infographic data was synthesized.</em></p> <p><em>The constructed model will assist in more effectively allocating resources, such as time and funds, across the key stages of media systems creation. Additionally, it will help minimize risks associated with the product’s mismatch with the target audience's needs, thereby reducing additional costs in the development process.</em></p> Alona KUDRIASHOVA Taras OLIYARNYK Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 22 27 10.31891/csit-2024-4-3 THE CONCEPT OF AI-BASED INFORMATION SYSTEMS FOR THE ANALYSIS OF LEARNING FOREIGN WORDS https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/342 <p><em>In the modern world, information systems based on artificial intelligence (AI) are increasingly used to automate learning and improve the educational process. One of the promising areas of AI application is foreign language learning, particularly vocabulary acquisition. By integrating AI components, specifically those utilizing machine learning algorithms to analyze large volumes of data and provide automated recommendations to enhance the learning process, users gain constant access to self-assessment tools and automatic adjustment of cognitive workload. This paper examines the key role and significance of information systems for analyzing foreign language vocabulary acquisition with the help of AI. It investigates the working principles of such systems, their advantages, and various strategies used to enhance the efficiency of language learning, aiming for optimal results in acquiring new linguistic knowledge and improving learning outcomes. Learning new foreign terms is often a challenging task for many students, leading to a loss of motivation or slow progress, highlighting the urgent need for solutions that enhance material retention. By adapting to individual users, AI-based information systems have given rise to a range of services and platforms for language learning worldwide.</em></p>
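<p><em>As a hedged illustration of the per-user indicators such systems maintain (a hypothetical recall-rate and review-interval update rule, not the implementation of any particular platform):</em></p>
<pre><code class="language-python">
from dataclasses import dataclass

@dataclass
class WordStats:
    attempts: int = 0
    correct: int = 0
    interval_days: float = 1.0  # current spacing between reviews

def update_after_review(stats: WordStats, was_correct: bool) -> WordStats:
    """Update one word's indicators after a review session.

    Hypothetical rule: widen the review interval on success, reset it on
    failure, and keep a running recall rate as the success metric.
    """
    stats.attempts += 1
    if was_correct:
        stats.correct += 1
        stats.interval_days *= 2.0   # ease off on well-known words
    else:
        stats.interval_days = 1.0    # review forgotten words sooner
    return stats

def recall_rate(stats: WordStats) -> float:
    return stats.correct / stats.attempts if stats.attempts else 0.0
</code></pre>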
<p><em>These systems function by analyzing user behavior and success based on specific indicators and metrics, whose numerical values are interpreted to identify patterns and correlations between user behavior and its impact on learning within the system. The advantages of AI-based information systems for language learning are significant: they offer an objective, reliable method for assessing learning achievements, eliminating the need for human intervention in many cases. Data collected by these systems serve as a valuable resource for analyzing user productivity, detecting common mistakes, creating effective study plans, and more. However, it is important to note that AI has not yet reached the level of understanding semantics or the cultural and historical nuances of certain words, which complicates the implementation of more comprehensive functionality for evaluating and adjusting the learning process. This requires developers to prepare additional data through proprietary sources or to gain useful input from user interactions with the system.</em></p> Olga PAVLOVA Artur KOZYRA Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 28 36 10.31891/csit-2024-4-4 METHOD OF CREATING CUSTOM DATASET TO TRAIN CONVOLUTIONAL NEURAL NETWORK https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/343 <p><em>The task of creating and developing custom datasets for training convolutional neural networks (CNNs) is essential due to the increasing adoption of deep learning across industries. CNNs have become fundamental tools for various applications, including computer vision, natural language processing, medical imaging, and autonomous systems. However, the success of a CNN depends heavily on the quality and relevance of the data it is trained on. The datasets used to train these models must be diverse, representative of the task at hand, and of sufficient quality to capture the underlying patterns that the CNN needs to learn. Thus, building custom datasets that align with the specific objectives of a neural network plays a critical role in enhancing the performance and generalization capability of the trained model.</em></p> <p><em>This paper focuses on developing a method and subsystem for generating high-quality custom datasets tailored to CNNs. The aim is to provide a framework that automates and streamlines the processes involved in data collection, preprocessing, augmentation, annotation, and validation. Moreover, the method integrates tools that allow the dataset to evolve over time, incorporating new data to adapt to changing requirements or environments, making the system flexible and scalable.</em></p> <p><em>The process of creating a dataset begins with the acquisition of raw data. The data can come from various sources such as images from cameras, videos, sensor feeds, open data repositories, or proprietary datasets. A key consideration during data collection is ensuring that the samples cover the full range of conditions or classes the CNN will encounter in production. For example, in an object recognition task, it is essential to collect images from diverse environments, lighting conditions, and angles to train the model effectively. Ensuring variability in the dataset increases the model's ability to generalize, reducing the risk of poor performance on unseen data.</em></p> <p><em>Data augmentation is a critical step in building a robust dataset, particularly when the size of the dataset is limited.</em></p>
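<p><em>A minimal Keras-style sketch of the augmentation stage described next, assuming TensorFlow; the chosen transforms and their magnitudes are illustrative assumptions, not the subsystem developed in the paper.</em></p>
<pre><code class="language-python">
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative augmentation stack; transforms and magnitudes are assumptions.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),   # rotate by up to ±10% of a full turn
    layers.RandomZoom(0.2),
    layers.RandomContrast(0.2),
])

def prepare(dataset: tf.data.Dataset) -> tf.data.Dataset:
    """Apply augmentation on the fly while building a training pipeline."""
    return dataset.map(
        lambda image, label: (augment(image, training=True), label),
        num_parallel_calls=tf.data.AUTOTUNE,
    ).prefetch(tf.data.AUTOTUNE)
</code></pre>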
<p><em>Augmentation techniques introduce variability into the dataset by artificially modifying the existing samples, thereby simulating a wider range of conditions. This helps the CNN generalize better and prevents overfitting. In essence, it allows the model to experience different perspectives and distortions of the same data, strengthening its adaptability to real-world scenarios.</em></p> <p><em>Annotation involves labeling the data samples with the correct class or category information. Depending on the task, annotations may include bounding boxes for object detection, segmentation masks for semantic segmentation, or class labels for classification tasks. The importance of well-annotated data cannot be overstated, as CNNs rely on this labeled information to understand the relationships between input data and the desired output predictions.</em></p> <p><em>A balanced dataset is crucial for achieving good performance in CNN models. If one class or condition is overrepresented, the model may become biased toward that class, resulting in poor performance when encountering other classes.</em></p> Tymur ISAIEV Tetiana KYSIL Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 37 44 10.31891/csit-2024-4-5 A MODEL OF AN ENHANCED COMPUTER GAME SERVER IN MULTIPLAYER ENVIRONMENTS https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/345 <p><em>The rapid evolution of multiplayer gaming has led to increasingly complex virtual environments that require precise, synchronized movement mechanics to remain competitive. One of the main challenges of real-time interaction with large numbers of users in a multiplayer environment is the effect of network delays on character movements. In addition to a constant component, network delays have a variable component that is random and can differ across network segments as the server interacts with different clients. The article examines the operation of a computer game server and proposes a model of advanced character movement control for multiplayer environments that provides smooth transitions between animation states through the concept of client-side prediction. The model is based on the state transition diagram of the server and describes its operation during a multiplayer game. To analyze the processes implemented by the server, we defined five of its states: the listening state, the packet delay check state, the mobility check state, the client data update state, and the preauthorization data update state. The object of modeling is a random process characterized by discrete states and continuous time, whose model is presented as a system of differential equations. The results of solving this system of equations are analytical expressions for estimating the probabilities of a computer game server being in each of the possible states, depending on the intensities of transitions between states. The presented mathematical apparatus describes the influence of incoming requests of different intensities on maintaining the necessary quality of system operation. The resulting formulas can be used for further analysis of the server's operation in various scenarios.</em></p>
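<p><em>For reference, such a system is conventionally written as the Kolmogorov equations of a continuous-time Markov chain; for the five server states with occupancy probabilities p_i(t) and transition intensities λ_ij, a generic form (the paper's exact coefficients are not reproduced here) is:</em></p>
<pre><code class="language-latex">
\frac{dp_i(t)}{dt} = \sum_{j \ne i} \lambda_{ji}\, p_j(t) - p_i(t) \sum_{j \ne i} \lambda_{ij},
\qquad i = 1, \dots, 5,
\qquad \sum_{i=1}^{5} p_i(t) = 1 .
</code></pre>
<p><em>Setting the derivatives to zero and solving the resulting linear system together with the normalization condition yields the stationary state probabilities as functions of the intensities λ_ij.</em></p>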
<p><em>Based on them, recommendations for improving data exchange algorithms in the system can be developed.</em></p> Kvitoslava OBELOVSKA Artur HRYTSIV Oleh LISKEVYCH Andriy ABZYATOV Rostyslav LISKEVYCH Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 45 50 10.31891/csit-2024-4-6 INFORMATION SYSTEM FOR EARTH’S SURFACE TEMPERATURE FORECASTING USING MACHINE LEARNING TECHNOLOGIES https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/340 <p><em>Temperature forecasting is a topical issue in many areas of human life. In particular, climate change directly affects agriculture, energy, infrastructure, health care, logistics, and tourism. Anticipating future changes makes it possible to prepare better for challenges and to minimize risks. The paper presents an information system for forecasting the temperature of the Earth’s surface using machine learning technologies. The forecast is formed by a model adapted to the region by learning from historical data and tracking the most characteristic patterns. The selection and training of the model were carried out on the basis of an analysis of the characteristics of climatic zones according to the Köppen classification. A comparison of the performance of models for forecasting the average monthly temperatures of the Earth’s surface in different climatic zones was carried out.</em></p> <p><em>The analysis of scientific publications confirmed the relevance of the chosen research topic. Modern approaches to forecasting climatic indicators are considered. Methods and approaches to temperature forecasting, along with their advantages and disadvantages, are analyzed.</em></p> <p><em>The peculiarities of applying machine learning methods to temperature forecasting are considered, and the criteria for choosing the most accurate and least energy-consuming methods are determined. The research results made it possible to identify machine learning methods that best adapt to temperature patterns and allow accurate short-term forecasting. An approach to long-term forecasting using recurrent neural networks is proposed.</em></p> <p><em>An information system has been developed for forecasting future temperatures depending on the climatic features of the studied territories based on the proposed methods. A concept for further research on the development and improvement of the information system has been formed.</em></p> Tetiana HOVORUSHCHENKO Vitalii ALEKSEIKO Valeriia SHVAIKO Juliia ILCHYSHYNA Andrii KUZMIN Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 51 58 10.31891/csit-2024-4-7 CONCATENATION OF EFFICIENTNETB7 AND RESNET50 MODELS IN THE TASK OF CLASSIFYING OPHTHALMOLOGICAL DISEASES OF DIABETIC ORIGIN https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/341 <p><em>Diagnosing diabetic eye diseases with medical equipment requires significant physician resources, so it is advisable to use automated tools. Using combinations of models improves classification accuracy.</em></p> <p><em>The features of the architectures of the convolutional neural networks EfficientNetB7 and ResNet50 are presented. The creation of a neural network model by concatenating the EfficientNetB7 and ResNet50 models is justified. Transfer learning is applied. A GlobalAveragePooling2D layer is added to each model. The models are combined using a Concatenate layer. A Flatten layer is added to the resulting model to convert the vector into a one-dimensional array.</em></p>
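<p><em>A minimal Keras functional-API sketch of the architecture described in this abstract, including the classification head detailed below; the input resolution, dropout rates, regularization strength, and freezing of the backbone weights are assumptions, while the layer sequence follows the text.</em></p>
<pre><code class="language-python">
from tensorflow.keras import Input, Model, applications, layers, regularizers

inputs = Input(shape=(224, 224, 3))  # input resolution is an assumption

# Transfer learning: both backbones start from ImageNet weights.
eff = applications.EfficientNetB7(include_top=False, weights="imagenet", input_tensor=inputs)
res = applications.ResNet50(include_top=False, weights="imagenet", input_tensor=inputs)
eff.trainable = False  # freezing the backbones is a common choice, assumed here
res.trainable = False

x1 = layers.GlobalAveragePooling2D()(eff.output)
x2 = layers.GlobalAveragePooling2D()(res.output)
x = layers.Concatenate()([x1, x2])
x = layers.Flatten()(x)  # already one-dimensional; kept to mirror the described pipeline

reg = regularizers.l2(1e-4)  # regularization strength is an assumption
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation="relu", kernel_regularizer=reg)(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(256, activation="relu", kernel_regularizer=reg)(x)
outputs = layers.Dense(4, activation="softmax", kernel_regularizer=reg)(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
</code></pre>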
<p><em>Two Dropout layers are added to prevent overfitting. Two Dense layers with 512 and 256 neurons and the ReLU activation function are added for nonlinear data transformation and abstract feature extraction. A Dense layer with 4 neurons and the softmax activation function is added to determine the image class. L2 regularization is used in all Dense layers. The developed neural network model was applied to process a dataset of 4 classes: cataract images, diabetic retinopathy images, glaucoma images, and healthy retina images. The model is compiled using the Adam optimizer and the categorical cross-entropy loss function.</em></p> <p><em>The callback functions ModelCheckpoint, LearningRateScheduler, EarlyStopping, and ReduceLROnPlateau are used to control training and adjust the learning rate. The validation accuracy of the model is improved by augmentation (horizontal and vertical flipping), L2 regularization, Dropout, and tuning of the callback functions. The training lasted 30 epochs.</em></p> <p><em>The best validation accuracy of 97.39% was achieved at the 29th epoch. The best value of the validation loss, 0.4323, was achieved at the 30th epoch. The proposed neural network model outperforms the accuracy indicators of models proposed in similar studies. The model can be applied to disease detection and classification tasks.</em></p> Dmitro PROCHUKHAN Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 59 67 10.31891/csit-2024-4-8 WEB SERVICE MANAGEMENT SYSTEM FOR PREDICTING REAL ESTATE PRICES USING MACHINE LEARNING TECHNIQUES https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/338 <p><em>Today there are many different web services for renting real estate, but none of them provides price forecasting capabilities. There is a need to create a platform that allows users to receive accurate real estate price forecasts with minimal time expenditure. The aim of this paper is to develop the architecture of a web service for real estate price forecasting that considers various apartment characteristics. We have prepared a review and analysis of existing real estate rental web services, along with functional and non-functional requirements for a web service for apartment price forecasting. The high-level architecture and technical tasks for the participants of our web service were also developed and described in our research.</em></p> <p><em>The paper proposes the development of a web service that predicts real estate prices based on various property characteristics. The key objectives are: to analyze existing real estate rental web services and identify functional gaps, particularly the lack of price prediction capabilities; to establish technical requirements for a comprehensive web service that unifies tenants, landlords, and administrators to facilitate informed decision-making; to utilize machine learning techniques, such as linear regression, random forest, and decision trees, to develop a price forecasting module within the web service; and to evaluate the performance of different machine learning models using the RMSE metric.</em></p> <p><em>The paper presents the high-level architecture of the web service, including modules for user registration, data validation, apartment search and interaction, and price forecasting. The experimental results demonstrate that the random forest model outperforms linear regression and decision trees in predicting apartment rental prices in Kyiv.</em></p>
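<p><em>A hedged sketch of the model comparison described here, assuming scikit-learn; the synthetic features stand in for the paper's Kyiv listing data, which is not reproduced:</em></p>
<pre><code class="language-python">
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# X: apartment features (area, rooms, district code, ...); y: rental price.
# Random placeholder data stands in for the actual listings dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
</code></pre>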
<p><em>Overall, the study highlights the potential of integrating machine learning into real estate web services to enhance transparency and informed decision-making for both tenants and landlords.</em></p> Vitaliy Kobets Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 68 77 10.31891/csit-2024-4-9 DECISION-MAKING SUPPORT SYSTEM REGARDING THE OPTIMIZATION PROCESS OF CROP CULTIVATION USING REMOTE SENSING DATA https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/335 <p><em>This paper explores the development of an information system to support decision-making in agriculture, specifically focusing on optimizing crop production. This system leverages the power of remote sensing (RS) data, which offers valuable insights into crop health and environmental conditions from a bird's-eye view. By analyzing this data, the proposed system empowers farmers with the information they need to make informed choices throughout the agricultural season, ultimately leading to increased yields and improved resource management.</em></p> <p><em>Following the introduction, the paper delves into the core functionalities of the information system. It details the process of acquiring RS data from various platforms, such as the Landsat, Sentinel-2, or PlanetScope satellites. Here, the discussion emphasizes the importance of selecting data with appropriate spatial and temporal resolution to capture the most relevant details for specific agricultural applications. Pre-processing techniques for handling the raw RS data are then discussed, outlining methods for removing noise and errors to ensure the accuracy of subsequent analyses. The paper then details the implementation of various algorithms for data analysis. These algorithms extract meaningful features from the pre-processed RS data, such as vegetation indices that provide insights into plant health and biomass, or other indicators that can detect potential crop stress due to nutrient deficiencies or water scarcity.</em></p> Dmytro OKRUSHKO Olga PAVLOVA Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 78 91 10.31891/csit-2024-4-10 COMPARATIVE ANALYSIS OF REAL-TIME SEMANTIC SEGMENTATION ALGORITHMS https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/347 <p><em>Semantic segmentation is a fundamental task in computer vision that enables machines to interpret and understand images at the pixel level, providing a deeper understanding of scene composition. By assigning a class to each pixel, this technique is critical for applications requiring detailed visual comprehension, such as autonomous driving, robotics, medical imaging, and augmented reality. This article presents a comprehensive comparative analysis of deep learning models specifically designed for real-time semantic segmentation, focusing on their performance metrics, architectures, and various application contexts. This study compares advanced deep learning models, including PIDNet, PP-LiteSeg, BiSeNet, SFNet, and others, using key metrics such as Mean Intersection over Union (mIoU) and Frames Per Second (FPS), alongside the hardware specifications on which they were tested. PIDNet, for example, is known for its multi-branch architecture, which emphasizes detailed, context, and boundary information to improve segmentation precision without sacrificing speed.</em></p>
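<p><em>Since mIoU is the headline accuracy metric throughout this comparison, a minimal sketch of how it is conventionally computed from a confusion matrix (NumPy only; the class count is illustrative):</em></p>
<pre><code class="language-python">
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union from two flattened integer label maps."""
    # Confusion matrix: rows are reference classes, columns are predictions.
    conf = np.bincount(
        num_classes * target.ravel() + pred.ravel(),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # guard against absent classes
    return float(iou.mean())
</code></pre>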
<p><em>PP-LiteSeg, on the other hand, with its Short-Term Dense Concatenate Network (STDCNet) backbone, excels in reducing computational complexity while maintaining competitive accuracy and inference speed, making it well-suited for resource-constrained environments. The analysis evaluates the trade-offs between accuracy and computational efficiency using benchmark datasets such as Cityscapes and DeepScene. Additionally, we examine the adaptability of these models to diverse operational scenarios, particularly on edge devices like the NVIDIA Jetson Nano, where computational resources are limited. This discussion extends to the challenges faced in real-time implementations, including maintaining robustness across varying environments and achieving high performance with minimal latency. Highlighting the strengths, limitations, and practical implications of these models, this analysis can serve as a valuable resource for researchers and practitioners aiming to advance the field of real-time semantic segmentation.</em></p> Markijan DURKOT Nataliia MELNYK Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 92 97 10.31891/csit-2024-4-11 TOWARDS MULTI-AGENT PLATFORM DEVELOPMENT https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/350 <p><em>This paper focuses on the design and evaluation of a FIPA-standard-compliant multi-agent platform. The relevance of the topic is due to the growing need for flexible, reliable, and efficient software solutions capable of solving complex intelligent problems in distributed environments. The study is dedicated to the problem of developing and evaluating an agent platform using the Kotlin programming language. The main goal of this work is to design and implement a modular, scalable, and adaptive agent platform. The existing frameworks for the development of multi-agent systems are reviewed, the key components of such systems are highlighted, and the advantages of using Kotlin in the context of a multi-agent architecture are discussed. The scientific contribution of the paper is the creation of a modern FIPA-compliant multi-agent platform that exploits the advantages of the Kotlin language. The performance and resource intensity of the developed system are analyzed, and the platform's compliance with FIPA standards and its interoperability are evaluated. Two different metrics are used to ensure the quality of the system. The first metric is the percentage of covered code, measured using the Kover library: we achieved 71.4% coverage of classes and 57.1% coverage of instructions. Further coverage is complicated by the use of multi-threaded technologies. The second metric is the number of issues reported by the SonarLint static-analysis tool: during development, 16 issues were identified and fixed. This allows us to achieve a high level of code quality and to maintain it in the future.
The study demonstrates the potential of integrating modern language capabilities with the multi-agent paradigm, opening new perspectives for the development of efficient and scalable solutions in the area of distributed intelligent systems.</em></p> Oleksandr KARATAIEV Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 98 106 10.31891/csit-2024-4-12 DECISION-MAKING MODELS FOR THE COMPLEX ECO-ENERGY-ECONOMIC MONITORING SYSTEM https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/339 <p><em>This article presents the development and implementation of a decision support system in a web application for complex eco-energy-economic monitoring. The study focuses on the analysis and development of decision-making methods, including decision tables, decision trees, and expert systems, which ensure efficient and accurate problem-solving in complex environments. Decision tables are used to systematically analyze possible alternatives and select the best option based on specified criteria. Experts may use different criteria, including the Wald, Bayes-Laplace, Savage, or Hurwicz criteria, to assess risks, averages, and potential losses, or to integrate pessimistic and optimistic approaches. Decision trees provide a convenient way to model decision sequences and visualise scenarios. This method assesses the risks of each option, facilitating informed decision-making through analysis of potential future scenarios. The expert system is designed to accumulate and use knowledge in the form of rules containing conditions and actions. Knowledge engineers, working on the basis of expert experience, create a knowledge base that can be used to solve similar problems in the future. The developed decision support system allows experts to send their proposals to an analyst, who analyses the data in depth, assesses the available alternatives and formulates action programmes. The integration of decision tables, decision trees and expert systems into a single platform ensures high speed, accuracy and balanced decision making, which is essential for monitoring tasks. The system has significant practical value, providing analysts with tools for comprehensive data analysis, process optimisation and development of action strategies. Its implementation helps to improve the efficiency of management, particularly in the areas of the environment, energy and the economy, which is important for ensuring sustainable development and improving the health and quality of life of the population.</em></p> Volodymyr SLIPCHENKO Liubov POLIAHUSHKO Dmytro KRAVCHUK Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 107 115 10.31891/csit-2024-4-13 USING FACIAL EXPRESSIONS FOR CUSTOM ACTIONS: DEVELOPMENT AND EVALUATION OF A HANDS-FREE INTERACTION METHOD https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/346 <p><em>This study explores a novel facial expression-based interaction method designed to provide an accessible and hands-free alternative for performing precision tasks. Traditional input devices, such as keyboards and mice, are often unsuitable for individuals with limited mobility or for hands-free environments. The proposed system leverages standard computer hardware and machine learning-based facial landmark detection to map customizable facial expressions to specific actions, making it a low-cost and adaptable solution.</em></p>
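<p><em>To make the expression-to-action mapping concrete, a hedged sketch using MediaPipe Face Mesh landmarks; the landmark indices, threshold, and triggered action are illustrative assumptions, not the study's actual configuration.</em></p>
<pre><code class="language-python">
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)

MOUTH_TOP, MOUTH_BOTTOM = 13, 14  # inner-lip landmarks (assumed indices)
OPEN_THRESHOLD = 0.03             # normalized distance; would be tuned per user

def mouth_open(landmarks) -> bool:
    top, bottom = landmarks[MOUTH_TOP], landmarks[MOUTH_BOTTOM]
    return abs(bottom.y - top.y) > OPEN_THRESHOLD

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        if mouth_open(lm):
            print("trigger: select")  # stand-in for a user-assigned action
cap.release()
</code></pre>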
<p><em>This study evaluates the usability, learnability, and potential applications of this interaction method through a task-based user study. Sixteen participants aged 19–34 completed a series of five trials, performing the same color- and number-matching task on an interactive grid. This approach allowed the evaluation of the learning curve by analyzing how participants improved their skills and reduced task completion times with each subsequent trial. Participants also provided feedback on challenging facial expressions to identify usability challenges. The evaluation focused on task completion time, participant-reported challenging actions, and qualitative feedback to assess system usability, user adaptability, and potential applications. The results indicate a clear learning curve, with participants improving task completion times over repeated trials. Feedback highlighted the potential of this interaction method for assistive technologies, gamified facial exercises, and as a supplementary input tool, while also identifying challenges such as facial fatigue and action complexity. The findings demonstrate the system's promise as an accessible and adaptable alternative interaction method, with opportunities for future refinement and broader application.</em></p> Serhii ZELINSKYI Yuriy BOYKO Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 116 125 10.31891/csit-2024-4-14 APPLICATION OF SIMD-INSTRUCTIONS TO INCREASE THE EFFICIENCY OF NUMERICAL METHODS FOR SOLVING SLAE https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/333 <p><em>Computational efficiency has become a key factor in progress across many fields of science and technology. However, traditional methods for improving the performance of computational systems have reached their limits, necessitating the search for new approaches to algorithm optimization. This paper explores the application of SIMD instructions to enhance the efficiency of numerical methods for solving systems of linear algebraic equations (SLAE), particularly the Gauss method and the conjugate gradient method. The proposed approach enables the vectorization of computations, significantly reducing the number of iterative steps and accelerating algorithm execution. An optimization mechanism is presented, based on an analysis of the capabilities of SIMD instructions and their integration into existing SLAE-solving algorithms. The research includes an examination of the impact of vectorization on the performance and stability of numerical algorithms for problems of varying size, as well as a theoretical justification of the proposed approach’s effectiveness. The outcome of this work is the development of optimized versions of the Gauss and conjugate gradient methods, which demonstrate significant performance gains without loss of calculation accuracy.</em></p>
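<p><em>The paper targets CPU SIMD instructions directly; as a language-neutral analogue of the same idea, the sketch below vectorizes the row-elimination inner loop of the Gauss method with NumPy, replacing the scalar loop whose iterations SIMD lanes would otherwise process element by element. This illustrates the vectorization principle only, not the authors' optimized implementation.</em></p>
<pre><code class="language-python">
import numpy as np

def gauss_solve(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Gaussian elimination with partial pivoting; the update of all
    remaining rows is one vectorized operation instead of nested loops."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))  # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        factors = A[k + 1:, k] / A[k, k]
        A[k + 1:, k:] -= np.outer(factors, A[k, k:])  # vectorized row update
        b[k + 1:] -= factors * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
</code></pre>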
<p><em>The proposed approach opens new perspectives for the further development and improvement of numerical methods in the context of modern computing architectures, with broad applicability in engineering calculations, computer graphics, machine learning, and other fields where computational efficiency is a high priority.</em></p> Oleg ZHULKOVSKYI Inna ZHULKOVSKA Hlib VOKHMIANIN Alexander FIRSOV Illia TYKHONENKO Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 126 133 10.31891/csit-2024-4-15 COMPARATIVE ANALYSIS OF CLASSIFICATION METHODS FOR HIGH-RESOLUTION OPTICAL SATELLITE IMAGES https://csitjournal.khmnu.edu.ua/index.php/csit/article/view/349 <p><em>High-resolution satellite image classification is used in various applications, such as urban planning, environmental monitoring, disaster management, and agricultural assessment. Traditional classification methods are ineffective due to the complex characteristics of high-resolution multichannel images: the presence of shadows, complex textures, and overlapping objects. This necessitates selecting an efficient classification method for further thematic data analysis. In this study, a comprehensive assessment of the accuracy of the most well-known classification methods (parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood, spectral angle mapper, spectral information divergence, binary encoding, neural network, decision tree, random forest, support vector machine, K-nearest neighbour, and spectral correlation mapper) is performed. This study comprehensively evaluates various classification algorithms applied to high-resolution satellite imagery, focusing on their accuracy and suitability for different use cases. To ensure the robustness of the evaluation, high-quality WorldView-3 satellite imagery, known for its exceptional spatial and spectral resolution, was utilized as the dataset. To assess the performance of these methods, error matrices were generated for each algorithm, providing detailed insights into their classification accuracy. The average values along the main diagonal of these matrices, representing the proportion of correctly classified pixels, served as a key metric for evaluating overall effectiveness. Results indicate that advanced machine learning approaches, such as neural networks and support vector machines, consistently outperform traditional techniques, achieving superior accuracy across various classes. Despite their high average accuracy, a deeper analysis revealed that no single algorithm is universally optimal. For instance, some methods, such as random forests or spectral angle mappers, exhibited strength in classifying specific features like vegetation or urban structures but performed less effectively for others. This underscores the importance of tailoring algorithm selection to the specific objectives of individual classification tasks and the unique characteristics of the target datasets. This study can be used to select the most effective method of classifying the Earth's surface, depending on the tasks of further thematic analysis of high-resolution satellite imagery.</em></p>
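<p><em>A minimal sketch of the error-matrix evaluation used in this comparison, assuming scikit-learn; the reference labels and classifier output below are random placeholders for the per-pixel classes of a WorldView-3 scene.</em></p>
<pre><code class="language-python">
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder per-pixel reference labels and classifier output (5 classes).
rng = np.random.default_rng(0)
reference = rng.integers(0, 5, size=10_000)
predicted = np.where(rng.random(10_000) < 0.9,        # ~90% agreement
                     reference,
                     rng.integers(0, 5, size=10_000))

conf = confusion_matrix(reference, predicted)
per_class_accuracy = np.diag(conf) / conf.sum(axis=1)  # main-diagonal proportions
average_accuracy = per_class_accuracy.mean()           # the metric described above
print(conf, average_accuracy, sep="\n")
</code></pre>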
<p><em>Furthermore, it highlights the potential of integrating machine learning-based approaches to enhance the accuracy and reliability of classification outcomes, ultimately contributing to more effective practical applications.</em></p> Volodymyr HNATUSHENKO Vita KASHTAN Denys CHUMYCHOV Serhii NIKULIN Copyright (c) 2025 Computer systems and information technologies 2024-12-26 2024-12-26 4 134 142 10.31891/csit-2024-4-16