https://ejournal.gomit.id/ijaaiml/issue/feed
International Journal of Advances in Artificial Intelligence and Machine Learning
2026-04-14T14:06:27+07:00
Prof. Khang Wen Goh gohkhangwen@gmail.com
Open Journal Systems
<p>The International Journal of Advances in Artificial Intelligence and Machine Learning (IJAAIML) is a prominent academic journal dedicated to publishing cutting-edge research and developments in the fields of Artificial Intelligence (AI) and Machine Learning (ML). It serves as an essential platform for researchers, practitioners, and professionals worldwide to share innovative ideas, technologies, and empirical studies that advance AI and ML. The journal emphasizes both theoretical advancements and practical applications, showcasing how these technologies are shaping industries including healthcare, finance, education, robotics, and autonomous systems.</p>

https://ejournal.gomit.id/ijaaiml/article/view/433
Plant Disease Detection Using Image Processing and Machine Learning
2025-12-20T11:53:05+07:00
Ranga Swamy Sirisati sirisatiranga@gmail.com
J. Sravya sravyareddy373@gmail.com
D. Sruthi dusarisruthi23@gmail.com
A. Nandhu appamnandhu5@gmail.com
R. Navya Sree navyasreeracha0020@gmail.com
Dyah Ayu Irawati dyah.ayu.irawati@upnyk.ac.id
<p><strong>Background: </strong>Plant diseases continue to threaten agricultural productivity worldwide, causing significant reductions in crop yield and quality. Traditional visual inspection by farmers or experts is often slow, subjective, and unreliable, especially across large plantation areas. With the increasing availability of digital imaging technologies, automated detection through image processing and machine learning offers a promising alternative.<br /><strong>Aims: </strong>This study aims to develop an enhanced plant disease detection framework that combines image processing with machine learning algorithms, particularly Support Vector Machines (SVM) and Convolutional Neural Networks (CNN).<br /><strong>Methods: </strong>A dataset of 54,306 leaf images from the PlantVillage collection was used to train and test the models. Preprocessing steps included resizing, noise removal, background segmentation, and feature extraction. CNNs were trained for end-to-end classification, while the SVM operated on manually extracted features. A 10-fold cross-validation procedure was employed to ensure robustness. Fine-tuning strategies and comparative experiments were implemented to evaluate performance consistency across dataset variants.<br /><strong>Result: </strong>The system demonstrated strong capability in early disease detection, achieving 97% accuracy for healthy leaves but only moderate performance (56%) for certain diseased classes, owing to visual similarity between diseases and image noise. Background segmentation improved focus on disease features, while grayscale images reduced reliance on color cues but lowered classification accuracy.<br /><strong>Conclusion: </strong>The findings confirm that machine learning, particularly CNN-based models, can significantly enhance plant disease diagnosis and support timely agricultural decision-making. Future improvements will explore advanced deep learning architectures, expanded datasets, multimodal imaging, and IoT integration for real-time field deployment.</p>
2026-03-04T00:00:00+07:00
Copyright (c) 2026 Ranga Swamy Sirisati, J. Sravya, D. Sruthi, A. Nandhu, R. Navya Sree, Dyah Ayu Irawati

https://ejournal.gomit.id/ijaaiml/article/view/649
Knowledge Distillation for Enhancing Interpretability and Efficiency in Complex Machine Learning Models
2026-04-10T13:32:56+07:00
Jaesik Jeong 167030@o365.tku.edu.tw
Kit Ling Chan 226002@hksyu.edu.hk
Mageswaran Sanmugam mageswaran@usm.my
<p><strong>Background:</strong> Complex machine learning (ML) systems often require substantial computational resources, making them difficult to deploy in real-world environments constrained by hardware limitations, interpretability requirements, and regulatory standards. While knowledge distillation (KD) has traditionally been viewed as a model compression technique, its broader implications for efficiency, interpretability, and regulatory compliance remain underexplored.<br /><strong>Aims:</strong> This study aims to reconceptualize knowledge distillation beyond model compression by framing it as a dual strategy for enhancing both efficiency and interpretability. The paper proposes a structured distillation protocol that integrates predictive performance assessment, computational profiling, and feature attribution alignment within a unified experimental design.<br /><strong>Methods:</strong> The proposed distillation protocol employs a temperature-scaled objective function combining supervised cross-entropy loss and Kullback-Leibler divergence to facilitate relational knowledge transfer from teacher to student models. Experiments were conducted across multiple benchmark datasets. Evaluation consisted of three components: (1) predictive performance measurement, (2) computational efficiency profiling, including parameter counts and inference latency, and (3) interpretability analysis using feature attribution similarity and perturbation stability metrics.
Statistical analyses were performed to assess performance differences.<br /><strong>Result:</strong> Across benchmark datasets, distilled student models achieved teacher-level accuracy ranging between 95% and 98%. Parameter counts and inference latency were reduced by more than 60%. Interpretability analyses showed improved explanation consistency, smoother decision structures, and higher feature attribution alignment. Statistical testing confirmed that the efficiency and interpretability gains were obtained without significant performance degradation.<br /><strong>Conclusion:</strong> The findings support the reconceptualization of knowledge distillation as a dual optimization strategy that enhances both operational efficiency and interpretability while preserving predictive strength. Rather than serving solely as a compression mechanism, KD functions as a scalable and adaptive framework for deployment-ready AI systems that balance performance, computational constraints, and explanation stability.</p>
2026-03-30T00:00:00+07:00
Copyright (c) 2026 Jaesik Jeong, Kit Ling Chan, Mageswaran Sanmugam

https://ejournal.gomit.id/ijaaiml/article/view/652
Constraint-Aware Machine Learning for Ensuring Feasible Predictions in Operational Data Science
2026-04-10T13:32:38+07:00
Wu Shukun wuskctrl@outlook.com
Tri Basuki Kurniawan tribasukikurniawan@yahoo.com
Muhammet Esad Kuloğlu mekuloglu@gmail.com
<p><strong>Background:</strong> Machine learning models deployed in operational environments often demonstrate high predictive accuracy during benchmark evaluation. However, their practical reliability is frequently compromised when predictions violate domain-specific operational constraints.<br /><strong>Aims:</strong> This study aims to address the problem of infeasible predictions by proposing CALF, a unified framework that integrates operational constraints directly into the learning and inference processes.<br /><strong>Methods:</strong> The proposed CALF framework incorporates operational constraints through a dual mechanism consisting of correction-based learning and regularization-based penalty functions. These mechanisms are embedded directly within the training and inference objectives, allowing the model to learn constraint-compliant predictions during optimization. The framework was evaluated by comparing predictive error and operational feasibility against an unconstrained baseline model. Sensitivity analysis was also conducted to examine the stability and flexibility of the constraint penalties under varying operational thresholds.<br /><strong>Result:</strong> Experimental results demonstrate that CALF achieved predictive error levels comparable to the unconstrained baseline while maintaining full operational feasibility. The framework reached 100% operational compliance, indicating that all generated predictions satisfied the defined constraints. Sensitivity analysis further showed that the regularization penalties operated within acceptable thresholds, allowing the model to maintain predictive flexibility while enforcing constraint adherence.<br /><strong>Conclusion:</strong> The findings highlight the importance of integrating operational constraints directly into machine learning model design. By embedding feasibility constraints within the optimization process, the CALF framework ensures that predictive outputs remain both accurate and operationally compliant. This approach repositions operational constraints as intrinsic components of predictive modeling and contributes to the development of reliable and deployable AI systems in real-world environments.</p>
2026-03-30T00:00:00+07:00
Copyright (c) 2026 Wu Shukun, Tri Basuki Kurniawan, Muhammet Esad Kuloğlu

https://ejournal.gomit.id/ijaaiml/article/view/655
Bias Detection and Mitigation Techniques in Data Science Pipelines: An Empirical Evaluation
2026-04-13T15:48:34+07:00
Deshinta Arrova Dewi deshinta.ad@newinti.edu.my
Ugochi Okengwu ugochi.okengwu1@uniport.edu.ng
Zakka Ugih Rizqi zur@mp.aau.dk
<p><strong>Background: </strong>Failure to consider algorithmic bias can result in discriminatory outcomes in machine learning systems, particularly when these models operate in high-stakes decision-making environments. Although numerous bias mitigation techniques have been proposed, most studies treat fairness assessment as a post hoc evaluation. This gap highlights the need for a lifecycle-oriented framework to examine interconnected bias and fairness mechanisms.<br /><strong>Aims: </strong>This study aims to conduct an empirical investigation of bias propagation across the data science continuum within a structured bias-processing framework.<br /><strong>Methods: </strong>The proposed framework was tested on benchmark datasets containing sensitive attributes. Three predictive models were implemented: Logistic Regression, Random Forest, and Gradient Boosting. Fairness was evaluated using Demographic Parity, Equal Opportunity, and Average Odds metrics. Predictive modeling techniques were further employed to interpret fairness outcomes. Bias mitigation strategies were applied at both the data and model levels, including fairness-regularized optimization and hybrid approaches.
Sensitivity analysis was conducted to examine the trade-off between fairness constraints and model loss.<br /><strong>Result:</strong> The empirical findings indicate that most disparities originate from bias embedded in the data rather than from the model architecture. Data-level bias mitigation reduced disparity by 28%. The fairness-regularized optimization approach reduced disparity by 35%. The hybrid mitigation strategy achieved a demographic disparity reduction of 40–45%, with an accuracy decrease of no more than 2%. Sensitivity analysis revealed non-linear tensions between fairness constraints and optimization loss, demonstrating that early-stage bias mitigation stabilizes fairness without significantly increasing performance trade-offs.<br /><strong>Conclusion:</strong> This study extends both the theoretical and practical understanding of lifecycle bias propagation in machine learning systems. The findings emphasize the importance of addressing bias at early stages of the data science pipeline to achieve stable and sustainable fairness outcomes. By integrating fairness engineering throughout the lifecycle, the proposed framework contributes to more robust and ethically aligned AI systems.</p>
2026-03-31T00:00:00+07:00
Copyright (c) 2026 Deshinta Arrova Dewi, Ugochi Okengwu, Zakka Ugih Rizqi

https://ejournal.gomit.id/ijaaiml/article/view/656
Transfer Learning Effectiveness Across Domain Similarity Levels in Data Science Applications
2026-04-14T14:06:27+07:00
Eko Risdianto eko_risdianto@unib.ac.id
Thai Ky Trung Pham trungptk@gmail.com
William Yeoh william.yeoh@deakin.edu.au
Sultan Hammad Alshammari sh.alshammari@uoh.edu.sa
<p><strong>Background:</strong> Transfer learning has become increasingly prominent in data science due to the challenges posed by limited labeled data and distribution shifts between training and deployment environments. However, the success of transfer learning depends significantly on the structural compatibility between source and target domains.<br /><strong>Aims:</strong> This study aims to investigate the relationship between domain similarity and transfer learning performance using an experimental framework termed Similarity-Aware Transfer Evaluation (SATE).<br /><strong>Methods:</strong> Twelve pairs of benchmark datasets, made publicly available, were selected to simulate varying levels of domain similarity. Domain similarity was computed using Maximum Mean Discrepancy (MMD) in the learned representation space. Transfer performance was measured using a predefined Transfer Gain metric under bounded fine-tuning strategies. Correlation analysis and statistical testing were conducted to examine the relationship between similarity scores and transfer effectiveness, while fine-tuning depth was analyzed in relation to similarity magnitude.<br /><strong>Result:</strong> The results demonstrate a strong positive correlation between domain similarity and transfer gain (r = 0.83, p &lt; 0.01), indicating that approximately 69% of performance variability can be explained by similarity-based transfer effects. Negative transfer was observed when the similarity score fell to S ≤ 0.41. Furthermore, higher similarity levels were associated with deeper and more stable fine-tuning, whereas lower similarity resulted in increased instability during adaptation. These findings establish similarity as a structural compatibility constraint in transfer learning.<br /><strong>Conclusion:</strong> The study confirms that domain similarity plays a fundamental role in determining transfer learning success. By operationalizing similarity measurement and linking it to performance thresholds, the proposed SATE framework provides a structured method for evaluating transfer feasibility in real-world data science applications.</p>
2026-03-31T00:00:00+07:00
Copyright (c) 2026 Eko Risdianto, Thai Ky Trung Pham, William Yeoh, Sultan Hammad Alshammari
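The final abstract above measures domain similarity with Maximum Mean Discrepancy (MMD) in a learned representation space. As a rough illustration only (not the SATE authors' implementation), the sketch below computes a biased RBF-kernel MMD estimate between two samples; the kernel bandwidth `gamma` and the synthetic Gaussian "domains" are assumptions made for the example, and in practice the inputs would be learned feature representations rather than raw data.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b,
    # mapped through the RBF kernel exp(-gamma * ||x - y||^2).
    sq = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased estimator of squared MMD:
    # mean K(x,x) + mean K(y,y) - 2 * mean K(x,y).
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# Hypothetical source domain and two target domains at different
# "similarity levels" (small vs. large mean shift).
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 5))
near = rng.normal(0.1, 1.0, size=(200, 5))   # similar target domain
far = rng.normal(3.0, 1.0, size=(200, 5))    # dissimilar target domain

# A more similar domain yields a smaller discrepancy.
assert mmd2(src, near) < mmd2(src, far)
```

A score like this (or a similarity derived from it) is what the abstract's threshold S ≤ 0.41 for negative transfer would be stated in terms of; the exact normalization used by the study is not given here.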