Image-Based Implementation of the MobileNetV2 Architecture for Detecting Dropsy and Popeye Diseases in Betta Fish
Abstract
Identifying diseases in betta fish from visual symptoms remains challenging, particularly for beginners who lack experience in recognizing disease characteristics. This study implements an image-based MobileNetV2 architecture as a diagnostic support system for detecting dropsy and popeye in betta fish that already exhibit visual symptoms. The dataset consists of 600 betta fish images collected from the internet and divided into three classes of 200 images each: healthy, dropsy, and popeye. Preprocessing comprised image-ratio adjustment, normalization, and data augmentation to increase data variability. A transfer-learning approach was applied by freezing most layers of the MobileNetV2 feature extractor and fine-tuning several of the final layers. The model was evaluated with 5-Fold Cross Validation to ensure experimental stability and reproducibility, and the best model from each fold was then combined in an ensemble based on average probability to improve prediction performance on the test dataset. Experimental results show that the average 5-Fold Cross Validation accuracy reached 74.71% with a standard deviation of ±4.57%, and a Macro-F1 score of 74.43%. The ensemble approach produced a test accuracy of 85.56% with balanced classification performance across all classes. Grad-CAM visualizations indicate that the model focuses on image regions relevant to the disease symptoms. These findings demonstrate that the MobileNetV2 architecture is effective as an image-based diagnostic support tool for betta fish diseases.
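The average-probability (soft-voting) ensemble described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names match the study's three classes, but the fold outputs below are invented toy probabilities, and in practice each row would come from one fold's best model applied to the test images.

```python
import numpy as np

CLASSES = ["healthy", "dropsy", "popeye"]  # the three classes in the study

def soft_vote(fold_probs):
    """Combine per-fold predictions by averaging class probabilities.

    fold_probs: array of shape (n_folds, n_samples, n_classes), where each
    row is the softmax output of one fold's best model for one test image.
    Returns the averaged probabilities and the predicted class labels.
    """
    fold_probs = np.asarray(fold_probs, dtype=float)
    mean_probs = fold_probs.mean(axis=0)   # average over the folds
    preds = mean_probs.argmax(axis=1)      # most probable class per image
    return mean_probs, [CLASSES[i] for i in preds]

# Toy example: 5 folds, 2 test images, 3 classes (illustrative values only).
probs = np.array([
    [[0.70, 0.20, 0.10], [0.10, 0.60, 0.30]],
    [[0.60, 0.30, 0.10], [0.20, 0.50, 0.30]],
    [[0.80, 0.10, 0.10], [0.10, 0.70, 0.20]],
    [[0.50, 0.40, 0.10], [0.30, 0.40, 0.30]],
    [[0.90, 0.05, 0.05], [0.20, 0.60, 0.20]],
])
mean_probs, labels = soft_vote(probs)
print(labels)  # -> ['healthy', 'dropsy']
```

Averaging probabilities (rather than taking a majority vote over hard labels) preserves each fold's confidence, which is why a soft-voting ensemble of the five fold models can outperform any single fold, as reflected in the reported jump from 74.71% mean fold accuracy to 85.56% ensemble test accuracy.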
Pages: 2426–2438
Copyright (c) 2026 Fadhilah Rafi Musyaffa, Adzhal Arwani Mahfudh, Moh Hadi Subowo

This work is licensed under a Creative Commons Attribution 4.0 International License.