Comparative Analysis of the Accuracy of EfficientNetB0 and Vision Transformer Models in Classifying Giriloyo Batik Motif Images
Abstract
Batik is an Indonesian cultural heritage, inscribed by UNESCO on October 2, 2009. In the digital era, the variety of batik motifs must be preserved, especially in Giriloyo Batik Village, located in Karang Kulon, Wukirsari, Imogiri Sub-district, Bantul. The complexity and diversity of batik motifs in the area call for a modern technological approach to support accurate classification. This study compares the performance of two current models, EfficientNetB0 and Vision Transformer, in classifying five classic batik motifs from Giriloyo Batik Village. The method combines a deep learning approach based on Convolutional Neural Networks (CNN) and Transformers, with both models trained from scratch without transfer learning. The research stages comprise dataset collection, preprocessing, augmentation, model building and training, evaluation, and visualization of the comparative results. Evaluation uses accuracy, precision, recall, F1-score, and inference-time efficiency metrics. The final dataset consists of 13,128 sliced batik images, divided into three parts: 80% training data, 10% validation data, and 10% testing data. The results show that the Vision Transformer achieved the best performance, with a testing accuracy of 99.85%, while the EfficientNetB0 model achieved an accuracy of 98.78% with stable efficiency. This research confirms that the Vision Transformer model is superior at extracting global patterns in complex batik motifs. It also makes a concrete contribution to the use of artificial intelligence in cultural preservation through digital batik motif classification and the development of a classic batik motif classification system for Giriloyo Batik Village.
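The 80/10/10 split described in the abstract can be sketched as a small, self-contained function. This is an illustrative sketch only, not the authors' actual pipeline; the function name and the fixed seed are assumptions, and a real pipeline would likely use stratified splitting so each of the five motif classes keeps its proportions across the splits.

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and split items into train/validation/test subsets.

    Defaults match the 80/10/10 split described in the paper.
    """
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 13,128 sliced batik images -> 10502 train / 1312 val / 1314 test
train, val, test = split_dataset(range(13_128))
print(len(train), len(val), len(test))  # prints: 10502 1312 1314
```

Note that the test subset absorbs the rounding remainder, which is why it ends up slightly larger than the validation subset.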
Pages: 252-263
Copyright (c) 2025 Ratna Puspita Sari, Albert Yakobus Chandra

This work is licensed under a Creative Commons Attribution 4.0 International License.





















