Image Classification of Beef and Pork Using Convolutional Neural Network in Keras Framework

Salsa Bila, Anwar Fitrianto, Bagus Sartono

Abstract

Beef is a food ingredient with a high selling value. Such high prices lead some sellers in markets and other shopping venues to manipulate sales, for example by mixing pork into beef. Pork and beef actually differ in the color and texture of the meat, but many people do not yet recognize these differences. Beyond public education about the differences between the two types of meat, another solution is to build a technology that can recognize and distinguish pork and beef. This motivated the present research to build a system that can classify the two types of meat. Convolutional Neural Network (CNN) is a Deep Learning method, developed within Artificial Intelligence, that can be applied to image classification. Several regularization techniques, including Dropout, L2, and Max-Norm, were applied to the model and compared to obtain the best classification results and accurate predictions on new data. The highest accuracy, 97.56%, was obtained from the CNN model applying the Dropout technique with a rate of 0.7, supported by hyperparameters such as the Adam optimizer, 128 neurons in the fully connected layer, the ReLU activation function, and 3 fully connected layers. This model was also selected for its low error rate of only 0.111.
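The architecture described in the abstract can be sketched in Keras as follows. Only the Dropout rate (0.7), the Adam optimizer, ReLU activations, 128 neurons in the fully connected layer, and the use of 3 fully connected layers are taken from the abstract; the input size (128×128 RGB), the number of convolutional blocks, and the filter counts are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 3)):
    """Minimal sketch of a beef-vs-pork CNN classifier with Dropout 0.7."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolution + max-pooling blocks extract color and texture features.
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        # Three fully connected layers; Dropout 0.7 regularizes the dense part.
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.7),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.7),
        # Binary output: beef vs. pork.
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping `layers.Dropout(0.7)` for `kernel_regularizer=tf.keras.regularizers.l2(...)` or a `max_norm` kernel constraint on the `Dense` layers would reproduce the other two regularization settings compared in the study.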

Keywords: Beef and Pork, Model, Classification, CNN



DOI

https://doi.org/10.21107/ijseit.v5i02.9864




Copyright (c) 2021 Salsa Bila, Anwar Fitrianto, Bagus Sartono

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.