Design of a Fire Spot Identification System in PT. PAL Indonesia Work Area Using YOLOv5s

Agus Khumaidi

Abstract


PT. PAL Indonesia is a company engaged in national-scale ship production, whose ship fabrication process carries a potential fire hazard; a fire protection system must therefore be implemented. This research began with observations to assess the suitability of the active and passive fire protection systems in the workplace against several standards: SNI 03-3985-2000, NFPA 13, Permenaker No. 04/1980, Permen PU No. 26/PRT/M/2008, and SNI 03-1745-2000. Observational data were collected using a checklist with a cross-sectional research design. The production site has hazard potential that could cause fires over large areas, including the Irian Dock and Sumatera Dock. Active fire protection comprises several types of equipment, such as alarms, detectors, sprinklers, fire extinguishers, and hydrants, while passive fire protection is assessed from the building structure. Field observations showed that the active protection components (alarms, detectors, sprinklers, light fire extinguishers (APAR), and hydrants) and the passive fire protection system were all in the good category. To support these active and passive systems, this research proposes a fire spot recognition system based on YOLOv5s that utilizes the CCTV facilities already installed in the PT PAL work area. The dataset was collected as image samples for each of four classes: RMO (types of work that produce a combination of flash points and sparks, such as grinding and welding), spark, fire spot, and fire. The research used 1,971 training images, 515 validation images, and 262 testing images. The best results were obtained with an IoU threshold of 0.5, which gave an mAP of 0.919 over all classes during testing. The accuracy obtained from the confusion matrix is 0.755 (75.5%), and object detection tests on running video showed fairly high and stable accuracy.
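The IoU threshold of 0.5 reported in the abstract determines when a predicted bounding box counts as a correct detection for the mAP computation. A minimal sketch of that overlap measure, assuming axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates (the function name and box format are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Under the mAP@0.5 criterion used here, a prediction is counted as a true positive only when its IoU with a ground-truth box of the same class is at least 0.5.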
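The 75.5% figure is the overall accuracy derived from the confusion matrix, i.e. the correctly classified samples (the diagonal) divided by the total count. A small illustration, where the 4x4 matrix for the four classes is entirely hypothetical (the paper's actual per-class counts are not reproduced here):

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy: sum of the diagonal over the sum of all entries."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical counts for the classes [RMO, spark, fire spot, fire];
# rows are true classes, columns are predicted classes.
cm = [
    [50, 3, 2, 1],
    [4, 40, 5, 1],
    [2, 6, 45, 3],
    [1, 2, 4, 48],
]
print(round(accuracy_from_confusion(cm), 3))
```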

Keywords


Fire detection, sparks, RMO, fire, image processing, YOLOv5s.

Full Text:

PDF

References


[ 1 ]. Suma’mur, P. K., 2009. Higiene Perusahaan dan Kesehatan Kerja, 12th printing. Jakarta: PT. Toko Gunung Agung.

[ 2 ]. National Fire Protection Association, 2013. Fire loss in the United States. [Online] Available at: www.nfpa.org/research/reports-and-statistics/the-unitedstates [Accessed 27 April 2023].

[ 3 ]. D. Giancini, E. Yulia Puspaningrum, and Y. Vita Via, “Identifikasi Penggunaan Masker Menggunakan Algoritma CNN YOLOv3-Tiny,” Seminar Nasional Informatika Bela Negara (SANTIKA), Universitas Pembangunan Nasional Jawa Timur.

[ 4 ]. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection.” [Online]. Available: http://pjreddie.com/yolo/

[ 5 ]. J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” Dec. 2016. [Online]. Available: http://arxiv.org/abs/1612.08242.

[ 6 ]. J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” Apr. 2018. [Online]. Available: http://arxiv.org/abs/1804.02767.

[ 7 ]. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” Apr. 2020. [Online]. Available: http://arxiv.org/abs/2004.10934.

[ 8 ]. Y. Liu, B. Lu, J. Peng, and Z. Zhang, “Research on the Use of YOLOv5 Object Detection Algorithm in Mask Wearing Recognition,” World Scientific Research Journal, vol. 6, no. 11, 2020, doi: 10.6911/WSRJ.202011_6(11).0038.

[ 9 ]. J. Ieamsaard, S. N. Charoensook, and S. Yammen, “Deep Learning-based Face Mask Detection Using YoloV5,” in Proceedings of the 2021 9th International Electrical Engineering Congress, iEECON 2021, Mar. 2021, pp. 428–431, doi: 10.1109/iEECON51072.2021.9440346.

[ 10 ]. G. Jocher. (2021, 1 April). Tips For Best Training Results. Accessed 1 June 2022 from https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results.

[ 11 ]. J. Yan, H. Wang, M. Yan, W. Diao, X. Sun, and H. Li, “IoU-adaptive deformable R-CNN: Make full use of IoU for multi-class object detection in remote sensing imagery,” Remote Sensing, vol. 11, no. 3, Feb. 2019, doi: 10.3390/rs11030286.

[ 12 ]. F. Romadloni, J. Endrasmono, Z. M. A. Putra, A. Khumaidi, I. Rachman, and R. Y. Adhitya, “Identifikasi Warna Buoy Menggunakan Metode You Only Look Once Pada Unmanned Surface Vehicle,” Jurnal Teknik Elektro dan Komputer TRIAC, vol. 10, no. 1, 2023, doi: 10.21107/triac.v10i1.19650.

[ 13 ]. M. Rizk and I. Bayad, "Human Detection in Thermal Images Using YOLOv8 for Search and Rescue Missions," 2023 Seventh International Conference on Advances in Biomedical Engineering (ICABME), Beirut, Lebanon, 2023, pp. 210-215, doi: 10.1109/ICABME59496.2023.10293139

[ 14 ]. M. R. N. Ariyadi, M. R. Pribadi and E. P. Widiyanto, "Unmanned Aerial Vehicle for Remote Sensing Detection of Oil Palm Trees Using You Only Look Once and Convolutional Neural Network," 2023 10th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Palembang, Indonesia, 2023, pp. 226-230, doi: 10.1109/EECSI59885.2023.10295670.

[ 15 ]. Y. He and X. Jia, "Fast identification neural network combining on YOLOv5 and ViT," 2023 IEEE 7th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 2023, pp. 485-490, doi: 10.1109/ITOEC57671.2023.10291752.

[ 16 ]. L. Qiu, W. Zhu, Y. Li, L. Qiu and F. Ma, "YOLO-based multi-class target detection for SAR images," 2023 IEEE 7th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 2023, pp. 1671-1675, doi: 10.1109/ITOEC57671.2023.10292083.

[ 17 ]. R. I. Rovita, A. Y. Rahman and Istiadi, "Fertilizer Quality Detection For Purple Sweet Potato Plants Using YOLOv4-Tiny," 2023 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 2023, pp. 441-445, doi: 10.1109/iSemantic59612.2023.10295298.

[ 19 ]. T. Shindo, T. Watanabe, K. Yamada and H. Watanabe, "Accuracy Improvement of Object Detection in VVC Coded Video Using YOLO-v7 Features," 2023 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, 2023, pp. 247-251, doi: 10.1109/IICAIET59451.2023.10291646.

[ 20 ]. M. R. A. Priyadi and Suharjito, "Comparison of YOLOv8 and EfficientDet4 Algorithms in Detecting the Ripeness of Oil Palm Fruit Bunch," 2023 10th International Conference on ICT for Smart Society (ICISS), Bandung, Indonesia, 2023, pp. 1-7, doi: 10.1109/ICISS59129.2023.10291928.

[ 21 ]. Z. Lv, R. Wang, Y. Wang, F. Zhou and N. Guo, "Road Scene Multi-Object Detection Algorithm based on CMS-YOLO," in IEEE Access, doi: 10.1109/ACCESS.2023.3327735




DOI: https://doi.org/10.21107/triac.v12i1.29939



Copyright (c) 2025 Jurnal Teknik Elektro dan Komputer TRIAC

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
