산업현장 작업자 위험 상황 인식 및 분석에 관한 연구 = A Study on Recognition and Analysis of Dangerous Situations for Workers in Industrial Sites

박재현, 2022
Thesis details
Paper impact by topic
Paper impact summary
Topics
  • Industrial safety
  • MobileNet
  • Object Detection
  • YOLO
  • SSD
  • Total papers on the same topics: 496
  • Total citations of this paper: 0
  • Average paper impact among these topics: 0.0%

References

  • [9] S. Jang, Y. Seo, J. Lee, S. Kang and I. Jung, “A Study on the preparation of measures to strengthen safety and health of small and medium-sized business,” Korea Occupational Safety and Health Agency, Technical Report 2020-OSHRI-858, 2020.
  • [8] Severe Accident Penalty Act, Pub. L. No. 17907, § 2, 2020. [Online]. Available: https://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EC%A4%91%EB%8C%80%EC%9E%AC%ED%95%B4%20%EC%B2%98%EB%B2%8C%20%EB%93%B1%EC%97%90%20%EA%B4%80%ED%95%9C%20%EB%B2%95%EB%A5%A0
  • [7] Office for Government Policy Coordination - Prime Minister's Secretariat, [Press Release] The 124th National Pending Examination Coordination Meeting [Internet]. Available: https://www.korea.kr/news/pressReleaseView.do?newsId=156442956
  • [6] Industrial Safety and Health Act, Pub. L. No. 17326, § 2, 2020. [Online]. Available: https://www.law.go.kr/%EB%B2%95%EB%A0%B9/%EC%82%B0%EC%97%85%EC%95%88%EC%A0%84%EB%B3%B4%EA%B1%B4%EB%B2%95
  • [5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth and B. Schiele, “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3213-3223, 2016.
  • [56] Ministry of Science and ICT and National Information Society Agency. AI Hub [Internet]. Available: https://aihub.or.kr/.
  • [55] COCO (Common Objects in Context). Detection Evaluation [Internet]. Available: https://cocodataset.org/#detection-eval.
  • [54] StackOverflow. Error “indices[0] = 0 not in [0, 0]” when training CenterNet MobileNetV2 FPN 512x512 with tensorflow object detection api [Internet]. Available: https://stackoverflow.com/questions/66457432/error-indices0-0-not-in-0-0-when-training-centernet-mobilenetv2-fpn-512x/67705978#67705978.
  • [53] TensorFlow. TensorFlow 2 Detection Model Zoo Github [Internet]. Available: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md.
  • [52] AlexeyAB. DarkNet (Yolo v4, v3 and v2 for Windows and Linux) Github [Internet]. Available: https://github.com/AlexeyAB/darknet.
  • [51] OpenVINO Toolkit. CVAT Github [Internet]. Available: https://github.com/openvinotoolkit/cvat.
  • [50] X. Zhou, J. Zhuo and P. Krahenbuhl, “Bottom-up Object Detection by Grouping Extreme and Center Points,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 850-859, 2019.
  • [4] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar and C. L. Zitnick, “Microsoft COCO: Common Objects in Context,” in European Conference on Computer Vision (ECCV), Springer, Cham, pp. 740-755, 2014.
  • [49] H. Law and J. Deng, “CornerNet: Detecting Objects as Paired Keypoints,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 734-750, 2018.
  • [48] X. Zhou, D. Wang and P. Krahenbuhl, “Objects as Points,” arXiv preprint arXiv:1904.07850, 2019.
  • [47] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L.-C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510-4520, 2018.
  • [46] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv preprint arXiv:1704.04861, 2017.
  • [45] Bichen Wu, SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving [Internet]. Available: https://github.com/BichenWuUCB/squeezeDet.
  • [44] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu and A. C. Berg, “SSD: Single Shot MultiBox Detector,” in European Conference on Computer Vision (ECCV), Springer, Cham, pp. 21-37, 2016.
  • [43] K. He, X. Zhang, S. Ren and J. Sun, “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, 2015.
  • [42] S. Liu, L. Qi, H. Qin, J. Shi and J. Jia, “Path Aggregation Network for Instance Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8759-8768, 2018.
  • [41] A. Bochkovskiy, C.-Y. Wang and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv preprint arXiv:2004.10934, 2020.
  • [40] J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7263-7271, 2017.
  • [3] S. Shim and S. Choi, “Development on Identification Algorithm of Risk Situation around Construction Vehicle using YOLO-v3,” in Journal of the Korea Academia-Industrial cooperation Society, vol. 20, no. 7, pp. 622-629, Jul. 2019.
  • [39] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779-788, 2016.
  • [38] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan and S. Belongie, “Feature Pyramid Networks for Object Detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2117-2125, 2017.
  • [37] K. He, G. Gkioxari, P. Dollar and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2961-2969, 2017.
  • [36] F.-F. Li, J. Johnson and S. Yeung, Topic: 11. Object Detection, Segmentation, Localization, Classification, CS231n, Stanford University, 2017.
  • [35] J. Long, E. Shelhamer and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440, 2015.
  • [34] S. Ren, K. He, R. Girshick and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 2016.
  • [32] A. M. Hafiz and G. M. Bhat, “A Survey on instance segmentation: state of the art,” International Journal of Multimedia Information Retrieval, Springer, pp. 1-19, Jun. 2020.
  • [31] R. Girshick, J. Donahue, T. Darrell and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 580-587, 2014.
  • [30] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu and M. Pietikainen, “Deep Learning for Generic Object Detection: A Survey,” International Journal of Computer Vision, vol. 128, pp. 261-318, 2020.
  • [2] D. Kim, J. Kong, J. Lim and B. Sho, “A Study on Data Collection and Object Detection using Faster R-CNN for Application to Construction Site Safety,” in Journal of Korean Society of Hazard Mitigation, vol. 20, no. 1, pp. 119-126, Feb. 2020.
  • [29] Kaggle, ImageNet Winning CNN Architectures (ILSVRC) [Online]. Available: https://www.kaggle.com/getting-started/ 149448.
  • [28] J. Hu, L. Shen, and G. Sun, “Squeeze-and-Excitation Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132-7141, 2018.
  • [27] X. Zeng, W. Ouyang, J. Yan, H. Li, T. Xiao, K. Wang, Y. Liu, Y. Zhou, B. Yang, Z. Wang, H. Zhou, and X. Wang, “Crafting GBD-Net for Object Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 40, no. 9, pp. 2109-2123, 2017.
  • [25] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” in Thirty-first AAAI Conference on Artificial Intelligence, 2017.
  • [24] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818-2826, 2016.
  • [23] S.-H. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, and P. Torr, “Res2Net: A New Multi-scale Backbone Architecture,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019.
  • [22] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
  • [21] M. Lin, Q. Chen, and S. Yan, “Network in Network,” in arXiv preprint arXiv:1312.4400, 2013.
  • [20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
  • [1] Ministry of Employment and Labor, “Status of industrial accidents at the end of December 2020,” Dec. 2020.
  • [19] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” in arXiv preprint arXiv:1409.1556, 2014.
  • [18] M. D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” in European Conference on Computer Vision (ECCV), Springer, Cham, pp. 818-833, 2014.
  • [17] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems (NIPS), vol. 25, pp. 1097-1105, 2012.
  • [15] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” in Proceedings of the IEEE, vol. 86, no. 11, Nov. 1998.
  • [14] D. H. Hubel and T. N. Wiesel, “Receptive Fields and Functional Architecture of Monkey Striate Cortex,” The Journal of Physiology, vol. 148, no. 3, pp. 574-591, 1959.
  • [12] I. Goodfellow, Y. Bengio and A. Courville, “Convolutional Networks,” in Deep Learning, Cambridge, MIT Press, ch. 9, pp. 330-371, 2017.
  • [10] Ministry of Land, Infrastructure and Transport, “Smart Construction Technology Road-map for Innovation in Construction Productivity and Enhancement of Safety,” Oct. 2018.
  • F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1251-1258, 2017.
  • D. H. Hubel, “Single Unit Activity in Striate Cortex of Unrestrained Cats,” The Journal of Physiology, vol. 147, no. 2, pp. 226-238, 1959.
  • Y. LeCun, “Generalization and network design strategies,” 1989.
  • R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1440-1448, 2015.