Ph.D. dissertation

Error concealment methods for HEVC with parameter replacement and CNN-based frame interpolation

박대현 (Park Dae-hyun), 2019
Thesis details
Topic-based research impact of 'Error concealment methods for HEVC with parameter replacement and CNN-based frame interpolation'
주제
  • Convolutional Neural Network
  • Error Concealment
  • Frame Interpolation
  • Generative Adversarial Network
  • High Efficiency Video Coding
Total papers on the same topics: 654
Total citations: 0
Average topic-based impact: 0.0%

References of 'Error concealment methods for HEVC with parameter replacement and CNN-based frame interpolation'

  • “Video coding for low bit rate communication,” ITU-T Rec. H.263, 1995.
  • “Video codec for audiovisual services at px64 kbit/s,” ITU-T Rec. H.261, 1990.
  • “PyTorch.”
  • “High efficiency video coding,” ITU-T Rec. H.265 and ISO/IEC 23008-2 (MPEG-H HEVC), ITU-T and ISO/IEC, 2013.
  • “Generic coding of moving pictures and associated audio information – part 2: Video,” ITU-T Rec. H.262 and ISO/IEC 13818-2 (MPEG-2 Video), ITU-T and ISO/IEC JTC 1, 1994.
  • “Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s – part 2: Video,” ISO/IEC 11172-2 (MPEG-1), ISO/IEC JTC 1, 1993.
  • “Coding of audio-visual objects – part 2: Video,” ISO/IEC 14496-2 (MPEG-4 Visual), ISO/IEC JTC 1, 1999.
  • “Advanced video coding for generic audiovisual services,” ITU-T Rec. H.264 and ISO/IEC 14496-10 (MPEG-4 AVC), ITU-T and ISO/IEC, 2003.
  • Z. Yu, H. Li, Z. Wang, Z. Hu, and C. W. Chen, “Multi-level video frame interpolation: Exploiting the interaction among different levels,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 7, pp. 1235–1248, 2013.
  • Y.-L. Chang, Y. A. Reznik, Z. Chen, and P. C. Cosman, “Motion compensated error concealment for HEVC based on block-merging and residual energy,” in Packet Video Workshop (PV), 2013 20th International, pp. 1–6, IEEE, 2013.
  • Y.-K. Wang, Y. Sanchez, T. Schierl, S. Wenger, and M. M. Hannuksela, “RTP Payload Format for High Efficiency Video Coding (HEVC),” Request for Comments: 7798, 2016.
  • Y. Zhang, X. Xiang, D. Zhao, S. Ma, and W. Gao, “Packet video error concealment with auto regressive model,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 1, pp. 12–27, 2012.
  • X. Wei, H.-W. Tseng, J.-W. Jhang, T.-H. Su, Y.-C. Yu, Y. Wen, Z. Liu, T.-L. Lin, S.-L. Chen, Y.-S. Chiou, and H.-Y. Lee, “Video error concealment method using motion vector estimation propagation,” in 2017 International Conference on Applied System Innovation (ICASI), pp. 1335–1338, IEEE, May 2017.
  • W.-J. Tsai and J.-Y. Chen, “Joint temporal and spatial error concealment for multiple description video coding,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1822–1833, 2010.
  • V. Jacobson, R. Frederick, S. Casner, and H. Schulzrinne, “RTP: A Transport Protocol for Real-Time Applications,” Request for Comments: 3550, 2003.
  • V. Chellappa, P. C. Cosman, and G. M. Voelker, “Error concealment for dual frame video coding with uneven quality,” pp. 319–328, IEEE, 2005.
  • T.-L. Lin, N.-C. Yang, R.-H. Syu, C.-C. Liao, and W.-L. Tsai, “Error concealment algorithm for HEVC coded video using block partition decisions,” in Signal Processing, Communication and Computing (ICSPCC), 2013 IEEE International Conference On, pp. 1–5, IEEE, 2013.
  • T. Wiegand, G. Sullivan, G. Bjontegaard, and A. Luthra, “Overview of the H.264/AVC video coding standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, pp. 560–576, July 2003.
  • S.-H. Yang, C.-W. Chang, and C.-C. Chan, “An object-based error concealment technique for H.264 coded video,” Multimedia Tools and Applications, vol. 74, no. 23, pp. 10785–10800, 2015.
  • S. Wenger, M. Hannuksela, T. Stockhammer, M. Westerlund, and D. Singer, “RTP Payload Format for H.264 Video,” Request for Comments: 3984, 2005.
  • S. Wenger, “H.264/AVC over IP,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, pp. 645–656, July 2003.
  • S. Wenger, “Coding performance when not using in-picture prediction,” JVT- B024, 2002.
  • S. Wenger and M. Horowitz, “Scattered slices: a new error resilience tool for H.26L,” JVT-B027, vol. 2, 2002.
  • S. Niklaus, L. Mai, and F. Liu, “Video frame interpolation via adaptive separable convolution,” in Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 261–270, IEEE, 2017.
  • S. Niklaus, L. Mai, and F. Liu, “Video frame interpolation via adaptive convolution,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, p. 3, 2017.
  • S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” in CVPR, vol. 1, p. 3, 2017.
  • S. Meyer, O. Wang, H. Zimmer, M. Grosse, and A. Sorkine-Hornung, “Phase- based frame interpolation for video,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1410–1418, 2015.
  • S. Kumar, L. Xu, M. K. Mandal, and S. Panchanathan, “Error resiliency schemes in H.264/AVC standard,” Journal of Visual Communication and Image Representation, vol. 17, no. 2, pp. 425–450, 2006.
  • S. Baker, D. Scharstein, J. Lewis, S. Roth, M. J. Black, and R. Szeliski, “A database and evaluation methodology for optical flow,” International Journal of Computer Vision, vol. 92, no. 1, pp. 1–31, 2011.
  • Q.-F. Zhu, Y. Wang, and L. Shaw, “Coding and cell-loss recovery in DCT-based packet video,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, no. 3, pp. 248–258, 1993.
  • Q. Peng, T. Yang, and C. Zhu, “Block-based temporal error concealment for video packet using motion vector extrapolation,” in Communications, Circuits and Systems and West Sino Expositions, IEEE 2002 International Conference on, vol. 1, pp. 10–14, IEEE, 2002.
  • P. Salama, N. B. Shroff, and E. J. Delp, “Error concealment in encoded video,” IEEE Journal on Selected Areas in Communications, vol. 18, no. 6, pp. 1129–1144, 2000.
  • P. Ni, R. Eg, A. Eichhorn, C. Griwodz, and P. Halvorsen, “Spatial flicker effect in video scaling,” in Quality of Multimedia Experience (QoMEX), 2011 Third International Workshop on, pp. 55–60, IEEE, 2011.
  • P. Lambert, W. De Neve, Y. Dhondt, and R. Van de Walle, “Flexible macroblock ordering in H.264/AVC,” Journal of Visual Communication and Image Representation, vol. 17, no. 2, pp. 358–375, 2006.
  • P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976, IEEE, 2017.
  • O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “DeblurGAN: Blind motion deblurring using conditional adversarial networks,” arXiv preprint arXiv:1711.07064, 2017.
  • M. Usman, X. He, M. Xu, and K. M. Lam, “Survey of error concealment techniques: Research directions and open issues,” in Picture Coding Symposium (PCS), 2015, pp. 233–238, IEEE, 2015.
  • M. Livingstone, Vision and art : the biology of seeing. Harry N. Abrams, 2002.
  • M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” arXiv preprint arXiv:1701.07875, 2017.
  • L. Zhu, Y. Zhao, S. Wang, and H. Chen, “Spatial error concealment for stereoscopic video coding based on pixel matching,” The Journal of Supercomputing, vol. 58, no. 1, pp. 96–105, 2011.
  • K. McCann, C. Rosewarne, B. Bross, M. Naccari, K. Sharman, and G. J. Sullivan (editors), “High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Encoder Description,” JCTVC-R1002, 2014.
  • K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
  • J. Liu, G. Zhai, X. Yang, B. Yang, and L. Chen, “Spatial Error Concealment With an Adaptive Linear Predictor,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, pp. 353–366, March 2015.
  • J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a convolutional neural network for non-uniform motion blur removal,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 769–777, 2015.
  • J. Seiler, M. Schöberl, and A. Kaup, “Spatio-temporal error concealment in video by denoised temporal extrapolation refinement,” in Image Processing (ICIP), 2013 20th IEEE International Conference on, pp. 1613–1616, IEEE, 2013.
  • J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision, pp. 694–711, Springer, 2016.
  • J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255, IEEE, 2009.
  • I. Ismaeil, S. Shirani, F. Kossentini, and R.Ward, “An efficient, similarity-based error concealment method for block-based coded images,” in Image Processing, 2000. Proceedings. 2000 International Conference on, vol. 3, pp. 388–391, IEEE, 2000.
  • I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of Wasserstein GANs,” in Advances in Neural Information Processing Systems 30 (I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds.), pp. 5767–5777, Curran Associates, Inc., 2017.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
  • H.-M. Hang, W.-H. Peng, C.-H. Chan, and C.-C. Chen, “Towards the next video standard: high efficiency video coding,” in Proc. APSIPA Annual Summit and Conf, pp. 609–618, 2010.
  • H. Chen, J. Gu, O. Gallo, M.-Y. Liu, A. Veeraraghavan, and J. Kautz, “Reblur2Deblur: Deblurring videos via self-supervised learning,” in Computational Photography (ICCP), 2018 IEEE International Conference on, pp. 1–9, IEEE, 2018.
  • G. J. Sullivan, J. R. Ohm, W. J. Han, and T. Wiegand, “Overview of the high efficiency video coding (HEVC) standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, pp. 1649–1668, December 2012.
  • G. Boracchi, A. Foi, et al., “Modeling the performance of image restoration from motion blur,” IEEE Trans. Image Processing, vol. 21, no. 8, pp. 3502–3517, 2012.
  • FFmpeg Developers, “FFmpeg 3.1,” 2016.
  • F. Aguirre-Ramos, C. Feregrino-Uribe, and R. Cumplido, “Video error conceal- ment based on data hiding for the emerging video technologies,” in Pacific-Rim Symposium on Image and Video Technology, pp. 454–467, Springer, 2013.
  • D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8934–8943, 2018.
  • D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • D. Flynn and C. Rosewarne, “Common test conditions and software reference configurations for HEVC range extensions,” JCTVC-L1006, 2014.
  • C. Liu, R. Ma, and Z. Zhang, “Error concealment for whole frame loss in HEVC,” in Advances on Digital Television and Wireless Multimedia Communications, pp. 271–277, Springer, 2012.
  • C. Dong, Y. Deng, C. Change Loy, and X. Tang, “Compression artifacts reduction by a deep convolutional network,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 576–584, 2015.
  • A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. ICML, vol. 30, p. 3, 2013.
  • A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “FlowNet: Learning optical flow with convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2758–2766, 2015.
  • A. Bansal, X. Chen, B. Russell, A. Gupta, and D. Ramanan, “PixelNet: Representation of the pixels, by the pixels, and for the pixels,” arXiv preprint arXiv:1702.06506, 2017.