Doctoral dissertation

스타일 전이를 위한 반지도학습 오토인코더 기반의 이미지 임베딩 = Semi-Supervised Autoencoder based Image Embedding for Style Transfer

윤두밈, 2018
Thesis details
Topic-level paper impact of 'Semi-Supervised Autoencoder based Image Embedding for Style Transfer'
Impact summary
Topics
  • Deep learning
  • Generative models
  • Style transfer
  • Autoencoder
  • Latent space
Papers on the same topics: 2,026
Total citations of this thesis: 0
Average topic-level paper impact: 0.0%

References of 'Semi-Supervised Autoencoder based Image Embedding for Style Transfer'

  • WU, Jiajun, et al. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In: Advances in Neural Information Processing Systems. 2016. p. 82-90.
  • WONG, Becky; SZŰCS, Dénes. Single-digit Arabic numbers do not automatically activate magnitude representations in adults or in children: Evidence from the symbolic same–different task. Acta psychologica, 2013, 144.3: 488-498.
  • WIDROW, Bernard; LEHR, Michael A. 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proceedings of the IEEE, 1990, 78.9: 1415-1442.
  • THEIS, Lucas; OORD, Aäron van den; BETHGE, Matthias. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
  • TETKO, Igor V.; LIVINGSTONE, David J.; LUIK, Alexander I. Neural network studies. 1. Comparison of overfitting and overtraining. Journal of chemical information and computer sciences, 1995, 35.5: 826-833.
  • SUN, Yi; WANG, Xiaogang; TANG, Xiaoou. Deep learning face representation from predicting 10,000 classes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014. p. 1891-1898.
  • SRIVASTAVA, Nitish, et al. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 2014, 15.1: 1929-1958.
  • SILVER, David, et al. Mastering the game of go without human knowledge. Nature, 2017, 550.7676: 354-359.
  • SILVER, David, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, 529.7587: 484-489.
  • SEIDE, Frank; AGARWAL, Amit. CNTK: Microsoft's Open-Source Deep-Learning Toolkit. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016. p. 2135-2135.
  • SANGER, Terence D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural networks, 1989, 2.6: 459-473.
  • ROSENBLATT, Frank. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological review, 1958, 65.6: 386.
  • RADFORD, Alec; METZ, Luke; CHINTALA, Soumith. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • PHILLIPS, P. Jonathon, et al. The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on pattern analysis and machine intelligence, 2000, 22.10: 1090-1104.
  • PENNINGTON, Jeffrey; SOCHER, Richard; MANNING, Christopher. Glove: Global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 2014. p. 1532-1543.
  • PASCANU, Razvan; MIKOLOV, Tomas; BENGIO, Yoshua. On the difficulty of training recurrent neural networks. In: International Conference on Machine Learning. 2013. p. 1310-1318.
  • NG, Andrew. Sparse autoencoder. CS294A Lecture notes, 2011, 72.2011: 1-19.
  • NAIR, Vinod; HINTON, Geoffrey E. Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th international conference on machine learning (ICML-10). 2010. p. 807-814.
  • MNIH, Volodymyr, et al. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
  • MIKOLOV, Tomas, et al. Recurrent neural network based language model. In: Interspeech. 2010. p. 3.
  • MIKOLOV, Tomas, et al. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
  • MIKOLOV, Tomas, et al. Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems. 2013. p. 3111-3119.
  • MAKHZANI, Alireza, et al. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
  • LIU, Yifan, et al. Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks. arXiv preprint arXiv:1705.01908, 2017.
  • LEA, Colin, et al. Temporal convolutional networks: A unified approach to action segmentation. In: Computer Vision–ECCV 2016 Workshops. Springer International Publishing, 2016. p. 47-54.
  • KRIZHEVSKY, Alex; SUTSKEVER, Ilya; HINTON, Geoffrey E. ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012. p. 1097-1105.
  • KRIZHEVSKY, Alex; NAIR, Vinod; HINTON, Geoffrey. The CIFAR-10 dataset. online: http://www.cs.toronto.edu/~kriz/cifar.html, 2014.
  • KLAMBAUER, Günter, et al. Self-Normalizing Neural Networks. arXiv preprint arXiv:1706.02515, 2017.
  • KINGMA, Diederik; BA, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • KINGMA, Diederik P.; WELLING, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • KINGMA, Diederik P., et al. Semi-supervised learning with deep generative models. In: Advances in Neural Information Processing Systems. 2014. p. 3581-3589.
  • KIELA, Douwe; BOTTOU, Léon. Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. In: EMNLP. 2014. p. 36-45.
  • KARLIK, Bekir; OLGAC, A. Vehbi. Performance analysis of various activation functions in generalized MLP architectures of neural networks. International Journal of Artificial Intelligence and Expert Systems, 2011, 1.4: 111-122.
  • IOFFE, Sergey; SZEGEDY, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning. 2015. p. 448-456.
  • IIZUKA, Satoshi; SIMO-SERRA, Edgar; ISHIKAWA, Hiroshi. Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Transactions on Graphics (TOG), 2016, 35.4: 110.
  • HORNIK, Kurt; STINCHCOMBE, Maxwell; WHITE, Halbert. Multilayer feedforward networks are universal approximators. Neural networks, 1989, 2.5: 359-366.
  • HOFFER, Elad; AILON, Nir. Deep metric learning using triplet network. In: International Workshop on Similarity-Based Pattern Recognition. Springer, Cham, 2015. p. 84-92.
  • HOCHREITER, Sepp. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1998, 6.02: 107-116.
  • HERSHEY, John R.; OLSEN, Peder A. Approximating the Kullback Leibler divergence between Gaussian mixture models. In: Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on. IEEE, 2007. p. IV-317-IV-320.
  • HECHT-NIELSEN, Robert, et al. Theory of the backpropagation neural network. Neural Networks, 1988, 1.Supplement-1: 445-448.
  • HE, Kaiming, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 770-778.
  • GOODFELLOW, Ian, et al. Generative adversarial nets. In: Advances in neural information processing systems. 2014. p. 2672-2680.
  • GLOROT, Xavier; BORDES, Antoine; BENGIO, Yoshua. Deep sparse rectifier neural networks. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 2011. p. 315-323.
  • GEHRING, Jonas, et al. Convolutional Sequence to Sequence Learning. arXiv preprint arXiv:1705.03122, 2017.
  • GATYS, Leon A.; ECKER, Alexander S.; BETHGE, Matthias. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
  • ELGAMMAL, Ahmed, et al. CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms. arXiv preprint arXiv:1706.07068, 2017.
  • DONG, Chao, et al. Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision. Springer, Cham, 2014. p. 184-199.
  • DOERSCH, Carl. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908, 2016.
  • RUMELHART, David E.; MCCLELLAND, James L.; PDP RESEARCH GROUP (Eds.). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Cambridge, MA, USA: MIT Press, 1986.
  • CLEVERT, Djork-Arné; UNTERTHINER, Thomas; HOCHREITER, Sepp. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
  • CHEN, Xi, et al. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In: Advances in Neural Information Processing Systems. 2016. p. 2172-2180.
  • CHEN, Weihua, et al. Beyond triplet loss: a deep quadruplet network for person re-identification. arXiv preprint arXiv:1704.01719, 2017.
  • BROMLEY, Jane, et al. Signature verification using a "siamese" time delay neural network. In: Advances in Neural Information Processing Systems. 1994. p. 737-744.
  • BERTHELOT, David; SCHUMM, Tom; METZ, Luke. Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
  • BELL, Sean; BALA, Kavita. Learning visual similarity for product design with convolutional neural networks. ACM Transactions on Graphics (TOG), 2015, 34.4: 98.