Enhanced Neural Architecture Search and Applications = 향상된 신경망 구조 탐색과 응용

Byunggook Na (나병국), 2022
Paper details
Topic-based paper impact of 'Enhanced Neural Architecture Search and Applications = 향상된 신경망 구조 탐색과 응용'
Paper impact summary
Topics
  • Applied Physics
  • Cell-based Neural Network
  • Data Sampling
  • Neural Architecture Search
  • Spiking Neural Network
  • Deep Neural Network
  • Total papers on the same topics: 4,807
  • Total citations: 0
  • Average paper impact by topic: 0.0%

References of 'Enhanced Neural Architecture Search and Applications = 향상된 신경망 구조 탐색과 응용'

  • [9] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once for all: Train one network and specialize it for efficient deployment. In International Conference on Learning Representations, 2020.
  • [99] Emre O Neftci, Hesham Mostafa, and Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6):51–63, 2019.
  • [98] Kevin P Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.
  • [97] Paul Merolla, John V Arthur, Rodrigo Alvarez-Icaza, Andrew S Cassidy, Jun Sawada, Filipp Akopyan, Bryan L Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673, 2014.
  • [96] Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. In International Conference on Machine Learning, 2021.
  • [95] Abhinav Mehrotra, Alberto Gil CP Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, and Nicholas Donald Lane. Nas-bench-asr: Reproducible neural architecture search for speech recognition. In International Conference on Learning Representations, 2021.
  • [93] Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659–1671, 1997.
  • [92] Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala. Deep photo style transfer. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [91] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017.
  • [90] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In European Conference on Computer Vision, 2016.
  • [8] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.
  • [89] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In International Conference on Learning Representations, 2019.
  • [88] Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In European Conference on Computer Vision, 2018.
  • [87] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In European Conference on Computer Vision, 2018.
  • [86] Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, and Rong Jin. Zen-nas: A zero-shot nas for high-performance image recognition. In IEEE/CVF International Conference on Computer Vision, 2021.
  • [85] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In International Conference on Learning Representations, 2014.
  • [84] Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In Uncertainty in Artificial Intelligence, 2020.
  • [83] Hongmin Li, Hanchao Liu, Xiangyang Ji, Guoqi Li, and Luping Shi. Cifar10-dvs: an event-stream dataset for object classification. Frontiers in Neuroscience, 11:309, 2017.
  • [82] Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, 2018.
  • [81] Guohao Li, Guocheng Qian, Itzel C Delgadillo, Matthias Muller, Ali Thabet, and Bernard Ghanem. Sgas: Sequential greedy architecture search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [80] Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, and Yingyan Lin. Hw-nas-bench: Hardware-aware neural architecture search benchmark. In International Conference on Learning Representations, 2021.
  • [79] Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy. Enabling spike-based backpropagation for training deep neural network architectures. Frontiers in Neuroscience, 14:119, 2020.
  • [78] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [77] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [76] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012.
  • [75] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
  • [73] Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, and Evgeny Burnaev. Nas-bench-nlp: neural architecture search benchmark for natural language processing. arXiv preprint arXiv:2006.07116, 2020.
  • [72] Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. In Artificial Intelligence and Statistics, 2017.
  • [71] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.
  • [70] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • [6] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
  • [69] Seijoon Kim, Seongsik Park, Byunggook Na, and Sungroh Yoon. Spiking-yolo: Spiking neural network for energy-efficient object detection. In AAAI Conference on Artificial Intelligence, 2020.
  • [68] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [67] Jinseok Kim, Kyungsu Kim, and Jae-Joon Kim. Unifying activation- and timing-based learning rules for spiking neural networks. In Advances in Neural Information Processing Systems, 2020.
  • [66] Jihwan Kim, Jisung Wang, Sangki Kim, and Yeha Lee. Evolved speech-transformer: Applying neural architecture search to end-to-end automatic speech recognition. In INTERSPEECH, 2020.
  • [65] Jaehyun Kim, Heesu Kim, Subin Huh, Jinho Lee, and Kiyoung Choi. Deep neural networks with weighted spikes. Neurocomputing, 311:373–386, 2018.
  • [64] Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Simon J Thorpe, and Timothée Masquelier. Stdp-based spiking deep convolutional neural networks for object recognition. Neural Networks, 99:56–67, 2018.
  • [63] Jacques Kaiser, Hesham Mostafa, and Emre Neftci. Synaptic plasticity dynamics for deep continuous local learning (decolle). Frontiers in Neuroscience, 14:424, 2020.
  • [62] Yingyezhe Jin, Wenrui Zhang, and Peng Li. Hybrid macro/micro level backpropagation for training deep spiking neural networks. In Advances in Neural Information Processing Systems, 2018.
  • [61] Chenhan Jiang, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Sp-nas: Serial-to-parallel backbone search for object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [60] Martin Jankowiak and Fritz Obermeyer. Pathwise derivatives beyond the reparameterization trick. In International Conference on Machine Learning, 2018.
  • [5] Yassine Benyahia, Kaicheng Yu, Kamil Bennani Smires, Martin Jaggi, Anthony C. Davison, Mathieu Salzmann, and Claudiu Musat. Overcoming multi-model forgetting. In International Conference on Machine Learning, 2019.
  • [59] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2017.
  • [58] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [57] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
  • [56] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020.
  • [55] Weihua He, YuJie Wu, Lei Deng, Guoqi Li, Haoyu Wang, Yang Tian, Wei Ding, Wenhui Wang, and Yuan Xie. Comparing snns and rnns on neuromorphic vision datasets: similarities and differences. Neural Networks, 132:108–120, 2020.
  • [54] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In IEEE International Conference on Computer Vision, 2017.
  • [53] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2016.
  • [52] Chaoyang He, Haishan Ye, Li Shen, and Tong Zhang. Milenas: Efficient neural architecture search via mixed-level reformulation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [51] Yunzhe Hao, Xuhui Huang, Meng Dong, and Bo Xu. A biologically plausible supervised learning method for spiking neural networks using the symmetric stdp rule. Neural Networks, 121:387–395, 2020.
  • [50] Kai Han, Yunhe Wang, Qiulin Zhang, Wei Zhang, Chunjing Xu, and Tong Zhang. Model rubik’s cube: Twisting resolution, depth and width for tinynets. In Advances in Neural Information Processing Systems, 2020.
  • [4] Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V Le. Can weight sharing outperform random architecture search? an investigation with tunas. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [49] Bing Han, Gopalakrishnan Srinivasan, and Kaushik Roy. Rmp-snn: Residual membrane potential neuron for enabling deeper high-accuracy and low-latency spiking neural network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [48] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In European Conference on Computer Vision, 2020.
  • [47] Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, and Chang Xu. Hit-detector: Hierarchical trinity architecture search for object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [46] Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, Andre Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter. Bag of baselines for multi-objective joint neural architecture search and hyperparameter optimization. arXiv preprint arXiv:2105.01015, 2021.
  • [45] Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37(3):362–386, 2020.
  • [44] Alex Graves, Marc G. Bellemare, Jacob Menick, Rémi Munos, and Koray Kavukcuoglu. Automated curriculum learning for neural networks. In International Conference on Machine Learning, 2017.
  • [43] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 2014.
  • [42] Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural architecture search for generative adversarial networks. In IEEE/CVF International Conference on Computer Vision, 2019.
  • [41] Wulfram Gerstner and Werner M Kistler. Spiking neuron models: Single neurons, populations, plasticity. Cambridge University Press, 2002.
  • [40] Rob Geada, Dennis Prangle, and Andrew Stephen McGough. Bonsai-net: One-shot neural architecture search via differentiable pruners. arXiv preprint arXiv:2006.09264, 2020.
  • [3] Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, 2018.
  • [39] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [38] Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. Graph neural architecture search. In International Joint Conference on Artificial Intelligence, 2020.
  • [37] Wei Fang, Zhaofei Yu, Yanqi Chen, Timothée Masquelier, Tiejun Huang, and Yonghong Tian. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. arXiv preprint arXiv:2007.05785, 2020.
  • [36] Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In International Conference on Machine Learning, 2018.
  • [35] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
  • [34] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
  • [33] Xuanyi Dong, Lu Liu, Katarzyna Musial, and Bogdan Gabrys. NATS-Bench: Benchmarking nas algorithms for architecture topology and size. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
  • [32] Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search. In International Conference on Learning Representations, 2020.
  • [31] Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four gpu hours. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
  • [30] Piotr Dollár, Mannat Singh, and Ross Girshick. Fast and accurate model scaling. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [2] Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. In International Conference on Machine Learning, 2016.
  • [29] Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, and Ping Luo. Hr-nas: Searching efficient high-resolution neural architectures with lightweight transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [28] Peter U Diehl, Daniel Neil, Jonathan Binas, Matthew Cook, Shih-Chii Liu, and Michael Pfeiffer. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In International Joint Conference on Neural Networks, 2015.
  • [26] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
  • [25] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2019.
  • [24] Mike Davies, Andreas Wild, Garrick Orchard, Yulia Sandamirskaya, Gabriel A Fonseca Guerra, Prasad Joshi, Philipp Plank, and Sumedh R Risbud. Advancing neuromorphic computing with loihi: A survey of results and outlook. Proceedings of the IEEE, 109(5):911–934, 2021.
  • [23] Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82–99, 2018.
  • [22] Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, et al. Fbnetv3: Joint architecture-recipe search using predictor pretraining. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [20] Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, and Matei Zaharia. Selection via proxy: Efficient data selection for deep learning. In International Conference on Learning Representations, 2020.
  • [1] Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, et al. A low power, fully event-based gesture recognition system. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [19] Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, and Junchi Yan. Darts-: Robustly stepping out of performance collapse without indicators. In International Conference on Learning Representations, 2021.
  • [18] Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li. FairDARTS: Eliminating unfair advantages in differentiable architecture search. In European Conference on Computer Vision, 2020.
  • [17] Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun. Detnas: Backbone search for object detection. In Advances in Neural Information Processing Systems, 2019.
  • [16] Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In IEEE/CVF International Conference on Computer Vision, 2019.
  • [160] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [15] Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh. DrNAS: Dirichlet neural architecture search. In International Conference on Learning Representations, 2021.
  • [159] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
  • [158] Pan Zhou, Caiming Xiong, Richard Socher, and Steven Hoi. Theory-inspired path-regularized differential network architecture search. In Advances in Neural Information Processing Systems, 2020.
  • [157] Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi, Xuesen Zhang, and Wanli Ouyang. Econas: Finding proxies for economical neural architecture search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [156] Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, and Guoqi Li. Going deeper with directly-trained larger spiking neural networks. In AAAI Conference on Artificial Intelligence, 2021.
  • [155] Xuanyang Zhang, Pengfei Hou, Xiangyu Zhang, and Jian Sun. Neural architecture search with random labels. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [154] Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Lei Wang, and Wenqi Ren. Dcnas: Densely connected neural architecture search for semantic image segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [153] Wenrui Zhang and Peng Li. Temporal spike sequence learning via backpropagation for deep spiking neural networks. In Advances in Neural Information Processing Systems, 2020.
  • [152] Shifeng Zhang, Longyin Wen, Xiao Bian, Zhen Lei, and Stan Z Li. Single-shot refinement neural network for object detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [151] Miao Zhang, Huiqi Li, Shirui Pan, Taoping Liu, and Steven W Su. One-shot neural architecture search via novelty driven sampling. In International Joint Conference on Artificial Intelligence, 2020.
  • [150] Miao Zhang, Huiqi Li, Shirui Pan, Xiaojun Chang, and Steven Su. Overcoming multi-model forgetting in one-shot nas with diversity maximization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [14] Xiangning Chen and Cho-Jui Hsieh. Stabilizing differentiable architecture search via perturbation-based regularization. In International Conference on Machine Learning, 2020.
  • [149] Arber Zela, Julien Siems, and Frank Hutter. Nas-bench-1shot1: Benchmarking and dissecting one-shot neural architecture search. In International Conference on Learning Representations, 2020.
  • [148] Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank Hutter. Understanding and robustifying differentiable architecture search. In International Conference on Learning Representations, 2020.
  • [147] Tong Yu and Hong Zhu. Hyper-parameter optimization: A review of algorithms and applications. arXiv preprint arXiv:2003.05689, 2020.
  • [146] Shan You, Tao Huang, Mingmin Yang, Fei Wang, Chen Qian, and Changshui Zhang. Greedynas: Towards fast one-shot nas with greedy supernet. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [145] Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and Frank Hutter. Nas-bench-101: Towards reproducible neural architecture search. In International Conference on Machine Learning, 2019.
  • [144] Yibo Yang, Shan You, Hongyang Li, Fei Wang, Chen Qian, and Zhouchen Lin. Towards improving the consistency, efficiency, and flexibility of differentiable neural architecture search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [143] Bin Yan, Houwen Peng, Kan Wu, Dong Wang, Jianlong Fu, and Huchuan Lu. Lighttrack: Finding lightweight neural networks for object tracking via one-shot architecture search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [142] Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. Pc-darts: Partial channel connections for memory-efficient architecture search. In International Conference on Learning Representations, 2020.
  • [141] Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, and Hongxia Yang. Knas: Green neural architecture search. In International Conference on Machine Learning, 2021.
  • [140] Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: Stochastic neural architecture search. In International Conference on Learning Representations, 2019.
  • [13] Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. In International Conference on Learning Representations, 2020.
  • [139] Lingxi Xie, Xin Chen, Kaifeng Bi, Longhui Wei, Yuhui Xu, Zhengsu Chen, Lanfei Wang, An Xiao, Jianlong Chang, Xiaopeng Zhang, and Qi Tian. Weight-sharing neural architecture search: A battle to shrink the optimization gap. arXiv preprint arXiv:2008.01475, 2020.
  • [138] Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, and Luping Shi. Direct training for spiking neural networks: Faster, larger, better. In AAAI Conference on Artificial Intelligence, 2019.
  • [137] Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience, 12:331, 2018.
  • [136] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
  • [135] Colin White, Arber Zela, Binxin Ru, Yang Liu, and Frank Hutter. How powerful are performance predictors in neural architecture search? arXiv preprint arXiv:2104.01177, 2021.
  • [134] Yaoming Wang, Wenrui Dai, Chenglin Li, Junni Zou, and Hongkai Xiong. Si-vdnas: Semi-implicit variational dropout for hierarchical one-shot neural architecture search. In International Joint Conference on Artificial Intelligence, 2020.
  • [133] Xiaoxing Wang, Chao Xue, Junchi Yan, Xiaokang Yang, Yonggang Hu, and Kewei Sun. Mergenas: Merge operations into one for differentiable architecture search. In International Joint Conference on Artificial Intelligence, 2020.
  • [132] Tianzhe Wang, Kuan Wang, Han Cai, Ji Lin, Zhijian Liu, Hanrui Wang, Yujun Lin, and Song Han. Apq: Joint search for network architecture, pruning and quantization policy. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [131] Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, and Cho-Jui Hsieh. Rethinking architecture selection in differentiable nas. In International Conference on Learning Representations, 2021.
  • [130] Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Jing Liao, and Fang Wen. Bringing old photos back to life. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  • [12] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 2020.
  • [129] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
  • [128] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. In International Conference on Learning Representations, 2019.
  • [127] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
  • [126] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, 2019.
  • [125] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [124] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [123] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2015.
  • [122] Xiu Su, Tao Huang, Yanxi Li, Shan You, Fei Wang, Chen Qian, Changshui Zhang, and Chang Xu. Prioritized architecture sampling with monto-carlo tree search. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  • [120] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • [11] Jianlong Chang, Xinbang Zhang, Yiwen Guo, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Data: Differentiable architecture approximation. In Advances in Neural Information Processing Systems, 2019.
  • [119] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • [118] C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423, 1948.
  • [117] B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009.
  • [116] Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, and Kaushik Roy. Going deeper in spiking neural networks: Vgg and residual architectures. Frontiers in Neuroscience, 13:95, 2019.
  • [115] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018.
  • [114] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, 2015.
  • [113] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609, 2020.
  • [112] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [111] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • [110] Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience, 11:682, 2017.
  • [10] Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. Active bias: Training more accurate neural networks by emphasizing high variance samples. In Advances in Neural Information Processing Systems, 2017.
  • [109] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [108] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI Conference on Artificial Intelligence, 2019.
  • [107] Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In International Conference on Machine Learning, 2017.
  • [106] Ilija Radosavovic, Justin Johnson, Saining Xie, Wan-Yen Lo, and Piotr Dollár. On network design spaces for visual recognition. In IEEE/CVF International Conference on Computer Vision, 2019.
  • [105] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In International Conference on Machine Learning, 2018.
  • [104] Houwen Peng, Hao Du, Hongyuan Yu, Qi Li, Jing Liao, and Jianlong Fu. Cream of the crop: Distilling prioritized paths for one-shot neural architecture search. In Advances in Neural Information Processing Systems, 2020.
  • [103] Seongsik Park, Seijoon Kim, Byunggook Na, and Sungroh Yoon. T2fsnn: deep spiking neural networks with time-to-first-spike coding. In ACM/IEEE Design Automation Conference (DAC), 2020.
  • [102] Seongsik Park, Seijoon Kim, Hyeokjun Choe, and Sungroh Yoon. Fast and efficient information transmission with burst spikes in deep spiking neural networks. In ACM/IEEE Design Automation Conference (DAC), 2019.
  • [101] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In IEEE International Conference on Computer Vision, 2015.
  • [100] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
  • Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9:2579–2605, 2008.
  • Peter U Diehl and Matthew Cook. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Frontiers in Computational Neuroscience, 9:99, 2015.
  • Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.
  • Sander M Bohte, Joost N Kok, and Han La Poutré. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4):17–37, 2002.
  • Sen Song, Kenneth D Miller, and Larry F Abbott. Competitive hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9):919–926, 2000.
  • Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. A survey of multilingual neural machine translation. ACM Computing Surveys, 53(5):1–38, 2020.