Doctoral dissertation (Ph.D.)

A Study on Efficient Deep Learning-Based Plant Disease Recognition and Glocal Symptom Description

Thesis details
Topic-based citation impact of 'A Study on Efficient Deep Learning-Based Plant Disease Recognition and Glocal Symptom Description'
Citation impact summary
Topics
  • Deep learning
  • glocal description
  • localization
  • pests
  • plant diseases
  • recognition
  • Total papers on the same topics: 5,390
  • Total citations of this thesis: 0
  • Average topic-based citation impact: 0.0%

References of 'A Study on Efficient Deep Learning-Based Plant Disease Recognition and Glocal Symptom Description'

  • Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2017). “Understanding Deep Learning Requires Rethinking Generalization,” arXiv:1611.03530v2.
  • Zeiler, M.D., and Fergus, R. (2014). “Visualizing and understanding convolutional networks,” In Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818-833. Springer, Cham. doi:10.1007/978-3-319-10590-1_53
  • Yang, L., Tang, K., Yang, J., and Li, L. (2017). “Dense Captioning with Joint Inference and Visual Context,” in IEEE Conference on Computer Vision and Pattern Recognition (Honolulu).
  • Yang, B., Yan, J., Lei, Z., and Li, S. (2016). “CRAFT objects from images,” in IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV).
  • Xie, S., Girshick, R., Dollár, P., Tu, Z., and He, K. (2017). “Aggregated Residual Transformations for Deep Neural Networks,” arXiv:1611.05431.
  • Wah Liew, O., Chong, P., Li, B., and Asundi, K. (2008). “Signature Optical Cues: Emerging Technologies for Monitoring Plant Health”, Sensors, vol. 8, pp. 3205–3239.
  • Viola, P., and Jones, M. (2001). “Robust Real-time Object Detection,” in International Workshop on Statistical and Computational Theories of Vision - Modeling, Computing, and Sampling (Vancouver, BC).
  • Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. (2017). Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:4. doi: 10.1109/TPAMI.2016.2587640
  • Vinyals, O., Toshev, A., Bengio, S., Erhan, D. (2015). “Show and Tell: A Neural Image Caption Generator,” arXiv:1411.4555v2.
  • Van Laarhoven, T. (2017). L2 Regularization versus Batch and Weight Normalization. arXiv:1706.05350.
  • Van Dam, B., Goffau, M., Van Lidt de Jeude, J., and Naika, S. (2005). “Cultivation of Tomato - Production, Processing, and Marketing,” Agromisa Foundation, Wageningen.
  • Turing, A. (1950). “Computing Machinery and Intelligence,” Mind, vol. 59, pp. 433-460.
  • The World Bank. (2014). Reducing Climate-Sensitive Disease Risks. Vol. 1. Available online: http://documents.worldbank.org/curated/en/486511468167944431/Reducing-climate-sensitive-disease-risks (accessed on 20 June 2017).
  • The University of Georgia. (2014). “Commercial Tomato Handbook – Bulletin 1312.” Available at: http://extension.uga.edu/publications/detail.html
  • Tang, Y. (2013). “Deep Learning using Linear Support Vector Machines,” in Proceeding of the Int. Conf. on Machine Learning (ICML 2013).
  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). “Going deeper with convolutions,” in Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9.
  • Sutskever, I., Martens, J., Dahl, G., and Hinton, G. (2013). “On the importance of initialization and momentum in deep learning,” in Proceeding of the Int. Conf. on Machine Learning (ICML 2013).
  • Sun, C., Paluri, M., Collobert, R., Nevatia, R., and Bourdev, L. (2016). “ProNet: learning to propose object-specific boxes for cascaded neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV).
  • Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929-1958.
  • Srinivasan, R. (2010). “Safer tomato production techniques,” The World Vegetable Center, Shanhua.
  • Sladojevic, S., Arsenovic, M., Anderla, A., Culibrk, D., and Stefanovic, D. (2016). Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comp. Intell. Neurosci. 2016: 3289801. doi: 10.1155/2016/3289801
  • Singh, A., Ganapathysubramanian, B., Singh, A., and Sarkar, S. (2016). Machine Learning for High-Throughput Stress Phenotyping in Plants. Trends in Plant Science, 21:2. doi: 10.1016/j.tplants.2015.10.015
  • Singh, A., Ganapathysubramanian, B., Sarkar, S. and Singh, A. (2018). Deep Learning for Plant Stress Phenotyping: Trends and Future Perspectives. Trends in Plant Science, 23:10. doi: 10.1016/j.tplants.2018.07.004
  • Simonyan, K., and Zisserman, A. (2014). “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556.
  • Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). “Mastering the Game of Go without Human Knowledge,” Nature, vol. 550, pp. 354-359. doi: 10.1038/nature24270
  • Shrivastava, A., Gupta, A., and Girshick, R. (2016). “Training region-based object detectors with online hard example mining,” arXiv:1604.03540.
  • Seebold, K. (2008). "Bacterial Canker of Tomato - Plant Pathology Fact Sheet," University of Kentucky - College of Agriculture.
  • Schapire, R. (1999). “A Brief Introduction to Boosting,” in Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, vol. 2, pp. 1401–1406.
  • Savary, S., Ficke, A., Aubertot, J.N., and Hollier, C. (2012). Crop losses due to diseases and their implications for global food production losses and food security. Food Security. 4:519. doi: 10.1007/s12571-012-0200-5
  • Sankaran, S., Mishra, A., and Ehsani, R. (2010). “A review of advanced techniques for detecting plant diseases,” Comput. Electron. Agric., vol. 72, pp. 1–13.
  • Ryant, P., Dolezelova, E., Fabrik, I., Baloum, J., Adam, V., Babula, P., and Kizek, R. (2008). “Electrochemical Determination of Low Molecular Mass Thiols Content in Potatoes (Solanum tuberosum) Cultivated in the Presence of Various Sulphur Forms and Infected by Late Blight (Phytophthora infestans),” Sensors, vol. 8, pp. 3165–3182.
  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision, vol. 115, pp. 211-252.
  • Rumelhart, D., Hinton, G., and Williams, R. (1986). “Learning internal representations by error propagation,” Parallel distributed processing: exploration in the microstructure of cognition, vol. 1, pp. 318-362.
  • Rosenblatt, F. (1958). “The perceptron: A probabilistic model for information storage and organization in the brain,” Psychological Review, vol. 65, no. 6, pp. 386-408.
  • Rifkin, R., and Klautau, A. (2004). “In Defense of One-Vs-All Classification,” Journal of Machine Learning Research, vol. 5, pp. 101-141.
  • Ren, S., He, K., Girshick, R., and Sun, J. (2016). “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, pp. 1137–1149.
  • Redmon, J., and Farhadi, A. (2017). “YOLO9000: better, faster, stronger,” in IEEE Conference on Computer Vision and Pattern Recognition (Honolulu).
  • Redmon, J., and Farhadi, A. (2018). “YOLOv3: An Incremental Improvement,” arXiv:1804.02767v1.
  • Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2015). “You only look once: unified, real-time object detection,” in IEEE Conference on Computer Vision and Pattern Recognition (Boston, MA).
  • Pinheiro, P., Collobert, R., and Dollar, P. (2015). “Learning to segment object candidates,” arXiv:1506.06204v2.
  • Peyraud, R., Dubiella, U., Barbacci, A., Genin, S., Raffaele, S., and Roby, D. (2016). “Advances on plant-pathogen interactions from molecular toward systems biology perspectives,” The Plant Journal, vol. 90, no. 4, pp. 720-737.
  • Pereyra, G., Tucker, G., Chorowski, J., Kaiser, L., and Hinton, G. (2017). “Regularizing neural networks by penalizing confident output distributions,” in International Conference on Learning Representations (Toulon).
  • Pawara, P., Okafor, E., Surinta, O., Schomaker, L., and Wiering, M. (2017). “Comparing Local Descriptors and Bags of Visual Words to Deep Convolutional Neural Networks for Plant Recognition,” in Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2017), pp. 479–486.
  • Pascanu, R., Mikolov, T., and Bengio, Y. (2013). “On the difficulty of training recurrent neural networks,” Proceeding of the Int. Conf. on Machine Learning, vol. 28, pp. 1310-1318 (Atlanta, USA).
  • Owomugisha, G., and Mwebaze, E. (2016). “Machine Learning for Plant Disease Incidence and Severity Measurements from Leaf Images,” in Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA).
  • Nutter, F.W., Jr., Esker, P.D., Coelho, R. (2006). “Disease assessment concepts and the advancements made in improving the accuracy and precision of plant disease data,” Eur. J. Plant Pathol, vol. 115, pp. 95–113.
  • Mutka, A., and Bart, R. (2015). Image-based phenotyping of plant disease symptoms. Frontiers in Plant Science, 5:734. doi: 10.3389/fpls.2014.00734
  • Munyaneza, J.E., Crosslin, J.M., Buchman, J.L., and Sengoda, V.G. (2010). “Susceptibility of Different Potato Plant Growth Stages to Purple Top Disease,” Am. J. Potato Res., vol. 87, pp. 60–66.
  • Mohanty, S. P., Hughes, D., and Salathe, M. (2016). Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 7:1419. doi: 10.3389/fpls.2016.01419
  • Mhaskar, H., and Poggio, T. (2016). “Deep vs. Shallow Networks: An Approximation Theory Perspective,” Analysis and Applications, vol. 14, no. 6, pp. 829-848. doi: 10.1142/S0219530516400042
  • Mhaskar, H., Liao, Q., and Poggio, T. (2017). “When and Why are Deep Networks Better than Shallow Ones,” Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-17).
  • Meroni, M., Rosini, M., Picchi, V., Panigada, C., Cogliati, S., Nali, C., and Colombo, R. (2008). “Assessing Steady-state Fluorescence and PRI from Hyperspectral Proximal Sensing as Early Indicators of Plant Stress: The Case of Ozone Exposure,” Sensors, vol. 8, pp. 1740–1754.
  • Mazarei, M., Teplova, I., Hajimorad, M., and Stewart, C. (2008). “Pathogen Phytosensing: Plants to Report Plant Pathogens,” Sensors, vol. 8, pp. 2628–2641.
  • Martinelli, F., Scalenghe, R., Davino, S., Panno, S., Scuderi, G., Ruisi, P., Villa, P., Stroppiana, D., Boschetti, M., Goulart, L., Davis, C., and Dandekar, A. (2015). Advanced methods of plant disease detection. A review. Agronomy for Sustainable Development, 35:1, 1-25. doi: 10.1007/s13593-014-0246-1
  • Mabvakure, B., Martin, D.P., Kraberger, S., Cloete, L., Van Brunschot, S., Geering, A.D.W., Thomas, J.E., Bananej, K., Lett, J., Lefeuvre, P., et al. (2016). “Ongoing geographical spread of Tomato yellow leaf curl virus,” Virology, vol. 498, pp. 257–264.
  • Lowe, D. (2004). “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. Comput. Vis., vol. 60, pp. 91–110.
  • Liu, W., Wen, Y., Yu, Z., and Yang, M. (2016). “Large-margin Softmax Loss for Convolutional Neural Networks,” in Proceeding of the Int. Conf. on Machine Learning (ICML 2016).
  • Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C., et al. (2016). “SSD: Single Shot MultiBox Detector,” in European Conference on Computer Vision – ECCV (Amsterdam).
  • Liu, B., Zhang, Y., He, D., and Li, Y. (2018). Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry, 10:11. doi: 10.3390/sym10010011
  • Lin, T., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., et al. (2015). “Microsoft COCO: Common Objects in Context,” arXiv:1405.0312v3.
  • Lin, T., Goyal, P., Girshick, R., He, K., and Dollar, P. (2017). “Focal Loss for Dense Object Detection,” in IEEE International Conference on Computer Vision (Venice).
  • Lin, T., Dollar, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. (2017). “Feature Pyramid Networks for Object Detection,” in IEEE Conference on Computer Vision and Pattern Recognition (Honolulu).
  • Lin, M., Chen, Q., and Yan, S. (2013). “Network in Network”. arXiv 2013, arXiv:1312.4400.
  • LeCun, Y., Bottou, L., Orr, G., and Muller, K-R. (1998). “Efficient BackProp,” Neural Networks: Tricks of the Trade, pp. 9-50.
  • LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324. doi: 10.1109/5.726791
  • Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). “ImageNet classification with deep convolutional neural networks,” in Proceeding of the Int. Conference on Neural Inf. Proc. Syst., vol. 1, pp. 1097-1105 (Lake Tahoe, USA).
  • Kiros, R., Zemel, R., and Salakhutdinov, R. (2014). “Multimodal Neural Language Models,” in Proceedings of the International Conference on Machine Learning (ICML), 32:2, 595-603.
  • Kiros, R., Salakhutdinov, R., and Zemel, R. (2014). Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models. arXiv:1411.2539v1.
  • Kawasaki, Y., Uga, H., Kagiwada, S. and Iyatomi, H. (2015). “Basic study of automated diagnosis of viral plant diseases using convolutional neural networks,” in Advances in Visual Computing. Lecture Notes in Computer Science, vol. 9475, eds G. Bebis et al. (Cham, Springer), 638–645.
  • Karpathy, A., and Li, F. F. (2017). Deep Visual-Semantic Alignments for Generating Image Descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:4, 664-676. doi: 10.1109/TPAMI.2016.2598339
  • Johnson, J., Karpathy, A., and Li, F. F. (2016). “DenseCap: Fully Convolutional Localization Networks for Dense Captioning,” in IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas).
  • Johannes, A., Picon, A., Alvarez-Gila, A., Echazarra, J., Rodriguez-Vaamonde, S., Diez-Navajas, A., and Ortiz-Barredo, A. (2017). “Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case,” Comput. Electron. Agric., vol. 138, pp. 200–209.
  • Irudayaraj, J. (2009). “Pathogen Sensors,” Sensors, vol. 9, pp. 8610–8612.
  • Ioffe, S., and Szegedy, C. (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167.
  • Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., Fischer, I., Wojna, Z., Song, Y., Guadarrama, S., et al. (2017). “Speed/accuracy trade-offs for modern convolutional object detectors,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
  • Huang, G., Liu, Z., Van der Maaten, L., and Weinberger, K. (2017). “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (Honolulu).
  • Hu, J., Shen, L., and Sun, G. (2017). “Squeeze-and-Excitation Networks,” in IEEE Conference on Computer Vision and Pattern Recognition (Honolulu).
  • Hochreiter, S., and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, vol. 9, no. 8, pp. 1735-1780. doi: 10.1162/neco.1997.9.8.1735
  • Heuvelink, E. (2005). “Tomatoes,” Crop Production Science and Horticulture.
  • Hernandez-Garcia, A., and Konig, P. (2018). “Data augmentation instead of explicit regularization,” arXiv:1806.03852v3.
  • He, K., Zhang, X., Ren, S., and Sun, J. (2016). “Identity Mappings in Deep Residual Networks,” arXiv:1603.05027.
  • He, K., Zhang, X., Ren, S., and Sun, J. (2016). “Deep residual learning for image recognition,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • He, K., Gkioxari, G., Dollar, P., and Girshick, R. (2017). “Mask R-CNN,” in ICCV International Conference on Computer Vision (Venice).
  • Hanssen, I., Lapidot, M., and Thomma, B. (2010). “Emerging Viral Diseases of Tomato Crops,” Mol. Plant-Microbe Interact., vol. 23, pp. 539–548.
  • Gutierrez-Aguirre, I., Mehle, N., Delic, D., Gruden, K., Mumford, R., and Ravnikar, M. (2009). “Real-time quantitative PCR based sensitive detection and genotype discrimination of Pepino mosaic virus,” J. Virol. Methods, vol. 162, pp. 46–55.
  • Goyal, P., Dollar, P., Girshick, R., Noordhuis, P., Wesolowski, L., et al. (2017). “Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour,” arXiv:1706.02677v2.
  • Goodfellow, I., Bengio, Y., and Courville, A. (2016). “Deep Learning,” MIT Press. Available at: http://www.deeplearningbook.org
  • Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2015). “Region-based Convolutional Networks for Accurate Object Detection and Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
  • Girshick, R. (2015). “Fast R-CNN,” in IEEE International Conference on Computer Vision (ICCV).
  • Gilbertson, R.L., and Batuman, O. (2013). “Emerging Viral and Other Diseases of Processing Tomatoes: Biology, Diagnosis and Management,” Acta Hortic., vol. 1, pp. 35–48.
  • Georgoulis, S., Rematas, K., and Ritschel, T. (2017). “What is around the camera,” in Proceedings of the IEEE International Conference on Computer Vision.
  • Fukushima, K. (1980). “Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position,” Biological Cybernetics, vol. 36, pp. 193-202.
  • Fujita, E., Kawasaki, Y., Uga, H., Kagiwada, S., and Iyatomi, H. (2016). “Basic investigation on a robust and practical plant diagnosis system,” in Proceedings of the 2016 IEEE International Conference on Machine Learning and Applications (Anaheim, USA). doi: 10.1109/ICMLA.2016.0178
  • Fuentes, A., Youngki, H., Lee, Y., Yoon, S., and Park, D. S. (2016). “Characteristics of Tomato Diseases – A Study for Tomato Plant Diseases Identification,” in Proceedings of the International Symposium on Information Technology Convergence (Shanghai, China).
  • Fuentes, A., Yoon, S., Lee, J., and Park, D. S. (2018). High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System With Refinement Filter Bank. Front. Plant Sci. 9, 1162. doi: 10.3389/fpls.2018.01162
  • Fuentes, A., Im, D. H., Yoon, S., and Park, D. S. (2017). “Spectral Analysis of CNN for Tomato Diseases Identification,” in Artificial Intelligence and Soft Computing, Lecture Notes in Computer Science, ICAISC 2017, eds L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada (Cham, Springer), 40-51. doi: 10.1007/978-3-319-59063-9_4
  • Fuentes, A., Yoon, S., Kim, S. C., and Park, D. S. (2017). A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition. Sensors, 17:2022. doi: 10.3390/s17092022
  • Food and Agriculture Organization of the United Nations. (2017). “Plant health and food security,” International Plant Protection Convention, Rome.
  • Food and Agriculture Organization of the United Nations. (2017). “Plant Pests and Diseases,” Available online: http://www.fao.org/emergencies/emergency-types/plant-pests-and-diseases/en/ (accessed on 20 June 2017).
  • Food and Agriculture Organization of the United Nations. (2012). “World Agriculture towards 2030/2050,” Rome.
  • Food and Agriculture Organization of the United Nations. (2009). “How to Feed the World in 2050,” Rome.
  • Food and Agriculture Organization of the United Nations. (2009). “Averting Risks to the Food Chain,” Rome.
  • Ferentinos, K. P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture. 145, 311-318. doi: 10.1016/j.compag.2018.01.009
  • Everingham, M., Eslami, S., Gool, L., Williams, C., Winn, J., and Zisserman, A. (2015). “The Pascal Visual Object Classes Challenge: A Retrospective,” International Journal of Computer Vision, vol. 111, no. 1, pp. 98-136. doi: 10.1007/s11263-014-0733-5
  • Donatelli, M., Magarey, R.D., Bregaglio, S., Willocquet, L., Whish, J.P.M., and Savary, S. (2017). Monitoring the impacts of pests and diseases on agricultural systems. Agricultural Systems. 155, 213-224. doi: 10.1016/j.agsy.2017.01.019
  • Donahue, J., Hendricks, L. A., Guadarrama, S., and Rohrbach, M. (2017). Long-term Recurrent Convolutional Networks for Visual Recognition and Description. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:4, 677-691. doi: 10.1109/TPAMI.2016.2599174
  • Diaz-Pendon, J.A., Canizares, M.C., Moriones, E., Bejarano, E.R., Czosnek, H., Navas-Castillo, J. (2010). “Tomato yellow leaf curl viruses: Menage a trois between the virus complex, the plant and whitefly vector,” Mol. Plant Pathol., vol. 11, pp. 414–450.
  • Dalal, N., and Triggs, B. (2005). “Histograms of Oriented Gradients for Human Detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 886–893.
  • Dai, J., Li, Y., He, K., Sun, J. (2016). “R-FCN: Object Detection via Region-based Fully Convolutional Networks,” arXiv 2016, arXiv:1605.06409v2.
  • Cugu, I., Sener, E., Erciyes, C., Balci, B., Akin, E., Onal, I., and Oguz-Akyuz, A. (2017). “Treelogy: A Novel Tree Classifier Utilizing Deep and Hand-crafted Representations,” arXiv:1701.08291v1.
  • Cortes, C., and Vapnik, V. (1995). “Support-Vector Networks,” Mach. Learn., vol. 20, pp. 273–297.
  • Cornell University. (2014). “Leaf Mold in High Tunnel Tomatoes,” Cornell University – Cooperative Extension.
  • Coakley, S.M., Scherm, H., Chakraborty, S. (1999). “Climate Change and Plant Disease Management,” Annu. Rev. Phytopathol., vol. 37, pp. 399–426.
  • Chaerani, R., and Voorrips, R.E. (2006). “Tomato early blight (Alternaria solani): The pathogens, genetics, and breeding for resistance,” J. Gen. Plant Pathol., vol. 72, pp. 335–347.
  • Canizares, M.C., Rosas-Diaz, T., Rodriguez-Negrete, E., Hogenhout, S.A., Bedford, I.D., Bejarano, E.R., Navas-Castillo, J., and Moriones, E. (2015). “Arabidopsis thaliana, an experimental host for tomato yellow leaf curl disease-associated begomoviruses by agroinoculation and whitefly transmission,” Plant Pathol., vol. 64, pp. 265–271.
  • Bottou, L. (2010). “Large-Scale Machine Learning with Stochastic Gradient Descent,” In Lechevallier, Y., Saporta, G. (eds) Proceedings of the COMPSTAT’2010. Physica-Verlag HD.
  • Bock, C.H., Poole, G.H., Parker, P.E., and Gottwald, T.R. (2007). “Plant Disease Sensitivity Estimated Visually, by Digital Photography and Image Analysis, and by Hyperspectral Imaging,” Crit. Rev. Plant Sci., vol. 26, pp. 59–107.
  • Bloice, M., Stocker, C., and Holzinger, A. (2017). Augmentor: an image augmentation library for machine learning. arXiv:1708.04680.
  • Bishop, C. (2006). “Pattern Recognition and Machine Learning,” Springer-Verlag, Heidelberg.
  • Bernardi, R., Cakici, R., Elliot, D., Erdem, A., Erdem, E., Ikizler, N., Keller, F., Muscat, A., and Plank, B. (2016). Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures. arXiv:1601.03896v2.
  • Bergougnoux, V. (2013). “The history of tomato: From domestication to biopharming,” Biotechnology Advances, vol. 32, pp. 170-189.
  • Bauriegel, E., and Herppich, W. (2014). “Hyperspectral and Chlorophyll Fluorescence Imaging for Early Detection of Plant Diseases, with Special Reference to Fusarium spec. Infections on Wheat,” Agriculture, vol. 4, no. 1, pp. 32-57. doi: 10.3390/agriculture4010032
  • Bardin, M., Ajouz, S., Comby, M., Lopez-Ferber, M., Graillot, B., Siegwart, M., and Nicot, P. (2015). “Is the efficacy of biological control against plant diseases likely to be more durable than that of chemical pesticides?,” Frontiers in Plant Science, vol. 6, article 566.
  • Barbedo, J. G. A. (2018). Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Computers and Electronics in Agriculture. 153, 46-53. doi: 10.1016/j.compag.2018.08.013
  • Barbedo, J. (2018). Factors influencing the use of deep learning for plant disease recognition. Biosystems Engineering. 172, 84-91. doi: 10.1016/j.biosystemseng.2018.05.013
  • Bahdanau, D., Cho, K., and Bengio, Y. (2015). “Neural Machine Translation by Jointly Learning to Align and Translate,” in Proceedings of the International Conference on Learning Representations (ICLR). arXiv:1409.0473v7.
  • Araus, J. L., Kefauver, S. C., Zaman-Allah, M., Olsen, M., and Cairns, J. (2018). Translating High-Throughput Phenotyping into Genetic Gain. Trends in Plant Science, 23:5, 451-466. doi: 10.1016/j.tplants.2018.02.001
  • Amara, J., Bouaziz, B., and Algergawy, A. (2017). “A deep learning-based approach for banana leaf diseases classification,” BTW 2017 - Workshopband, Lecture Notes in Informatics (LNI), Gesellschaft für Informatik (Bonn), 79–88.
  • Alvarez, A.M. (2004). “Integrated approaches for detection of plant pathogenic bacteria and diagnosis of bacterial diseases,” Annu. Rev. Phytopathol., vol. 42, pp. 339–366.