Xiong, W., Droppo, J., Huang, X., Seide, F., Seltzer, M., Stolcke, A., Yu, D., & Zweig, G. (2016). Achieving Human Parity in Conversational Speech Recognition. arXiv preprint arXiv:1610.05256.
XOR 문제와 Neural Network [The XOR problem and neural networks]. Business Intelligence Research Center. Retrieved November 9, 2018, from http://www.birc.co.kr/2018/01/22/xor-문제와-neural-network/
Williams, C. K. (1997). Computing with Infinite Networks. In Advances in Neural Information Processing Systems.
Widrow, B., & Hoff, M. E. (1960). Adaptive Switching Circuits. IRE WESCON Convention Record, 4, 96-104.
Werbos, P. (1974). Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. dissertation, Harvard University.
Van Rijsbergen, C. J. (1979). Information Retrieval. London: Butterworths.
Van Gerven, M., & Bohte, S. (Eds.). (2018). Artificial Neural Networks as Models of Neural Information Processing. Frontiers Media SA.
Tatsuoka, K. K. (1983). Rule Space: An Approach for Dealing with Misconceptions Based on Item Response Theory. Journal of Educational Measurement, 20(4), 345-354.
Stanford Encyclopedia of Philosophy [Online encyclopedia]. (2015).
Sun, J. (2017). Feedforward Neural Networks. Retrieved November 9, 2018, from https://www.cc.gatech.edu/~san37/post/dlhc-fnn/
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1), 1929-1958.
Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian Optimization of Machine Learning Algorithms. In Advances in Neural Information Processing Systems.
Shu, Z., Henson, R., & Willse, J. (2013). Using Neural Network Analysis to Define Methods of DINA Model Estimation for Small Sample Sizes. Journal of Classification, 30(2), 173-194.
Shang, W., Sohn, K., Almeida, D., & Lee, H. (2016, June). Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. In International Conference on Machine Learning.
Sese, A., Palmer, A. L., & Montano, J. J. (2004). Psychometric Measurement Models and Artificial Neural Networks. International Journal of Testing, 4(3), 253-266.
Seni, G., & Elder, J. F. (2010). Ensemble Methods in Data Mining: Improving Accuracy through Combining Predictions. Synthesis Lectures on Data Mining and Knowledge Discovery, 2(1), 1-126.
Seide, F., Li, G., & Yu, D. (2011). Conversational Speech Transcription Using Context-dependent Deep Neural Networks. In Twelfth Annual Conference of the International Speech Communication Association.
Samuel, A. L. (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3), 210-229.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Li, F. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 211-252.
Rumelhart, D. E., & McClelland, J. L. (1986). Parallel Distributed Processing, Vol. 1: Explorations in the Microstructure of Cognition. MIT Press.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning Representations by Back-propagating Errors. Nature, 323(October), 533-536.
Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review, 65(6), 386-408.
Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. MIT Press.
Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Swish: a Self-gated Activation Function. arXiv preprint arXiv:1710.05941.
R Core Team. (2018). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/.
Neal, R. (1994). Priors for Infinite Networks (Technical Report No. CRG-TR-94-1). University of Toronto.
Naver (2018). Hybrid DNN Text-to-Speech: A DNN Technique for Text-to-Speech and Voice Synthesis.
Nair, V., & Hinton, G. E. (2010). Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10).
Murphy, K. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
Müller, A. C., & Guido, S. (2016). Introduction to Machine Learning with Python. O’Reilly Media, Inc.
Moridis, C. N., & Economides, A. A. (2009). Prediction of Student’s Mood during an Online Test Using Formula-based and Neural Network-based Method. Computers & Education, 53(3), 644-652.
Minsky, M. L., & Papert, S. A. (1987). Perceptrons: An Introduction to Computational Geometry. MIT Press.
McKinley, J. C., & Hathaway, S. R. (1940). A Multiphasic Personality Schedule (Minnesota): II. A Differential Study of Hypochondriasis. The Journal of Psychology, 10(2), 255-268.
McCulloch, W. S., & Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5(4), 115-133.
McCullagh, P., & Nelder, J. A. (1992). Generalized Linear Models. Chapman & Hall.
McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. AK Peters/CRC Press.
McClelland, J., & Rumelhart, D. (1988). Explorations in Parallel Distributed Processing. Cambridge, MA: MIT Press.
Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is Psychology Suffering from a Replication Crisis? American Psychologist, 70, 487-498.
Marsland, S. (2014). Machine Learning: An Algorithmic Perspective (2nd Ed.). Chapman and Hall.
Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013, June). Rectifier Non-linearities Improve Neural Network Acoustic Models. In Proceedings of ICML, 30(1), 3.
Leighton, J. P., Gierl, M. J., & Hunka, S. M. (2004). The Attribute Hierarchy Method for Cognitive Assessment: A Variation on Tatsuoka's Rule‐Space Approach. Journal of Educational Measurement, 41(3), 205-237.
Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., & Sohl-Dickstein, J. (2017). Deep Neural Networks as Gaussian Processes. arXiv preprint arXiv:1711.00165.
LeCun, Y., Boser, B. E., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W. E., & Jackel, L. D. (1989). Backpropagation Applied to Handwritten Zip Code Recognition. Neural Computation, 1(4), 541-551.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11), 2278-2324.
Lantz, B. (2015). Machine Learning with R (2nd Ed.). Birmingham: Packt Publishing Ltd.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems.
Kohonen, T. (1989). Self-organization and Associative Memory (3rd Ed.). New York: Springer-Verlag.
Klein, R. A., Ratliff, K. A., Vianello, M., Adams Jr, R. B., Bahník, Š., Bernstein, M. J., ... & Cemalcilar, Z. (2014). Investigating Variation in Replicability: A “Many Labs” Replication Project. Social Psychology, 45, 142-152.
Kingma, D. P., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
Johnson, M. (2017). How Much Data is Enough? Predicting How Accuracy Varies with Training Data Size. Sydney, Australia: Macquarie University.
Ioffe, S., & Szegedy, C. (2015, June). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning.
Huebner, A. (2010). An Overview of Recent Developments in Cognitive Diagnostic Computer Adaptive Assessments. Practical Assessment, 15(3), 1-7.
Hopfield, J. J. (1982). Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proceedings of the National Academy of Sciences, 79(8), 2554-2558.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A. R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., & Kingsbury, B. (2012). Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Processing Magazine, 29(6), 82-97.
Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18(7), 1527-1554.
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the Dimensionality of Data with Neural Networks. Science, 313(July), 504-507.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving Deep into Rectifiers: Surpassing Human-level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision.
Harp, S. A., Samad, T., & Villano, M. (1995). Modeling Student Knowledge with Self-organizing Feature Maps. IEEE Transactions on Systems, Man, and Cybernetics, 25(5), 727-737.
Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of Item Response Theory. Sage Publications.
Hambleton, R. K., & Swaminathan, H. (2013). Item Response Theory: Principles and Applications. Springer Science & Business Media.
Hahnloser, R. H., Sarpeshkar, R., Mahowald, M. A., Douglas, R. J., & Seung, H. S. (2000). Digital Selection and Analogue Amplification Coexist in a Cortex-inspired Silicon Circuit. Nature, 405(June), 947-951.
Gregory, R. J. (2004). Psychological Testing: History, Principles, and Applications. Allyn & Bacon.
Green, R. A., & Michael, W. B. (1998). Using Neural Network and Traditional Psychometric Procedures in the Analysis of Test Scores: An Exploratory Study. Educational Research Quarterly, 22(2), 52-61.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. Cambridge: MIT Press.
Glorot, X., & Bengio, Y. (2010, March). Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics.
Gierl, M. J., Wang, C., & Zhou, J. (2008). Using the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees' Cognitive Skills in Algebra on the SAT. Journal of Technology, Learning, and Assessment, 6(6).
Gierl, M. J., Leighton, J. P., & Hunka, S. M. (2007). Using the Attribute Hierarchy Method to Make Diagnostic Inferences about Examinees' Cognitive Skills.
Gierl, M. J., Cui, Y., & Hunka, S. (2008). Using Connectionist Models to Evaluate Examinees’ Response Patterns to Achievement Tests. Journal of Modern Applied Statistical Methods, 7(1), 234-245.
Gierl, M. J., Cui, Y., & Hunka, S. (2007, April). Using Connectionist Models to Evaluate Examinees’ Response Patterns on Tests: An Application of the Attribute Hierarchy Method to Assessment Engineering. Paper presented at the Annual Meeting of the National Council on Measurement in Education (NCME), Chicago, Il.
Gallant, S. I. (1993). Neural Network Learning and Expert Systems. MIT Press.
Fukushima, K., & Miyake, S. (1982). Neocognitron: A New Algorithm for Pattern Recognition Tolerant of Deformations and Shifts in Position. Pattern Recognition, 15(6), 455-469.
Freund, Y., & Schapire, R. (1997). A Decision-theoretic Generalization of Online Learning and an Application to Boosting. Journal of Computer and System Sciences, 55, 119-139.
Flach, P. (2012). Machine Learning: the Art and Science of Algorithms That Make Sense of Data. Cambridge: Cambridge University Press.
Erhan, D., Bengio, Y., Courville, A., Manzagol, P. A., Vincent, P., & Bengio, S. (2010). Why Does Unsupervised Pre-training Help Deep Learning? Journal of Machine Learning Research, 11(Feb), 625-660.
Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12(Jul), 2121-2159.
Di Nuovo, A. G., Di Nuovo, S., Buono, S., & Catania, V. (2009, June). Feedforward Artificial Neural Network to Estimate IQ of Mental Retarded People from Different Psychometric Instruments. In 2009 International Joint Conference on Neural Networks (IJCNN). IEEE.
Di Nuovo, A. G., Di Nuovo, S., & Buono, S. (2012). Intelligent Quotient Estimation of Mental Retarded People from Different Psychometric Instruments Using Artificial Neural Networks. Artificial Intelligence in Medicine, 54(2), 135-145.
De Vreese, L. P., Gomiero, T., Uberti, M., De Bastiani, E., Weger, E., Mantesso, U., & Marangoni, A. (2015). Functional Abilities and Cognitive Decline in Adult and Aging Intellectual Disabilities. Psychometric Validation of an Italian Version of the Alzheimer's Functional Assessment Tool (AFAST): Analysis of Its Clinical Significance with Linear Statistics and Artificial Neural Networks. Journal of Intellectual Disability Research, 59(4), 370-384.
De Ayala, R. J. (2013). The Theory and Practice of Item Response Theory. Guilford Publications.
Cui, Y., Gierl, M., & Guo, Q. (2016). Statistical Classification for Cognitive Diagnostic Assessment: An Artificial Neural Network Approach. Educational Psychology, 36(6), 1065-1082.
Cohen, R. J., & Swerdlik, M. E. (2002). Psychological Testing and Assessment: An Introduction to Tests and Measurement (5th ed.). New York: McGraw-Hill.
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.
Clevert, D. A., Unterthiner, T., & Hochreiter, S. (2015). Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). arXiv preprint arXiv:1511.07289.
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological Test Usage: Implications in Professional Psychology. Professional Psychology: Research and Practice, 31(2), 141-154.
Brown, T. A. (2014). Confirmatory Factor Analysis for Applied Research. Guilford Publications.
Briggs, D. C., & Circi, R. (2017). Challenges to the Use of Artificial Neural Networks for Diagnostic Classifications with Student Test Data. International Journal of Testing, 17(4), 302-321.
Breiman, L. (2001). Random Forests. Machine Learning, 45, 5-32.
Breiman, L. (1996). Heuristics of Instability and Stabilization in Model Selection. Annals of Statistics, 24, 2350-2383.
Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
Binet, A., & Simon, T. (1916). The Development of Intelligence in Children: The Binet-Simon Scale. Williams & Wilkins Company.
Allen, M. J., & Yen, W. M. (1979). Introduction to Measurement Theory. Cole Publications.
Alessandri, M., Heiden, L. A., & Dunbar-Welter, M. (1995). History and Overview. In Heiden, L. A., & Hersen, M. (eds.), Introduction to Clinical Psychology. New York: Plenum Press.
Ahmed, M. (2017). On Improving The Performance And Resource Utilization of Consolidated Virtual Machines: Measurement, Modeling, Analysis, and Prediction. Ph.D. dissertation, Information Technologies, University of Sydney.
Agresti, A. (2007). An Introduction to Categorical Data Analysis (2nd Ed.). New York: John Wiley & Sons Inc.