Topic-based citation impact of '채점의 일관성 유형에 따른 국어교사의 쓰기 평가 특성 분석 = An Analysis of the Writing Assessment of Korean Language Teachers According to the Consistency Patterns'
Citation impact summary
Topics: many-facet Rasch model, writing scoring, writing assessment, writing assessment expertise, rating consistency, rater training, rater type, rater bias
Total papers on the same topics: 143
Total citations received: 0
Average topic-based citation impact: 0.0%
Topic-based citation impact

Topic                          Papers   Citations   Impact
many-facet Rasch model           16        0         0.0%
writing scoring                  10        0         0.0%
writing assessment               98        0         0.0%
writing assessment expertise     12        0         0.0%
rating consistency                2        0         0.0%
rater training                    1        0         0.0%
rater type                        1        0         0.0%
rater bias                        3        0         0.0%
Total                           143        0         0.0%

* Citations received from papers with other keywords: 0
References of '채점의 일관성 유형에 따른 국어교사의 쓰기 평가 특성 분석 = An Analysis of the Writing Assessment of Korean Language Teachers According to the Consistency Patterns'
Wright, B. D., Linacre, J. M., Gustafson, J. E. & Martin-Löf, P.(1994), Reasonable mean-square fit values, Rasch Measurement Transactions, 8(3), 369-386.
Wright, B. D. & Linacre, J. M.(1991), BIGSTEPS computer program for Rasch measurement, Chicago: MESA Press.
Wolfe, E. W., Kao, C. W., & Ranney, M.(1998), Cognitive differences in proficient and nonproficient essay scorers, Written Communication, 15, 465-492.
Wolfe, E. W. (1997). The relationship between essay reading style and scoring proficiency in a psychometric scoring system. Assessing Writing, 4(1), 83-106.
Wiseman, S.(1949), The marking of English composition in English grammar school selection, British Journal of Educational Psychology, 19(3), 200-209.
Wiseman, C. S.(2012), Rater effects: Ego engagement in rater decision-making. Assessing Writing, 17(3), 150-173.
Wind, S. A. & Engelhard Jr, G.(2013), How invariant and accurate are domain ratings in writing assessment?, Assessing Writing, 18(4), 278-299.
Wigglesworth, G.(1993), Exploring bias analysis as a tool for improving rater consistency in assessing oral interaction, Language Testing, 10(3), 305-335.
Whithaus, C., Harrison, S. B. & Midyette, J.(2008), Keyboarding compared with handwriting on a high-stakes writing assessment, Assessing Writing, 13(1), 4-25.
White, E. M. (1984). Holisticism. College Composition and Communication, 35, 400-409.
Weigle, S. C. (2002). Assessing writing. Cambridge, UK: Cambridge University Press.
Weigle, S. C. (1998). Using FACETS to model rater training effects. Language Testing, 15(2), 263-287.
Voss, J. F., & Post, T. A.(1988), On the solving of ill-structured problems. In M. Chi., R. Glaser, & M. J. Farr(eds.), The nature of expertise(pp. 261-285), New Jersey: Lawrence Erlbaum Associates.
Vincelette, E. J. & Bostic, T.(2013), Show and tell: Student and instructor perceptions of screencast assessment, Assessing Writing, 18(4), 257-277.
Vaughan, C.(1991), Holistic assessment: What goes on in the rater’s mind? In L. Hamp-Lyons(eds.), Assessing Second Language Writing in Academic Context(pp. 111-125), Norwood, NJ: Ablex.
Tsui, A. B. M.(2003), Understanding Expertise in Teaching: Case Studies of Second Language Teachers, UK: Cambridge University Press.
Thorndike, E. L.(1920), A constant error in psychological ratings, Journal of Applied Psychology, 4, 25-29.
Teasdale, A., & Leung, C.(2000), Teacher assessment and psychometric theory: A case of paradigm crossing?, Language Testing, 17(2), 163-184.
Sweedler-Brown, C. O.(1985), The influence of training and experience on holistic essay evaluation, The English Journal, 74(5), 49-55.
Stuhlmann, J., Daniel, C., Dellinger, A., Denny, R. K. & Powers, T.(1999), A generalization study of the effects of training on teachers' abilities to rate children's writing using a rubric, Journal of Reading Psychology, 20, 107-127.
Stewart, M. F. & Grobe, C. H.(1979), Syntactic maturity, mechanics of writing, and teachers quality ratings, Research in the Teaching of English, 13, 207-215.
Stevens, D. D. & Levi, A. J.(2012), Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning, Sterling, VA: Stylus Publishing.
Song, B., & Caruso, I.(1996). Do English and ESL faculty differ in evaluating the essays of native English-speaking and ESL students? Journal of Second Language Writing, 5(2), 163-182.
Smallwood, M. L.(1935), An Historical Study of Examinations and Grading Systems in Early American Universities : a Critical Study of the Original Records of Harvard, Cambridge, MA: Harvard University Press.
Simon, H.(1978), Information-processing theory of human problem solving. In W. K. Estes(eds.), Handbook of learning and cognitive processes: Vol. V, Human information processing, Hillsdale, NJ: Erlbaum.
Scullen, S. E., Mount, M. K., and Goff, M.(2000), Understanding the latent structure of job performance ratings, Journal of Applied Psychology, 85, 956-970.
Schafer, W. D. (1993), Assessment literacy for teachers. Theory Into Practice, 32(2), 118-126.
Scannell, D. P. & Marshall, J. C.(1966), The effect of selected composition errors on grades assigned to essay examinations, American Educational Research Journal, 3(2), 125-130.
Saxton, E., Belanger, S., & Becker, W.(2012), The Critical Thinking Analytic Rubric(CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments, Assessing Writing, 17(4), 251-270.
Saal, F. E., Downey, R. G. & Lahey, M. A.(1980), Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88(2), 413-428.
Rudner, L. M., & Schafer, W. D.(2002), What Teachers Need to Know about Assessment, Washington, DC: National Education Association.
Roulis, E.(1995), Gendered voice in composing, gendered voice in evaluating: Gender and the assessment of writing quality. In D. L. Rubin(eds.), Composing Social Identity in Written Language (pp. 151-185), Hillsdale, NJ: Lawrence Erlbaum.
Purves, A. C.(1984), In search of an internationally-valid scheme for scoring compositions, College Composition and Communication, 35(4), 426-438.
Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26(1), 108-122.
Pula, J. J. & Huot, B. A.(1993), A model of background influences on holistic raters, In M. M. Williamson & B. A. Huot(eds.), Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations(pp. 237-265), Cresskill, NJ:Hampton.
Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1-15.
Plake, B. S., Impara, J. C., & Fager, J. J. (1993). Assessment competencies of teachers: A national survey. Educational Measurement: Issues and Practice, 12(4), 10-12.
Peterson S., & Kennedy, K.(2006), Sixth grade teachers' written comments on student writing: Genre and gender influences, Written Communication, 23(1), 36-62.
Osborne, J., & Walker, P. (2014). Just ask teachers: Building expertise, trusting subjectivity, and valuing difference in writing assessment. Assessing Writing, 22, 33-47.
Osborn Popp, S. E., Ryan, J. M. & Thompson, M. S.(2009), The critical role of anchor paper selection in writing assessment. Applied Measurement in Education, 22(3), 255-271.
Oliver, D. W. & Shaver, J. P.(1966). Teaching public issues in the high school, Boston: Houghton Mifflin.
Myford, C. M. & Wolfe, E. W.(2004), Detecting and Measuring Rater Effects Using Many-Facet Rasch Measurement: Part II, Journal of Applied Measurement, 5(2), 189-227.
Myford, C. M. & Wolfe, E. W.(2003), Detecting and Measuring Rater Effects Using Many-Facet Rasch Measurement: Part I, Journal of Applied Measurement, 4(4), 386-422.
Murphy, S. & Yancey, K. B.(2007), Construct and consequence: Validity in writing assessment. In Bazerman, C.(eds.), Handbook of Research on Writing: History, Society, School, Individual, Text(pp. 365-385), NY: Lawrence Erlbaum Associates.
Murphy, K. R., & Balzer, W. K.(1989), Rater errors and rating accuracy, Journal of Applied Psychology, 74(4), 619.
Mitchell, R.(1992), Testing for Learning, NY: Simon and Schuster.
Moss, P. A.(1994), Can there be validity without reliability?, Educational Researcher, 23(2), 5-12.
Milanovic, M., Saville, N. & Shuhong, S.(1996), A study of decision-making behaviour of composition markers, In M. Milanovic & N. Saville(eds.), Studies in Language Testing 3(pp. 92-114), Cambridge, UCLES: Cambridge University Press.
Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5-11.
Mertler, C. A.(2004), Secondary teachers' assessment literacy: Does classroom experience make a difference?, American Secondary Education, 33(1), 49-64.
Meristo, M. & Eisenschmidt, E.(2014), Novice teachers’ perceptions of school climate and self-efficacy, International Journal of Educational Research, 67, 1-10.
McNamara, T. & Roever, C.(2006), Language Testing: The Social Dimension, Malden, MA & Oxford: Blackwell
Mabry, L. (1999), Writing to the Rubric: Lingering Effects of Traditional Standardized Testing on Direct Writing Assessment, Phi Delta Kappan, 80(9), 673-79.
Lunz, M. E., Wright, B. D. & Linacre, J. M.(1990), Measuring the Impact of Judge Severity on Examination Scores, Applied Measurement in Education, 3(4), 331-345.
Lumley, T.(2005), Assessing second language writing: The rater’s perspective (Vol. 3). Frankfurt: Peter Lang.
Lumley, T. (2002). Assessment criteria in a large-scale writing test: What do they really mean to the raters? Language Testing, 19(3), 246-276.
Lumley, T. & McNamara, T, F.(1995), Rater characteristics and rater bias: Implications for training, Language Testing, 12(1), 54-71.
Linacre, J. M. (1989). Many-facet Rasch measurement. Chicago:MESA Press.
Leckie, G. & Baird, J.(2011), Rater effects on essay scoring: A multilevel analysis of severity drift, central tendency, and rater experience, Journal of Educational Measurement, 48(4), 399-418.
Korman, A. K.(1971), Expectancies as determinants of performance, Journal of Applied Psychology, 55, 218-222.
Knoch, U.(2011), Rating scales for diagnostic assessment of writing: What should they look like and where should the criteria come from?, Assessing Writing, 16(2), 81-96.
Knoch, U.(2011), Diagnostic Writing Assessment, Frankfurt: Peter Lang.
Klein, J. & Taub, D.(2005), The effect of variations in handwriting and print on evaluation of student essays. Assessing Writing, 10(2), 134-148.
Klassen, R. M., Tze, V. M., Betts, S. M., & Gordon, K. A. (2011). Teacher efficacy research 1998-2009: Signs of progress or unfulfilled promise? Educational Psychology Review, 23(1), 21-43.
Kingsbury, F. A. (1922), Analyzing Ratings and Training Raters, Journal of Personnel Research, 1, 377-383.
Kegley, P. H. (1986). The effect of mode discourse on student writing performance: Implications for policy. Educational Evaluation and Policy Analysis, 8(2), 147-154.
Abu Kassim, N. L.(2011), Judging behaviour and rater errors: An application of the Many-facet Rasch Model, GEMA Online Journal of Language Studies, 11(3), 179-197.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.
Huot, B.(1993), The influence of holistic scoring procedures on reading and rating student essays, In M. M. Williamson & B. Huot (eds.), Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations(pp. 206-236), Cresskill, NJ: Hampton Press.
Hoyt, W. T.(2000), Rater bias in psychological research: When is it a problem and what can we do about it?, Psychological Methods, 5(1), 64.
Hoyt, W. T. & Kerns, M.(1999), Magnitude and moderators of bias in observer ratings: A meta-analysis. Psychological Methods, 4, 403-424.
Huot, B.(1996), Toward a new theory of writing assessment, College Composition and Communication, 47, 549-566.
Hayes, J. R.(1996), A new framework for understanding cognition and affect in writing. In C. M. Levy and S. Ransdell(eds.), The Science of Writing: Theories, Methods, Individual Differences and Applications, Mahwah, NJ:Lawrence Erlbaum Associates.
Harris, W. H.(1977), Teacher Response to student writing: A study of the response patterns of high school english teachers to determine the basis for teacher, Research in the Teaching of English, 11(2), 175-185.
Hamp-Lyons, L.(1991), Assessing Second Language Writing in Academic Context, Norwood, NJ: Ablex.
Guilford, J. P. (1954). Psychometric methods. New York: McGraw-Hill
Graham, S., Harris, K. R., MacArthur, C. & Fink, B.(2002), Primary grade teachers' theoretical orientations concerning writing instruction: construct validation and a nationwide survey, Contemporary Educational Psychology, 27(2), 147-166.
Graham, S., Harris, K. R., Barbara, C. & MacArthur, C. A.(2001), Teacher efficacy in writing: A construct validation with primary grade teachers, Scientific Studies of Reading, 5(2), 177-202.
Graham, M., Milanowski, A. & Miller, J.(2012), Measuring and Promoting Inter-Rater Agreement of Teacher and Principal Performance Ratings, Washington, DC : Center for Educator Compensation Reform.
Flower, L. S., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32, 365-387.
Farrokhi, F. & Esfandiari, R.(2011), A Many-facet Rasch Model to Detect Halo Effect in Three Types of Raters, Theory and Practice in Language Studies, 1(11), 1531-1540.
Engelhard, G.(1994), Examining Rater Errors in the Assessment of Written Composition with a Many-Faceted Rasch Model, Journal of Educational Measurement, 31(2), 93-112.
Engelhard, G. (2002), Monitoring raters in performance assessments. In G. Tindal and T. Haladyna(eds.), Large-Scale Assessment Programs for All Students: Development, Implementation, and Analysis(pp. 261-287), Mahwah, NJ: Lawrence Erlbaum Associates.
Elder, C., Knoch, U., Barkhuizenm G. & von Randow, J.(2005), Individual feedback to enhance rater training: Does it work?, Language Assessment Quarterly, 2, 175-196.
Eckes, T.(2011), Introduction to Many-Facet Rasch Measurement, Frankfurt: Peter Lang.
Eckes, T.(2008), Rater types in writing performance assessment: A classification approach to rater variability, Language Testing, 25(2), 155-185.
Eckes, T. (2012). Operational Rater Types in Writing Assessment: Linking Rater Cognition to Rater Behavior. Language Assessment Quarterly, 9, 270-292.
Duncan P. W., Bode R. K., Lai S. M. & Perera S.(2003), Rasch analysis of a new stroke-specific outcome scale: The stroke impact scale, Archive of Physical Medicine and Rehabilitation, 84(7), 950-963.
Diederich, P. B.(1974), Measuring Growth in English. Urbana, IL: National Council of Teacher of English.
DeRemer, M. L. (1998). Writing assessment: Raters' elaboration of the rating task. Assessing Writing, 5(1), 7-29.
Cumming, A. (1990). Expertise in evaluating second language compositions. Language Testing, 7(1), 31-51.
Cooper, C. R.(1977), Holistic evaluation of writing. In Cooper, C. R. & Odell, L.(eds.), Evaluating Writing: Describing, Measuring, Judging(pp. 3-31), Urbana, IL: National Council of Teachers.
Cooper, C. R. & Odell, L.(1977), Evaluating Writing: Describing, Measuring, Judging, Urbana, IL: National Council of Teachers.
Cicchetti, D. V., Showalter, D., & Tyrer, P. J.(1985), The effect of number of rating scale categories on levels of interrater reliability: A Monte Carlo investigation, Applied Psychological Measurement, 9(1), 31-36.
Camp, R.(1993), The place of portfolios in our changing views of writing assessment, In Ward, W. C. & Bennett, R. E.(eds.), Construction Versus Choice in Cognitive Measurement: Issues in Constructed Response, Performance Testing, and Portfolio Assessment(pp. 183-212), Hillsdale, NJ: Lawrence Erlbaum Associates Publishers.
Brown, E. M.(1968), Influence of training, method, and relationship of the halo effect. Journal of Applied Psychology, 52(3), 195.
Bouwer, R., Béguin, A., Sanders, T. & van den Bergh, H.(2015), Effect of genre on the generalizability of writing scores, Language Testing, 32(1), 83-100.
Bond, T. G., & Fox, C. M.(2001), Applying the Rasch Model: Fundamental measurement in the human sciences, Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Betts, E. A.(1946), Foundations of Reading Instruction, with Emphasis on Differentiated Guidance, Oxford, England: American Book Co.
Bejar, I. I. (2012). Rater cognition: Implications for validity. Educational Measurement: Issues and Practice, 31(3), 2-9.
Beck, S. W. & Jeffery, J. V.(2007), Genres of high-stakes writing assessments and the construct of writing competence, Assessing Writing, 12(1), 60-79.
Barrett, S. (2001). The impact of training on rater variability. International Education Journal, 2, 49-58.
Barnes, L. L.(1990), Gender bias in teachers’ written comments. In S. L. Gabriel & I. Smithson(eds.), Gender in the Classroom: Power and Pedagogy(pp. 140-159), Chicago, IL: University of Illinois Press.
Baker, N. L.(2014), Get it off my stack: Teachers’ tools for grading papers, Assessing Writing, 19, 36-50.
Ashton, P. T. (1984). Teacher efficacy: A motivational paradigm for effective teacher education. Journal of Teacher Education. 35(5), 28-32.
Applebee, A. N.(1993), Literature in the Secondary School: Studies of Curriculum and Instruction in the United States, Urbana, IL: National Council of Teachers of English.
Anderson, J.(1990), Cognitive Psychology and its Implications, New York: Freeman.