Automated Essay Scoring

Automated essay scoring (AES) is the use of specialized computer programs to assign grades to essays written in an educational setting. It is a method of educational assessment and an application of natural language processing. Its objective is to classify a large set of textual entities into a small number of discrete categories, corresponding to the possible grades—for example, the numbers 1 to 6. Therefore, it can be considered a problem of statistical classification.

Several factors have contributed to a growing interest in AES. Among them are cost, accountability, standards, and technology. Rising education costs have led to pressure to hold the educational system accountable for results by imposing standards. The advance of information technology promises to measure educational achievement at reduced cost.

The use of AES for high-stakes testing in education has generated significant backlash, with opponents pointing to research that computers cannot yet grade writing accurately and arguing that their use for such purposes promotes teaching writing in reductive ways (i.e. teaching to the test).

History

Most historical summaries of AES trace the origins of the field to the work of Ellis Batten Page.[1][2][3][4][5][6][7] In 1966, he argued[8] for the possibility of scoring essays by computer, and in 1968 he published[9] his successful work with a program called Project Essay Grade™ (PEG™). Using the technology of that time, computerized essay scoring would not have been cost-effective,[10] so Page abated his efforts for about two decades.

By 1990, desktop computers had become so powerful and so widespread that AES was a practical possibility. As early as 1982, a UNIX program called Writer's Workbench was able to offer punctuation, spelling, and grammar advice.[11] In collaboration with several companies (notably Educational Testing Service), Page updated PEG and ran some successful trials in the early 1990s.[12]

Peter Foltz and Thomas Landauer developed a system using a scoring engine called the Intelligent Essay Assessor™ (IEA). IEA was first used to score essays in 1997 for their undergraduate courses.[13] It is now a product from Pearson Educational Technologies and used for scoring within a number of commercial products and state and national exams.

IntelliMetric® is Vantage Learning's AES engine. Its development began in 1996.[14] It was first used commercially to score essays in 1998.[15]

Educational Testing Service offers e-rater®, an automated essay scoring program. It was first used commercially in February 1999.[16] Jill Burstein was the team leader in its development. ETS's Criterion℠ Online Writing Evaluation Service uses the e-rater engine to provide both scores and targeted feedback.

Lawrence Rudner has done some work with Bayesian scoring, and developed a system called BETSY (Bayesian Essay Test Scoring sYstem).[17] Some of his results have been published in print or online, but no commercial system incorporates BETSY as yet.

Under the leadership of Howard Mitzel and Sue Lottridge, Pacific Metrics developed a constructed response automated scoring engine, CRASE®. Currently utilized by several state departments of education and in a U.S. Department of Education-funded Enhanced Assessment Grant, Pacific Metrics’ technology has been used in large-scale formative and summative assessment environments since 2007.

Measurement Inc. acquired the rights to PEG in 2002 and has continued to develop it.[18]

In 2012, the Hewlett Foundation sponsored a competition on Kaggle called the Automated Student Assessment Prize (ASAP).[19] A total of 201 challenge participants attempted to predict, using AES, the scores that human raters would give to thousands of essays written to eight different prompts. The intent was to demonstrate that AES can be as reliable as human raters, or more so. This competition also hosted a separate demonstration among nine AES vendors on a subset of the ASAP data. Although the investigators reported that the automated essay scoring was as reliable as human scoring,[20][21] this claim was not substantiated by any statistical tests because some of the vendors required that no such tests be performed as a precondition for their participation.[22] Moreover, the claim that the Hewlett Study demonstrated that AES can be as reliable as human raters has since been strongly contested,[23][24] including by Randy E. Bennett, the Norman O. Frederiksen Chair in Assessment Innovation at the Educational Testing Service.[25] Some of the major criticisms of the study were that five of the eight datasets consisted of paragraphs rather than essays, that four of the eight datasets were graded by human readers for content only rather than for writing ability, and that, rather than measuring human readers and the AES machines against the "true score" (the average of the two readers' scores), the study employed an artificial construct, the "resolved score", which in four datasets consisted of the higher of the two human scores if there was a disagreement. This last practice, in particular, gave the machines an unfair advantage by allowing them to round up for these datasets.[23]

Procedure

From the beginning, the basic procedure for AES has been to start with a training set of essays that have been carefully hand-scored.[26] The program evaluates surface features of the text of each essay, such as the total number of words, the number of subordinate clauses, or the ratio of uppercase to lowercase letters—quantities that can be measured without any human insight. It then constructs a mathematical model that relates these quantities to the scores that the essays received. The same model is then applied to calculate scores of new essays.
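As an illustration of this procedure, the following minimal sketch (in Python) extracts a few simplified surface features, fits an ordinary least-squares model to hand-assigned scores, and applies the same model to a new essay. The features and helper functions here are illustrative assumptions, not the feature set or implementation of any actual AES product.

    import numpy as np

    def surface_features(essay):
        """A few simplified surface features; real systems measure many more."""
        words = essay.split()
        # Crude proxy for subordinate clauses: count common subordinating conjunctions.
        subordinators = {"because", "although", "since", "while", "whereas", "unless"}
        subordinate_count = sum(1 for w in words if w.lower().strip(".,;:") in subordinators)
        letters = [c for c in essay if c.isalpha()]
        upper_ratio = sum(c.isupper() for c in letters) / len(letters) if letters else 0.0
        return [1.0, float(len(words)), float(subordinate_count), upper_ratio]  # 1.0 = intercept

    def fit(training_essays, human_scores):
        """Least-squares model relating surface features to hand-assigned scores."""
        X = np.array([surface_features(e) for e in training_essays])
        y = np.array(human_scores, dtype=float)
        weights, *_ = np.linalg.lstsq(X, y, rcond=None)
        return weights

    def predict(weights, essay, low=1, high=6):
        """Apply the fitted model to a new essay, clipped to the grading scale."""
        return float(np.clip(np.array(surface_features(essay)) @ weights, low, high))

Real systems use far richer feature sets and much larger training samples; the sketch only shows the shape of the train-then-score pipeline.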

Recently, one such mathematical model was created by Isaac Persing and Vincent Ng,[27] which evaluates essays not only on the surface features above but also on the strength of their arguments. It assesses features such as the author's stance and supporting reasons, adherence to the prompt's topic, the locations of argument components (major claim, claim, premise), errors in the arguments, and cohesion among the arguments, among other features. In contrast to the other models mentioned above, this model comes closer to duplicating human insight in grading essays.

The various AES programs differ in what specific surface features they measure, how many essays are required in the training set, and most significantly in the mathematical modeling technique. Early attempts used linear regression. Modern systems may use linear regression or other machine learning techniques, often in combination with other statistical techniques such as latent semantic analysis[28] and Bayesian inference.[17]
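For contrast with the regression sketch above, the following sketch shows the Bayesian-classification idea in its simplest form: each score point is treated as a class, and a new essay is assigned the most probable class under a multinomial naive Bayes model over word counts. This illustrates the general technique only; it is not the implementation of BETSY or any commercial engine.

    import math
    from collections import Counter, defaultdict

    def train_bayes(essays, scores):
        """Collect per-score word frequencies and score frequencies from hand-scored essays."""
        word_counts = defaultdict(Counter)
        class_counts = Counter(scores)
        vocab = set()
        for essay, score in zip(essays, scores):
            words = essay.lower().split()
            word_counts[score].update(words)
            vocab.update(words)
        return word_counts, class_counts, vocab

    def score_essay(essay, word_counts, class_counts, vocab):
        """Return the score (class) with the highest posterior probability."""
        words = essay.lower().split()
        total = sum(class_counts.values())
        best_score, best_logp = None, float("-inf")
        for score in class_counts:
            logp = math.log(class_counts[score] / total)           # prior P(score)
            denom = sum(word_counts[score].values()) + len(vocab)  # Laplace smoothing
            for w in words:
                logp += math.log((word_counts[score][w] + 1) / denom)
            if logp > best_logp:
                best_score, best_logp = score, logp
        return best_score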

Criteria for success

Any method of assessment must be judged on validity, fairness, and reliability.[29] An instrument is valid if it actually measures the trait that it purports to measure. It is fair if it does not, in effect, penalize or privilege any one class of people. It is reliable if its outcome is repeatable, even when irrelevant external factors are altered.

Before computers entered the picture, high-stakes essays were typically given scores by two trained human raters. If the scores differed by more than one point, a third, more experienced rater would settle the disagreement. In this system, there is an easy way to measure reliability: by inter-rater agreement. If raters do not consistently agree within one point, their training may be at fault. If a rater consistently disagrees with whichever other raters look at the same essays, that rater probably needs more training.

Various statistics have been proposed to measure inter-rater agreement. Among them are percent agreement, Scott's π, Cohen's κ, Krippendorff's α, Pearson's correlation coefficient r, Spearman's rank correlation coefficient ρ, and Lin's concordance correlation coefficient.

Percent agreement is a simple statistic applicable to grading scales with scores from 1 to n, where usually 4 ≤ n ≤ 6. It is reported as three figures, each a percent of the total number of essays scored: exact agreement (the two raters gave the essay the same score), adjacent agreement (the raters differed by at most one point; this includes exact agreement), and extreme disagreement (the raters differed by more than two points). Expert human graders were found to achieve exact agreement on 53% to 81% of all essays, and adjacent agreement on 97% to 100%.[30][31]
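A minimal sketch of these agreement statistics, assuming two parallel lists of integer scores (one per rater); the percent-agreement figures follow the definitions above, and an unweighted Cohen's κ is included for comparison.

    def percent_agreement(scores_a, scores_b):
        """Exact, adjacent, and extreme-disagreement rates as percentages."""
        n = len(scores_a)
        exact = sum(a == b for a, b in zip(scores_a, scores_b))
        adjacent = sum(abs(a - b) <= 1 for a, b in zip(scores_a, scores_b))  # includes exact
        extreme = sum(abs(a - b) > 2 for a, b in zip(scores_a, scores_b))
        return {"exact": 100 * exact / n,
                "adjacent": 100 * adjacent / n,
                "extreme disagreement": 100 * extreme / n}

    def cohens_kappa(scores_a, scores_b):
        """Unweighted Cohen's kappa: agreement corrected for chance."""
        n = len(scores_a)
        observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
        categories = set(scores_a) | set(scores_b)
        expected = sum((scores_a.count(c) / n) * (scores_b.count(c) / n) for c in categories)
        return (observed - expected) / (1 - expected)  # assumes expected agreement < 1

Percent agreement is simple to interpret but gives no credit correction for chance agreement, which is why chance-corrected statistics such as κ are usually reported alongside it.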

Inter-rater agreement can now be applied to measuring the computer's performance. A set of essays is given to two human raters and an AES program. If the computer-assigned scores agree with one of the human raters as well as the raters agree with each other, the AES program is considered reliable. Alternatively, each essay is given a "true score" by taking the average of the two human raters' scores, and the two humans and the computer are compared on the basis of their agreement with the true score.
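Continuing the sketch above, the true-score comparison might look like the following; the half-point tolerance used to count agreement with the averaged score is an illustrative assumption, not a standard threshold from the literature.

    def true_score_agreement(human1, human2, machine, tolerance=0.5):
        """Percent of essays on which each scorer falls within `tolerance` of the true score."""
        true_scores = [(a + b) / 2 for a, b in zip(human1, human2)]
        def within(rater):
            hits = sum(abs(r - t) <= tolerance for r, t in zip(rater, true_scores))
            return 100 * hits / len(true_scores)
        return {"human 1": within(human1), "human 2": within(human2), "machine": within(machine)}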

Some researchers have reported that their AES systems can, in fact, do better than a human. Page made this claim for PEG in 1994.[12] Scott Elliot said in 2003 that IntelliMetric typically outperformed human scorers.[14] AES machines, however, appear to be less reliable than human readers for any kind of complex writing test.[32][33][34]

In current practice, high-stakes assessments such as the GMAT are always scored by at least one human. AES is used in place of a second rater. A human rater resolves any disagreements of more than one point.[35]
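A sketch of this workflow, under the assumption that agreeing human and machine scores are simply averaged (actual programs differ in how the two scores are combined):

    def final_score(human_score, machine_score, adjudicate):
        """Combine one human and one machine score; escalate large disagreements."""
        if abs(human_score - machine_score) > 1:
            return adjudicate()  # a second, more experienced human settles the disagreement
        return (human_score + machine_score) / 2  # illustrative combination rule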

Criticism

AES has been criticized on various grounds. Yang et al. mention "the overreliance on surface features of responses, the insensitivity to the content of responses and to creativity, and the vulnerability to new types of cheating and test-taking strategies."[35] Several critics are concerned that students' motivation will be diminished if they know that no human will read their writing.[36][37][38] Among the most telling critiques are reports of essays consisting of intentional gibberish being given high scores.[39]

HumanReaders.Org Petition

On March 12, 2013, HumanReaders.Org launched an online petition, "Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment". Within weeks, the petition gained thousands of signatures, including Noam Chomsky's,[40] and was cited in a number of newspapers, including The New York Times,[41][42][43] and on a number of education and technology blogs.[44][45]

The petition describes the use of AES for high-stakes testing as "trivial", "reductive", "inaccurate", "undiagnostic", "unfair", and "secretive".[46]

In a detailed summary of research on AES, the petition site notes, "RESEARCH FINDINGS SHOW THAT no one—students, parents, teachers, employers, administrators, legislators—can rely on machine scoring of essays ... AND THAT machine scoring does not measure, and therefore does not promote, authentic acts of writing."[47][48]

The petition specifically addresses the use of AES for high-stakes testing and says nothing about other possible uses.

Software

Most resources for automated essay scoring are proprietary.

  • e-rater – by ETS
  • IntelliMetric – by Vantage Learning
  • Project Essay Grade[49] – by Measurement, Inc.
  • PaperRater

References

  1. ^Page, E.B. (2003). "Project Essay Grade: PEG", p. 43. In: Automated Essay Scoring: A Cross-Disciplinary Perspective. Shermis, Mark D., and Jill Burstein, eds. Lawrence Erlbaum Associates, Mahwah, New Jersey, ISBN 0805839739
  2. ^Larkey, Leah S., and W. Bruce Croft (2003). "A Text Categorization Approach to Automated Essay Grading", p. 55. In: Automated Essay Scoring: A Cross-Disciplinary Perspective. Shermis, Mark D., and Jill Burstein, eds. Lawrence Erlbaum Associates, Mahwah, New Jersey, ISBN 0805839739
  3. ^Keith, Timothy Z. (2003). "Validity of Automated Essay Scoring Systems", p. 153. In: Automated Essay Scoring: A Cross-Disciplinary Perspective. Shermis, Mark D., and Jill Burstein, eds. Lawrence Erlbaum Associates, Mahwah, New Jersey, ISBN 0805839739
  4. ^Shermis, Mark D., Jill Burstein, and Claudia Leacock (2006). "Applications of Computers in Assessment and Analysis of Writing", p. 403. In: Handbook of Writing Research. MacArthur, Charles A., Steve Graham, and Jill Fitzgerald, eds. Guilford Press, New York, ISBN 1-59385-190-1
  5. ^Attali, Yigal, Brent Bridgeman, and Catherine Trapani (2010). "Performance of a Generic Approach in Automated Essay Scoring", p. 4. Journal of Technology, Learning, and Assessment, 10(3)
  6. ^Wang, Jinhao, and Michelle Stallone Brown (2007). "Automated Essay Scoring Versus Human Scoring: A Comparative Study", p. 6. Journal of Technology, Learning, and Assessment, 6(2)
  7. ^Bennett, Randy Elliot, and Anat Ben-Simon (2005). Toward Theoretically Meaningful Automated Essay Scoring, p. 6. Archived October 7, 2007, at the Wayback Machine. Retrieved 2012-03-19.
  8. ^Page, E.B. (1966). "The imminence of grading essays by computers". Phi Delta Kappan, 47, 238-243.
  9. ^Page, E.B. (1968). "The Use of the Computer in Analyzing Student Essays". International Review of Education, 14(3), 253-263.
  10. ^Page, E.B. (2003), pp. 44-45.
  11. ^MacDonald, N.H., L.T. Frase, P.S. Gingrich, and S.A. Keenan (1982). "The Writers Workbench: Computer Aids for Text Analysis". IEEE Transactions on Communications, 3(1), 105-110.
  12. ^ a b Page, E.B. (1994). "New Computer Grading of Student Prose, Using Modern Concepts and Software". Journal of Experimental Education, 62(2), 127-142.
  13. ^Rudner, Lawrence. "Three prominent writing assessment programs". Archived March 9, 2012, at the Wayback Machine. Retrieved 2012-03-06.
  14. ^ a b Elliot, Scott (2003). "IntelliMetric™: From Here to Validity", p. 75. In: Automated Essay Scoring: A Cross-Disciplinary Perspective. Shermis, Mark D., and Jill Burstein, eds. Lawrence Erlbaum Associates, Mahwah, New Jersey, ISBN 0805839739
  15. ^"IntelliMetric®: How it Works". Retrieved 2012-02-28.
  16. ^Burstein, Jill (2003). "The E-rater(R) Scoring Engine: Automated Essay Scoring with Natural Language Processing", p. 113. In: Automated Essay Scoring: A Cross-Disciplinary Perspective. Shermis, Mark D., and Jill Burstein, eds. Lawrence Erlbaum Associates, Mahwah, New Jersey, ISBN 0805839739
  17. ^ a b Rudner, Lawrence (ca. 2002). "Computer Grading using Bayesian Networks-Overview". Archived March 8, 2012, at the Wayback Machine. Retrieved 2012-03-07.
  18. ^"Assessment Technologies", Measurement Incorporated. Retrieved 2012-03-09.
  19. ^"Hewlett prize". Retrieved 2012-03-05.
  20. ^University of Akron (12 April 2012). "Man and machine: Better writers, better grades". Retrieved 4 July 2015. 
  21. ^Shermis, Mark D., and Jill Burstein, eds. Handbook of Automated Essay Evaluation: Current Applications and New Directions. Routledge, 2013.
  22. ^Rivard, Ry (March 15, 2013). "Humans Fight Over Robo-Readers". Inside Higher Ed. Retrieved 14 June 2015. 
  23. ^ a b Perelman, Les (August 2013). "Critique of Mark D. Shermis & Ben Hamner, "Contrasting State-of-the-Art Automated Scoring of Essays: Analysis"". Journal of Writing Assessment. 6 (1). Retrieved June 13, 2015. 
  24. ^Perelman, L. (2014). "When 'the state of the art' is counting words". Assessing Writing, 21, 104-111.
  25. ^Bennett, Randy E. (March 2015). "The Changing Nature of Educational Assessment". Review of Research in Education. 39 (1): 370–407. 
  26. ^Keith, Timothy Z. (2003), p. 149.
  27. ^Persing, Isaac, and Vincent Ng (2015). "Modeling Argument Strength in Student Essays", pp. 543-552. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Retrieved 2015-10-22.
  28. ^Bennett, Randy Elliot, and Anat Ben-Simon (2005), p. 7.
  29. ^Chung, Gregory K.W.K., and Eva L. Baker (2003). "Issues in the Reliability and Validity of Automated Scoring of Constructed Responses", p. 23. In: Automated Essay Scoring: A Cross-Disciplinary Perspective. Shermis, Mark D., and Jill Burstein, eds. Lawrence Erlbaum Associates, Mahwah, New Jersey, ISBN 0805839739
  30. ^Elliot, Scott (2003), p. 77.
  31. ^Burstein, Jill (2003), p. 114.
  32. ^Bennett, Randy E. (May 2006). "Technology and Writing Assessment: Lessons Learned from the US National Assessment of Educational Progress" (PDF). International Association for Educational Assessment. Retrieved 5 July 2015. 
  33. ^McCurry, D. (2010). "Can machine scoring deal with broad and open writing tests as well as human readers?". Assessing Writing. 15: 118–129. 
  34. ^R. Bridgeman (2013). Shermis, Mark D.; Burstein, Jill, eds. Handbook of Automated Essay Evaluation. New York: Routledge. pp. 221–232. 
  35. ^ a b Yang, Yongwei, Chad W. Buckendahl, Piotr J. Juszkiewicz, and Dennison S. Bhola (2002). "A Review of Strategies for Validating Computer-Automated Scoring". Archived January 13, 2016, at the Wayback Machine. Applied Measurement in Education, 15(4). Retrieved 2012-03-08.
  36. ^Wang, Jinhao, and Michelle Stallone Brown (2007), pp. 4-5.
  37. ^Dikli, Semire (2006). "An Overview of Automated Scoring of Essays". Journal of Technology, Learning, and Assessment, 5(1)
  38. ^Ben-Simon, Anat (2007). "Introduction to Automated Essay Scoring (AES)". PowerPoint presentation, Tbilisi, Georgia, September 2007.
  39. ^Winerip, Michael (22 April 2012). "Facing a Robo-Grader? Just Keep Obfuscating Mellifluously". The New York Times. Retrieved 5 April 2013. 
  40. ^"Signatures >> Professionals Against Machine Scoring Of Student Essays In High-Stakes Assessment". HumanReaders.Org. Retrieved 5 April 2013. 
  41. ^Markoff, John (4 April 2013). "Essay-Grading Software Offers Professors a Break". The New York Times. Retrieved 5 April 2013. 
  42. ^Larson, Leslie (5 April 2013). "Outrage over software that automatically grades college essays to spare professors from having to assess students'". Daily Mail. Retrieved 5 April 2013. 
  43. ^Garner, Richard (5 April 2013). "Professors angry over essays marked by computer". The Independent. Retrieved 5 April 2013. 
  44. ^Corrigan, Paul T. (25 March 2013). "Petition Against Machine Scoring Essays, HumanReaders.Org". Teaching & Learning in Higher Ed. Retrieved 5 April 2013. 
  45. ^Jaffee, Robert David (5 April 2013). "Computers Cannot Read, Write or Grade Papers". Huffington Post. Retrieved 5 April 2013. 
  46. ^"Professionals Against Machine Scoring Of Student Essays In High-Stakes Assessment". HumanReaders.Org. Retrieved 5 April 2013. 
  47. ^"Research Findings >> Professionals Against Machine Scoring Of Student Essays In High-Stakes Assessment". HumanReaders.Org. Retrieved 5 April 2013. 
  48. ^"Works Cited >> Professionals Against Machine Scoring Of Student Essays In High-Stakes Assessment". HumanReaders.Org. Retrieved 5 April 2013. 
  49. ^"Assessment Technologies." Measurement, Inc. http://www.measurementinc.com/products-services/automated-essay-scoring.


In 2016, around 1.6 million students took the SAT (either old or new) at least once. If every student submitted an essay, the College Board needed to grade 1.6 million essays. Since the essay was first offered with the writing section in 2005, the College Board has relied on human graders to evaluate the student work. Assuming that a grader reads one essay every 3 minutes, 800 essays a week, and is paid $15 per hour, one grader can grade 40,000 essays in a year at a cost of $30,000. Put another way, it costs $1.50 for two graders to evaluate each student essay. Using these metrics, the College Board spends $2.4 million each year paying graders to evaluate essays, not considering the cost of administering, transporting, scanning, and storing essays, or paying a third grader if the scores of the first two differed significantly. If only there were another way to grade essays and use the $2.4 million for other meaningful purposes…
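For anyone who wants to check our math, here is the arithmetic worked out explicitly; the 40-hour week and 50-week year are the assumptions implied by the figures above.

    essays_submitted = 1_600_000     # SAT essays in 2016, from the paragraph above
    minutes_per_essay = 3
    hourly_wage = 15                 # dollars per hour

    essays_per_hour = 60 / minutes_per_essay        # 20 essays per hour
    essays_per_week = essays_per_hour * 40          # 800 essays in a 40-hour week
    essays_per_year = essays_per_week * 50          # 40,000 essays per grader per year
    cost_per_read = hourly_wage / essays_per_hour   # $0.75 per essay per grader
    annual_pay = essays_per_year * cost_per_read    # $30,000 per grader per year
    cost_per_essay = 2 * cost_per_read              # $1.50 with two graders per essay
    total_cost = essays_submitted * cost_per_essay  # $2,400,000 per year
    print(f"${cost_per_essay:.2f} per essay, ${total_cost:,.0f} per year in grading costs")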

Enter the automated essay scorer, a mere theory in 1966 that has grown into a reality for many institutions. In 1999, the ETS (Educational Testing Service) offered one of the first automatic essay scorers, called e-rater, and testing companies have had more than 15 years to improve upon that earlier model. More recently, the GMAT published a 2009 study affirming the fairness of its automated essay scorer, IntelliMetric. The analytical writing assignment is scored by a human as well as a computer, and the two scores are averaged together. By incorporating a computer into the grading process, the GMAT not only saves half the cost of grading the essay, but also is able to perform an objective analysis of sentence structure, word count, and complexity that a human reader would not have the time to complete. With a human reader assessing the coherence of the argument and the computer comparing the essay with its database of essays, the GMAT can enjoy the best of both worlds.

It makes sense, then, that the College Board and ACT would be eager to follow in GMAT’s footsteps. If they could replace one reader with a computer, there is the potential to save the hypothetical $1.2 million per year and invest it elsewhere. The fact that both tests have expressed a desire to move to a digital format in the coming decade makes the transition that much simpler: if a test taker types an essay rather than writes it, a computer could deliver a tentative score instantaneously. Only one human reader would be required to follow up and ensure that the computer graded the essay appropriately.

In a preview of that world, the College Board teamed up with Khan Academy to grade the online practice essays electronically. Currently, students can input essays for SAT Tests 1 and 2 on Khan's website and receive automated feedback based on the College Board's essay rubric: three scores for reading, analysis, and writing, each out of 8 points.

Naturally, we had to test out the automated essay grader for ourselves.

0, 0, 0

Simply copying and pasting an unrelated article resulted in zeroes across the board.

0, 0, 0

Writing one relevant paragraph and copy/pasting it several times also resulted in zeroes.

7, 4, 7

Five well-written but shorter paragraphs yielded high marks for reading and writing, but low marks for analysis. The computer grader, like its human predecessors, knows the limits of a short essay.

8, 6, 7

Adding an additional paragraph to create a longer essay boosted analysis as well as reading.

Thus far, we noticed that the essay grader does a good job of identifying irrelevant, repeated material. It also takes length into account when determining its score. To test the program further, we asked ourselves how the grader would respond to a nonsensical essay that used all the right words and sentence structure, even referencing rhetorical devices and quoting from the passage. Try to make sense of the following introduction, written by one of our more linguistically creative tutors. (The essay prompt asked students to evaluate the rhetorical devices used by Bogard, who in a persuasive essay laments the diverse and damaging effects of light pollution on humans and animals.)

Darkness can symbolize a protean notion of absolute nihilism, floating endlessly in a void without any smattering of perception or purpose. Bogard embraces this absence and sees darkness as a lofty pursuit necessary for absolute harmony within our fractured post-modern existence. For when we lose the dark, we become absorbed by the light and the nocturnal chimeras of our subconscious cannot take flight. Using alliterative juxtapositions, carcinogenic conceits, and allusions to fiscal collapse, Bogard persuades the audience that we need to embrace the abyss in order to keep balance in an increasingly fractured and oppressive world.

This essay used very high-level vocabulary and sentence structure, relevantly addressed the rhetorical devices within the author’s passage, and even supplied quotations from key parts of the passage. Surely a human would be required to recognize the ingenious absurdity of this author’s writing!

The computer gave 7s for reading and writing, fairly evaluating the author's ability to read Bogard's argument critically and craft well-written paragraphs. Much to our surprise, the computer gave the writer a 2 for analysis, easily recognizing that the author's essay, however well it was written and however closely it engaged with the rhetorical passage, was absurd in the extreme. Nicely done, automated grader.

In addition to the essay grader, which provides scores for Tests 1 and 2, Khan Academy also provides more personalized feedback. To serve students looking for more in-depth analysis, the College Board partnered with TurnItIn to give specific line-by-line suggestions for the practice essay section. Students can write essays and receive comments on particular sections of their essays based on their reading, analysis, and writing abilities.

The College Board and ACT have their work cut out for them to persuade colleges and universities that their essays are predictive of college success for applicants. Despite the initially lukewarm reception to the redesigned essays, the College Board is investing resources into electronic essay grading, demonstrating its belief that the exercise provides a valuable metric for colleges. We can expect at least one set of human eyes to continue grading student essays in the short term, but if the Khan Academy essay grader is any indication, even that role may be close to retirement.



