DOI: 10.1145/2724660.2724670
Research article · Open access

PeerStudio: Rapid Peer Feedback Emphasizes Revision and Improves Performance

Published: 14 March 2015

Abstract

Rapid feedback is a core component of mastery learning, but feedback on open-ended work requires days or weeks in most classes today. This paper introduces PeerStudio, an assessment platform that leverages the large number of students' peers in online classes to enable rapid feedback on in-progress work. Students submit their draft, give rubric-based feedback on two peers' drafts, and then receive peer feedback. Students can integrate the feedback and repeat this process as often as they desire. In MOOC deployments, the median student received feedback in just twenty minutes. Rapid feedback on in-progress work improves course outcomes: in a controlled experiment, students' final grades improved when feedback was delivered quickly, but not if delayed by 24 hours. More than 3,600 students have used PeerStudio in eight classes, both massive and in-person. This research demonstrates how large classes can leverage their scale to encourage mastery through rapid feedback and revision.
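The submit-review-receive cycle described above can be sketched as a toy reciprocal-review queue. This is an illustrative simulation only, not PeerStudio's actual implementation; the function name and the two-reviews threshold are assumptions drawn from the workflow the abstract describes:

```python
from collections import deque

def peer_feedback_cycle(submissions, reviews_required=2):
    """Toy model of a reciprocal review queue (illustrative sketch).

    Each student submits a draft and then reviews up to
    `reviews_required` pending peer drafts; a draft becomes eligible
    for feedback once enough peers have reviewed it.
    """
    pending = deque()     # drafts still awaiting reviews
    review_counts = {}    # draft -> number of reviews received so far
    feedback_ready = []   # drafts that have received enough reviews

    for student in submissions:
        review_counts[student] = 0
        # The new submitter reviews the oldest pending drafts first.
        for _ in range(min(reviews_required, len(pending))):
            draft = pending.popleft()
            review_counts[draft] += 1
            if review_counts[draft] >= reviews_required:
                feedback_ready.append(draft)
            else:
                pending.append(draft)  # still needs more reviews
        pending.append(student)

    return feedback_ready

# With five sequential submitters, the earliest drafts are served first:
print(peer_feedback_cycle(["a", "b", "c", "d", "e"]))  # → ['a', 'b', 'c']
```

Because later submitters review earlier drafts, feedback latency stays low as long as submissions keep arriving, which is consistent with the twenty-minute median the paper reports for active MOOC deployments.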


Published In

L@S '15: Proceedings of the Second (2015) ACM Conference on Learning @ Scale
March 2015
438 pages
ISBN:9781450334112
DOI:10.1145/2724660
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. deliberate practice
  2. mastery learning
  3. mooc
  4. peer assessment
  5. peer learning



Conference

L@S 2015: Second (2015) ACM Conference on Learning @ Scale
March 14-18, 2015
Vancouver, BC, Canada

Acceptance Rates

L@S '15 paper acceptance rate: 23 of 90 submissions (26%)
Overall acceptance rate: 117 of 440 submissions (27%)

