Peer Assessment Using Student Co-Designed Rubrics

Abstract

This pilot study explores peer assessment using student co-designed rubrics as a strategy that might enhance the clarity of assessment tasks for students and lead to their greater engagement with assessment. The strategy was implemented twice. Student responses, collected in a focus group, suggested the first implementation was not as successful as we had hoped; the second was therefore designed with alterations intended to improve the experience for students, including collecting students’ responses online. The research is qualitative and narrative-based rather than data-focused. The student cohorts are too small, and too different in size, for generalisations to be possible. Nevertheless, the two iterations of the experiment yielded interesting contrasts and suggestions for future practice. Many academics might be interested in peer assessment but unsure about its effectiveness or how best to implement it. This research indicates ways that it may be used to enhance students’ experience of assessment.


1. Introduction

The idea driving this research was that students might be assisted in developing what Sadler calls the “awareness and responsiveness” needed to analyse the strengths and weaknesses of their work, both before and after they create it (Sadler, 2005: p. 57), by playing a role in the assessment of their peers and by co-designing the rubric used for this assessment. This could help them to better understand the criteria and standards by which they would measure each other’s work and by which their own would be measured. Taking a more active role in assessment might also help students to understand more clearly how to respond to criteria and standards in ways they could transfer to future assessment tasks.

We first introduced the idea of peer assessment to an on-campus second-level English cohort of 25 students. The assessment task was a class presentation. The rubric was designed by the students in the first week, and printed copies were provided for them to write feedback on. These were collected by the lecturer and returned to the students concerned the following week. The students’ responses to the peer assessment process were collected in a focus group. Their comments showed that while the students were supportive of the element of rubric co-design, they disliked assessing and being assessed by their peers. This seemed to warrant further investigation of ways students’ experience of peer assessment could be improved.

We intended to make several changes, including more explicit discussion both as the rubric was designed and throughout the teaching period, and having students submit their feedback online to guarantee anonymity. Another important change, suggested by Bloxham and West’s (2004) research, was to award a small part of the students’ total marks for the subject for the quality of the feedback they provided on their peers’ presentations. COVID-19 also brought some unintended changes: where the first class had been on-campus, the second was online; and where the first cohort’s responses were collected in a focus group, the second cohort’s were collected online through a Survey Monkey questionnaire. In addition, the student cohorts differed in size (the first was larger), degree of cohesion, and individual personalities. It might be impossible to control for the degree of trust and cohesion among a student cohort (something mentioned by students in both groups) in any single subject. Both the differences in cohort sizes and the unintended changes mean there can be neither an exact correspondence nor an exact comparison between the iterations of the exercise; however, they yield interesting contrasts, with much more positive student responses in the second iteration of the experiment. We argue that students’ more positive responses in the second phase can be attributed, at least in part, to the following interventions: the awarding of marks for quality feedback; the fact that peer feedback was given online and was thus anonymous; and the collection of students’ responses online, which added another layer of anonymity and a more structured series of questions, leading to more thoughtful and richer information.

2. Literature Review

Rubrics are used in higher and even secondary education in Australia and much of the world. As well as conveying feedback and grades, they are intended to provide greater consistency and fairness in marking and to increase the clarity of assessment tasks. However, research suggests they are not entirely successful. McConlogue (2012) argues that marking involves subjective judgement and is influenced by markers’ previous experiences (McConlogue, 2012: p. 114). Bell, Mladenovic and Price (2013) argue that both criteria and standards may be understood differently by different markers, but also, crucially, by markers and students. Such understandings draw on both “explicit and tacit knowledge” (see Bell et al., 2013: p. 771), and students may lack access to the implicit or tacit knowledge needed to understand a rubric, both before beginning an assessment task and upon receiving feedback. As McConlogue argues, even “the definition of an apparently clear term like ‘argument’ may vary” (McConlogue, 2012: p. 120). Such gaps in shared understanding, whether between markers and students or between markers themselves, compromise the consistent interpretation of assessment items (Reddy & Andrade, 2009: p. 443).

Tapp argues that adding more detail in the attempt to add clarity can actually make rubrics “ultimately less intelligible” (Tapp, 2013: p. 324). This is reflected in Colvin, Bacchus, Knight, and Ritter’s (2016) research, which found that many students find rubrics difficult to comprehend and use effectively, and that rubrics may not provide students with sufficient clarity and transparency about assessment tasks, either before beginning a task or upon receiving feedback. While 71% of respondents in their study reported that they always read rubrics, only 43% said they usually understood from them what was required (Colvin et al., 2016: pp. 6-7). The gulf between reading and comprehending rubrics may reflect differences between explicit and implicit knowledge, or it may be a result of more detailed rubrics, which are intended to aid students but make the rubric less intelligible. Clearly though, together with their finding that some students did not read rubrics because they believed they were too complicated, Colvin et al.’s (2016) research suggests many students may not benefit from the provision of lecturer-created rubrics alone. Reddy and Andrade propose that “an alternative strategy is a collaborative approach in which rubrics were … co-created with … students before they began an assignment” (Reddy & Andrade, 2009: p. 437). However, in Bacchus, Colvin, Knight, and Ritter’s (2020) comparative study of exemplars and student co-created rubrics as a means of making assessment more transparent, there was a higher percentage of approval in the cohort where exemplars were trialled than in the cohort where co-created rubrics were trialled.

When we considered the next iteration of the project, we believed that despite the findings of Bacchus et al. (2020), student co-design of the rubric would be essential in the case of peer assessment because it would involve explicit discussion of the criteria and standards by which student work would be measured (see Handley & Williams (2009); Hendry & Anderson (2012); and Bell et al. (2013)). It might also help to reduce some problems of implicit or assumed knowledge and perhaps result in a clearer rubric.

The literature suggests that both students’ and academics’ attitudes to peer assessment are also mixed. Although much research points to its benefits, some shows resistance to peer assessment, as a concept and a process, in part because of concerns about the reliability of student grading, in part because assessment is often considered the sole responsibility of the academic, and in part because of the issues it raises about power relations (see Brindley & Scoffield, 1998; Falchikov, 1995; Liu & Carless, 2006).

Liu and Carless argue that an advantage of peer involvement in assessment is that it more actively engages students with the identification of standards and criteria, which can in turn help them to “develop conceptions of quality approaching that of their lecturers and so be in a better position to process feedback” (Liu & Carless, 2006: p. 287). They see that students can learn “not only from the peer feedback itself, but through meta-processes such as reflecting on and justifying what they have done” (Liu & Carless, 2006: p. 289). Orsmond et al. (2000) also argue that peer involvement in assessment, conducted in a non-threatening, collaborative atmosphere, enables students to learn better because it prompts them to think more critically. In addition, several studies have found agreement between academics’ and students’ marks (see Falchikov, 1995; Falchikov & Goldfinch, 2000), and Liu and Carless conclude that “it is now well-recognized that students are reasonably reliable assessors” (Liu & Carless, 2006: p. 282).

However, Liu and Carless found that both academics and students generally considered marking to be solely the responsibility of academics, who are viewed as “the custodians of standards because they are thought to possess the necessary knowledge and expertise” (Liu & Carless, 2006: p. 284). Both may therefore be reluctant to participate in peer assessment because students, with less knowledge and expertise than academics, may be considered less likely to carry out reliable assessment. Another reason cited for resistance to peer assessment using grades is that it can disrupt power relations. Brew (1999), cited in Liu & Carless (2006: p. 285), argues that sharing assessment with students leads to a sharing of the academic’s power. While academics may resist sharing their power (Orsmond, Merry, & Reiling, 2000), students may also feel uncomfortable about having power over peers or peers having power over them, or about grading friends or fellow students too harshly (Liu & Carless, 2006: p. 286). Cheng and Warren reported their students as having misgivings about peer assessment, with some regarding it as “unfair and risky” (Cheng & Warren, 1997: p. 268) because of doubts about the seriousness and objectivity of their classmates.

While Liu and Carless suggest peer feedback may be preferable to peer assessment involving grading, since the reliability of student grading would not then be an issue, they also acknowledge that advocating the abandonment of peer grading altogether may be to ignore the centrality of marks to students’ experience of assessment. In teaching and learning cultures that emphasise individual achievement over more collaborative approaches, there may be low student motivation for the process. Liu and Carless cite several strategies for integrating peer feedback flexibly with grading. For instance, the peer portion of the assessment could carry a modest weighting; peer assessment could involve multiple peers to minimise risks of bias; or perhaps not all peer-generated marks would need to be counted. Bloxham and West (2004) encouraged students to carry out peer feedback rigorously by awarding 25% of their assignment marks for the quality of peer marking, as an extra incentive for students to think carefully about the assessment criteria and the writing of feedback. Many of the students involved were reported as recognising the benefits of peer marking for their own development as learners, and Bloxham and West suggest that awarding marks for the peer feedback element added to students’ motivation and to the amount they gained from the exercise. A further strategy for facilitating effective peer involvement in assessment is to embed it within regular course processes (Liu & Carless, 2006: p. 279). Boud (2000) recommends creating a “course climate” in which the giving and receiving of peer feedback is a normal part of teaching and learning processes, since the more involvement students have in peer feedback processes, the more likely they are to develop the expertise needed for sound judgements. These suggestions indicate that students’ and academics’ initial resistance to peer assessment may be overcome, and that this form of assessment has potential to aid student learning.

3. Method

As reported in Bacchus et al. (2020), we first introduced the idea of peer assessment and rubric co-creation to a cohort of Bachelor of Education students in an English literature subject, who designed the rubric for the task, a class presentation. These students varied in age from school-leavers to mature-age students. All would be required to design rubrics and provide grades and feedback as part of their professional practice. The rubric was sent to students to confirm it was what had been agreed upon. During the teaching session, students were given printed copies of the rubric each week so they could write feedback and marks on them. These were collected and checked for any feedback that could be hurtful or offensive; the average peer mark was calculated and the total posted to a part of an online site to which only the student concerned had access. The rubrics were returned to the student/s concerned at the next week’s class. Students’ responses to the peer assessment exercise were collected in a voluntary focus group led by a colleague. The participants were asked whether they found the exercise valuable or useful, whether they had any difficulties in either providing or receiving peer feedback, and whether they could suggest ways to improve the process.

In the second iteration of the strategy, the rubric was designed in the same manner, during the first week of an on-campus class in the same English literature subject. Again, the task was an in-class presentation, and the students were of various ages and would be required to design rubrics and provide marks and feedback as part of their professional practice. Again, the completed rubric was returned to the students to ensure it was the one agreed upon in class, and the feedback and mark were moderated (again checking for any potentially offensive comments) and collated before being returned to the student concerned. However, two changes were made to the process: the students were asked to submit their marks and feedback online, so that other students could not identify their handwriting, thus protecting their anonymity; and a small proportion (5%) of each student’s grade in the subject was determined by the quality of the feedback they provided to their peers.

COVID-19 brought other changes: classes were conducted online after the first week, and thus the presentations were given in an online meeting. Participants’ responses to the peer-feedback exercise were also collected online, through a questionnaire on Survey Monkey. While participation was again voluntary, this difference from the earlier trial perhaps lent more structure to the responses.

The method of analysis was the same for both iterations of the study. It is qualitative: we identified dominant themes and patterns in the participants’ oral or written responses, loosely following Braun and Clarke’s (2013) approach, a flexible method which enables researchers to identify themes or semantic patterns in responses, and to review and name those that seem to emerge as important.
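To make the marking arithmetic concrete, the following minimal sketch (in Python, with hypothetical function and variable names) shows one way the collation of peer marks described above could be carried out, including an optional step for trimming a single extreme outlier, a refinement one participant suggests later in this paper. It is purely illustrative and not part of our actual procedure; the moderation of written comments, unlike the averaging of marks, remains a judgement the lecturer makes by hand.

# Illustrative sketch only: hypothetical names, not the authors' actual tooling.
from statistics import mean

def average_peer_mark(marks, trim_outlier=False):
    """Return the collated peer mark for one presentation.

    marks: list of marks (0-100) awarded by peers.
    trim_outlier: if True and there are enough marks, drop the single mark
    furthest from the mean before averaging, so one extreme score cannot
    skew the result.
    """
    if not marks:
        raise ValueError("No peer marks were submitted.")
    if trim_outlier and len(marks) > 2:
        centre = mean(marks)
        # Sort by distance from the mean and drop the furthest mark.
        marks = sorted(marks, key=lambda m: abs(m - centre))[:-1]
    return round(mean(marks), 1)

# Example: six peers mark one presentation; one mark is an extreme outlier.
peer_marks = [72, 75, 78, 74, 70, 35]
print(average_peer_mark(peer_marks))                     # plain average: 67.3
print(average_peer_mark(peer_marks, trim_outlier=True))  # outlier trimmed: 73.8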

4. Results

4.1. Results of the First Implementation of the Strategy

As discussed in Bacchus et al. (2020), in the first experiment the students’ responses to the process of designing and using the rubric were mixed, though some were quite positive. However, responses to the peer assessment exercise were almost unequivocally negative. One issue was the use of printed rubrics: because students could identify each other’s handwritten feedback, the process did not guarantee sufficient anonymity. The responses also indicated a perception that other students were marking and giving feedback according to a kind of implicit or tacit system of knowledge, with one remarking that the process “needed to be more specific for peer application … [there was] a hidden agenda beyond the marking rubric.” Another said that everyone was “still using their own pre-conceived ideas and the individual marking at the end was a problem.”

It was apparent that the participants liked neither marking, nor being marked by, their peers. One remarked that “Early discussions sort of helped but I became confused when marks did not match expectations.” Presumably, this student was referring to her or his own marks and felt they should have been higher. While this situation is not unusual, it is perhaps more easily accepted in relation to teacher-marked assessment, as suggested by the following comment: “Would have liked some teacher moderation for consistency.” Several students noted there was a tendency towards using grades in the middle of the spectrum: “No one wanted to be harsh and give only a Fail or Pass.” The averaged marks also tended to “flatten out”: most students achieved grades around a high Credit or low Distinction level. As much as they might have wanted to avoid harshness, students may also have wanted to avoid being so generous that their classmates “took all the good marks”. This suggests it is important for an educator to explain that with criterion-referenced marking it is possible, in theory, for every student to achieve a high grade.

As mentioned, the literature on peer assessment suggests that students experience some difficulty in marking and being marked by friends; perhaps a greater difficulty in this cohort was being assessed by peers considered not to be friends. There was considerable conflict, particularly between a few high-achieving mature-age students and some school-leaver students who were less committed and whom the mature students resented for a lack of engagement, demonstrated by lateness to classes and occasional disruptive behaviour. Such lack of cohesion may have skewed the results. This reflects Orsmond, Merry and Reiling’s finding that while most of the students in their study “found the peer assessment exercise beneficial” (Orsmond et al., 1996: p. 246), three issues emerged. First, a minority of students treated the exercise in a rather cavalier manner, which annoyed other (mostly mature) students; in the present study, this was reflected in comments such as “Problem of different engagement levels” and “a lot of students not even there every week”. Second, some students were sceptical about how meaningful other students’ marks could be (in our study this was reflected in the comment “I only want to be marked by kids I respect.”). Third, several students felt “unqualified” to mark the work of their peers and were reluctant to do so; in our study this feeling was expressed by one respondent: “it’s not really my place to mark others.”

In sum, the students’ responses suggested that discussion of the process needed to be ongoing, rather than only in the first week, that student feedback should be submitted online so that it would be unidentifiable by handwriting, and perhaps that marks should be given for the quality of feedback to provide more motivation for the process.

We believed we had gleaned some ideas about how peer assessment and student rubric co-creation could be modified in a future trial. We still thought that these strategies had potential to help students understand the criteria and standards against which their own and their peers’ work would be measured. The main modifications we made to the process were to ask students to submit their peer feedback online, so that it would be anonymous, and to award students a small proportion of their grade (5%) for providing quality feedback.

4.2. Results of the Second Implementation of the Strategy

Participants in the second implementation of the strategy responded very positively to the process of co-designing the rubric:

Participant A: I did benefit from the process of co-constructing the rubric for the class presentation as I was able to grasp the idea of marking and understanding the most important aspects of a presentation to mark. I think it also benefited my own presentation construction as I knew what I was expecting as a marker so I wanted to tick all of my own boxes when creating my own.

Participant B: We knew exactly what we had to do! For example, I knew my presentation had to consider interactiveness, critical mention, organisation, evidence of knowledge, and be educational to the audience, and so I tried to address each of these components equally.

Participant C: Co-constructing the rubric was very effective because it gave us, as the students, an opportunity to gain a full understanding of the requirements and expectations of the presentations. All too often, rubrics are difficult to follow and we are marked against our ability to comprehend the task (rather than show our skills in the subject), and it was great to eliminate that factor for once.

These comments suggest not only that many students may find lecturer-constructed rubrics difficult to comprehend, but also that they may feel there is a gap between rubrics and assessment of skills, and that they are often judged on their ability to comprehend the task or rubric rather than their skills or subject knowledge.

The participants were also much more positive than those in the first implementation about the peer feedback process. Asked if they felt confident giving feedback and marks, most indicated that the anonymity of the feedback contributed to their confidence, as did a general feeling of respect for others’ efforts:

Participant C: [It] allowed me to feel confident in giving honest marks and feedback as I was able to frame and support my reasoning for each mark. I found confidence in the fact that the feedback was anonymous and was typed rather than handwritten, meaning we could not identify “who said what”.

Participant A: I feel pretty confident in the marks and feedback given for the presentation of my peers … I think they all worked really hard to create a satisfactory presentation that educated and engaged us as a whole. My main confidence was in the fact that I knew each peer would reflect and take their feedback into account and would not feel upset when told they needed to reflect on something a little more in the presentation.

Interestingly, one student distinguished between enjoying the process and finding it worthwhile, seeing it as worthwhile because it rewarded effort and created a dynamic in which their judgement was valued:

Participant D: I don’t really like doing it but think it is a good thing. Having it anonymous was the only way I would do it unless I was face to face with the person being assessed. I did like being able to reward effort. It felt quite egalitarian and as though our opinions really mattered. Different to usual teacher/student dynamic.

However, Participant E wrote:

I like giving supportive feedback but didn’t feel very well qualified to give it.

Students were also asked if they felt confident in their peers’ ability to give feedback and marks, and here again the students’ respect for each other’s integrity and honesty emerged as an important factor:

Participant A: I do feel confident as I trust their judgement and appreciate their suggestions for further presentation. I think they’re all rather honest and provided me with reasonable feedback.

Participant B: Yes, partly because we had agreed on the rubric as a group and partly because I believe in “the wisdom of the crowd”, in that an average would probably be quite accurate.

Participant C: Yes, it felt more equal that the mark was an average of a range of perspectives. I feel that sometimes the marks we receive at uni are dependant on the individual marker, and this can be quite disheartening in many instances, whereas here i could have confidence that my mark was a collective and fair one.

Participant E: Yes, the more perspectives the better.

These responses to questions about confidence in both giving and receiving peer feedback suggest that having created the rubric together and therefore collectively deciding on the criteria and standards helped students feel confident in framing and giving a rationale for their feedback and marks. The responses also suggest that students may feel that their opinions are not always taken seriously by academics, and that a collectively derived mark may eliminate elements of individual marker bias.

However, concerns about issues of power relations were threaded through students’ responses. One wrote: “It is nerve-wracking to be judged by one’s peers.” This feeling reflects Liu and Carless’ observation that power relations impact on students because the audience for their work is no longer just the academic, and they may “resent the pressure, risk or competition peer assessment could easily engender” (Liu & Carless, 2006: p. 286).

Asked if they were motivated to take the task of giving marks and feedback more seriously because it contributed to their total mark in the subject, most students answered that they were, suggesting that this adjustment to the process was beneficial:

Participant C: Yes, if this were not the case I wouldn’t have put as much thought and time into my marks and feedback as I did.

Participant A: Absolutely. I tried my best to provide my peers with serious and helpful marks because I wanted serious marks as well.

The peer feedback given was generally detailed and supportive, and the marks generous. Several participants felt there was potential for students’ marks to be more generous than an academic’s marks would have been, which sometimes created discomfort about giving other students low marks:

Participant E: I think the students were probably very generous and this made me feel uncomfortable marking them down for any reason.

For at least one student, though, the initial discomfort about being “overgenerous” was dispelled by the feedback and marks being anonymous:

Participant C: Because the group of people working in the subject are quite close, I was initially worried that there may have been the potential for marks to be higher (and subjectively more generous), but because the feedback was anonymous, this potential was eliminated and I do believe the marks were on par with what the teacher would have given.

One student raised the interesting point that the cohort may have expected reciprocal generosity:

Participant A: I think my marks may have been lower if the teacher had marked them as they may not have been as “kind” towards me—haha. I think the peers were hoping that I would give a high mark to them and therefore they gave me a higher mark.

Finally, when asked whether they would recommend this method of assessment, students provided some thoughtful criticisms, based largely on issues that either did or could arise due to unequal effort on the part of some members of the cohort:

Participant A: Although this was an interesting and different assessment that I have never done before … I feel it proved to be a little unfair to the class members that attended every class, marking their peers and giving their peers feedback as well as contributing to the task. I think the peers that did not attend were not worthy of getting feedback or marks as they did not provide it to others.

Participant D: I think this method was interesting and may only be beneficial for smaller cohorts. It could also be an ethical issue, particularly if people don’t get along in the class as this may change the marks. To overcome this however, if it was an extreme outlier it could be taken out so the mean is not affected. Overall, I would recommend it because it did make the marking criteria easier to follow for the assessment.

One student offered a positive response to the co-creation of the rubric, especially in terms of its relative intelligibility, but also some qualifications about the peer assessment process and its reliance on the willingness of all students to participate equally:

Participant C: Yes! It allows peers to overcome the pressures of rubrics in university assessment as they can find confidence in knowing exactly what is expected of them. There is no confusion or wasted time trying to deconstruct pages and pages of unnecessary criteria which we find in many assignments. In saying this, the success of this method of assessment relies heavily on class communication, willingness and participation, and all members must be prepared to partake for it to be fair, equal, and enjoyable. I would recommend that equity issues be carefully considered should this method of assessment be used in the future.

However, other students enthusiastically expressed their approval of the process:

Participant D: I think I would. It did make me really listen and observe the tutorial presentations and I enjoyed it.

Participant E: Yes, being a Bach of Education student, I feel being involved in the assessing end of assessment should be done early and frequently.

Here, the participant echoes Boud’s (2000) call to create a course climate in which the giving and receiving of peer feedback is a normal part of teaching and learning processes.

Participant C: I really commend […lecturer’s name] for her willingness to give this method of assessment a go with our … class. I feel privileged to have been a part of it and wish all my assessments could be as stress free as this one was. The stress of university assessment was alleviated but I still gained the same amount of knowledge (without the stress and anxiety). I highly recommend it in future practice!

The differences between the results for each iteration of the exercise show that students’ responses to peer assessment were far more positive in the second. Nevertheless, some concerns were raised, and these will be discussed below.

5. Discussion

Collecting the student responses online, through a series of questions rather than in a focus group, resulted in anonymous and more detailed, structured, and perhaps more thoughtful answers, but it also made exact comparisons impossible, as did the perhaps inevitable differences in the relationships among students in the two cohorts. Nevertheless, it seems clear that the alterations to the design of the assessment led to greater student satisfaction.

Certainly, moving the peer feedback and marking online, and thus making it anonymous, was an important improvement. Likewise, awarding students 5% of their total mark for the quality of their feedback encouraged most to engage seriously. However, despite some very enthusiastic responses, some students raised concerns centred on the issue of others putting less effort into both their presentation and the peer assessment process. It is difficult to see how, despite the awarding of marks for quality feedback, such concerns can be overcome. That said, the concern itself has a positive side, as it suggests that the students who expressed it valued feedback over and above marks: they seemed to feel that, despite losing the 5% for giving feedback on others’ work, those students who did not attend others’ presentations benefitted in their learning from receiving feedback whilst not returning the favour to the rest of the class.

The differences in the results cannot be attributed only to the alterations to the design. In the first cohort, there was not the same degree of cohesion and mutual respect as there was among the second, who felt that for the most part other students put effort into both their presentations and the process of peer assessment. This perhaps led to their greater willingness to give and accept feedback and, along with the fact that the marks were mostly generous, their greater confidence in the process of peer assessment. In turn, this very issue of respect and trust is a positive that could be factored into future trials of peer assessment, as we discuss in our conclusion.

In summary, the differences between the participants’ responses to the first and second implementations of the strategy suggest that both student co-design of rubrics and peer assessment might be successful innovations if peer feedback and grading are made unidentifiable and there is some reward for providing quality feedback.

6. Conclusion

The results of the study may not be generalisable, but some conclusions may nevertheless be drawn. Our alterations to the design of the process clearly led to greater student satisfaction. Albeit with a few well-considered reservations, the second cohort of participants viewed the peer assessment process as fairer than assessment carried out by a teacher alone, as it eliminated elements of individual marker “bias”. One reported feeling more confidence in their mark being fair because it was reached collectively, and another felt the process was less stressful. This indicates that there is potential for peer assessment to be further explored as a way of helping students improve their present and future performance and, particularly for those who will become teachers themselves, helping them find confidence in designing rubrics, marking, and giving feedback. This opens the way for further research into how the design of peer feedback processes may be improved, and how the seemingly inevitable issue of differences between cohorts may be ameliorated. If the greater success of the second iteration of the strategies can be attributed at least in part to the greater degree of closeness and trust that already existed among the students, might it be possible to build such cohesion and trust throughout a teaching session? It may be difficult for a single academic to make the giving and receiving of peer feedback a normal and ongoing part of whole-of-course design, as called for by Boud (2000), but it is certainly possible to make it an ongoing part of a given subject. This could be achieved by creating an assessable task in which students give each other small amounts of feedback, with or without marks, regularly throughout a teaching session. By creating such a space for interaction, in which confidence and trust in giving and receiving feedback might grow throughout the course of the subject, such an assessment task might also help create a “community of learners.”

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Bacchus, R., Colvin, E., Knight, E., & Ritter, L. (2020). When Rubrics Aren’t Enough: Exploring Exemplars, Student Rubric Construction, and Peer Assessment as Ways of Enhancing Student Engagement with Assessment Tasks in Higher Education. Journal of Curriculum and Pedagogy, 17, 48-61.
https://doi.org/10.1080/15505170.2019.1627617
[2] Bell, A., Mladenovic, R., & Price, M. (2013). Students’ Perceptions of the Usefulness of Marking Guides, Grade Descriptors and Annotated Exemplars. Assessment & Evaluation in Higher Education, 38, 769-788.
https://doi.org/10.1080/02602938.2012.714738
[3] Bloxham, S., & West, A. (2004). Understanding the Rules of the Game: Marking Peer Assessment as a Medium for Developing Students’ Conceptions of Assessment. Assessment & Evaluation in Higher Education, 29, 721-733.
https://doi.org/10.1080/0260293042000227254
[4] Boud, D. (2000). Sustainable Assessment: Rethinking Assessment for the Learning Society. Studies in Continuing Education, 22, 151-167.
https://doi.org/10.1080/713695728
[5] Braun, V., & Clarke, V. (2013). Teaching Thematic Analysis: Overcoming Challenges and Developing Strategies for Effective Learning. Psychologist, 26, 120-123.
[6] Brew, A. (1999). Towards Autonomous Assessment: Using Self-Assessment and Peer Assessment. In S. Brown, & A. Glasner (Eds.), Assessment Matters in Higher Education (pp. 159-171). Open University Press.
[7] Brindley, C., & Scoffield, S. (1998). Peer Assessment in Undergraduate Programmes. Teaching in Higher Education, 3, 79-90.
https://doi.org/10.1080/1356215980030106
[8] Cheng, W., & Warren, M. (1997). Having Second Thoughts: Student Perceptions before and after a Peer Assessment Exercise. Studies in Higher Education, 22, 233-239.
https://doi.org/10.1080/03075079712331381064
[9] Colvin, E., Bacchus, R., Knight, E., & Ritter, L. (2016). Exploring the Way Students Use Rubrics in the Context of Criterion Referenced Assessment. In The 39th HERDSA Annual International Conference. Higher Education Research and Development Society of Australasia, Inc.
[10] Falchikov, N. (1995). Peer Feedback Marking: Developing Peer Assessment. Innovations in Education & Training International, 32, 175-187.
https://doi.org/10.1080/1355800950320212
[11] Falchikov, N., & Goldfinch, J. (2000). Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks. Review of Educational Research, 70, 287-322.
https://doi.org/10.3102/00346543070003287
[12] Handley, K., & Williams, L. (2009). From Copying to Learning: Using Exemplars to Engage Students with Assessment Criteria and Feedback. Assessment & Evaluation in Higher Education, 36, 95-108.
https://doi.org/10.1080/02602930903201669
[13] Hendry, G. D., & Anderson, J. (2012). Helping Students Understand the Standards of Work Expected in an Essay: Using Exemplars in Mathematics Pre-Service Education Classes. Assessment & Evaluation in Higher Education, 38, 754-768.
https://doi.org/10.1080/02602938.2012.703998
[14] Liu, N., & Carless, D. (2006). Peer Feedback: The Learning Element of Peer Assessment. Teaching in Higher Education, 11, 279-290.
https://doi.org/10.1080/13562510600680582
[15] McConlogue, T. (2012). But Is It Fair? Developing Students’ Understanding of Grading Complex Written Work through Peer Assessment. Assessment & Evaluation in Higher Education, 37, 113-123.
https://doi.org/10.1080/02602938.2010.515010
[16] Orsmond, P., Merry, S., & Reiling, K. (1996). The Importance of Marking Criteria in the Use of Peer Assessment. Assessment & Evaluation in Higher Education, 21, 239-250.
https://doi.org/10.1080/0260293960210304
[17] Orsmond, P., Merry, S., & Reiling, K. (2000). The Use of Student Derived Marking Criteria in Peer and Self-Assessment. Assessment & Evaluation in Higher Education, 25, 23-28.
https://doi.org/10.1080/02602930050025006
[18] Reddy, Y. M., & Andrade, H. (2009). A Review of Rubric Use in Higher Education. Assessment & Evaluation in Higher Education, 35, 435-448.
https://doi.org/10.1080/02602930902862859
[19] Sadler, D. R. (2005). Interpretations of Criteria-Based Assessment and Grading in Higher Education. Assessment & Evaluation in Higher Education, 30, 175-194.
https://doi.org/10.1080/0260293042000264262
[20] Tapp, J. (2013). ‘I Actually Listened, I’m Proud of Myself.’ The Effects of a Participatory Pedagogy on Students’ Constructions of Academic Identities. Teaching in Higher Education, 19, 323-335.
https://doi.org/10.1080/13562517.2013.860108

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.