Experience of Conducting Objective Structured Clinical Evaluation (OSCE) in Malawi
1. Introduction
Assessment of students' learning is a debatable issue [1] and has proved to be a challenge in many educational institutions. The challenge is compounded when it comes to the assessment of students' learning in clinical practice [2] [3]. Mahara (1998) points out that clinical evaluation is intended to provide feedback to students and teachers on what learning has taken place and what is required to improve the teaching-learning process, thereafter allowing the teachers to make a definitive judgment on whether the students' practice meets the professional or academic requirements [4]. The effectiveness of learning in the clinical setting can be evaluated by students' achievement of clinical competences [5].
Competence has been defined in different ways. The ICN Framework of Competences for the Nurse Specialist (2005) described competence as the application of a combination of knowledge, skill and judgment demonstrated by an individual in daily practice or job performance [6]. In agreement with this, the Australian National Competence Standards for Nurses in General Practice (2005) defined competence as the ability to perform tasks and duties to the standard expected in employment [7]. Furthermore, Cowan et al. (2005) suggested a holistic definition of competence that includes knowledge, skills, performance, attitudes and values [8]. The three definitions above agree that competence reflects the holistic nature of nursing roles. Thus, for student nurses to be certified fit to practise, they need to demonstrate that they have acquired these competences. There is therefore a need for effective means of assessing students' competences.
Many clinical assessment strategies are based on direct observation. While (1991) asserts that the main challenge to clinical evaluation lies in the subjectivity of the observational process [9], stating that human observation has an inherent bias and is a subjective process. Chapman (1999) also supports this view, arguing that it is difficult to overcome subjectivity because assessments are based on value judgements, which vary from person to person [10]. A major challenge in any assessment process is to ensure that objective measurement is used, and guaranteeing objectivity is particularly difficult in the assessment of clinical competence [11]. Furthermore, clinical evaluation should ideally be based upon constant one-on-one observation of a student [9]. In practice, clinical teachers or ward sisters are usually required to accommodate a varying number of students in their clinical supervision teaching schedule, so that assessments of individual students' performance are usually based upon a sample of the students' total experience in the placement [9]. The foregoing discussion reflects some of the challenges in the assessment of students' clinical competence; OSCE makes it possible to overcome some of these challenges.
Objective Structured Clinical Evaluation (OSCE) is defined as “an approach to the assessment of clinical competence in which the components of competence are assessed in a well-planned or structured way with attention being paid to objectivity” [12]. OSCE is a valid and reliable method of assessment [13] [14]. Further to this, a review done by Bartfay et al. (2004) regards OSCE as a gold-standard assessment strategy for health professionals [15], and OSCEs enhance the quality of health professional education [16].
Moreover, studies demonstrate that OSCE preparation may motivate students to participate more while in clinical practice [17], and OSCE motivates students to learn the clinical skills being examined [18]-[21]. Nulty et al. (2011) argue that OSCEs present one viable educational strategy to promote student engagement and the achievement of desired learning outcomes, notably including clinical competence [22]. OSCE is increasingly being used as a method of assessment in nursing and allied health curricula [15] [23] [24] and is gaining popularity in undergraduate nursing programs throughout the western world [25] [26]. Conversely, there is scant literature pertaining to OSCE as an approach to evaluating undergraduate nursing programs in other settings. The purpose of this paper is to discuss how Kamuzu College of Nursing (KCN), a constituent college of the University of Malawi, has been designing and conducting OSCE. The discussion will be relevant to nurse educators who use OSCE as a means of clinical skills assessment.
Why OSCE in Malawi?
Malawi faces a shortage of nurses, with a ratio of 38 nurses per 100,000 population [27]. In response to the shortage, most training institutions have increased nursing student intake within the limited available resources. This may mean that students fail to learn adequately because there are too many of them in a clinical area [28]. In addition, as patient acuity has increased in in-patient settings, the need for closer supervision of students has intensified. Given the current shortage of nurses in most facilities and the increasingly complex needs of patients, staff nurses do not have the time to provide an acceptable level of supervision [29]. These changes significantly limit the ability of the institutions to provide high quality clinical education for nursing students, thereby increasing the imperative to develop alternative and innovative learning opportunities [22]. Tanner (2006) recommends integrating simulation as a complement to hands-on clinical experiences, as it has the capacity to reduce clinical placement demands and improve the preparation of new graduates [29]. Similarly, Nulty et al. (2010) assert that simulated clinical situations such as OSCEs are intrinsically aligned and authentic and should promote student engagement and the achievement of desired learning outcomes, arguing that this justifies the use of OSCE as both a learning and an assessment tool [22].
Over the past ten years Kamuzu College of Nursing (KCN) has adopted the use of OSCE in the assessment of students' attainment of clinical competences for the undergraduate nursing programme. The conduct of OSCE has varied from year to year, continuously being informed by each preceding year. However, OSCE is not used as the sole assessment strategy for students' clinical competences; to ensure the reliability and validity of our OSCE, other assessment strategies are also used, including portfolios and case studies. However, Rushforth (2007: p. 488) argues that OSCE offers particular strengths in terms of assessor objectivity and parity of the assessment process for all students, especially when compared with other assessment of practice processes [24]. Additionally, Watson et al. (2002) observe that these other assessments do not assess the student's acquisition of skills [30]. We believe and agree with Rushforth (2007) on the application of the Miller (1999) model that OSCE puts students at the “show how” level, hence students' competences are assessed in a more objective and standardized manner [24].
2. OSCE Process
At KCN, OSCE is administered to undergraduate student nurses after each clinical placement, usually at the end of each semester from the first year to the fourth year. The intention of the OSCE is to facilitate learning while assessing whether the students have acquired the knowledge, skills and appropriate attitudes. In each semester the students start with a theory block, then go on clinical placement, after which they are given the OSCE. Usually the practice module is related to the content covered during the theory block, and the skills chosen for the OSCE are mapped to the learning outcomes and the students' level of clinical exposure [31]. During the OSCE a number of skills are assessed within the examination, each skill being tested at a station. The length of an OSCE station is generally eight to ten minutes. Consistent with Pender & de Looy (2004) and Byrne & Smyth (2008), all candidates are assessed using exactly the same stations with the same marking sheet, and they rotate between stations until they have completed a circuit [32] [33]. Two examiners assess the student using the mark sheet; after the bell rings to signify that the time is up, the two examiners agree on the average mark for the student, which is then recorded. Rushforth (2007) pointed out that the evidence cautions against relying on the judgments of single examiners [24]. By the end of the OSCE all the students will have gone through each station and been marked according to the mark sheet.
To accommodate large numbers of students, the circuits are duplicated; for instance, we organize multiple stations where students are required to perform the same skill. This process is costly, very stressful and requires extensive preparation. Similarly, Walters and Adams (2002) agree that OSCE is labor intensive, especially on the day [34]. Additionally, Khattab and Rawlings (2001) point out that the process requires careful organization [35]. Despite these challenges, the educational benefits of OSCE far outweigh the implications [35], since it greatly enhances the application of theoretical principles to practice and less time is required for marking the mark sheets [34]. Moreover, the results are fulfilling because examiners are able to see the skills of individual students, and we believe that at the end of the programme our students are competent. It is therefore important to start preparation for the assessment well in advance [31], and extensive commitment is needed from all the people involved.
2.1. Student Preparation
Student preparation is vital before administering any OSCE. Barry et al. (2011) regard OSCE preparation as including lecturer-led theory and workshops, individual preparation and practicing in the laboratory in groups [36]. At the outset of the academic year, students are given a detailed explanation that OSCE is one of the strategies that will be used to assess their competence. Further to this, during the course of learning and clinical practice, students are invited in groups to the skills laboratory to practice skills, mostly those that are examined during OSCE. The clinical instructors and lecturers demonstrate different nursing skills to the students following a checklist, and the students are given the opportunity to do a return demonstration. Khattab and Rawlings (2001) observe that demonstrating to the students helps them to develop competence in clinical skills [35]. Similar to Furlong et al. (2005), at the end of each practical session the checklists are given to the students [37].
When administering OSCE we appreciate that students consider it very stressful [36]-[38]. To ensure that students are well prepared, a day before the OSCE students are oriented to the whole OSCE setup; this is done to interact with the students and to respond to any queries they may have. During this time the lecturers, the Dean of Students and the OSCE coordinator meet with the students. Corresponding to Walters and Adams (2002), our students have regarded this session as beneficial as it helps them to cope with the stress [34]. On the day of the OSCE students are checked in to a comfortable waiting area and are also briefed on the nature of the examination by the coordinator. According to Alinier et al. (2003), in whatever way the OSCE is used, students should be clearly briefed and informed about the aims and objectives of the session [20]. The briefing before the OSCE allows students time to become orientated to the process [20].
The information in both briefing sessions includes the instructions to the students, the time allocated for each station, the number and role of assessors, and the type of interaction to be expected. We agree with Pender & de Looy (2004) and Brosnan et al. (2006) that the highest stress is experienced prior to the assessment [32] [38]. As such, the coordinator continuously reassures the students before they get into the examination room. We strive to identify a lecturer with good communication skills to be the coordinator, and our students have reported reduced anxiety when interacting with the coordinator. This is congruent with the findings of Brosnan et al. (2006), who found that the corridor facilitator was “calming” and “reassuring” [38]. Nonetheless, there is a need to emphasize the role of the examiners to the students. Our students have reported that some lecturers are very serious and make students more stressed during the assessment. Barry et al. (2011) allude to this, noting that the level of stress experienced interferes with students' performance [36].
2.2. Simulated Patients
Over the years we have shifted from using manikins alone to using both manikins and simulated patients during OSCE. We noted that students were encountering challenges because of the artificial nature of OSCE [30], especially when manikins alone are used. For example, the use of manikins for procedures hinders nurse-patient interaction, and students may get confused as to whom to communicate with regarding the procedure. This is congruent with the findings of Barry et al. (2011) that some students felt that the use of simulators could not replicate clinical practice in relation to the assessment of communication and interpersonal skills [36]. Where students are to perform a task on a manikin, a simulated patient is asked to sit in for purposes of communication. Simulated patients are individuals who portray a specific clinical case; typically, they are not affected by the bio-psychosocial conditions they are depicting but are simulating clinical problems solely for the purpose of training and assessment [39]. Simulated patients are given thorough instructions so that they can carry out their role effectively and give the same information to all candidates. We have learnt that even the combined use of manikins and simulated patients leaves the OSCE environment somewhat artificial. Wass et al. (2001) maintain that the most rigorously controlled OSCE is still removed from the real world of clinical practice [40]. However, the use of real patients as subjects for the OSCE stations is very difficult and may not be appropriate.
One of the challenges we have had over the years is whether to let the simulated patients give feedback on an individual student's attitude when performing the task. It has been argued that, to assess the attitudes of the students, it is important to hear the feelings of the simulated patients. Major (2005) maintains that asking simulated patients to give their views adds objectivity to OSCEs [21]. Similarly, Walters and Adams (2002) and Boursicot and Roberts (2005) encouraged simulated patients to give feedback to the examiners [34] [31]. However, literature surrounding this argument is sparse.
2.3. Examiners
Equitable and consistent marking of OSCE stations is essential to ensure parity of assessment for students. Our OSCE is designed to be an objective assessment; however, we recognize that examiners can hold subjective opinions when scoring and rating students. To sustain objectivity, we recruit lecturers from different departments in the college in addition to those in the department concerned, and the examiners are then oriented to the examiners' instructions and to scoring the students using the mark sheets. Jones et al. (2010) argue that although a structured mark sheet enables consistency of marking, the role of the examiner in ensuring reliability is also crucial, and careful preparation of all examiners is therefore essential [26]. We understand that the role of the examiner is to observe and record the student's performance [20]. Rennie and Main (2006) point out that training of assessors is crucial to ensure reliability and consistency in the marking criteria [2]. Similarly, Alinier et al. (2003) suggest that preparation of nurse educators before OSCE is essential [20].
These briefing sessions clarify most of the issues the examiners may have. However, to conform to the assessment rules and regulations of our college, the examiners are not told the exact OSCE tasks. On the day of the examination the examiners arrive early enough to allow familiarization with their station mark sheet and initial conversations between examiners and simulated patients or volunteers at their respective stations. A challenge of involving lecturers from different departments is that most of them feel uncomfortable scoring the students [33]. Nonetheless, continuous involvement in the OSCE has made most of the examiners comfortable participating.
Lecturers marking the same station in different circuits are required to liaise with each other to ensure consistency in their approach. This helps to ensure that they are not influenced by their own values and beliefs, thereby promoting inter-observer reliability [26]. A reserve examiner is identified for the examination day; usually this is the person in overall charge of the organization, who is familiar with each of the tasks and can step in at any station if required.
2.4. Vetting
The OSCE is carefully structured to include parts from all elements of the curriculum as well as a wide range of skills. While designing the OSCE we keep in mind that the process is aimed at directing students' learning; as such, the stations are diversified to help students improve different skills as well as their confidence [20]. The module coordinator, together with the lecturers involved in teaching a particular clinical module, develops a blueprint for the OSCE, which is then used to come up with the OSCE questions/tasks. Blueprinting is a process by which the skills to be examined within the stations that make up an OSCE are mapped to the specific learning outcomes of a module or course [26]. Newble (2004) maintains that this is an extremely valuable strategy for enhancing and defending the validity of an examination [41].
The team is also responsible for the formulation of the mark sheets, examiners' instructions, simulated patient instructions and a list of all the available equipment. A meeting of all lecturers in the department is then called to vet all the documents developed. The aim of the vetting is to ensure that the candidates' instructions state exactly what task the candidate should perform at a station, that the examiners' instructions assist the examiners at each station to understand their role and conduct the station properly, that the mark sheet includes all the important aspects of the skill being tested, and that all the equipment to be used is available. This vetting is congruent with the recommendation of Byrne and Smyth (2008), who recommended the formulation of a panel of nurse educators to validate the stations for both content and accuracy [33]. Additionally, Rushforth (2007: p. 488) emphasises that research “makes very clear that each new OSCE should be subject to rigorous scrutiny and piloting to ensure that the reliability and validity of that particular assessment is maximized” [24].
2.5. Examination Time
During vetting the time for each station is determined. Allocating time to the stations is not an easy task, bearing in mind that the time should be the same for all the stations. The time for the stations has varied from year to year in response to feedback from the students and examiners. Students have always complained that the time allotted for each station is not enough; similar findings are reported in [2] [33] [36]. However, with large numbers of students, giving students more time per station may result in finishing the OSCE very late or reducing the number of stations, which may have implications for the elements of the curriculum that are assessed. Furthermore, examiners may tire and their scoring may be compromised. To overcome this challenge, many authors recommend “mock running” the stations beforehand to check whether the tasks are achievable within the specified time [33] [36]. This means that it is important to prepare for the OSCE well in advance in order to allow time for trying out the stations. Mock running the stations seems to be a feasible and reliable strategy for ensuring the validity of the OSCE, and there is room for us to adopt the mock run to improve our conduct of OSCE.
2.6. Creating Marking Sheets
Students are scored using a predetermined mark sheet which is developed well in advance. The mark sheet is carefully designed to act as a score sheet as well as a checklist, allowing the examiner to check whether the candidate does the task and to score the student simultaneously. Checklists are beneficial because they enable assessors with less experience of the skills to reliably assess students' performance [42]. An examiner assigned to a station observes and scores the students as they perform the task. Each mark sheet is accompanied by specific examiners' instructions (see Table 1 and Table 2). Table 1 gives specific instructions to the examiners for grading a student's ability to manage a child with hypoglycemia.
The challenge the college has had in developing the mark sheet is ensuring that it relates only to the skill being assessed. Most of the skills on which we assess students contain elements that are not directly related to the skill itself, for example communication, hand washing, donning of gloves and documentation. Awarding marks for these elements has allowed students who were otherwise meant to fail to pass the OSCE even though their performance of the skill was not safe. Jones et al. (2010) observe that other elements, such as greeting the patient and hand decontamination, whilst acknowledged as good practice, may not be considered essential elements of the skill and may distort the student's overall score if marks are awarded [26]. While Jones et al. (2010) assert that by removing these arguably unnecessary marking criteria the content validity of the station is strengthened by measuring only components of the actual skill [26], we maintain that the face validity of the skill is still vital.
There is no agreed way of ensuring that the mark sheet assesses only what it intends to assess. In their approach, Walters and Adams (2002) deducted ten marks from the allocated score of any station at which the student's practice was shown to be unsafe [34]. However, one quick and effective way to identify a student's demonstrated competence and safety for passing an OSCE session is to categorise some of the marking criteria as essential, starred or critical points, for which a positive score must be elicited in order to pass the station. These essential criteria help to maintain safety. Table 2 is an example of a mark sheet for managing a child with hypoglycemia: the candidate is expected to calculate the correct volume of 10% dextrose and administer the correct dose. The critical points are given more marks; for example, for calculating the volume the student is given either a 0 or a 4. Using this format, we have seen candidates pass because they performed according to the expected standard.
2.7. Evaluation and Feedback
Students are allowed to evaluate the OSCE at the end of the session. Once they have completed the circuit, they are given an evaluation form to complete. The form covers the organization of the OSCE, the relevance of the tasks, the time allocated for each task, the examiners' attitude, and any suggestions the students may have for the improvement of the OSCE. The lecturers coordinating the OSCE quickly go through the evaluation forms and communicate any issues raised by the students to the examiners. Immediate issues, such as students' perception of a bad reception from the examiners, are dealt with at once. Long-term issues, such as time, and any suggestions are usually taken on board and have always informed our conduct of OSCE. We have observed a tremendous improvement in examiners' attitudes over the years, as the comments from the students shifted from examiners being unfriendly to being very friendly. At the end of the OSCE the students are given general/overall feedback. This session is aimed at providing feedback that helps the students improve their practice and build their knowledge. Alinier et al. (2006) recommended that students should regularly receive feedback to make sure that they take away from the experience what was expected [42]. Similarly, Pender and de Looy (2004) reported that OSCEs help students to become aware of the key skills necessary for competent practitioners [32]. Our students are always keen to get the general feedback at the end of the OSCE, agreeing with Brosnan et al. (2005) and Alinier et al. (2006) that feedback is important to students and highly valued by them [38] [42].
Individual feedback is given to students who fail any of the stations. To achieve this, examiners are encouraged to write comments on the mark sheets about each student's practice. This helps the students to understand why they failed and to realize the areas that they need to improve. Congruent with Brosnan et al. (2005), students who fail a station are required to attend remedial supervised practical skills sessions before repeating the OSCE [38]. Therefore, early feedback is very vital, especially to students who failed.
3. Conclusion
This paper has described the OSCE currently being conducted at KCN in Malawi as a tool to teach and measure clinical competence in undergraduate nursing students. We maintain that OSCE is a meaningful and fair form of assessment in our setting and that it has had a positive effect on our curriculum. Our conduct of OSCE is congruent with the findings of most of the studies on conducting OSCE with nursing students. However, we observe that the planning and conduct of OSCE may vary in different settings. The paper concludes that OSCE can be a worthwhile and valid strategy for teaching and assessing nursing students as long as it is properly designed. Nonetheless, profound commitment from all stakeholders involved is vital.
Acknowledgements
The authors would like to thank all the lecturers and students from KCN for their valuable participation and feedback in the numerous OSCE sessions that have been conducted at KCN.