Moments in Transformation: Newly Qualified Lifelong Learning Teachers’ Reconceptualization of Assessment in Practice

DOI: 10.4236/ce.2020.1111182


This study aims to establish the time and process of transformation in beginner teachers' understanding and practice of assessment. It investigates the pre- and post-training conceptualisations of assessment amongst recently qualified teachers in the lifelong learning sector in the UK. Using the lens of transformative learning, it maps out the relevant processes, factors and moments in re-conceptualisation. The study employed a combination of statistical tools and content analysis for data analysis. It found that the dominant pre-training conceptualisation of assessment was summative in essence, and that re-conceptualisation is dynamic and occurred mostly during practice, through participatory rather than chronological experience. The study calls for a review of the structure and content of assessment education beyond training programmes.


Ade-Ojo, G. and Duckworth, V. (2020) Moments in Transformation: Newly Qualified Lifelong Learning Teachers’ Reconceptualization of Assessment in Practice. Creative Education, 11, 2477-2497. doi: 10.4236/ce.2020.1111182.

1. Introduction

The discourse on assessment substantially focuses on assessment types and the uses to which they are put (National Research Council, 2001; Taras, 2005; Harlen, 2007; Berry, 2008). Two assessment types, summative and formative, have dominated the literature (Harlen, 2007; McDonald, 2012; The Assessment Reform Group, 2006). While summative assessment connotes a final engagement (McDonald, 2012), formative assessment is positioned as a non-ending process (Harlen, 2007). Differences have also been identified across other assessment types: assessment for learning, assessment of learning, and assessment as learning (Earl & Katz, 2006). While both assessment for and assessment as learning are focused on improving and monitoring learning, assessment of learning presents teachers and learners as users of a product created by others (Berry, 2008; Earl, 2006; Earl & Katz, 2006; Earl, 2013), sometimes erroneously, as such products are also created and utilised by teachers in testing their learners (McDonald, 2012). The former pair is viewed as part of teachers' pedagogical toolkit in the same way as delivery strategies, resource creation and lesson planning are (Harlen, 2007).

More contemporary themes in assessment discourse focus on the attributes that teachers need to develop in order to use assessment as and for learning. These include teachers' beliefs and knowledge of assessment (Brown & Gao, 2015; Brown, 2004a; Hamdan-Mansour, 2010) and assessment attitude (Jones & Leagon, 2014). Assessment knowledge and beliefs combine to make up the conception of assessment (Brown, 2004b), which is described as a "general mental structure" (Thompson, 1992: p. 141) and as "the ideas, values and attitudes people have toward what something is" (Brown & Gao, 2015: p. 4). Although attitude is a product of the integration of beliefs and knowledge (Fabrigar, MacDonald, & Wegener, 2005), it is a "learnt process" (Osgood, Suci, & Tannenbaum, 1967: p. 190) and does not develop instantaneously (Oraif, 2007; Oskamp & Schultz, 2005). These features contribute to Teacher Assessment Identity (TAI) (Looney, Cumming, van Der Kleijb, & Harris, 2017) and inform teachers' disposition towards assessment.

Assessment attributes are explored through the lens of assessment literacy (AL) (Xu & Brown, 2016), which transcends mere assessment knowledge to include an understanding of the relationship between assessment and learners' achievement (Xu & Brown, 2016). Variables such as teachers' conceptions of assessment, values and attitudes are significant in this context (Xu & Brown, 2016; Oguledo, 2016).

Three main themes emerge from various studies on AL (Xu & Brown, 2016). The first is the constituent knowledge and skills required for assessment practice, which has generated outputs such as the Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, 1990) in the American context and Understanding Assessment: its role in safeguarding academic standards and quality in higher education (QAA, 2012) in the UK context. The second is the various factors that could mediate assessment, including "training needs, conceptions of assessment and efficacy" (Xu & Brown, 2016: p. 162; Popham, 2011; Jeong, 2013; Hill, Ell, Grudnoff, & Limbrick, 2014; Graham, 2005; Lam, 2015). The third comprises contextual factors such as policy (Forsberg & Wermke, 2012), structural conditions (Xu & Liu, 2009), teachers' awareness (Adie, 2013) and the conceptualisation of AL within the professional context (Fleer, 2015).

While several studies have focused on exploring assessment education requirements and structure, there is a consensus that factors beyond the knowledge base, such as attitude and conceptualisation, are at least as important as the knowledge base itself (Xu & Brown, 2016). Important and as yet unanswered questions therefore emerge around these contextual factors, which this study aims to explore. In particular, given that teachers come into the profession with a range of pre-conceptualisations (Xu & Brown, 2016; Oguledo, 2016; Sethusha, 2012; Brown & Remesal, 2012), what are these pre-conceptualisations and how are they transformed? This study, therefore, sets out to answer the following research questions. First, what were recently qualified teachers' original conceptualisations of, and therefore attitudes towards, assessment pre-teacher education? Second, what are their new conceptualisations and what were the key factors responsible for the changes? Finally, what were the moments in transformation, when their conceptualisations changed?

1) Theoretical Grounding: The Concept of Transformative Learning (TL)

This study is anchored to the framework of transformative learning (TL) (Mezirow, 1978, 1991, 2000), explored over recent decades by several scholars (see Duckworth & Ade-Ojo, 2016; Hodges, 2014; Servage, 2008). We draw on three theoretical positions regarding TL to explore the factors responsible for, and the moments of, new meaning-making by newly qualified teachers. We highlight Mezirow's meaning construction (1997, 2000), Hodges's inter-practice phenomenon (2014) and Servage's application of critical reflection (2008) as crucial features in the journey through transformation.

TL is a humanistic theory that conceptualizes how individuals identify their limiting assumptions, and construct their own response autonomously (Hodges, 2014). A critical focus of Mezirow’s TL (1991: p. xii) is on filling a perceived gap in the psychological approach to adult learning, particularly meaning—“how it is constructed, validated, and reformulated—and the social conditions that help adults to make meaning of their experience”.

Meaning structures can be of two types: the specific and limited, and the scheme-based (Mezirow, 1991). While the former is anchored to emergent assumptions that enable our previous experiences to assimilate and transform new experience, the latter combines our continuously evolving concepts, beliefs, judgments and feelings to interpret contemporaneous experiences. We suggest that in the context of adult learning, the crucial component is the scheme-based meaning structure, because adult learning is essentially a process rather than a one-off diffusion of knowledge and embodies the complete transformation process of rejecting a previously held position (Hodges, 2014). Identifying the triggers for such a rejection, and the reconstruction of a new position, is therefore even more important, as the ability to facilitate the creation of change-triggering factors is crucial in the education of adults.

Two factors have been identified as successful triggers for transformation: the "inter-practice" phenomenon (Hodges, 2014: p. 165) and critical reflection (Servage, 2008). Inter-practice refers to the movement of practitioners across different aspects of their practice, some of which might conflict. As such, changes in meaning-making can be triggered by engagement with different aspects of practice and can produce both conflicting and complementary meanings. Servage's (2008) critical reflection is "an apolitical reflection that focuses on beliefs and practices specific to the immediate daily work of teaching" (p. 66). This suggests that critical reflection should be focused on individual aspects of practice, rather than generally constructed around a perception of practice as a unitary structure.

Drawing on Freire's (1972) thoughts on liberation and emancipation as a process that includes a recursive model of thought, reflection and action, Servage argues that true transformation only happens when critical reflection is involved. It is from these theoretical standpoints that this study is particularly interested in the process of new meaning-making by newly qualified teachers in respect of their attitudes towards assessment.

2) Lifelong learning and transformative learning

The term lifelong learning has been used to signify various processes and settings (Collins, 2018; Eschenbacher & Fleming, 2020). Most commonly, it is equated with the notion of adult learning (see e.g. Collins, 2018). In this research, the term is used essentially to describe an educational setting in the UK rather than a process. The lifelong learning sector in the UK refers to an educational setting that involves studying beyond the compulsory school age, that is, post-secondary; it does not, however, include study in university settings. At the heart of this setting are colleges of further education and adult learning centres. Over the years, the sector has also been labelled the post-compulsory education and training sector and the learning and skills sector. In general, the curricula of such institutions focus on vocational education in which students are expected to achieve intermediate-level qualifications and awards. More importantly, the majority of students in the sector are adult/mature students. The beginner teachers referred to in this study are those who are beginning a career in teaching in these institutions.

Although the reference is to a setting rather than a process, the commonality between the students in this setting and those proposed in Mezirow's conceptualisation of the transformative learning process signals the relevance of the concept to the setting explored in this study. The central concepts of "disorientation, disorienting dilemmas and critical reflection", which are core to transformative learning (Eschenbacher & Fleming, 2020: p. 1), are all represented within the setting that we use the term lifelong learning to describe. Transformative learning occurs through the onset of disorientation, which induces the questioning of assumptions previously held as sacrosanct, and a resultant search for adequate strategies in response to learning needs (Kegan & Lahey, 2009). In essence, therefore, what we hope to track is the moments at which disorientation, disorienting dilemmas and critical reflection leading to the emergence of fruitful pedagogical strategies occur in our participants' journeys towards excellence in assessment. The moments at which these processes occur are what we refer to as "moments in transformation".

2. Research Design and Data Collection

This study is designed as an iterative mixed methods study (Onwuegbuzie & Combs, 2011), in which one method is employed to iterate on findings from a method previously used. Mixed methods research involves "mixing or combining quantitative and qualitative research techniques, methods, approaches, concepts or language into a single study" (Johnson & Onwuegbuzie, 2004: p. 17). The use of mixed analysis can be "guided either a priori, a posteriori, or iteratively (representing analytical decisions that occur both prior to the study and during the study)" (Onwuegbuzie & Combs, 2011: p. 3).

Iteration in this research was reflected in our use of data collection methods. The initial data were collected through a survey questionnaire and further explored through interviews. Data collected were revisited to establish the synergy between our research questions and what the initial set of quantitative data appeared to be presenting. In furtherance of the mixed analysis framework, some of the textual/qualitative data gathered through the survey were quantitatively represented and interrogated using the SPSS statistical tool. The findings from this analysis provided a set of preliminary answers to our research questions, and these played a significant role in the content and structuring of our interviews, thus furthering the principles of complementarity and iteration.

1) Sampling

A total of 170 participants were drawn from recent graduates of the Professional (Graduate) Certificate in Education (P(G)CE) of two universities, who trained to become teachers in the lifelong learning sector (LLS) in the UK. The LLS caters for the education of learners in the post-compulsory education setting. The two universities offered a convenient and readily available sample and are two of the biggest providers of adult teacher education in the UK; both were rated good and outstanding in their most recent inspection by Ofsted (2014). As lecturers in these universities, the researchers had ready access to this convenience sample. We recognised that "a convenience sample can lead to the under-representation or over-representation of particular groups within the sample" (Lund Research Ltd., 2012: p. 1). However, the response rate provides us with a reasonable level of assurance regarding the representativeness of our sample: of a total of 280 potential participants, 170 responded. Suggestions around acceptable response rates range between 25% and 75% (Nulty, 2008; NSNC, 2016); our response rate, towards the upper end of this range, was therefore considered reasonable.

The use of a convenience sample also raises the potential ethical issues of researcher positionality and reflexivity (Corlett & Mavin, 2019), as the researchers had previously been tutors of the participants. Recognising the second component of reflexivity identified by Day (2012), the need to reflect on our relationships with the research context, the research subjects/participants and the research data, we ensured that participants were only interviewed by a researcher who had not been their tutor, or by one of the associates. In addition, participants were identified only by numbers in the survey, thus ensuring anonymity. To further the cause of ethical engagement, we secured the approval of the ethics committees of the universities in which the study was located.

2) Survey

The survey comprised a mixture of open and closed questions. The closed questions were designed to collect demographic information such as age, gender and years of experience. The open questions were designed to enable respondents to provide, in their own words, a variety of responses that were not necessarily similarly structured.

3) Interviews

The nature and content of the interviews were dictated by a combination of the research questions and the responses collated from the analysis of the survey data. All interviewees had previously responded to the survey, and the goal of the interviews was to seek iteration. A total of thirty-five (35) participants were interviewed. Twenty-five interviewees reflected the larger group of participants in terms of gender and teaching experience and were chosen through "simple sampling", in which "every individual in the sampling frame {170 survey respondents} has an equal chance of being chosen" (Onwuegbuzie & Collins, 2007: p. 285). The other ten interviewees were chosen because of the need to track survey responses that we felt required further exploration. Participants were contacted through the details they had provided and confirmed their willingness to be interviewed. Interviews were alternated amongst the researchers and a team of three other associates who had no relationship with the participants.
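The random selection step described above can be sketched as follows. This is a minimal illustration only: the respondent IDs are hypothetical placeholders, and the seed is fixed purely so the sketch is reproducible.

```python
import random

# Sketch of the "simple sampling" step: draw 25 interviewees from a
# frame of 170 survey respondents so that every respondent has an
# equal chance of selection. Respondent IDs here are hypothetical.
random.seed(42)  # fixed seed purely for reproducibility of the sketch

frame = list(range(1, 171))                # respondents numbered 1..170
interviewees = random.sample(frame, k=25)  # sampling without replacement

print(sorted(interviewees))
```

Because `random.sample` draws without replacement, each of the 170 respondents appears at most once and each has an equal chance of selection, matching the definition quoted from Onwuegbuzie and Collins (2007).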

The interviews were semi-structured in nature. Although focus areas were initiated by the researchers, respondents were given the opportunity to introduce additional views and to discuss other related issues. The open-ended nature of the interview prompts enabled the researchers to understand the world as seen by the participants (Patton, 2002).

All interviews were recorded and professionally transcribed. The transcriptions were subjected to content analysis. We adopted a cyclic process without a finite interpretation stage (Vaismoradi, Jones, Turunen, & Snelgrove, 2016), which enabled us to return repeatedly to the data and the coding process and to compare findings from the quantitative data with those from the interviews. In generating our themes, we went through the phases of "initialization, construction, rectification, and finalization" (Vaismoradi, Jones, Turunen, & Snelgrove, 2016: p. 103). We subjected the data to repeated readings, during which we highlighted, discussed and debated emergent meanings. We then created codes to account for concepts in relation to the domains and dimensions of the study, relationships to identify links between elements, and settings to account for the context in which phenomena were reported. This enabled us to organise the messages emerging from the findings, to carry out comparisons, and to present our interpretation in a logical way (Vaismoradi, Jones, Turunen, & Snelgrove, 2016). We associated labels with ideas of similar meaning which emerged from the content, a process facilitated through repeated translation and transliteration. After repeating these processes, a clear and logical picture of our participants' views began to emerge.

3. Findings and Discussions

Findings and discussions are structured around the key research questions, which brings the answers to each question to the fore.

1) Demographic distributional pattern of participants

Table 1 shows the distribution pattern of participants in respect of gender, and the length and setting of their teaching experience. This helps to track the significance of some of the variables reflected by the participants. The survey provided this information through straightforward questions on gender, work experience and years of experience.

Table 1. Distributional pattern of participants in terms of gender, experience and setting of experience.

Our first research question was: what are the pre-training conceptualisations of assessment? Answers to this question were provided in part through responses to the survey question: what was your understanding of the term assessment when you commenced your teacher education? Table 2 shows the various pre-conceptions of assessment held by the participants, including as a tool for testing or examining students, particularly at the end of the year, and as a diagnostic/measurement tool. Some participants had no conception of assessment prior to the commencement of their training. Only a few participants saw assessment as a tool that can be used by both learners and teachers, and as something that is also related to learning.

Table 2. Conceptions of assessment held before training.

Responses in the survey clearly set out some of the pre-conceptions:

"Assessment initially just meant an end assessment e.g. exam results or course work" (Participant 1)

"My understanding was that assessment was entirely about end of course examinations" (Participant 29)

"At this point, assessment to me meant marking. Usually something done at the end of the learning process …" (Participant 106)

"I thought assessment meant exams and coursework only" (Participant 157)

While the above reflects the dominant views of assessment, there were significant contrasting and divergent views, which included:

"Activities to determine if learning had taken place and objectives had been met" (Participant 42)

"Assessment is the term we use to collectively describe the various strategies that we use inside and outside of the classroom to help facilitate and validate the learning and the progress made by learners" (Participant 152)

"Assessment should be regular, meaningful and effective and can take the form of either a formal or informal task. The best forms of assessment also allow for individualised feedback to improve performance" (Participant 63)

Testing for significance used Pearson's chi-square analysis, as the data being analysed are based on qualitative (categorical) variables (Diener-West, 2016). As shown in Tables 3-5, the only statistically significant associations were between participants' previous teaching-related experiences and their conceptualisation of assessment prior to their initial teacher education: χ²(4, N = 170) = 28.634, p = .001 for the relationship between teaching experience and conceptualisation of assessment; χ²(8, N = 170) = 63.670, p = .001 for the relationship between years of teaching experience and conceptualisation of assessment; and χ²(8, N = 170) = 32.760, p = .001 for the relationship between experience of assessment and conceptualisation of assessment.
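For readers who wish to reproduce this kind of test, the Pearson chi-square statistic can be computed from a contingency table as sketched below. The counts used here are purely illustrative; the study's own cross-tabulations underlie Tables 3-5 and are not reproduced in this sketch.

```python
def pearson_chi_square(observed):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table of observed counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return chi2, dof

# Illustrative 2 x 2 table (hypothetical counts, not the study's data):
chi2, dof = pearson_chi_square([[10, 20], [20, 10]])
print(f"chi2({dof}) = {chi2:.3f}")  # chi2(1) = 6.667
```

The statistic is then compared against the chi-square distribution with the reported degrees of freedom to obtain the p-value; in practice a statistics package such as SPSS (used in this study) performs both steps.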

Table 3. Chi-square analysis of the relationship between previous teaching experience and understanding of assessment.

Table 4. Chi-square analysis of the relationship between years of teaching experience and understanding of assessment.

Table 5. Chi-square analysis of the relationship between experience of assessment in school and conceptualisation of assessment.

2) New conceptualisations of assessment and what aspect of practice was responsible

Our second research question was what participants' new conceptualisations of assessment are and what aspects of their practice informed these conceptualisations. Answers to this question were provided in part through responses to the survey questions: Have you developed other understandings of the term assessment that are different to your original understanding? and What are the new conceptualisations of assessment you have developed? Table 6 shows that four categories of new assessment understandings emerged. Common to these new conceptualisations were the recognition of the importance of learning and learners, and the re-focusing of the measurement element to include self-measurement by tutors and learners. This represents a transformation in perceptions of, and attitudes towards, assessment.

3) Aspects of practice responsible for change in assessment attitude

Answers to this subsidiary question emerged in part from the survey question: what do you think is responsible for the change in your understanding of assessment? Table 7 indicates that most participants attributed their transformation to elements of practice: 32.9% (N = 56) attributed it to observation of experienced colleagues at work, 46.5% (N = 79) to feedback and reflection on their own teaching, and 10.6% (N = 18) to the process of planning their lessons.
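These proportions can be checked directly against the overall sample of 170 survey respondents; the counts below are those reported in the text for Table 7.

```python
# Quick arithmetic check of the reported percentages (N = 170).
total = 170
counts = {
    "observing experienced colleagues": 56,
    "feedback and reflection on own teaching": 79,
    "lesson planning": 18,
}
for source, n in counts.items():
    print(f"{source}: {n}/{total} = {n / total:.1%}")
# observing experienced colleagues: 56/170 = 32.9%
# feedback and reflection on own teaching: 79/170 = 46.5%
# lesson planning: 18/170 = 10.6%
```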

Table 6. New understandings of assessment held by participants.

Table 7. Aspects of training and practice seen as responsible for change in participants' conceptualisation of assessment.

4) Findings from interviews on aspects of practice responsible for change

Interviews revealed that participants’ transformation in attitude was clearly linked to practical application, reflection, and the support of the community of practice. For some participants, transformation occurred when they had to respond to specific requirements of lesson planning and delivery. For example, the need to highlight and discuss assessment methods in their lesson planning was seen as a defining moment. One participant noted,

"My college's lesson plan pro-forma requires me to indicate my assessment strategy at the end of each stage of delivery. Because of this, I started seeing assessment in a different light." (Interview Participant 27)

Participants also indicated that transformation through reflection occurred through the “helpful comment” (Interview participant 16) of other practitioners. Reflection on this aspect of their practice was often triggered by what one of them classified as the “how did you know” question (Interview participant 12). Their reflection on this subject often led to the recognition that it was not just teachers who assessed learning, but also learners. One commented,

"While thinking about this question, it occurred to me that my learners tell me what they know and demand to know of other aspects of the lesson. The question is, how did they identify what they know and what they still need to know more of? When trying to find answers to these questions, it dawned on me that there were things I could do to make my learners realise the extent and limitations of their learning. It is at that point that I realised the full scope of assessment" (Interview Participant 23)

5) Time of change in understanding

Our third research question was when did the changes in understanding occur?

Answers to this question were provided in part through responses to the survey question: At what point in your training did you develop a new/additional conceptualisation of assessment? Please clarify if it was at a fixed point or if it cuts across different periods. Table 8 shows that for 97.1% (N = 165), the change in understanding occurred at a point directly relating to their practice. For 38.2% (N = 65), the change occurred gradually: although it started with lectures during training, it became actualised at a point during practice.

6) Interview findings

The dominant message from our interviews was that although participants became aware of assessment types and roles through their lectures, the actual practicality of using assessment remained hazy. One participant's comment about the limited contribution of lectures encapsulates this position:

"Yes, I think all of us could define the various assessment types and roles {during lectures}, but have little knowledge of what they look like and how to use them effectively in practice" (Interview Participant 3).

Another noted,

"After our lectures on assessment, we could all recount the definitions of assessment types, but how we use them and integrate them into our lessons was something we learned later" (Interview Participant 24).


Overall, we could sum up the findings from this study as follows.

First, different conceptualisations of assessment existed amongst our participants before they embarked on their training. This suggests that a one-size-fits-all strategy cannot work with such a group of learners. More importantly, these learners would have embarked on their training under the influence of varying factors.

Second, although most participants commenced their journeys with different conceptualisations of assessment, this study shows that through engagement with practice, most participants underwent a form of transformation leading to a new understanding of, and attitude towards, assessment.

Further, and perhaps most importantly, the study found that the change in conceptualisation occurs essentially outside of the lecture period and largely within the practice period. This indicates that the moment in transformation is essentially located in practice; the focus, and indeed the essence, of transformation should therefore be on practice rather than instruction.

Table 8. When the change in assessment understanding took place amongst participants.

4. Discussion

1) Pre-training views of assessment

The first relevant issue here is the divergent views of assessment represented in the responses to this question. While this may be a reflection of the dichotomy foregrounded in the various uses of assessment, it may also reflect the twin concepts of assessment as a process (Swaffield & Dudley, 2010) and as a product (Taras, 2005). For participants who said assessment had no significance, there is an indication that this might be related to semantics. One participant suggested this, noting:

"Now that I am a teacher, I understand what this term means. … I was more familiar with the terms examination and tests" (Interview Participant 6)

Thus, for these participants, examination and test were initially synonymous with assessment.

The dominant view of assessment as a tool for testing suggests that the attitude of many trainees towards assessment at this point in their development was shaped by a misconception of its role as solely for examination and testing. It is conceivable that their thinking on assessment was generally limited by its perception as a summative tool (McDonald, 2012). This reaffirms existing views that most teachers embark on their practice journeys with pre-conceptions of assessment (Brown, 2008, 2011; Sethusha, 2012; Levy-Vered & Alhija, 2015; Oguledo, 2016).

However, further questions must be raised about the various conceptualisations. Do the pre-conceptions "resist training", or is there a "positive relationship between assessment training and teacher AL" (Xu & Brown, 2016: p. 163)? Also, is there agreement on what these pre-conceptions might be across studies and periods? In this context, there is a difference between the finding of this study and previous studies, which have found particular pre-conceptions such as improvement and accountability (Brown, 2002, 2004b, 2006) to be dominant. An emergent view, therefore, is that pre-conceptions might vary according to the participants. Given the dominance of the conceptualisation of assessment as a testing tool, we suggest that the starting point for transformation in attitude towards assessment must be its conceptualisation as a tool for testing and measurement. Strategies for effecting a change in attitude must, therefore, find a way of rupturing that perception in order to achieve perspective transformation, through which we critically examine our prior interpretations (Anderson & Krathwohl, 2001).

The divergent views of assessment held by participants imply that there are likely to be different starting points for the transformative journey of different individuals. A valid question, therefore, is whether it is fruitful for teacher education to commence the journey of transformation from the same point and to use a similar instrument for stimulating that journey, as is currently done. The current strategy for developing assessment knowledge and attitude is located within the framework of lectures, although it is assumed that placement experience will contribute subsequently. However, as shown in this study, trainees come with divergent views and would therefore require different forms of rupture. A common starting point, therefore, might not be the most effective approach.

The significance test identifies various types of teaching-related experience as significant variables in the conceptualisation of assessment. While this resonates with the findings of previous studies that experience informs the AL of teachers (Brown, 2011; Oguledo, 2016), it leaves questions around the type and timing of experience. Because the significant experience here is pre-training, it raises questions about the role, structure and relevance of contemporary assessment training. Although the conceptualisation of assessment is based on pre-training experiences, it is logical to assume that post-training experience must also have a role. Further, we could argue that conceptualisations will change as experiences change, as teachers' assessment identity is dynamic and continues to evolve as their attitudes change (Looney, Cumming, van Der Kleijb, & Harris, 2017). This, therefore, demands that teacher educators constructively work towards learner disorientation. Because many newly qualified teachers had spent years being comfortable with different conceptualisations of assessment, we can anticipate some resistance to attempts to alter these conceptualisations. Teacher trainers must, therefore, find a way of offering students perspective-expanding information, which is not likely to occur naturally through the current dominant lecture structure in teacher education, in order to create experiences for their trainees that can become sources of rupture (Mezirow, 1991; Glisczinski, 2011).

2) Changes in existing attitude towards assessment during and post-training

Our findings highlight the importance of practice and bring into play the related concepts of experience, reflection and practice. We identified a nuanced variation in how our participants perceived experience as it relates to practice post-qualification. Although our data on pre-training conceptualisations reflect experience in its chronological form, post-qualification experience was more participatory, framed in terms of the teaching and learning activities participants took part in. This resonates with Hodges’s (2014) view on the importance of inter-practice as a primer for transformation. Contrary to the assumption that years of experience are the crucial driver in the reformulation and transformation of teachers’ AL (Olson & Maio, 2003; Hassanein, 2015; Holmes & Singh, 2012), it would seem that the nature of the experience is more important. Furthermore, the role of participatory experience validates the claim that teachers’ assessment identity does indeed evolve (Looney, Cumming, van der Kleij, & Harris, 2017). Teacher educators must, therefore, consider developing strategies that go beyond the immediate to ones that are longer-term and that facilitate a gradual but progressive evolution of assessment conceptualisation.

A further insight that emerged from these findings is that assessment conceptualisations can be multiple, fluid and dynamic. As teachers develop, they accumulate a pool of conceptualisations from which they draw. Conceptualisations, therefore, do not assume a unitary, homogenous form applicable to all situations. Rather, they are informed by changing attitudes, come in different forms, and are available to the teacher as a toolbox. One participant put this in context, explaining:

“… For example, I found that developing and reviewing ILPS {Individual Learning plans} at key points in the year in collaboration with my students was a requirement. I had never done this. The template required that students have their own input into this document. I was at a loss and spoke to an experienced tutor … and erm … It was how I found out that assessments can be a tool for students to measure their own learning. You see, in the document, they asked how you found out what more your students need to learn and if they {the learners} were aware of it. So, my view of assessment changed, maybe, better to say expanded, you know, became wider” (Interview Participant 15).

Finally, we could also conclude that experience does not necessarily change practice on its own. Rather, it changes perceptions and attitudes. It would seem that as teachers encounter more participatory experiences, their attitudes continue to alter, thereby creating new conceptualisations (Hassanein, 2015; Holmes & Singh, 2012), and new Teacher Assessment Identities (TAI) (Looney, Cumming, van der Kleij, & Harris, 2017).

3) What aspect of practice is responsible for change?

The answer to this question emerged partly from the survey question: “What do you think led to the change in your view about assessment?” Our findings indicate that various elements of practice, including peer support through observations and feedback, as well as reflection, were responsible for our participants’ change in conceptualisation. This immediately raises questions about whether lectures are a good way of causing the desired rupture in perceptions. As shown in Table 7, the driving factors for change in conceptualisation all relate to practice. We could, therefore, argue that students associate transformation more with practice. Perhaps teacher education programmes need to reconsider which elements should be included in the theoretical lecturing phase and prioritise practice in the development of assessment skills.

Two existing arguments provide some insight in this context. First, Hodges (2014) suggests that there is a strong relationship between transformative learning and practice, noting:

“Despite their differing views of the relationship between social context, individual experience, and the processes of learning, transformative learning and practice-based learning theories can be regarded as complementary” (p. 165).

The identification of practice-related factors as the drivers of change in perception appears to validate the claims made by Hodges.

The second argument concerns the relationship between transformation, critical reflection and learning communities (Servage, 2008). Servage suggests that we focus on both the end and the means. This, we suggest, requires a total change in orientation, such that the dominant focus on prescribed best practices will need to yield in part to a “more collaborative process ... resulting in collective imagination” (p. 65). Mapping out the process requires input from learners or beginners just as it requires input from experienced members of the community, as

“studying best practices has value and utility as a form of teacher learning, but it is an incomplete representation of collaborative processes. It is not transformative” (p. 65).

Transformation, therefore, cannot be facilitated solely through prescribed best practices, but through a committed learning community which factors in both its own actions and the social and policy context framing those actions (Servage, 2008). What leads to transformation, among other things, is the ability of the members of a professional learning community to engage collaboratively in critical reflection. Collaborative teacher learning calls on professionals to develop a strong sense of community, the glue of which is collective responsibility for student learning (Harris & Muijs, 2005). However, this must go beyond a step-by-step prescription of how to do things and must include reflection on why things were done in particular ways. A preliminary conclusion we can draw from the data in this respect, therefore, is that practice may be especially important for transformation because it involves elements of community bonding and critical reflection.

4) Time of change

Our third research question asked when the new conceptualisation occurred. It is instructive to note that only 2.9% (N = 5) acknowledged lectures during training as responsible for the change in their understanding of assessment. As such, we may conclude that while a knowledge base is a necessary condition, it might not be sufficient for developing the right assessment attitude (Xu & Brown, 2016). Because teachers’ knowledge base is often developed through lectures, a re-examination of the structure of teacher education in the area of assessment is essential. Further, our findings here challenge the arguments that inadequate assessment attitude is caused by poor tuition and inadequate course content (DeLuca, Chavez, Bellara, & Cao, 2013), limited course duration (Greenberg & Walsh, 2012), or a total lack of course coverage of assessment (Popham, 2011). They also emphasise the potential roles that communities of practice can play in initiating a disorientation leading to transformation (Servage, 2008) and the interaction between reflection and transformation (Hodges, 2014). It is plausible to suggest that the formation of attitude towards assessment is a journey, a gradual movement in transition. While this movement might be initiated through lecture input, it is possibly the beginning of a journey that culminates in transformation through practice, reflection and the support of communities of practice. A crucial point is that the changes indicated by these participants are not merely changes of practice but, more fundamentally, of attitude. In essence, change in practice is made possible by change in attitude.

The dominance of practice-related events as the moment of change, we suggest, highlights the difference between assessment knowledge and assessment attitude (Brown, 2004a; Brown & Gao, 2015). Although lectures appeared to initiate the process of acquiring assessment knowledge, the actual transformation in attitude, the moments of change, occurred during practice. Assessment knowledge is, therefore, a base from which other attributes such as “teachers’ conceptions, macro socio-cultural and micro-institutional contexts” and other factors could develop (Xu & Brown, 2016: p. 167).

5. Conclusion and Implications

This study sets out to answer three research questions:

What were the pre-training conceptions of assessment held by newly qualified teachers? What new conceptualisations emerged during and post-training, and what factors were responsible? And when did these changes occur? Our findings confirm that different conceptualisations of assessment exist pre-training and that, through engagement with practice, most participants undergo a form of transformation leading to a new understanding of, and attitude towards, assessment. Further, they locate the time of change in conceptualisation outside the lecture period and mostly within the period of practice.

A significant learning point from the study is the strong but dynamic link between participants’ transformation in understanding and practice. The study shows that while experience in terms of years of practice might be significant, the nature of the practice experience is more significant still. As such, practitioners, as they acquire more chronological and participatory experience, develop a multiplicity of conceptualisations of assessment. These different understandings become a form of toolbox from which practitioners select when confronted with different assessment requirements. While this confirms the notion that experience may inform one’s attitude (Oguledo, 2016; Olson & Maio, 2003; Hassanein, 2015; Holmes & Singh, 2012), it goes further to demonstrate that as more experiences are gained, additional attitudes are developed. In some cases, new attitudes obliterate existing ones; in others, they simply complement them. Further, it becomes obvious that the development of attitudes will vary from community to community, depending on the type of activity carried out within each community. A newly trained teacher in one setting might, therefore, develop a set of attitudes totally different from those developed by another newly trained teacher in another setting.

What, then, might be the implications of this study? The findings have implications for two groups of stakeholders in the context of teacher development. First, they raise the issue of the structure and content of the training programme offered to trainee teachers in the lifelong learning sector (LLS) and call into question the current standards-driven curriculum which has informed the development of courses focused on assessment. They invite programme developers in the sector to consider whether there are other ways of enabling trainees to become more effective users of assessment for learning and to develop different conceptualisations of assessment. Evidently, there is a need to anchor the development of these attitudes more to specific elements of practice than to theory-informed knowledge. We suggest that the findings of this study demand that the current structure be reviewed.

Secondly, as this study has located the moment of transformation within practice, developers of post-training programmes for newly and recently qualified teachers in the sector should consider how to utilise its findings. This, we hope, can lead to the conscious development of programmes for newly qualified lecturers in the LLS which are driven by communities of practice and provide opportunities for the transformation of attitudes towards assessment.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


[1] Adie, L. (2013). The Development of Teacher Assessment Identity through Participation in Online Moderation. Assessment in Education: Principles, Policy & Practice, 20, 91-106.
[2] American Federation of Teachers, National Council on Measurement in Education, & National Education Association (AFT, NCME, & NEA) (1990). Standards for Teacher Competence in Educational Assessment of Students. Educational Measurement: Issues and Practice, 9, 30-32.
[3] Anderson, L. W., & Krathwohl, D. R. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.
[4] Berry, R. (2008). Assessment for Learning. Hong Kong: Hong Kong University Press.
[5] Brown, G. (2002). Teachers’ Conceptions of Assessment. Unpublished Doctoral Dissertation, Auckland: University of Auckland.
[6] Brown, G. (2004b). Teachers’ Conceptions of Assessment: Implications for Policy and Professional Development. Assessment in Education: Principles, Policy and Practice, 11, 301-318.
[7] Brown, G. (2006). Teachers’ Conceptions of Assessment: Validation of an Abridged Instrument. Psychological Reports, 99, 166-170.
[8] Brown, G. (2011). Teachers’ Conceptions of Assessment: Comparing Primary and Secondary Teachers in New Zealand. Assessment Matters, 3, 45-70.
[9] Brown, G. T. L. (2008). Conceptions of Assessment: Understanding What Assessment Means to Teachers and Students. New York: Nova Science.
[10] Brown, G., & Gao, L. (2015). Chinese Teachers’ Conceptions of Assessment for and of Learning: Six Competing and Complementary Purposes. Cogent Education, 2, Article ID: 993836.
[11] Brown, G., & Remesal, A. (2012). Prospective Teachers’ Conceptions of Assessment: A Cross-Cultural Comparison. Spanish Journal of Psychology, 15, 75-89.
[12] Brown, S. (2004a). Assessment for Learning. Learning and Teaching in Higher Education, 1, 81-89.
[13] Collins, J. (2018). Lifelong Learning as a Transformative Endeavour: How Do Part-Time Mature Learners Make Sense of Barriers and Opportunities in Higher Education? A Dissertation Is Submitted for the Degree of PhD in Higher Education: Research, Evaluation and Enhancement, October 2018.
[14] Corlett, S., & Mavin, S. (2018). Reflexivity and Researcher Positionality. In C. Cassell, A. Cunliffe, & G. Grandy (Eds.), The Sage Handbook of Qualitative Business and Management Research Methods (pp. 377-389). London: Sage.
[15] Day, S. (2012). A Reflexive Lens: Exploring Dilemmas of Qualitative Methodology through the Concept of Reflexivity. Qualitative Sociology Review, 8, 60-85.
[16] DeLuca, C., Chavez, T., Bellara, A., & Cao, C. (2013). Pedagogies for Preservice Assessment Education: Supporting Teacher Candidates’ Assessment Literacy Development. The Teacher Educator, 48, 128-142.
[17] Diener-West, M. (2016). Use of the Chi-Square Statistic. Baltimore, MD: The Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University.
[18] Duckworth, V., & Ade-Ojo, G. O. (2016). Journey through Transformation: A Case Study of Two Literacy Learners. Journal of Transformative Education, 14, 285-304.
[19] Earl, L. (2006). Rethinking Classroom Assessment with Purpose in Mind. Winnipeg: Manitoba Education, Citizenship and Youth.
[20] Earl, L. (2013). A Review of Assessment as Learning: Using Classroom Assessment to Maximize Student Learning (2nd ed.). Thousand Oaks, CA: Corwin Press.
[21] Earl, L., & Katz, S. (2006). Leading Schools in a Data-Rich World. Thousand Oaks, CA: Corwin Press.
[22] Eschenbacher, S., & Fleming, T. (2020). Transformative Dimensions of Lifelong Learning: Mezirow, Rorty and COVID-19. International Review of Education.
[23] Fabrigar, L. R., MacDonald, K., & Wegener, D. (2005). The Structure of Attitudes. In D. Alberracin, B. Johnson, & M. Zanna (Eds.), The Handbook of Attitudes (Second Edition, pp. 79-125). Mahwah, NJ: Lawrence Erlbaum Associates.
[24] Fleer, M. (2015). Developing an Assessment Pedagogy: The Tensions and Struggles in Retheorizing Assessment from a Cultural-Historical Perspective. Assessment in Education: Principles, Policy & Practice, 22, 224-246.
[25] Forsberg, E., & Wermke, W. (2012). Knowledge Sources and Autonomy: German and Swedish Teachers’ Continuing Professional Development of Assessment Knowledge. Professional Development in Education, 38, 741-758.
[26] Freire, P. (1972). Pedagogy of the Oppressed. London: Penguin Press.
[27] Glisczinski, D. (2011). Lighting up the Mind: Transforming Learning through the Applied Scholarship of Cognitive Neuroscience. International Journal for the Scholarship of Teaching and Learning, 5, Article No. 24.
[28] Graham, P. (2005). Classroom-Based Assessment: Changing Knowledge and Practice through Pre-Service Teacher Education. Teaching and Teacher Education, 21, 607-621.
[29] Greenberg, J., & Walsh, K. (2012). What Teacher Preparation Programs Teach about K-12 Assessment: A Review. Washington DC: National Council on Teacher Quality.
[30] Hamdan-Mansour, A. (2010). Predictors of Hostility among University Students in Jordan. Scandinavian Journal of Caring Sciences, 24, 125-130.
[31] Harlen, W. (2007). Assessment of Learning. Los Angeles, CA: SAGE Publications.
[32] Harris, A., & Muijs, D. (2005). Improving Schools through Teacher Leadership. Maidenhead: Open University Press.
[33] Hassanein, E. (2015). Inclusion, Disability and Culture. Rotterdam: Sense Publishers.
[34] Hill, M. F., Ell, F., Grudnoff, L., & Limbrick, L. (2014). Practice What You Preach: Initial Teacher Education Students Learning about Assessment. Assessment Matters, 7, 90-112.
[35] Hodges, S. (2014). Transformative Learning as an “Inter-Practice” Phenomenon. Adult Education Quarterly, 64, 165-181.
[36] Holmes, J., & Singh, S. (2012). Social Psychology: Student Handbook to Psychology (Vol. 7). New York: Facts on File.
[37] Jeong, H. (2013). Defining Assessment Literacy: Is It Different for Language Testers and Non-Language Testers? Language Testing, 30, 345-362.
[38] Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed Methods Research: A Research Paradigm Whose Time Has Come. Educational Researcher, 33, 14-26.
[39] Jones, G. M., & Leagon, M. (2014). Science Teacher Attitudes and Beliefs: Reforming Practice. In N. G. Lederman, & S. K. Abell (Eds.), Handbook of Research on Science Education (Vol. II, pp. 830-847). New York and London: Routledge.
[40] Kegan, R., & Lahey, L. (2009). Immunity to Change: How to Overcome It and Unlock the Potential in Yourself and Your Organization. Cambridge, MA: Harvard Business School.
[41] Lam, R. (2015). Language Assessment Training in Hong Kong: Implications for Language Assessment Literacy. Language Testing, 32, 169-197.
[42] Levy-Vered, A., & Alhija, F. N.-A. (2015). Modelling Beginning Teachers’ Assessment Literacy: The Contribution of Training, Self-Efficacy, and Conceptions of Assessment. Educational Research and Evaluation, 21, 378-406.
[43] Looney, A., Cumming, J., van der Kleij, F., & Harris, K. (2017). Reconceptualising the Role of Teachers as Assessors: Teacher Assessment Identity. Assessment in Education: Principles, Policy & Practice, 25, 442-467.
[44] Lund Research Ltd. (2012). Convenience Sampling. Lærd Dissertation.
[45] McDonald, B. (2012). Assessment in Service-Learning. ERIC, Online Submission.
[46] Mezirow, J. (1978). Perspective Transformation. Adult Education Quarterly, 28, 100-110.
[47] Mezirow, J. (1991). Transformative Dimensions of Adult Learning. San Francisco, CA: Jossey-Bass.
[48] Mezirow, J. (1997). Transformative Learning: Theory to Practice. New Directions for Adult and Continuing Education, 1997, 5-12.
[49] Mezirow, J. (2000). Learning as Transformation. San Francisco, CA: Jossey-Bass.
[50] National Research Council (2001). Knowing What Students Know: The Science and Design of Educational Assessment. Washington DC: The National Academies Press.
[51] NSNC (2016). What Is an Acceptable Survey Response Rate? The National Social Norms Center, Michigan State University.
[52] Nulty, D. (2008). The Adequacy of Response Rates to Online and Paper Surveys: What Can Be Done? Assessment & Evaluation in Higher Education, 33, 301-314.
[53] Oguledo, N. (2016). A Study of Science Teachers’ Attitudes towards Assessment. Unpublished EDD Thesis of the University of Greenwich.
[54] Olson, J., & Maio, G. (2003). Attitudes in Social Behavior. In T. Millon, & M. Lerner (Eds.), Comprehensive Handbook of Psychology: Personality and Social Psychology (Vol. 5, pp. 299-325). Hoboken, NJ: Wiley.
[55] Onwuegbuzie, A. J., & Collins, K. M. T. (2007). A Typology of Mixed Methods Sampling Designs in Social Science Research. The Qualitative Report, 12, 281-316.
[56] Onwuegbuzie, A. J., & Combs, J. P. (2011). Data Analysis in Mixed Research: A Primer. International Journal of Education, 3, E13.
[57] Oraif, F. (2007). An Exploration of Confidence Related to Formal Learning in Saudi Arabia. Doctoral Thesis.
[58] Osgood, C., Suci, G., & Tannenbaum, P. (1967). The Measurement of Meaning. Urbana, IL: University of Illinois Press.
[59] Oskamp, S., & Schultz, P. (2005). Attitudes and Opinions. Mahwah, NJ: L. Erlbaum Associates.
[60] Patton, M. (2002). Qualitative Research and Evaluation Methods. Thousand Oaks, CA: SAGE Publications.
[61] Popham, W. J. (2011). Assessment Literacy Overlooked: A Teacher Educator’s Confession. The Teacher Educator, 46, 265-273.
[62] Quality Assurance Agency, UK (QAA) (2012). Understanding Assessment: Its Role in Safeguarding Academic Standards and Quality in Higher Education: A Guide for Early Career Staff. QAA, UK.
[63] Servage, L. (2008). Critical and Transformative Practices in Professional Learning Communities. Teacher Education Quarterly, 35, 63-77.
[64] Sethusha, M. (2012). Sixth Grade Teachers’ Conceptions of Classroom Assessment. Literacy Information and Computer Education Journal, 3, 663-670.
[65] Swaffield, S., & Dudley, P. (2010). Assessment Literacy for Wise Decisions. London: Association of Teachers and Lecturers.
[66] Taras, M. (2005). Assessment—Summative and Formative—Some Theoretical Reflections. British Journal of Educational Studies, 53, 466-478.
[67] The Assessment Reform Group (2006). The Role of Teachers in the Assessment of Learning.
[68] Thompson, A. G. (1992). Teachers’ Beliefs and Conceptions: A Synthesis of the Research. In D. A. Grouws (Ed.), Handbook of Research on Mathematics Teaching and Learning (pp. 127-146). New York: Macmillan.
[69] Vaismoradi, M., Jones, J., Turunen, H., & Snelgrove, S. (2016). Theme Development in Qualitative Content Analysis and Thematic Analysis. Journal of Nursing Education and Practice, 6, 100-110.
[70] Xu, Y., & Brown, G. T. L. (2016). Teacher Assessment Literacy in Practice: A Reconceptualization. Teaching and Teacher Education, 58, 149-162.
[71] Xu, Y., & Liu, Y. (2009). Teacher Assessment Knowledge and Practice: A Narrative Inquiry of a Chinese College EFL Teacher’s Experience. TESOL Quarterly, 43, 493-513.


Copyright © 2020 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.