The Application of Portfolio-Based Formative Assessment in Senior High School English Writing Teaching: An Empirical Study

Abstract

This study investigates the effectiveness of formative assessment in senior high school English writing teaching, with a focus on its impact on students’ writing competence and their attitudinal acceptance. Adopting a quasi-experimental design, the research selected two second-year classes from a provincial high school in Z Province in China as participants. The experimental group underwent a one-month formative assessment intervention in writing instruction, while the control group maintained conventional teaching approaches. Data were collected through pre-test and post-test continuation writing tasks and questionnaires, with subsequent statistical analysis conducted using SPSS software. Results revealed statistically significant improvements in the experimental group’s writing performance compared to the control group in post-test assessments. Moreover, students demonstrated positive perceptions towards formative assessment activities, acknowledging their efficacy in enhancing writing proficiency. These findings suggest that formative assessment serves as an effective pedagogical approach for developing English writing competence among senior high school students, offering both theoretical implications and practical value for the innovation of writing instruction methodologies in EFL contexts.

Share and Cite:

Sun, Y.M. (2025) The Application of Portfolio-Based Formative Assessment in Senior High School English Writing Teaching: An Empirical Study. Open Access Library Journal, 12, 1-18. doi: 10.4236/oalib.1113679.

1. Introduction

In the context of globalization, English, as an international lingua franca, has grown increasingly vital. English writing proficiency, a critical component of comprehensive language competence, holds paramount importance for senior high school students. It not only aids students in articulating their thoughts and perspectives but also lays a solid linguistic foundation for their future academic and professional endeavors. However, current English writing instruction in senior high schools faces numerous challenges, with students’ writing skills generally requiring improvement [1]. Effectively enhancing senior high school students’ English writing proficiency has thus become an urgent issue for educators.

Formative assessment, an advanced pedagogical evaluation concept, has garnered significant attention in recent years. Emphasizing continuous and dynamic evaluation during the teaching process, it prioritizes students’ learning progress over final outcomes [2]. The core of formative assessment lies in providing timely and specific feedback, enabling students to understand their learning status, clarify goals, adjust strategies, and thereby foster academic growth. In English writing instruction, formative assessment holds theoretical and practical value by allowing teachers to identify students’ specific challenges and offer targeted guidance, facilitating iterative improvement.

However, existing research on formative assessment in senior high school English writing instruction remains limited, particularly regarding practical implementation methods, effectiveness evaluation, and its specific impact on writing proficiency. This study aims to empirically investigate the effects of formative assessment on English writing instruction, analyzing its influence on students’ writing proficiency and their acceptance of such assessment. The findings will enrich theoretical research on formative assessment in writing pedagogy and provide practical references for educators.

In summary, this study carries both theoretical and practical significance. It elucidates the mechanisms and effects of formative assessment in senior high school English writing instruction, offering innovative approaches to optimize teaching practices and enhance students’ writing proficiency.

2. Literature Review

2.1. Formative Assessment

As an important educational evaluation method, formative assessment aims to promote students’ learning and development through continuous monitoring and feedback on their learning processes. The concept was first proposed by American evaluation expert M. Scriven in 1967, and later introduced into educational assessment practice by educational psychologist B. S. Bloom. Formative assessment not only focuses on students’ learning outcomes but also emphasizes obtaining feedback information in the teaching process to improve teaching strategies and promote students’ learning [3] [4].

According to Black and Wiliam's definition, formative assessment encompasses all activities undertaken by teachers and their students that yield information to be used as feedback for improving teaching and learning. This evaluation method emphasizes teachers' observation of, and feedback on, students' performance during the teaching process, aiming to help students identify learning difficulties and make corresponding adjustments [5].

The characteristics of formative assessment include its dynamic and adaptive nature. Shepard pointed out that formative assessment is a dynamic process in which teachers and students jointly promote learning through interaction [6]. Vygotsky, viewing learning from a social-constructivist perspective, argued that it is the result of cooperation between teachers and students [7].

Formative assessment is implemented through various methods, including classroom observation, discussion, and assignment analysis [8]. These methods not only provide immediate feedback but also help teachers adjust teaching strategies in a timely manner to meet students’ individual needs [9]. In recent years, the introduction of digital technology has provided new possibilities for formative assessment.

In summary, as a multi-dimensional educational evaluation method, formative assessment has significantly promoted the improvement of educational practice. Its ability to provide immediate feedback, promote self-regulated learning, and integrate with digital technology has given it an important place in modern education. However, to maximize its potential benefits, challenges in teacher training and implementation must still be addressed.

2.2. Learning Portfolio

The term “portfolio” was originally used in the art world to refer to a selection of works displayed by artists. In recent years, this concept has been widely applied as a teaching and assessment tool in education at all levels and various disciplines [10] [11]. Although there are various definitions of “portfolio” in educational practice in the literature, the definition proposed by Cooper and Love is particularly comprehensive: a portfolio is a planned compilation that showcases knowledge, skills, values, and achievements, and includes reflections and interpretations of the displayed works [12]. A learner’s portfolio may include representative works such as texts, photos, or videos, as well as evaluations from teachers or mentors.

Early portfolios were presented in physical form, but with the development of technology, electronic portfolios (ePortfolios) have gradually emerged. ePortfolios are essentially similar to paper portfolios, the main difference lying in the storage medium. Although academia has not reached a consensus on the definition of ePortfolios, they have significant advantages over paper portfolios: they can display a richer and more diverse range of materials, are accessible to a wider audience, are not limited by linear structures, and are more flexible to work with [13]. In addition, ePortfolios allow learners to share their understanding of their own development during the creative process, which helps clarify thinking and supports a more accurate assessment of final works. Meanwhile, creators can frequently receive feedback from peers and teachers, and this feedback can itself become a valuable part of the portfolio. Therefore, ePortfolios have become the norm in suitable environments.

In English, terms related to ePortfolios include efolio, digital portfolio, web-based portfolio, and online portfolio, among others. Although these terms focus on different storage media, the core purpose of a portfolio remains to showcase representative works and achievements. Portfolios can be divided into “display-type” and “assessment-type”: the former is mainly used to support job applications, while the latter is used for assessment, and students’ grades are based on the submitted content. In contrast, the “learning-type” portfolio includes first drafts and unpolished works, emphasizing reflection, formative assessment, and feedback in the learning process, aiming to promote and record learning and development.

The theoretical basis of learning portfolios points to multiple benefits. Creators actively participate in selecting the works displayed and are pushed to reflect, thereby shifting responsibility for learning from teachers to learners and enabling learners to play a more active role. According to social constructivism and metacognitive learning theories, such participation can promote deep learning and improved self-awareness. Learning portfolios are therefore regarded as effective tools to support self-regulation and cognitive monitoring, helping to cultivate an awareness of lifelong learning. In addition, learning portfolios are suitable for the cultivation and assessment of comprehensive, cross-curricular knowledge and transferable skills, rather than being limited to professional knowledge in a single discipline. As higher education institutions face pressure to bridge the gap between what students learn and what employers expect, the application of learning portfolios has gained increasing attention. Academics and the media increasingly emphasize cultivating T-shaped talents, that is, graduates who possess both professional knowledge and soft skills [14].

In terms of application, the use of learning portfolios in higher education dates back to pre-service teacher education in the late 1980s and early 1990s. Although early research emphasized their potential value as learning tools, most studies focused mainly on assessment of learning [15]. Since the late 1990s, influenced by new findings in metacognition research and closely tied to the promotion of learner-centered pedagogy, the research focus has gradually shifted to learning itself. At the same time, the body of research literature on learning portfolios has grown steadily, and their scope of application across disciplines has expanded, reflecting advances in ePortfolio technology.

Portfolios not only display final achievements but also record various stages of the learning process, which helps teachers and students assess learning progress and identify areas for improvement, thus achieving the goals of formative assessment. By creating and maintaining portfolios, students can actively participate in their own learning, select displayed works, and reflect. This self-regulated process is consistent with the concept of formative assessment, encouraging students to take responsibility for their learning and promoting their autonomous learning ability. In addition, portfolios provide diversified assessment methods, allowing teachers to understand students’ performance in different tasks and projects by reviewing their portfolios. This diversified assessment method facilitates the implementation of formative assessment because it does not rely solely on a single exam or test but comprehensively considers students’ overall learning performance. Portfolios can also serve as a bridge of communication between teachers and students: teachers can provide specific feedback by reviewing portfolios, and students can express their learning feelings and reflections in portfolios. Such interaction contributes to the effective implementation of formative assessment.

In conclusion, learning portfolios aim to support, assess, and record lifelong learning through critical self-reflection, becoming valuable teaching tools in higher education that expand learning experiences and help graduates master “21st-century skills.”

3. Research Design

3.1. Research Questions

The purpose of this study is to analyze the impact of formative assessment on senior high school students' English writing ability. It addresses two specific questions: first, what attitudes do 11th-grade students in an ordinary senior high school hold toward the application of formative assessment in their teachers' writing instruction; second, to what extent does formative assessment influence these students' English writing performance.

3.2. Research Subject

This study takes students from two 11th-grade classes in a general senior high school in Z province in China as the experimental subjects, including a control class with 43 students and an experimental class with 45 students, both of which are taught by the same teacher.

3.3. Research Method

This study adopts a quasi-experimental design, which has some characteristics of experimental design but does not meet all the strict experimental design requirements. It is suitable for situations where random assignment or strict control of all variables is impossible [16]. In this study, the research subjects are students from two 11th-grade classes in a general senior high school, which are comparable in teaching content, teaching progress, etc. Since the classes preexist and random assignment is impossible, natural grouping is used: one class serves as the experimental class to receive a one-month formative assessment English writing classroom experiment, and the other class serves as the control class to maintain the conventional writing teaching model. Although this non-random grouping cannot completely eliminate potential initial differences between the two groups, it can still reflect the effect of formative assessment to a certain extent.

3.4. Research Tool

There are two main research tools: one is the writing test exercise, and the other is the questionnaire survey on formative assessment activities.

(1) Writing Test

Before the experiment, the teacher needs a comprehensive picture of the writing levels of both classes; the pre-test is used to judge whether the two classes' overall writing levels are comparable enough to support a comparative teaching experiment. After the experiment, both classes take the same test at the same time to examine the effect of formative assessment in English writing teaching. To keep the writing tasks valid and appropriate to the students' grade, the teacher uses reading and continuation writing, one of the writing task types in the college entrance examination English paper, and ensures that the pre-test and post-test prompts do not obviously overlap yet are not unconventional.

Because rater resources were scarce, a single English teacher would otherwise have had to grade all 176 pre-test and post-test compositions, which would introduce considerable subjectivity into the scoring. At the same time, the General Senior High School Curriculum Standards: English states: "The general high school English curriculum should attach importance to the reform of teaching models and learning methods under the background of modern information technology, make full use of information technology to promote the deep integration of information technology and curriculum teaching, and scientifically organize and carry out online and offline blended teaching according to the characteristics of English learning in information-based environments, enrich curriculum resources, and expand learning channels" [17]. Therefore, this study uses the iWrite English Writing Teaching and Evaluation System 2.0 for automatic scoring. Developed by FLTRP in cooperation with the China Foreign Language Education Research Center of Beijing Foreign Studies University, the system uses big data technology to "assess students' compositions from four dimensions: language, content, text structure, and technical specifications, with good scoring reliability" [18]. It also supports peer evaluation and comparison across the multiple versions a student submits, providing strong support for this study.

For the collected students’ writing scores, this study uses the statistical software IBM SPSS 27.0 for data analysis. This software supports data processing, analysis, and visualization, and has been widely used in social sciences.

(2) Questionnaire Survey

To investigate the application of formative assessment in English writing teaching for 11th-grade students in ordinary senior high schools, a questionnaire survey is the main instrument: a questionnaire targeted at the research purpose is designed, and students' feedback on the formative assessment implemented in classroom instruction (including peer evaluation and self-evaluation) is analyzed. Each item uses a six-point scale: strongly agree, agree, somewhat agree, somewhat disagree, disagree, and strongly disagree. The surveyed students choose the option closest to their actual writing situation. Before the experiment, the teacher explains the importance of the questionnaire to students to ensure the authenticity and reliability of the questionnaire data.

3.5. Research Process

In this study, the research process fully reflects the design of intra-subject comparison, where each participant completes two rounds of writing tests. First, the researcher selects two classes from the 11th grade of a general senior high school as research subjects, with one class as the experimental class and the other as the control class. Although this grouping method cannot completely eliminate individual differences, the subsequent intra-subject comparison can control the influence of these differences on the experimental results to a certain extent.

Before the experiment, all students in the experimental class and the control class participated in a pre-test, that is, completing a reading and a continuation writing composition. The purpose of the pre-test is to obtain the baseline data of students’ English writing ability before the experiment, providing a benchmark for subsequent comparative analysis. At this time, each student serves as their own control. After the pre-test, the iWrite scoring system is used to conduct machine scoring on students’ compositions, and the score of each student is recorded in detail.

Subsequently, while keeping the teaching objectives and content of the experimental class and the control class identical and without increasing teaching hours, students' reading and continuation writing ability is gradually cultivated. The intervention lasts one month: the experimental class receives one month of English writing instruction that incorporates formative assessment. During this stage, the teacher actively uses formative assessment methods in the experimental class to provide students with continuous feedback and guidance to help them improve their English writing ability. In addition to regular writing exercises, periodic learning tests, and various quizzes, a combination of evaluation methods is adopted, including student peer evaluation, self-evaluation, learning portfolio records, and teacher records, as shown in Figure 1.

Figure 1. Specific composition of formative assessment in this study.

The student learning portfolio includes multiple parts, such as text display (contents of each composition), personal development (involving improvements in knowledge and skills, as well as changes in emotional attitudes, etc.), and after-class reflection. The portfolio is kept by students, and teachers can access it at any time; students can also ask teachers to view it and seek guidance whenever necessary. The teacher’s record portfolio mainly summarizes classroom observations, teacher interviews, and student homework correction situations. This prompts teachers to pay attention to the development trends of each student, promptly understand their needs, carry out specific analysis, and provide guidance in learning methods, strategies, etc. The evaluation of learning effects needs to be conducted from three dimensions: student self-assessment, peer-assessment, and teacher assessment. Student self-assessment focuses on evaluating their own learning interest, attitude, learning strategies, participation degree, cooperative ability, and writing development level; peer assessment can be carried out according to standards such as the gains from each pair-work composition modification; teacher evaluation can be conducted by observing students’ performance of English proficiency and comprehensive quality in the writing process, enthusiasm for polishing and modifying compositions, and approaches to addressing challenges and solving problems. The control class continues to receive traditional English writing teaching, only conducting summative evaluations such as regular writing exercises, periodic learning tests, and various quizzes.
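For readers who wish to keep such portfolio records digitally, the following minimal Python sketch shows one possible way to organise the components described above (composition text, personal development notes, after-class reflection, and teacher/peer feedback). The class and field names are illustrative assumptions, not part of the study's design.

```python
# Illustrative sketch only: one way to structure the portfolio components
# described in this study. Field names are hypothetical, not the study's own.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PortfolioEntry:
    composition_text: str              # text display: the composition itself
    personal_development: str          # gains in knowledge, skills, attitudes
    after_class_reflection: str        # student's reflection on the writing task
    teacher_feedback: str = ""         # added when the teacher reviews the entry
    peer_feedback: List[str] = field(default_factory=list)

@dataclass
class StudentPortfolio:
    student_name: str
    entries: List[PortfolioEntry] = field(default_factory=list)

    def add_entry(self, entry: PortfolioEntry) -> None:
        """Append a new writing record to the portfolio."""
        self.entries.append(entry)
```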

After the formative assessment English writing classroom experiment ends, all students in the experimental class and the control class participate in a post-test again, that is, completing a reading and continuation writing composition of the same type as the pre-test. The purpose of the post-test is to detect students’ English writing ability level after the experiment and the improvement effect of formative assessment on their writing ability. After the post-test, the iWrite scoring system is also used to score students’ compositions, and the score of each student is recorded.

Finally, the researcher analyzes the collected data. By comparing the changes in writing scores of students in the experimental class and the control class between the pre-test and post-test, the researcher can assess the impact of formative assessment on senior high school students’ English writing ability. Since each student has experienced both the pre-test and post-test, individual differences can be more accurately controlled, making the experimental results more convincing. At the same time, considering that the practice effect may have a certain impact on students’ post-test scores [19], teachers do not announce the specific four-dimensional scores, comments, and improvement suggestions from iWrite to students after the pre-test, ensuring a more accurate assessment of the effect of formative assessment.

4. Research Results and Discussion

4.1. The Experimental Study

The pre-test and post-test each yielded 88 valid writing test responses. In the questionnaire survey conducted in the experimental class, 45 valid questionnaires were obtained. The data from the writing test scores and questionnaire results were processed using SPSS 27.0 statistical analysis software. If the data showed a normal distribution, an independent samples t-test was used to compare the significance of differences in English writing scores between the experimental class and the control class before the experiment. After the experiment, a paired samples t-test was used to compare the significance of differences in pre-test and post-test scores within the experimental class and the control class, and an independent samples t-test was used to compare the significance of differences in post-test scores between the experimental class and the control class. If the data did not show a normal distribution, non-parametric test methods were employed. The questionnaire results were analyzed using frequency statistics with Excel tables.
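The decision logic of this analysis plan can be sketched briefly in Python with SciPy; the study itself used SPSS 27.0, so the code below is only an illustrative equivalent, and the score arrays are hypothetical placeholders rather than the collected data.

```python
# Sketch of the analysis plan described above, using SciPy instead of SPSS.
# The score lists are hypothetical placeholders, not the study's data.
from scipy import stats

exp_pre  = [72, 75, 78, 74, 76, 73, 77, 79, 71, 75]   # experimental class, pre-test
exp_post = [75, 78, 80, 76, 79, 75, 80, 81, 74, 78]   # experimental class, post-test
ctl_pre  = [70, 73, 74, 72, 75, 71, 74, 76, 70, 73]   # control class, pre-test

def is_normal(scores, alpha=0.05):
    """Shapiro-Wilk normality check (suitable for small samples)."""
    stat, p = stats.shapiro(scores)
    return p > alpha

# Between-group comparison of pre-test scores: parametric vs. non-parametric.
if is_normal(exp_pre) and is_normal(ctl_pre):
    result = stats.ttest_ind(exp_pre, ctl_pre)       # independent samples t-test
else:
    result = stats.mannwhitneyu(exp_pre, ctl_pre)    # non-parametric fallback
print("pre-test comparison:", result)

# Within-group comparison of pre- vs. post-test scores in the experimental class.
if is_normal(exp_pre) and is_normal(exp_post):
    result = stats.ttest_rel(exp_pre, exp_post)      # paired samples t-test
else:
    result = stats.wilcoxon(exp_pre, exp_post)       # non-parametric fallback
print("experimental class pre vs. post:", result)
```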

First, the pre-test and post-test scores of the two classes were imported into SPSS for normality testing. Since the sample sizes of Class 10 (control class, n = 43) and Class 11 (experimental class, n = 45) were both well below 5000, the Shapiro-Wilk statistic applies; its significance values were all greater than 0.05, so the pre-test and post-test scores of both classes can be considered normally distributed (see Table 1).

Table 1. Normality test of scores.

|                 | Class | Kolmogorov-Smirnov (a): Statistic | df | Sig.   | Shapiro-Wilk: Statistic | df | Sig.  |
|-----------------|-------|-----------------------------------|----|--------|-------------------------|----|-------|
| Pre-test Level  | 10    | 0.145                             | 43 | 0.024  | 0.971                   | 43 | 0.333 |
| Pre-test Level  | 11    | 0.104                             | 45 | 0.200* | 0.963                   | 45 | 0.163 |
| Post-test Level | 10    | 0.104                             | 43 | 0.200* | 0.979                   | 43 | 0.596 |
| Post-test Level | 11    | 0.114                             | 45 | 0.175  | 0.965                   | 45 | 0.190 |

*. This is the lower bound of the true significance. a. Lilliefors significance correction.

Thus, an independent samples t-test was conducted to compare the pre-test writing scores of the experimental class and the control class, as shown in Table 2. Levene's test yielded a significance of 0.162, greater than 0.05, indicating homogeneous variances; with equal variances assumed, the two-tailed significance of the t-test was 0.063, greater than 0.05, suggesting no significant difference in pre-test scores between the two classes. This indicates that the students in the experimental class and the control class had comparable English writing levels, allowing the comparative English teaching experiment to proceed.

Table 2. Independent samples T-test results of pre-test scores between experimental class and control class.

| Pre-test Level              | Levene's Test: F | Levene's Test: Sig. | t     | df     | Sig. (2-tailed) | Mean Difference | Std. Error Difference |
|-----------------------------|------------------|---------------------|-------|--------|-----------------|-----------------|-----------------------|
| Equal variances assumed     | 1.990            | 0.162               | 1.881 | 86     | 0.063           | 2.182           | 1.160                 |
| Equal variances not assumed |                  |                     | 1.875 | 83.161 | 0.064           | 2.182           | 1.163                 |
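As an illustration of how Levene's test determines which row of Table 2 to read, the following SciPy sketch mirrors the procedure with hypothetical score arrays; the study's own computations were performed in SPSS.

```python
# Sketch of the homogeneity-of-variance check behind Table 2, using SciPy
# (the study used SPSS); the arrays are hypothetical placeholders.
from scipy import stats

exp_pre = [75, 78, 72, 76, 74, 79, 73, 77, 71, 80]   # experimental class pre-test
ctl_pre = [70, 74, 72, 68, 75, 71, 73, 69, 76, 72]   # control class pre-test

levene_stat, levene_p = stats.levene(exp_pre, ctl_pre)
equal_var = levene_p > 0.05        # variances treated as homogeneous if p > 0.05

t_stat, p_value = stats.ttest_ind(exp_pre, ctl_pre, equal_var=equal_var)
print(f"Levene p = {levene_p:.3f}, equal_var = {equal_var}")
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
# In the study, the pre-test comparison gave p = 0.063 > 0.05, so the two
# classes were treated as comparable before the intervention.
```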

After the teaching experiment, a post-test was administered to both classes to examine whether students' English writing abilities had improved, and paired samples t-tests were used to analyze the differences between each class's pre-test and post-test scores. Before this, correlation coefficients and their significance were checked: the coefficients were 0.702 for the experimental class and 0.906 for the control class, both relatively high, with significance levels below 0.05, indicating that the data were suitable for paired samples t-tests (see Tables 3 and 4).

Table 3. Paired Samples correlation data of pre-test and post-test scores in the experimental class.

|                                          | N  | Correlation | Sig.  |
|------------------------------------------|----|-------------|-------|
| Pair 1: Pre-test Level & Post-test Level | 45 | 0.702       | 0.000 |

Table 4. Paired samples correlation data of pre-test and post-test scores in the control class.

|                                          | N  | Correlation | Sig.  |
|------------------------------------------|----|-------------|-------|
| Pair 1: Pre-test Level & Post-test Level | 43 | 0.906       | 0.000 |

The final results showed that the p values of the paired samples t-tests were 0.001 for the experimental class and less than 0.001 for the control class. The pre-test and post-test writing scores of both classes therefore met the criterion of p < 0.05, indicating significant differences. This suggests that both the experimental class and the control class significantly improved their English writing scores after the one-month period compared with before the experiment. The specific data are detailed in Tables 5-8.

Table 5. Pre-test and post-test scores of the experimental class.

|                 | Mean  | N  | Std. Deviation | Std. Error Mean |
|-----------------|-------|----|----------------|-----------------|
| Pre-test Level  | 75.09 | 45 | 5.053          | 0.753           |
| Post-test Level | 77.36 | 45 | 4.386          | 0.654           |

Table 6. Paired samples test data of pre-test and post-test scores in the experimental class.

|                                          | Mean   | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t      | df | Sig. (2-tailed) |
|------------------------------------------|--------|----------------|-----------------|--------------|--------------|--------|----|-----------------|
| Pair 1: Pre-test Level − Post-test Level | −2.267 | 4.303          | 0.641           | −3.560       | −0.974       | −3.533 | 44 | 0.001           |

Table 7. Pre-test and post-test scores of the control class.

|                 | Mean  | N  | Std. Deviation | Std. Error Mean |
|-----------------|-------|----|----------------|-----------------|
| Pre-test Level  | 72.91 | 43 | 5.814          | 0.887           |
| Post-test Level | 74.40 | 43 | 5.178          | 0.790           |

Table 8. Paired samples test data of pre-test and post-test scores in the control class.

|                                          | Mean   | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t      | df | Sig. (2-tailed) |
|------------------------------------------|--------|----------------|-----------------|--------------|--------------|--------|----|-----------------|
| Pair 1: Pre-test Level − Post-test Level | −1.488 | 2.463          | 0.376           | −2.246       | −0.730       | −3.963 | 42 | 0.000           |
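The within-class analysis summarized in Tables 3-8 (a correlation check followed by a paired samples t-test) can likewise be sketched in SciPy; the arrays below are hypothetical stand-ins for one class's pre-test and post-test scores, not the study's data.

```python
# Sketch of the within-class comparison: Pearson correlation between pre- and
# post-test scores, then a paired samples t-test. Uses SciPy rather than SPSS.
from scipy import stats

pre  = [74, 71, 77, 73, 75, 70, 78, 72, 76, 74]   # hypothetical pre-test scores
post = [76, 74, 79, 75, 78, 72, 80, 75, 77, 76]   # hypothetical post-test scores

r, r_p = stats.pearsonr(pre, post)                # correlation, as in Tables 3 and 4
t_stat, p_value = stats.ttest_rel(pre, post)      # paired samples t-test

print(f"correlation r = {r:.3f} (p = {r_p:.3f})")
print(f"paired t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
# A p value below 0.05 indicates a significant pre- to post-test change,
# as reported for both the experimental and the control class.
```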

An independent samples t-test was then conducted on the post-test writing scores of the experimental class and the control class to test the experimental results. Levene's test yielded a significance of 0.405, greater than 0.05, indicating homogeneous variances; with equal variances assumed, the two-tailed significance of the t-test was 0.005, less than 0.05, indicating a significant difference in post-test scores between the two classes. The post-test level of the control class, which used only traditional summative evaluation, was lower than that of the experimental class, which combined formative with summative evaluation. Therefore, it can be concluded that formative assessment is conducive to improving students' writing ability. The specific research data are detailed in Table 9.

Table 9. Comparison of the independent samples t-test results of post-test scores between the experimental class and control class.

| Post-test Level             | Levene's Test: F | Levene's Test: Sig. | t     | df     | Sig. (2-tailed) | Mean Difference | Std. Error Difference |
|-----------------------------|------------------|---------------------|-------|--------|-----------------|-----------------|-----------------------|
| Equal variances assumed     | 0.700            | 0.405               | 2.898 | 86     | 0.005           | 2.960           | 1.021                 |
| Equal variances not assumed |                  |                     | 2.887 | 82.368 | 0.005           | 2.960           | 1.025                 |

4.2. Questionnaire Survey

The questionnaire survey yielded the following results. In the experimental class, approximately 66% - 80% of students were willing to participate in the formative assessment activities organized by the teacher, such as establishing their own learning growth portfolios, writing learning reflections, talking with the teacher about gains from the writing process, and following the post-writing sequence of student self-evaluation, then peer evaluation, then teacher evaluation. Additionally, 72% - 84% of students believed that such activities were conducive to improving writing ability, among whom about 18% (8 students) had strong confidence in improving their own writing ability. These data indicate that, overall, the 11th-grade students in this general senior high school supported and affirmed the teacher's adoption of formative assessment in writing instruction: they not only liked this new teaching model but also actively participated in it, strongly recognizing formative assessment as a way to enhance writing ability.
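A minimal sketch of the frequency statistics applied to the six-point questionnaire items is given below; the item labels and response lists are hypothetical placeholders used only to show how the share of broadly positive answers could be tallied.

```python
# Sketch of frequency statistics for six-point Likert items; the responses
# below are hypothetical placeholders, not the collected questionnaire data.
from collections import Counter

SCALE = ["strongly agree", "agree", "somewhat agree",
         "somewhat disagree", "disagree", "strongly disagree"]
POSITIVE = {"strongly agree", "agree", "somewhat agree"}

responses = {
    "willing to keep a learning growth portfolio":
        ["agree", "strongly agree", "somewhat agree", "disagree", "agree"],
    "activities help improve writing ability":
        ["strongly agree", "agree", "agree", "somewhat agree", "somewhat disagree"],
}

for item, answers in responses.items():
    counts = Counter(answers)
    positive_share = sum(counts[o] for o in POSITIVE) / len(answers) * 100
    print(f"{item}: {positive_share:.0f}% positive, breakdown = {dict(counts)}")
```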

One reason behind this result is that, for students in the experimental class, the teacher paid constant attention to their learning changes during instruction and assessment, provided appropriate encouragement, and offered timely guidance and affirmation, thereby stimulating students' interest and enthusiasm for participation, fully mobilizing their learning initiative, and strengthening their confidence to overcome difficulties.

The reason why students were willing to participate may, on the one hand, be due to the strong interactivity and participation of formative assessment activities, which enabled students to play a dominant role in the learning process, no longer as passive recipients of knowledge but as active participants in all aspects of learning, thereby enhancing their learning interest and motivation. For example, in the “peer evaluation” link, students could exchange writing ideas and share writing experiences, and this interaction among peers could stimulate their learning enthusiasm, allowing them to gain more inspiration and harvests in the process of mutual learning. On the other hand, formative assessment activities focus on process evaluation, paying attention to students’ performance and progress in the learning process rather than just the final result, which enables students to pay more attention to accumulation and effort in the learning process, thus more actively participating in learning activities to continuously improve themselves. For instance, the “learning growth portfolio” was a learning method that most students had not known or used before. With their own exclusive portfolio, students might be more willing to record their growth experiences, fully satisfying their motivational needs, and thus be willing to invest more time in learning.

Among the students participating in formative assessment activities, 8 students had strong confidence in improving their writing ability. This indicates that formative assessment activities can not only help students improve their writing ability but also enhance their confidence in their own abilities. The enhancement of confidence may stem from several aspects. First, formative assessment activities provide students with opportunities to demonstrate their writing abilities. Through continuous practice and improvement, students can see their progress and growth in writing, thereby enhancing their confidence in their own abilities. Second, positive feedback from teachers and peers is also an important factor in enhancing students' confidence. During the formative assessment process, recognition and encouragement from teachers and peers can make students feel that their efforts have been affirmed, so they engage in learning more confidently. Because of the teacher's record portfolio, teachers paid more attention to students' learning processes than before, making students feel more cared for and recognized. Finally, formative assessment activities focus on process evaluation, paying attention to students' progress and efforts. This evaluation method allows students to pay more attention to their own growth and progress, enhancing their confidence in their own abilities.

In summary, the questionnaire survey results show that formative assessment activities have obvious advantages in English writing teaching. They can stimulate students’ learning interest and enthusiasm, make students more actively participate in learning, and improve learning autonomy and interactivity. Moreover, formative assessment activities can provide students with timely and specific feedback information, helping them promptly understand their learning situation and make targeted improvements. In addition, they can cultivate students’ self-monitoring and self-regulation abilities, enabling students to better manage their own learning processes, thereby continuously improving writing ability in long-term learning. However, it is undeniable that some issues need attention when implementing formative assessment activities. For example, about 4% (2 students) of students expressed complete disapproval of such teaching activities, indicating that some students may not fully understand or accept formative assessment activities, and teachers need to provide more guidance and explanations.

4.3. Practical Implications

(1) Implications for Teachers and Teaching Activities

First, formative assessment should be integrated with summative assessment in teaching. The experimental data intuitively reveal the significant advantages of formative assessment in improving students’ writing ability. Compared with traditional summative assessment methods, formative assessment can more effectively promote students’ progress. Therefore, teachers should organically combine formative assessment with summative assessment in writing teaching activities. Summative assessment can be used to summarize students’ learning achievements, while formative assessment runs through the entire writing teaching process, focusing on students’ daily learning performance and progress. For example, in a semester of writing teaching, teachers can arrange multiple formative assessment activities, such as regular composition feedback, immediate classroom comments, and self/peer evaluations during the learning process, while conducting a summative writing test at the end of the semester. Through the combination of the two, students’ learning situations can be comprehensively and accurately evaluated.

Second, design diversified evaluation activities. The questionnaire survey results show that students have a positive attitude toward participating in formative assessment activities and believe these activities help improve writing ability. Teachers should design rich and diverse evaluation activities based on students’ actual conditions and needs to stimulate their learning interest and initiative. For example, “student self-evaluation” activities can be carried out, guiding students to reflect on and evaluate their compositions from aspects such as content, structure, and language after completion; “peer evaluation” activities can be organized, allowing students to mutually evaluate compositions and exchange writing experiences in groups; “teacher evaluation” activities can be conducted, where teachers not only give scores but also provide specific modification suggestions and encouraging comments when grading compositions to help students clarify their efforts’ direction. Furthermore, students can be encouraged to establish “learning growth portfolios” to record their writing works, learning reflections, and progress, enabling students to gain a sense of achievement and motivation through organization and review.

Third, cultivate students’ self-monitoring and self-regulation abilities. Formative assessment emphasizes students’ dominant position, requiring them to have self-monitoring and self-regulation abilities [20]. Teachers should focus on cultivating these abilities in teaching activities, enabling students to learn to manage the learning process independently. This can be achieved by setting clear learning goals and tasks, guiding students to continuously reflect on their learning methods and strategies, assess learning effects, and adjust learning plans in a timely manner. For example, in writing learning, teachers can guide students to develop personalized writing improvement plans, regularly check the implementation of the plans, and discuss learning difficulties and gains with students to help them adjust learning strategies and improve learning efficiency.

Fourth, create a cooperative and communicative learning environment. Formative assessment activities often require cooperation and communication among students. Teachers should create an open, inclusive, and cooperative learning environment, encouraging students to actively communicate with peers and teachers during the learning process. For example, in “peer evaluation” activities, teachers can arrange students to work in groups, create a relaxed and friendly atmosphere, enable students to dare to express their opinions, carefully listen to others’ suggestions, and inspire each other for common progress in communication. Meanwhile, teachers should also actively participate in students’ communication, provide appropriate guidance and assistance, and ensure the effective implementation of communication activities.

Fifth, strengthen teachers’ professional development and training. To better implement formative assessment, teachers need to possess corresponding professional knowledge and skills. Schools and educational administrative departments should strengthen teachers’ professional development and training to help them improve evaluation capabilities and master diversified evaluation methods and techniques. Special training, seminars, and other activities can be organized, inviting experts to explain the concepts, strategies, and cases of formative assessment, and sharing successful teaching experiences; teachers can also be encouraged to communicate and cooperate with each other, jointly discuss problems and solutions in teaching, promote teachers’ professional growth, and enhance teaching standards.

(2) Implications for Students

First, actively participate in formative assessment activities. Students should recognize the importance of formative assessment activities in improving writing ability and actively engage in various evaluation activities. In “student self-evaluation” activities, students should earnestly reflect on their compositions, identify problems, and make modifications; in “peer evaluation” activities, they should modestly listen to peers’ opinions and suggestions, learn others’ strengths, and make up for their own deficiencies; in “learning reflection” activities, students should deeply think about their learning processes, summarize experiences and lessons, and clarify future directions for efforts.

Second, cultivate self-monitoring and self-regulation abilities. Students should learn to self-monitor and self-regulate, actively manage their own learning processes. During the learning process, students should regularly reflect on their learning methods and strategies, assess their learning effects, and adjust learning plans and goals in a timely manner [21]. For example, in writing learning, students can formulate corresponding learning plans based on their writing levels and goals, such as practicing writing daily, reading model essays, and accumulating writing materials, and continuously reflect on and adjust these plans during the learning process to improve writing ability.

Third, take the initiative to communicate with teachers and peers. Students should proactively communicate with teachers and peers, sharing learning experiences and confusions. In communication with teachers, students can consult writing skills, obtain feedback, and clarify their own shortcomings; in communication with peers, students can learn from each other and jointly improve writing levels. For example, in “peer evaluation” activities, students can discuss writing ideas with peers, share writing skills, mutually point out the strengths and weaknesses in compositions, and make progress together.

5. Conclusions

This study has explored in depth the effect of applying formative assessment in senior high school English writing teaching and, through empirical research, revealed its positive impact on students’ English writing ability. The study adopted a quasi-experimental design, selecting two 11th-grade classes from a general senior high school in Z Province as research subjects: one class served as the experimental class and received one month of English writing instruction incorporating formative assessment, and the other class served as the control class and maintained the conventional writing teaching model. Data were collected through two reading and continuation writing tests (pre-test and post-test) and questionnaire surveys, and statistical analysis was conducted using SPSS software, leading to a series of conclusions with important theoretical and practical value.

The study found that formative assessment significantly improved students’ English writing scores. The writing scores of the experimental class increased from 75.09 to 77.36, while those of the control class only rose from 72.91 to 74.40. This indicates that formative assessment can effectively promote the development of students’ English writing ability, helping them continuously improve in the writing process. Additionally, students held a positive attitude toward formative assessment: approximately 66% - 80% of students were willing to participate in formative assessment activities, 72% - 84% believed these activities were conducive to improving writing ability, and about 18% of students had strong confidence in enhancing their own writing ability. This reflects that students recognize the role of formative assessment in improving writing ability and are willing to engage in it.

The effectiveness of formative assessment primarily benefits from its unique teaching advantages. First, it promotes students’ active learning. Formative assessment emphasizes process evaluation, focusing on students’ performance and progress in the learning process rather than just the final result. This evaluation method can stimulate students’ learning interest and enthusiasm, making them more actively participate in learning. For example, in the “peer evaluation” link, students can exchange writing ideas and share writing experiences, and this peer interaction can spark their learning enthusiasm, allowing them to gain more inspiration through mutual learning. Students are no longer passive knowledge recipients but active participants in all aspects of learning, improving learning autonomy and interactivity. Second, formative assessment provides students with timely and specific feedback information, helping them understand their learning situation, clarify learning goals, and adjust learning strategies. In writing learning, teachers can promptly point out the strengths and weaknesses in students’ compositions through formative assessment and provide specific modification suggestions, so that students can make targeted improvements based on this feedback. This timely feedback mechanism helps students continuously adjust and perfect their writing skills in practice, thereby promoting the improvement of writing ability. Furthermore, formative assessment cultivates students’ self-monitoring and self-regulation abilities. It emphasizes students’ dominant position, requiring them to have self-monitoring and self-regulation skills. During the formative assessment process, students need to reflect on and monitor their own learning processes, assess learning effects, and adjust learning plans and strategies in a timely manner. For example, in writing learning, students can develop corresponding learning plans according to their writing levels and goals, such as practicing writing daily, reading model essays, and accumulating writing materials, and continuously reflect on and adjust these plans during the learning process to improve writing ability. This self-monitoring and self-regulation process helps students better manage their learning processes, improving learning efficiency and quality.

In terms of teaching implications, teachers should organically combine formative assessment with summative assessment, using summative assessment to summarize students’ learning achievements while letting formative assessment run through the entire writing teaching process to focus on students’ daily learning performance and progress. Through their integration, students’ learning situations can be comprehensively and accurately evaluated to promote the all-round development of their writing ability. Teachers should also design diverse evaluation activities, such as “student self-evaluation,” “peer evaluation,” and “teacher evaluation,” to stimulate students’ learning interest and initiative. Meanwhile, teachers should focus on cultivating students’ self-monitoring and self-regulation abilities, enabling them to learn to manage the learning process independently, and create an open, inclusive, and cooperative learning environment to encourage students to actively communicate with peers and teachers. In addition, to better implement formative assessment, teachers need to possess corresponding professional knowledge and skills, so schools and educational administrative departments should strengthen teachers’ professional development and training. For students, they should recognize the importance of formative assessment activities in improving writing ability and actively participate in various evaluation activities. In “student self-evaluation” activities, students should earnestly reflect on their compositions, identify problems, and make modifications; in “peer evaluation” activities, they should modestly listen to peers’ opinions and suggestions, learn others’ strengths, and make up for their own deficiencies. Students should also learn to self-monitor and self-regulate, actively manage their learning processes, regularly reflect on learning methods and strategies, assess learning effects, and adjust learning plans and goals in a timely manner. At the same time, students should take the initiative to communicate with teachers and peers, sharing learning experiences and confusions, consulting writing skills and obtaining feedback from teachers to clarify their own shortcomings, and learning from each other with peers to jointly improve writing levels.

In conclusion, this study provides new ideas and methods for senior high school English writing teaching, proving the effectiveness of formative assessment in enhancing students’ English writing ability. By reasonably applying formative assessment, teachers can better promote the development of students’ writing ability, and students can achieve greater progress in writing learning.

Conflicts of Interest

The author declares no conflicts of interest.


References

[1] Liu, Q.L. and Wang, Y.L. (2024) Investigating the Effects of English Continuation Writing Tasks on Senior High School Students’ Creative Thinking. Foreign Language Education in China, 7, 54-61, 93.
[2] Sadler, D.R. (1989) Formative Assessment and the Design of Instructional Systems. Instructional Science, 18, 119-144.
https://doi.org/10.1007/bf00117714
[3] Scriven, M. (1991) Evaluation Thesaurus. Sage Publications.
[4] Bloom, B.S., et al. (1956) Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. David McKay Company Inc.
[5] Black, P. and Wiliam, D. (1998) Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice, 5, 7-74.
https://doi.org/10.1080/0969595980050102
[6] Shepard, L.A. (2005) Formative Assessment: Caveat Emptor. Educational Psychologist, 40, 75-86.
[7] Vygotsky, L.S. (1986) Thought and Language. MIT Press.
[8] Boston, C. (2002) The Concept of Formative Assessment. Practical Assessment, Research, and Evaluation, 8, 9.
[9] Abu-Zaid, A. (2013) Formative Assessments in Medical Education: A Medical Graduate’s Perspective. Perspectives on Medical Education, 2, 358-359.
https://doi.org/10.1007/s40037-013-0089-5
[10] Jafari, A. and Kaufman, C. (2006) Handbook of Research on ePortfolios. Idea Group.
[11] Bitner, M.J. and Brown, S.W. (2008) The Service Imperative. Business Horizons, 51, 39-46.
https://doi.org/10.1016/j.bushor.2007.09.003
[12] Cooper, T. and Love, T. (2007) e-Portfolios in E-Learning. Informing Science Press.
[13] Bryant, L.H. and Chittum, J.R. (2013) ePortfolio Effectiveness: A Search for Empirical Support. International Journal of ePortfolio, 3, 189-198.
[14] Oskam, I.F. (2009) T-Shaped Engineers for Interdisciplinary Innovation: An Attractive Perspective for Young People as Well as a Must for Innovative Organizations. SEFI (European Society of Engineering Education) Annual Conference, Rotterdam, 1-4 July 2009, 2.
https://www.researchgate.net/publication/216353140_Tshaped_engineers_for_interdisciplinary_innovation_an_attractive_perspective_for_young_people_as_well_as_a_must_for_innovative_organisations
[15] Klenowski, V., Askew, S. and Carnell, E. (2006) Portfolios for Learning, Assessment and Professional Development in Higher Education. Assessment & Evaluation in Higher Education, 31, 267-286.
https://doi.org/10.1080/02602930500352816
[16] Creswell, J.W. (2018) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications.
[17] Ministry of Education of the People’s Republic of China (2020) General Senior High School Curriculum Standards: English. People’s Education Press.
[18] Li, Y.L. and Tian, X.C. (2018) An Empirical Research into the Reliability of iWrite 2.0. Modern Educational Technology, 28, 75-80.
[19] Grant, H. and Yang, H.Z. (2001) A Guide to Language Testing: Development, Evaluation, and Research. Foreign Language Teaching and Research Press.
[20] Liu, J.D. and Yu, Q. (2024) Formative Assessment and Self-Regulated Learning in Foreign Language Education. Modern Foreign Languages (Bimonthly), 47, 702-711.
[21] Zhou, L. and Zhou, W.Y. (2020) Formative Assessment: Promoting Students’ Self-Regulated Abilities. Journal of Shanghai Educational Research, No. 2, 53-57.

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.