Foreign Language Web-Based Learning by Means of Audiovisual Interactive Activities

Abstract

Online learning has been on an upward trend for many years and is becoming more prevalent every day, consistently presenting the less privileged parts of our society with an equal opportunity for education. Unfortunately, though, it seldom takes advantage of the new technologies and capabilities offered by the modern World Wide Web. In this article, we present an interactive online platform that provides learning activities for students of English as a foreign language. The platform focuses on using audiovisual multimedia content and a user experience (UX) centered approach to provide learners with an enhanced learning experience that aims at improving their knowledge level while at the same time increasing their engagement and motivation to participate in learning. To achieve this, the platform uses advanced techniques, such as interactive vocabulary and pronunciation assistance, mini-games, embedded media, voice recording, and more. In addition, the platform provides educators with analytics about user engagement and performance. In this study, more than 100 young students participated in a preliminary use of the aforementioned platform and provided feedback concerning their experience. Both the platform’s metrics and the user-provided feedback indicated increased engagement and a preference among the participants for interactive audiovisual multimedia-based learning activities.


1. Introduction

Education is a vital necessity for every person. Starting from a young age and continuing throughout a person’s life, the need to understand the world around us, better satisfy our curiosity and acquire knowledge is constantly driving us forward. Unfortunately, many people’s ability to participate in traditional education is limited by factors beyond their control. Major hindrances can come in the form of a lack of local resources in terms of educational facilities, such as schools or universities; limited capability for transportation to and from such facilities; accessibility issues; disabilities, and more. In addition to these issues, other situational problems may arise, such as the COVID-19 pandemic, which dictated social distancing, severe weather, or geopolitical events that may cause similar interruptions to traditional educational programs. Distance learning has evolved as a means to overcome these difficulties, although it does present challenges of its own [1].

The most prominent method of distance learning in the modern world is courses and other educational activities held online through the use of the Internet. Since its invention in the late 20th century and its global diffusion, the Internet has become a major communication network of modern life. Since distance learning is essentially an endeavor based on communication, it has benefitted greatly from these technological advancements, overcoming many of the hindrances mentioned earlier [2]. Additionally, the invention of the World Wide Web has enabled asynchronous communication via information shared through websites. Educational platforms based on the Web can provide students with the capability to engage in educational activities at their own time and pace, thus providing a new way of achieving learning goals, one that is very distinct from traditional educational methods.

Despite these new opportunities made possible by online and Web-based learning, there is still room for development in this field. Most online or web-based educational platforms seem to take the approach of transitioning traditional methods of teaching from the real world to the Internet by creating equivalents. That leaves many advantages of the Web untapped. One of these advantages is the ability to use multiple media forms, including audio and video, not only to convey information but also to enrich interactivity. Audiovisual means have become a prevalent aspect of the Web experience, with their use ranging from commercial and advertisement purposes to entertainment, art projects, and more. Adding audio and visual enhancement to the learning experience can both help achieve better results in acquiring knowledge and increase user engagement, providing students with motivation to participate and learn [3].

In this article, we focus on the use of an online Web platform designed for young Greek students studying English as a foreign language. The platform, which was developed exclusively for use in this project, attempts to provide students with an enhanced experience through various means of embedding audiovisual elements, as well as improved interactivity, in a series of web activities. The activities implemented for preliminary testing and use of the platform were labeled “English through Film” and used the 1996 film “Independence Day” [4] as their focal point to capture the students’ interest. Features of these activities include vocabulary assistance, audio links providing pronunciation assistance, mini-games, and embedded media, all delivered through a responsive interface suitable for all kinds of devices and with the help of an easy-to-use, intuitive control scheme. An important element of the platform is an activity inviting students to record their own voice-over of a scene from the movie and then allowing them to listen to the combined media of the voiceless scene (which, however, kept all the background sounds) with their own voice in the main character’s role. This was meant to engage students in a creative way and motivate them to repeat the process until they were happy with the results, encouraging them to improve through creativity rather than the traditional carrot-and-stick approach.

The research presented in this article aims to study the effects of implementing audio- and visual-based learning activities, both in terms of student engagement and in terms of outcome-based educational results. Taking into account both the metrics concerning user behavior collected by the platform and the feedback given by the participants themselves, the article seeks to draw conclusions about these methods. Additionally, the study examines various means of improving student interaction and knowledge acquisition during web-based educational activities, specifically those aimed at teaching a foreign language to younger students.

2. The Intricacies of Online Learning

Distance learning, often considered a modern development, has its roots as early as the 18th century, when it first appeared in the form of early correspondence courses [5], although it didn’t start displaying its more contemporary characteristics until the late 19th century, when many universities started offering equivalent courses [2] [5]. One defining quality that emerged from these first instances of distance education was the concept of two-way teacher-student communication, which incorporated student feedback and didn’t just present itself as a one-way written lecture. Even in those early days, the importance of interactivity for the student was becoming apparent. Efforts continued with TV and radio distance education methods, which started to incorporate audio and visual stimuli to offer an enhanced experience to distance learners [6].

With the invention of the Internet and communication technologies becoming more widespread, the bulk of open universities and other institutions specializing in distance education started shifting their focus towards the new medium. The dawn of the new millennium saw a rapid increase in offered online courses and other web-based educational programs [7]. As the world merges through the use of communication technology into a global community, the massive educational opportunities offered by online curricula are not being overlooked by the public. However, the World Wide Web does not only present an alternative to traditional learning; it also places a variety of new tools at the disposal of educators to further improve the learning experience.

The improvements in communication and Internet infrastructure across the globe, coupled with the innovations of digitization and digital compression, provided the World Wide Web with the potential to incorporate much more than the originally intended text content, in the form of audio and visual media. These new capabilities defined the information age by accelerating the digital revolution [8] and changing forever the way we understand information distribution through the global network. Nowadays, integrated digital video and audio on the Web are commonplace, with uses covering every aspect of web-based enterprise. The audiovisual aspect of the modern World Wide Web can be a useful weapon in the arsenal of online education.

Visual and audio stimulation in teaching and learning is not a concept that emerged with the information age. The use of visual aids and toys to encourage youth education was advocated as early as the 18th century by J.H. Pestalozzi (1746-1827) and Jean-Jacques Rousseau (1712-1778) [9]. Some of the major benefits of visual and audio stimulation include increasing the students’ attention span and adding interactivity to a traditional lecture or lesson. In the electronic age, these teaching elements became even more efficacious through their use as distance learning tools in television education programs [6]. It is only natural that in an extremely media-oriented medium such as the World Wide Web, the use of such methods would see even more prevalence.

Multiple challenges of learning from a distance through the World Wide Web can be overcome by the use of visual and audio aids. The quality of the audiovisual material can become a decisive factor in the learning outcome because of its influence on the learners’ emotional state and its role in shaping the user experience [10]. Especially in the case of this study, which examines an instance of teaching English as a foreign language, problems such as limited exposure to the target language, low motivation, and pronunciation issues are mitigated [11]. Visual stimuli can cross the language gap and offer information without the use of the written word, while audio stimuli can be used to introduce the student to the native use and sound of the spoken word. Both audio and visual media, and their combination, can help increase engagement and make an online course more interesting. This may be true not only for younger learners but also for adults, and even for the more demanding requirements of higher education [12]. Especially in technology-related fields of study, or fields relating to sound and imagery themselves, the importance of incorporating audio and visual tools in the learning process is paramount [13].

Although using multimedia for educational purposes, both from a distance and in physical classrooms, can be beneficial in itself, it is further enhanced by an approach based on creativity as a tool for education [14]. In such an approach, the student isn’t just expected to consume the media in order to acquire knowledge but is expected to participate in a creative process using such media. This has a beneficial effect on the students’ motivation and provides an incentive for them to apply more effort, since the final result is not only a lesson learned but also an entirely or partially creative work. The combination of distance learning through the World Wide Web, the Web’s intrinsic affinity for information diffusion, the use of audio and visual media stimuli, an interactive approach, and the use of creativity as a learning tool can create a very beneficial learning environment for most subjects, but even more so for teaching English as a foreign language.

It has been made clear that online learning faces many challenges, but not all of them concern the nature of the educational material. Reaching the greatest possible audience is also paramount. Availability and accessibility are necessary attributes of an online educational activity. In the modern technological environment, availability requires support for the vast number of devices and screen resolutions in use. Specifically in education, the usage of mobile devices appears to be common among students [15]. Especially in resource-constrained settings, the use of smartphones is much cheaper than the use of desktop or laptop PCs. In terms of accessibility, special care must be taken to make sure a learning platform does not exclude users with disabilities, by regulating the use of graphics, providing text equivalents, ensuring appropriate use of colors, and more [16]. This further signifies the importance of interface design, both in terms of presentation and in terms of usage.

When designing an interface for an educational platform one must take into consideration not only the basic tenets of interface design but also how learning principles affect the implementation of these tenets [17]. In order to accomplish this, the interface must be able to provide the various types of media that are required by the educational activity in a distinct manner while, at the same time, maintaining visual clarity. Moreover, the technical aspects of the interaction between users and the platform must be considered. Ease of use is very important and time-consuming interactions should be avoided to ensure a high level of user engagement [18]. Students often get frustrated by difficult navigation, and wasting time figuring out the interface might be detrimental to the learning outcome.

When trying to better evaluate both the content of a learning activity and the interaction between students and that content, it is useful to have as much information as possible. There are multiple methods of collecting data concerning UX and usability in general [19]. These may include interviews with users concerning their satisfaction, as well as quantitative metrics and statistics collected by the platform. Combining the different forms of feedback and reaching a conclusion can oftentimes be a rigorous process [19].

3. Implementation of the Online Platform

For the purposes of this research study, an online platform was developed by the research team and used by young Greek students of English as a foreign language. The platform implemented four basic functions:

1) Registration of the student providing personal demographic information and their interest in the English language.

2) An evaluation test composed of multiple-choice questions meant to evaluate the participants’ level of English language proficiency.

3) A set of learning activities employing audiovisual media and an intuitive interface including a voice-over activity.

4) A similar series of learning activities meant to measure the learning outcome of the previous set.

Despite its wide functionality, the platform was designed and implemented as a single system. The system consisted of a publicly accessible front-end Web application and a separate administration area. It was hosted on a dedicated server running a classic LAMP stack (CentOS Linux, Apache, MariaDB, and PHP 7.2). While database connectivity and the basic functionality of the platform were handled using PHP, extensive use of client-side technologies based on JavaScript and the jQuery and jQuery UI libraries was made in order to create a more interactive user experience. All communication between the user’s browser and the platform’s server was handled through the secure HTTPS protocol using TLS encryption to protect the participants’ personal information. The front-end Web interface was designed, using the Bootstrap library, to responsively alter its appearance based on the device used to view the content. A graphic representing the platform’s structure is presented in Figure 1.

Figure 1. Platform representation.

3.1. Student Registration Page

The initial registration process was available to any visitor of the platform’s website. During registration, the user selected a username and password and recorded some other personal information. For the purposes of the preliminary testing presented in this study, the registration process did not give users immediate access to the platform but instead placed them in an approval queue, where they waited for an administrator to grant them access. Everyone who registered received an automated email from the platform with the information they had provided and a second automated email as soon as they were approved by the administrators. The emails were sent using the SMTP protocol over a secure port in order to maintain privacy, even though the only sensitive personal information involved was the users’ email addresses. The information collected by the registration form is presented in Table 1.

Table 1. Registration information.

Adding information regarding knowledge of other languages was accomplished by a jQuery function that dynamically altered the registration form, adding the relevant fields. The students could select a foreign language from the most commonly studied in the region, as well as add their own option through a free-text input field (the registration form included the option of “other” language and a space for free-text input). An instance of this part of the registration form is presented in Figure 2, entitled Other foreign languages you speak (Άλλες ξένες γλώσσες που γνωρίζεις), with a list of languages (Γλώσσα) and levels (Επίπεδο). There were four languages students could select: French (Γαλλικά), German (Γερμανικά), Italian (Ιταλικά), and Spanish (Ισπανικά), and three language ability levels: A little (Λίγο), Well (Καλά), Very well (Πολύ καλά).
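To illustrate how such dynamic form alteration can work, the following jQuery sketch appends a language drop-down, a level drop-down, and a free-text field revealed by the “other” option to a container element. The element IDs, field names, and handler structure are our own illustrative assumptions, not the platform’s actual markup or code.

```javascript
// Minimal sketch: dynamically add "other language" rows to the form.
// All IDs and field names below are hypothetical placeholders.
var languageRow = 0;

$("#add-language").on("click", function () {
  languageRow++;
  var row = $("<div>", { "class": "language-row" });

  // Language drop-down, including an "other" option.
  var language = $("<select>", { name: "language_" + languageRow })
    .append('<option value="french">Γαλλικά</option>')
    .append('<option value="german">Γερμανικά</option>')
    .append('<option value="italian">Ιταλικά</option>')
    .append('<option value="spanish">Ισπανικά</option>')
    .append('<option value="other">Άλλη</option>');

  // Proficiency drop-down: A little / Well / Very well.
  var level = $("<select>", { name: "level_" + languageRow })
    .append('<option value="1">Λίγο</option>')
    .append('<option value="2">Καλά</option>')
    .append('<option value="3">Πολύ καλά</option>');

  // Free-text input shown only when "other" is selected.
  var other = $("<input>", { type: "text", name: "language_other_" + languageRow }).hide();

  language.on("change", function () {
    $(this).val() === "other" ? other.show() : other.hide();
  });

  row.append(language, level, other);
  $("#languages").append(row);
});
```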

We need to stress that the system did not require participants to record information regarding their identity. This reassured the students who took part in the experimental use of the platform that their interaction with it, including their answers to the activities and to the English knowledge evaluation test, was not meant to be a test. After filling in the initial registration form, users did not gain immediate access to the platform but needed the approval of the administrator. The approval mechanism was available through the administration area, alongside the users’ provided information.

Figure 2. Additional languages block of the registration form.

3.2. Evaluation Test

After users were approved by the administrator, they could log into the platform using their newly created username and password. Immediately following their first login, users were presented with a placement test aimed at classifying them into groups according to their current proficiency in the English language. This classification affected several aspects of the platform’s activities, as seen in Table 2.

Table 2. Proficiency-dependent features.

The placement test included four sets of 10 multiple-choice questions each, randomly selected from a pool of questions classified by their difficulty. The pool of questions was provided by educators through the administration area. Questions of all levels of difficulty could be added to the pool, thus enriching the evaluation process. At the time of its preliminary testing, the platform contained over 125 questions.

If a student answered at least 7 out of 10 questions correctly, the next set, which included questions of a higher difficulty level, was presented to them. If not, they were placed in the group of their current level of knowledge and the evaluation process stopped. Students were given 30 seconds to answer each question. The user interface (Figure 3) was designed to be simple and clean, without unnecessary clutter, so that the participants could easily focus on the task of answering the questions without being distracted or having to navigate an intricate UI.

Figure 3. Placement test UI screenshot.
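The adaptive progression described above can be summarized in code. In the actual platform this logic was handled server-side in PHP; the JavaScript below is a minimal, illustrative approximation in which the question pool shape, the askSet() callback, and all identifiers are hypothetical.

```javascript
// Sketch of the adaptive placement logic: four difficulty levels, ten random
// questions per level, advance only when at least 7 of 10 are correct.
const LEVELS = ["Beginner", "Lower Intermediate", "Intermediate",
                "Upper Intermediate/Advanced"];
const PASS_MARK = 7;
const SET_SIZE = 10;

// Draw a random set of questions of the given difficulty from the pool.
// Each question is assumed to carry a numeric `difficulty` tag (0-3).
function drawSet(pool, difficulty) {
  return pool
    .filter(q => q.difficulty === difficulty)
    .map(q => ({ q, r: Math.random() }))  // shuffle a copy...
    .sort((a, b) => a.r - b.r)
    .slice(0, SET_SIZE)                   // ...and take ten questions
    .map(x => x.q);
}

// askSet() is a placeholder that presents a set and resolves to the number
// of correct answers; placement stops at the first level that is not passed.
async function runPlacement(pool, askSet) {
  for (let level = 0; level < LEVELS.length; level++) {
    const correct = await askSet(drawSet(pool, level));
    if (correct < PASS_MARK || level === LEVELS.length - 1) {
      return LEVELS[level];
    }
  }
}
```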

The user could select their answer using the mouse or by tapping on their mobile device. The time left for each question was clearly displayed in the top right corner of the question block. After selecting an answer, users were asked to confirm their choice by clicking on a confirmation button. If the time limit was reached before the user confirmed a response, the currently selected response was considered their answer. All information regarding the users’ progress in the evaluation was saved server-side in a PHP $_SESSION variable and only committed to the database after the user had finished their evaluation. This enabled the platform to start the evaluation over if a user’s progress was interrupted due to technical or other difficulties. Later on, in the results section, more information will be presented regarding the collection of information concerning the users’ performance in the placement test.
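A minimal sketch of the per-question countdown and confirm-on-timeout behavior is shown below; the selectors and the submitAnswer() helper are hypothetical placeholders rather than the platform’s actual code.

```javascript
// Sketch of the per-question countdown: 30 seconds per question, with the
// currently selected answer submitted automatically when time runs out.
const TIME_LIMIT = 30;

function startQuestionTimer() {
  let remaining = TIME_LIMIT;
  $("#time-left").text(remaining);

  const timer = setInterval(function () {
    remaining--;
    $("#time-left").text(remaining);
    if (remaining <= 0) {
      clearInterval(timer);
      // Whatever answer is selected at timeout counts as the response.
      submitAnswer($("input[name='answer']:checked").val());
    }
  }, 1000);

  // Explicit confirmation before the limit cancels the countdown.
  $("#confirm").off("click").on("click", function () {
    clearInterval(timer);
    submitAnswer($("input[name='answer']:checked").val());
  });
}
```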

At the end of the test, the users were presented with a result screen (Figure 4) and were prompted to continue to the available activities. In order to further reinforce the platform’s character as a learning tool and not as an exam activity, the detailed results were not disclosed to the user. For the purposes of this research, though, every answer was saved in the system’s database alongside the time it took the student to answer and whether each answer was correct. This information will be further discussed in the results section of this article.

Figure 4. Result screen.

3.3. Pre-Test Activities

After the evaluation of a student’s English proficiency level was completed, the set of pre-test activities was made available to them. The main set consisted of 10 activities involving a variety of interactivity, including mini-games, multimedia assets, drag-and-drop questions, multiple-choice questions, multiple-selection questions, and the sound recording of the voice-over activity, which was only available to a portion of the participants as part of an experiment that is beyond the scope of this article. These activities, labeled “English through Film”, were centered on the 1996 film “Independence Day” and the general notions of space and extraterrestrial life. The activities’ total duration was estimated to be around 45 minutes. A short description of each activity is presented below, followed by a table indicating which activity made use of which types of interactivity.

Activity 1 consisted of a drag-and-drop puzzle game depicting a UFO and a multiple-choice question concerning the resulting image of the puzzle.

Activity 2 consisted of a multiple-choice question and a multiple-selection question inquiring about students’ beliefs regarding extraterrestrial life and the words they might use to describe it. The possible choices were accompanied by pronunciation audio clips and their translation in Greek or definition in English, depending on the student’s language level.

Activity 3 consisted of an embedded video regarding extraterrestrial life, accompanied by subtitles. Greek subtitles were offered only to students with a low level of English knowledge, while other students had the option to watch without subtitles or with subtitles in English.

Activity 4 asked students to use drag-and-drop mechanics to place different words into groups based on their relevance to Space or to War, two major themes of the movie. Each word could be placed in either or both groups, and all words were accompanied by pronunciation audio clips and translations/definitions.

Activity 5 consisted of a drag-and-drop interface that asked users to combine word endings with verbs in order to create the appropriate nouns. The drop operation would only complete when it formed the correct version of the noun, thus giving students a trial-and-error type of interactivity.

Activity 6 consisted of a small text passage containing the synopsis of the movie that was enriched with hyperlinks to the translations or definitions of words depending on the student’s level.

Activity 7 presented the students with the trailer of the movie.

Activity 8 asked students about their knowledge of various words included in the movie scene that would be the object of the next two activities. Students could pick one of three options (“Don’t know”, “Have seen/heard” and “Can understand”). The word’s pronunciation was also provided. After the students gave all their answers, the words’ translations or definitions were also made available, for immediate feedback.

Activity 9 consisted of a video containing a scene from the movie depicting the President of the United States delivering an inspiring speech. The movie snippet was accompanied by English subtitles. A series of multiple-choice questions concerning the meaning of words included in the scene was provided for the students to answer. The words were the same as in activity 8. After the students gave all their answers, the correct answers were highlighted for feedback.

Activity 10 consisted of the same video presented in Activity 9 but with the actual voice of the POTUS removed. The transcript of the speech was also provided. Students were asked to record their own voice and could then listen to the result and choose to save it or record again. When they were satisfied with their recording, they could end the activity, which also contained two multiple-choice questions about how difficult and how interesting they found the voice-over process. The students could also download and keep their voice-over in mp3 format.

In order to implement this type of interactivity, custom controls that managed the recording process were added to the platform. Whenever a participant started recording their voice, the video would automatically start playing, accompanied by subtitles of the transcript. An implementation of the AudioContext interface was used to record and store the information client-side. Participants could hear the movie scene’s sound effects and ambient music, while the president’s voice had been removed through the use of specialized software. As soon as the video duration ended, or the participant pressed the “Stop Recording” button, the recorded audio clip was converted into mp3 format with the use of the LAME MP3 encoder through the lame.js JavaScript library. The use of compression heavily reduced the time required to forward the recording to the server while at the same time making it easier for participants to keep a copy of their creative work. After a recording attempt was completed, the participant could listen to their recording while simultaneously watching the video. Having immediate access to the final result both rewarded the participants with a sense of achievement and increased their motivation to proceed to subsequent recordings until they were satisfied with the result. Figure 5 depicts the voice recording interface.
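The following condensed sketch illustrates such a client-side recording pipeline, combining the Web Audio API (AudioContext) with the lamejs JavaScript MP3 encoder. The buffer size, bitrate, and function structure are illustrative assumptions; the platform’s actual implementation may differ.

```javascript
// Sketch: capture microphone audio via AudioContext and encode it to MP3
// with lamejs. ScriptProcessorNode is deprecated but widely supported.
async function recordVoiceOver(video) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  const encoder = new lamejs.Mp3Encoder(1, ctx.sampleRate, 128);
  const mp3Chunks = [];

  processor.onaudioprocess = function (e) {
    // Convert the Float32 samples delivered by the Web Audio API to the
    // 16-bit integers expected by the encoder.
    const float32 = e.inputBuffer.getChannelData(0);
    const int16 = new Int16Array(float32.length);
    for (let i = 0; i < float32.length; i++) {
      int16[i] = Math.max(-1, Math.min(1, float32[i])) * 0x7fff;
    }
    mp3Chunks.push(encoder.encodeBuffer(int16));
  };

  source.connect(processor);
  processor.connect(ctx.destination);
  video.play(); // the muted scene starts playing as recording begins

  // Call this when the video ends or "Stop Recording" is pressed.
  return function stopRecording() {
    processor.disconnect();
    source.disconnect();
    stream.getTracks().forEach(t => t.stop());
    mp3Chunks.push(encoder.flush());
    // The compressed clip can now be previewed, downloaded, or uploaded.
    return new Blob(mp3Chunks, { type: "audio/mp3" });
  };
}
```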

Figure 5. Voice-over interface.

The User Interface (UI) elements used during the activities were based mostly on simple interface actions, such as clicking the mouse (or tapping on mobile devices) and dragging and dropping elements. Typing was avoided, as it is time-consuming (especially for younger students) and can even be cumbersome when using mobile devices. A time-consuming element of interaction can cause loss of motivation for the participant, resulting in decreased effort. The jQuery UI library was used to implement the dragging and dropping features in a way that was compatible with both desktop and mobile devices.
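A sketch of such a drag-and-drop interaction, in the spirit of the word-grouping task of Activity 4, is given below. The class names are illustrative, and touch support for jQuery UI is assumed to be provided by a plugin such as jQuery UI Touch Punch.

```javascript
// Sketch of a drag-and-drop word-grouping interaction built on jQuery UI.
$(".word").draggable({
  revert: "invalid",          // snap back if not dropped on a valid target
  containment: "document"
});

$(".drop-area").droppable({
  accept: ".word",
  hoverClass: "drop-hover",   // CSS class giving visual feedback on hover
  drop: function (event, ui) {
    // Attach the word to the group it was dropped on.
    ui.draggable.appendTo(this).css({ top: 0, left: 0 });
  }
});
```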

Extensive use of client-side scripting through jQuery and JavaScript made sure the interface responded to user input in an intuitive manner. Drop areas were clearly visually represented, and the action of moving a draggable element above them provided the user with visual feedback through the use of CSS instructions. An instance of the interface is presented in Figure 6 as an example. Of course, each activity used different graphical elements according to the features it provided.

Figure 6. Interface example from Activity 4.

Throughout the activities, wherever it was deemed necessary, the students were provided with help regarding the words that were the main focus of the specific activities. Clicking on a word would provide the user with the translation of the word in Greek or a definition of the word in English, depending on the participant’s evaluated level of English. Additionally, a small speaker icon appeared next to the words (also seen in Figure 5) that provided students with an audio clip of the word’s pronunciation. The pronunciation clips were provided through online resources such as the Oxford Learner’s Dictionary.
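A minimal sketch of this word-assistance behavior follows; the endpoint name, markup attributes, and help-box element are hypothetical.

```javascript
// Sketch of the in-activity word assistance: clicking a highlighted word
// fetches its translation or definition, and clicking the speaker icon
// plays a pronunciation clip.
$(".target-word").on("click", function () {
  const word = $(this).data("word");
  // The server returns a translation (Greek) or definition (English),
  // chosen according to the student's evaluated proficiency level.
  $.get("word-help.php", { word: word }, function (helpText) {
    $("#help-box").text(helpText).show();
  });
});

$(".speaker-icon").on("click", function () {
  // Play the pronunciation clip associated with the word.
  new Audio($(this).data("audio-url")).play();
});
```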

Table 3 presents a summary of activities and the different features each activity included.

Table 3. Activities and features.

3.4. Post-Test Activities

After the students completed the “English through Film” activities, a set of post-test tasks was made available, meant to help measure the learning outcome of the platform’s use. For the purposes of the preliminary testing of the platform, these post-test tasks were undertaken by the students at a later date. The post-test tasks had an estimated duration of 15 minutes. A short description of the post-test tasks follows:

Post-Test Task 1 was Activity 8 repeated in order to assess the improvement of the students’ vocabulary.

Post-Test Task 2 consisted of a task that required students to select the correct pronunciation of a word from the two choices offered. The words were among those used in Activities 8-9-10.

Post-Test Task 3 was Activity 9 repeated, but without the inclusion of the video snippet.

Post-Test Task 4 consisted of a series of multiple-choice questions checking the meaning of words contained in Activities 8-9-10.

Post-Test Task 5 consisted of a drag-and-drop interface that allowed students to fill out sentences using words from Activities 8-9-10. The sentences could be completed even if the students’ choice was incorrect. Each word could be used more than once.

Post-Test Task 6 consisted of a series of multiple-choice questions concerning the multiple meanings of the word “difference”.

The online platform’s primary use was to provide students with a learning experience, enhanced through audio and visual media, which made use of a modern and easy-to-use interactive interface. In addition to this, the platform also collected data not only on the students’ answers but also on their other interactions with the platform. This data included how long students spent on each activity, which words they checked for pronunciation or translation/definition, which subtitles they used when watching the included videos, and more. This information, which included statistics from both the evaluation process and the pre- and post-test activities, can be used by educators to identify learning issues and difficulties. Additionally, it can be useful in assessing the platform’s performance in terms of learning results.

In order to collect this information, the platform used a series of AJAX calls in tandem with JavaScript-triggered events. Every time a student’s interaction triggered such an event, the client delivered the event’s information to the server, which seamlessly and asynchronously incorporated it into the database without causing any delay or other interference with the student’s interaction with the platform. The bulk of the platform-collected data comprised answers in the evaluation test and data collected from the students’ participation in the various activities.
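A sketch of how such asynchronous logging can be wired up with jQuery follows; the endpoint name and the currentActivity variable are hypothetical, and the user would be identified server-side through their session.

```javascript
// Sketch of the asynchronous metric collection: each relevant interaction
// fires an AJAX call storing an (activity, identifier, value) record
// without interrupting the activity.
let currentActivity = 4; // assumed to be set by the page for each activity

function logEvent(activity, identifier, value) {
  $.post("log-event.php", {
    activity: activity,     // e.g. 4 for Activity 4
    identifier: identifier, // e.g. "time", "read", "heard"
    value: value            // e.g. a duration in seconds or a word
  });
}

// Example: record that the student requested a word's pronunciation.
$(".speaker-icon").on("click", function () {
  logEvent(currentActivity, "heard", $(this).data("word"));
});
```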

4. Results

The data collected by the platform plays an important role in assessing the results of its use. In this section, we present the collected data in detail, as well as some statistics that will be further elaborated upon in the discussion section.

During the placement test, which evaluated students’ proficiency in the English language, the system recorded students’ answers to each question, the level of difficulty of the question, whether that answer was correct and how long the user spent on each question in seconds. A sample of this data can be seen in Figure 7.

Figure 7. Sample of placement test data.

A grand total of 107 students successfully completed the placement test. Their ages ranged from 11 to 17, and they were all students of various language schools in the municipality of Patras, Greece. Based on their evaluation results, participants were classified according to their proficiency in English into four groups: Beginner, Lower Intermediate, Intermediate, and Upper Intermediate/Advanced. The results of this process, along with some statistics concerning the average time it took participants to answer each question and the percentage of correct answers per level, are presented in Table 4.

Table 4. Placement test statistics.

The platform continued to gather metrics during the pre-test activities and post-test tasks. The nature of participant interaction was different in each activity, so in order to collect information while avoiding over-complication of our data structure, a unified format for storing this information was devised. For each piece of information, the platform stored the user, the activity in which the data was collected, an identifier specifying the nature or type of the data, and a value defining the specific instance of data. A sample of this data can be seen in Figure 8.

Figure 8. Sample data of pre- and post-activities.

Commonly used data identifiers are “time” for storing activity duration in seconds, “read” for storing which words were clicked by the participant to get information regarding their translation/definition, “heard” for storing which words’ pronunciations were requested by the user, and more.

A variety of metrics can be used to gauge engagement: the time a participant took to complete an activity, the interest they showed in getting help or interacting with elements of the interface, and more. Table 5 presents the average duration per activity, both for the pre-test activities and for the post-test tasks. It also presents the number of words for which users requested more information (definition or pronunciation) per activity, where applicable.

Table 5. Engagement related metrics.

In order to obtain a quantified estimation of the learning results of the activities as a whole, a comparison can be made between Post-Test Task 1 and Activity 8 and between Post-Test Task 3 and Activity 9. These post-test tasks were practically a reiteration of their earlier equivalents, so the only variable is the knowledge gained by the students through their use of the “English through Film” activities. Table 6 presents the percentage of each available answer for every question in Activity 8 and Post-Test Task 1, where the participants were asked to self-evaluate their knowledge of specific words. Table 7 presents the percentage of correct answers in Activity 9 and Post-Test Task 3, where the participants were asked to answer questions regarding the meaning of specific words. The post-test tasks were completed by the participants at a later date (there was a two-week interval between pre- and post-tests) to mitigate any skewing of the results caused by the questions and answers being too recent in the students’ minds.

Table 6. Perceived vocabulary knowledge before and after the completion of the pre-test activities.

Table 7. Actual vocabulary knowledge before and after the completion of the pre-test activities.

A vast array of other data regarding the learning aspect of the activities is available and can be used by educators to evaluate the academic performance, language challenges, and grammar or vocabulary issues that the participants faced during their use of the platform. This information goes beyond the scope of this study, which mainly focuses on the platform’s own contributions to motivation and learning.

5. Discussion

The testing process involved over 100 primary school and junior high school students attending language schools in the municipality of Patras, Greece. They were both male and female students in three different language schools, aged between 11 and 17 years. The experiment was carried out during the academic year 2018-2019, and at that time the students had been studying English as a foreign language for 3 to 5 years. Their native language was Greek; students whose native language was other than Greek were excluded from the study. Furthermore, none of the participants had lived in an English-speaking country.

5.1. Placement Test

As shown by the results of the placement test (Table 4), the participants’ language proficiency was primarily at the lower intermediate and intermediate levels, while very few students were placed in the Beginner or the Upper Intermediate/Advanced groups. The graph presented in Figure 9 visualizes the distribution of the participants’ English proficiency levels.

Figure 9. Evaluated English level.

As expected, the time participants took to answer each question, as seen in Table 4, increases alongside the difficulty of the questions, whereas the percentage of correct answers decreases. This is a good indicator that the questions were appropriately placed in their respective difficulty groups and reinforces the researchers’ trust in the placement test and its evaluation results. With the overall average answer time being below 10 s, and the average answer time for Upper Intermediate/Advanced questions being below 14 s, it is safe to assume that the 30 s allowed per question was enough for students to answer. Only in 36 out of 2580 student answers was the time limit reached, which further reinforces our conviction that the selected time limit was appropriate. Out of these 36 questions, 8 were answered correctly, 11 incorrectly, and 17 were not answered at all. Overall, no statistical anomalies were detected when evaluating the collected data of the placement test, which supports the belief that it was carried out correctly and delivered representative results.

5.2. Interaction with the Platform Activities

An indicator of the engagement of individual participants is their interaction with the platform’s assistance features regarding word pronunciation and definition or translation. Taking a closer look at these statistics, presented in Table 5, it is noticeable that this interaction was rather frequent. With a total of 1083 participant requests for a word’s pronunciation over the 8 activities and post-test tasks that supported this feature, an average of more than 135 requests per activity was reached. Considering the number of participants, it becomes apparent that, on average, each student used the feature more than once per activity. The highest usage appears in Activity 8, which inquired about students’ perceived knowledge of specific words. The lowest usage appears in Activity 9 and Post-Test Task 5, which featured the same words as Activity 8. This is expected, since it stands to reason that students would already have started to familiarize themselves with these words after their appearance in the previous activity.

Similarly, looking at the data in Table 5 concerning a word’s translation or definition, there were 1177 total interactions over the 6 activities where the feature was available. That results in an average of over 196 interactions per activity, which means almost two interactions per activity per user on average. The lowest number of such interactions appears in Post-Test Task 2, which features the same words that appear in Activities 8-9-10 and Post-Test Task 1, so it is rather safe to assume that this is the result of the participants’ familiarity with the information provided. The highest number of such interactions per activity appears in Activity 4. There seems to be some discrepancy, though, in the number of interactions. Interactions per activity, for both the pronunciation assistance and the definition assistance, seem to fluctuate between 100 and 200, with the exception of repeated appearances, but in the instance of Activity 4 they reach 444. Considering that the specific activity required participants to drag and drop word elements into groups, this might be an indication that some of the assistance interactions were unintentional, occurring while the participant was trying to complete the activity. A closer look at the interface of Activity 4 is warranted in potential next steps of the platform’s wider testing.

Regarding the average duration of each activity, it is noticeable that activities that involved embedded videos were lengthier, with Activity 10, which included the voice-over related interaction, being by far the lengthiest. This is to be expected, since the video duration dictated, to an extent, the length of the participant’s interaction with the activity. However, it is important to underline that in pre-test Activity 3, only 6 out of the 107 participants proceeded to the next activity without having watched the full duration of the video. On the other hand, 8 participants chose to spend more than one and a half times the video’s duration on the activity, presumably re-watching parts of the video. Similarly, in Activity 7, eight participants did not watch the video for its full duration, while four participants watched for more than 1.5 times the video’s duration. In Activity 9, where the video was supplementary to the vocabulary-related questions, 30 participants decided to proceed without watching the full duration, while at the same time 47 participants spent more than 1.5 times the video’s duration on the activity. Perhaps the most interesting finding is the average time spent in Activity 10, which involved the voice-over. The experimental group participants spent an average of 543 s on this activity, whose video duration was 197 s, indicating that many of them recorded their voice-overs more than once, until they were happy with the result. This is a strong indicator that the combination of multimedia with creativity as motivation increases both the students’ engagement and their willingness to commit more of their time to an activity.

Another useful insight provided by the platform-collected data concerns the learning results of the participants. As mentioned in the results section, Activities 8 and 9 and Post-Test Tasks 1 and 3 are ideal candidates for this analysis. Figure 10 and Figure 11 visualize the information provided in Table 6. It is clear that there was a small but significant improvement in participants’ perceived knowledge of the target vocabulary used in Activities 8-9-10. Some small discrepancies appeared in specific words, which the majority of the students erroneously considered known; a case in point is the word “differences”, where in fact a less frequent homonym of the word was presented. However, there is a consistent upward trend between the number of students who were confident they could understand each word in Activity 8 and those in the post-test. Especially for words where there was a considerable lack of knowledge, like “annihilation” or “persecution”, the improvement in the participants’ confidence is evident. In the case of the word “fate”, the increase in students who answered that they could understand the word reached almost 20%. Overall, there was a 7% increase in the students who claimed to understand the target words and an equivalent 7% decrease in the students who claimed no prior knowledge of the target words. This difference can be considered substantial.

Figure 10. Visualization of perceived vocabulary knowledge for Activity 8.

Figure 11. Visualization of perceived vocabulary knowledge for Post-Test Task 1.

A similar comparative look between Activity 9 and Post-Test Task 3 can shed light on the actual increase in lexical knowledge, as opposed to the perceived one. Figure 12 and Figure 13 visualize the information provided in Table 7. In this comparison, things are not as clear as in the previous case. Although there is an increase in correct answers that reaches 2% and an equivalent decrease of 2% in students who could not provide the correct meaning of the target words, this trend is not consistent for all lexical items. Despite the fact that for the vast majority of words there were more correct answers in the post-test, the words “annihilation” and “persecution” do not follow that trend, even though participants appeared more confident of being familiar with them in the previous comparison. The word “declare” shows the best improvement, both in the increase in the number of correct answers and in the decrease in the number of errors. Overall, the 2% increase is deemed marginal, and further research can be useful in reaching more solid conclusions.

Figure 12. Visualization of target vocabulary knowledge for Activity 9.

Figure 13. Visualization of target vocabulary knowledge for Post-Test Task 3.

The data collected by the platform may have another use beyond evaluating the platform’s contribution to students’ engagement. It can be a valuable tool for educators to acquire a deeper insight into the challenges their students face regarding specific lexical items. As seen in our earlier analysis of correct and incorrect answers regarding specific words, an educator can use these metrics to find gaps in the students’ knowledge of the subject, as well as general trends concerning their language development. For instance, participants asked for pronunciation or definition assistance on the words “annihilation” and “annihilate” in Activities 5 and 8 and Post-Test Task 1 more than for any other words. This, combined with the fact that the questions regarding the meaning or use of the word “annihilation” in Activity 9 and Post-Test Tasks 1 and 4 involved a large percentage of incorrect answers, indicates a particular weakness of the participants with regard to this word and its derivatives. This kind of information can be used to guide educators’ focus to particular problematic lexical items and, in due time, support their students in improving their vocabulary skills.

5.3. User Satisfaction

Metrics and statistics aside, a good indication of the platform’s contribution can be inferred from the general disposition of the participants during the preliminary testing of the online platform. All participants were informed of the experiment beforehand by the administration of their schools. A leaflet with information was circulated in each school, and a parental consent form was given to the potential participants for their parents to sign. One of the schools advertised the study on their Facebook page. Once parental consent had been gathered, the administrations of the schools set up a timetable for the platform testing. This was composed of 45-minute evening sessions using 2 - 4 classrooms at a time, for about two weeks, to accommodate the demanding evening schedules of the students. Despite the challenge of finding free time in their very busy schedules, most students did not fail to turn up for their appointments. This was a preliminary positive outcome regarding the study.

Most students were very enthusiastic and eager to participate; however, there were some who were skeptical, and a few were even intimidated by the process, which they viewed as yet 'another test'. The researcher, following the protocol of the study, clarified to each and every participant that they were not taking a test and that they were meant to enjoy the activities and have fun. It was also stressed that the procedure was anonymous and that the results of their work would not be viewed by their teachers, school owners, etc. From then on, the participants felt much more relaxed and seemed to display trust towards the experimenter. Many participants, especially boys, were very enthusiastic and waited impatiently in the waiting areas for their sessions to start.

Apart from some minor issues with microphones and speakers, that is, computer hardware difficulties which were solved very quickly, no other significant problems were encountered. Perhaps the most challenging problem in the schools was the internet connection speed. This led to three students being unable to complete the voice-over activity since they could not save their recordings because the platform crashed due to very low connection speed. Overall, no participants commented negatively on the duration of the session; even though some took more than 45 minutes to complete the activities, mainly because they either viewed the videos more than once, or because they recorded the voice-over many times until they were happy with the outcome.

Regarding the post-test, apart from one participant who failed to show up, all other students gladly turned up for the tasks. Most participants commented on the shorter duration of the post-test, which only lasted about 10 - 15 minutes. They were surprised and somewhat disappointed with the short duration and said they would have liked it to last longer. Some of the participants stated they would have liked a video in the post-test, too. Perhaps a video would have made the Post-test more enjoyable and less similar to a 'test', thus the comments in its favor. This specific feedback is a good indicator that the multimedia content presented in the initial activities was well received.

What is probably the most significant evaluation of the platform, however, is what the participants commented once they were out, after they had done the pre-test activities. In all three language schools, the experimenter made sure she approached participants in a friendly and unobtrusive manner to elicit any comments the students had to offer regarding their experience. Overall, participants were satisfied and admitted it was much more fun than they had expected. One of the things many participants liked was the straightforward format of the placement test and of some of the activities, namely Activities 8 & 9. They specifically mentioned “Ήταν (πολύ) απλό/εύκολο!” (It was (very) simple/easy!), referring to the simple layout. Some other participants commented on the fact that Activity 2 did not have “Correct” or “Incorrect” answers and that they could choose words as they themselves saw fit. This lowered the stress of doing the activity and probably also helped them look up unknown words without external pressure, which indicates that whatever work they did on the specific activity was the result of their own interest, due to intrinsic motivation. Many participants also found the dictionary and pronunciation features convenient because of the immediacy with which they received the information they needed. One participant commented “Είναι βολικό να μην ψάχνεις.” (It’s convenient not to have to search). On one specific instance, a small group of boys, aged 11 - 12, wanted to know whether the content of the NASA video regarding Mars was in fact true or not. The experimenter assured them that the video was authentic and referred them to the official website for more up-to-date videos, since that was not a recent one on the issue. It was certainly very satisfying to see youngsters intrigued by the content of the platform, which tapped into their inquisitive nature. Finally, most participants agreed that they liked the drag-and-drop feature in Activities 4 & 5 and commented “Καλύτερα/Πιο γρήγορο απ’ το να γράφω/γράφουμε.” (Preferable to/Faster than typing). They also liked the fact that in Activity 5 only the correct suffix would attach to the word root to form the noun. This meant learners received direct feedback, which helped a lot with the words they were not familiar with.

6. Conclusions

In conclusion, the evaluation of the experimental platform by its users was positive. It was considered very user-friendly in terms of its layout and features, while its multimedia content was regarded as interesting and engaging. There were no complaints concerning the platform’s functionality whatsoever, despite the very few cases when a poor internet connection temporarily delayed page loading. Participants were eager to move on with the materials presented to them, both in the pre-test and in the post-test.

Based on the encouraging results of this preliminary use of the platform, it is safe to say that further development and a wider implementation should be pursued. The focus of these next steps should be on incorporating more elements of creativity, like the voice-over activity, which help engage the students for prolonged periods of time and motivate them to interact with the learning material. Additionally, extra effort can be put into providing educators with more, and better presented, feedback automatically generated by the usage of the platform. Many of the popular commercial and non-commercial e-classroom and online education solutions may benefit from a similar approach; i.e., monitoring user activity, as well as filtering and presenting to educators results regarding their students’ performance, in order to help them focus their efforts on areas where the need is greater.

Overall, we believe that the future of online learning is bright. The potential to achieve both greater learning results and higher student motivation and engagement is enormous. We primarily need to avoid the simple translation of real-life practices into web-based alternatives and instead rediscover the learning process through the capabilities and strengths of web technologies.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Pardanjac, M., Radosav, D. and Jokic, S. (2009) Advantages and Disadvantages of Distance Learning. MIPRO 2009 32nd International Convention Proceedings: Computers in Education, Opatija, Croatia, 25-29 May 2009.
[2] Harting, K. and Erthal, M. (2005) History of Distance Learning. Information Technology, Learning and Performance Journal, 23, 35-44.
[3] Ziden, A. and Rahman, M. (2013) The Effectiveness of Web-Based Multimedia Applications Simulation in Teaching and Learning. International Journal of Instruction, 6, 211-222.
[4] Emmerich, R.D.W. and Devlin, D.W. (1996) Independence Day [Motion Picture]. Twentieth Century Fox, USA.
[5] Holmberg, B. (2005) The Evolution, Principles and Practices of Distance Education. Bis, Bibliotheks-und Informationssystem der Universität Oldenburg, Oldenburg.
[6] Provenzo, E.F. (1986) Teachers and Machines: The Classroom Use of Technology since 1920 by Larry Cuban. History of Education Quarterly, 26, 647-648.
https://doi.org/10.2307/369036
[7] Radford, A. (2011) Learning at a Distance: Undergraduate Enrollment in Distance Education Courses and Degree Programs. Stats in Brief US Department of Education, National Center for Education Statistics, USA, 1-22.
[8] Dewar, J.A. (2000) The Information Age and the Printing Press: Looking Backward to See Ahead. Ubiquity, 2000, Article No. 8.
https://doi.org/10.1145/347634.348784
[9] Akram, S. and Khatoon Malik, D.S. (2012) Use of Audio Visual Aids for Effective Teaching of Biology at Secondary Schools Level. Elixir Leadership Management, 50, 10597-10605.
[10] Kalliris, G., et al. (2014) Emotional Aspects and Quality of Experience for Multifactor Evaluation of Audiovisual Content. International Journal of Monitoring and Surveillance Technologies Research, 2, 40-61.
https://doi.org/10.4018/IJMSTR.2014100103
[11] Mathew, N. and Alidmat, A. (2013) A Study on the Usefulness of Audio-Visual Aids in EFL Classroom: Implications for Effective Instruction. International Journal of Higher Education, 2, 86-92.
https://doi.org/10.5430/ijhe.v2n2p86
[12] Rasul, S., Bukhsh, Q. and Batool, S. (2011) A Study to Analyze the Effectiveness of Audio Visual Aids in Teaching Learning Process at University Level. Procedia Social and Behavioral Sciences, 28, 78-81.
https://doi.org/10.1016/j.sbspro.2011.11.016
[13] Sevastiadis, C., et al. (2001) Development of a Distance-Learning Environment, Using Database Driven Dynamic Web Pages. Application for Digital Audio Internet Courses, 110th Audio Engineering Society Convention, Amsterdam, NL, 12-15 May 2001.
[14] Hennessey, B.A. (2016) Intrinsic Motivation and Creativity in the Classroom: Have We Come Full Circle? In: Kaufman, J.C. and Beghetto, R.A., Eds., Nurturing Creativity in the Classroom, Cambridge University Press, Cambridge, 227-264.
https://doi.org/10.1017/9781316212899.015
[15] Wark, N. and Ally, M. (2018) Online Student Use of Mobile Devices for Learning. mLearn 2018, Chicago, IL, 11-14 November 2018.
[16] Pearson, E. and Koppi, T. (2002) Inclusion and Online Learning Opportunities: Designing for Accessibility. Research in Learning Technology, 10, 17-28.
https://doi.org/10.3402/rlt.v10i2.11398
[17] Faghih, B., Azadehfar, M. and Katebi, S. (2013) User Interface Design for E-Learning Software. The International Journal of Soft Computing and Software Engineering, 3, 786-794.
[18] Mladenova, M. and Kirkova, D. (2014) Role of Student Interaction Interface in Web-Based Distance Learning. ACHI 2014, The Seventh International Conference on Advances in Computer-Human Interactions, Barcelona, Spain, 307-312.
[19] Seffah, A., et al. (2006) Usability Measurement and Metrics: A Consolidated Model. Software Quality Journal, 14, 159-178.
https://doi.org/10.1007/s11219-006-7600-8
