Investigating Media Selection through ChatGPT: An Exploratory Study on Generative Artificial Intelligence in the Aid of Instructional Design

Abstract

In instructional design (ID), the selection of appropriate media to facilitate learning is crucial yet complex. This study proposes that artificial intelligence (AI), particularly ChatGPT, an advanced large language model, holds the potential to streamline the media selection process. Aimed at assisting instructional designers and educators, this investigation explores the efficacy of ChatGPT in recommending suitable instructional delivery systems. A media selection tool built with ChatGPT was used to review a series of instructional tasks, mirroring the work of the authors who did the same using traditional methods. The results from both approaches were then compared and studied, revealing similarities and differences in their media decisions and showing that the AI and authors went “off script” from their training and guidance, respectively. The findings demonstrate ChatGPT’s capacity to expedite analyses and offer varied insights, thereby enriching the instructional process, while also raising concerns regarding the need to carefully evaluate its outputs, inputs (i.e., prompts), and potential biases in training data. Overall, this research concludes that generative AI presents novel approaches and opportunities within ID, offering significant benefits. It underscores the transformative impact of AI tools like ChatGPT and advocates for continued exploration into their integration and influence within the field of ID.

Share and Cite:

DaCosta, B. and Kinsell, C. (2024) Investigating Media Selection through ChatGPT: An Exploratory Study on Generative Artificial Intelligence in the Aid of Instructional Design. Open Journal of Social Sciences, 12, 187-227. doi: 10.4236/jss.2024.124014.

1. Introduction

Media selection stands as a pivotal aspect within instructional systems design (ISD) (Gagné et al., 1981; Holden et al., 2010), and with it comes the long-accepted premise that every instructional designer encounters the challenge of determining the most suitable media for effective instructional delivery (e.g., Higgins & Reiser, 1985). Media selection ensures that a particular instructional medium can support the achievement of a specified learning goal (Holden et al., 2010) or outcome. The result is a delivery system that includes all the essential components for implementing an instructional program. Examples can range from traditional, face-to-face instruction, such as classroom settings, to immersive technologies that provide interactive learning experiences, such as virtual reality (VR), augmented reality (AR), and mixed reality environments.

However, assessing which medium is most suitable for a particular instructional task can be challenging (Braby, 1973; Gagné et al., 1981; Hawkridge, 1973; Sugrue & Clark, 2000). Indeed, media selection can be daunting, as the process involves careful consideration of many factors to ensure that the most suitable media for instructional delivery are selected (Holden et al., 2010; Martin, 2011; Sugrue & Clark, 2000). To that end, it could be argued that every instructional task necessitates a medium (or media) that possesses specific characteristics or attributes. With the growing demand within organizations to disseminate instructional content using the most efficient method (Holden et al., 2010), what is already an overwhelming task of choosing media that supports desired learning outcomes is only exacerbated.

It is therefore proposed in this work that artificial intelligence (AI) may offer promising solutions to this challenge. The incorporation of generative AI-driven tools has already brought about a revolution in learning activities (Bahroun et al., 2023) . These tools enable the alignment of learning outcomes with instruction, activities, and assessments, facilitating tailored experiences (Office of Educational Technology, 2023) . Moreover, while AI has been a subject of interest for decades, it has significantly transformed the field of instructional design in recent years, resulting in an abundance of innovative tools and techniques for course design and development.

In the context of media selection, AI could be used to analyze a range of factors to recommend the most suitable instructional delivery system, making decision-making more streamlined and effective. That is, AI has the potential to assist in identifying appropriate media while serving as a time-saving tool. ChatGPT is a large language model (Gimpel et al., 2023) developed by OpenAI that can generate human-like text responses and provide natural language understanding in a conversational manner (ChatGPT-4, personal communication, December 15, 2023). Given its capability to respond to queries, clarify concepts, and craft content akin to human interaction, ChatGPT holds promise to efficiently guide instructional designers and educators in choosing the most appropriate media through a rapid, informal, and dialogue-based approach.

1.1. Aim of the Study

The purpose of this investigation is to explore the use of ChatGPT as a resource to assist instructional designers and educators in selecting the most suitable media for delivering their instructional content. This inquiry is structured as follows: First, a brief introduction to ISD alongside a review of media selection is presented, and in doing so, the challenges are discussed. Next, research and development of AI as it applies to enhancing learning outcomes are explored, with emphasis on the use of AI-driven technology in ISD. The method employed to investigate media selection is subsequently described. Then, the findings are examined, accompanied by a detailed discussion. Finally, this work considers the future of AI-driven technology as a resource in ISD.

Altogether, this research examines the potential advantages and implications of using ChatGPT in instructional design. The aim is to encourage instructional designers and educators to explore and adopt generative AI-driven tools to streamline what are often demanding and time-consuming ISD practices. Finally, the results of this study were disseminated among educational professionals whose constructive feedback contributed to the refinement and improvement of the research.

1.2. Terminology

Several terms that require clarification are found in this work. Historically, ISD has been recognized by various monikers. Terms like “educational technology” (Seels & Richey, 1994) , “instructional design” (Martin, 2011; Reiser, 2001b) , “instructional design and technology” (Reiser, 2001a, 2001b) , “instructional development” (Reiser, 2001b) , “instructional systems development” (Merrill & ID2 Research Group, 1996) , “instructional technology” (Seels & Richey, 1994) , and “the systems approach” (Reiser, 2001b) are all present in scholarly literature and have been used synonymously in the context of ISD. This diversity might partly stem from the field’s evolution. For example, Funaro and Mulligan (1978) discussed “systems analysis” as an antecedent to ISD and a “systems approach to training” in the context of developing and managing U.S. Naval training programs. Then, scholars have denoted differences among these terms (e.g., Seels & Richey, 1994 ), including the distinction between ISD and instructional design (e.g., Nolin, 2019 ). For the sake of simplicity, this investigation uses the term “instructional design” (ID).

Moreover, there have been many attempts to define the field of ID. Merrill and the ID2 Research Group (1996) described instructional systems development as a “set of procedures for systematically designing and developing instructional materials” (p. 30), whereas Merrill et al. (1996) described ID as “the use of these scientific principles [i.e., natural principles involved in instructional strategies] to invent instructional design procedures and tools” (p. 5). Reiser (2001a, 2001b) described the field of ID and technology as encompassing the “analysis of learning and performance problems, and the design, development, implementation, evaluation and management of instructional and non[-]instructional processes and resources intended to improve learning and performance in a variety of settings, particularly educational institutions and the workplace” (p. 53 and 57 respectively). While Martin (2011) defined ID as the “science of creating detailed specifications for the design, development, evaluation, and maintenance of instructional material that facilitates learning and performance” (p. 956).

The current work adopts the 1994 definition attributed to the Association for Educational Communications and Technology (AECT), which defined instructional technology as “the theory and practice of design, development, utilization, management and evaluation of processes and resources for learning” (see Seels & Richey (1994) for a comprehensive analysis of the definition, to include its origin). This definition was chosen due to the AECT’s (2023) longstanding dedication to the field.

Artificial intelligence is an area of study that has also undergone many attempts to be defined and succinctly explained. Citing others, Pokrivcakova (2019) explained that AI has been defined in the context of machines, computers, or systems (e.g., Russell & Norvig, 2010 ) (although it is said not to describe any one technology; Baker & Smith, 2019 ), as a specific set of skills (e.g., Baker & Smith, 2019 ), and more broadly, as a science (e.g., Stone et al., 2016 ). Pokrivcakova (2019) adopted the definition of Luckin et al. (2016) , who borrowed from all three contexts, defining AI “as computer systems that have been designed to interact with the world through capabilities (for example, visual perception and speech recognition) and intelligent [behaviors] (for example, assessing the available information and then taking the most sensible action to achieve a stated goal) that we would think of as essentially human” (p. 14). Chen et al. (2020) went on to remark that although definitions deviate in language, many have the common theme of “mimicking human cognition by computers” (Wartman & Combs, 2018: p. 1107) , formulating their broader definition from the literature as AI encompassing “the development of machines that have some level of intelligence, with the ability to perform human-like functions, including cognitive, learning, decision making, and adapting to the environment” (p. 75267). Given the focus on ChatGPT, a large language model dependent on advancements within subfields of AI, this work adopts the explanation by Gimpel et al. (2023) , who characterized AI as a broad field that includes the creation of intelligent machines that can perceive their environment while making decisions with machine learning serving as a subfield enabling computers to enhance task performance through pattern recognition and data-driven predictions.

Finally, an “instructional designer” is defined as someone who uses a systematic approach grounded in ID theory and practices to develop materials for learning experiences. This individual may be a formally trained professional in ID or someone with practical, real-world experience.

2. Related Work

2.1. The Evolution of Instructional Design

Dating to the Second World War (Dick, 1987; Funaro & Mulligan, 1978; Reiser, 2001a, 2001b; Seels, 1997) , ID emerged from military and academic initiatives during the late 1940s (Dick, 1987; Funaro & Mulligan, 1978) , continuing through the 1950s (Dick, 1987; Funaro & Mulligan, 1978; Reiser, 2001b; Wilson & Jonassen, 1990) and 1960s (Funaro & Mulligan, 1978; Reiser, 2001b; Wilson & Jonassen, 1990) . The aim was to establish systematic approaches for developing instructional materials (Reiser, 2001b; Seels, 1989; Wilson & Jonassen, 1990) , drawing on concepts from general systems theory (Seels, 1997; Wilson & Jonassen, 1990) , psychology, management science (Wilson & Jonassen, 1990) , instructional theory, and communications theory (Seels, 1997) . The thought was that system design methods and principles used in the advancement of weapons and manufacturing could be used to develop complex training systems (Wilson & Jonassen, 1990) . That is, it was proposed that something could be learned by examining design models found in engineering and other similar design fields ( Wilson & Jonassen, 1990; see Reiser (2001b) for a historical account of ID and technology).

In the years that followed, ID has been rooted in dozens of models and frameworks to guide the creation of instructional and training materials (Nolin, 2019; Reiser, 2001b; Wilson & Jonassen, 1990) . Reiser (2001b) notably highlighted Gustafson and Branch’s (1997) detailed examination of the many models formulated during the 1980s and 1990s. Contrary to being a singular theory (Merrill & ID2 Research Group, 1996) , ID encompasses diverse principles (Martin, 2011; Nolin, 2019) developed with different emphases, such as learner participation, interaction, and engagement, as well as motivation, and student success (Nolin, 2019) . Therefore, it is common for instructional designers to rely on various models, frameworks, and theories to steer them through the phases of ID (Martin, 2011; Nolin, 2019; Reiser, 2001b) .

Among these models, Gagné (1965) categorized learning outcomes into five domains—verbal information, intellectual skills, psychomotor skills, attitudes, and cognitive strategies—each of which necessitated unique conditions for promoting learning (Reiser, 2001b) . He also outlined nine events of instruction—representing essential teaching activities to facilitate the achievement of diverse types of learning outcomes—detailing which instructional events were critical for certain types of outcomes and providing insights into the situations where certain events could be omitted (Reiser, 2001b) . Dick and Carey (1990) presented nine consecutive steps intended to achieve an instructional objective, beginning with the establishment of goals and concluding with a summative evaluation. Merrill et al. (1991) introduced the concept of knowledge objects to represent knowledge that can be used to build transactional shells (or instructional algorithms) to teach the knowledge or skill. Kemp et al. (1994) described a circular ID framework with nine interconnected components, presenting a non-linear, ongoing cycle of implementation and evaluation, contrasting with the Dick and Carey model by offering greater flexibility in navigating its stages. Then there is the work of Van Merriënboer (1997) , who posited that complex learning programs aimed at teaching professional competencies that integrate knowledge, skills, attitudes, and skill coordination can be deconstructed into four components: learning tasks, supportive information, procedural information, and part-task practice (Frerejean et al., 2019) .

Altogether, there is no shortage of models, frameworks, and taxonomies to aid in the ID process. The outpouring, however, has made it difficult for instructional designers to identify the most appropriate model to use for a given situation, further clouding the decision-making process (Slagter van Tryon et al., 2018) . This is problematic, given that one of the skills instructional designers are said to need is the ability to select an ID process (Lenane, 2022) . It also highlights the broader challenges, as ID—and, by extension, education—is complex and multifaceted (Bond & Dirkin, 2020) . It has been asserted, for instance, that the emerging nature of the profession itself constrains a shared comprehension of its practices (Sharif & Cho, 2015) .

The literature abounds with opinions and positions offering ID guidance that speaks to the many challenges in the field. Even the longevity of models has been questioned (e.g., Dick, 1996). It is argued, in the context of the current work, that the largest issue in ID is the ever-changing landscape of educational technology and pedagogy. With the rapid advancement of technology, instructional designers and educators must continually adapt to incorporate new tools and platforms effectively (Gameil & Al-Abdullatif, 2023). This requires staying abreast of technological trends and understanding how to leverage these tools to enhance learning experiences (Liu et al., 2002) without overwhelming learners or sacrificing didactic integrity. However, finding the right balance between educational objectives and technical capabilities is difficult, as instructional designers must ensure that the technology supports learning (Holden et al., 2010) rather than driving the process (Martin, 2011). This includes supporting the diverse needs of today’s learners, including accessibility matters (Pollard & Kumar, 2022; Office of Educational Technology, 2023), which demand technologies that are inclusive and adaptive.

The influence of technology on ID cannot be overstated. It speaks heavily to the importance of selecting the most appropriate media for instructional delivery.

2.2. The Importance of Media Selection

Martin (2011) explained that a substantial aspect of the instructional process entails presenting learners with the essential information required for learning (Gagné, 1985; Reiser & Dick, 1996) . Emphasis should be placed on important (or characteristic) aspects of what needs to be learned (Gagné, 1985) , which should be chunked and organized in a meaningful way (Kruse & Keil, 1999) . This thinking further underscores the importance of selecting suitable media in that the selection supports the information and essential characteristics that learners need to grasp. Thus, in addition to focusing on effective representation and organization of the subject matter to facilitate learning (Merrill & ID2 Research Group, 1996) , it is equally imperative to integrate media that can support the desired learning goals.

The integration of media is not new. It has long been asserted that technology should be considered part of the instructional strategy (Martin, 2011; Merrill & ID2 Research Group, 1996) so that the selected media is aligned with the instructional objectives (Martin, 2011) . Seels (1997) stressed that delivery system selection (and development) is as essential as the other design stages, explaining that content, objectives, assessment, strategies, and delivery systems should be incorporated through systems and theory (i.e., the essential elements of effective instruction should be integrated into a cohesive whole). Reiser (2001b) echoed this, stating that the integration of media is one of the practices that, over the years, has formed the “core of the field” (p. 57), with media selection a recurring element found in many ID models.

Early academics stressed that instructional designers had yet to lay the groundwork for establishing robust approaches for choosing media suitable for specific learning tasks (e.g., Hawkridge, 1973 ). Hawkridge (1973) , for example, offered that his university’s choices of media were driven more by logistical, financial, and political factors rather than by well-founded, explicitly defined psychological and educational considerations. Scholars also highlighted the inefficiency of formal media selection procedures as being overly broad, too simplistic, or excessively complex (e.g., Braby, 1973 ). Upon conducting interviews with 29 instructional designers at four U.S. Army schools, Gagné et al. (1981) observed that the complexity of procedures often led to media selection decisions based on experience and intuition, therefore introducing the human factor into decision-making regardless of the model being used.

2.3. Challenges in the Selection of Media

The difficulty of media selection is no secret (e.g., Braby, 1973; Gagné et al., 1981; Hawkridge, 1973; Sugrue & Clark, 2000 ). Sugrue and Clark (2000) referenced Heidt (1989) , who noted the inherent difficulty in handling media selection models, explaining that none of the examples reviewed were considered “easy-to-handle,” indicating a significant level of complexity (p. 397). This is arguably attributed to a multitude of factors.

Models have been accused of being more similar than different (Sugrue & Clark, 2000). Sugrue and Clark (2000) spoke to the conclusions of Heidt (1989), who found that of the models reviewed, some included identifiable factors that should be considered in the decision-making process, while others proposed procedures that typically involve elaborate steps culminating in a common-sense decision. Of the models comprising linear procedures, several incorporated the selection of media as an activity that occurs after initial design but before development (Sugrue & Clark, 2000). So, as diverse as these models are, a consensus could be argued regarding the timing of media selection, even as the models highlight the potential intricacies of processes that must weigh numerous factors.

Moreover, many models promote a two-stage process, first focused on instructional characteristics and then on practical considerations (Sugrue & Clark, 2000) . Seels (1997) offered the example of Reiser and Gagné (1983) , who grounded their work on the principle that ID factors should be prioritized when selecting the most suitable media. They centered their model on principles of human learning to guide selection, emphasizing the creation of optimal learning conditions and only then looking at practical factors (Seels, 1997) . This means that while there have been attempts to establish a scientific approach to media selection centered around instructional elements, models continue to account for practical considerations, including cost and other logistical aspects, which add a layer of subjectivity.

Compounding matters, Nolin (2019), citing Reigeluth’s (1999) work, explained that ID is often described as prescriptive. It has been reasoned that because ID is not a theory, it stands as a series of procedural steps primarily emphasizing what to do rather than how to do it or why it is effective (Merrill & ID2 Research Group, 1996). This only exacerbates matters, particularly for those who are not versed in ID theories and practices. Reiser (2001b) viewed individuals in the ID field as those who devote a substantial portion of their time to working with media, to tasks related to systematic ID procedures, or to a combination of both. As defined in this study, however, instructional designers could be those who possess practical, firsthand experience. Thus, while these individuals are likely experts in their respective areas, they may lack essential ID knowledge (Martin, 2011) to navigate the complexity of these models.

Another challenge has been the lack of experimental evaluation (or the comparison of methods), with no formal efforts to determine their reliability and validity (Braby, 1973) . Sugrue and Clark (2000) maintained that justifying media selection rules on grounds other than practicality is challenging because there is no proof that any medium uniquely enhances learning, motivation, or the development of distinct or transferable cognitive skills ( Clark & Salomon, 1986; Clark & Sugrue, 1991; see Clark (1994) for a discussion of the counter-arguments, and Clark (1983) for a discussion of media comparison research). To support their claim, Sugrue and Clark (2000) referenced studies by Braby (1973) and Romiszowski (1970) , who reported conflicting findings in their examination of the comparative effectiveness of media selection models and intuitive methods. This supports Clark’s (1983, 1994) longstanding view that media serve primarily as conduits for instructional methods rather than directly influencing cognition.

Building upon this is the idea that no single medium is inherently superior or inferior to others. Ziagos (1991) explained that in the 1960s and 1970s, delivery media was traditionally seen as crucial for effective instruction, but by the 1990s, research suggested that if the media supports the instructional method to meet the learning objective, all delivery media could be equally effective. From this, Ziagos (1991) concluded that it is the method, not the medium of delivery, which determines the effectiveness of learning. In subsequent years, it has been proposed that the suitability of media elements depends on factors like the unique characteristics of each medium and the context in which it is used (e.g., Holden et al., 2010 ).

However, identifying the unique characteristics of media has its share of challenges. Sugrue and Clark (2000) cited Heidt (1975) , who highlighted issues about categorizing media to deliver learning, pointing to cases in which the classifications were too broad or offered little regarding their variances. Finally, there is the issue that although media may have been appropriate at their inception, they become less suitable over time (Sugrue & Clark, 2000) . This stresses the importance of prioritizing media that consistently aligns with and reinforces the intended learning outcomes (Holden et al., 2010) rather than focusing predominantly on the novelty or technological sophistication of the media (Martin, 2011) .

Altogether, despite the impetus to incorporate media selection into the ID process, choosing suitable media continues to be a challenge, in which selections are instead founded on pragmatic factors along with experience and intuition. Sugrue and Clark (2000) cited Heidt (1989) , who noted that apart from practical and quantifiable factors, the criteria for selecting media depend on the subjective judgment of the instructional designer or instructor, guided by potentially relevant considerations. Anderson (1983) perhaps best summarized the matter, asserting that charts (alongside other guides) found in models are aimed at structuring the process systematically and comprehensively. Consequently, media selection should not be viewed as an exact science (Anderson, 1983) .

2.4. Artificial Intelligence in Education

An area of study that is making a profound impact on enhancing learning outcomes is AI (Luckin et al., 2022). Artificial intelligence has ushered in transformative advantages, marked by its ability to analyze large datasets efficiently, thus enhancing decision-making processes in a multitude of fields (Bahroun et al., 2023) ranging from healthcare to environmental science. For example, AI-based systems are being explored to create personalized solutions to improve patient outcomes using AI-driven diagnostics and treatment plans (e.g., Esteva et al., 2019). Although algorithms have long been used in anti-malware software, AI is revolutionizing cybersecurity in protecting Internet-connected systems through the application of machine learning, deep learning, and natural language processing, thereby automating and improving the intelligence of cybersecurity services and management (e.g., Sarker et al., 2021). Then, there are environmental conservation efforts, in which AI techniques are being used to better sustain natural resources and to model climate change scenarios, showing the benefits of AI-driven technologies in addressing global environmental challenges (e.g., Rolnick et al., 2022).

These examples, though limited, underscore AI’s multifaceted benefits, particularly its role in fostering innovation across diverse fields. This includes enhancing efficiency via quicker decision-making and problem-solving, automating both routine and complex tasks, analyzing vast data sets, and conducting predictive analytics. Furthermore, AI improves accessibility and personalization by analyzing individual preferences, behaviors, and needs (Office of Educational Technology, 2023).

It is this personalization, however, that is perhaps of most interest in the field of education. Using AI in education (AIEd) presents significant opportunities, enabling, for example, tailored learning experiences (Bahroun et al., 2023; Office of Educational Technology, 2023; Ruiz-Rojas et al., 2023) . Discussions on AIEd often touch upon personalized learning (e.g., Bahroun et al., 2023; Baidoo-Anu & Ansah, 2023; Bond et al., 2024; İpek et al., 2023; Luckin et al., 2016; Office of Educational Technology, 2023; Rudolph et al., 2023; Shum & Luckin, 2019; Zawacki-Richter et al., 2019; Zhai et al., 2021 ), which involves customizing content and instruction to suit the needs and preferences of learners (Atlas, 2023) . It is believed that AI’s ability to analyze and adapt content based on learner feedback and performance can lead to more effective and personalized experiences (Shum & Luckin, 2019) .

A representation of this is intelligent tutoring, which can be thought of as an innovative form of personalized learning and adaptive testing in which the technology provides customized, step-by-step guidance and real-time feedback personalized to each learner’s performance and needs (Atlas, 2023) . It is akin to having a personal tutor who can tailor their teaching to their student’s learning needs but is void of a direct human presence (Luckin et al., 2016) . As with personalized learning, intelligent tutoring is commonly discussed in the literature (e.g., Atlas, 2023; Bond et al., 2024; Luckin et al., 2016; Pokrivcakova, 2019; Rudolph et al., 2023; Stone et al., 2016; Zawacki-Richter et al., 2019; Zhai et al., 2021 ), with ample research that suggests intelligent tutoring systems enhance learning outcomes, whether used independently or in conjunction with traditional teaching methods (Weitekamp et al., 2020) .

Indeed, attempts at using intelligent computers in the design of instruction go back decades (Bond et al., 2024), with Luckin et al. (2016) claiming AIEd has been of interest for over 30 years. However, within the past decade, the potential societal impact of AI has garnered considerable attention (Cooper, 2023), indicating a growing interest in its broader implications. So, unsurprisingly, AI has been said to have far-reaching consequences in the fields of business (Atlas, 2023; Susnjak, 2022) and finance (Bahroun et al., 2023; Luckin & Cukurova, 2019), science and engineering (Atlas, 2023; Bahroun et al., 2023; Susnjak, 2022), the humanities (Atlas, 2023; Susnjak, 2022), healthcare (Atlas, 2023; Bahroun et al., 2023; Luckin & Cukurova, 2019), and law (Atlas, 2023). This widespread applicability underscores its potential, reflecting its versatility and transformative power.

2.5. Transforming Instructional Design with Generative Artificial Intelligence

The recent emergence of generative AI holds possibilities within education (Cooper, 2023; İpek et al., 2023; Zawacki-Richter et al., 2019) . The same has been said about other technologies throughout the 20th century (e.g., computers, film, radio, television; Cuban, 1986 ), and as with other advancements, the use of generative AIEd (Gimpel et al., 2023; İpek et al., 2023) is intensely debated (Atlas, 2023; Baidoo-Anu & Ansah, 2023; Gimpel et al., 2023; İpek et al., 2023; Rudolph et al., 2023) . Plagiarism, for instance, alongside other forms of cheating, is commonly discussed in the literature (e.g., Atlas, 2023; Gimpel et al., 2023; İpek et al., 2023; Office of Educational Technology, 2023; Rudolph et al., 2023; Susnjak, 2022 ). However, there is ample research suggesting that AIEd has far-reaching benefits.

Luckin and Cukurova (2019) pointed out that research demonstrating well-designed AI that works in education is increasing, alongside the growing argument that AIEd contexts have yet to be fully realized (Celik, 2023; Luckin et al., 2022) . Atlas (2023) detailed the versatile use of generative AIEd, encompassing automated essay scoring, personalized tutoring, classroom assistance, language translation, and the development of writing, research, and communication skills. Additionally, this technology can aid in creating syllabi, quizzes, exams, summaries, reports, and other research documents while also offering support in email and chatbot communication, meeting and event organization, campus tours, policy guidance, and report generation (Atlas, 2023) .

ChatGPT is a large language model (Gimpel et al., 2023; Susnjak, 2022) developed by OpenAI that can generate human-like text responses and provide natural language understanding in a conversational context (ChatGPT-4, personal communication, December 15, 2023). Generative AI uses large language models, a type of machine learning model, to produce text, images, and other content with minimal human involvement (Gimpel et al., 2023). GPT, short for Generative Pre-Trained Transformer (Gimpel et al., 2023; Tlili et al., 2023), is OpenAI’s acronym for large language models that undergo pre-training on publicly available data (Gimpel et al., 2023). OpenAI publicly released ChatGPT in November 2022, followed by ChatGPT-4 in March 2023 (Gimpel et al., 2023; İpek et al., 2023). Cooper (2023), citing Scharth (2022), explained that these models were developed using machine learning algorithms trained on comprehensive text datasets comprising books, news articles, and websites. (See Rudolph et al. (2023) for an explanation of ChatGPT and its history.)

ChatGPT’s popularity has been attributed to its ability to produce high-quality text, its user-friendliness, as well as its free access (Atlas, 2023) (for ChatGPT-3.5; Gimpel et al., 2023 ), enabling new users to experience the possibilities of text generation firsthand. Its popularity may also come from its ability to be incorporated into existing applications. Gimpel et al. (2023) , referencing Salz (2023) , explained that ChatGPT is seeing rapid integration into Microsoft 365, making it increasingly challenging to evade the influence of AI. Moreover, it is believed that AI will continue to impact the development of technological tools for education and training (Luckin & Cukurova, 2019) .

Tools like ChatGPT could, therefore, be viewed as disruptive in education (Rudolph et al., 2023). There is a concern that AI will assume roles performed by humans (Atlas, 2023; Office of Educational Technology, 2023). From a historical standpoint, though, technological changes have brought new roles, creating new opportunities (Ch’ng, 2023; Luckin et al., 2016). Gimpel et al. (2023) argued that ChatGPT and generative AI tools like it are supplementary; that is, they are used to enhance learning, teaching, and research (Atlas, 2023). Thus, while it is conceivable that AI may assume some tasks currently performed by humans, generative AI is more likely to aid in curriculum development and similar tasks rather than entirely replace instructional designers and educators (Atlas, 2023).

Generative AI can be used to free individuals from basic tasks, affording the time to focus on more advanced higher-order thinking (Ch’ng, 2023; İpek et al., 2023). Instructional designers can use ChatGPT to perform laborious tasks so they can concentrate on other aspects of ID, reducing course content turnaround time. Comparing traditional ID practices with AI-collaborative ones, Ch’ng (2023) offered that content is often written by subject matter experts (SMEs) who conduct time-consuming research. Complicating matters, instructional designers must often deal with a general lack of understanding when it comes to their function and role, particularly as it relates to collaborating with others, including these experts (Pollard & Kumar, 2022). This can create challenges because, at least in the case of SMEs, instructional designers are not necessarily authorities in an area (Pollard & Kumar, 2022); thus, their reliance on these experts is imperative. ChatGPT can be used to assist in identifying sources and rapidly summarizing articles (Atlas, 2023), including translating content from different languages to help SMEs as well as instructional designers enhance collaboration efforts. Content can be logically chunked and organized, creating outlines that instructional designers can flesh out. It can also be used as a copyeditor to assist in producing polished text (Gimpel et al., 2023). Then there is its ability to create introductions, objectives, main content topics, quizzes, assignments, and instructions (Ch’ng, 2023).

Ch’ng (2023) —continuing the comparison of traditional ID to AI collaborative practices, but in the context of video, audio, and images—explained that the generation of multimedia can be a time-consuming and costly process as the production of video with audio typically involves a studio and recording equipment. However, ChatGPT can assist with aspects of multimedia generation. For example, it can be used to generate transcripts (based on text input) for closed captioning. ChatGPT can also aid in developing scripts to be used in the later creation of voice-overs. As for images, instructional designers must find copyright-free (or purchase stock) images that match the content and objectives (Ch’ng, 2023) , create the illustrations themselves, or rely on artists and graphic designers. Although the copyright of images generated by AI is currently evolving (Ch’ng, 2023) , tools like DALL·E can be used to create drawings, illustrations, and related graphics.

Overall, generative AI holds the potential to undertake lengthy and complex tasks, thus playing a supportive role in the ID process. This idea is perhaps best conveyed by the findings of İpek et al.’s (2023) systematic review of ChatGPT, which highlighted the evolving capabilities of generative AI in academic and research contexts. Some of these included summarizing and creating abstracts of articles, finding literature resources, generating literature, translating and paraphrasing, generating exam answers, identifying learners’ needs early in the learning process, personalizing learning experiences, grading and assessment, conducting data analysis, and helping with studying (İpek et al., 2023) . (See İpek et al. (2023) for other capabilities and an extensive list of research published in the examination of ChatGPT, and Bahroun et al. (2023) for a comprehensive analysis of generative AIEd.)

3. Method

As with others who have documented their experiences with ChatGPT (e.g., Cooper, 2023; Susnjak, 2022; Tlili et al., 2023), this study explored the use of ChatGPT as a resource to assist instructional designers and educators in selecting the most suitable media for delivering instructional content. Different approaches can be found when studying ChatGPT. For example, Cooper (2023), citing Hamilton et al. (2009), used “a self-study methodology to investigate technology” (p. 445). Others have borrowed from qualitative approaches without calling out any specific methodology (e.g., Susnjak, 2022). In this investigation, an approach like that of Tlili et al. (2023) was adopted to understand ChatGPT’s suitability during media selection. Citing Yin (1984) and Stake (1995), Tlili et al. (2023) used a qualitative case study approach, specifically one benefiting from an instrumental case study research design. Tlili et al. (2023) posited that an instrumental research design is effective for comprehending phenomena within their specific contexts (Stake, 1995). This approach was pertinent to their analysis of ChatGPT, a relevance argued to extend to the current work.

In this specific research, the authors, in response to the media selection complexities and obstacles identified in the literature review, developed a media selection tool using ChatGPT. The authors used the tool to explore ChatGPT’s ability to choose suitable instructional delivery systems. These results were then compared to findings obtained through the authors’ performance of a traditional media selection process. Altogether, this inquiry sought to examine ChatGPT’s strengths and weaknesses as a decision-making tool for selecting media during ID. As far as is known (at the time of this writing), no empirical studies have been conducted and published on the application of ChatGPT in selecting media as part of the ID process.

3.1. Instruments

3.1.1. Media Selection Guide

As discussed, the process of media selection is influenced by several factors (Holden et al., 2010; Martin, 2011; Sugrue & Clark, 2000). Holden et al. (2010) noted that in choosing suitable media for distance learning, it is crucial to consider factors that could affect the preference for one medium over another. They offered examples of technologies, like satellite e-learning and real-time web-based instruction, for teaching strategies that necessitate live and interactive learning. Thus, among the various considerations are instructional strategies.

The instructional strategy for course delivery refers to presenting information in a way that supports learning and performance with various approaches, including blended or hybrid delivery and simulations (Martin, 2011) . Furthermore, certain aspects of strategies involve considerations like the sequence of presentation and level of interactivity (Martin, 2011) . Given the number of learning environments, it becomes crucial to evaluate which strategies are most suitable for various delivery options, understanding that strategies are most effective when tailored to address specific learning goals and objectives (Martin, 2011) . To put it differently, Martin (2011) introduced the notion of instructional alignment, which underscores the interconnectedness of various elements, contributing to the effectiveness of instructional material. Therefore, just as it is crucial to align the objective with information, examples, practice, feedback, review, and assessment, it is equally essential to align media and strategies with all these different elements (Martin, 2011) .

Considering these insights, along with other perspectives such as Ziagos’s (1991) assertion that the method, rather than the delivery medium, dictates learning effectiveness and Sugrue and Clark’s (2000) recommendation to match media with Bloom’s (1956) classification of learning tasks, the authors developed a media selection guide (MSG; see Appendix A). Moreover, given the argument that media selection is not an exact science (Anderson, 1983) , the guide was also based on the authors’ professional judgments and ID experience. Recognizing this balance is essential, as it pertains to works asserting that the criteria for selecting media depend, in part, on the subjective judgment of the instructional designer (e.g., Heidt, 1989 ). It was also predicated on questions found in scholarly discussions concerning the equilibrium instructional designers maintain between adhering to formal guidelines and relying on their experience and intuition within ID (Wilson & Jonassen, 1990) .

3.1.2. Media Selection Tool

ChatGPT-4 (OpenAI, 2024a) was used in this research. This is OpenAI’s most advanced model (at the time of this writing), which offers improved language generation, making it more adept at handling complex queries and producing more nuanced responses (ChatGPT-4, personal communication, January 15, 2024). It was also trained on a significantly larger dataset using more sophisticated algorithms, which are believed to enhance its accuracy and breadth of understanding across knowledge domains (ChatGPT-4, personal communication, January 15, 2024). Given its capability to consider relevant information during response generation, it can sustain a conversational flow (Gimpel et al., 2023) and, consequently, may allow for more coherent and contextually relevant conversations over more prolonged interactions (ChatGPT-4, personal communication, January 15, 2024). The ChatGPT-4 model is at the core of OpenAI’s advanced offerings, including GPTs (announced at DevDay in November 2023), which are instantiations that have been tailored for specific purposes (OpenAI, 2024b).

Using the GPTs feature, a GPT was created to assist in the media selection of specific tasks. A “knowledge source” file that included the content used to train and guide the model was uploaded (see Appendix B). The knowledge source was primarily the MSG, which encompassed specific language that worked best when training and testing the GPT. The beauty of this approach is that GPTs can be built by uploading files that comprise natural language (e.g., Adobe PDF, Microsoft Word); no coding or scripting is involved (OpenAI, 2024b). Furthermore, the tailored GPT was given explicit instructions on what to do (see Table 1). The outcome was a media selection tool (MST and “the GPT” hereafter) built upon the ChatGPT-4 model but with additional knowledge on the selection of suitable delivery systems in ID (see Figure 1).
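Although the MST was assembled entirely through OpenAI’s no-code GPT Builder, a roughly comparable assistant could be prototyped programmatically. The sketch below is illustrative only and is not the configuration used in this study; it assumes the OpenAI Python SDK and the Chat Completions endpoint, and the instruction text, guidance excerpt, and sample objective are hypothetical placeholders standing in for the Table 1 instructions and the knowledge source.

```python
# Illustrative sketch only (not the authors' implementation): approximating the MST by
# embedding MST-style instructions and an excerpt of the media selection guidance in a
# system prompt, using the OpenAI Python SDK's Chat Completions endpoint.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical stand-ins for the Table 1 instructions and the MSG knowledge source.
MST_INSTRUCTIONS = (
    "You are a media selection assistant for instructional design. For each enabling "
    "objective, identify the learning level, category, learning outcome, level of "
    "interactivity (LOI), primary and secondary instructional methods, a single most "
    "suitable instructional strategy, and a delivery system. Provide detailed yet "
    "easy-to-understand explanations and the rationale behind each selection."
)
MSG_EXCERPT = "[Excerpt of the media selection guide (MSG) would be included here.]"

def recommend_media(enabling_objective: str) -> str:
    """Return the model's media selection analysis for one enabling objective."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": MST_INSTRUCTIONS + "\n\n" + MSG_EXCERPT},
            {"role": "user", "content": enabling_objective},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical objective resembling the photography tasks in Table 2.
    print(recommend_media("Specify how aperture affects the exposure of a photograph."))
```

In contrast to this prompt-only approximation, the study’s GPT also drew on an uploaded knowledge source file, which the GPTs feature handles without any code.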

3.2. Procedure

3.2.1. Training the Media Selection Tool

The authors established an account with OpenAI and upgraded their plan to gain access to ChatGPT-4. The motivation for upgrading was to acquire the ability to use GPTs. After reviewing OpenAI’s (2024c) guide on prompt engineering—which emphasized strategies and techniques for optimizing interactions with large language models—the authors experimented with GPT Builder. The authors responded to the builder’s prompts to create the MST to understand specific instructional criteria.

Table 1. Instructions and settings used to configure the MST.

Figure 1. The MST interface (“By” line removed from the figure).

This led to multiple iterations of the GPT, with continuous refinements and adjustments, thereby enhancing the MST’s behavior. During refinement, the model’s performance was evaluated by the authors using the preview capabilities and continued until the model could effectively prompt and accurately understand the relationships defined in the knowledge source. The result was a GPT that comprehended interrelationships among several instructional elements.

Moreover, while the model received training through examples (in the knowledge source), this was by no means the extensive data typically found in a relational database used with traditional ID tools. Neither did it include comprehensive data involved in fine-tuning AI models. This was intentional, aiming to capitalize on the generative capabilities of AI. There was the concern that if the model was overly reliant on a comprehensive dataset, it might only reflect a singular school of thought. This could lead to a tool that merely mimics existing database-fed decision-tree ID systems rather than offering the novel, diverse perspectives and solutions enabled by AI’s generative potential. Finally, to ensure clarity in its decision-making process, the GPT was instructed to offer detailed explanations for its choices.

3.2.2. Conducting the Media Selection

To evaluate the decision-making process of the MST, the outputs and decisions produced were compared with those obtained using the MSG. First, the authors defined the terminal and enabling objectives and extracted the enabling objective behaviors as tasks to conduct the media selections.

The terminal objective represents the final aim of instruction, achieved through the cumulative fulfillment of specific, smaller enabling objectives and the execution of tasks that serve as the practical steps for accomplishing these broader goals. The objective has been asserted to be the most critical factor in selecting media for instruction (Holden et al., 2010) , whereas the task is equally important, given that it is the action through which the objective is achieved.

Table 2 shows the terminal and enabling objectives and corresponding tasks. To maintain simplicity, photography was chosen, concentrating on learning the fundamentals of exposure.

The authors then independently used the MSG to carry out their selections for the enabling objectives for each task. The authors identified the domain hierarchies, learning outcomes, level of interactivity (LOI), instructional methods, instructional strategies, and the chosen delivery system. They documented these instructional elements on worksheets. A total of 80 items (10 tasks by 8 elements) were assessed. Upon comparing their findings, the authors reached an agreement on 51 of the 80 items (64%). The authors deliberated on the remaining items until they reached an agreement, creating a worksheet for each task that included the final agreed-upon selections.
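For readers who wish to reproduce the inter-rater tally, the short sketch below is a hypothetical illustration (not the authors’ worksheet procedure) of how percent agreement over the 10-task by 8-element grid could be computed; the element labels mirror those reported in the Results, and the figure reported above works out to 51/80 = 0.6375, or roughly 64%.

```python
# Illustrative sketch with hypothetical data structures: computing the percent agreement
# between the two authors across 10 tasks x 8 instructional elements.
from typing import Dict

ELEMENTS = [
    "verb", "learning_level", "category", "learning_outcome",
    "loi", "instructional_method", "instructional_strategy", "delivery_system",
]

def percent_agreement(author_a: Dict[str, Dict[str, str]],
                      author_b: Dict[str, Dict[str, str]]) -> float:
    """Fraction of (task, element) cells where both authors made the same selection."""
    matches, total = 0, 0
    for task, selections_a in author_a.items():
        selections_b = author_b[task]
        for element in ELEMENTS:
            total += 1
            matches += int(selections_a[element] == selections_b[element])
    return matches / total

# Agreement on 51 of the 80 items, as reported above: 51 / 80 = 0.6375 (about 64%).
```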

With consensus in hand, one of the authors entered the enabling objectives for each task into the MST. The outcomes generated by the GPT for each instructional element were then documented by the author on the respective task’s worksheet. The author also added comments deemed essential or relevant about the GPT that departed from its expected behavior.

4. Results

Table 3 displays the media selections and corresponding instructional element decisions made using the MSG and MST. The table also shows the GPT-provided explanations (behind each instructional element selection) in italics. The findings revealed that, despite the authors and the GPT receiving equivalent guidance and training, respectively (albeit in different formats and wording), their choices diverged.

4.1. Task Interpretations and Delivery System Recommendations

The authors and the GPT concurred in their identification of the central verb in all tasks except for 5.1. The authors identified the verb “conduct,” whereas the GPT chose “evaluate,” though recognizing the task involved “conduct[ing] a portfolio review.” The authors saw the task as performing the review, while the GPT likely focused on judging the content’s quality, as reflected in their respective category selections of “adaptation” and “evaluation.” This resulted in varied choices for the remaining instructional elements, which, in the context of each analysis, were aligned with the guide and knowledge source contents.

The authors determined the learning outcome to be “cognitive strategies,” while the GPT decided upon “intellectual skills,” reasoning that the task “requires learners to critically assess and provide feedback on portfolios, a task that involves higher-order cognitive processes like analysis, synthesis, and evaluation.” Despite these different interpretations of the task, both agreed that a facilitator-led “small group” was the optimal instructional method, subsequently suggesting a “peer/pressure group” instructional strategy. This led to distinct yet similar methods of delivery. While the authors favored a classroom environment, they also suggested online breakout rooms, which corresponded with the GPT’s recommendation of a learning management system (LMS) coupled with a synchronous online platform (e.g., Microsoft Teams).

Table 2. The instructional objectives and tasks used in the media selections.

Table 3. Media selections and corresponding decisions in using the MSG and MST.

The GPT generated the italicized text.

Excluding 1.4, variations in the instructional elements were observed across all the other tasks. As shown in Table 3, tasks 1.1, 1.2, and 1.3 were dissimilar in learning level, category, learning outcome, LOI, and instructional method. Although “specify” was used for the verb, focusing on the cognitive (knowledge) domain, the authors viewed the task as learning “factual” information, whereas the GPT likely saw the matter as responding differently to specific, albeit similar, stimuli (i.e., aperture, shutter, and ISO). This thinking persisted in their subsequent choices, with the authors characterizing the outcome as “verbal” with an LOI of 2, designating limited learner involvement, and the GPT viewing the matter as an “intellectual skill” necessitating a higher order of participation (LOI 3). Interestingly, even while deviating in their analyses, both opted for a “traditional” instructional strategy and proposed distinct yet analogous delivery systems. The authors advised self-study via physical or digital textbook, while the GPT suggested an interactive e-learning module or a digital text with integrated tutorials.

Not all the analyses resulted in differences across the same instructional elements. For 2.1, the authors and the GPT agreed upon the learning outcome and LOI but diverged regarding learning level, category, instructional method, and instructional strategy. Tasks 3.1, 3.2, and 3.3 revealed agreement on all the elements except learning outcome, instructional method, and instructional strategy. Task 4.1, meanwhile, showed a match only on LOI and strategy. These differences do not suggest the analyses were flawed, but rather, the authors and GPT viewed the tasks differently, as was described above with tasks 1.1, 1.2, 1.3, and 5.1.

Reinforcing this point with another example, the authors and the GPT viewed tasks 3.1, 3.2, and 3.3 as a skill comprising a “complex overt response (mechanism),” which the GPT summarized as “performing complex and coordinated actions, such as manipulating camera settings (shutter speed) to achieve specific photographic effects.” However, the authors noted a learning outcome of “cognitive strategies,” whereas the GPT denoted a “motor skill.” That is, the authors saw a cognitive task, basing their conclusion, in part, on the work of Gagné (1985) , who characterized “cognitive strategy” as an internal regulatory process, allowing learners to choose and adjust their methods of focusing, acquiring knowledge, retaining information, and reasoning, as well as the insights of Bruner et al. (1956) on the operation and usefulness of cognitive strategies in problem-solving. Conversely, the GPT deduced a physical task, rationalizing that “it focuses on developing the physical ability to adjust camera settings effectively to achieve different effects in photographs.”

The most interesting of these findings is that, as with 1.1 through 1.4 and 5.1, the remaining tasks revealed different yet compatible delivery systems. In tasks 3.1, 3.2, and 3.3, the authors and the GPT advocated using the camera, albeit presumably influenced by the enabling objective. Additionally, the GPT recommended using AR and VR as auxiliary technologies, reasoning that they could assist in “demonstrating the effects of different aperture settings in various lighting conditions without the need for immediate access to indoor and outdoor settings.” The exception in delivery system selection was task 2.1. Even though the choices made sense in the context of their respective analysis, the authors proposed a traditional classroom or online breakout room, while the GPT recommended interactive multimedia instruction (IMI).

4.2. Straying from the Prescribed Guidance

Although the selections did not always align between the authors and the GPT, as presented, they were aligned within their respective individual analyses. Nevertheless, it is important to note that the authors and the GPT did go “off script,” which may help explain some of their choices beyond their initial interpretation of each task. It is recognized that the media selection approach provided to the authors and the GPT served as general guidance. Furthermore, the contents of the MSG and knowledge source were in no way exhaustive; they only included a few examples. Therefore, a certain level of deviation was anticipated.

Based on this, the selection of a combination of instructional methods, instructional strategies, and delivery systems showed a departure from the prescribed guidelines. For tasks 1.1, 1.2, and 1.3, the authors noted “assigned reading” as the method, “traditional” as the strategy, and “self-study” as the delivery system. Though not explicitly one of the methods in the MSG, the authors based their “assigned reading” decision on guidance from the MIL-HDBK-29612-2A (DoD, 2001) , which underpinned the MSG. Moreover, “assigned reading” is contextualized in the MIL-HDBK within lecture and traditional classroom learning, which influenced the authors’ decision to adopt a “traditional” strategy. Meanwhile, “self-study” via textbook was based on the instructional elements alongside the authors’ experiences.

For these tasks, the GPT also departed from its training, choosing a “reference-based” instructional method, which it explained encompassed “utilizing the text as a primary source of information.” In the knowledge source, “reference-based” is not linked to “intellectual skills”; however, it is connected to the “traditional” instructional strategy. Furthermore, the choice of employing an interactive e-learning module or digitally enhanced text with tutorials was comparable with the authors’ preference for a digital textbook. Thus, the GPT’s analysis is arguably aligned with the authors, albeit departing from the examples in its method selection, but presumably a result of its exposure to the enabling objective.

Of the remaining tasks, the authors diverged from the MSG in their choice of instructional method and instructional strategy for tasks 3.1, 3.2, and 3.3, while the GPT departed from its training regarding the method and strategy decisions of 4.1. For tasks 3.1 through 3.3, the authors settled on “practical application” for the method and “traditional” as the strategy. Although not explicitly among the examples, the authors decided upon the method of “practical application” because they viewed the tasks in the context of individuals performing them in specific environments (i.e., indoor and outdoor settings) that required camera configurations unique to each situation. The authors reasoned that trial-and-error learning would best support meeting the photo portfolio requirement. Otherwise, a small-group method could have been selected with the same element alignment, but it might allow an individual to lean heavily on teammates instead of putting in the individualized effort to learn and practice.

As for task 4.1, the GPT noted an instructional method of “project-based learning” and the instructional strategy, “Exercise, Experiential, or Experimental (E3)”. “Project-based learning” was also not part of the training examples but was instead derived by the GPT, demonstrating its heuristic capability to make independent choices. It could also be said that—although project-based learning has been described in the context of ID—its selection might have been influenced by the task’s language, which referenced the use of a portfolio. This rationale was further articulated by the GPT, which noted that “project-based learning is particularly relevant as it involves completing a real-world task (here, the creation of a photo portfolio) that allows learners to apply and demonstrate their understanding in a practical context.” Finally, its decision to adopt an “E3” strategy was based on the rationale that the task “emphasizes learning through doing and experimenting, which is essential for understanding various photographic techniques.”

4.3. Unpredictability and Variation in Responses

Finally, using the MST revealed additional observations beyond those previously mentioned. The GPT consistently selected instructional elements across tasks 1.1, 1.2, and 1.3, as well as 3.1, 3.2, and 3.3. However, for tasks 1.1 through 1.4, a secondary instructional strategy aligned with the secondary instructional method was chosen. This behavior strayed from its training, as the model was directed to “choose a primary and secondary instructional method” and then “emphasize a single strategy as the most suitable.” Furthermore, for tasks 1.1 through 1.4, the GPT offered a rationale for the chosen secondary strategies. While consistent in meaning, the language varied, demonstrating its non-deterministic behavior. This means ChatGPT may produce diverse responses to identical prompts (ChatGPT-4, personal communication, January 22, 2024).

Building upon ChatGPT’s non-deterministic nature, the GPT also exhibited varying levels of detail in explaining its selections. That is, despite instructions to “[provide] detailed yet easy-to-understand explanations” and “[offer] rationale behind selections,” the GPT was inconsistent in its verbosity when explaining its decision-making rationale. For instance, it provided detailed explanations for its choices in tasks 3.1 and 3.2 but offered much less detail for 3.3. Altogether, the GPT’s performance indicates a tendency towards variability and unpredictability, not only in its strategic selections but also in the depth of its explanations.
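To make this behavior concrete, the following minimal Python sketch probes response variability by submitting the identical prompt several times through the OpenAI chat completions API. It is offered purely as an illustration under stated assumptions: the OpenAI Python SDK (v1.x) and an API key are available, and the model name, instruction text, task wording, and temperature value are placeholders rather than the configuration used to build the MST.

```python
# Illustrative sketch: sending the same prompt repeatedly to inspect non-deterministic output.
# Assumptions: OpenAI Python SDK (v1.x) installed and OPENAI_API_KEY set in the environment;
# the model name, instructions, task text, and temperature are placeholders only.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "You are a media selection assistant. Recommend a suitable delivery system for the "
    "given instructional task and briefly explain the rationale behind your selection."
)
TASK = "Adjust the camera's aperture settings for indoor and outdoor lighting conditions."

responses = []
for run in range(3):
    completion = client.chat.completions.create(
        model="gpt-4",        # placeholder model name
        temperature=1.0,      # any temperature above 0 permits varied wording
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": TASK},
        ],
    )
    responses.append(completion.choices[0].message.content)

# Print each run with a rough word count so differences in verbosity are easy to spot.
for i, text in enumerate(responses, start=1):
    print(f"--- Run {i} ({len(text.split())} words) ---\n{text}\n")
```

Even with identical inputs, the wording and level of detail typically differ from run to run, mirroring the uneven explanations observed across tasks 3.1 through 3.3.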

5. Discussion

5.1. Strengths as an Assistant in Media Selection

5.1.1. User-Friendliness (With Inherent Complexities)

One key reason ChatGPT was selected for this study is its user-friendliness (Atlas, 2023) . Specifically, GPTs can be created with straightforward instructions and knowledge sources that contain the training content. This represents a significant shift from traditional AI models, which typically necessitate exposure to hundreds, thousands, or even millions of data points. The GPTs feature has unlocked opportunities to explore generative AI’s potential in a way that requires no coding or scripting expertise beyond a basic understanding of prompt engineering practices.
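For readers who wish to reproduce comparable behavior outside the no-code GPT builder, the following minimal Python sketch shows how written instructions and a plain-text knowledge excerpt might be combined into a system message for the OpenAI chat completions API. This is not the MST itself; the file name, instruction wording, and model name are illustrative assumptions only.

```python
# Illustrative sketch: a programmatic analogue of a custom GPT, pairing written instructions
# with a knowledge-source excerpt. Assumptions: OpenAI Python SDK (v1.x), OPENAI_API_KEY set;
# "knowledge_source.txt", the instruction text, and the model name are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "You are a media selection assistant. Communicate in a plain, accessible, and "
    "conversational tone, provide detailed yet easy-to-understand explanations, and "
    "offer the rationale behind your selections."
)

# Plain-text excerpt standing in for the uploaded knowledge source (cf. Appendix B).
knowledge = Path("knowledge_source.txt").read_text(encoding="utf-8")

def recommend_delivery_system(task: str) -> str:
    """Request a delivery-system recommendation for a single instructional task."""
    completion = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": f"{INSTRUCTIONS}\n\nKnowledge source:\n{knowledge}"},
            {"role": "user", "content": task},
        ],
    )
    return completion.choices[0].message.content

print(recommend_delivery_system(
    "Adjust the camera's aperture settings for indoor and outdoor lighting conditions."
))
```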

This was one of the advantages found in this study, as the authors realized the MST could be developed using the “fail fast and often” philosophy associated with agile software development. The authors produced iterations of the GPT rapidly, making minor changes until testing revealed the desired behavior. However, even with this ease of use, considerable refinement was needed during development, and the authors acknowledge that further fine-tuning will be required in its ongoing use. For instance, in tasks 1.1 through 1.4, the GPT chose secondary instructional strategies based on the secondary instructional method, whereas, for later tasks, it adhered to its training and provided only one strategy. This occurred despite the desired behavior having been observed during iterative development and testing, signifying that constant attention is required when using these models.

Moreover, even though only a basic understanding of prompt engineering is needed to build a GPT, proficiency in this practice is crucial to communicating effectively with generative AI. Namely, the ability to attain high-quality output relies heavily on the skill of formulating suitable prompts (Gimpel et al., 2023) . The authors observed, for instance, that minor adjustments in the phrasing of the instructions led to noticeable changes in the GPT’s behavior.

5.1.2. A Personalized Experience

Another identified benefit is ChatGPT’s capacity to interact engagingly and realistically. While ChatGPT’s ability to participate in natural communication has been documented in the literature (e.g., Atlas, 2023; Baidoo-Anu & Ansah, 2023; İpek et al., 2023; Susnjak, 2022 )—and was a factor in its selection for this research—this capability represents a significant shift from traditional ID systems. Instructional designers typically engage with these systems by inputting defined data (such as action verbs) into text fields, choosing options from dropdown menus, and utilizing other interface elements, with the outcomes often displayed in a report-like format using statistical data descriptions, tables, and graphs. In this study, the GPT demonstrated its capacity to suggest delivery systems promptly and efficiently without using such interfaces, drop-downs, or lock-step procedures. More importantly, the GPT was able to articulate insights and reasoning in a format that was conversational and easily digestible, presumably a result of its instructions to “communicate in a plain, accessible, and conversational tone, providing detailed yet easy-to-understand explanations.” From this, the authors suggest that its interactivity can be personalized to align with the user’s proficiency level in ID.

5.1.3. An Interactive Experience

Furthermore, although the MST was designed to recommend delivery systems with only the task as input, the authors recognized that the GPT could have been trained to lead instructional designers through the media selection process incrementally. For instance, instructional designers might be prompted to specify the pertinent study domain, such as business and finance, education and teaching, or hospitality and tourism, while also selecting the most effective instructional methods and strategies. Instructional designers might also be offered several delivery systems to choose from and delve into those systems more deeply with ChatGPT. In other words, instructional designers are not restricted to the interfaces or methods commonly associated with existing ID systems but instead empowered to actively participate in the decision-making process and engage through open exchanges.

5.1.4. Variability and Adaptability in Response

The GPT’s engaging nature was showcased, in part, by its use of diverse language and by the inconsistency in its responses and verbosity, highlighting its capacity for unpredictability. This poses challenges for standardization and consistency, which can be problematic if uniformity is desired. However, ChatGPT’s inherent non-deterministic behavior could be beneficial in media selection (as well as other aspects of ID) by providing instructional designers with ways to apply newer technologies not formally included in existing ISD resources or models.

For the same instructional task, for instance, the MST might suggest different delivery systems. This variability could be a benefit as it introduces a range of potential solutions, offering opportunities to explore a variety of media options not otherwise considered. This could also be viewed from a creative perspective. Although ChatGPT is said to fall short in this area (e.g., Ch’ng, 2023 ), it could present unique combinations of media, potentially leading to the innovative pairing of content and delivery options that help uncover more effective ways to achieve learning outcomes.

This also means that GPT responses can adapt over time, even without explicit updates to their knowledge source (ChatGPT-4, personal communication, February 24, 2024). As it encounters a broader array of queries and instructional scenarios, its internal models adjust, potentially leading to improved recommendations as it “learns” from the diversity of interactions. This might encourage users to refine their prompts, experiment with the wording of instructional objectives, or adjust their ID parameters to explore a broader range of delivery system options. In turn, this can help stimulate users to critically assess delivery system choices for their specific instructional goals, enhancing their understanding of how different media might be leveraged to achieve desired learning outcomes.

5.1.5. Far-Reaching Autonomous Decision-Making

Finally, it was discerned that, for task 4.1, the GPT identified “project-based learning” as the instructional method despite it not being included in the training material, thereby highlighting its heuristic ability to make autonomous decisions. This was likely a result of its ability to perform pattern recognition and text analysis over training data extending through April 2023 (ChatGPT-4, personal communication, January 22, 2024). It also underscores the apprehension that ChatGPT’s inferential capabilities are confined to its training dataset, a limitation that would notably hamper its proficiency in recognizing the most recent developments in media alongside emergent perspectives on ID and technology. However, notwithstanding an earlier inability to access the Internet (Gimpel et al., 2023; İpek et al., 2023) , ChatGPT-4 can now interface with the Bing search engine (ChatGPT-4, personal communication, January 22, 2024), which is why the authors instructed the GPT to recommend suitable delivery systems “based on the selected instructional elements, the examples provided” and “searching the Internet.” Consequently, the GPT’s decision to use “project-based learning” for task 4.1 and its recommendation of synchronous online platforms like Zoom or Microsoft Teams for task 5.1 might be attributed to its internet access, carrying far-reaching implications.

5.2. Limitations and the Importance of Human Insight

5.2.1. An Inability for Nuanced Analysis

Conversely, the GPT’s identification of “project-based learning” (for task 4.1) and other instructional method choices, such as “reference-based” (for tasks 1.1, 1.2, and 1.3), also demonstrated its willingness to go “off script.” Granted, the authors themselves strayed from the MSG in their selections, which were influenced not only by their exposure to other sources but also by their own experiences and intuition. Even within their analyses, the authors initially reached a consensus of only 64%. This highlights the inherent subjectivity of these decisions while speaking to the challenges found within media selection.

However, the GPT’s choices were ultimately based on pattern recognition and data synthesis rather than a profound understanding. Reevaluating the GPT’s selection of “project-based learning” as the instructional method, one might argue that it more aptly serves as an instructional strategy. As a method, it is the means through which learners engage in the learning process, fostering hands-on experience. As a strategy, its intended learning outcomes include critical thinking and collaborative skills. In truth, both perspectives hold merit and complement each other, with the distinction resting on which specific aspects of the learning are emphasized.

However, as İpek et al. (2023) explained, ChatGPT may lack the capability to identify critical nuances or uncover patterns of knowledge in the literature as effectively as an experienced human observer (Rudolph et al., 2023) . Artificial intelligence excels at managing vast amounts of data, executing intricate calculations, and conducting real-time analyses, and İpek et al. (2023) likewise acknowledged that ChatGPT enables researchers and students to conduct extensive literature reviews swiftly, streamlining the process of discerning the breadth and limitations of the literature on a specific research topic. Where it falls significantly short is in areas of creative contribution, critical thinking, emotional intelligence, and contextual understanding (Ch’ng, 2023) . Consequently, in the context of task 4.1, its ability to differentiate between project-based learning as a hands-on instructional method and as a strategic approach to achieving specific cognitive and collaborative goals may be limited, underscoring the importance of human insight.

5.2.2. Approach with a Cautious Mindset

This raises a major criticism of ChatGPT: it might be viewed as a reliable source, even though it has been accused of often lacking evidence to substantiate its replies (Cooper, 2023) . As highlighted by others (e.g., Atlas, 2023; Gimpel et al., 2023; İpek et al., 2023; Rudolph et al., 2023; Tlili et al., 2023 ), there is the concern that, despite ChatGPT’s responses seeming credible, they may not always be factually correct (OpenAI, 2023a) . That is, ChatGPT can “hallucinate” (Gimpel et al., 2023: p. 15; Rudolph et al., 2023) . As an example, Gimpel et al. (2023) noted that although academic references might appear correct, ChatGPT often generates citations that are stitched together from authentic sources or are entirely fabricated (Cooper, 2023) . The authenticity of cited academic works can be confirmed by locating the sources (including asking ChatGPT to provide the unique digital object identifier). However, the more pressing matter is the difficulty of recognizing made-up responses in the first place.
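One practical safeguard, consistent with the suggestion above, is to check programmatically whether a model-supplied identifier resolves at all. The brief Python sketch below queries the public Crossref API for a candidate DOI; the DOI value shown is hypothetical, and a successful lookup only confirms registration, not that the cited title and authors match the claim.

```python
# Illustrative sketch: cross-checking a model-supplied citation against the Crossref API.
# Assumptions: the `requests` library is installed; the DOI below is a hypothetical
# placeholder for whatever identifier ChatGPT supplies.
import requests

def doi_is_registered(doi: str) -> bool:
    """Return True if Crossref has a record for the DOI, False otherwise."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

candidate_doi = "10.1234/placeholder.2024.001"  # hypothetical value returned by ChatGPT
if doi_is_registered(candidate_doi):
    print("DOI is registered; still verify that the title and authors match the citation.")
else:
    print("DOI not found in Crossref; treat the citation as potentially fabricated.")
```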

To tackle this, it has been recommended that users possess a thorough understanding of the subject matter so they can differentiate between factual and inaccurate information (Gimpel et al., 2023) . This means that output produced by ChatGPT should be cross-checked for accuracy (Gimpel et al., 2023) . Consequently, users are advised to approach ChatGPT and similar generative AI technologies cautiously, treating them as auxiliary rather than conclusive resources. On a more positive note, however, it was discussed that the non-deterministic nature of ChatGPT could enrich the ID process by introducing variability and adaptability into media selection, encouraging users to think critically about their media choices and to adopt innovative solutions. The MST (and similarly developed ID aids), for example, could be advantageous in offering instructional designers alternative perspectives they might not have considered. However, these perspectives should be treated as preliminary at best and thoroughly evaluated in relation to the details of the effort. That is, in addition to adaptability, ChatGPT’s non-deterministic behavior also shows a tendency towards variability and unpredictability; thus, a mindful approach is needed to ensure alignment with the goals of each effort.

5.2.3. Critically Evaluate Outputs, Inputs, and Training Data

Finally, there is the matter of bias. Although ChatGPT may provide diverse viewpoints, it is vital to recognize that the training given to the GPT in this work likely influenced its choices. The topic of discrimination is well discussed in AI (Baker & Smith, 2019; Baidoo-Anu & Ansah, 2023; Office of Educational Technology, 2023) , with OpenAI (2023b) acknowledging that ChatGPT could exhibit a predisposition toward content reflecting Western perspectives and individuals. It has been argued that ChatGPT has the potential to perpetuate societal biases and discrimination because it is trained on vast amounts of data, and if those data contain biases, the large language model may reflect them (Atlas, 2023; Celik, 2023; İpek et al., 2023; Tlili et al., 2023) .

The GPT created for this investigation did not display harmful biases toward any group of people; it was trained on an approach to media selection derived from various ID models, frameworks, and taxonomies. However, the extent to which the examples included in its training impacted its choices of delivery systems remains unclear. This uncertainty stems, in part, from the assertion that the quantity of provided examples tends to correlate with the nature of the output (Rudolph et al., 2023) . Therefore, while the GPT demonstrated neutrality toward individuals, its training on diverse ID models, alongside the examples used, may have subtly influenced its media selection decisions, underscoring the need for careful evaluation (Office of Educational Technology, 2023) of not only its outputs but also its inputs (i.e., prompts) and the potential biases of those responsible for training these models.

6. Limitations, Future Research, and Conclusion

This study, like any investigation, has its constraints. As discussed, the training of the model did not encompass the extensive data found in the relational databases typical of traditional ID tools, nor did it involve the comprehensive data often used to fine-tune AI models for specialized purposes. Although this was by design, to capitalize on the generative capabilities of AI, it nevertheless means that the GPT’s training was not as extensive as it could have been, potentially affecting the findings. Furthermore, the media selection process developed for this study reflects just one approach among the many that instructional designers adopt, each with its unique models, frameworks, taxonomies, and influential factors. Thus, the specificity of the instructions and data used to guide the authors and train the GPT raises questions about whether different approaches would have yielded significantly different findings, indicating the need for further research in this area.

Altogether, the findings of this study underscore the potential advantages of using generative AI in ID. The GPT created for this study exemplified how such tools can expedite analyses, thereby saving time and offering instructional designers alternative viewpoints, enhancing the depth and breadth of ID tasks. Although generative AI excels in pattern recognition and decision-making (through data analysis), it lacks the human understanding and experience to make detailed judgments. Care is therefore advised in using such tools (Office of Educational Technology, 2023) , considering them supplementary (Gimpel et al., 2023) , and emphasizing the importance of a thorough evaluation of their outputs, inputs, and potential biases in training data.

This study reinforces the notion that, while generative AI faces challenges and limitations, it also opens doors to innovative approaches and techniques in ID, making it an asset for instructional designers and educators in the field. The findings herein are, therefore, anticipated to resonate beyond the context of this research, contributing to broader discussions in ID and the development of generative AI. Specifically, the findings serve as a preliminary exploration, encouraging discussions about the integration and implications of tools like ChatGPT. Further research is urged, and communities of practice are encouraged to delve deeper into the capabilities and potential of generative AI in ID.

Appendix A. Media Selection Guide

Appendix B. Knowledge Source

Begin by inquiring about the user’s task to isolate the central verb. Use the verb to classify the task as either knowledge-, skill-, or attitude-based. Then, use the verb to determine the specific learning level category it represents (see Tables B1-B3 for examples).

Table B1. Cognitive (Knowledge) Domain.

Table B2. Psychomotor (Skill) Domain.

Table B3. Affective (Attitude) Domain.

Discern the most appropriate type of learning outcome that aligns with the nature of the task. For each learning level category, there is a corresponding learning outcome (see Table B4 for examples).

Table B4. Category of learning level and learning outcomes.

The four levels of interactivity (LOI) reference the domain hierarchy and type of action (verb) being performed. A single domain can cross over to several LOIs (see Table B5 for examples).

Table B5. Domain hierarchy and levels of interactivity.

Based on the learning outcome, determine the most effective instructional method(s), focusing on those that best suit the task (see Table B6 for examples). Then, choose a primary and secondary instructional method. Ensure that choices are made based on the task’s inherent requirements rather than conforming to existing or preconceived learning environments.

Table B6. Learning outcomes and instructional methods.

Multiple instructional strategies might be suitable for implementing a particular instructional method. If so, enumerate all viable instructional strategies but emphasize a single strategy as the most suitable (see Table B7 for examples).

Table B7. Instructional methods and instructional strategies.

Finally, based on the selected instructional elements, the examples provided in Table B8, and searching the Internet, recommend suitable delivery systems.

Table B8. Domain hierarchy, learning outcome, and delivery system.

Provide a summary of the selected instructional elements and the recommended delivery system(s).
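To illustrate how the steps above could be operationalized, the following minimal Python sketch walks a single task verb through the sequence of classifications described in this appendix. The mappings are illustrative stand-ins only; actual use would populate them from Tables B1 through B8, and the final step (consulting Table B8 and searching the Internet) is noted but not implemented.

```python
# Illustrative sketch of the Appendix B flow: verb -> domain and learning level ->
# learning outcome -> instructional methods -> instructional strategy.
# All mappings below are placeholders, not the article's actual table contents.
VERB_TO_DOMAIN_LEVEL = {
    "describe": ("knowledge", "comprehension"),   # cf. Table B1 (cognitive domain)
    "adjust":   ("skill", "guided response"),     # cf. Table B2 (psychomotor domain)
}
LEVEL_TO_OUTCOME = {                              # cf. Table B4
    "comprehension":   "verbal information",
    "guided response": "motor skills",
}
OUTCOME_TO_METHODS = {                            # cf. Table B6
    "verbal information": ["lecture", "reference-based"],
    "motor skills":       ["demonstration", "practical application"],
}
METHOD_TO_STRATEGIES = {                          # cf. Table B7
    "lecture":               ["traditional"],
    "reference-based":       ["traditional"],
    "demonstration":         ["traditional", "E3"],
    "practical application": ["traditional", "E3"],
}

def select_elements(verb: str) -> dict:
    """Walk one task verb through the Appendix B steps and return the chosen elements."""
    domain, level = VERB_TO_DOMAIN_LEVEL[verb]
    outcome = LEVEL_TO_OUTCOME[level]
    primary, secondary = OUTCOME_TO_METHODS[outcome][:2]
    strategy = METHOD_TO_STRATEGIES[primary][0]   # emphasize a single strategy
    # The final step (Table B8 plus an Internet search) would map these elements
    # to one or more recommended delivery systems.
    return {
        "domain": domain,
        "learning level": level,
        "learning outcome": outcome,
        "primary method": primary,
        "secondary method": secondary,
        "strategy": strategy,
    }

print(select_elements("adjust"))
```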

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Anderson, J. R. (1983). The Architecture of Cognition. Harvard University Press.
[2] Association for Educational Communications and Technology (AECT) (2023). AECT in the 20th Century: A Brief History.
https://www.aect.org/aect_in_the_20th_century_a_br.php
[3] Atlas, S. (2023). ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI. College of Business Faculty Publications.
https://digitalcommons.uri.edu/cba_facpubs/548
[4] Bahroun, Z., Anane, C., Ahmed, V., & Zacca, A. (2023). Transforming Education: A Comprehensive Review of Generative Artificial Intelligence in Educational Settings through Bibliometric and Content Analysis. Sustainability, 15, Article 12983.
https://doi.org/10.3390/su151712983
[5] Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Journal of AI, 7, 52-62.
https://doi.org/10.61969/jai.1337500
[6] Baker, T., & Smith, L. (2019). Educ-AI-tion Rebooted? Exploring the Future of Artificial Intelligence in Schools and Colleges. Nesta Foundation.
https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf
[7] Bloom, B. S. (1956). Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. David McKay.
[8] Bond, J., & Dirkin, K. (2020). What Models Are Instructional Designers Using Today? The Journal of Applied Instructional Design, 9, 125-138.
https://doi.org/10.51869/92jbkd
[9] Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration, and Rigour. International Journal of Educational Technology in Higher Education, 21, Article 4.
https://doi.org/10.1186/s41239-023-00436-z
[10] Braby, R. (1973). An Evaluation of Ten Techniques for Choosing Instructional Media (NAVTRAEQUIPCEN TAEG Report No. 8). Naval Training Equipment Center, Department of the Navy.
https://apps.dtic.mil/sti/tr/pdf/AD0773456.pdf
[11] Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A Study of Thinking. Wiley.
https://doi.org/10.2307/1292061
[12] Celik, I. (2023). Towards Intelligent-TPACK: An Empirical Study on Teachers’ Professional Knowledge to Ethically Integrate Artificial Intelligence (AI)-Based Tools Into Education. Computers in Human Behavior, 138, Article 107468.
https://doi.org/10.1016/j.chb.2022.107468
[13] Ch’ng, L. K. (2023). How AI Makes Its Mark on Instructional Design. Asian Journal of Distance Education, 18, 32-41.
https://doi.org/10.5281/zenodo.8188576
[14] Chen, L., Chen, P., & Lin, Z. (2020). Artificial Intelligence in Education: A Review. IEEE Access, 8, 75264-75278.
https://doi.org/10.1109/ACCESS.2020.2988510
[15] Clark, D. (2015). Bloom’s Taxonomy of Learning Domains. A Big Dog, Little Dog and Knowledge Jump Production.
http://www.nwlink.com/~donclark/hrd/bloom.html
[16] Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research, 53, 445-459.
https://doi.org/10.3102/00346543053004445
[17] Clark, R. E. (1994). Media Will Never Influence Learning. Educational Technology Research and Development, 42, 21-29.
https://doi.org/10.1007/BF02299088
[18] Clark, R. E., & Salomon, G. (1986). Media in Teaching. In M. Wittrock (Ed.), Handbook of Research on Teaching (3rd ed., pp. 464-478). Macmillan.
[19] Clark, R. E., & Sugrue, B. M. (1991). Research on Instructional Media, 1978-1988. In G. J. Anglin (Ed.), Instructional Technology: Past, Present, and Future (pp. 327-343). Libraries Unlimited.
[20] Cooper, G. (2023). Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. Journal of Science Education and Technology, 32, 444-452.
https://doi.org/10.1007/s10956-023-10039-y
[21] Cuban, L. (1986). Teachers and Machines: The Classroom Use of Technology Since 1920. Teachers College Press.
[22] Das, S. (2024). Levels of Interactivity in ELearning. eLearning Industry.
https://elearningindustry.com/levels-of-interactivity-elearning-modules
[23] Department of Defense (DoD) (2001). Department of Defense Handbook. Instructional Systems Development/Systems Approach to Training and Education (Part 2 of 5 Parts) (MIL-HDBK-29612-2A, Notice 3). Department of Defense.
http://everyspec.com/MIL-HDBK/MIL-HDBK-9000-and-Up/MIL-HDBK-29612_2A_24724/
[24] Dick, W. (1987). A History of Instructional Design and Its Impact on Educational Psychology. In J. Glover, & R. Roning (Eds.), Historical Foundations of Educational Psychology (pp. 183-202). Plenum.
https://doi.org/10.1007/978-1-4899-3620-2_10
[25] Dick, W. (1996). The Dick and Carey Model: Will It Survive the Decade? Educational Technology Research and Development, 44, 55-63.
https://doi.org/10.1007/BF02300425
[26] Dick, W., & Carey, L. (1990). The Systematic Design of Instruction (3rd ed.). Scott Foresman & Co.
[27] Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., Cui, C., Corrado, G., Thrun, S., & Dean, J. (2019). A Guide to Deep Learning in Healthcare. Nature Medicine, 25, 24-29.
https://doi.org/10.1038/s41591-018-0316-z
[28] Frerejean, J., Van Merriënboer, J. J. G., Kirschner, P. A., Roex, A., Aertgeerts, B., & Marcellis, M. (2019). Designing Instruction for Complex Learning: 4C/ID in Higher Education. European Journal of Education, 54, 513-524.
https://doi.org/10.1111/ejed.12363
[29] Funaro, J., & Mulligan, B. E. (1978). Instructional Systems Design: The NAVAIR/NAVTRAEQUIPCEN Model (Report No. NAVTRAEQUIPCEN IH-304). Naval Training Equipment Center, Department of the Navy.
https://apps.dtic.mil/sti/tr/pdf/ADA060459.pdf
[30] Gagné, R. M. (1965). The Conditions of Learning (1st ed.). Holt, Rinehart & Winston.
[31] Gagné, R. M. (1985). The Conditions of Learning (4th ed.). Holt, Rinehart & Winston.
[32] Gagné, R. M., Briggs, L. J., & Wagner, W. W. (1992). Principles of Instructional Design (4th ed.). Harcourt Brace Jovanovich.
[33] Gagné, R. M., Reiser, R. A., & Larsen, J. Y. (1981). A Learning-Based Model for Media Selection: Description (Research Product 81-25a). US Army Research Institute for the Behavioral and Social Sciences.
https://apps.dtic.mil/sti/tr/pdf/ADA109472.pdf
[34] Gameil, A. A., & Al-Abdullatif, A. M. (2023). Using Digital Learning Platforms to Enhance the Instructional Design Competencies and Learning Engagement of Preservice Teachers. Educational Sciences, 13, Article 334.
https://doi.org/10.3390/educsci13040334
[35] Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Mädche, A., Röglinger, M., Ruiner, C., Schoch, M., Schoop, M., Urbach, N., & Vandirk, S. (2023). Unlocking the Power of Generative AI Models and Systems Such as GPT-4 and ChatGPT for Higher Education: A Guide for Students and Lecturers (Hohenheim Discussion Papers in Business, Economics and Social Sciences, No. 02-2023). Universität Hohenheim, Fakultät Wirtschafts-und Sozialwissenschaften.
https://nbn-resolving.de/urn:nbn:de:bsz:100-opus-21463
[36] Gustafson, K. L., & Branch, R. M. (1997). Survey of Instructional Development Models (3rd ed.). ERIC Clearinghouse on Information & Technology.
[37] Hamilton, M. L., Smith, L., & Worthington, K. (2009). Fitting the Methodology with the Research: An Exploration of Narrative, Self-Study and Auto-Ethnography. Studying Teacher Education, 4, 17-28.
https://doi.org/10.1080/17425960801976321
[38] Hawkridge, D. G. (1973). Media Taxonomies and Media Selection. ERIC.
https://eric.ed.gov/?id=ED092067
[39] Heidt, E. U. (1975). In Search of a Media Taxonomy: Problems of Theory and Practice. British Journal of Educational Technology, 6, 4-23.
https://doi.org/10.1111/j.1467-8535.1975.tb00155.x
[40] Heidt, E. U. (1989). Media Selection. In M. Eraut (Ed.), The International Encyclopedia of Educational Technology (pp. 393-398). Pergamon.
[41] Higgins, N., & Reiser, R. A. (1985). Selecting Media for Instruction: An Exploratory Study. Journal of Instructional Development, 8, 6-10.
https://www.jstor.org/stable/30220774
https://doi.org/10.1007/BF02906242
[42] Holden, J. T., Westfall, P. J.-L., & Gamor, K. I. (2010). An Instructional Media Selection Guide for Distance Learning: Implications for Blended Learning Featuring an Introduction to Virtual World (2nd ed.). United States Distance Learning Association.
https://www.usdla.org/wp-content/uploads/2015/05/AIMSGDL_2nd_Ed_styled_010311.pdf
[43] İpek, Z. H., Gözüm, A. I. C., Papadakis, S., & Kallogiannakis, M. (2023). Educational Applications of the ChatGPT AI System: A Systematic Review Research. Educational Process: International Journal, 12, 26-55.
https://doi.org/10.22521/edupij.2023.123.2
[44] Kemp, J. E., Morrison, G. R., & Ross, S. V. (1994). Designing Effective Instruction. Macmillan Publishers.
[45] Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of Educational Objectives. Handbook II: Affective Domain. McKay.
[46] Kruse, K., & Keil, J. (1999). Technology-Based Training: The Art and Science of Design, Development and Delivery. Jossey Bass.
[47] Lenane, H. (2022). Instructional Designer Perspectives of the Usefulness of an Instructional Design Process When Designing E-Learning. Doctoral Dissertation, Walden University.
https://scholarworks.waldenu.edu/dissertations/12752/
[48] Liu, M., Gibby, S., Quiros, O., & Demps, E. (2002). The Challenge of Being an Instructional Designer for New Media Development: A View from the Practitioners. In P. Barker, & S. Rebelsky (Eds.), Proceedings of ED-MEDIA 2002—World Conference on Educational Multimedia, Hypermedia & Telecommunications (pp. 1151-1157). Association for the Advancement of Computing in Education.
[49] Luckin, R., & Cukurova, M. (2019). Designing Educational Technologies in the Age of AI: A Learning Sciences-Driven Approach. British Journal of Educational Technology, 50, 2824-2838.
https://doi.org/10.1111/bjet.12861
[50] Luckin, R., George, K., & Cukurova, M. (2022). AI for School Teachers. CRC Press.
https://doi.org/10.1201/9781003193173
[51] Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed. An Argument for AI in Education. Pearson.
[52] Martin, F. (2011). Instructional Design and the Importance of Instructional Alignment. Community College Journal of Research and Practice, 35, 955-972.
https://doi.org/10.1080/10668920802466483
[53] Merrill, M. D., & ID2 Research Group (1996). Instructional Transaction Theory: An Instructional Design Model Based on Knowledge Objects. Educational Technology, 36, 30-37.
https://mdavidmerrill.files.wordpress.com/2019/04/txbased_ko-1.pdf
[54] Merrill, M. D., Drake, L., Lacy, M. J., Pratt, J., & ID2 Research Group (1996). Reclaiming Instructional Design. Educational Technology, 36, 5-7.
[55] Merrill, M. D., Li, Z., & Jones, M. K. (1991). Instructional Transaction Theory: An Introduction. Educational Technology, 31, 7-12.
https://www.jstor.org/stable/44426107
[56] Nolin, K. M. P. (2019). An Instrumental Case Study in Instructional Design: Integrating Digital Media Objects in Alignment with Curriculum Content in the Online Higher Education Course. Doctoral Dissertation, Northeastern University.
https://repository.library.northeastern.edu/files/neu:m044pp383/fulltext.pdf
[57] Office of Educational Technology (2023). Artificial Intelligence and the Future of Teaching and Learning. U.S. Department of Education.
https://tech.ed.gov/ai-future-of-teaching-and-learning/
[58] OpenAI (2023a). Does ChatGPT Tell the Truth?
https://help.openai.com/en/articles/8313428-does-chatgpt-tell-the-truth
[59] OpenAI (2023b). Is ChatGPT Biased?
https://help.openai.com/en/articles/8313359-is-chatgpt-biased
[60] OpenAI (2024a). ChatGPT-4.
https://openai.com/blog/introducing-gpts
[61] OpenAI (2024b). Introducing GPTs.
https://openai.com/blog/introducing-gpts
[62] OpenAI (2024c). Prompt Engineering.
https://platform.openai.com/docs/guides/prompt-engineering
[63] Pokrivcakova, S. (2019). Preparing Teachers for the Application of AI-Powered Technologies in Foreign Language Education. Language and Cultural Education, 7, 135-153.
https://doi.org/10.2478/jolace-2019-0025
[64] Pollard, R., & Kumar, S. (2022). Instructional Designers in Higher Education: Roles, Challenges, and Supports. The Journal of Applied Instructional Design, 11, 1-17.
https://doi.org/10.59668/354.5896
[65] Reigeluth, C. M. (Ed.) (1999). Instructional-Design Theories and Models: A New Paradigm of Instructional Theory, Volume II. Lawrence Erlbaum Associates.
[66] Reiser, R. A. (2001a). A History of Instructional Design and Technology: Part I: A History of Instructional Media. Educational Technology Research and Development, 49, 53-64.
https://doi.org/10.1007/BF02504506
[67] Reiser, R. A. (2001b). A History of Instructional Design and Technology: Part II: A History of Instructional Design. Educational Technology Research and Development, 49, 57-67.
https://doi.org/10.1007/BF02504928
[68] Reiser, R. A., & Dick, W. (1996). Instructional Planning: A Guide for Teachers (2nd ed.). Allyn & Bacon.
[69] Reiser, R. A., & Gagné, R. M. (1983). Selecting Media for Instruction. Educational Technology Publications.
[70] Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S., Milojevic-Dupont, N., Jaques, N., Waldman-Brown, A., Luccioni, S., Maharaj, T., Sherwin, E. D., Mukkavilli, S. K., Kording, K. P., Gomes, C. P., Ng, A. Y., Hassabis, D., Platt, J. C., Creutzig, F., Chayes, J., & Bengio, Y. (2022). Tackling Climate Change with Machine Learning. ACM Computing Surveys, 55, 1-96.
https://doi.org/10.1145/3485128
[71] Romiszowski, A. J. (1970). Classifications, Algorithms and Checklists as Aids to the Selection of Instructional Methods and Media. In A. C. Bajpai, & J. Leedham (Eds.), Aspects of Educational Technology (Vol. 4). Pitman.
[72] Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education? Journal of Applied Learning and Teaching, 6, 342-363.
https://doi.org/10.37074/jalt.2023.6.1.9
[73] Ruiz-Rojas, L. I., Acosta-Vargas, P., De-Moreta-Llovet, J., & Gonzalez-Rodriguez, M. (2023). Empowering Education with Generative Artificial Intelligence Tools: Approach with an Instructional Design Matrix. Sustainability, 15, Article 11524.
https://doi.org/10.3390/su151511524
[74] Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.
[75] Salz, M. (2023). Microsoft-Pläne für ChatGPT: KI soll auch zu Word, Excel, PowerPoint und Co. kommen [Microsoft’s plans for ChatGPT: AI is set to come to Word, Excel, PowerPoint, and more]. Chip.
https://www.chip.de/news/microsoft-bringt-chatgpt-auch-zu-word-excel-und-co._184651857
[76] Sarker, I. H., Furhad, M. H., & Nowrozy, R. (2021). AI-Driven Cybersecurity: An Overview, Security Intelligence Modeling and Research Directions. SN Computer Science, 2, Article No. 173.
https://doi.org/10.1007/s42979-021-00557-0
[77] Scharth, M. (2022). The ChatGPT Chatbot Is Blowing People away with Its Writing Skills. The University of Sydney.
https://www.sydney.edu.au/news-opinion/news/2022/12/08/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skil.html
[78] Seels, B. (1989). The Instructional Design Movement in Educational Technology. Educational Technology, 29, 11-15.
[79] Seels, B. (1997). The Relationship of Media and ISD Theory: The Unrealized Promise of Dale’s Cone of Experience. In Proceedings of Selected Research and Development Presentations at the 1997 National Convention of the Association for Educational Communications and Technology (pp. 357-361). Association for Educational Communications and Technology.
[80] Seels, B. B., & Richey, R. C. (1994). The 1994 Definition of the Field. In B. B. Seels, & R. C. Richey (Eds.), Instructional Technology: The Definition and Domains of the Field (pp. 1-22). Association for Educational Communications and Technology.
[81] Sharif, A., & Cho, S. (2015). 21st-Century Instructional Designers: Bridging the Perceptual Gaps between Identity, Practice, Impact and Professional Development. International Journal of Educational Technology in Higher Education, 12, 72-85.
https://doi.org/10.7238/rusc.v12i3.2176
[82] Shum, S. J. B., & Luckin, R. (2019). Learning Analytics and AI: Politics, Pedagogy and Practices. British Journal of Educational Technology, 50, 2785-2793.
https://doi.org/10.1111/bjet.12880
[83] Slagter van Tryon, P. J., McDonald, J. K., & Hirumi, A. (2018). Preparing the Next Generation of Instructional Designers: A Cross-Institution Faculty Collaboration. Journal of Computing in Higher Education, 30, 125-153.
https://scholarsarchive.byu.edu/facpub/2067
https://doi.org/10.1007/s12528-018-9167-3
[84] Stake, R. E. (1995). The Art of Case Study Research. Sage Publishing.
[85] Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, L., Saxenian, A., Shah, J., Tambe, M., & Teller, A. (2016). Artificial Intelligence and Life in 2030 (Report of the 2015-2016 Study Panel). Stanford University.
http://ai100.stanford.edu/2016-report
[86] Sugrue, B., & Clark, R. E. (2000). Media Selection for Training. In S. Tobias, & D. Fletcher (Eds.), Training & Retraining: A Handbook for Business, Industry, Government and the Military. Macmillan.
[87] Susnjak, T. (2022). ChatGPT: The End of Online Exam Integrity? arXiv:2212.09292.
https://doi.org/10.48550/arXiv.2212.09292
[88] Tlili, A., Shehata, B., Adarkwah, M. A., & Huang, R. (2023). What If the Devil Is My Guardian Angel: ChatGPT as a Case Study of Using Chatbots in Education. Smart Learning Environments, 10, Article No. 15.
https://doi.org/10.1186/s40561-023-00237-x
[89] Van Merriënboer, J. J. G. (1997). Training Complex Cognitive Skills: A Four-Component Instructional Design Model for Technical Training. Educational Technology Publications.
[90] Wartman, S. A., & Combs, D. (2018). Medical Education Must Move from the Information Age to the Age of Artificial Intelligence. Academic Medicine, 93, 1107-1109.
https://doi.org/10.1097/ACM.0000000000002044
[91] Weitekamp, D., Harpstead, E., & Koedinger, K. R. (2020). An Interaction Design for Machine Teaching to Develop AI Tutors. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-11). Association for Computing Machinery.
https://doi.org/10.1145/3313831.3376226
[92] Wilson, B. G., & Jonassen, D. H. (1990). Automated Instructional Systems Design: A Review of Prototype Systems. Journal of Artificial Intelligence in Education, 2, 309-328.
https://www.proquest.com/openview/fbfce12ad795d7aee9c724920c80b484/1?pq-origsite=gscholar&cbl=2031153
[93] Yin, R. K. (1984). Case Study Research: Design and Methods. Sage Publishing.
[94] Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? International Journal of Educational Technology in Higher Education, 16, Article No. 39.
https://doi.org/10.1186/s41239-019-0171-0
[95] Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J., Yuan, J., & Li, Y. (2021). A Review of Artificial Intelligence (AI) in Education from 2010 to 2020. Complexity, 2021, Article ID: 8812542.
https://doi.org/10.1155/2021/8812542
[96] Ziagos, D. B. (1991). The Design and Development of a Media Selection Model. Master’s Thesis, California Polytechnic State University.
https://proquest.com/openview/e848cac0c708eaa8389a0eb178bab8bd/1?pq-origsite=gscholar&cbl=18750&diss=y

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.