Construction Cycle and Quality Controls for Training Transfer Evaluations in Lifelong Learning Programs in Quebec and Switzerland


The Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE) is an assessment method that results from a collaboration between the University of Quebec in Montreal (UQAM), Canada, and the University of Teacher Education of the State of Vaud (HEP Vaud), Switzerland. The main objective of the CCQCTTE project is to design and field test a method for building high-quality training transfer assessments (level 3 of Kirkpatrick’s model). In line with this goal, we defined five sub-objectives: ease of use; implementation of best practices; a cyclical quality approach; consideration of transfer factors; and diagnostic feedback. The CCQCTTE consists of eight steps: 1) analysis of the training objectives and of the factors influencing transfer; 2) assessment design; 3) item writing; 4) information about the assessment; 5) collection of transfer data; 6) processing of results; 7) feedback; and 8) macro-regulation. The end product of the first step is a table of specifications and a list of transfer factors. Once the evaluation plan is defined in step two, we can move on to step three, item development. During the fourth step, all the stakeholders are informed. During step five, data collection takes place in the training environment and in the workplace. Data are processed to extract information during step six. The seventh step concerns the elaboration and sending of personalized feedback to the trainees and the stakeholders. Finally, the eighth and final step is a “macro-regulation” that consists of learning from all the previous steps in order to improve future transfer assessment cycles. During the first year, we conducted preliminary field testing and, during the second year, a series of main field tests of the CCQCTTE. During the third year, the method was implemented in Montreal and in Lausanne. The three-year international CCQCTTE project has made it possible to develop the method while constructing several transfer assessments for lifelong training programs.
We highlighted the real added value of step 8, which transforms the cycle into a kind of quality spiral. In terms of limitations, we note that step 1 (“Analysis”) remains time-consuming and that it is difficult to start without the support of an expert experienced with the CCQCTTE. In this paper, we describe the CCQCTTE method and its quality approach, the circumstances in which it was developed, and the field test results.


Gilles, J. and Chochard, Y. (2022) Construction Cycle and Quality Controls for Training Transfer Evaluations in Lifelong Learning Programs in Quebec and Switzerland. Creative Education, 13, 3533-3558. doi: 10.4236/ce.2022.1311226.

1. Introduction

The Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE) is an assessment method that results from a three-year collaboration between the Faculty of Education of the University of Quebec in Montreal (UQAM), Canada, and the University of Teacher Education of the State of Vaud (HEP Vaud), Switzerland.

This project took place within the framework of the PEERS program (Gilles, Gutmann and Tedesco, 2012). PEERS is the French acronym for “Projets d’Etudiants et d’Enseignants-chercheurs en Réseaux Sociaux”, translated as “Projects of Students and Professors in Social Networks”. Gilles (2017: p. 46) proposes this definition of the PEERS program:

The PEERS program proposes international exchanges adapted to the context of teacher training institutions wishing to take advantage of internationalization in order to link training, research and practice. PEERS is based on the completion of research and innovation (R&I) projects during the academic year, during which international groups of professors and students from teacher training partner institutions collaborate remotely as well as during two placements of one week. For the students, the PEERS program aims to develop competencies in distance collaboration with the help of Information and Communication Technology (ICT), the management of intercultural groups, and the continuous improvement of their activities through reflective thinking and the spirit of research. For the professors, the PEERS program aims to better link research and training, to foster opportunities for international publications, and to reinforce their skills in the management of international research projects.

The project we conducted from 2015 to 2020 in the framework of the PEERS program concerned continuing education, which is today considered one of the most effective approaches to developing an organization’s human capital. Public and private organizations currently tend to offer training not only to improve skills but also to maintain a good working atmosphere or to motivate and retain their staff. Training also helps attract new employees, increase staff productivity and support change (Jehanzeb & Bashir, 2013; Mwema & Gachunga, 2014; Rodriguez & Walters, 2017).

Moreover, business investments in training are very high as indicated by Frash, Kline, Almanza, & Antun (2008: pp. 199-200):

Exact estimates of training expenditures among North American businesses vary, but all measures report large amounts of fiscal investment. Annual expenditures range from $78 billion in direct expenses (Carnevale & Desrochers, 1999) to up to $200 billion (Mckenna, 1990) when indirect costs such as trainee salaries and costs of training facilities are included. Given this substantial investment, training evaluation has become a salient focus in corporate America (Hale, 2002; Phillips & Stone, 2002).

To verify the profitability of these expenses, training professionals frequently refer to the Kirkpatrick model to evaluate training. This model, one of the most used in the world of in-company training, proposes four levels of evaluation (Kirkpatrick, 1959, 1996, 1998, 2006; Kirkpatrick & Kirkpatrick, 2008): Satisfaction—participants’ satisfaction with the quality of the training (level 1); Learning—the degree to which participants acquire the intended knowledge during the training (level 2); Behavior—participants’ ability to transfer what they have learned when they are back on the job (level 3); and Results—the economic impacts of training for the company or the public or non-profit organization (level 4) (Figure 1).
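As a minimal sketch, the four levels can be expressed as an enumeration; the class and constant names below are our own illustration and are not part of the Kirkpatrick model or the CCQCTTE method:

```python
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    """The four evaluation levels of the Kirkpatrick model (names ours)."""
    SATISFACTION = 1  # participants' satisfaction with the training
    LEARNING = 2      # knowledge acquired during the training
    BEHAVIOR = 3      # transfer of learning back on the job
    RESULTS = 4       # economic impact for the organization

# The CCQCTTE method targets level 3 (Behavior).
print(KirkpatrickLevel.BEHAVIOR.value)  # → 3
```

Using an `IntEnum` keeps the levels ordered, which mirrors the hierarchical reading of the model (each level builds on the previous one).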

At the same time, a series of studies show that level 3 is poorly evaluated in comparison with levels 1 and 2 in Europe and North America (Mathews et al., 2001; Meignant, 2009; Formaeva, 2011, 2012; Van Buren & Erskine, 2002, cited by Pottiez, 2017). One of the reasons given is the complexity of the assessment mechanisms available and the difficulty in identifying quality standards that would make it possible to have confidence in the results (Dunberry & Péchard, 2007).

On the basis of these findings, as part of our PEERS project we decided to focus our R&D efforts on developing a method for building assessments of the transfer of learning (level 3 of Kirkpatrick’s model): a method that could be useful to, and easily implemented by, practitioners of continuing education while offering guarantees of methodological robustness and diagnostic feedback.

2. Literature Review

Concerning learning transfer

In their famous article, Baldwin and Ford (1988: p. 63) defined transfer of training as follows:

Figure 1. The Kirkpatrick model.

Positive transfer of training is defined as the degree to which trainees effectively apply the knowledge, skills, and attitudes gained in a training context to the job. Transfer of training, therefore, is more than a function of original learning in a training program (Atkinson, 1972; Fleishman, 1953). For transfer to have occurred, learned behavior must be generalized to the job context and maintained over a period of time on the job.

This definition suggests two conditions for a successful transfer of learning: 1) maintenance of what was learned in the training context and 2) generalization of the material learned from the training environment to the workplace context. We will come back to these aspects later when we discuss the analysis step and the development of specification tables in the construction of transfer assessments.

Baldwin and Ford (1988: p. 65) also presented a framework for understanding the transfer process. Figure 2 shows their model, which describes transfer in terms of training-input factors, training outcomes, and conditions of transfer.

As shown in the Baldwin & Ford model, training outcomes and training-input factors have both direct and indirect effects on the conditions of transfer. Working backwards through the model, link 6 indicates that the training outcomes of learning and retention have direct effects on transfer, as do trainee characteristics (link 4) and the work environment (link 5). Training outcomes (learning and retention) are in turn directly affected by the three training inputs (links 1, 2 and 3). Therefore, according to the Baldwin and Ford model, trainee characteristics, training design and the work environment also have an indirect effect on transfer through their impact on training outcomes.
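The six linkages described above can be sketched as a small directed graph. The string labels and the helper function are our own illustration of the model’s structure, not notation from Baldwin and Ford:

```python
# Links of the Baldwin & Ford (1988) transfer model as (source, destination) pairs.
LINKS = {
    1: ("trainee characteristics", "training outcomes"),
    2: ("training design", "training outcomes"),
    3: ("work environment", "training outcomes"),
    4: ("trainee characteristics", "conditions of transfer"),
    5: ("work environment", "conditions of transfer"),
    6: ("training outcomes", "conditions of transfer"),
}

def direct_influences(target):
    """Return the factors with a direct link to `target`."""
    return sorted({src for src, dst in LINKS.values() if dst == target})

# Training design reaches transfer only indirectly, through training outcomes.
print(direct_influences("conditions of transfer"))
```

Querying the graph makes the distinction explicit: trainee characteristics, the work environment and training outcomes point directly at the conditions of transfer, while training design influences them only through its effect on training outcomes.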

Chochard (2012: p. 118) proposes the following diagram of learning transfer within the training results chain. According to Yamnill & McLean (2001: p. 196)

Figure 2. Baldwin & Ford (1988: p. 65) model of the transfer process.

and Holton (1996: p. 17), Figure 3 shows that, within the cause-and-effect chain, the transfer of learning is seen as the stage that really determines the effectiveness of training from the organizational point of view (Dunberry & Péchard, 2007; Holton, 1996). Indeed, even if the training was excellent and produced significant learning, if the trained people do not use their new skills in their work, the benefits for their organization will be low. Transfer of learning is also special because it is essentially the responsibility of the trained person: it is ultimately the trainee who chooses whether to change the way he or she works and to put into practice what was learned.

In his comprehensive literature review, Chochard (2012: p. 120) reports that Georgenson (1982) estimated that only 10% of training content was converted into behavioral changes. Twenty years later, the problem was still present: Saks (2002) considered that 40% of trained people cannot immediately transfer what they have learned, that 70% still hesitate to change their behaviors one year after their training and that, finally, only 50% of training investments lead to organizational gains. The scale of the problem has prompted scientists to look at the variables that may influence transfer. By studying the way in which these variables work (Yamnill and McLean, 2001), they identified a large number of variables that could influence the transfer of learning and the effectiveness of training. These variables, called efficiency variables (Alvarez et al., 2004) or transfer factors (Cheng and Hampson, 2007), exert influence at different levels of the causal chain (Figure 3): they not only favor or hinder the transfer of learning, but can also act upstream, by encouraging the trainee to develop their learning to a greater or lesser extent, or downstream, by promoting or inhibiting the conversion of individual performance gains into organizational benefits.

Concerning effective assessment processes in a quality approach

To better understand what happens during assessment, formalizing the evaluation development process into successive steps, as Downing (2011) proposes, helps to see clearly what kinds of operations are carried out. Based on years of experience and on the literature supporting best practices in test development, Downing proposes twelve steps for effective test development: 1) Overall plan; 2) Content definition; 3) Test specification; 4) Item development; 5) Test design and assembly; 6) Test production; 7) Test administration; 8) Scoring test responses; 9) Passing scores; 10) Reporting test results; 11) Item banking and 12) Test technical report.

Figure 3. Chochard’s diagram of the learning transfer within the training results chain inspired by Yamnill & McLean (2001: p. 196) and Holton (1996: p. 17).

As underlined by Downing (2011): “Following these twelve steps of effective test development, for both selected-response or constructed-response tests, tends to maximize validity evidence for the intended test score interpretation” (p. 3). Downing’s words make it clear that evaluating effectively is not an easy task.

On the one hand, as we pointed out in the introduction, level 3 of Kirkpatrick’s model is poorly evaluated in comparison with levels 1 and 2, especially because of the complexity of the measures needed to evaluate learning transfer. On the other hand, the stakes and amounts invested in training raise the question of the quality of the evaluation methods used to measure learning transfer.

In our opinion, a quality approach to transfer assessment development is inseparable from a reflection on assessment quality criteria. In this area we were inspired by the recommendations contained in the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999). In order to put these reflections within the reach of field practitioners, we propose nine quality criteria: validity, reliability, sensitivity, diagnosticity, equity, practicability, communicability, authenticity and acceptability (a brief definition of each is given in the Results section).

In addition to these criteria, it is also essential to define a step-by-step process with quality controls to create robust assessments that measure learners’ transfer when they are back on the job. Given the stakes and the amounts invested, training transfer assessment methods must be of high quality and provide reliable results in order to improve learning transfer efficiency. In this area we were influenced by the work of W. Edwards Deming, considered by many to be the father of the quality management movement. The way he envisioned improving management processes by adopting appropriate principles in order to increase quality while reducing costs for organizations (by reducing waste, rework, staff attrition, etc.) was a source of inspiration for us. His key idea for a quality approach is to practice continual improvement and to think of processes as systems (Deming, 1986, 1993). This is what we transposed to the field of transfer assessment construction, with his famous quote in mind: “If you cannot describe what you are doing as a process, you do not know what you’re doing”. Another key idea we took up and adapted from Deming is the “Plan, Do, Check, Act” (PDCA) cycle, which he always referred to as the “Shewhart cycle” and which is in fact, at least partially, based on the scientific method. The origin of the PDCA cycle is well summed up by Best & Neuhauser (2006):

The constant evaluation of management policy and procedures leads to continuous improvement. This cycle has also been called the Deming cycle, the Plan-Do-Check-Act (PDCA) cycle, or the Plan-Do-Study-Act (PDSA) cycle. While Deming marketed the cycle to the masses (a cycle which he called the Shewhart cycle), most people referred to it as the Deming cycle. The Shewhart cycle has the following four stages: 1) Plan: identify what can be improved and what change is needed; 2) Do: implement the design change; 3) Study: measure and analyse the process or outcome; 4) Act: if the results are not as hoped for. (Best & Neuhauser, 2006: pp. 142-143).

For Deming (1986: p. 88), the PDSA cycle (Shewhart, 1931, 1958, 1989) was a helpful procedure to follow for quality improvement. Figure 4 shows the four steps and the questions related to this quality improvement cycle, independent of any particular production context.
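As a rough sketch only, the PDSA logic can be expressed as a generic improvement loop; all function and variable names below are our own assumptions, not Deming’s terminology:

```python
def pdsa_cycle(plan, do, study, act, state, cycles=3):
    """Run a Plan-Do-Study-Act improvement loop (a sketch of the Shewhart cycle).

    Each phase is supplied as a function; `state` carries the process being improved.
    """
    for _ in range(cycles):
        change = plan(state)                  # Plan: identify what can be improved
        outcome = do(state, change)           # Do: implement the change
        findings = study(outcome)             # Study: measure and analyse the result
        state = act(state, change, findings)  # Act: keep or discard the change
    return state

# Toy usage: iteratively halve a hypothetical "error rate".
result = pdsa_cycle(
    plan=lambda s: {"reduce_by": s["error_rate"] * 0.5},
    do=lambda s, c: s["error_rate"] - c["reduce_by"],
    study=lambda outcome: {"improved": outcome < 0.30, "new_rate": outcome},
    act=lambda s, c, f: {"error_rate": f["new_rate"]} if f["improved"] else s,
    state={"error_rate": 0.30},
)
print(round(result["error_rate"], 4))  # → 0.0375 after three cycles
```

The point of the sketch is structural: each pass through the loop feeds what was learned in “Study” back into the next “Plan”, which is exactly the continuous-improvement idea the CCQCTTE borrows for its eighth step.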

In the context of learning assessment (level 2 of Kirkpatrick’s model), Gilles (2002) developed a methodology for building effective standardized tests within a quality approach. The eight-step process is entitled “Construction Cycle and Quality Controls for Standardized Testing (CCQCST)” (Gilles, 2002: pp. 74-92; Gilles et al., 2005; Gilles & Tinnirello, 2017).

As shown in Figure 5, the CCQCST model includes the following steps: 1) Analysis of the objectives to be evaluated; 2) Design/test plan; 3) Questions/items

Figure 4. Shewhart cycle represented by Deming (1986: p. 88).

Figure 5. Construction cycle and quality controls for Standardized Testing (CCQCST) used at level 2 of Kirkpatrick’s model (Gilles, 2002).

elaboration; 4) Training/Information of the trainees to be evaluated; 5) Testing/assessment; 6) Correction and analysis of results; 7) Feedback; and 8) Regulation for continuous improvement of subsequent evaluations. The CCQCST includes quality control procedures for standardized assessments. The idea is to give practitioners who want to ensure a high level of quality in their evaluations at level 2 of Kirkpatrick’s model an easy way to build learning assessments. Another key idea is to provide diagnostic feedback with explanations in order to improve learning and teaching (Van den Bossche, Segers, & Jansen, 2010). These preoccupations fit into a worldwide trend of introducing quality management into education and training activities (Ramsden, 1991; Nightingale & O’Neil, 1994; Zink & Schmidt, 1995; Segers, Dochy, & Cascallar, 2003).

The Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE) method that we present in this article is an extension of the reflections and sources mentioned in this literature review.

3. Objectives

In a context in which companies and organizations make huge investments in continuing education (Frash, Kline, Almanza, & Antun, 2008) and in which the transfer of learning to professional environments (level 3 of Kirkpatrick’s model) remains relatively low (Saks, 2002), we focused the R&D efforts of this project on designing and testing a method for building high-quality training transfer assessments.

The literature review and our past experience allowed us to identify a series of features that the method should incorporate to meet current needs:

• Ease of use, for rapid appropriation by practitioners involved in lifelong learning programs (Dunberry & Péchard, 2007);

• Implementation of best practices in effective evaluation development (Downing, 2011);

• A cyclical approach of continuous improvement, in order to guarantee a high level of quality in the results (Gilles, 2002);

• Consideration of the factors influencing transfer and the chain of causes and effects in the training environment (Chochard, 2012);

• Diagnostic feedback that allows stakeholders and trainees to improve learning and training environments (Van den Bossche, Segers, & Jansen, 2010).

From this perspective, with the previous points as sub-objectives, we defined the main objective of this Research and Development (R&D) project: to design and test a method we entitled “Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE)”.

4. Partnership

The field test of the CCQCTTE method was carried out in lifelong learning programs in Quebec (Montreal) and Switzerland (Lausanne). In Montreal, the tests were conducted in the context of community organizations, which constitute an important component of Quebec’s social structure (MESS, 2001). These organizations are independent of State public services, political movements and trade unions. They intervene to fight poverty, discrimination and exclusion. To carry out their activities, they can use continuing education offered by independent training organizations (Chochard, 2018). Learning transfer is a major concern for Canadian community organizations, which have limited resources and therefore need to ensure the efficiency of their training investments.

Concerning Lausanne, the operational testing of the method was carried out at the Training Center (CFor) of the Lausanne University Hospital (CHUV). The CHUV is a general university hospital offering acute and specialist care to people living in the Lausanne area, but also in the whole State of Vaud and in parts of French-speaking Switzerland. The CHUV’s CFor implements a broad, high-level training program in order to develop and improve professional skills and the excellence of care services: “In 2017, the entire training center offer represents 17,253 days of classes.” (CHUV, 2017: p. 110). As we can read in the CHUV’s annual activity report, transfer evaluation is one of this organization’s biggest training concerns:

In a context where resources are limited, no organization can be satisfied with ‘training to train’. Today, more than ever, evaluation of the effectiveness of training actions is an obvious necessity, which suggests going beyond the level of general satisfaction assessment and knowledge acquisition, but knowledge transfer is a multidimensional phenomenon, influenced by a myriad of factors and whose evaluation process can be complex and tedious.” (CHUV, 2017: p. 116).

These organizations, very aware of the complexity of transfer evaluation, were therefore very open to partnerships with academic institutions and teams specializing in learning transfer and evaluation. The collaboration was also greatly appreciated by the Swiss-Canadian research team because of the partners’ decisive contribution to the operationalization and practicability of the learning transfer assessment method in lifelong learning contexts such as those of the Quebec community organizations and the Swiss CFor.

5. Methodology

5.1. Participants

The assessment method was tested on 11 continuing education courses (Table 1).

As we explained in our introduction, the Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE) is an assessment method that results from a three-year collaboration between the Faculty of Education of the University of Quebec in Montreal (UQAM), Canada, and the University of Teacher Education of the State of Vaud (HEP Vaud), Switzerland.

Table 1. Training courses that have been evaluated using the CCQCTTE method during the field tests.

5.2. Procedure

During these three years, we conducted the R&D project with students from the University of Quebec at Montreal (UQAM)1 and the University of Teacher Education of the State of Vaud (HEP Vaud)2 who participated in the research as part of the PEERS international mobility program (see Introduction). Each year, we collaborated with field partners to test the results in real-life training contexts in Quebec (Montreal) and Switzerland (Lausanne). Concerning the R&D methodology, the international group followed the steps recommended by Borg & Gall (1989) and explained by Loiselle (2001). We also referred to Van der Maren (2003) for some specific aspects.

PEERS 2015-2016:

­ Data collection including literature review and needs analysis;

­ Research planning with definition of the objectives of the project;

­ Preliminary testing of the first version of the method as part of a training course given in Quebec and Lausanne;

­ Main revision of the method after the preliminary field testing.

During this first-year project, we conducted preliminary field testing of the CCQCTTE method in continuing education. The training course in which we tested the method concerned report writing in administrative and logistical support. It was given by the same trainer to a small group of administrative and technical staff during two sessions: one in Montreal at the University of Quebec at Montreal and the other in Lausanne. After this first year, a report was published with the first results of the project (Gilles, Chochard, Berset, Bieri, El Guermei, Graveline et al., 2017).

PEERS 2016-2017:

­ Continuation of the main revision of the method after the previous year’s preliminary testing;

­ Main tests with a larger number of participants in the context of three continuing education courses;

­ Operational revisions of the method taking into account the results of the main tests.

During the second-year project, we conducted a series of main field tests of the CCQCTTE method with a larger number of participants in the context of three continuing education courses:

1) Quebec’s Major Leagues union of film technicians (IATSE514): the learning transfer evaluation concerned technical aspects of lifting platforms with articulated arms;

2) The Sun Youth community organization in Montreal: the transfer evaluation concerned effective communication in the workplace for the staff;

3) Training Center (CFor) of Lausanne University Hospital (CHUV): the learning transfer evaluation concerned best practices for drug administration in medical care.

PEERS 2017-2018:

­ Continuation of the operational revisions of the CCQCTTE method after the previous year’s main field tests;

­ Testing of the method on a larger scale;

­ Implementation of CCQCTTE with dissemination of information regarding the method.

During the third-year project, the CCQCTTE method was implemented in Montreal and in Lausanne. Two Quebec community organizations were interested in an implementation partnership: “Bois Urbain”, a socio-professional integration company in cabinetmaking, and the Benedict Labre House, a day center for people in precarious situations. Concerning Lausanne, the CHUV’s CFor agreed to extend the implementation of the method to a larger number of participants within four training courses in medical care.

6. Results

The CCQCTTE method is based on an eight-step cycle inspired by the Construction Cycle and Quality Controls for Standardized Testing (CCQCST) (Gilles, 2002). A quality approach in this context is inseparable from a reflection on quality criteria. In this area we were inspired by the recommendations contained in the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999).

Learning transfer assessment quality criteria

Each construction step results in a product whose quality can be controlled according to one or more of the nine criteria summarized below.

Validity: trainees’ results should represent what the evaluator wants to measure in terms of transfer and cover the important aspects of the training content to be transferred to the workplace.

Reliability: the learning transfer evaluation must provide objective and reliable information that is independent of the assessors’ characteristics (intra- and inter-assessor consistency).

Sensitivity: measures of learning transfer must be precise and able to reflect subtle phenomena.

Diagnosticity: the results delivered by the learning transfer assessment should enable a precise diagnosis of 1) the problems encountered by the participants with regard to transfer and 2) their causes, distinguishing the difficulties related to the participant from those related to his or her work environment.

Equity: in principle, the learning transfer procedures must be the same for all participants (standardization principle).

Practicability: carrying out the learning transfer evaluation must be feasible within a reasonable timeframe and with the available staff and equipment resources.

Communicability: non-confidential information relating to the implementation of the learning transfer evaluation must be communicated to and understood by all the actors involved in the assessment process.

Authenticity: the items and statements proposed in the learning transfer evaluation must make sense to the trainees and be relevant and appropriate to their working environment.

Acceptability: all stakeholders must accept the proposed procedures and arrangements for the learning transfer evaluation.
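For illustration only, an a posteriori quality control against these nine criteria could be recorded as a simple checklist. The 7-point ratings and the acceptance threshold of 4 below are assumptions of ours, not thresholds prescribed by the CCQCTTE:

```python
# The nine CCQCTTE quality criteria, used here as a review checklist.
QUALITY_CRITERIA = [
    "validity", "reliability", "sensitivity", "diagnosticity", "equity",
    "practicability", "communicability", "authenticity", "acceptability",
]

def review_output(ratings):
    """Flag criteria rated below a (hypothetical) acceptance threshold of 4/7.

    `ratings` maps criterion name -> reviewer score on a 1-7 scale.
    """
    unknown = set(ratings) - set(QUALITY_CRITERIA)
    if unknown:
        raise ValueError(f"unknown criteria: {sorted(unknown)}")
    return sorted(c for c, score in ratings.items() if score < 4)

# An experienced reviewer rates one step's output on three criteria.
flags = review_output({"validity": 6, "practicability": 3, "equity": 5})
print(flags)  # criteria needing regulation → ['practicability']
```

The flagged criteria would then feed the regulation steps of the cycle, pointing the team at the products that need rework before the next iteration.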

For the eight steps described below, when implementation conditions allow, the CCQCTTE method proposes that an experienced person take a critical look at the products obtained, using the quality criteria as reference points (a posteriori quality controls). It is also advisable to use the quality criteria when producing the outputs of each of the eight steps (a priori quality controls). The eight steps of the CCQCTTE method are detailed below.

Step 1: Analysis of training objectives, behaviors to be transferred and factors influencing the transfer

During this first step, the idea is to identify the important aspects of the training that participants will need to be able to transfer to their workstations. Particularly important quality criteria at this stage are validity and diagnosticity. The clarifications made here will have significant repercussions in Step 7 (“Feedback”) in terms of targeted feedback on the important aspects to be transferred highlighted in this first step. A list of behaviors to be transferred to the workplace, derived from the objectives and the content of the training, must be drawn up during this step. First, the CCQCTTE method proposes to analyze the training material used by the trainer; this analysis is done by a person experienced in the use of the CCQCTTE. After that, this experienced person and the trainer create a Table of Specifications (ToS), which allows them to determine the important aspects addressed during the training that should be transferred by the participants to their workplaces. The ToS helps to improve and monitor the validity of the transfer evaluation system. In parallel, discussions also clarify the factors that, in the participants’ context, can influence learning transfer; gradually, a list of factors is established and structured according to the categories put forward by Baldwin and Ford (1988): 1) trainee characteristics; 2) training design and 3) work environment. Following Chochard’s (2012: p. 148) recommendations, we have chosen to distinguish, within the work environment, the following four subcategories: a) supervisor support; b) support from colleagues; c) opportunities for implementation and d) the characteristics of the organization.

The advantages of the ToS are described in the literature and there are different implementation approaches (Morissette, 1996; Gilles, 2002; Nitko and Brookhart, 2007; Fives and Didonato-Barnes, 2013). The ToS that we recommend lists, in a first column, the various training chapters and, in a second, the sections of each of these chapters. For each section, a third column contains the precise elements addressed during the training. The fourth column contains references to the media used (for example, slides in a presentation or page numbers of course notes). A fifth column is fed by the results of an analysis made from the first three columns. The idea is to systematically complete this assertion: “Concerning the various aspects of the training course included in the first three columns of the ToS, learning transfer in the workplace will be manifested through the following behaviors: …”. Elements are grouped into clusters and the analysis then leads to categories of learning outcomes to be transferred to the workstations. The analysis is then discussed with the trainer to clarify and validate the result.
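The five-column ToS described above could be represented, for instance, as a list of row records. The field names and sample content below are purely illustrative assumptions, not a format prescribed by the method:

```python
# Illustrative Table of Specifications (ToS); field names are our assumptions.
tos_rows = [
    {
        "chapter": "Chapter 1",                        # column 1: training chapter
        "section": "Section 1.1",                      # column 2: section of the chapter
        "elements": "Elements addressed in training",  # column 3: precise content
        "media": "slides 3-10",                        # column 4: media references
        "behaviors": ["Expected on-the-job behavior"], # column 5: transfer behaviors
    },
    # ... one row per section of each chapter
]

def behaviors_to_transfer(rows):
    """Collect the linked list of behaviors (fifth column) across all ToS rows."""
    return [b for row in rows for b in row["behaviors"]]

print(behaviors_to_transfer(tos_rows))
```

Keeping the behaviors attached to their chapter and section rows is what makes the later diagnostic feedback possible: each observed transfer gap can be traced back to the exact training content it corresponds to.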

Two outputs result from this first step: 1.a) a ToS related to the training course with a linked list of behaviors to transfer and 1.b) a structured list of transfer factors. This first “Analysis” step is fundamental to ensure curriculum alignment (Anderson, 2002) between the training objectives, the related lessons during the training course, and the behaviors that will be evaluated when trainees are back on the job.

During the testing of the CCQCTTE we saw how much the completion of a ToS improves quality in terms of validity and diagnosticity. Positive benefits are observed not only for the learning transfer assessment tool, but also for the training objectives and process. However, we were also able to appreciate how time-consuming this analysis is and how important it is for trainers to be supported in elaborating the ToS when they implement the method for the first time.

Step 2: Design of the learning assessments framework

The design step consists of determining the characteristics of the learning transfer evaluation: What kind of assessment tools? Which categories of people will be assessed, and when? When and how will they be informed about the transfer assessment process? When will they receive feedback? Etc.

As we saw in the previous step, we want to evaluate not only learning transfer in the workplace, but also the environment factors that can facilitate or hinder these transfers. This dual concern leads us to consider questionnaires in two parts: one concerning “behavioral evaluation”, i.e. the transfer performance levels of the trainees when they are back on the job; the other concerning “environment factors”, i.e. variables related to the participant’s professional context that can promote or block learning transfer.

A notable design evolution in the last version of the CCQCTTE method concerned the trainees’ choice of three behaviors, from the list of expected behaviors, for which they had to self-assess their ability to transfer them on the job. The idea was to better support transfer in the workplace by focusing each trainee’s transfer efforts on three chosen behaviors.

Concerning the evaluation moments, the point is to perform a transfer measurement on the job that is related to the content covered during the training course. Indeed, some aspects to be transferred may already be acquired before the training. It is thus advisable to isolate these already-transferred aspects from the new ones related to effective learning transfer after the training course. CCQCTTE recommends a pretest before training and an immediate post-test one month after, in order to give the participants opportunities to exercise the learning outcomes several times in their workplace and to evaluate the training effects by comparing pretest and post-test results. The same measure should also be carried out several months after the training, in order to evaluate the evolution of the learning transfer behaviors. Thus, ideally, there should be several moments of learning transfer measurement; we recommend: before training, one month after, and three to six months after. Finally, concerning the item formats of the questionnaires used at these different moments, we recommend Likert scales for reasons related to the practicability and reliability criteria. In order to improve the sensitivity of the measurements, we propose scales with 7 levels. During the field tests of the method, an assessment plan appeared to be particularly useful when expressed on a timeline showing the different moments.
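The recommended measurement moments can be sketched on a small timeline; the dates and dictionary keys below are illustrative assumptions, not part of the method’s specification.

```python
from datetime import date, timedelta

LIKERT_LEVELS = 7  # 7-level scales to improve sensitivity

# Hypothetical end-of-training date and the three recommended measurement moments
training_end = date(2024, 3, 1)
plan = {
    "pretest": training_end - timedelta(days=7),       # shortly before the course
    "posttest_1": training_end + timedelta(days=30),   # one month after
    "posttest_2": training_end + timedelta(days=120),  # three to six months after
}
```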

In addition, the CCQCTTE method recommends looking more closely at learning transfer performance and suggests a multi-source approach by questioning not only the participant on his or her transfer performance, but also, when implementation conditions make it possible, colleagues and the line manager (N + 1), and/or, if possible, customers or beneficiaries of the participant’s services. This is a “360°” conception of learning transfer evaluation (Antonioni, 1994; Hedge et al., 2001), which can be limited to a “180°” (questioning the participant, colleagues and supervisor) or restricted to a “90°” questioning only the trainee and the manager.

Quality controls are linked to the practicability criterion, as well as to the reliability and sensitivity of the results collected using the learning transfer evaluation design. Quality controls are carried out during the evaluators’ validation and relate to three outputs: (2.a) the evaluation plan; (2.b) the item concept (adapted Likert scales for behavior items and for learning transfer factor items); and (2.c) the item formats. Considering the practicability criterion, the evaluation team (the trainer, a person experienced with the CCQCTTE and possibly other stakeholders) carries out a design validation.

Step 3: Item writing

This step involves writing the items to be included in the learning transfer questionnaires. Items must respect the format defined in output (2.c) as well as the item concept defined in output (2.b). The contents evaluated by these items are linked to the results of the first CCQCTTE analysis step: the specification table (1.a), which summarizes the important learning transfer aspects addressed during the training course. Transfer factor items are linked to the structured list (1.b), with three categories of variables (Baldwin & Ford, 1988) and four subcategories proposed by Chochard (2012) concerning the professional environment. In terms of end products of this step 3, we end up with two series of questions that can be collected and stored online (item banking): first, (3.a) an item bank intended to evaluate participants’ behavior related to learning transfer outcomes and, second, (3.b) another item bank intended to evaluate the environmental factors that influence learning transfer. This second item bank can be enriched with the Learning Transfer System Inventory (LTSI), a factor inventory designed to assess individual perceptions of catalysts and barriers to the transfer of learning from work-related training (Holton III et al., 2000; Bates et al., 2012).
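The two item banks (3.a) and (3.b) could be stored as simple structured records; the identifiers, field names and item texts below are invented for illustration.

```python
# Bank (3.a): behavior items, linked to the ToS categories of step 1
behavior_items = [
    {"id": "B1", "category": "Giving feedback",
     "text": "I give criteria-referenced feedback to my apprentices."},
]

# Bank (3.b): transfer factor items, following the Baldwin & Ford (1988)
# categories and Chochard's (2012) work-environment subcategories
factor_items = [
    {"id": "F1", "category": "work environment", "subcategory": "supervisor support",
     "text": "My supervisor encourages me to apply what I learned."},
]
```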

Quality controls mainly concern the validity, reliability and authenticity criteria. We must ensure that items are well linked to the results of the analyses carried out at CCQCTTE step 1. We must also verify that the items do not have any bias that could affect reliability. In terms of authenticity, the items should make sense to the trainees and therefore relate to their professional realities. Therefore, in order to check and possibly improve item quality, the CCQCTTE method recommends a systematic review by an external experienced person. These item collections, especially those concerning the professional environment, which are quite generic and independent of the training course contents, should feed item banks adapted to the trainees’ professional contexts.

Step 4: Trainees and stakeholders information

This step of the CCQCTTE is about informing the people who are involved in the transfer evaluation process, in particular the trainees, but also N + 1 managers, colleagues and possibly the beneficiaries of their professional services. The information must be non-confidential and linked to the outputs of the previous steps.

The information plan can take several forms:

(1.a) a summary of the important training course behaviors that will be transferred on the job;

(1.b) a list of professional environment factors that influence learning transfer;

(2.a) an explanation of the evaluation design indicating which stakeholders will be questioned and when;

(2.b) an explanation of items concept—adapted Likert scales for behavior items and learning transfer factors items;

(2.c) an explanation of the item formats used in the questionnaires;

(3.a) examples of items related to trainees’ behavior expected on workplace;

(3.b) examples of items related to professional environment.

This step 4 can also serve administrative purposes.

Concerning the end products of this fourth CCQCTTE step, we recommend feeding a trainees’ and stakeholders’ information web page containing:

(4.a) information on the CCQCTTE previous steps’ outputs;

(4.b) timetable schema describing the learning transfer implementation;

(4.c) texts of emails announcing the characteristics of learning transfer evaluation.

Quality controls of this step are related to the communicability criterion. This is why, first and foremost, it is important to check that the non-confidential information is accessible, comprehensive and understandable by trainees and stakeholders.

Step 5: Learning transfer data collection

The idea is to put into practice the evaluation plan (2.a) developed in step 2 “Design”. The implementation schedule is crucial in this CCQCTTE step.

The output of this fifth CCQCTTE step consists of a series of (5.a) online or paper questionnaires completed by the stakeholders identified in the evaluation plan developed previously in step 2 “Design”.

The main concern related to quality controls during this step 5 is reliability, measured through data completeness verifications and the proper functioning of the data collection.

Step 6: Learning transfer data processing

After collecting the trainees’ and stakeholders’ answers to the questionnaires sent according to the terms specified in the evaluation plan of step 2 “Design”, it is now a question of processing the data in order to prepare the information that will be returned during the subsequent step 7 “Feedbacks”.

First, we recommend gathering the data stored in the several files linked to the different questionnaires previously administered and then processing these data. Descriptive analyses are made to summarize and represent the results obtained. Specific treatments are made from the results of the pretest, immediate post-test and delayed (3 months later) post-test questionnaires to evaluate the evolution of learning transfer on the job. Other treatments are also made concerning the impact of the professional environment on trainees’ performances. In terms of end products of this step, the aim is to produce a reliable version of the results with descriptive analyses (6.a), trainees’ individual performance evolution (6.b) and professional environment impact analyses (6.c).
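A minimal sketch of these treatments, assuming invented 7-point Likert answers for one trainee: a descriptive summary in the spirit of (6.a) and an item-by-item pre/post evolution in the spirit of (6.b).

```python
from statistics import mean

# Invented 7-point Likert answers for one trainee, keyed by behavior item id
pretest = {"B1": 3, "B2": 2, "B3": 4}
posttest = {"B1": 5, "B2": 4, "B3": 4}

# Item-by-item evolution between pretest and immediate post-test
evolution = {item: posttest[item] - pretest[item] for item in pretest}

# Descriptive summary of the two measurement moments
summary = {"pre_mean": mean(pretest.values()), "post_mean": mean(posttest.values())}
```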

Quality controls during data processing lie in the necessity to ensure the validity and reliability of the results obtained after treatment. Quality checks of items can be helpful, such as internal coherence verification. The validity of the results is also conditioned by taking into account the effects of the professional environment on expected learning transfer behaviors. Another type of issue lies in the sensitivity of the measures resulting from the treatments and in the ability to deliver an accurate diagnosis allowing evaluators to distinguish the behaviors that show effective learning transfer. Moreover, if the items are well connected to the specification table and to the categories of important aspects addressed, scores by behavior category can be calculated, which will be important for the diagnostic feedbacks of the next step.
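One common way to perform the internal coherence verification mentioned above is Cronbach’s alpha; a minimal standard-library sketch (the sample data in the test are invented, and the function assumes respondents’ total scores are not all identical).

```python
from statistics import pvariance

def cronbach_alpha(scores: list[list[int]]) -> float:
    """Internal coherence (Cronbach's alpha) of a set of Likert items.

    `scores` holds one inner list of item answers per respondent.
    """
    k = len(scores[0])  # number of items
    # population variance of each item across respondents
    item_vars = [pvariance([resp[i] for resp in scores]) for i in range(k)]
    # population variance of respondents' total scores
    total_var = pvariance([sum(resp) for resp in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```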

Step 7: Diagnostic feedbacks

On the basis of the reliable results established in the previous step 6, the aim is to transmit, within a short period of time, personalized diagnostic feedbacks adapted to trainees’ and stakeholders’ needs.

First, the CCQCTTE recommends drawing up the list of the trainees and stakeholders (colleagues, managers (N + 1), human resources manager, etc.). Then the idea is to analyze the needs of these people in terms of feedback. Finally, depending on this analysis, we recommend formalizing the characteristics of the feedback sending methods. This work can be done using a table whose first column contains the following elements:

- Information used to create feedbacks;

- Data collection instruments and moments of information gathering;

- Recommended approaches: formative and/or certification perspectives, respect of ethical standards, etc.;

- Feedback characteristics, with options such as: anonymity, comparison with a reference group, transfer performance evolution, etc.;

- Diagnostics: analysis done in the first CCQCTTE step using a specification table highlighting the important transfer outcomes that trainees must be able to put into practice after the training course;

- Moments of feedback broadcast;

- Forms of feedback: personalized email; link on a web page; slideshow presentation with or without audio commentary; etc.
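The planning table outlined above could be encoded, for one audience, as a simple specification record; every field name and value below is an illustrative assumption.

```python
# Illustrative feedback specification for one audience (all values are assumptions)
feedback_spec = {
    "audience": "trainees",
    "information_used": ["pre/post behavior scores", "transfer factor results"],
    "instruments_and_moments": ["pretest", "posttest_1", "posttest_2"],
    "approach": "formative",
    "characteristics": {
        "anonymity": False,
        "reference_group_comparison": True,
        "performance_evolution": True,
    },
    "broadcast_moment": "shortly after each post-test",
    "form": "personalized email with a commented slideshow",
}
```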

In terms of final products, we propose three categories of needs-based diagnostic feedbacks to support learning transfer improvement on the job, based on three types of audiences:

- (7.a) Trainees;

- (7.b) Members of the trainers’ staff;

- (7.c) Officials who have sponsored the training course.

The “diagnosticity” criterion is at the forefront of this seventh step. The idea is to provide a diagnosis of strengths and weaknesses related to learning transfer after the training course, relativized with the results concerning the impact of professional environment factors. That said, the communicability and practicability criteria are also to be taken into account. Different solutions are available for delivering effective feedbacks to support learning transfer. During the 2017-2018 PEERS project, we chose to send by email a slideshow with audio commentaries, which seems to be a practicable solution for trainees who need to be informed in a short time with attractive information. This type of feedback modality is particularly suitable for trainee audiences who would not have the time, or the desire, to take note of this informative feedback at their workplace by reading reports of several pages (Table 2).

Step 8: Macro-regulation of the whole cycle in order to improve the next learning transfer evaluation

The idea during this eighth and last step of the CCQCTTE model is to review the operations and outputs of the previous seven steps. The proposed analysis is based on a dual perspective: the time perspective (present situation and future projection): what should be done in the future? And the practical perspective of the protagonists (the trainer and the evaluators): what could we improve? Gilles and Lovinfosse (2004: p. 1) point out, concerning macro-regulation in the framework of the CGQTS cycle: “(…) from the point of view of qualitative criteria (validity, fidelity, sensitivity, diagnosticity, practicability, equity, communicability and authenticity) the current situation is dissected using both perspectives, temporal and practical, to deduce the lines of improvement in a quest of excellence”. With this eighth step, the CCQCTTE model becomes a kind of quality spiral, where lessons are drawn from what happened during the evaluation process that has just been conducted, with a view to improving the next training transfer evaluation.

Table 2. Feedback audiences and formats during field tests.

Quality review of the first step “Analysis of training objectives, behaviors to be transferred and factors influencing the transfer” allows establishing that learning transfer analysis using a Table of Specifications (ToS), as proposed in the CCQCTTE method, allowed evaluators to better control validity and, later at step seven, feedback diagnosticity. Improvements are often possible, in particular regarding the validity criterion. These findings should lead to a first step 8 macro-regulation output: a series of (8.a) proposals to improve the next ToS.

The review of the second CCQCTTE step “Design of the learning assessments framework” highlights the need to create questionnaires in two parts, “Learning transfer behaviors evaluation” and “Professional environment transfer factors”, while questioning different stakeholders (managers, trainees, trainers, colleagues, etc.) improves the validity of the results. Taking measurements before training (pretest) and after (immediate post-test and three-month delayed post-test) also improves the sensitivity of the results. A second final product then takes the form of (8.b) recommendations with a view to an improved evaluation plan.

The review of step 3 “Item writing and banking for learning transfer” leads to reviewing the two ways in which the item banks were built, one related to trainees’ learning transfer behaviors and the other related to the professional environment factors that influence learning transfer. The third final product should take the form of an (8.c) items’ quality report.

Concerning CCQCTTE step 4 “Stakeholders information”, the quality review leads to reflecting on improvement paths for better communication with stakeholders. At the end of this review, the idea is to propose a first (8.d1) list of recommendations to improve the pieces of information to be communicated and another (8.d2) list of proposals for mobilizing the stakeholders in a more efficient way.

The review of step 5 “Learning transfer data collection” leads us to analyze retrospectively various aspects: the presentation of the questionnaires, their modes of transmission, and how the answers are collected, in order to improve the next learning transfer data collection. The opinions of trainees and other stakeholders are very helpful during this review. At the end of this step 5 review, it is therefore necessary to arrive at an (8.e) list of recommendations to improve data collection.

The review of CCQCTTE step 6 “Learning transfer data processing” will focus on improvements at three levels: descriptive analyses (6.a), trainees’ individual performance evolution (6.b) and professional environment impact analyses (6.c). The output of this review is an (8.f) list of recommendations for improving data processing in order to produce a more reliable version of the results.

Finally, the last review in the context of this CCQCTTE macro-regulation concerns step 7 “Diagnostic feedbacks”. This review aims to improve the information and transmission procedures for the stakeholders involved in the learning transfer evaluation. At the end of this review, two types of recommendations are expected: an (8.g1) list of recommendations on feedback content as well as an (8.g2) list of recommendations to improve the procedures for feedback transmission.

Ideally, all stakeholders should validate these lists of recommendations. A cost analysis of the proposed improvements should also complete the review, allowing improvement priorities to be set considering the resources available for the next learning transfer assessment.

7. Discussion

Our main objective was to design and field test a method for building high-quality training transfer assessments. In relation to this goal, we defined five sub-objectives: ease of use; implementation of best practices; a cyclical quality approach; taking into account transfer factors; and diagnostic feedbacks.

This led us to develop a method we entitled Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE).

Concerning the ease of use of the method, the three years of research and development led to an 8-step cycle with well-defined processes and outputs for each step. These results allow non-specialists to use the CCQCTTE, but in terms of limitations we note that step 1 “Analysis” remains time-consuming and that, even with a methodological guide such as the one we published in 2017 (Gilles, Chochard et al., 2017), it is difficult to start with this method without the accompaniment of a more experienced person (a role played by PEERS students). It is obvious that the support of a person specialized in the implementation of the method facilitates its appropriation.

Regarding the consideration of best practices in learning transfer assessment development, different aspects of the method in relation to the quality criteria make it possible to offer guarantees in terms of: validity and diagnosticity, with the elaboration of the specification table; reliability and equity, with the standardization of the pretest, the immediate post-test and the delayed post-test (3 months later); sensitivity, with 7-level Likert item scales; acceptability, with the involvement of stakeholders; communicability, with the information plan; authenticity, with the transfer factors taken into account within a dedicated questionnaire; and practicability, with an 8-step procedure that is beginning to be well documented. One of the limitations at this level lies in the fact that many documents exist but in French, which limits a wide diffusion of the method, at least as we write these lines.

Concerning the cyclical quality approach, besides the fact that we have integrated the reflection on quality criteria, there is still much to be done to specify and formalize the quality controls at the different steps of the CCQCTTE. Nevertheless, step 8 of macro-regulation brings real added value and transforms the cycle, through the recommendations produced, into a kind of quality spiral which makes it possible to envisage a continuous improvement of transfer assessment quality by drawing the lessons of a previous cycle to improve the next learning transfer assessment development. In terms of limitations, to date there is no software or online platform yet to implement the 8-step CCQCTTE method. There is currently a Docimo platform (Gilles & Tinnirello, 2017) that facilitates the use of a similar cycle, but at level 2 of Kirkpatrick’s model. Implementing the CCQCTTE in the Docimo platform is possible, but many adjustments would need to be made to add the specificities related to learning transfer assessments.

Regarding the consideration of transfer factors, the CCQCTTE proposes a specific dedicated questionnaire with three categories of items according to Baldwin & Ford (1988): 1) trainee characteristics; 2) training design; and 3) work environment. For the latter, following Chochard’s (2012) propositions, we distinguished four subcategories: a) supervisor support; b) colleagues’ support; c) opportunities for implementation; and d) organization characteristics. Current treatments provide descriptive feedback on participants’ perception of transfer factors. However, in terms of limitations, we have not yet clarified the procedures that would make it possible to link the levels of learning transfer on the job with these factors influencing training transfer. The idea is to visualize the effects of these transfer factors in order to facilitate decisions concerning the reduction of their impact when they slow down trainees’ learning transfer.

Regarding diagnostic feedbacks, the analysis made during the first CCQCTTE step using a specification table makes it possible to point out the important transfer outcomes, in terms of behaviors, that trainees must be able to put into practice when they are back on the job after the training course. These priority transfer objectives can then be translated into items in the pretest questionnaire before training and in the post-test questionnaires immediately after training and 3 months later. This therefore allows feedbacks that highlight the strengths and weaknesses in learning transfer and that show the evolution over time when pretests are compared to post-tests. These feedbacks are useful not only for trainees, but also for trainers, who can learn from learning transfer gains, just as there are learning gain calculation possibilities for Kirkpatrick’s level 2 (Cox & Vargas, 1966; Mac Guigan, 1967; D’Hainaut, 1973), in order to improve training transfer efficiency.
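As an illustration of a learning-gain calculation in the spirit of the works cited, a relative (normalized) gain on a 7-point scale can be computed as the share of the possible improvement actually achieved; the exact index used by those authors may differ from this sketch.

```python
def relative_gain(pre: float, post: float, maximum: float = 7) -> float:
    """Relative transfer gain on a Likert scale with the given maximum.

    Returns (post - pre) / (maximum - pre), i.e. the fraction of the
    remaining margin for improvement that was achieved; 0.0 when the
    pretest already reached the scale maximum.
    """
    if pre == maximum:
        return 0.0
    return (post - pre) / (maximum - pre)
```

For example, moving from 3 to 5 on a 7-point scale achieves half of the available margin, so the relative gain is 0.5.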

8. Conclusion

The Construction Cycle and Quality Controls for Training Transfer Evaluations (CCQCTTE) was designed and tested as part of the PEERS program (Gilles, Chochard et al., 2017) during an R&D collaboration between the Faculty of Education of the University of Quebec in Montreal (Canada) and the University of Teacher Education of the State of Vaud (Switzerland). The CCQCTTE has a solid methodological foundation. On the one hand, we relied on the founding principles of transfer evaluation established by Baldwin & Ford (1988) and updated by Chochard (2012). On the other hand, we adapted the PDCA quality process (Deming, 1986), adjusted for learning evaluations by Gilles (2002), to the construction of training transfer evaluations. These preoccupations fit into a worldwide trend of introducing quality management in training, in a context of huge investments in the field of continuing education.

The 8 steps of the CCQCTTE (Analysis, Design, Item writing, Information, Data collection, Data processing, Feedback, Macro-regulation) were implemented during the transfer evaluations of 10 continuing training courses. These took place in Montreal, Canada, and in Lausanne, Switzerland. During the development and implementation of the evaluation devices using the CCQCTTE model, we carried out quality controls with reference to 9 criteria (validity, reliability, sensitivity, diagnosticity, equity, practicability, communicability, authenticity, acceptability).

At step 1 “Analysis”, using a Table of Specifications (ToS) allows evaluators to better control validity and, later at step seven, feedback diagnosticity. During step 2 “Design”, we highlighted the need to create questionnaires in two parts, “Learning transfer behaviors evaluation” and “Professional environment transfer factors”, as well as the importance of questioning different stakeholders. The implementation of step 3 showed the interest of distinguishing two types of item banks, one related to the behaviors to be transferred by the trainees and the other related to the factors influencing training transfer. Concerning step 4 “Information”, the quality review leads to reflecting on improvement paths for better communication with stakeholders. For step 5 “Data collection”, the implementation schedule is crucial and the main quality control concern is the reliability and practicability of the pretest, immediate post-test and 3-months-later post-test questionnaires used to evaluate the evolution of learning transfer on the job. In step 6 “Data processing”, treatments are made to highlight these evolutions and the impact of the professional environment on trainees’ transfer performances. On the basis of reliable results, the aim of step 7 “Feedback” is to transmit, within a short period of time, personalized diagnostic feedbacks adapted to trainees’ and stakeholders’ needs. The “diagnosticity” criterion is at the forefront of this seventh step, providing a diagnosis of strengths and weaknesses related to training transfer. Finally, step 8 “Macro-regulation” permits reviewing the operations and outputs of the previous seven steps with a view to improving the next training transfer evaluation. This step 8 brings real added value and transforms the cycle, through the recommendations produced, into a kind of quality spiral which makes it possible to envisage a continuous improvement of training transfer evaluations.

In terms of limitations, we note that step 1 “Analysis” remains time-consuming and that it is difficult to start without the accompaniment of an experienced CCQCTTE expert. But this is the price to pay to produce feedbacks that are useful not only for trainees and trainers, but also for training managers who want to improve training systems and make them more efficient in terms of transfer.


Acknowledgements

The authors are grateful to the trainers who participated in the testing of the CCQCTTE and to the students of the PEERS Program who were involved in its elaboration.


1Students from the workplace trainers certificate program.

2Students from the educational sciences and practices joint master degree of HEP Vaud and the University of Lausanne.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


[1] AERA, APA, NCME (1999). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
[2] Alvarez, K., Salas, E., & Garofano, C. M. (2004). An Integrated Model of Training Evaluation and Effectiveness. Human Resource Development Review, 3, 385-416.
[3] Anderson, L. W. (2002). Curricular Alignment: A Re-Examination. Theory into Practice, 41, 255-260.
[4] Antonioni, D. (1994). The Effects of Feedback Accountability on upward Appraisal Ratings. Personnel Psychology, 47, 349-356.
[5] Atkinson, R. C. (1972). Ingredients for a Theory of Instruction. American Psychologist, 27, 921-931.
[6] Baldwin, T. T., & Ford, J. K. (1988). Transfer of Training: A Review and Directions for Future Research. Personnel Psychology, 41, 63-105.
[7] Bates, R., Holton, E. F., & Hatala, J. P. (2012). A Revised Learning Transfer System Inventory: Factorial Replication and Validation. Human Resource Development International, 15, 549-569.
[8] Best, M., & Neuhauser, D. (2006). Heroes and Martyrs of Quality and Safety—Walter A Shewhart, 1924, and the Hawthorne Factory. BMJ Quality & Safety, 15, 142-143.
[9] Borg, W. R., & Gall, M. D. (1989). Educational Research. Longman.
[10] Carnevale, A. P., & Desrochers, D. (1999). Training in the Dilbert Economy. Training & Development, 53, 32-36.
[11] Cheng, E. W. L., & Hampson, I. (2007). Transfer of Training: A Review and New Insights. International Journal of Management Reviews, 10, 327-341.
[12] Chochard, Y. (2012). Les variables influençant le rendement des formations managériales: Une étude de cas multiple suisse par la méthode de l’analyse de l’utilité. Université de Fribourg.
[13] Chochard, Y. (2018). Les apports de la formation continue offerte aux organismes communautaires: Étude de cas d’une formation. La revue canadienne pour l’étude de l’éducation des adultes, 50, 32-46.
[14] CHUV (2017). Rapport d’activités 2017 du Centre hospitalier universitaire vaudois. Lausanne: CHUV.
[15] Cox, R. C., & Vargas, J. S. (1966, April). A Comparison of Item Selection Techniques for Norm-Referenced and Criterion-Referenced Tests. In Annual Conference of the National Council on Measurement in Education. Chicago.
[16] D’Hainaut, L. (1973). Étude d’une nouvelle variable pour l’analyse statistique des expériences pédagogiques. Bulletin de psychologie, 26, 622-630.
[17] Deming, W. E. (1986). Out of the Crisis (p. 88). MIT Press.
[18] Deming, W. E. (1993). The New Economics for Industry, Government, and Education. MIT Press.
[19] Downing, S. M. (2011). Twelve Steps for Effective Test Development. In S. M. Downing, & T. M. Haladyna (Eds.), Handbook of Test Development (pp. 3-24). Routledge.
[20] Dunberry, A., & Péchard, C. (2007). L’évaluation de la formation dans l’entreprise-état de la question et perspectives.
[21] Fives, H., & Didonato-Barnes, N. (2013). Classroom Test Construction: The Power of a Table of Specifications. Practical Assessment, Research, and Evaluation, 18, 1-7.
[22] Fleishman, E. (1953). Leadership Climate, Human Relations Training, and Supervisory Behavior. Personnel Psychology, 6, 205-222.
[23] Formaeva (2011). Les pratiques d’évaluation des formations des entreprises françaises en 2010. Formaeva.
[24] Formaeva (2012). Comparaison des pratiques d’évaluation des formations en France et au Québec. Formaeva.
[25] Frash, R., Kline, S., Almanza, B., & Antun, J. (2008). Support for a Multi-Level Evaluation Framework in Hospitality Training. Journal of Human Resources in Hospitality & Tourism, 7, 197-218.
[26] Georgenson, D. L. (1982). The Problem of Transfer Calls for Partnership. Training & Development Journal, 36, 75-78.
[27] Gilles, J.-L. (2002). Qualité spectrale des tests standardisés universitaires—Mise au point d’indices édumétriques d’analyse de la qualité spectrale des évaluations des acquis des étudiants universitaires et application aux épreuves MOHICAN check up ’99 (Thèse de doctorat en sciences de l’éducation). Université de Liège, Faculté de psychologie et des sciences de l’éducation.
[28] Gilles, J.-L. (2017). Origin, Foundation, Objectives, and Original Aspects of the PEERS Program Linking Research and Training in Internationalization of Teacher Education. In J.-L. Gilles (Ed.), Linking Research and Training in Internationalization of Teacher Education with the PEERS Program: Issues, Case Studies and Perspectives (pp. 25-48). Peter Lang.
[29] Gilles, J.-L., & Lovinfosse, V. (2004). Utilisation du cycle SMART de gestion qualité des évaluations standardisées dans le contexte d’une Haute Ecole: Regard critique en termes de validité, fidélité, sensibilité des mesures, diagnosticité, praticabilité, équité, communicabilité et authenticité. In S. Arzola, R. Vizcarra, A. Cornejo, & G. Undurraga (Eds.), World Association for Educational Research (WAER)—XIV Congreso Mundial de Ciencias de la Educación: Educadores para una Nueva Cultura—Summary of papers abstracts. Pontificia Universidad Católica de Chile, Facultad de Educación.
[30] Gilles, J.-L., & Tinnirello, S. (2017). DOCIMO: An Online Platform Dedicated to the Construction and Quality Management of Learning and Impact Assessments in the Digital Age. Poster presented at the #dariahTeach Open Resources Conference.
[31] Gilles, J.-L., Gutmann, C., & Tedesco, B. (2012). Colloque “PEERS Projects, a New Key Component for the Internationalization of Teacher Education”. In G. Baillat (Ed.), Livre des résumés du 17e Congrès de l’Association mondiale des sciences de l’éducation (pp. 108-111). Université de Reims.
[32] Gilles, J.-L., Chochard, Y., Berset, T., Bieri, S., El Guermai, J., Graveline, A. et al. (2017). Cycle de construction et de gestion qualité d’évaluation du transfert (CGQET). Université du Québec à Montréal.
[33] Gilles, J.-L., Piette, S.-A., Detroz, P., Tinnirello, S., Pirson, M., Dabo, M., & Lê, H. (2005). The Electronic Construction and Quality Control in Standardized Testing Platform Project (e-C&QCST). In A. Demetriou, & E. Dochy (Eds.), Proceedings of European Association for Research on Learning and Instruction (EARLI) (p. 429).
[34] Hale, J. (2002). Performance-Based Evaluation. Jossey-Bass/Pfeiffer.
[35] Hedge, J. W., Borman, W. C., & Birkeland, S. A. (2001). History and Development of Multisource Feedback as a Methodology. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The Handbook of Multisource Feedback (pp. 15-32). Jossey Bass.
[36] Holton III, E. F., Bates, R. A., & Ruona, W. E. A. (2000). Development of a Generalized Learning Transfer System Inventory. Human Resource Development Quarterly, 11, 333-359.
[37] Holton, E. F. (1996). The Flawed Four-Level Evaluation Model. Human Resource Development Quarterly, 7, 5-21.
[38] Jehanzeb, K., & Bashir, N. A. (2013). Training and Development Program and its Benefits to Employee and Organization: A Conceptual Study. European Journal of Business and Management, 5, 243-252.
[39] Kirkpatrick, D. L. (1959). Techniques for Evaluating Training Programs. Journal of the American Society of Training Directors, 13, 21-26.
[40] Kirkpatrick, D. L. (1996). Great Ideas Revisited. Training and Development, 50, 54-59.
[41] Kirkpatrick, D. L. (1998). Evaluating Training Programs (2nd ed.). Berrett-Koehler Publishers Inc.
[42] Kirkpatrick, D. L. (2006). Seven Keys to Unlock the Four Levels of Evaluation. Performance Improvement, 45, 5-8.
[43] Kirkpatrick, D. L., & Kirkpatrick, J. D. (2008). Implementing the Four Levels: A Practical Guide for Effective Evaluation of Training Programs (153 p). Berrett-Koehler Publishers, Inc.
[44] Loiselle, J. (2001). La recherche développement en éducation: Sa nature et ses caractéristiques. In M. Anadón (Ed.), Nouvelles dynamiques de recherche en éducation (pp. 77-97). Les Presses de l’Université de Laval.
[45] McGuigan, F. J. (1967). The G Statistic: An Index of Amount Learned. NSPI Journal, 6, 14-16.
[46] Mathews, B. P., Ueno, A., Kekäle, T., Repka, M., Pereira, Z. L., & Silva, G. (2001). Quality Training: Needs and Evaluation—Findings from a European Survey. Total Quality Management, 12, 483-490.
[47] McKenna, J. F. (1990). Take the “A” Training. Industry Week, 239, 22-29.
[48] Meignant, A. (2009). Manager la formation (8e éd). Liaisons.
[49] MESS (2001). Politique gouvernementale—L’action communautaire: Une contribution essentielle à l’exercice de la citoyenneté et au développement social du Québec. Québec.
[50] Morissette, D. (1996). Guide pratique de l’évaluation sommative—Gestion des épreuves et des examens. Editions du renouveau pédagogique.
[51] Mwema, N. W., & Gachunga, H. G. (2014). The Influence of Performance Appraisal on Employee Productivity in Organizations: A Case Study of Selected WHO Offices in East Africa. International Journal of Social Sciences and Entrepreneurship, 1, 324-337.
[52] Nightingale, P., & O’Neil, M. (1994). Achieving Quality Learning in Higher Education. Kogan Page.
[53] Nitko, A., & Brookhart, S. (2007). Educational Assessment of Students (5th ed.). Pearson Merrill Prentice Hall.
[54] Phillips, J. J., & Stone, R. D. (2002). How to Measure Training Results. McGraw-Hill.
[55] Pottiez, J. (2017). L’évaluation de la formation—Pilotez et maximisez l’efficacité de vos formations (2e éd). Dunod.
[56] Ramsden, P. (1991). A Performance Indicator of Teaching Quality in Higher Education: The Course Experience Questionnaire. Studies in Higher Education, 16, 129-150.
[57] Rodriguez, J., & Walters, K. (2017). The Importance of Training and Development in Employee Performance and Evaluation. World Wide Journal of Multidisciplinary Research and Development, 3, 206-212.
[58] Saks, A. M. (2002). So What Is a Good Transfer of Training Estimate? A Reply to Fitzpatrick. The Industrial-Organizational Psychologist, 39, 29-30.
[59] Segers, M., Dochy, F., & Cascallar, E. (2003). Optimising New Modes of Assessment: In Search of Qualities and Standards. Kluwer Academic Publishers.
[60] Shewhart, W. A. (1931). Economic Control of Quality of Manufactured Product (501 p). Van Nostrand.
[61] Shewhart, W. A. (1958). Nature and Origin of Standards of Quality. Bell System Technical Journal, 37, 1-22.
[62] Shewhart, W. A. (1989). Les fondements de la maîtrise de la qualité. Economica.
[63] Van Buren, M. E., & Erskine, W. (2002). The 2002 ASTD State of the Industry Report (Vol. 8). American Society for Training & Development.
[64] Van den Bossche, P., Segers, M., & Jansen, N. (2010). Transfer of Training: The Role of Feedback in Supportive Social Networks. International Journal of Training and Development, 14, 81-94.
[65] Van der Maren, J.-M. (2003). Chapitre 5. La recherche de développement. In La recherche appliquée en pédagogie (2e éd., pp. 107-124). De Boeck Supérieur.
[66] Yamnill, S., & McLean, G. N. (2001). Theories Supporting Transfer of Training. Human Resource Development Quarterly, 12, 195-208.
[67] Zink, K. J., & Schmidt, A. (1995). Measuring Universities against the European Quality Award Criteria. Total Quality Management, 6, 547-561.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.