Design and Evaluation of a User Interface for an Autonomous Agricultural Sprayer

Abstract

This paper describes the design and evaluation of a user interface for a remotely supervised autonomous agricultural sprayer. The interface was designed to help the remote supervisor to instruct the autonomous sprayer to commence operation, monitor the status of the sprayer and its operation in the field, and intervene when needed (i.e., to stop or shut down). Design principles and guidelines were carefully selected to help develop a human-centered automation interface. Evaluation of the interface using a combination of heuristic, cognitive walkthrough, and user testing techniques revealed several strengths of the design as well as areas that needed further improvement. Overall, this paper provides guidelines that will assist other researchers to develop an ergonomic user interface for a fully autonomous agricultural machine.

Share and Cite:

Edet, U., Ogidi, F. and Mann, D. (2022) Design and Evaluation of a User Interface for an Autonomous Agricultural Sprayer. Agricultural Sciences, 13, 221-243. doi: 10.4236/as.2022.132016.

1. Introduction

For decades, researchers and manufacturers have been working to make agricultural machines, such as tractors and sprayers, function autonomously. This has resulted in the creation of different machine design concepts: 1) retain operator station, 2) eliminate operator station, 3) integrated tractor, and 4) swarm/fleet [1]. Creating a functional autonomous agricultural machine is just one hurdle that must be overcome by the design engineer. Another challenge is designing the user interface that will enable farmers to monitor the autonomous machine remotely during operation to minimize problems that may arise due to unexpected situations or system malfunctions [2]. The user interface should provide the human with the information needed to perform their responsibilities [3]. If designed poorly, the interface will impair the supervisor's ability to understand the current situation and make decisions [4]. It will also increase error rates, training time and cost, stress, and frustration [5]. Therefore, it is essential that adequate effort be dedicated to designing an effective interface.

Reference [4] recommended that an automation interface should be human-centered rather than machine-centered if one is to achieve optimum productivity and safety while using the interface. Other authors [6] [7] [8] have also presented similar approaches toward achieving a human-centered interface. Generally, the design of a human-centered automation interface involves four phases: 1) requirement and technology analysis, 2) design concept, 3) test and evaluation, and 4) final design [4].

The requirement analysis stage involved identifying and understanding the set goals (what it is and why it is needed), the roles and demographic information of the user, and the environmental conditions that may affect the interface [4]. Technological analysis, on the other hand, involved evaluating the various tools and technology available to determine which is most suitable for the intended user. Several authors [9] [10] [11] have presented different requirements for autonomous agricultural machines. Reference [11] noted that the automation interface, which they termed a "tractor mimic display", would be used to display telemetric data from the autonomous tractor unit, show the position of the tractor unit on a map, and display real-time video as seen through steerable cameras placed on the tractor unit. Reference [12] published a paper that investigated how humans can supervise autonomous agricultural machines. These authors discussed the challenges associated with designing an autonomous system that avoids both false positives and false negatives. Although they stated the desire to design such a system to err on the side of false positives (i.e., where the machine sees a problem where there is none), they further suggested the use of humans as "remote troubleshooters" to classify "positives" as either true or false. Some of the information included in the interface was status information, live video from the tractors, a map showing the tractors travelling down their paths, chemical level, warnings for hardware failure, and obstacle detection. Reference [13] described a system of autonomous tractors for orchard maintenance. The autonomous system comprised tractors (equipped with perception systems and capable of driving autonomously) and a remote supervisor who assigns tasks, responds to requests when the perception system is unable to decide how to deal with a detected obstacle, and tracks the fleet of autonomous tractors.

A detailed requirement and technological analysis of the automation interface for remotely supervised autonomous sprayers was reported by references [14] [15] [16] [17]. Their findings can be grouped into five categories: 1) the role of the human, 2) machine health and spraying status, 3) navigation features, 4) visual needs, and 5) warning and notification (Table 1). Overall, it is envisioned

Table 1. Summary of the requirement analysis for the design of an automation interface for remote supervision of an autonomous agricultural sprayer.

that the automation interface will provide telemetric data related to the autonomous agricultural machine, show the location of the autonomous machine within the context of its operating environment, notify the human when there is a situation that requires their assistance, and provide means to see what is happening.

The aim of this paper is to design a user-centered interface for autonomous agricultural sprayers that is based on established human factors principles.

2. Interface Design Considerations for Autonomous Agricultural Machines

As stated earlier, an interface for remotely supervising an autonomous agricultural machine is needed to enhance the operational efficiency and safety of the autonomous machine. Different types of interfaces exist, commonly grouped as command line (text-based), speech-based, menu, graphical user interface (GUI), gesture-based, and virtual reality [18]. The GUI takes advantage of the mouse or tap (on a touchscreen) while the command line relies mainly on the keyboard and requires expert knowledge to use. Previously, user interfaces in a control room were mainly command lines, rotary switches, and buttons, but with advances made over the years, GUIs have become more prominent [19]. This popularity is because a GUI is more flexible (amenable to multi-tasking), appealing, requires less expert knowledge, and is easy to use [5] [19].

2.1. Impact of Automation on the User Interface

Automation of agricultural machines presents many challenges that impact the design of the user interface. These challenges relate to the level of automation, the remote supervision concept employed, information transmission latency, automation logic, and the supervisor's situation awareness. The level of automation is related to the amount of human involvement. Generally, it can range from full automation, whereby the system requires no human involvement during task execution, to minimal automation that only provides basic data filtering or a set of alternative decisions or actions for the human to consider [20] [21]. In agriculture, automation of field machinery can be grouped into five levels: 1) guidance, 2) coordination and optimization, 3) operator-assisted autonomy, 4) supervised autonomy, and 5) full autonomy [22]. The characteristics of the user interface of an autonomous system are greatly influenced by the level of automation [9]. In conventional agricultural sprayers, the human operator relies on several onboard interfaces (or consoles), as well as visual scanning of the environment, to achieve high situation awareness of the activities of the sprayer. Functions related to the machine and spraying operation (speed, application rate, coverage map, boom height, spray pressure, etc.) are among the parameters presented on these interfaces. Information about the sprayer's intent, such as avoiding obstacles, may not be necessary to include in the interface since the operator can visually assess the situation [9]. With an increased level of automation, the type of information and the way it is displayed would likely change [23]. For example, since less control would be required from the human, only a few input functions will be available on the interface.
Likewise, since the role of the user is mainly supervisory, more information about the status of the task will be presented to the user through the interface so that the supervisor can monitor how the task is being executed and decide when it may be necessary to intervene. It is also envisioned that one farmer would be able to monitor the operations of multiple autonomous sprayers and other field machines [24] [25]. This means that the interface will present not only information about individual machines and the tasks performed but also their interaction with other machines in the field(s). Overall, for any task that is automated, a form of feedback must be sent to the human so that (s)he can be aware of its status or state in case the system fails to execute the task. This additional information will likely increase the complexity of the automation interface as well as the cognitive load on the human supervisor when using the interface, which may negatively affect the supervisor's situation awareness [4] [23]. Generally, the human supervisor of an autonomous sprayer would be expected to have high situation awareness (SA) of the location, activities, status, and surroundings of the autonomous sprayer to be able to perform his/her roles effectively and efficiently [26].

Similarly, the fact that the autonomous sprayer would be remotely supervised further complicates the design. The further the human is from the autonomous sprayer, the more the human's knowledge of the sprayer, its tasks, and its surroundings will be limited to (or depend on) the information presented on the automation interface [27], since the human might not be able to observe the sprayer and its operation directly. Hence, it is crucial that all the needed information be made available on the interface.

A technical challenge with the task of remotely supervising an autonomous machine through an interface is the latency associated with the wireless transfer of digital information back and forth between the machine and the interface [23]. Poor connectivity may delay the reception and transfer of information to and from the remote supervisor, resulting in delayed perception of information and execution of instructions.

Regarding the automation logic, the situation awareness of the supervisor may degrade if the autonomous sprayer acts in a manner that is different from the supervisor’s mental model or interacts with the supervisor in an uncommon or ineffective language [23]. Furthermore, the ability of the interface to draw the attention of the supervisor to critical information in a timely manner is impacted by automation. In a conventional sprayer, the human operator is actively engaged cognitively in the spraying operation. Hence, the operator has a good understanding and high retention of critical information [27]. With an autonomous sprayer, the frequency of human involvement will likely decline. Hence, the supervisor may require more time to detect a problem and to make intelligent decisions due to low situation awareness and the so-called out-of-the-loop syndrome [28]. To tackle these challenges, the interface would need high transparency, understandability, predictability, and proper use of salient features [28].

2.2. Interface Design Principles, Guidelines, and Standards

To address the above-mentioned design challenges, researchers have presented different guidelines to enhance the functionality and usability of the automation interface. These principles and guidelines were carefully selected to highlight those relevant to the design of the interface (see Appendix A). To simplify things, the authors divided these principles and guidelines into 10 sections: 1) sprayer status/activity awareness, 2) sprayer location/environment awareness, 3) overall mission awareness, 4) cognitive factors, 5) robustness, 6) safety, 7) information presentation, 8) handheld mobile device design, 9) warning/notification, and 10) standards. It is important to point out that this list not only accounts for principles and guidelines that are unique to the design of an automation interface but also incorporates some general guidelines relevant to interface design. A fully autonomous machine was also considered [22] when selecting the design principles and guidelines for the interface. However, the remote supervisor was envisioned to interact with the autonomous machine from the edge of the field (as opposed to the farm office). This modification was based on the findings of [15].

3. Overview of the User Interface

The interface was designed to monitor the operation of an autonomous plot sprayer that was developed by the UM-agBOT team at the University of Manitoba (Figure 1). The close-to-the-field remote supervision concept [15] was considered when designing the interface.

3.1. Interface Requirements

Based on a review of literature on autonomous machines, remote supervision, and interface design, it was determined that the remote supervisor of an autonomous sprayer should be able to:

1) Instruct the autonomous sprayer to commence operation.

2) Monitor status of the spraying operation (i.e., status of the task performed).

3) Monitor information about the health/state of the sprayer (non-task related telemetric).

4) Visualize the position and surroundings of the sprayer within the field (i.e., global view).

5) Receive notifications of important events and anomalies from the sprayer.

6) Query the sprayer about planned actions or changes of course.

7) Instruct the sprayer to stop or shut down, or to alter plans.

3.2. Scope of the User Interface

The interface was designed to help the user instruct the autonomous sprayer to

Figure 1. The UM-agBOT sprayer.

commence operation, monitor the status of the sprayer and its operation in the field, and intervene when needed (i.e., stop the machine and its spraying operation). Hence, functions needed to perform pre-field tasks (such as planning the spraying operation) and tasks after the machine had been stopped or the spraying operation had been completed were not considered when designing the interface.

3.3. The User Interface

The user interface was programmed using Qt and Microsoft .NET software, specifically for a tablet-sized device, to satisfy the lightweight and field-portable preference expressed by farmers during a prior survey study [14]. The touchscreen interaction method was selected due to its advantages (speed, ease of use, and preference) over other input methods [5]. Parameters (elements and features) included on the interface were based on the recommendations by [14] from their prior survey and on the telemetric information that could be obtained from the autonomous plot sprayer. Live data for these elements and features were obtained wirelessly from the autonomous sprayer, while an open-source weather application was used to obtain weather data. The data from the autonomous sprayer and the weather application were updated (refreshed) every 350 ms and 10 min, respectively. A detailed layout of the interface is presented in Figure 2.
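The two refresh rates imply a simple dual-rate polling scheme. The sketch below is a hypothetical illustration (not the authors' Qt/.NET implementation) of deciding which data source is due for a refresh at a given elapsed time:

```python
# Refresh intervals taken from the text: sprayer telemetry every 350 ms,
# weather data every 10 min.
SPRAYER_PERIOD_MS = 350
WEATHER_PERIOD_MS = 10 * 60 * 1000

def sources_due(elapsed_ms):
    """Return the data sources due for a refresh at a given elapsed time.

    For simplicity, a source is due whenever the elapsed time is an exact
    multiple of its period; a real implementation would track per-source
    deadlines against a monotonic clock instead.
    """
    due = []
    if elapsed_ms % SPRAYER_PERIOD_MS == 0:
        due.append("sprayer")
    if elapsed_ms % WEATHER_PERIOD_MS == 0:
        due.append("weather")
    return due

print(sources_due(0))    # ['sprayer', 'weather']
print(sources_due(350))  # ['sprayer']
```

Decoupling the two rates this way keeps the fast telemetry loop independent of the slow weather query, so a delayed weather response need not stall the sprayer display.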

The design layout was restricted to landscape orientation to take advantage of the wider horizontal field of view and to prevent distortion that may occur when the tablet is tilted to portrait view. The “START” element was used to instruct the autonomous sprayer to commence operation while the “EMERGENCY STOP” element gave the supervisor the ability to stop the autonomous sprayer’s movement and spraying operation during unfavorable conditions without turning off the on-board computer, so that the sprayer was still able to maintain communication with the supervisor [11]. Video feeds showing the front view

Figure 2. Layout of the automation interface for remote supervision of an autonomous sprayer.

ahead of the sprayer and the left and right booms of the sprayer were included in the interface based on the findings of [16]. The purpose of the notification bar was to inform the supervisor about any abnormal condition and to provide information that would assist the supervisor in navigating the interface effectively. It comprised a notification box (for displaying messages) and a status indicator (for communicating the nature of the message). The status indicator was designed to have four distinct colors: black, green, yellow, and red. The black indicator was used to communicate general information to the supervisor, such as start-up instructions (which were not directly related to abnormalities in the sprayer or its operation). The green indicator was used to show that everything was normal, while the yellow indicator was used to draw the attention of the supervisor to any abnormality that did not require urgent action. A red indicator meant that a serious abnormality had occurred with the sprayer or spraying operation and required immediate user attention. The order of priority in which the status indicators and messages were displayed was, from highest to lowest, red (urgent), yellow (warning), black (instruction), and green (normal). This means that if, for example, a spray nozzle was plugged (i.e., red status) and the tank level was at a quarter of the tank capacity at the same time (i.e., yellow status), the notification box would present information about the plugged nozzle since it has a higher priority than the tank level. However, if there were two abnormalities with the same priority level, both error messages would be presented in an alternating sequence.
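The message-selection rule just described can be sketched as follows. This is a minimal illustration only; the function and class names are hypothetical, not the authors' implementation:

```python
from dataclasses import dataclass

# Priority order described in the text: red (urgent) > yellow (warning)
# > black (instruction) > green (normal). Lower value = higher priority.
PRIORITY = {"red": 0, "yellow": 1, "black": 2, "green": 3}

@dataclass
class Notification:
    status: str   # "red", "yellow", "black", or "green"
    message: str

def select_messages(notifications):
    """Return the message(s) for the notification box: only those at the
    highest active priority level. Ties are all returned so the display
    can alternate between them."""
    if not notifications:
        return []
    top = min(PRIORITY[n.status] for n in notifications)
    return [n.message for n in notifications if PRIORITY[n.status] == top]

# Example from the text: a plugged nozzle (red) outranks a low tank (yellow).
active = [
    Notification("yellow", "Tank level at one quarter of capacity"),
    Notification("red", "Spray nozzle plugged"),
]
print(select_messages(active))  # ['Spray nozzle plugged']
```

Returning all tied messages, rather than an arbitrary one, is what allows the alternating-sequence behavior for equal-priority abnormalities.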

Display elements used green, yellow, and red to indicate whether elements and/or features were within or outside the acceptable range (Figure 2). For the nozzle element, there was a need to also differentiate among three possible nozzle status conditions: on, off, and plugged (since the autonomous sprayer applied chemical/herbicide by spot spraying). To achieve this, a red rectangle with the inscription "X" was designed to represent a plugged nozzle, a green triangle represented a nozzle that was turned on, and a plain (uncolored) nozzle represented one that was turned off.
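The three nozzle states map naturally onto a small lookup table. The sketch below is illustrative only; the (shape, color, inscription) tuples are hypothetical stand-ins for the actual drawing code:

```python
# Mapping of nozzle states to the visual codings described in the text:
# plugged -> red rectangle inscribed with "X", on -> green triangle,
# off -> plain (uncolored) nozzle icon.
NOZZLE_GLYPH = {
    "plugged": ("rectangle", "red", "X"),
    "on":      ("triangle", "green", None),
    "off":     ("nozzle", None, None),
}

def nozzle_glyph(state):
    """Look up the (shape, color, inscription) glyph for a nozzle state."""
    if state not in NOZZLE_GLYPH:
        raise ValueError(f"unknown nozzle state: {state!r}")
    return NOZZLE_GLYPH[state]
```

Rejecting unknown states explicitly, rather than rendering a default icon, keeps a sensor glitch from being silently displayed as a normal nozzle.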

4. Evaluating the User Interface

There are different techniques for evaluating a user interface, including heuristic evaluation, usability testing, cognitive walkthrough, and user modeling [29]. These techniques have their merits and problems in relation to resource requirements, costs, results, and applicability [30]. For example, with usability testing, one can get direct feedback from actual users of the interface; however, it may be challenging and expensive to recruit these users to assess the interface. The heuristic technique is inexpensive, fast, and easy to use but does not involve actual users during the evaluation. Hence, some issues identified by the experts may not really be problems for the user (i.e., false positives) [31]. The cognitive walkthrough, on the other hand, can be carried out even without a fully functional prototype or end users. However, it is time consuming, focuses on examining a particular task rather than the entire interface, and lacks the user's input [32] [33], hence having the same issue of false positives [31]. Reference [34] also noted that heuristic evaluation identifies more high-level structural problems while user testing detects more severe problems. Overall, using multiple techniques helps to improve the evaluation process as well as increase the chance of identifying specific issues with the interface [30] [35].

In this study, a two-phase evaluation was used to assess the effectiveness and usability of the interface. During the first phase, the heuristic and cognitive walkthrough techniques were employed. Evaluation criteria were based on the design principles and design requirements (Appendix A). Depending on its severity, a high or low impact level was assigned to each individual problem. High impact meant that the problem had a significant effect on the user, preventing the user from performing a task or potentially resulting in damage, while low impact was assigned to mild problems or those related to optional features.

The second phase of the evaluation involved recruiting potential users to assess the interface. These participants were recruited through producer groups, grower associations, and university representatives (such as directors and coordinators). Due to the COVID-19 pandemic, participants were unable to come to the university lab to interact with the interface in person. Hence, evaluation of the interface was performed remotely (online) and asynchronously, in the form of an online survey developed using SurveyMonkey. The online survey comprised four sections. The first section gathered demographic information about the users (such as age, gender, and experience). The second section focused on assessing the overall layout of the interface. In the third section, participants were presented with video clips showing different scenarios of the spraying operation as seen through the automation interface and were asked to monitor (watch) the activities of the autonomous sprayer, identify any problem with the sprayer, explain the consequence of the problem should it persist, and describe how they would have used the interface to mitigate the problem (i.e., situation awareness assessment). A total of six video clips (one training video and five experimental tests) were presented in the third section. Participants were also asked to provide their overall perception of the interface and additional suggestions by completing a questionnaire at the end of the experiment. Usability assessment was based on the ability of the interface to enhance the situation awareness of the participants.

Endsley [36] defined situation awareness (SA) as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". The SA of the user can either impede or improve the user's decision-making and performance, depending on its level (i.e., high or low) [37]. A high level of SA enables the individual to act correctly and in a timely manner, even when faced with very complex and challenging tasks [38], while low SA can lead to error [4]. Reference [39] also attributed all the anomalies in their interface to a lack of awareness (i.e., low SA). Hence, a well-designed automation interface should promote high SA in the user [20]. What this implies for human-machine interaction in agriculture is that the interface should enhance the farmer's awareness of the machine's location, activity, status, surroundings, and overall mission [26].

5. Interface Evaluation Results

Consistent with previous research reported by [34], the heuristic technique detected more problems than the user testing. This could result from the differences in skill sets and tasks performed by the different evaluators. The users' interaction with the interface was scenario-based; hence, their evaluation of the interface was limited by the tasks they were required to perform. An expert conducting a heuristic evaluation (alongside a cognitive walkthrough) draws on their design skills, human factors knowledge, and set of heuristics or guidelines and evaluates every aspect of the interface (i.e., covers the generality of the design). Details of the findings are presented below.

5.1. Heuristic and Cognitive Evaluation

To simplify the discussion, results of the heuristic and cognitive evaluations were grouped into five sub-headings: 1) supervisor’s awareness (status, activities, location, and surrounding), 2) cognitive factors, 3) robustness, 4) information presentation, and 5) safety and warning design.

5.1.1. Supervisor’s Awareness

Comparing the elements on the automation interface with those recommended by [14] revealed that the design contained only some of the frequently used information that would enable the supervisor to assess the movement of the sprayer and the spraying operation (Table 2). It did not provide enough information about the machine health/status and network signal (GPS) to help the supervisor understand the situation and make intelligent decisions.

Furthermore, it was difficult to differentiate between when the sprayer operated autonomously (i.e., automation mode) and when the supervisor had taken over control of the machine to intervene (i.e., manual mode).

Regarding the coverage map, it was difficult to perceive the exact location of the autonomous sprayer. This may project the wrong location and even cause the supervisor to panic, especially if the sprayer is moving along the field boundaries. Similarly, the use of a white circle to indicate the direction of travel was not intuitive, since the supervisor would require additional cues (like the movement of the line) to comprehend the actual direction of motion.

5.1.2. Cognitive Factors

The main purpose of the video feed was to enable the supervisor to better understand any abnormalities in the sprayer, its activities or in the environment—

Table 2. Interface elements as determined by experienced sprayer operators [14]. Only elements and parameters in uppercase were included in the automation interface.

rather than for detecting abnormalities [40]. Hence, the three video feeds were only complementary sources of information. During the cognitive walkthrough, however, it was noticed that the user's attention was drawn mainly to the front-view video while less attention was paid to other regions of the interface (i.e., attention tunneling). The implication was that the supervisor may not perceive the feedback from the indicators in a timely manner or may even miss an important event, resulting in a poor decision. This issue can be resolved by placing the video feeds in a section of the interface that the supervisor's attention is not always drawn to (e.g., the bottom). Another option is to make the video appear only on demand or when the system detects an undesirable condition.

Regarding the spray tanks, the digital reading provided immediate level 1 and level 2 SA, while the color coding (green, yellow, red) aided the supervisor in achieving level 3 SA projection (i.e., whether or not to refill the tank). However, on closer inspection, it was noticed that the two tanks had different capacities, yet they appeared to be the same size on the interface. This is contrary to the mental model of the supervisor. The design may also impede the supervisor's judgement, since it is difficult to differentiate the fertilizer tank from the herbicide tank; the small font size of the tank labels may have contributed to this problem. A solution to these problems would be to resize the tanks and introduce discriminating features such as color, shape, and/or size.

5.1.3. Robustness

Although the design provided the functions that would enable the supervisor to start and stop the machine's movement and the spraying operation, it did not give the supervisor the flexibility to remotely turn on/off the ignition and the machine's onboard computer system if needed. This inference was based on [11], who proposed a safety protocol for an autonomous agricultural machine. Furthermore, only the application rate value, units, and label were presented to the supervisor. This design provided level 1 situation awareness to the user but failed to provide its meaning, i.e., level 2 SA (comprehension). Although this may not be a major concern for experienced supervisors (who know the desirable application rate), an inexperienced supervisor may find it difficult to determine when the value is within the desirable limit, thereby impeding their ability to take the necessary action (i.e., level 3 SA). One way to address this issue is to present the application rate as an analog reading. A second option would be to use color coding to indicate when the value is within the desirable limit.

5.1.4. Information Presentation

The font size was small and may impede the ability of the supervisor to obtain the actual reading of each function from the linear analog displays, as well as to perceive what each label (caption) represents. Increasing the font size, or using the recommended font height-to-width ratio and font style, would resolve this issue [5] [41]. Similarly, information regarding the weather (temperature, wind speed, and direction) was not readily available like the other elements; the supervisor had to click a weather button (i.e., "current weather" or "weather forecast") to obtain the data. Considering that the weather (temperature, wind speed, and direction) was indicated as frequently used information by experienced sprayer operators [14], it would be beneficial to present it on the interface at all times (as opposed to on demand).

The travel speed element (speedometer) is similar to those found in most vehicles, making the data easy to comprehend. The combination of analog and digital displays also facilitated level 3 SA for the supervisor. However, the inclusion of color coding in the design could result in misinterpretation of the travel speed. For example, if the autonomous sprayer is making a turn, it needs to slow down to perform this action smoothly, thereby going below the lower speed limit. As a result of the color coding, this action would be perceived by the supervisor, or reported by the machine, as an abnormality. A solution would be to highlight only the desirable speed range in green.
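The suggested fix can be illustrated with a small sketch (the range values below are hypothetical, for illustration only): the speedometer band is colored green only inside the desirable range and left uncolored elsewhere, so deliberate slow-downs are not flagged as abnormal:

```python
# Hypothetical desirable operating range, for illustration only.
DESIRABLE_RANGE_KMH = (8.0, 16.0)

def speed_band_color(speed_kmh, desirable=DESIRABLE_RANGE_KMH):
    """Return 'green' when the speed is within the desirable range,
    otherwise None (no color), so that slowing below the range for a
    turn does not trigger a yellow/red abnormality cue."""
    lo, hi = desirable
    return "green" if lo <= speed_kmh <= hi else None

print(speed_band_color(12.0))  # green
print(speed_band_color(4.0))   # None (slowing for a turn is not an alarm)
```

The design choice here is to reserve the alarm colors for genuine abnormalities and use green purely as positive confirmation of the desirable range.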

Overall, there was enough contrast between the interface elements and the background, and the design did not appear cluttered, an indication that the proper amount of information was presented on the screen. The use of red, yellow, and green colors was also consistent throughout the interface. A unique consideration is the placement of the spray boom and boom height elements on the interface. These elements were centrally placed to emphasize their frequency of use. Unlike in a conventional sprayer, this may not be appropriate for an autonomous sprayer: since the boom height and nozzles would be automatically controlled, the supervisor may only be interested in these elements when there is an abnormality. The supervisor may instead be more interested in knowing the overall state of the operation, regardless of the sprayer's condition (i.e., abnormal or normal). An existing feature that can fulfill this need is the coverage map. The shortcomings of the interface that were identified are highlighted in Table 3.

5.1.5. Safety and Warning Design

The presence of the notification bar made it possible to inform the supervisor about any abnormal condition and its severity, as well as to provide information that would assist the supervisor in navigating the interface effectively. However, it would be difficult for the supervisor to perceive this information when their attention is not focused on the interface, since no means of drawing the supervisor's attention (e.g., flashing text or sound) was included in the design. Also, the communication was one-sided, since no means was provided for the supervisor to request help when needed information was not available. These shortcomings may impede

Table 3. List of violations identified for the user interface (H = high, L = low).

the supervisor’s ability to make intelligent decisions (i.e., Level 3 SA projection). Table 3 presents a list of the problems identified during the evaluation. The table highlights the problems, their severity, and the design guidelines that were violated.

5.2. Usability Evaluation

Fifteen experienced male farmers attempted the survey; however, not all participants finished it. Twelve participants attempted the first two sections of the study, while eight completed all four sections. Among the twelve participants were ten farm owners, one custom applicator, and one farm worker. Participants resided in rural areas of the Canadian prairie provinces (mostly Manitoba) and were between 27 and 72 years of age (mean 49 ± 13.6). All but two participants had more than ten years' experience operating agricultural machines. Overall, participants had either basic knowledge of, or were very conversant with, autonomous agricultural machines and remote supervision, and five of them had monitored various farm activities remotely (i.e., spraying, combining, grain drying, grain handling, grain cleaning, and bin conditions (temperature and moisture)). Much valuable feedback was obtained from the participants, as described in the following sections.

5.2.1. User’s Awareness

Participants were able to detect the problems primarily with the help of the message appearing on the notification bar, followed by the display elements, although not all of them were 100% accurate (Table 4). Of all the problems presented, "low oil level" (which only appeared as a message on the notification bar) had the lowest detection rate. Some participants also stated that the visual warning cues (yellow and red colors) were not effective in informing them about the problem, especially when they were not looking at the screen/interface (i.e., when they were distracted).

Table 4. Participants' ability to detect the problems presented.

Similarly, participants noted that the plugged nozzle display element (i.e., a red square with an "X") was not sufficient to understand the problem (level 2 SA) because it only informed them of the problem (a plugged nozzle) but failed to indicate its cause (high/low pressure). Hence, there is a need to modify the warning messages to provide more detail about the cause of any problem, and to make that detail easily accessible to the user.
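One way to act on this recommendation is to carry the probable cause inside the warning message itself, so the notification bar can support level 2 SA directly. The sketch below is illustrative only; the field names, severity levels, and example values are assumptions, not part of the interface described in the paper.

```python
# Illustrative sketch (not from the paper): a warning record that pairs the
# problem with its probable cause, so the notification text supports
# comprehension (level 2 SA) rather than detection alone.

from dataclasses import dataclass

@dataclass
class SprayerWarning:
    element: str    # which display element raised the warning
    problem: str    # what is wrong (supports level 1 SA)
    cause: str      # probable cause (supports level 2 SA)
    severity: str   # e.g., "low" or "high"

    def notification_text(self) -> str:
        """Compose the message shown on the notification bar."""
        return f"[{self.severity.upper()}] {self.element}: {self.problem} (cause: {self.cause})"

w = SprayerWarning(element="Nozzle 4", problem="plugged nozzle",
                   cause="low boom pressure", severity="high")
print(w.notification_text())
# [HIGH] Nozzle 4: plugged nozzle (cause: low boom pressure)
```

Structuring the message this way also makes it straightforward to let the user drill down for further detail on demand, as participants requested.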

All participants were able to describe the consequence of the problem should it persist (i.e., level 3 SA), probably because of their years of experience operating agricultural machines. They also indicated that they would press the "emergency stop" button to mitigate the effect of the problems (as was expected). However, some participants felt that the interface should have given them the opportunity to resolve the issue remotely before opting to stop the sprayer (e.g., using the interface to flush a plugged nozzle or to reset the application rate), thereby saving them a trip to the field. They also wanted more details about the problem to help them decide whether stopping the machine was necessary.
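The participants' suggestion amounts to offering a ranked set of interventions per problem, with the emergency stop as the last resort. A minimal sketch of that idea follows; the problem names and action identifiers are hypothetical, chosen only to mirror the examples participants gave.

```python
# Hypothetical sketch of remote intervention options: before falling back to
# the emergency stop, the interface offers actions that attempt to resolve the
# problem remotely. Problem and action names are illustrative assumptions.

REMOTE_ACTIONS = {
    "plugged nozzle": ["flush_nozzle", "emergency_stop"],
    "wrong application rate": ["reset_application_rate", "emergency_stop"],
    "low oil level": ["emergency_stop"],  # no remote remedy; stopping is the only option
}

def intervention_options(problem: str) -> list:
    """Return the remote actions for a problem, least disruptive first."""
    return REMOTE_ACTIONS.get(problem, ["emergency_stop"])

print(intervention_options("plugged nozzle"))
# ['flush_nozzle', 'emergency_stop']
```

Presenting the least disruptive option first matches the participants' preference to keep the sprayer working whenever the problem can be fixed from the interface.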

Additionally, it is important to understand which machine functions or features work simultaneously so that the right details and instructions can be presented directly to support comprehension, thereby reducing the cognitive load on the user [4]. For example, in one of the scenarios, participants were notified that the autonomous sprayer was moving below the desired speed, but their responses showed that they were more concerned about the consequences of the slow speed, i.e., an inaccurate application rate, the field not being finished on time, and cost.

5.2.2. Interface Layout, Perception, and Information Need

Participants did not complain about the legibility of the text on the interface or the visibility of the display elements. However, there were concerns regarding the video quality and the contrast between display elements and the background, and two participants raised concerns about the weather pop-up (which overlapped the application rate and spray nozzle display elements). Overall, 63% (5 out of 8) of the participants who completed the end-of-experiment questionnaire somewhat agreed that the interface was attractive. They also felt that the interface was well designed and easy to understand, and said they would either use it or recommend it to a friend. Six of the twelve participants also recommended additional information that should be included on the interface: application pressure (nozzle pressure, sprayer pressure), acres applied, acres to be applied, wind direction, fuel quantity, engine temperature, oil pressure, a visual view of boom height from the ground, and better video quality to see the spray pattern and other areas clearly. Other suggestions included: 1) adding an auditory alert (to draw attention), 2) better widget-background contrast, 3) adding a rotating center camera, 4) making the weather pop-up less obstructive, 5) a larger coverage map, 6) adding the ability to rotate the camera orientation, 7) providing a bigger screen, 8) reducing the amount of information displayed, and 9) allowing the user to control the amount of information shown.

6. Conclusions

A user interface for remotely supervising the operation of an autonomous agricultural sprayer was designed. Evaluation of the interface using a combination of heuristic evaluation, cognitive walkthrough, and usability testing revealed several strengths and weaknesses of the design. Although text legibility was raised as an issue during the heuristic and cognitive walkthrough evaluations, it was not a concern during the usability evaluation. Common weaknesses of the design were inadequate information about problems and the weather pop-up (which overlapped the application rate and spray nozzle displays). Potential users of the interface do not want to simply detect a problem and hit the "emergency stop" button; they also want a better understanding of the problem to help them decide whether stopping the machine is required. Hence, there is a need to improve the interface to enhance the user's comprehension. Furthermore, it is important to complement the existing visual warning cues with other modalities that can draw the attention of a distracted user. It may also be beneficial to give the user the opportunity to first resolve an issue remotely through the interface, since stopping the autonomous machine may not always be necessary.

These modifications will further enhance the farmer's awareness of the sprayer's status, activity, location, and environment when using the interface. Overall, based on the evaluation, we have great confidence in the design. The guidelines provided, alongside the users' feedback, will help other designers develop an ergonomic user interface for remotely supervised autonomous agricultural machines.

Limitation

A major limitation of this study was the number of expert reviewers used during the evaluation of the interface. Although a minimum of three to five experts has been recommended as the optimal number of reviewers [30] [42] [43], fewer were available due to time and resource constraints. Another limitation is the number of participants recruited for the usability study. Data collection was carried out during the pandemic, which restricted how we could recruit participants.

Acknowledgements

The authors would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC).

Appendix

Appendix 1. Design principles, guidelines, and standard that relate to the design of an automation interface.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Mann, D.D., Edet, U., Green, M., Folorunsho, O., Simundsson, A. and Ogidi, F. (2021) Real-Time Sensory Information for Remote Supervision of Autonomous Agricultural Machines. In: Technology in Agriculture, IntechOpen, London, 1-29.
[2] Bechar, A. and Vigneault, C. (2016) Agricultural Robots for Field Operations: Concepts and Components. Biosystems Engineering, 149, 94-111.
https://doi.org/10.1016/j.biosystemseng.2016.06.014
[3] Dorais, G., Bonasso, R.P., Kortenkamp, D., Pell, B. and Schreckenghost, D. (1999) Adjustable Autonomy for Human-Centered Autonomous Systems. Working Notes of the Sixteenth International Joint Conference on Artificial Intelligence Workshop on Adjustable Autonomy Systems, Stockholm, Sweden, 31 July-6 August 1999, 16-35.
[4] Endsley, M.R., Bolté, B. and Jones, D.G. (2003) Designing for Situation Awareness. An Approach to User-Centered Design. Taylor and Francis, London.
[5] Stone, D., Jarrett, C., Woodroffe, M. and Minocha, S. (2005) User Interface Design and Evaluation. Elsevier, Amsterdam.
[6] Gulliksen, J., Göransson, B. and Lif, M. (2001) A User-Centered Approach to Object-Oriented User Interface Design. In: van Harmelen, M., Ed., Object Modeling and User Interface Design, Addison-Wesley, Boston, Chapter 8, 283-312.
[7] Wong, M.L., Khong, C.W. and Thwaites, H. (2012) Applied UX and UCD Design Process in Interface Design. Procedia—Social and Behavioral Sciences, 51, 703-708.
https://doi.org/10.1016/j.sbspro.2012.08.228
[8] Taslim, A. (2018) Towards a Framework for Integration of User-Centered Design and Agile Methodology. Journal of Computer Science, 3, 1-10.
https://doi.org/10.31357/jcs.v3i1.3000
[9] Johnson, D.A., Naffin, D.J., Puhalla, J.S., Sanchez, J. and Wellington, C.K. (2009) Development and Implementation of a Team of Robotic Tractors for Autonomous Peat Moss Harvesting. Journal of Field Robotics, 26, 549-571.
https://doi.org/10.1002/rob.20297
[10] Berenstein, R., Edan, Y. and Halevi, I.B. (2012) A Remote Interface for a Human-Robot Cooperative Vineyard Sprayer. Proceedings International Society of Precision Agriculture (ICPA), Indianapolis, IN, 15-18 July 2012, 15-18.
[11] Blackmore, S., Have, H. and Fountas, S. (2002) Specification of Behavioural Requirements for an Autonomous Tractor. Automation Technology for Off-Road Equipment Proceedings of the 2002 Conference, Chicago, IL, 26-27 July 2002, 33.
[12] Stentz, A., Dima, C., Wellington, C., Herman, H. and Stager, D. (2002) A System for Semi-Autonomous Tractor Operations. Autonomous Robots, 13, 87-104.
https://doi.org/10.1023/A:1015634322857
[13] Moorehead, S.J., Wellington, C.K., Gilmore, B.J. and Vallespi, C. (2012) Automating Orchards: A System of Autonomous Tractors for Orchard Maintenance. Proceedings of the IEEE International Conference of Intelligent Robots and Systems, Workshop on Agricultural Robotics, Vilamoura-Algarve, Portugal, 7-12 October 2012.
[14] Edet, U., Hawley, E. and Mann, D.D. (2018) Remote Supervision of Autonomous Agricultural Sprayers: The Farmer’s Perspective. Canadian Biosystems Engineering, 60, 2-19.
https://doi.org/10.7451/CBE.2018.60.2.19
[15] Edet, U. and Mann, D.D. (2020) Remote Supervision of Autonomous Agricultural Machines: Concepts and Feasibility. Canadian Biosystems Engineering. (In Press)
[16] Edet, U. and Mann, D.D. (2020) Visual Information Requirements for Remotely Supervised Autonomous Agricultural Machines. Applied Sciences, 10, 2794.
https://doi.org/10.3390/app10082794
[17] Edet, U. and Mann, D.D. (2021) Evaluating Warning Modalities for Remotely Supervised Autonomous Agricultural Machines. Journal of Agricultural Safety and Health, 28, 1-17.
https://doi.org/10.13031/jash.14395
[18] Karray, F., Alemzadeh, M., Abou Saleh, J. and Arab, M.N. (2017) Human-Computer Interaction: Overview on State of the Art. International Journal on Smart Sensing and Intelligent Systems, 1, 137-159.
https://doi.org/10.21307/ijssis-2017-283
[19] Han, S.H., Yang, H. and Im, D.G. (2007) Designing a Human-Computer Interface for a Process Control Room: A Case Study of a Steel Manufacturing Company. International Journal of Industrial Ergonomics, 37, 383-393.
https://doi.org/10.1016/j.ergon.2006.12.006
[20] Parasuraman, R., Sheridan, T.B. and Wickens, C.D. (2000) A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 30, 286-297.
https://doi.org/10.1109/3468.844354
[21] Cummings, M.L. and Mitchell, P.M. (2005) Management of Multiple Dynamic Human Supervisory Control Tasks for UAVs. Human Computer Interaction International Human Systems Integration Conference, Las Vegas, NV, 22-27 July 2005, 1-11.
[22] Case IH. 2020 Ag-Autonomy Level.
https://www.caseih.com/northamerica/en-us/innovations/automation
[23] Riley, J.M., Strater, L.D., Chappell, S.L., Connors, E.S. and Endsley, M.R. (2010) Situation Awareness in Human-Robot Interaction: Challenges and User Interface Requirements. In: Barnes, M.J. and Jentsch, F.G., Eds., Human-Robot Interactions in Future Military Operations, CRC Press, London, 171-192.
[24] Gonzalez-de-Santos, P., Ribeiro, A., Fernandez-Quintanilla, C., Lopez-Granados, F., Brandstoetter, M., Tomic, S. and Debilde, B. (2017) Fleets of Robots for Environmentally-Safe Pest Control in Agriculture. Precision Agriculture, 18, 574-614.
https://doi.org/10.1007/s11119-016-9476-3
[25] Shearer, S.A., Pitla, S.K. and Luck, J.D. (2010) Trends in the Automation of Agricultural Field Machinery. Proceedings of the 21st Annual Meeting of the Club of Bologna, Bologna, 13-14 November 2010, 1-29.
[26] Adamides, G., Berenstein, R., Ben-Halevi, I., Hadzilacos, T. and Edan, Y. (2012) User Interface Design Principles for Robotics in Agriculture: The Case of Telerobotic Navigation and Target Selection for Spraying. Proceedings of the 8th Asian Conference for Information Technology in Agriculture, Taipei, Vol. 36, 1-8.
[27] Adams, J.A. (2002) Critical Considerations for Human-Robot Interface Development. Proceedings of 2002 AAAI Fall Symposium, North Falmouth, Massachusetts, 15-17 November 2002, 1-8.
[28] Endsley, M.R. (2017) From Here to Autonomy: Lessons Learned from Human-Automation Research. Human Factors, 59, 5-27.
https://doi.org/10.1177/0018720816681350
[29] Holyer, A. (1993) Methods for Evaluating User Interfaces. University of Sussex, Brighton.
[30] Ivory, M.Y. (2001) An Empirical Foundation for Automated Web Interface Evaluation. University of California, Berkeley.
[31] Wilson, C. (2013) User Interface Inspection Methods: A User-Centered Design Method. Morgan Kaufmann Publishers, Waltham.
[32] Holzinger, A. (2005) Usability Engineering Methods for Software Developers. Communications of the ACM, 48, 71-74.
https://doi.org/10.1145/1039539.1039541
[33] Wilson, C. (2014) Cognitive Walkthrough in User Interface Inspection Methods: A User-Centered Design Method. Elsevier, Amsterdam, 65-79.
https://doi.org/10.1016/B978-0-12-410391-7.00004-X
[34] Tan, W.S., Liu, D. and Bishu, R. (2009) Web Evaluation: Heuristic Evaluation vs. User Testing. International Journal of Industrial Ergonomics, 39, 621-627.
https://doi.org/10.1016/j.ergon.2008.02.012
[35] Nielsen, J. (1994) Usability Engineering. Morgan Kaufmann, Burlington.
[36] Endsley, M.R. (1988) Design and Evaluation for Situation Awareness Enhancement. Proceedings of the Human Factors Society Annual Meeting, 32, 97-101.
https://doi.org/10.1177/154193128803200221
[37] Strater, L.D. and Bolstad, C.A. (2008) Simulation-Based Situation Awareness Training. In: Human Factors in Simulation and Training, CRC Press, Boca Raton, 129-148.
https://doi.org/10.1201/9781420072846.ch7
[38] Vidulich, M.A. (2000) The Relationship between Mental Workload and Situation Awareness. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 44, 3-460-3-463.
https://doi.org/10.1177/154193120004402122
[39] Drury, J.L., Hestand, D., Yanco, H.A. and Scholtz, J. (2004) Design Guidelines for Improved Human-Robot Interaction. CHI’04 Extended Abstracts on Human Factors in Computing Systems, Vienna, 24-29 April 2004, 1540.
https://doi.org/10.1145/985921.986116
[40] Panfilov, I. and Mann, D.D. (2018) The Importance of Real-Time Visual Information for the Remote Supervision of an Autonomous Agricultural Machine. Canadian Biosystems Engineering, 60, 2.11-2.18.
https://doi.org/10.7451/CBE.2018.60.2.11
[41] FAA Human Factor. 2020.
https://www.hf.faa.gov/Webtraining/VisualDisplays/text/size1a1.htm
[42] Nielsen, J. and Molich, R. (1990) Heuristic Evaluation of User Interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, 1-5 April 1990, 249-256.
https://doi.org/10.1145/97243.97281
[43] Kirmani, S. (2008) Heuristic Evaluation Quality Score (HEQS) Defining Heuristic Expertise. Journal of Usability Studies, 4, 49-59.
[44] Scholtz, J., Young, J., Drury, J.L. and Yanco, H.A. (2004) Evaluation of Human-Robot Interaction Awareness in Search and Rescue. IEEE International Conference on Robotics and Automation, Vol. 3, 2327-2332.
https://doi.org/10.1109/ROBOT.2004.1307409
[45] Adamides, G., Christou, G., Katsanos, C., Xenos, M. and Hadzilacos, T. (2014) Usability Guidelines for the Design of Robot Teleoperation: A Taxonomy. IEEE Transactions on Human-Machine Systems, 45, 256-262.
https://doi.org/10.1109/THMS.2014.2371048
[46] Keyes, B., Micire, M., Drury, J.L., Yanco, H.A. and Chugo, D. (2010) Improving Human-Robot Interaction through Interface Evolution. In: Human-Robot Interaction, InTech, London, 183-202.
https://doi.org/10.5772/8140
[47] Labonte, D., Boissy, P. and Michaud, F. (2010) Comparative Analysis of 3-D Robot Teleoperation Interfaces with Novice Users. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 40, 1331-1342.
https://doi.org/10.1109/TSMCB.2009.2038357
[48] Yanco, H.A., Drury, J.L. and Scholtz, J. (2004) Beyond Usability Evaluation: Analysis of Human-Robot Interaction at a Major Robotics Competition. Human-Computer Interaction, 19, 117-149.
https://doi.org/10.1207/s15327051hci1901&2_6
[49] Yanco, H.A. and Drury, J.L. (2007) Rescuing Interfaces: A Multi-Year Study of Human-Robot Interaction at the AAAI Robot Rescue Competition. Autonomous Robots, 22, 333-352.
https://doi.org/10.1007/s10514-006-9016-5
[50] Jentsch, F. (2016) Human-Robot Interactions in Future Military Operations. CRC Press, Boca Raton.
https://doi.org/10.4324/9781315587622
[51] Scott, S.D., Mercier, S., Cummings, M.L. and Wang, E. (2006) Assisting Interruption Recovery in Supervisory Control of Multiple Uavs. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50, 699-703.
https://doi.org/10.1177/154193120605000518
[52] Green, P., Levison, W., Paelke, G. and Serafin, C. (1993) Suggested Human Factors Design Guidelines for Driver Information Systems. University of Michigan, Transportation Research Institute, Ann Arbor.
[53] Chen, J.Y., Barnes, M.J. and Harper-Sciarini, M. (2010) Supervisory Control of Multiple Robots: Human-Performance Issues and User-Interface Design. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 41, 435-454.
https://doi.org/10.1109/TSMCC.2010.2056682
[54] Kirlik, A. (2012) An Overview of Human Factors Psychology. In: Kozlowski, S.W.J., Ed., The Oxford Handbook of Organizational Psychology, Volume 2, Oxford University Press, Oxford, 1287-1305.
https://doi.org/10.1093/oxfordhb/9780199928286.013.0038
[55] Galitz, W.O. (2007) The Essential Guide to User Interface Design: An Introduction to GUI Design Principles and Techniques. John Wiley and Sons, Hoboken.
[56] Wogalter, M.S., Conzola, V.C. and Smith-Jackson, T.L. (2002) Research-Based Guidelines for Warning Design and Evaluation. Applied Ergonomics, 33, 219-230.
https://doi.org/10.1016/S0003-6870(02)00009-1
[57] Oshana, R. and Kraeling, M. (2019) Software Engineering for Embedded Systems: Methods, Practical Techniques, and Applications. Newnes, London.
[58] Nielsen, J. (2005) Ten Usability Heuristics.
http://www.useit.com/papers/heuristic/heuristic_list.html
[59] Johnson, J. (2020) Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines. Morgan Kaufmann, Burlington.
[60] Rakhra, A.K. and Mann, D.D. (2018) Design and Evaluation of Individual Elements of the Interface for an Agricultural Machine. Journal of Agricultural Safety and Health, 24, 27-42.
https://doi.org/10.13031/jash.12410
[61] Gong, J. and Tarasewich, P. (2004) Guidelines for Handheld Mobile Device Interface Design. Proceedings of DSI 2004 Annual Meeting, 3751-3756.
[62] Lehto, M.R., Lesch, M.F. and Horrey, W.J. (2009) Safety Warnings for Automation. In: Nof, S.Y., Ed., Springer Handbook of Automation, Springer, Berlin, 671-695.
https://doi.org/10.1007/978-3-540-78831-7_39
[63] Laughery, K.R. and Wogalter, M.S. (2006) Designing Effective Warnings. Reviews of Human Factors and Ergonomics, 2, 241-271.
https://doi.org/10.1177/1557234X0600200109
[64] Peryer, G., Noyes, J., Pleydell-Pearce, K. and Lieven, N. (2005) Auditory Alert Characteristics: A Survey of Pilot Views. The International Journal of Aviation Psychology, 15, 233-250.
https://doi.org/10.1207/s15327108ijap1503_2
[65] Baldwin, C.L., Spence, C., Bliss, J.P., Brill, J.C., Wogalter, M.S., Mayhorn, C.B. and Ferris, T.K. (2012) Multimodal Cueing: The Relative Benefits of the Auditory, Visual, and Tactile Channels in Complex Environments. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56, 1431-1435.
https://doi.org/10.1177/1071181312561404
[66] König, C., Hofmann, T. and Bruder, R. (2012) Application of the User-Centred Design Process According ISO 9241-210 in Air Traffic Control. Work, 41, 167-174.
https://doi.org/10.3233/WOR-2012-1005-167
[67] Mentler, T. and Herczeg, M. (2013) Applying ISO 9241-110 Dialogue Principles to Tablet Applications in Emergency Medical Services. Proceedings of the 10th International ISCRAM Conference, Baden-Baden, 12-15 May 2013, 502-506.
[68] Jokela, T., Iivari, N., Matero, J. and Karukka, M. (2003) The Standard of User-Centered Design and the Standard Definition of Usability: Analyzing ISO 13407 against ISO 9241-11. Proceedings of the Latin American Conference on Human-Computer Interaction, Rio de Janeiro, 17-20 August 2003, 53-60.
https://doi.org/10.1145/944519.944525

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.