Algorithms for Multicriteria Scheduling Problems to Minimize Maximum Late Work, Tardy, and Early

Abstract

This study examines the multicriteria scheduling problem on a single machine with three criteria: the maximum late work (Vmax), the maximum tardiness (Tmax), and the maximum earliness (Emax). We propose several algorithms, based on the type of objective function to be optimized, for simultaneous minimization problems with and without weights and for hierarchical minimization problems. The proposed Algorithm (3) finds the set of efficient solutions for 1//F (Vmax, Tmax, Emax) and 1//(Vmax + Tmax + Emax). The local search heuristic methods (the Descent Method (DM), Simulated Annealing (SA), the Genetic Algorithm (GA), and the Tree Type Heuristics Method (TTHM)) are applied to solve all suggested problems. Finally, the experimental results of Algorithm (3) are compared with those of the Branch and Bound (BAB) method for optimal and Pareto optimal solutions on small instances, and with the local search heuristic methods on large instances. These results confirm the efficiency of Algorithm (3) within a reasonable time.

Share and Cite:

Alshaikhli, K. and Alshaikhli, A. (2024) Algorithms for Multicriteria Scheduling Problems to Minimize Maximum Late Work, Tardy, and Early. Journal of Applied Mathematics and Physics, 12, 661-682. doi: 10.4236/jamp.2024.122043.

1. Introduction

The multicriteria scheduling problem has received much attention lately [1]. The fundamentals of multicriteria scheduling are as follows. On a single machine, there are n jobs to be processed. Each job has a processing time (pi) and a due date (di) by which it should be finished. Penalties are assessed when a job is finished ahead of schedule or after it; the problem is therefore a multicriteria scheduling problem. Let Ω be a schedule, and let V(Ω), T(Ω), and E(Ω) be the late work, tardiness, and earliness functions, respectively. The problem is to find a schedule Ω that optimizes V(Ω), T(Ω), and E(Ω), or a composite objective function of them. In the literature [2], there are mainly three classes of approaches applicable to the multicriteria scheduling problem. C1: In the hierarchical approach, the more important criterion is regarded as a (primary) constraint criterion that must be satisfied, and the other is considered a (secondary) criterion to optimize. This means optimizing the primary criterion while breaking ties in favor of the schedule that has the minimum secondary criterion [3]. C2: Minimizing a weighted sum of the criteria (objectives), converting the multicriteria problem into a single-criterion problem; several bicriteria scheduling problems studied belong to this class [4] [5]. C3: One typically generates all efficient (Pareto optimal) schedules and selects the one that yields the best composite objective function value of the criteria [6]. Concerning the simultaneous minimization of (ΣCi, fmax), the 1//F (ΣCi, fmax) problem in C3 was solved by Hoogeveen and Van de Velde [4] in polynomial time, and Van Wassenhove and Gelders [7] solved the 1//F (ΣCi, Tmax) problem.

This paper extends the bicriteria problem 1//(Vmax, Emax) [6] to the multicriteria problem 1//(Vmax, Tmax, Emax), which is NP-hard; to get past this limitation, we can apply heuristic approaches. We suggest Algorithm (3) to find optimal solutions, and we study the problem in classes C1, C2, and C3. The rest of the paper is organized as follows:

Section (2) gives notations, basic concepts, and mathematical forms. In sections (3) and (4), we formulate the multicriteria problem according to the classes of approaches, propose algorithms for each problem and their particular cases, and study the Branch and Bound method for 1//(Vmax + Tmax + Emax). In section (5), we suggest local search heuristic methods to find approximate solutions, and the experimental results are given in section (6).

2. Notation, Basic Concepts and Mathematical Forms

2.1. Notation and Basic Concepts

The following notation will be used in this paper:

n = number of jobs.

pi = processing time of job i.

di = due date of job i.

Ci = the completion time, the time at which the processing of job i is completed, s.t.

Ci = Σ_{j=1}^{i} p_j

Ei = the earliness of the job i.

Ti = the tardiness of the job i.

Vi = the late work penalty for job i.

Emax = Max{Ei}, the maximum earliness.

Tmax = Max{Ti}, the maximum tardiness.

Vmax = Max{Vi}, the maximum late work.

fmax = Max{fi}, the maximum cost function.

LB = lower bound.

UB = upper bound.

In this paper, we shall use the following sequencing rules and concepts:

MST: Jobs are sequenced in non-decreasing order of slack (si = di − pi); this rule is well known to minimize Emax for the 1//Emax problem [8].

EDD: Jobs are sequenced in non-decreasing order of (di); this rule is well known to minimize Tmax for the 1//Tmax problem [9].
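The two rules can be sketched as follows; the data here are those of Example (6) later in the paper, jobs are 0-indexed, and the helper name `criteria` is ours:

```python
# The MST and EDD rules on the data of Example (6): p = (2, 5, 7, 4),
# d = (6, 21, 10, 9); jobs are 0-indexed, and `criteria` is our helper.
p = [2, 5, 7, 4]
d = [6, 21, 10, 9]
n = len(p)

mst = sorted(range(n), key=lambda i: d[i] - p[i])   # non-decreasing slack
edd = sorted(range(n), key=lambda i: d[i])          # non-decreasing due date

def criteria(seq):
    """Return (Vmax, Tmax, Emax) of a sequence."""
    t = V = T = E = 0
    for i in seq:
        t += p[i]                                   # completion time C_i
        V = max(V, min(p[i], max(t - d[i], 0)))     # late work
        T = max(T, max(t - d[i], 0))                # tardiness
        E = max(E, max(d[i] - t, 0))                # earliness
    return (V, T, E)
```

Here MST yields the sequence (3, 1, 4, 2) with Emax = 3 and EDD yields (1, 4, 3, 2) with Tmax = 3, in the paper's 1-indexed notation.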

Definition (1): The term "optimize" in a multiobjective decision-making problem refers to a solution around which there is no way of improving any objective without worsening at least one other objective [10].

Definition (2):

A feasible schedule σ is Pareto optimal, or non-dominated (efficient), with respect to the performance criteria f and g if there is no feasible schedule σ′ such that both f(σ′) ≤ f(σ) and g(σ′) ≤ g(σ), where at least one of the inequalities is strict [7].

Lawler algorithm (LA): [11]

Step (1): Let N = {1, …, n}, Ω = ( ), and let M be the set of all jobs with no successors.

Step (2): Find the job j ∈ M with fj(Σ pi) = min{fk(Σ pi) : k ∈ M}, where the sum is over the unscheduled jobs i ∈ N.

Set N = N − {j} and sequence job j in the last free position of Ω.

Modify M to represent the new set of jobs with no unscheduled successors.

Step (3): If N = ∅, stop; otherwise, go to step (2).
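Without precedence constraints every unscheduled job is a candidate, and for the late-work objective the cost of job i completing at time t is min(pi, max(t − di, 0)). A minimal sketch of LA under these assumptions:

```python
# A sketch of Lawler's algorithm for 1//Vmax with no precedence
# constraints: repeatedly place last the job whose cost at the current
# completion time t is minimal, with cost min(p_i, max(t - d_i, 0)).
def lawler(p, d):
    remaining = set(range(len(p)))
    t = sum(p)                      # the last job completes at sum of p_i
    rev = []                        # sequence built from the back
    while remaining:
        j = min(remaining, key=lambda i: (min(p[i], max(t - d[i], 0)), i))
        rev.append(j)
        remaining.remove(j)
        t -= p[j]
    return rev[::-1]
```

On the data of Example (1), lawler([2, 3, 5, 7], [11, 7, 18, 9]) returns the sequence (2, 4, 1, 3) in the paper's 1-indexed notation, with Vmax = 1.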

E*max = the minimum Emax, obtained by the MST rule.

T*max = the minimum Tmax, obtained by the EDD rule.

V*max = the minimum Vmax, obtained by LA.

2.2. The Mathematical Forms and Their Algorithms

2.2.1. Hierarchical Problems

We present the mathematical forms and the algorithms for generating solutions when one of the three criteria (Vmax, Tmax, Emax) is more important than the others. These hierarchical problems are called secondary criteria problems, where the secondary criteria refer to the less important ones. The formulation for multicriteria problems is similar to that for single-criterion problems, except that the optimal value of the primary objective must not be violated.

Let us first consider the formulations for multicriteria hierarchical problems, say

1//Lex (γ1, γ2, γ3). There are three parts of the formulations

· Primary objective function (γ1)

· Secondary objective function (γ2)

Subject to: Primary objective function (γ1)

· Secondary objective function (γ3)

Subject to: Primary objective function (γ1)

Hence the algorithm for solving the multicriteria problem needs two steps:

Step (1): We optimize γ1, followed by

Step (2): The optimization of γ2 and γ3 subject to the optimal primary objective value of γ1. For example, if Vmax is more important than Tmax and Emax, then the 1//Lex (Vmax, Tmax, Emax) problem of section (3.1) can be written as Min(Emax)

s.t. Vmax = ∆, where ∆ = Vmax(LA)

Tmax ≤ T*, T* ∈ (Tmax(LA), Tmax(MST))

2.2.2. Simultaneous Problems

Many algorithms can solve multicriteria scheduling problems to find efficient solutions or at least approximations of them [12]. The running time of an algorithm often increases with the instance size. Any algorithm aims to find, for each instance, an optimal solution that minimizes the objective function. This usual meaning of the optimum makes no sense in the multicriteria case because, in most cases, no solution optimizes all objectives simultaneously. Hence, we search for feasible solutions yielding the best compromise among the objectives, constituting a so-called efficient solution set. These efficient solutions can only be improved in one objective by degrading their performance in at least one of the others. This efficient solution set is challenging to find. Therefore, approximating that set in a reasonable time could be preferable.

3. Problem Formulation and Analysis

The problem of scheduling a set N = {1, …, n} of n jobs on a single machine to minimize multiple criteria may be stated as follows. Each job i ∈ N is to be processed on a single machine that can handle only one job at a time; job i has a processing time pi and a due date di. All jobs are available for processing at time zero.

If a schedule Ω = (1, …, n) is given, then a completion time Ci = Σ_{j=1}^{i} p_j can be computed for each job i, and consequently an earliness Ei = max{di − Ci, 0} for each i, Emax = max{Ei}, and E^w_max = max{wiEi}, where wi is the importance (weight) of job i with respect to the other jobs. The tardiness is Ti = max{Ci − di, 0}, with Tmax = max{Ti}. The late work Vi(Ω) of job i ∈ N, which is the amount of processing performed on job i after its due date di, is easy to compute:

· If Vi = 0, then job i is early with Ci ≤ di

· If 0 < Vi < pi, then job i is partially early

· If Vi = pi, then job i is late with Ci ≥ di + pi

This means that

Vi = 0 if Ci ≤ di; Vi = Ci − di if di < Ci < di + pi; Vi = pi if Ci ≥ di + pi, for i = 1, …, n

Hence V^w_max = max{wiVi}, where wi is the importance (weight) of job i with respect to the other jobs.
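The three-case definition translates directly into code; the closed form min(p, max(C − d, 0)) used in the comment below is an equivalent one-liner:

```python
# The three-case definition of the late work V_i, transcribed directly;
# it is equivalent to the closed form min(p, max(C - d, 0)).
def late_work(C, d, p):
    """Late work of a job with completion time C, due date d, processing time p."""
    if C <= d:
        return 0          # early: no late work
    if C < d + p:
        return C - d      # partially early
    return p              # fully late
```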

Our objective is to find a schedule that minimizes the multiple criteria for the following problems:

1) 1//Lex (Vmax, Tmax, Emax) problem (P1) C1

2) 1//Lex ( V max w , Tmax, Emax) problem (P2) C1

3) 1//F (Vmax, Tmax, Emax) problem (P3) C3

4) 1//F ( E^w_max, Tmax, Vmax) problem (P4) C3

5) 1//(Vmax + Tmax + Emax) problem (P5) C2

3.1. [1//Lex (Vmax, Tmax, Emax) Problem (P1)]

This problem can be written as:

Min (Emax)

s.t. Vmax = Δ, where Δ = Vmax(LA)

Tmax ≤ T*, T* ∈ (Tmax(LA), Tmax(MST))

Algorithm (1) for problem (P1):

Step (0): Use Lawler's algorithm (LA) to find the optimal Vmax, and set ∆ = Vmax(LA).

Step (1): Set N = {1, …, n}, t = Σ pi, i ∈ N.

Step (2): Solve the 1/Vi ≤ ∆/Vmax problem to determine the job j to be completed at time t, such that: 1) Vj ≤ ∆

2) sj ≥ si for all i ∈ N with Vi ≤ ∆.

Step (3): Schedule j in the interval [t − pj, t]

Step (4): Set N = N − {j}, t = t − pj

Step (5): If t > 0, then go to step (2)

Step (6): Stop.
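The steps above can be sketched as follows (jobs are 0-indexed, and `delta` is the optimal Vmax found in Step (0)):

```python
# A sketch of Algorithm (1); delta is the optimal Vmax from Step (0).
def algorithm1(p, d, delta):
    remaining = set(range(len(p)))
    t = sum(p)
    rev = []                                  # built backward from time t
    while t > 0:
        # Step (2): jobs whose late work at completion time t stays <= delta
        feas = [i for i in remaining
                if min(p[i], max(t - d[i], 0)) <= delta]
        # put last the feasible job with maximum slack s_i = d_i - p_i
        j = max(feas, key=lambda i: (d[i] - p[i], -i))
        rev.append(j)                         # Step (3): occupy [t - p_j, t]
        remaining.remove(j)                   # Step (4)
        t -= p[j]
    return rev[::-1]
```

On the data of Example (1) with ∆ = 1, this returns the sequence (2, 4, 1, 3) in the paper's 1-indexed notation.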

Example (1): Consider the problem (P1) with the following data:

pi = (2, 3, 5, 7), di = (11, 7, 18, 9) and i = 1, 2, 3, 4

Lawler's algorithm (LA) gives the sequence (2, 4, 1, 3), with Vmax = 1, Tmax = 1 & Emax = 4. Setting ∆ = 1, Algorithm (1) again yields the sequence (2, 4, 1, 3) with Vmax = 1, Tmax = 1 & Emax = 4. This sequence is optimal, since the same optimal sequence (2, 4, 1, 3) with Vmax = 1, Tmax = 1 & Emax = 4 is obtained by the complete enumeration method (CEM).

3.2. [1//Lex ( V m a x w , Tmax, Emax) Problem (P2)]

This problem can be written as:

Min (Emax)

s.t. V^w_max = Δ, where Δ = V^w_max(LA)

Tmax ≤ T*, T* ∈ (Tmax(LA), Tmax(MST))

Algorithm (2) for problem (P2):

Step (0): Use Lawler's algorithm (LA) to find V^w_max, and set Δ = V^w_max(LA).

Step (1): Set N = {1, …, n}, t = Σ pi, i ∈ N

Step (2): Solve the 1/WiVi ≤ Δ/V^w_max problem, where V^w_i = WiVi, to determine the job j to be completed at time t, such that: 1) WjVj ≤ ∆

2) sj ≥ si for all i ∈ N with WiVi ≤ ∆

Step (3): Schedule j in the interval [t − pj, t]

Step (4): Set N = N − {j}, t = t − pj

Step (5): If t > 0, then go to step (2)

Step (6): Stop.

Example (2): Consider the problem (P2) with the following data: pi = (4, 6, 2, 5), di = (20, 9, 4, 7), Wi = (4, 6, 2, 5), and i = 1, 2, 3, 4. Lawler's algorithm (LA) gives the sequence (4, 2, 3, 1) with V^w_max = 12, Tmax = 9, and Emax = 3, i.e., (V^w_max, Tmax, Emax) = (12, 9, 3). Setting ∆ = 12, Algorithm (2) again yields the sequence (4, 2, 3, 1) with (V^w_max, Tmax, Emax) = (12, 9, 3), which is optimal.

3.3. [1//F (Vmax, Tmax, Emax) Problem (P3)]

Multicriteria scheduling refers to scheduling problems in which the advantages of a particular schedule are evaluated using more than one performance criterion. Several scheduling problems considering the simultaneous minimization of various forms of sum of completion times, earliness, and tardiness costs have been studied in the literature [13], which also solves 1//F (fmax, gmax) and the general problem 1//F (f¹max, …, fᵏmax), where k is a finite integer and each of these functions is assumed to be non-decreasing in the job completion time. Now consider the multicriteria problem 1//F (Vmax, Tmax, Emax), in which Emax is not non-decreasing in the job completion time. This problem belongs to C3 and is written as:

Min {Vmax, Tmax, Emax}

s.t. Vi = min{pi, Ti}, i = 1, …, n

Ei ≥ di − Ci, i = 1, …, n

Ei ≥ 0, i = 1, …, n

Ti ≥ Ci − di, i = 1, …, n

Ti ≥ 0, i = 1, …, n

The following algorithm (3) is used to solve the problem (P3)

Algorithm (3) for the problem (P3):

Step (0): Determine the points (V*max, Tmax, Emax), (Vmax, T*max, Emax), and (Vmax, Tmax, E*max) by solving 1//Vmax by Lawler's algorithm (LA), 1//Tmax by the EDD rule, and 1//Emax by the MST rule. Let SE be the set of efficient (Pareto) solutions; set SE = {(V*max, Tmax, Emax), (Vmax, T*max, Emax), (Vmax, Tmax, E*max)}, keeping each point only if it is not dominated by the others. Set SUM = min{V*max + Tmax + Emax, Vmax + T*max + Emax, Vmax + Tmax + E*max}.

Step (1): Set ∆ = Vmax (MST).

Step (2): Solve the 1/Vi ≤ ∆/Vmax problem using LA (breaking ties by scheduling last the job j with maximum slack sj = dj − pj); let (V^L_max, T^L_max, E^L_max) denote the outcome. Add (V^L_max, T^L_max, E^L_max) to the set of Pareto optimal points (SE), unless it is dominated by a previously obtained Pareto optimal point. If SUM is greater than V^L_max + T^L_max + E^L_max, then set SUM = V^L_max + T^L_max + E^L_max. Let ∆ = V^L_max − 1; if ∆ > 0, repeat step (2), otherwise go to step (3).

Step (3): The Pareto optimal set SE has been obtained, together with SUM, the minimum sum of values over the Pareto points in SE.

Step (4): Stop.
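The steps above can be sketched as follows, in one reading of Step (2) that is consistent with Examples (1) and (3): the constrained problem is solved backward by always scheduling last the feasible job (Vi ≤ ∆ at the current completion time) with maximum slack. The helper names are ours, and jobs are 0-indexed:

```python
# A sketch of Algorithm (3). Helper names are ours.
def criteria(seq, p, d):
    """Return (Vmax, Tmax, Emax) of a job sequence."""
    t = V = T = E = 0
    for i in seq:
        t += p[i]
        V = max(V, min(p[i], max(t - d[i], 0)))   # late work
        T = max(T, max(t - d[i], 0))              # tardiness
        E = max(E, max(d[i] - t, 0))              # earliness
    return (V, T, E)

def lawler_vmax(p, d):
    """Lawler's algorithm for 1//Vmax (no precedence constraints)."""
    remaining, t, rev = set(range(len(p))), sum(p), []
    while remaining:
        j = min(remaining, key=lambda i: (min(p[i], max(t - d[i], 0)), i))
        rev.append(j); remaining.remove(j); t -= p[j]
    return rev[::-1]

def constrained_schedule(p, d, delta):
    """Backward rule: schedule last the feasible job (V_i <= delta) with
    maximum slack d_i - p_i; return None if some position is infeasible."""
    remaining, t, rev = set(range(len(p))), sum(p), []
    while remaining:
        feas = [i for i in remaining if min(p[i], max(t - d[i], 0)) <= delta]
        if not feas:
            return None
        j = max(feas, key=lambda i: (d[i] - p[i], -i))
        rev.append(j); remaining.remove(j); t -= p[j]
    return rev[::-1]

def algorithm3(p, d):
    SE = []
    def add(pt):  # keep pt unless dominated; drop points that pt dominates
        if any(all(q[k] <= pt[k] for k in range(3)) and q != pt for q in SE):
            return
        SE[:] = [q for q in SE
                 if not (all(pt[k] <= q[k] for k in range(3)) and q != pt)]
        if pt not in SE:
            SE.append(pt)
    n = len(p)
    mst = sorted(range(n), key=lambda i: d[i] - p[i])
    edd = sorted(range(n), key=lambda i: d[i])
    for s in (lawler_vmax(p, d), edd, mst):      # Step (0): three starting points
        add(criteria(s, p, d))
    delta = criteria(mst, p, d)[0]               # Step (1): Delta = Vmax(MST)
    while delta > 0:                             # Step (2): tighten Delta
        seq = constrained_schedule(p, d, delta)
        if seq is None:
            break
        pt = criteria(seq, p, d)
        add(pt)
        delta = pt[0] - 1
    return SE, min(sum(pt) for pt in SE)         # Step (3): SE and SUM
```

On the data of Example (3), this sketch reproduces SE = {(5, 8, 3), (6, 6, 3), (4, 10, 3)} and SUM = 15.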

Example (3): Consider the problem (P3) with the following data: pi = (1, 4, 8, 5), di = (20, 7, 11, 9), and i = 1, 2, 3, 4. MST gives the sequence (2, 3, 4, 1) with E*max = 3, Tmax = 8 & Vmax = 5; (Vmax, Tmax, E*max) = (5, 8, 3). EDD gives the sequence (2, 4, 3, 1) with Emax = 3, T*max = 6 & Vmax = 6; (Vmax, T*max, Emax) = (6, 6, 3). Lawler's algorithm gives the sequence (4, 3, 2, 1) with Emax = 4, Tmax = 10 & V*max = 4; (V*max, Tmax, Emax) = (4, 10, 4). Set SE = {(5, 8, 3), (6, 6, 3), (4, 10, 4)} & SUM = 15. Setting ∆ = 5, we get the sequence (3, 2, 4, 1), which gives Emax = 3, Tmax = 8 & Vmax = 5; (Vmax, Tmax, Emax) = (5, 8, 3), so the set SE remains the same. Let ∆ = 5 − 1 = 4; we get the sequence (3, 4, 2, 1), which gives Emax = 3, Tmax = 10 & Vmax = 4; (Vmax, Tmax, Emax) = (4, 10, 3), so SE = {(5, 8, 3), (6, 6, 3), (4, 10, 3)}, since (4, 10, 3) dominates (4, 10, 4). Let ∆ = 4 − 1 = 3; there is no feasible schedule with all Vj ≤ ∆, so we stop. The set of efficient solutions is SE = {(5, 8, 3), (6, 6, 3), (4, 10, 3)} & SUM = 15.

Note that algorithm (3) does not always find all the efficient solutions, but it finds most of them, as shown in the following example.

Example (4): Consider the problem (P3) with the following data: pi = (7, 14, 3, 1), di = (16, 20, 8, 2), and i = 1, 2, 3, 4. Algorithm (3) gives SE = {(7, 9, 4), (3, 17, 8), (5, 5, 5)}, but the exact set of efficient solutions, obtained by the complete enumeration method, is SE = {(7, 9, 4), (3, 17, 8), (5, 5, 5), (4, 23, 6)}.

Note: We can use the BAB method to find the set of all efficient solutions.

3.4. [1//F ( E m a x w , Tmax, Vmax) Problem (P4)]

This problem is denoted by:

Min {E^w_max, Tmax, Vmax}

s.t. Vi = min{pi, Ti}, i = 1, …, n

WiEi ≥ Wi(di − Ci), i = 1, …, n

WiEi ≥ 0, i = 1, …, n

Ti ≥ Ci − di, i = 1, …, n

Ti ≥ 0, i = 1, …, n

The following algorithm (4) is used to solve the problem (P4)

Algorithm (4) for problem (P4):

Step (0): Determine the points (E^w*_max, Tmax, Vmax), (E^w_max, T*max, Vmax), and (E^w_max, Tmax, V*max) by solving 1//Vmax by Lawler's algorithm (LA), 1//Tmax by the EDD rule, and 1//E^w_max by the WMST rule. Let SE be the set of efficient (Pareto) solutions; set SE = {(E^w*_max, Tmax, Vmax), (E^w_max, T*max, Vmax), (E^w_max, Tmax, V*max)}.

Step (1): Set ∆ = Vmax (WMST)

Step (2): Solve the 1/Vi ≤ ∆/Vmax problem using Lawler's algorithm (breaking ties by scheduling last the job j with maximum weighted slack sjWj = (dj − pj)Wj); let (E^w(L)_max, T^L_max, V^L_max) denote the outcome. Add this point to the set of Pareto optimal points (SE), unless it is dominated by a previously obtained Pareto optimal point. Let ∆ = V^L_max − 1; if ∆ > 0, repeat step (2), otherwise go to step (3).

Step (3): The Pareto optimal set SE has been obtained, with the values of its Pareto points.

Step (4): Stop.

Note: Since the 1//E^w_max problem cannot always be solved to optimality by the WMST rule (sequencing the jobs in non-decreasing order of siWi = (di − pi)Wi), algorithm (4) does not give the set of all efficient solutions [9].

Example (5): Consider the problem (P4) with the following data:

pi = (7, 3, 2, 7), di = (15, 9, 4, 16), Wi = (6, 3, 12, 1), and i = 1, 2, 3, 4. The WMST rule gives the sequence (4, 2, 3, 1) with E^w_max = 9, Tmax = 8 & Vmax = 4, so (E^w_max, Tmax, Vmax) = (9, 8, 4). EDD gives the sequence (3, 2, 1, 4) with E^w_max = 24, T*max = 3 & Vmax = 3, so (E^w_max, T*max, Vmax) = (24, 3, 3). Lawler's algorithm (LA) gives the sequence (2, 1, 4, 3) with E^w_max = 30, Tmax = 15 & V*max = 2, so (E^w_max, Tmax, V*max) = (30, 15, 2).

Then SE = {(9, 8, 4), (24, 3, 3), (30, 15, 2)}.

Setting ∆ = Vmax(WMST) = 4, we get the sequence (3, 4, 1, 2), which gives E^w_max = 24, Tmax = 10 & Vmax = 3, so (E^w_max, Tmax, Vmax) = (24, 10, 3). This point is dominated, so SE remains the same: SE = {(9, 8, 4), (30, 15, 2), (24, 3, 3)}.

Setting ∆ = 2, we get the sequence (4, 2, 1, 3), which gives E^w_max = 9, Tmax = 15 & Vmax = 2, so (E^w_max, Tmax, Vmax) = (9, 15, 2). Then SE = {(9, 8, 4), (9, 15, 2), (24, 3, 3)}.

Setting ∆ = 1, there is no feasible schedule with all Vj ≤ ∆, so we stop. The set of efficient solutions is SE = {(9, 8, 4), (9, 15, 2), (24, 3, 3)}.

4. [1//(Vmax + Tmax + Emax) Problem (P5)]

The aim of the problem (P5) is to find a processing order σ of the jobs on a single machine that minimizes the sum of the maximum late work, maximum tardiness, and maximum earliness (i.e., to minimize Vmax(σ) + Tmax(σ) + Emax(σ), σ ∈ S, where S is the set of all feasible schedules). Clearly, an optimal solution of the problem (P5) is among the efficient solutions of the problem (P3).

In this section, we decompose the 1//(Vmax + Tmax + Emax) problem into three subproblems with a simpler structure. For this problem, let:

M = min_{σ ∈ S} {Vmax(σ) + Tmax(σ) + Emax(σ)}.

The problem (P5) can be decomposed into three subproblems (SP1), (SP2) and (SP3).

(SP1): M1 = min max{Vσ(i)}

s.t. Vσ(i) = 0 if Cσ(i) ≤ dσ(i); Vσ(i) = Cσ(i) − dσ(i) if dσ(i) < Cσ(i) < dσ(i) + pσ(i); Vσ(i) = pσ(i) if Cσ(i) ≥ dσ(i) + pσ(i), for i = 1, …, n

(SP2): M2 = min max{Tσ(i)}

s.t. Tσ(i) ≥ Cσ(i) − dσ(i), i = 1, …, n

Tσ(i) ≥ 0, i = 1, …, n

(SP3): M3 = min max{Eσ(i)}

s.t. Eσ(i) ≥ dσ(i) − Cσ(i), i = 1, …, n

Eσ(i) ≥ 0, i = 1, …, n

4.1. Derivation of Lower Bound (LB) for Problem (P5)

The lower bound (LB) is based on decomposing problem (P5) into the three subproblems (SP1), (SP2), and (SP3). We then calculate M1, the minimum value of (SP1); M2, the minimum value of (SP2); and M3, the minimum value of (SP3), and apply the following theorem:

Theorem (1): [5]

M1 + M2 + M3 ≤ M, where M1, M2, M3, and M are the minimum objective function values of (SP1), (SP2), (SP3), and (P5), respectively.

To get a lower bound LB for the problem (P5):

· For the subproblem (SP1), we compute M1 as a lower bound by sequencing the jobs using Lawler’s algorithm (LA) to find the minimum maximum late work Vmax.

· For the subproblem (SP2), we compute M2 as a lower bound by sequencing the jobs using EDD order (i.e., sequencing the jobs in non-decreasing order of di) to find the minimum maximum tardiness Tmax.

· For the subproblem (SP3), we compute M3 as a lower bound by sequencing the jobs in MST order (i.e., sequencing the jobs in non-decreasing order of si = di − pi) to find the minimum maximum earliness Emax.

then applying theorem (1) to obtain:

LB = M1 + M2 + M3

4.2. Heuristic Method to Calculate Upper Bound (UB) for the Problem (P5)

· A simple heuristic is obtained by sequencing the jobs using Lawler's algorithm (LA) to find V*max, Tmax, and Emax; then UB1 = V*max(LA) + Tmax(LA) + Emax(LA).

· UB2 is obtained by ordering the jobs in EDD order, that is, sequencing the jobs i (i = 1, …, n) in non-decreasing order of di to find Vmax, T*max, and Emax; then UB2 = Vmax(EDD) + T*max(EDD) + Emax(EDD).

· UB3 is obtained by ordering the jobs in MST order, that is, sequencing the jobs i (i = 1, …, n) in non-decreasing order of si = di − pi to find Vmax, Tmax, and E*max; then UB3 = Vmax(MST) + Tmax(MST) + E*max(MST). Then UB = min{UB1, UB2, UB3}.
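The LB of Section 4.1 and the UB above can be sketched together; the helper names are ours, and jobs are 0-indexed:

```python
# LB (Section 4.1) and UB (Section 4.2) in one sketch.
def criteria(seq, p, d):
    """Return (Vmax, Tmax, Emax) of a job sequence."""
    t = V = T = E = 0
    for i in seq:
        t += p[i]
        V = max(V, min(p[i], max(t - d[i], 0)))
        T = max(T, max(t - d[i], 0))
        E = max(E, max(d[i] - t, 0))
    return (V, T, E)

def lawler_vmax(p, d):
    """Lawler's algorithm for 1//Vmax without precedence constraints."""
    remaining, t, rev = set(range(len(p))), sum(p), []
    while remaining:
        j = min(remaining, key=lambda i: (min(p[i], max(t - d[i], 0)), i))
        rev.append(j); remaining.remove(j); t -= p[j]
    return rev[::-1]

def bounds(p, d):
    n = len(p)
    la = lawler_vmax(p, d)
    edd = sorted(range(n), key=lambda i: d[i])
    mst = sorted(range(n), key=lambda i: d[i] - p[i])
    pts = [criteria(s, p, d) for s in (la, edd, mst)]
    LB = pts[0][0] + pts[1][1] + pts[2][2]   # M1 + M2 + M3 (Theorem (1))
    UB = min(sum(pt) for pt in pts)          # min{UB1, UB2, UB3}
    return LB, UB
```

On the data of Example (6), bounds([2, 5, 7, 4], [6, 21, 10, 9]) returns LB = 8 and UB = 10.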

4.3. Branch and Bound (BAB) Method

Our BAB method is based on the forward sequencing branching rule, for which nodes at level k of the search tree correspond to initial partial sequences in which jobs occupy the first k positions [6].

The LB at any node is the cost of the sequenced jobs (which depends on the objective function) plus the cost of the unsequenced jobs (which depends on the derived lower bound (LB)). At any level of the BAB method, if a node has LB ≥ UB, then this node is dominated.

If the branching ends at a complete sequence of jobs, then this sequence is evaluated, and if its value is less than the current (UB), the (UB) is reset to that value. The procedure is repeated until all nodes have been considered using backtracking, which is the BAB method's movement from the lowest level back to the upper levels. The (UB) at the end of this procedure is the optimum of our scheduling problem (P5). Hence, we obtain at least one optimal solution using the BAB method. The BAB method is improved by using an efficient (LB), a good (UB), and dominance rules. If it can be shown that an optimal solution can always be generated without branching from a particular node of the search tree, then that node is dominated and can be eliminated. Dominance rules usually specify whether a node can be eliminated before its (LB) is calculated; they are especially beneficial when they eliminate a node whose (LB) is less than the optimal value, since such a node cannot be pruned by bounding alone.

Example (6): Consider the problem (P5) with the following data:

pi = (2, 5, 7, 4), di = (6, 21, 10, 9) and i = 1, 2, 3, 4

Lawler's algorithm gives the sequence (4, 3, 1, 2), where V*max = 2, Tmax = 7 & Emax = 5, so UB1 = V*max(LA) + Tmax(LA) + Emax(LA) = 2 + 7 + 5 = 14, and

Figure 1. Branch and Bound (BAB) Method.

the EDD rule gives the sequence (1, 4, 3, 2), where Vmax(EDD) = 3, T*max(EDD) = 3 & Emax(EDD) = 4, so UB2 = Vmax(EDD) + T*max(EDD) + Emax(EDD) = 3 + 3 + 4 = 10; and the MST rule gives the sequence (3, 1, 4, 2), where Vmax(MST) = 4, Tmax(MST) = 4 & E*max(MST) = 3, so UB3 = Vmax(MST) + Tmax(MST) + E*max(MST) = 4 + 4 + 3 = 11.

Hence the minimum upper bound is UB = min {UB1, UB2, UB3} = 10

LB1 = V*max(LA) = 2, LB2 = T*max(EDD) = 3 & LB3 = E*max(MST) = 3

LB = LB1 + LB2 + LB3 = 2 + 3 + 3 = 8

We now give the BAB search tree for finding an optimal solution of (P5) in Figure 1.

5. Local Search Heuristic

In this section, several local search methods are implemented for the problem of scheduling n jobs on a single machine to minimize the sum of the maximum late work, maximum tardiness, and maximum earliness, i.e., to minimize (Vmax + Tmax + Emax) for the problem (P5).

The local search approach provides high-quality solutions to NP-hard problems of realistic size in a reasonable time, and it has been widely used recently. Several researchers employ these techniques, since it is simpler to construct algorithms with them and they yield reasonable approximation results. Local search methods start with an initial solution and then continually try to find better solutions by searching neighborhoods. We propose several local search methods and compare their results with those of Algorithm (3) for 1//(Vmax + Tmax + Emax).

Definition: [14]

A pair (S, f) defines a combinatorial optimization problem, where the solution set S is the set of all feasible solutions, and the cost function f is a mapping f : S → R. The problem is to find a globally optimal (minimal) solution, i.e., an s* ∈ S such that f(s*) ≤ f(s) for all s ∈ S.

Definition: [15]

A neighborhood function N is a mapping N : S → P(S) that specifies for each s ∈ S a subset N(s) ⊆ S of neighbors of s.

Glass and Potts (1995) [15] gave the four possible neighborhoods below; each is illustrated by a typical neighbor of the sequence (1, 2, 3, 4, 5, 6, 7, 8) in a problem with eight jobs labeled 1, ∙∙∙, 8.

(a) Transpose: Swap two adjacent jobs. Thus (1, 2, 4, 3, 5, 6, 7, 8) is a neighbor.

(b) Insert: Remove a job from one position in the sequence and insert it at another position (either before or after the original position). Thus, (1, 5, 2, 3, 4, 6, 7, 8) and (1, 2, 3, 4, 6, 7, 5, 8) are both neighbors.

(c) Swap: Swap two jobs that may not be adjacent. Thus, (1, 6, 3, 4, 5, 2, 7, 8) is a neighbor.

(d) Block Insert: Move a subsequence of jobs from one position in the sequence and insert it at another position. Thus, (1, 4, 5, 2, 3, 6, 7, 8) is a neighbor.
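The four moves can be sketched as follows (positions are 0-indexed, and the function names are ours):

```python
# The four neighborhood moves, 0-indexed; each returns a new sequence.
def transpose(s, i):            # (a) swap adjacent positions i and i+1
    s = list(s); s[i], s[i + 1] = s[i + 1], s[i]; return s

def insert(s, i, j):            # (b) remove job at position i, reinsert at j
    s = list(s); job = s.pop(i); s.insert(j, job); return s

def swap(s, i, j):              # (c) swap two (not necessarily adjacent) positions
    s = list(s); s[i], s[j] = s[j], s[i]; return s

def block_insert(s, i, k, j):   # (d) move block s[i:i+k] to position j
    s = list(s); block = s[i:i + k]; del s[i:i + k]
    return s[:j] + block + s[j:]  # j is counted after the block is removed
```

For seq = [1, 2, 3, 4, 5, 6, 7, 8], the calls transpose(seq, 2), insert(seq, 4, 1), swap(seq, 1, 5), and block_insert(seq, 3, 2, 1) reproduce the four neighbors listed above.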

Definition: [2]

Let (S, f) be an instance of a combinatorial optimization problem, and let N be a neighborhood function. A solution ŝ ∈ S is called a locally optimal (minimal) solution with respect to N if f(ŝ) ≤ f(s) for all s ∈ N(ŝ). The neighborhood function N is called exact if every local minimum with respect to N is also a global minimum.

5.1. Descent Method (DM)

This method is a simple form of a local search method [16]. It can be executed as follows:

1) Initialization:

In this step, a feasible solution σ = (σ(1), …, σ(n)), obtained by the MST rule, is chosen as the initial current solution of the descent method, with objective function value Z.

2) Neighborhood Generation:

In this step, a feasible neighbor σ′ = (σ′(1), …, σ′(n)) of the current solution is generated by randomly choosing two jobs of σ (not necessarily adjacent), transposing their positions, and computing the objective function value, denoted by Z′.

3) Acceptance Test:

Now we test whether to accept the move from σ to σ′, as follows:

· If Z′ < Z: σ′ replaces σ as the current solution; we set Z = Z′ and return to step (2).

· If Z′ ≥ Z: σ remains the current solution, and we return to step (2).

4) Termination condition:

After a fixed number of iterations, the algorithm stops at a near-optimal solution.
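The four steps can be sketched as follows; the iteration limit and the random seed are our choices, and the objective is Vmax + Tmax + Emax as in (P5):

```python
import random

# A sketch of the Descent Method for (P5); iters and seed are our choices.
def descent(p, d, iters=1000, seed=0):
    rng = random.Random(seed)
    n = len(p)

    def cost(seq):                 # objective: Vmax + Tmax + Emax
        t = V = T = E = 0
        for i in seq:
            t += p[i]
            V = max(V, min(p[i], max(t - d[i], 0)))
            T = max(T, max(t - d[i], 0))
            E = max(E, max(d[i] - t, 0))
        return V + T + E

    # (1) initial solution from the MST rule
    sigma = sorted(range(n), key=lambda i: d[i] - p[i])
    z = cost(sigma)
    for _ in range(iters):
        # (2) neighbor: transpose two randomly chosen (not nec. adjacent) jobs
        a, b = rng.sample(range(n), 2)
        neigh = sigma[:]
        neigh[a], neigh[b] = neigh[b], neigh[a]
        zn = cost(neigh)
        if zn < z:                 # (3) accept only strictly improving moves
            sigma, z = neigh, zn
    return sigma, z                # (4) near-optimal solution after iters moves
```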

5.2. The Simulated Annealing (SA) Method

In this method, improving and neutral moves are always accepted, while deteriorating moves are accepted according to a given probability acceptance function.

The following steps describe the SA method [17]:

1) Initialization:

The initialization is the same as in DM, with objective function value Z.

2) A feasible neighbor of σ is generated by the same technique described in DM, giving σ′ and Z′.

3) Acceptance Test:

In this step, we calculate the difference between the current value Z and the new value Z′, ∆ = Z′ − Z; then we have:

· If ∆ ≤ 0, then σ′ is accepted as the new current solution; set Z = Z′ and go to step (2).

· If ∆ > 0, then σ′ is accepted with probability P(∆) = exp(−∆/t), where t is known as the temperature. The temperature t starts at a relatively high value and then gradually decreases as the algorithm progresses. We chose t1 = 40 as the initial temperature for the SA. Let m be the number of iterations; for each iteration j, 1 ≤ j ≤ m, a temperature tj is derived from [16] as:

tj = tj−1/(1 + B × tj−1), j = 2, …, m

where B = (t1 − 1)/(m × t1), t1 = 40, and m is the number of iterations.

4) The algorithm stops after a fixed number of iterations at a near-optimal solution.
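A sketch of the SA method above; the move and the starting solution are as in DM, the iteration count m and the seed are our choices, and B = (t1 − 1)/(m·t1) is our reading of the cooling constant (it drives the temperature from t1 = 40 down to about 1 after m steps):

```python
import math
import random

# A sketch of the SA method for (P5); m, seed, and the exact form of B
# are our assumptions.
def simulated_annealing(p, d, m=2000, t1=40.0, seed=0):
    rng = random.Random(seed)
    n = len(p)

    def cost(seq):                 # objective: Vmax + Tmax + Emax
        t = V = T = E = 0
        for i in seq:
            t += p[i]
            V = max(V, min(p[i], max(t - d[i], 0)))
            T = max(T, max(t - d[i], 0))
            E = max(E, max(d[i] - t, 0))
        return V + T + E

    sigma = sorted(range(n), key=lambda i: d[i] - p[i])  # MST start, as in DM
    z = cost(sigma)
    best, best_z = list(sigma), z
    B = (t1 - 1) / (m * t1)
    temp = t1
    for _ in range(m):
        a, b = rng.sample(range(n), 2)
        neigh = sigma[:]
        neigh[a], neigh[b] = neigh[b], neigh[a]
        delta = cost(neigh) - z
        # improving/neutral moves always accepted; worse with prob e^(-delta/t)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            sigma, z = neigh, z + delta
            if z < best_z:
                best, best_z = list(sigma), z
        temp = temp / (1 + B * temp)   # cooling schedule t_j = t_{j-1}/(1+B t_{j-1})
    return best, best_z
```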

5.3. Genetic Algorithm (GA)

A genetic algorithm is a general search and optimization method that works on a population of feasible solutions (individuals) to a given problem. The following steps describe the structure of the GA [18]:

1) Initialization:

The initial population can be constructed using heuristic methods. In this paper, we generated an initial population of m = 30 individuals: two of them using the MST rule and Lawler's algorithm, and the remaining ones randomly.

2) New population:

A new population is created by repeating the following substeps until the new population is completed.

a) Selection:

The individuals selected according to their fitness values will usually form the next generation's parents.

b) Crossover:

Homogeneous mixture crossover (HMX) [16] is defined as follows:

Mix the two parents uniformly into a list of 2n genes by interleaving them, taking the odd positions from the first parent and the even positions from the second parent. Then scan the mixture and separate the genes without repetition: the first occurrence of each gene k is marked 0 in the mask (m) and goes to the first child, while its repeated occurrence is marked 1 and goes to the second child, until the genes are exhausted. For example:

Parent (1): 1 3 2 5 4 9 6 8 7

Parent (2): 4 5 9 6 7 3 1 2 8

Mixture: 1 4 3 5 2 9 5 6 4 7 9 3 6 1 8 2 7 8

Mask (m): 0 0 0 0 0 0 1 0 1 0 1 1 1 1 0 1 1 1

Child (1): 1 4 3 5 2 9 6 7 8

Child (2): 5 4 9 3 6 1 2 7 8

c) Mutation:

Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of chromosomes (solutions) to the next. The pair-wise (swap) mutation swaps two genes of a solution to generate a new solution. For example:

1 3 2 5 4 9 6 8 7 swap → 1 3 6 5 4 9 2 8 7

d) Termination Condition:

The algorithm is terminated after a number of iterations.
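A compact sketch of the GA loop above. The population seeding follows the text (MST, Lawler's algorithm, and random permutations); for brevity, a standard one-point order-preserving crossover stands in for HMX, which is an assumption, and the pair-wise swap mutation follows substep (c):

```python
import random

# A GA sketch for (P5). The one-point order crossover is a stand-in for
# HMX (an assumption); pop_size, generations, and seed are our choices.
def genetic_algorithm(p, d, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    n = len(p)

    def cost(seq):  # Vmax + Tmax + Emax, the objective of (P5)
        t = V = T = E = 0
        for i in seq:
            t += p[i]
            V = max(V, min(p[i], max(t - d[i], 0)))
            T = max(T, max(t - d[i], 0))
            E = max(E, max(d[i] - t, 0))
        return V + T + E

    def lawler():  # Lawler's algorithm for 1//Vmax, used to seed the population
        remaining, t, rev = set(range(n)), sum(p), []
        while remaining:
            j = min(remaining, key=lambda i: (min(p[i], max(t - d[i], 0)), i))
            rev.append(j); remaining.remove(j); t -= p[j]
        return rev[::-1]

    mst = sorted(range(n), key=lambda i: d[i] - p[i])
    pop = [mst, lawler()]
    while len(pop) < pop_size:                 # remaining individuals: random
        s = list(range(n)); rng.shuffle(s); pop.append(s)

    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]          # selection: fittest half survive
        children = []
        while len(children) < pop_size - len(parents):
            par1, par2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point order crossover
            child = par1[:cut] + [g for g in par2 if g not in par1[:cut]]
            a, b = rng.sample(range(n), 2)     # pair-wise swap mutation
            child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = parents + children
    best = min(pop, key=cost)
    return best, cost(best)
```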

5.4. The Tree Type Heuristics Method (TTHM)

The branch and bound (BAB) method can be used to obtain an upper bound on the optimal value of the objective function when some of the possibly optimal partial schedules are left unexplored. The tree-type heuristic method [16] uses the BAB method without the backtracking procedure. In the primary step of this method, the lower bound (LB) is evaluated at all nodes in each level of the search tree; then some nodes within each level are chosen to branch from. Usually, one node is selected at each level, and the method stops at the first complete sequence of jobs, which is taken as the solution.

6. Experimental Results

6.1. Computational Results

Algorithm (3), the BAB method, and all local search algorithms (the Descent Method (DM), Simulated Annealing (SA), the Genetic Algorithm (GA), and the Tree Type Heuristics Method (TTHM)) are coded in MATLAB 9.12 (R2022a) and run on a personal computer with an 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00 GHz and 32.0 GB of RAM.

6.2. Test Results for All Algorithms

Table 1 displays the outcomes of using algorithm (3) to obtain a set of efficient solutions and the minimum sum of Vmax, Tmax, and Emax for the problem (P3) on samples of different numbers of jobs, with five experiments for each. The efficient solutions are compared with those obtained from the BAB method for n < 10, and with the complete enumeration method (CEM), which generates all solutions, for n < 7.

Table 2 displays the minimum sum of Vmax, Tmax, and Emax obtained by the local search heuristic methods on the same data as Table 1 and compares the results with the SUM obtained from Algorithm (3) and the BAB method.

Table 3 displays the minimum sum of Vmax, Tmax, and Emax obtained by the local search heuristic methods, and the SUM from Algorithm (3), for n ≥ 10.

Table 1. Efficient solutions and minimum sum of Vmax, Tmax, and Emax obtained by Algorithm (3) for problem (P3).

Table 2. Minimum sum of Vmax, Tmax, and Emax obtained by the Local Search Heuristic methods on the data of Table 1, compared with the SUM obtained from Algorithm (3) and BAB.

Table 3. Minimum sum of Vmax, Tmax, and Emax obtained by the Local Search Heuristic methods and the SUM from Algorithm (3) for n ≥ 10.

From our computational results using random data, we conclude the following:

1) The number of efficient solutions (points) found by Algorithm (3) is less than the number of jobs n.

2) Algorithm (3) finds most of the efficient points. This is clear from the results of Table 1: among the 50 test problems for n = 3, 4, ∙∙∙, 10, it misses the exact SUM only in the following cases:

· n = 4, experiment (4) gives SUM = 16, while the exact SUM = 14 is obtained by CEM and the BAB method.

· n = 7, experiment (3) gives SUM = 25, while the exact SUM = 24 is obtained by the BAB method.

· n = 8, experiment (2) gives SUM = 59, while the exact SUM = 58 is obtained by the BAB method.

· n = 8, experiment (4) gives SUM = 47, while the exact SUM = 46 is obtained by the BAB method.

· n = 8, experiment (5) gives SUM = 27, while the exact SUM = 26 is obtained by the BAB method.

· n = 9, experiment (1) gives SUM = 24, while the exact SUM = 22 is obtained by the BAB method.

· n = 9, experiment (4) gives SUM = 38, while the exact SUM = 37 is obtained by the BAB method.

· n = 10, experiment (1) gives SUM = 34, while the exact SUM = 33 is obtained by the BAB method.

· n = 10, experiment (2) gives SUM = 41, while the exact SUM = 38 is obtained by the BAB method.

· n = 10, experiment (4) gives SUM = 49, while the exact SUM = 48 is obtained by the BAB method.

· n = 10, experiment (5) gives SUM = 42, while the exact SUM = 41 is obtained by the BAB method.

3) Algorithm (3) can be used for solving problems of the form 1//F (Vmax, Tmax, Emax).

4) Table 2 compares DM, SA, GA, TTHM, Algorithm (3), and the BAB method. We found that the Local Search Heuristic methods give more efficient solutions than Algorithm (3) for n ≤ 10.

5) Table 3 shows that for n ≥ 10, Algorithm (3) gives more efficient solutions than the Local Search Heuristic methods as n grows large.

6) The average time (in seconds) over five experiments when n = 5000 is 2.5285 for Algorithm (3), 6.3790 for DM, 6.8763 for GA, 96.7463 for SA, and 2351.9938 for TTHM.

7) Since problem (P5) is a particular case of problem (P3), Algorithm (3) can be used to find near-optimal solutions without the BAB method, in a reasonable time for large n.

8) The Local Search Heuristic approaches (DM, SA, and GA) are effective means of obtaining efficient solutions in a reasonable amount of time. Table 3 demonstrates the effectiveness of these approaches and shows only a small variation in the results compared to Algorithm (3).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Azizoglu, M., Kondakci, S. and Koksalan, M. (2003) Single Machine Scheduling with Maximum Earliness and Number Tardy. Computers & Industrial Engineering, 45, 257-268.
https://doi.org/10.1016/S0360-8352(03)00034-2
[2] Glass, C.A. and Potts, C.N. (1996) A Comparison of Local Search Methods for Flow Shop Scheduling. Annals of Operations Research, 63, 489-509.
https://doi.org/10.1007/BF02156631
[3] Chang, P. and Su, L. (2001) Scheduling Jobs on One Machine to Minimize the Maximum Lateness with a Minimum Number of Tardy Jobs. Computers & Industrial Engineering, 40, 49-60.
https://doi.org/10.1016/S0360-8352(01)00035-3
[4] Hoogeveen, J.A. and van de Velde, S.L. (1995) Minimizing Total Completion Time and Maximum Cost Simultaneously Is Solvable in Polynomial Time. Operations Research Letters, 17, 205-208.
https://doi.org/10.1016/0167-6377(95)00023-D
[5] Moslehi, G., Moghaddam, R., Vasei, M. and Azaron, A. (2005) Optimal Scheduling for a Single Machine to Minimize the Sum of Maximum Earliness and Tardiness Considering Idle Insert. Applied Mathematics and Computation, 167, 1430-1450.
https://doi.org/10.1016/j.amc.2004.08.022
[6] Abdul-Razaq, T.S. and Abdul-Razaq, K.F. (2016) Algorithms for Multicriteria Scheduling Problems. Basrah Journal of Science (A), 34, 1-12.
[7] Van Wassenhove, L.N. and Gelders, L.F. (1980) Solving a Bicriterion Scheduling Problem. European Journal of Operational Research, 4, 42-48.
https://doi.org/10.1016/0377-2217(80)90038-7
[8] Hoogeveen, J.A. (1992) Single-Machine Bicriteria Scheduling. Ph.D. Thesis, Center for Mathematics and Computer Science, Amsterdam.
[9] Jackson, J.R. (1955) Scheduling a Production Line to Minimize Maximum Tardiness. Ph.D. Thesis, University of California, Los Angeles.
[10] Jouni, L. (2000) Multi-Objective Nonlinear Pareto-Optimization. Ph.D. Thesis, Lappeenranta University of Technology, Lappeenranta.
[11] Lawler, E.L. (1973) Optimal Sequencing of a Single Machine Subject to Precedence Constraint. Management Science, 19, 544-546.
https://doi.org/10.1287/mnsc.19.5.544
[12] Hoogeveen, J.A. (2005) Invited Review Multicriteria Scheduling. European Journal of Operational Research, 167, 592-623.
https://doi.org/10.1016/j.ejor.2004.07.011
[13] Hoogeveen, J.A. (1996) Single Machine Scheduling to Minimize a Function of Two or Three Maximum Cost Criteria. Journal of Algorithms, 21, 415-433.
https://doi.org/10.1006/jagm.1996.0051
[14] Aarts, E.H.L. and Lenstra, J.K. (1997) Local Search in Combinatorial Optimization. John Wiley and Sons, Chichester.
[15] Anderson, E.J., Glass, C.A. and Potts, C.N. (1995) Applications of Local Search in Machine Scheduling. Ph.D. Thesis, University of Southampton, Southampton.
[16] Salman, F.M. (2008) Exact and Local Search Methods for Single Machine Problem. Master’s Thesis, University of Al-Mustansiriyah, Baghdad.
[17] Kirkpatrick, S., Gelatt Jr., C.D. and Vecchi, M.P. (1983) Optimization by Simulated Annealing. Science, 220, 671-680.
https://doi.org/10.1126/science.220.4598.671
[18] Hummady, L.Z. (2005) Using Genetic Algorithm to Solve (NP-Complete) Problem. Master’s Thesis, University of Al-Mustansiriyah, Baghdad.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.