Lagrangian Relaxation Method for Multiobjective Optimization Methods: Solution Approaches

Abstract

This paper introduces the Lagrangian relaxation method for solving multiobjective optimization problems. The relaxation method requires an appropriate technique for determining the Lagrangian multipliers that lead to the optimal solution of the problem. Our analysis aims to find a suitable technique for generating these multipliers, which are then used in the relaxation method to solve multiobjective optimization problems; we propose a search-based technique for generating them. We choose a suitable and well-known scalarization method that transforms the original multiobjective problem into a scalar objective optimization problem, and we then solve this scalar problem using Lagrangian relaxation techniques. A brute-force technique is used to sort out the optimum solutions. Finally, we analyze the results and recommend efficient methods.

Cite as:

Alam, H. (2022) Lagrangian Relaxation Method for Multiobjective Optimization Methods: Solution Approaches. Journal of Applied Mathematics and Physics, 10, 1619-1630. doi: 10.4236/jamp.2022.105112.

1. Introduction

Many real-life problems become nonsmooth and large-scale when formulated mathematically, and few approaches are available to solve such problems. In these cases, Lagrangian relaxation is used to deal with problems that are nonsmooth and difficult to solve. The method can be viewed as an extension of the Lagrange multiplier method of nonlinear programming. The theoretical foundations of the Lagrangian relaxation method, introduced in the 1970s, are considered the backbone of a number of large-scale applications. Surveys of the Lagrangian relaxation method were carried out by Fisher [1], Geoffrion [2] and Shapiro [3]. Later, Fisher [1] and Held et al. [4] provided methodologies for using the Lagrangian relaxation approach in practice.

Many difficult integer problems can be transformed into relatively simple problems by Lagrangian relaxation. To exploit this transformation, we create a Lagrangian problem in which the complicating constraints are replaced by a penalty term in the objective function involving the amount of violation of the constraints and their dual variables. The Lagrangian problem is relatively easy to solve and provides a lower bound (for a minimization problem) on the optimal value of the original problem. It can therefore be used in place of a linear programming relaxation to provide bounds in a branch-and-bound algorithm. Moreover, the solution of the Lagrangian problem, which indicates the dual solution of the problem, can often be adjusted heuristically to obtain a good approximation of the optimal solution of the primal problem.

In our analysis, we transform the multiple objective functions into a single objective function; this is called a scalarization technique for multiobjective optimization problems. The significance of scalarization techniques is that once we transform the original problem into an auxiliary one, we can utilize the existing techniques for single-objective optimization problems available in the literature. This paper proposes the Lagrangian relaxation problem for multiobjective optimization problems. Furthermore, we generate Lagrange multipliers that can be used in algorithms to solve the proposed problems. We use the brute-force technique to solve the problems, and the Lagrangian parameters are generated by a discretization technique with equal step size.

In Section 2, we introduce the Lagrangian relaxation technique; in Section 3, the multiobjective optimization problem and the scalarization technique are discussed. Section 4 illustrates the use of the Lagrangian relaxation method to solve the scalarized problems.

2. Lagrangian Relaxation Technique

We formulate an integer linear programming problem:

(P) $Z = \min \; cx$,

subject to

$Ax = b, \quad Px \le d, \quad x \ge 0$ and integer,

where $x$ is $n \times 1$, $b$ is $m \times 1$, $d$ is $k \times 1$, and all other matrices have conformable dimensions.

We assume that the constraints of (P) have been partitioned into the two sets $Ax = b$ and $Px \le d$. We now introduce a new objective function that combines the original objective $cx$ with the set of constraints $Ax = b$; thus, Problem (P) is transformed into the auxiliary problem

(AP) $Z_D(u) = \min \; cx + u(Ax - b)$, subject to $Px \le d$ and $x \ge 0$,

where $u = (u_1, \dots, u_m)$ is a vector of Lagrange multipliers. Problem (AP) is easier to solve than (P).

For convenience, we assume that the feasible set of (P) is

$X_P = \{ x \mid Ax = b, \; Px \le d, \; x \ge 0 \text{ and } x \text{ integer} \}$,

and the feasible set of Problem (AP) is

$X_{AP} = \{ x \mid Px \le d, \; x \ge 0 \text{ and } x \text{ integer} \}$, with $X_P \subseteq X_{AP}$.

Then $Z_D(u)$ is finite for all $u$. It is easy to extend the development when these assumptions are violated or when inequality constraints are included in the set to be dualized.

It is well known that $Z_D(u) \le Z$. This is easy to show by taking an optimal solution $x^*$ of (P) and observing that

$Z_D(u) \le cx^* + u(Ax^* - b) = Z$.

The inequality in the above relation follows from the definition of $Z_D(u)$, and the equality from $Z = cx^*$ and $Ax^* - b = 0$.
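As a small illustration of this bound (a toy instance of ours, not taken from the references): consider (P) given by $\min \, x$ subject to $x = 2$, $0 \le x \le 3$, $x$ integer, so that $Z = 2$. Dualizing $x = 2$ gives

$Z_D(u) = \min_{x \in \{0,1,2,3\}} \; x + u(x - 2),$

which equals $-2u$ for $u \ge -1$ (minimizer $x = 0$) and $3 + u$ for $u < -1$ (minimizer $x = 3$). In both cases $Z_D(u) \le 2 = Z$, with equality attained at $u = -1$.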

If $Ax = b$ is replaced by $Ax \le b$ in (P), then we require $u \ge 0$ and the relation becomes

$Z_D(u) \le cx^* + u(Ax^* - b) \le Z$,

since $Z = cx^*$, $u \ge 0$ and $Ax^* - b \le 0$. Similarly, for $Ax \ge b$ we require $u \le 0$ for $Z_D(u) \le Z$ to hold.

In general, it is not possible to guarantee finding $u$ for which $Z_D(u) = Z$, but this frequently happens for particular problem classes. The fact that $Z_D(u) \le Z$ allows (AP) to be used in place of (P) to provide lower bounds in a branch-and-bound algorithm for (P). While this is the most obvious use of (AP), it has a number of other uses: it can be a medium for selecting branching variables and choosing the next branch to explore, and good feasible solutions to (P) can frequently be obtained by perturbing nearly feasible solutions to (AP). Finally, Lagrangian relaxation has recently been used in [5] as an analytic tool for establishing worst-case bounds on the performance of certain heuristics.

Determining Lagrangian Parameters u

One of the important steps is finding a suitable Lagrangian parameter u that provides the optimal solution of (AP).

The best choice for finding u is to solve the dual problem of (AP), which is stated as follows (see [4] for details):

$Z = \max \; w$,

subject to

$w \le cx^t + u(Ax^t - b), \quad t = 1, \dots, T$,

$u \ge 0$,

where $x^t$, $t = 1, \dots, T$, denote the solutions of (AP).
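To make this master problem concrete, the following is a minimal sketch (ours, not from the paper, which does not use this construction) of how the above LP could be solved with scipy.optimize.linprog for a finite collection of trial points $x^t$; `c_vals` and `subgrads` hold the quantities $cx^t$ and $Ax^t - b$:

```python
import numpy as np
from scipy.optimize import linprog

def dual_master(c_vals, subgrads):
    """Solve max w s.t. w <= c x^t + u(A x^t - b) for all t, u >= 0.

    c_vals[t]   -- the scalar c x^t for trial point x^t
    subgrads[t] -- the vector A x^t - b for trial point x^t
    Assumes enough trial points are supplied for the LP to be bounded.
    """
    T, m = len(c_vals), len(subgrads[0])
    # Decision vector z = (w, u_1, ..., u_m); maximize w == minimize -w.
    obj = np.concatenate(([-1.0], np.zeros(m)))
    # Each cut w - (A x^t - b) u <= c x^t becomes one row of A_ub.
    A_ub = np.hstack([np.ones((T, 1)), -np.asarray(subgrads, dtype=float)])
    bounds = [(None, None)] + [(0.0, None)] * m      # w free, u >= 0
    res = linprog(obj, A_ub=A_ub, b_ub=np.asarray(c_vals, dtype=float),
                  bounds=bounds)
    return res.x[0], res.x[1:]                       # optimal w and u
```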

Held, Wolfe and Crowder Approach (1974) [4]:

We recall here the main steps for calculating u; details of this setting can be found in [4]. One determines a sequence of values for u via

$u^{k+1} = \max \{ 0, \; u^k + t_k (Ax^k - b) \}$,

where $t_k$ is the step size and $x^k$ is the optimal solution of (AP) at iteration k.

The step size $t_k$ can be determined as follows:

$t_k = \dfrac{\lambda_k \left( Z^* - Z_D(u^k) \right)}{\left\| Ax^k - b \right\|^2}$,

where $Z^*$ is the value of a known feasible solution. The sequence $\lambda_k$ is chosen from the interval $(0, 2]$, and $\lambda_k$ is reduced whenever $Z_D(u^k)$ fails to increase.
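A minimal sketch (ours) of this subgradient scheme is given below; `solve_AP` is a hypothetical oracle returning a minimizer of (AP) and its value $Z_D(u)$ for a fixed $u$, and the halving rule for $\lambda_k$ is one common choice rather than the specific rule of [4]:

```python
import numpy as np

def subgradient_update(solve_AP, A, b, Z_star, u0, max_iter=100):
    """Held-Wolfe-Crowder style subgradient iteration on u.

    solve_AP(u) -- hypothetical oracle: returns (x_k, Z_D(u)) for fixed u
    Z_star      -- objective value of a known feasible solution of (P)
    """
    u = np.asarray(u0, dtype=float)
    lam, best_ZD = 2.0, -np.inf          # lambda_k taken from (0, 2]
    for _ in range(max_iter):
        x, Z_D = solve_AP(u)             # solve the relaxed problem (AP)
        g = A @ x - b                    # subgradient of Z_D at u
        if np.allclose(g, 0.0):          # dualized constraints satisfied
            break
        if Z_D <= best_ZD:               # Z_D failed to increase:
            lam /= 2.0                   #   shrink lambda (one common rule)
        best_ZD = max(best_ZD, Z_D)
        t = lam * (Z_star - Z_D) / (g @ g)      # step size t_k
        u = np.maximum(0.0, u + t * g)          # projected update u^{k+1}
    return u
```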

In this paper, we have not used the Held et al. approach. Instead, we generate a matrix of Lagrangian multipliers and solve the problem for each multiplier. At the end, we recommend the multipliers that lead to the optimum solution.

The following Table 1 depicts the chosen values of Lagrangian multipliers.

3. Multiobjective Optimization Problems

Many real-life problems deal with multiple objectives and intend to optimize all criteria simultaneously. We now formulate the multiobjective optimization problem as follows.

(MOP) minimize

$f(x) = (f_1(x), f_2(x), \dots, f_l(x))$,

subject to the set

$A = \{ x \in \mathbb{R}^n \mid g(x) \le 0 \}$.

Table 1. Lagrangian multipliers generated with step size 0.1 within the interval $0 \le u \le 1$.
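As a minimal illustration (ours), such a grid of multipliers can be generated as follows:

```python
import numpy as np

# Grid of Lagrangian multipliers on [0, 1] with step size 0.1, as in Table 1.
u_grid = np.round(np.arange(0.0, 1.0 + 1e-9, 0.1), 1)
print(u_grid)    # [0.  0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. ]
```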

The solution of a multiobjective optimization problem is called an efficient solution or a weak efficient solution [6]. The relation between the two is that every efficient point is a weak efficient point, but the converse does not hold. There are a number of ways to solve (MOP), and one of the popular approaches is the scalarization technique: when a multiobjective problem is solved by combining its multiple objectives into a single-objective scalar function, the approach is known as the scalarization method.
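For reference, the standard definitions read as follows in the notation of (MOP): a point $x^* \in A$ is efficient if there is no $x \in A$ with $f_i(x) \le f_i(x^*)$ for all $i$ and $f_j(x) < f_j(x^*)$ for at least one $j$; it is weakly efficient if there is no $x \in A$ with $f_i(x) < f_i(x^*)$ for all $i = 1, \dots, l$. The implication stated above is then immediate, since a strict improvement in every component contains, in particular, an improvement in one component.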

The kth-objective weighted-constraint problem introduced in [7] [8] is one of the well-known scalarization techniques. The method applies to problems with a disconnected Pareto front and problems with a disconnected feasible set, under the mild assumptions that the objective functions are continuous and bounded from below with a known lower bound. For each fixed k, the kth objective is minimized while the other weighted objective functions are incorporated as constraints. The problem structure is as follows:

(Sc-MOP)

$\min \; w_k f_k(x)$,

subject to

$w_i f_i(x) \le w_k f_k(x), \quad i = 1, \dots, l; \; i \ne k$,

$w_i > 0, \quad \sum_{i=1}^{l} w_i = 1$,

$g_j(x) \le 0, \quad j = 1, \dots, n$, and $w \in W := \{ w \in E^l \mid w_i \ge 0, \; \sum_{i=1}^{l} w_i = 1 \}$.

If $\bar{x}$ is a weak efficient point of the original problem (MOP), then $\bar{x}$ solves (Sc-MOP) for each fixed k. Conversely, if a point $\bar{x}$ solves all subproblems of (Sc-MOP), then it is an efficient solution of the original problem (MOP). This is a strong condition, and it can be relaxed: if $x^k$, $k = 1, \dots, l$, solves the kth subproblem of (MOP) and Proposition 3.3 in [5] holds, then $x^k$, $k = 1, \dots, l$, are weak efficient solutions of the original problem (MOP).

4. Lagrangian Relaxation Technique for (MOP)

We now employ Lagrangian relaxation techniques to solve the subproblems of (Sc-MOP). We relax the constraints

$w_i f_i(x) \le w_k f_k(x), \quad i = 1, \dots, l; \; i \ne k$.

We define an $(l-1)$-vector of nonnegative multipliers $p \in \mathbb{R}^{l-1}$, $p \ge 0$, and add the penalty terms

$p_i \left( w_i f_i(x) - w_k f_k(x) \right), \quad i = 1, \dots, l; \; i \ne k$,

to the objective function of (Sc-MOP). Therefore, the mathematical formulation of the revised kth subproblem is

(LRBMOP)

$\min \; w_k f_k(x) + \sum_{i=1, \, i \ne k}^{l} p_i \left[ w_i f_i(x) - w_k f_k(x) \right]$,

subject to

$w_i > 0, \quad \sum_{i=1}^{l} w_i = 1$,

$g_j(x) \le 0, \quad j = 1, \dots, n$,

$p \ge 0$.

This can be reformulated as

$\min \; \left( 1 - \sum_{i \ne k} p_i \right) w_k f_k(x) + \sum_{i \ne k} p_i \, w_i f_i(x)$,

subject to

$w_i > 0, \quad \sum_{i=1}^{l} w_i = 1$,

$g_j(x) \le 0, \quad j = 1, \dots, n$,

$p \ge 0$.
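For instance, with $l = 2$ and $k = 1$, the reformulation reduces to

$\min \; (1 - p_2) \, w_1 f_1(x) + p_2 \, w_2 f_2(x)$,

which is exactly the form of the relaxed problems (LR-1) and (LR-2) (with the roles of the indices interchanged) used for Example 1 below.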

To illustrate these concepts, we consider the following example.

Example 1. We take $n = l = 2$ and consider the problem

$\min \{ x_1, x_2 \}, \quad X = \{ (x_1, x_2) \mid (x_1 - 1)^2 + (x_2 - 1)^2 - 1 \le 0 \}$.

Let us take $w_1 = 0.48$ and $w_2 = 0.52$.

We now employ the Lagrangian relaxation approach in (Sc-MOP) to solve Example 1.

(Subproblem-1)

$\min \; w_1 f_1(x)$,

subject to

$w_2 f_2(x) \le w_1 f_1(x)$,

$w_i > 0, \quad \sum_{i=1}^{2} w_i = 1$,

$x \in X$.

We construct the auxiliary problem of Subproblem-1 using the Lagrangian relaxation technique given below.

(LR-1)

$\min \; w_1 x_1 + p_2 (w_2 x_2 - w_1 x_1)$,

subject to $x \in X$.

Similarly, (Subproblem-2)

$\min \; w_2 f_2(x)$,

subject to

$w_1 f_1(x) \le w_2 f_2(x)$,

$w_i > 0, \quad \sum_{i=1}^{2} w_i = 1$,

$x \in X$.

The Lagrangian relaxation of the above problem is

(LR-2)

$\min \; w_2 x_2 + p_1 (w_1 x_1 - w_2 x_2)$,

subject to $x \in X$.

Solve (LR-1): The Lagrangian multiplier $p_2$ is supplied to the algorithm. First, we form a grid of values for $p_2$ with step size 0.1 on the interval $0 \le p_2 \le 1$ (see Table 1). We solve (LR-1) for each multiplier and record the optimum solution. In the algorithm, the brute-force technique is used to identify the minimum value of the problem. The implementation is carried out in MATLAB, and the differentiable solvers Ipopt [9] and SCIP [10] are used to solve each of the subproblems. The solution value associated with each multiplier is shown in Table 2.
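The paper's implementation uses MATLAB with Ipopt and SCIP; purely as an illustrative sketch, the following Python code runs the same brute-force loop with scipy's local SLSQP solver standing in for those solvers:

```python
import numpy as np
from scipy.optimize import minimize

w1, w2 = 0.48, 0.52                                  # weights from Example 1

# Feasible set X of Example 1, written as g(x) >= 0 for SLSQP:
# 1 - (x1 - 1)^2 - (x2 - 1)^2 >= 0.
ball = {"type": "ineq",
        "fun": lambda x: 1.0 - (x[0] - 1.0)**2 - (x[1] - 1.0)**2}

def solve_lr1(p2):
    """Solve (LR-1) for a fixed multiplier p2."""
    obj = lambda x: w1 * x[0] + p2 * (w2 * x[1] - w1 * x[0])
    res = minimize(obj, x0=[1.0, 1.0], constraints=[ball], method="SLSQP")
    return res.x, res.fun

# Brute force over the multiplier grid of Table 1.
solutions = {p2: solve_lr1(p2)
             for p2 in np.round(np.arange(0.0, 1.01, 0.1), 1)}
p2_min = min(solutions, key=lambda p: solutions[p][1])
print(p2_min, *solutions[p2_min])    # multiplier with the smallest value
```

For $p_2 = 0$, for example, this sketch should recover the point $(0, 1)$ noted in Remark 1 below; a global solver such as Ipopt or SCIP would be preferable for problems where a local method can stall.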

Remark 1

The minimum value in Table 2 is found for $p_2 = 0$ (Lagrangian multiplier), and the corresponding solution is $x_1 = 0$ and $x_2 = 1$, shown in a square box (Yellow) in Figure 1. Cell (1, 1) of Table 1 and of Table 2 (Yellow in both tables) contains the respective multiplier and solution. However, $x_1 = 0$, $x_2 = 1$ is not a feasible solution of (Subproblem-1); that is, it violates $w_2 x_2 - w_1 x_1 \le 0$. Therefore, we need to integrate either brute force with the constraint condition or the Held et al. approach [4] into the algorithm to obtain the best multiplier and thus the optimal solution of (Subproblem-1). The optimal solution of this problem is $x_1 = 0.3072$ and $x_2 = 0.2789$, and the optimum value is 1.4163 (Green in both Table 1 and Table 2).
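One way to realize "brute force with the constraint condition" (a sketch of ours, continuing the snippet above) is to discard grid solutions that violate the relaxed constraint and compare the remaining candidates by the objective of (Subproblem-1):

```python
# Keep only grid solutions satisfying w2*x2 - w1*x1 <= 0, the relaxed
# constraint of (Subproblem-1), with a small tolerance for round-off.
feasible = {p2: x for p2, (x, _) in solutions.items()
            if w2 * x[1] - w1 * x[0] <= 1e-8}

# Among the feasible candidates, compare the subproblem objective w1*x1.
p2_best = min(feasible, key=lambda p: w1 * feasible[p][0])
print(p2_best, feasible[p2_best])
```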

Figure 2, given below, shows the optimal solution of (Subproblem-1), which also satisfies the constraint $w_2 x_2 - w_1 x_1 \le 0$.

Now we solve the other subproblem, (Subproblem 2), whose relaxation reads

$\min \; w_2 x_2 + p_1 (w_1 x_1 - w_2 x_2)$,

subject to $x \in X$.

Similar Lagrangian multipliers (Table 1) are supplied to the algorithm to solve (Subproblem 2), and solutions are obtained for the associated multipliers.

Table 2. Solution set of (LR-1) obtained for each multiplier stated in Table 1.

Figure 1. The square (Yellow) box depicts the optimum solution (which is not feasible for (Subproblem-1)), and the small circles are the solutions for the other Lagrangian multipliers.

Figure 2. The square (Yellow) box depicts the optimum solution (which is also feasible for (Subproblem-1)), and the small circles are the solutions for the other Lagrangian multipliers.

In the last step, the brute-force method is applied to identify the optimum solution of (Subproblem 2). Figure 3 and Table 3 depict the solutions and the optimum solution of (Subproblem 2). The implementation again requires MATLAB and the solvers mentioned above.

Table 3. Solution list of (Subproblem 2) obtained for each multiplier listed in Table 1.

Figure 3. The square (Yellow) box depicts the optimum solution of (Subproblem 2), and the small circles are the solutions for the respective Lagrangian multipliers.

Remark 2

The optimum solution of (Subproblem 2) is found for $p_1 = 1$ (Lagrangian multiplier); the solution is $x_1 = 1$ and $x_2 = 0$, shown in a square box (Yellow) in Figure 3, and the minimum value is 0. Cell (11, 10) of Table 1 and of Table 3 (Green in both tables) contains the respective multiplier and solution. However, $x_1 = 1$, $x_2 = 0$ is not a feasible solution of (Subproblem 2); that is, it violates the constraint $w_2 x_2 - w_1 x_1 \ge 0$. Therefore, we need to incorporate either brute force with the constraint condition or the Held et al. approach [4] into the algorithm to obtain the best multiplier and thus the optimal solution of (Subproblem-2). The optimal solution (see Figure 4) of this problem is $x_1 = 0.2929$ and $x_2 = 0.2929$, and the optimum value is 1.462 (Green in both Table 1 and Table 3).

Now we solve both subproblems for a set of weight vectors $w \in W$. This provides the solution of Example 1, and the collected solutions construct the Pareto front (see Figure 5) of the problem.
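To illustrate this weight sweep (our sketch; for brevity it solves each (Sc-MOP) subproblem directly with a local solver, rather than via the relaxation and the solvers used in the paper):

```python
import numpy as np
from scipy.optimize import minimize

def front_point(w1):
    """Solve the k = 1 subproblem of (Sc-MOP) for Example 1 at weight (w1, 1 - w1)."""
    w2 = 1.0 - w1
    cons = [{"type": "ineq",          # x in X: 1 - (x1-1)^2 - (x2-1)^2 >= 0
             "fun": lambda x: 1.0 - (x[0] - 1.0)**2 - (x[1] - 1.0)**2},
            {"type": "ineq",          # w2 f2(x) <= w1 f1(x)
             "fun": lambda x: w1 * x[0] - w2 * x[1]}]
    res = minimize(lambda x: w1 * x[0], x0=[0.5, 0.5],
                   constraints=cons, method="SLSQP")
    return res.x

# Sweep the weights to trace an approximate Pareto front of Example 1.
front = np.array([front_point(w1) for w1 in np.linspace(0.05, 0.95, 19)])
```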

Remark 3

The implementation of the Held et al. [4] algorithm is quite computationally expensive. The parameter $\lambda$, which lies in the interval $(0, 2]$, needs to be chosen cautiously, and this is required for each subproblem as well as for each given weight.

Figure 4. The square (Yellow) box depicts the optimum solution (which is also feasible for (Subproblem-2)), and the small circles are the solutions for the other Lagrangian multipliers.

Figure 5. The circles (Blue) depict the efficient solutions of Example 1, which also approximate the Pareto front.

Figure 5, given below, demonstrates the Pareto front obtained by the Lagrangian relaxation approach applied to (Subproblem 1) and (Subproblem 2).

5. Conclusion

The Lagrangian relaxation method for solving multiobjective optimization problems has been introduced in this paper. We began by selecting an appropriate and well-known scalarization method that transforms the original multiobjective problem into a scalar objective optimization problem. We then solved this scalar objective problem using Lagrangian relaxation techniques. We supplied grids of multipliers to the algorithm, followed by the application of brute-force techniques to sort out the optimum solutions. Finally, we analyzed the results and recommended efficient methods.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Fisher, M.L. (1981) The Lagrangian Relaxation Method for Solving Integer Programming Problems. Management Science, 27, 1-18.
https://doi.org/10.1287/mnsc.27.1.1
[2] Geoffrion, A.M. (1974) Lagrangian Relaxation for Integer Programming. Mathematical Programming Study, 2, 82-114.
https://doi.org/10.1007/BFb0120690
[3] Shapiro, J.F. (1979) Mathematical Programming: Structures and Algorithms. John Wiley, New York.
[4] Held, M., Wolfe, P. and Crowder, H.P. (1974) Validation of Subgradient Optimization. Mathematical Programming, 6, 62-88.
https://doi.org/10.1007/BF01580223
[5] Rizvi, M.M., Faruque, H.S.A. and Ray, G.C. (2021) Lagrangian Relaxation Method in Approximating the Pareto Front of Multiobjective Optimization Problems. GANIT: Journal of Bangladesh Mathematical Society, 40, 126-133.
https://doi.org/10.3329/ganit.v40i2.51315
[6] Xing, W. and Wu, B. (2012) Homotopy Continuous Method for Weak Efficient Solution of Multiobjective Optimization Problem with Feasible Set Unbounded Condition. Applied Mathematics, 3, 765-771.
https://doi.org/10.4236/am.2012.37114
[7] Burachik, R.S., Kaya, C.Y. and Rizvi, M.M. (2017) A New Scalarization Technique and New Algorithms to Generate Pareto Fronts. SIAM Journal on Optimization, 27, 1010-1034.
https://doi.org/10.1137/16M1083967
[8] Burachik, R.S., Kaya, C.Y. and Rizvi, M.M. (2014) A New Scalarization Technique to Approximate Pareto Fronts of Problems with Disconnected Feasible Sets. Journal of Optimization Theory and Applications, 162, 428-446.
https://doi.org/10.1007/s10957-013-0346-0
[9] Wächter, A. and Biegler, L.T. (2006) On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming. Mathematical Programming, 106, 25-57.
https://doi.org/10.1007/s10107-004-0559-y
[10] Gleixner, A., Eifler, L., Gally, T., Gamrath, G., Gemander, P., Gottwald, R.L., Hendel, G., Hojny, C., Koch, T., Miltenberger, M., Müller, B., Pfetsch, M.E., Puchert, C., Rehfeldt, D., Schlösser, F., Serrano, F., Shinano, Y., Viernickel, J.M., Vigerske, S., Weninger, D., Witt, J.T. and Witzig, J. (2017) The SCIP Optimization Suite 5.0. Zuse Institute Berlin, ZIB-Report 17-61, December.
