A Class of Continuous Separable Nonlinear Multidimensional Knapsack Problems

Abstract

The nonlinear multidimensional knapsack problem is defined as the minimization of a convex function subject to multiple linear constraints. Methods developed for general nonlinear programming problems are often applied to nonlinear multidimensional knapsack problems, but they are inefficient or limited because most of them do not exploit the structure of knapsack problems. In this paper, by establishing structural properties of the continuous separable nonlinear multidimensional knapsack problem, we develop a multi-tier binary search method for solving continuous nonlinear multidimensional knapsack problems with general structure. Its computational complexity is polynomial in the number of variables. We present two examples to illustrate the general applicability of our method and report statistical results that show its effectiveness.

Share and Cite:

Zhang, B., Lin, Z. and Wang, Y. (2018) A Class of Continuous Separable Nonlinear Multidimensional Knapsack Problems. American Journal of Operations Research, 8, 266-280. doi: 10.4236/ajor.2018.84015.

1. Introduction

The nonlinear multidimensional knapsack problem is defined as minimizing a convex function subject to multiple linear constraints. The nonlinear knapsack problem is a class of nonlinear program, and some methods designed for general nonlinear programming can be applied to solve nonlinear multidimensional knapsack problems. General nonlinear programming problems have been studied intensively over the last decades, and various methods have been developed, such as the Newton method, the branch-and-bound method, the interior point method, sequential quadratic programming and the filter method. These methods are designed for general nonlinear programming problems, however, and some of them are inefficient or limited for solving nonlinear knapsack problems because they do not exploit the structure of the knapsack problems.

Generally, it is much faster and more reliable to solve knapsack problems with specialized methods than with standard methods. Many researchers have studied solution methods for nonlinear knapsack problems that exploit the specialized knapsack structure, mostly for problems with a single constraint. Two basic specialized methods are mainly applied to the single-constraint nonlinear knapsack problem: the multiplier search method and the pegging method. Recently, new methods have been proposed for solving the single-constraint nonlinear knapsack problem efficiently. Zhang and Hua developed a united method for solving a class of continuous separable nonlinear knapsack problems. Kiwiel developed a breakpoint searching method for the continuous quadratic knapsack problem. Sharkey et al. studied a general class of nonlinear non-separable continuous knapsack problems.

Most research on nonlinear knapsack problems has addressed one-dimensional problems with continuous or integer variables, and the proposed methods cannot be directly extended to multidimensional problems. Some researchers have attempted to solve multidimensional problems with integer-valued variables. Morin and Marsten first studied nonlinear multidimensional knapsack problems and developed the imbedded state space approach. Others investigated the efficiency of alternative methods, such as the smart greedy method, cutting methods, branch-and-bound and branch-and-cut. Further research studied applications of the multidimensional knapsack problem, e.g., multi-product newsvendor problems with multiple constraints. Continuous separable nonlinear multidimensional knapsack problems with general structure have not been well studied because of their complexity, and specialized methods for them are very limited.

This paper establishes structural properties of the continuous separable nonlinear multidimensional knapsack problem and develops a multi-tier binary search method for solving a class of continuous nonlinear multidimensional knapsack problems with general structure. The computational complexity is polynomial in the number of variables. We present two examples to illustrate the application of our method, and a statistical study with randomly generated instances of different problem sizes is reported to show its effectiveness.

The paper is organized as follows. Section 2 describes the nonlinear multidimensional knapsack problem. Section 3 studies the structural properties of the problem and develops the algorithm. Section 4 presents the illustrative examples and the statistical results. Finally, concluding remarks are given in Section 5. All proofs are listed in the Appendix.

2. Problem Formulation

The continuous separable nonlinear multidimensional knapsack problem studied in this paper is as follows (denoted as problem P):

$\text{Min}f\left(x\right)=\underset{i=1}{\overset{N}{\sum }}{f}_{i}\left({x}_{i}\right)$ , (1)

Subject to

$\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}\le {C}_{j},j=1,\cdots ,M,$ (2)

${l}_{i}\le {x}_{i}\le {u}_{i},i=1,\cdots ,N$. (3)

The notation used in this paper is listed in Table 1.

In problem P, all objective functions ${f}_{i}\left({x}_{i}\right),i=1,\cdots ,N$ are convex and differentiable, the unit resource coefficients satisfy ${c}_{i,j}>0$ for all $i=1,\cdots ,N,j=1,\cdots ,M$ , the resource capacities satisfy ${C}_{j}>0$ for all $j=1,\cdots ,M$ , and the lower and upper bounds satisfy $0\le {l}_{i}<{u}_{i}$ for all $i=1,\cdots ,N$.
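As a concrete illustration, the data of problem P can be encoded directly. The following sketch sets up a small hypothetical instance (all numbers are invented for illustration, not taken from the paper) with quadratic objectives and checks the feasibility of a candidate point against Equations (2) and (3).

```python
import numpy as np

# Hypothetical 3-variable, 2-constraint instance of problem P with
# quadratic objectives f_i(x_i) = (x_i - b_i)^2.
b = np.array([4.0, 6.0, 5.0])          # unconstrained minimizers of f_i
c = np.array([[1.0, 2.0],              # c[i, j]: usage of resource j by x_i
              [2.0, 1.0],
              [1.0, 1.0]])
C = np.array([10.0, 12.0])             # resource capacities C_j
l = np.array([0.0, 0.0, 0.0])          # lower bounds l_i
u = np.array([8.0, 8.0, 8.0])          # upper bounds u_i

def objective(x):
    """Separable objective of Equation (1)."""
    return float(np.sum((x - b) ** 2))

def feasible(x):
    """Check the knapsack constraints (2) and bound constraints (3)."""
    return bool(np.all(c.T @ x <= C) and np.all(l <= x) and np.all(x <= u))
```

Note that the unconstrained minimizer `b` itself violates the first knapsack constraint here, which is the situation the rest of the paper addresses.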

Since the objective functions and the feasible domain in problem P are all convex, the optimality condition for problem P can be characterized using KKT conditions. Let $\lambda =\left({\lambda }_{1},\cdots ,{\lambda }_{M}\right)$ , ${\lambda }_{j}\ge 0,\text{}j=1,\cdots ,M$ , be the Lagrange multiplier vector for the constraints given in Equation (2), and $w=\left({w}_{1},\cdots ,{w}_{N}\right)$ , ${w}_{i}\ge 0,i=1,\cdots ,N$ , $v=\left({v}_{1},\cdots ,{v}_{N}\right)$ , ${v}_{i}\ge 0,i=1,\cdots ,N$ be the Lagrange multiplier vectors for the constraints in Equation (3). Thus, the Lagrange function for problem P can be written as:

$L\left(x,\lambda ,w,\nu \right)=\underset{i=1}{\overset{N}{\sum }}{f}_{i}\left({x}_{i}\right)-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}\left({C}_{j}-\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}\right)-\underset{i=1}{\overset{N}{\sum }}{w}_{i}\left({x}_{i}-{l}_{i}\right)+\underset{i=1}{\overset{N}{\sum }}{v}_{i}\left({x}_{i}-{u}_{i}\right)$. (4)

Table 1. Notation.

Let ${g}_{i}\left({x}_{i}\right)=\text{d}{f}_{i}\left({x}_{i}\right)/\text{d}{x}_{i}$ , $i=1,\cdots ,N$. The KKT conditions for problem P can be summarized as the following proposition.

Proposition 1: The KKT conditions for problem P are:

${g}_{i}\left({x}_{i}\right)+\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}-{w}_{i}+{v}_{i}=0,\text{\hspace{0.17em}}i=1,\cdots ,N,$ (5)

$\underset{i=1}{\overset{N}{\sum }}{w}_{i}\left({x}_{i}-{l}_{i}\right)+\underset{i=1}{\overset{N}{\sum }}{v}_{i}\left({x}_{i}-{u}_{i}\right)=0,$ (6)

${\lambda }_{j}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}-{C}_{j}\right)=0,j=1,\cdots ,M.$ (7)

Since ${f}_{i}\left({x}_{i}\right)$ is convex in ${x}_{i}$ , ${g}_{i}\left({x}_{i}\right)$ is an increasing function of ${x}_{i}$. If ${g}_{i}\left(0\right)\le 0$ and $\underset{{x}_{i}\to +\infty }{\mathrm{lim}}{g}_{i}\left({x}_{i}\right)\ge 0$ , let ${\stackrel{¯}{x}}_{i}$ be the point satisfying ${g}_{i}\left({\stackrel{¯}{x}}_{i}\right)=0$. If ${g}_{i}\left(0\right)>0$ , we let ${\stackrel{¯}{x}}_{i}=0$ ; if $\underset{{x}_{i}\to +\infty }{\mathrm{lim}}{g}_{i}\left({x}_{i}\right)<0$ , we set ${\stackrel{¯}{x}}_{i}=+\infty$. Then ${\stackrel{¯}{x}}_{i}$ is the minimizer of the objective function in Equation (1) without any constraint. We summarize this as

$\begin{array}{l}{\stackrel{¯}{x}}_{i}=\mathrm{arg}\mathrm{min}\left\{{f}_{i}\left({x}_{i}\right),0\le {x}_{i}\le +\infty \right\}\\ =\left\{\begin{array}{l}0,\text{}\text{\hspace{0.17em}}\text{ }\text{ }\text{if}\text{\hspace{0.17em}}{g}_{i}\left(0\right)>0,\\ \mathrm{arg}\left\{{x}_{i}|{g}_{i}\left({x}_{i}\right)=0\right\},\text{if}\text{\hspace{0.17em}}{g}_{i}\left(0\right)\le 0\text{and}\underset{{x}_{i}\to +\infty }{\mathrm{lim}}{g}_{i}\left({x}_{i}\right)\ge 0,\\ +\infty ,\text{}\text{ }\text{if}\text{\hspace{0.17em}}\underset{{x}_{i}\to +\infty }{\mathrm{lim}}{g}_{i}\left({x}_{i}\right)<0.\end{array}\end{array}$ (8)
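Equation (8) can be evaluated numerically for any increasing derivative ${g}_{i}$. The following sketch is a hypothetical helper, with a large finite `x_hi` standing in for the limit as ${x}_{i}\to +\infty$, and bisection handling the interior case; it assumes ${g}$ is continuous.

```python
import math

def unconstrained_min(g, x_hi=1e12, tol=1e-10):
    """Evaluate Equation (8) for an increasing derivative g = f'.
    Returns 0 if g(0) > 0, +inf if g stays negative up to x_hi
    (a proxy for the limit), and otherwise the root of g found
    by bisection over [0, x_hi]."""
    if g(0.0) > 0.0:
        return 0.0
    if g(x_hi) < 0.0:
        return math.inf
    lo, hi = 0.0, x_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the quadratic family used later in the paper, $g(x)=2a(x-b)$, the routine returns $b$ whenever $b\ge 0$.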

3. Structural Properties and Solution Method

In this section, we first investigate the structural properties of the optimal solution to problem P. Then we develop a solution method based on the structural properties for solving problem P.

3.1. Structural Properties

We denote by problem PR the relaxation of problem P in which the constraints in Equation (2) are removed. By analyzing the solution to problem PR, we can construct the solution to problem P. Let ${\stackrel{^}{x}}_{i}$ ( $i=1,\cdots ,N$ ) be the optimal solution to problem PR; then it has the following property.

Proposition 2: The optimal solution to problem PR is ${\stackrel{^}{x}}_{i}=\mathrm{min}\left\{\mathrm{max}\left\{{\stackrel{¯}{x}}_{i},{l}_{i}\right\},{u}_{i}\right\}$.

If $\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{\stackrel{^}{x}}_{i}\le {C}_{j}$ holds for some $j=1,\cdots ,M$ , then the corresponding constraint in problem P is inactive and can be removed. In the following, without loss of generality, we assume that $\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{\stackrel{^}{x}}_{i}>{C}_{j}$ for all $j=1,\cdots ,M$. The KKT conditions in Equation (7) are met at either ${\lambda }_{j}=0$ or $\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}={C}_{j}$. The condition ${\lambda }_{j}=0$ implies that there is enough of resource j at the optimal solution, and hence the j-th constraint is inactive; $\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}={C}_{j}$ means that the j-th constraint is active, and the knapsack space of the j-th constraint is fully utilized at the optimal solution.

We denote by ${x}^{*}$ the optimal solution to problem P and by ${\lambda }^{*}$ the corresponding Lagrange multiplier vector. Let ${x}_{i}\left(\lambda \right)$ be a solution of the KKT conditions in Equation (5) and Equation (6). Denote ${h}_{i}\left(\cdot \right)={g}_{i}^{-1}\left(\cdot \right)$ ; then we have the following proposition.

Proposition 3. (a) ${x}_{i}\left(\lambda \right)=\mathrm{min}\left\{\mathrm{max}\left\{{h}_{i}\left(-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}\right),{l}_{i}\right\},{u}_{i}\right\}$ , $i=1,\cdots ,N$.

(b) If $\left(x\left(\lambda \right),\lambda \right)$ satisfies ${\lambda }_{j}=0$ or $\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}={C}_{j}$ , $j=1,\cdots ,M$ , then we have ${x}^{*}=x\left(\lambda \right)$.
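For a concrete special case, Proposition 3(a) reduces to a clipped inverse-derivative evaluation. The sketch below instantiates it for hypothetical quadratic objectives ${f}_{i}\left({x}_{i}\right)={a}_{i}{\left({x}_{i}-{b}_{i}\right)}^{2}$ , for which ${g}_{i}\left(x\right)=2{a}_{i}\left(x-{b}_{i}\right)$ and hence ${h}_{i}\left(y\right)={b}_{i}+y/\left(2{a}_{i}\right)$ in closed form.

```python
import numpy as np

def x_of_lambda(lam, a, b, c, l, u):
    """Proposition 3(a) for the quadratic family f_i = a_i (x_i - b_i)^2
    (an illustrative special case): h_i(y) = b_i + y / (2 a_i),
    clipped to the box [l_i, u_i].  c is N x M, lam has length M."""
    y = -(c @ lam)                      # -sum_j lambda_j c_{i,j}
    return np.clip(b + y / (2.0 * a), l, u)
```

For general convex ${f}_{i}$ the inverse ${h}_{i}$ would instead be evaluated numerically, e.g. by bisection on ${g}_{i}$.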

For any given ${\lambda }_{M}\ge 0$ , let $x\left({\lambda }_{M}\right)$ and ${\lambda }_{1},\cdots ,{\lambda }_{M-1}$ be the solution of Equations (5) and (6) together with ${\lambda }_{j}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}-{C}_{j}\right)=0$ , $j=1,\cdots ,M-1$. For ease of exposition, we denote problem P as $P\left(f,M\right)$ , where $f=\left({f}_{1},\cdots ,{f}_{N}\right)$ is the objective function vector. Problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ with ${\stackrel{^}{f}}_{i}\left({\lambda }_{M}\right)={f}_{i}+{\lambda }_{M}{c}_{i,M}{x}_{i}$ , $i=1,\cdots ,N$ , is an $\left(M-1\right)$-constraint problem with objective functions ${\stackrel{^}{f}}_{i}\left({\lambda }_{M}\right)$ and the first $M-1$ knapsack constraints of problem P.

By analyzing the structural properties of $x\left({\lambda }_{M}\right)$ and $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ , we can prove the following proposition.

Proposition 4. (a) If $\left(x\left({\lambda }_{M}\right),{\lambda }_{M}\right)$ satisfies ${\lambda }_{M}=0$ or $\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$ , then we have ${x}^{*}=x\left({\lambda }_{M}\right)$.

(b) $x\left({\lambda }_{M}\right)$ is the optimal solution to problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ with ${\stackrel{^}{f}}_{i}\left({\lambda }_{M}\right)={f}_{i}+{\lambda }_{M}{c}_{i,M}{x}_{i}$ , $i=1,\cdots ,N$.

From Proposition 4(a), we know that the optimal solution ${x}^{*}$ is obtained in two possible cases: 1) ${\lambda }_{M}=0$ , which means that the constraint $\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)\le {C}_{M}$ is not binding and can be removed from problem $P\left(f,M\right)$ ; therefore ${x}^{*}$ can be obtained by solving problem $P\left(f,M-1\right)$ , which has the same structure as problem $P\left(f,M\right)$ ; 2) $\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$ , which implies that $\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)\le {C}_{M}$ is an active constraint, and the optimal solution must be obtained at $\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$ with ${\lambda }_{M}>0$.

Since problem $P\left(f,M\right)$ can be solved by solving problem $P\left(f,M-1\right)$ in the case of ${\lambda }_{M}=0$ , in the following we study the case of ${\lambda }_{M}>0$. Proposition 4(b) indicates that problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ determines the optimal values of $x\left({\lambda }_{M}\right)$ and ${\lambda }_{j}$ , $j=1,\cdots ,M-1$. For any ${\lambda }_{M}>0$ , the $M-1$ resource constraints could be active or inactive, and the N decision variables could take bound or non-bound values.

If ${\lambda }_{j}>0$ , $j=1,\cdots ,M-1$ , constraint j is active; thus we denote by $J\left({\lambda }_{M}\right)=\left\{j|{\lambda }_{j}>0,j=1,\cdots ,M\right\}$ the active constraint set for the given ${\lambda }_{M}$. Note that $J\left({\lambda }_{M}\right)$ contains at least one active constraint when ${\lambda }_{M}>0$.

From Equation (5), we know ${x}_{i}\left({\lambda }_{M}\right)>{l}_{i}$ if $-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}>{g}_{i}\left({l}_{i}\right)$ , and ${x}_{i}\left({\lambda }_{M}\right)<{u}_{i}$ if $-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}<{g}_{i}\left({u}_{i}\right)$ , $i=1,\cdots ,N$. For the given ${\lambda }_{M}$ , we define the non-bound variable set $I\left({\lambda }_{M}\right)$ , and the lower and upper bound variable sets ${I}_{L}\left({\lambda }_{M}\right)$ and ${I}_{U}\left({\lambda }_{M}\right)$ , as

$I\left({\lambda }_{M}\right)=\left\{i|{g}_{i}\left({l}_{i}\right)<-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}<{g}_{i}\left({u}_{i}\right),i=1,\cdots ,N\right\}$ , (9)

${I}_{L}\left({\lambda }_{M}\right)=\left\{i|-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}\le {g}_{i}\left({l}_{i}\right),i=1,\cdots ,N\right\}$ , (10)

${I}_{U}\left({\lambda }_{M}\right)=\left\{i|-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}\ge {g}_{i}\left({u}_{i}\right),i=1,\cdots ,N\right\}$. (11)

Let $m=|J\left({\lambda }_{M}\right)|$ , $n=|I\left({\lambda }_{M}\right)|$ , ${n}_{L}=|{I}_{L}\left({\lambda }_{M}\right)|$ , and ${n}_{U}=|{I}_{U}\left({\lambda }_{M}\right)|$. For the given ${\lambda }_{M}>0$ , without changing the orders of indices j and i, we re-index the constraints in the active constraint set $J\left({\lambda }_{M}\right)$ as $j=1,\cdots ,m$ , and we re-index the variables in the non-bound variable set $I\left({\lambda }_{M}\right)$ as $i=1,\cdots ,n$ , and re-index the variables in ${I}_{L}\left({\lambda }_{M}\right)$ and ${I}_{U}\left({\lambda }_{M}\right)$ as $i=1,\cdots ,{n}_{L}$ , and $i=1,\cdots ,{n}_{U}$ , respectively. As a result, constraint M in the original problem is re-indexed as constraint m, and ${\lambda }_{M}$ is also restated as ${\lambda }_{m}$.

We define ${G}_{j}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)\equiv \underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}\left(\lambda \right)-{C}_{j}=0$ , $j=1,\cdots ,m-1$ , and substitute ${x}_{i}\left(\lambda \right)=\mathrm{min}\left\{\mathrm{max}\left\{{h}_{i}\left(-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}\right),{l}_{i}\right\},{u}_{i}\right\}$ into ${G}_{j}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)$ ; then we have

${G}_{j}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)\equiv {\sum }_{i=1}^{n}{c}_{i,j}{h}_{i}\left(-{\sum }_{s=1}^{m}{\lambda }_{s}{c}_{i,s}\right)-\left({C}_{j}-{\sum }_{i=1}^{{n}_{L}}{c}_{i,j}{l}_{i}-{\sum }_{i=1}^{{n}_{U}}{c}_{i,j}{u}_{i}\right)=0$. (12)

Taking the derivative of Equation (12), we get

$\begin{array}{c}\frac{\text{d}{G}_{j}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)}{\text{d}{\lambda }_{m}}=-\left[\underset{i=1}{\overset{n}{\sum }}\frac{{c}_{i,j}}{{k}_{i}\left({x}_{i}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)\right)}\underset{s=1}{\overset{m}{\sum }}\frac{\text{d}{\lambda }_{s}}{\text{d}{\lambda }_{m}}{c}_{i,s}\right]\\ =-\underset{s=1}{\overset{m}{\sum }}\underset{i=1}{\overset{n}{\sum }}\frac{{c}_{i,j}{c}_{i,s}}{{k}_{i}\left({x}_{i}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)\right)}\frac{\text{d}{\lambda }_{s}}{\text{d}{\lambda }_{m}}\\ =0,\text{\hspace{0.17em}}j=1,\cdots ,m-1\end{array}$ , (13)

where ${k}_{i}\left({x}_{i}\right)=\text{d}{g}_{i}\left({x}_{i}\right)/\text{d}{x}_{i}$.

Since ${f}_{i}\left({x}_{i}\right)$ , $i=1,\cdots ,n$ , are differentiable and convex, ${g}_{i}\left({x}_{i}\right)$ is increasing and ${k}_{i}\left({x}_{i}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)\right)>0$. Note that ${\stackrel{^}{f}}_{i}\left({\lambda }_{M}\right)={f}_{i}+{\lambda }_{M}{c}_{i,M}{x}_{i}$ has the same structure as ${f}_{i}\left({x}_{i}\right)$. We therefore define

${\rho }_{i}=\frac{1}{{k}_{i}\left({x}_{i}\left({\lambda }_{1},\cdots ,{\lambda }_{m}\right)\right)}>0$ , $i=1,\cdots ,n$ , and ${a}_{js}={\sum }_{i=1}^{n}{\rho }_{i}{c}_{i,j}{c}_{i,s}$ , $j,s=1,\cdots ,m$ ; then Equation (13) can be rewritten in matrix form:

$\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \cdots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \cdots & {a}_{2m}\\ ⋮& ⋮& \ddots & ⋮\\ {a}_{\left(m-1\right)1}& {a}_{\left(m-1\right)2}& \cdots & {a}_{\left(m-1\right)m}\end{array}\right)\left(\begin{array}{c}\text{d}{\lambda }_{1}/\text{d}{\lambda }_{m}\\ ⋮\\ \text{d}{\lambda }_{m-1}/\text{d}{\lambda }_{m}\\ 1\end{array}\right)=\left(\begin{array}{c}0\\ 0\\ ⋮\\ 0\end{array}\right).$ (14)

In order to solve $\frac{\text{d}{\lambda }_{j}}{\text{d}{\lambda }_{m}}$ , $j=1,\cdots ,m-1$ , from Equation (14), we further define

${H}_{m}=|\begin{array}{cccc}{a}_{11}& {a}_{12}& \cdots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \cdots & {a}_{2m}\\ ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& \cdots & {a}_{mm}\end{array}|$ , (15)

and denote by ${H}_{j\left(m-1\right)}$ , $j=1,\cdots ,m-1$ , the $\left(m-1\right)$-dimensional determinant in which the j-th column of ${H}_{m-1}$ is replaced by ${\left({a}_{1m},{a}_{2m},\cdots ,{a}_{\left(m-1\right)m}\right)}^{\text{T}}$. From Equation (14) and Equation (15), we have the following formula:

$\frac{\text{d}{\lambda }_{j}}{\text{d}{\lambda }_{m}}=-\frac{{H}_{j\left(m-1\right)}}{{H}_{m-1}},\text{}j=1,\cdots ,m-1,\text{}m>1$. (16)
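Numerically, the determinant ratios in Equation (16) are exactly what Cramer's rule yields for the first $m-1$ rows of Equation (14), so they can be obtained by one linear solve. A sketch (the function name is illustrative):

```python
import numpy as np

def multiplier_derivatives(A):
    """Solve Equation (14) for d(lambda_j)/d(lambda_m), j = 1..m-1.
    A is the full m x m matrix (a_{js}); its last column multiplies the
    fixed entry d(lambda_m)/d(lambda_m) = 1, so the system becomes
    A[:m-1, :m-1] d = -A[:m-1, m-1].  The result equals the
    Cramer's-rule ratios -H_{j(m-1)} / H_{m-1} of Equation (16)."""
    m = A.shape[0]
    return np.linalg.solve(A[:m - 1, :m - 1], -A[:m - 1, m - 1])
```

For $m=2$ this collapses to $\text{d}{\lambda }_{1}/\text{d}{\lambda }_{2}=-{a}_{12}/{a}_{11}$.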

Notice that the above results have structures similar to the results in Zhang. In a similar way, we can prove that

$\begin{array}{l}\frac{\text{d}\underset{i=1}{\overset{n}{\sum }}{c}_{i,m}{x}_{i}\left({\lambda }_{m}\right)}{\text{d}{\lambda }_{m}}=-\underset{i=1}{\overset{n}{\sum }}{\rho }_{i}{c}_{i,m}\left(\underset{j=1}{\overset{m-1}{\sum }}{c}_{i,j}\frac{\text{d}{\lambda }_{j}}{\text{d}{\lambda }_{m}}+{c}_{i,m}\right)\\ =-\underset{i=1}{\overset{n}{\sum }}{\rho }_{i}{c}_{i,m}\left(-\underset{j=1}{\overset{m-1}{\sum }}{c}_{i,j}\frac{{H}_{j\left(m-1\right)}}{{H}_{m-1}}+{c}_{i,m}\right)=-\frac{{H}_{m}}{{H}_{m-1}}<0\end{array}$. (17)

Since constraint M in the original problem is re-indexed as constraint m and ${\lambda }_{M}$ is restated as ${\lambda }_{m}$ , ${\sum }_{i=1}^{n}{c}_{i,m}{x}_{i}\left({\lambda }_{m}\right)$ is equivalent to ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)$ in problem P with the original indices; thus we know that ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)$ is decreasing in ${\lambda }_{M}$.

Therefore, there are three possible cases: 1) When ${\lambda }_{M}=0$ , we get the optimal solution to problem $P\left(f,M\right)$ by solving problem $P\left(f,M-1\right)$ ; 2) If ${\lambda }_{M}>0$ and $m=1$ , we obtain the optimal solution to problem $P\left(f,M\right)$ by setting ${x}_{i}\left({\lambda }_{M}\right)=\mathrm{min}\left\{\mathrm{max}\left\{{h}_{i}\left(-{\lambda }_{M}{c}_{i,M}\right),{l}_{i}\right\},{u}_{i}\right\}$ ; 3) When ${\lambda }_{M}>0$ and $m>1$ , we can solve problem $P\left(f,M\right)$ by studying problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ , with ${\stackrel{^}{f}}_{i}\left({\lambda }_{M}\right)={f}_{i}+{\lambda }_{M}{c}_{i,M}{x}_{i}$.

3.2. Solution Method

According to Proposition 3, we can find ${x}^{*}$ by searching for the optimal value of $\lambda$. Before presenting the solution method, we first study the bounds for $\lambda$ : the lower bound is 0, and the upper bound is given in the following proposition.

Proposition 5. The upper bound of ${\lambda }_{M}$ is $\mathrm{max}\left(0,{\mathrm{max}}_{i=1,\cdots ,N}\left\{-{g}_{i}\left({l}_{i}\right)/{c}_{i,M}\right\}\right)$.

From Proposition 4, we get the optimal value of ${x}^{*}$ if the optimal solution $x\left({\lambda }_{M}\right)$ to problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ satisfies

${\lambda }_{M}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)-{C}_{M}\right)=0$.

Since ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)$ is decreasing in ${\lambda }_{M}$ , the optimal solution can be found by a binary search over $\left[0,\mathrm{max}\left(0,{\mathrm{max}}_{i=1,\cdots ,N}\left\{-{g}_{i}\left({l}_{i}\right)/{c}_{i,M}\right\}\right)\right]$. Since problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ has the same structure as problem $P\left(f,M\right)$ , we can use a multi-tier binary search method to solve problem P. The main steps of the multi-tier binary search method are given in Algorithm 1.

Algorithm 1: $\text{SolveP}\left(f,M\right)$

Step 1: If $M=0$ , then let ${x}_{i}^{*}=\mathrm{min}\left\{\mathrm{max}\left\{\mathrm{arg}\left\{{x}_{i}|{g}_{i}\left({x}_{i}\right)=0\right\},{l}_{i}\right\},{u}_{i}\right\}$ , stop;

Step 2: Let ${\lambda }_{M}^{L}=0$ , ${\lambda }_{M}^{U}=\mathrm{max}\left(0,{\mathrm{max}}_{i=1,\cdots ,N}\left\{-{g}_{i}\left({l}_{i}\right)/{c}_{i,M}\right\}\right)$ ;

Step 3: Let ${\lambda }_{M}=\left({\lambda }_{M}^{L}+{\lambda }_{M}^{U}\right)/2$ ;

Step 4: If ${\lambda }_{M}=0$ , then let ${x}_{i}^{*}=\text{SolveP}\left(f,M-1\right)$ and ${\lambda }_{M}^{*}=0$ , stop;

Step 5: If $M=1$ , then let ${x}_{i}\left({\lambda }_{M}\right)=\mathrm{min}\left\{\mathrm{max}\left\{{h}_{i}\left(-{\lambda }_{M}{c}_{i,M}\right),{l}_{i}\right\},{u}_{i}\right\}$ ;

If $M>1$ , then let ${x}_{i}\left({\lambda }_{M}\right)=\text{SolveP}\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$

Step 6: If ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)>{C}_{M}$ , then let ${\lambda }_{M}^{L}={\lambda }_{M}$ , go to Step 3;

If ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)<{C}_{M}$ , then let ${\lambda }_{M}^{U}={\lambda }_{M}$ , go to Step 3;

Step 7: Let ${x}_{i}^{*}={x}_{i}\left({\lambda }_{M}\right)$ and ${\lambda }_{M}^{*}={\lambda }_{M}$ , stop.
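Algorithm 1 can be sketched in code. The version below is a hypothetical implementation, not the authors': it specializes to quadratic objectives ${f}_{i}\left({x}_{i}\right)={a}_{i}{\left({x}_{i}-{b}_{i}\right)}^{2}$ so that ${h}_{i}$ has a closed form, and it replaces the exact termination tests of Steps 4 and 7 with a tolerance-based loop. The accumulated linear shift `s` plays the role of $\underset{}{\sum }{\lambda }_{j}{c}_{i,j}$ from outer tiers when forming $\stackrel{^}{f}$.

```python
import numpy as np

def solve_p(a, b, c, C, l, u, s=None, tol=1e-10):
    """Multi-tier binary search sketch of Algorithm 1 for the quadratic
    family f_i(x) = a_i (x - b_i)^2, a_i > 0.  c is an N x M matrix of
    resource coefficients, C the capacities; s accumulates
    lambda_j * c_{i,j} from outer tiers (the shift defining f-hat)."""
    a, b, l, u = map(np.asarray, (a, b, l, u))
    c = np.asarray(c, dtype=float)
    C = np.atleast_1d(np.asarray(C, dtype=float))
    if s is None:
        s = np.zeros(len(a))
    M = c.shape[1]
    if M == 0:                          # Step 1: no knapsack constraints
        # g_i(x) = 2 a_i (x - b_i) + s_i = 0  =>  x = b_i - s_i / (2 a_i)
        return np.clip(b - s / (2.0 * a), l, u)
    cM, CM = c[:, -1], C[-1]
    # Case lambda_M = 0 (Step 4): constraint M inactive, drop it.
    x0 = solve_p(a, b, c[:, :-1], C[:-1], l, u, s, tol)
    if cM @ x0 <= CM + tol:
        return x0
    # Steps 2-7: binary search over [0, upper bound of Proposition 5].
    g_l = 2.0 * a * (l - b) + s         # g_i(l_i) for the shifted objective
    lam_lo, lam_hi = 0.0, max(0.0, float(np.max(-g_l / cM)))
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        x = solve_p(a, b, c[:, :-1], C[:-1], l, u, s + lam * cM, tol)
        if cM @ x > CM:                 # Step 6: resource M over-used
            lam_lo = lam
        else:
            lam_hi = lam
    # Return the feasible-side solution at the converged multiplier.
    return solve_p(a, b, c[:, :-1], C[:-1], l, u, s + lam_hi * cM, tol)
```

In the two-variable, two-constraint test below, the second capacity determines whether constraint 2 is inactive (optimum $\left(6,6\right)$) or active (optimum $\left(14/3,20/3\right)$), exercising both branches.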

In the algorithm, we first solve the unconstrained problem with bounded variables (Step 1) to obtain ${x}^{*}$. If the constraints are active, we apply the binary search procedure (Steps 2 - 7) over the interval $\left[{\lambda }_{M}^{L},{\lambda }_{M}^{U}\right]$ to determine ${\lambda }_{M}^{*}$. The binary search terminates when either ${\lambda }_{M}=0$ or ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$. If the constraint ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)\le {C}_{M}$ is not binding, the iteration ends in Step 4 with ${\lambda }_{M}=0$ , and we obtain the optimal solution ${x}_{i}^{*}$ by solving problem $P\left(f,M-1\right)$. If the constraint is active, the procedure stops at Step 7 with ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$. Step 5 derives ${x}_{i}\left({\lambda }_{M}\right)$ by solving problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ with ${\stackrel{^}{f}}_{i}\left({\lambda }_{M}\right)={f}_{i}+{\lambda }_{M}{c}_{i,M}{x}_{i}$ for the given ${\lambda }_{M}>0$. If $M=1$ , problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ has no knapsack constraint, and hence ${x}_{i}\left({\lambda }_{M}\right)=\mathrm{min}\left\{\mathrm{max}\left\{{h}_{i}\left(-{\lambda }_{M}{c}_{i,M}\right),{l}_{i}\right\},{u}_{i}\right\}$. If $M>1$ , problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ has the same structure as problem $P\left(f,M\right)$ , and hence the algorithm calls itself recursively to solve it.

The algorithm is a recursive algorithm with M tiers of binary search loop. The computational complexity of M-tier binary search procedure is $O\left({\left({\mathrm{log}}_{2}\left(1/\epsilon \right)\right)}^{M}\right)$ , where $\epsilon$ is the error target for the binary search. The computational complexity of the last recursive step is $O\left(N\right)$. Therefore, the proposed algorithm has the computational complexity $O\left({\left({\mathrm{log}}_{2}\left(1/\epsilon \right)\right)}^{M}N\right)$ , which is polynomial in the number of decision variables N.

4. Numerical Study

The solution method developed in this paper can be used to solve continuous nonlinear multidimensional knapsack problems with general structure, so the many application problems with different objective functions summarized in Zhang and Hua, equipped with multiple constraints, can be used to demonstrate the application of our method.

In our numerical study, we first show the application of our method using two examples: a quadratic multidimensional knapsack (QMK) problem and the production planning problem presented in Bretthauer and Shetty. We then use a statistical study to show the efficiency of our method. All computational experiments are conducted on a laptop (dual processor, 2.00 GHz, 2.96 GB memory) with Matlab R2011a.

4.1. The Illustrative Examples

The first illustrative example is a separable quadratic knapsack problem. We set the objective functions as ${f}_{i}\left({x}_{i}\right)={a}_{i}{\left({x}_{i}-{b}_{i}\right)}^{2}$ , ${a}_{i}>0,i=1,\cdots ,N$. It has two resource constraints: C1 = 12,000 and C2 = 10,000. Table 2 gives the relevant information for this example; ${x}_{i}^{*}$ is the optimal solution obtained by applying our algorithm. To show the efficiency of our method, Figure 1 plots the values of ${\lambda }_{M}^{L},{\lambda }_{M}^{U}$ and ${\lambda }_{M}$ during the iteration process for this example; it shows that our algorithm solves the problem within very few iterations.

In the second example, we solve the production planning problem in Bretthauer and Shetty. The objective functions are set as

${f}_{i}\left({x}_{i}\right)={h}_{i}+{d}_{i}{x}_{i}+\frac{{e}_{i}}{{x}_{i}}$ ,

$i=1,\cdots ,N$. There are three resource constraints: C1 = 200, C2 = 300, and C3 = 500. We use the same parameters as in Bretthauer and Shetty. The relevant

Table 2. Parameters and solution for the first example.

Figure 1. ${\lambda }_{M}^{L},{\lambda }_{M}^{U},{\lambda }_{M}$ in the iteration process for solving the first example.

information for this example is listed in Table 3. ${x}_{i}^{*}$ is the optimal solution obtained by applying our algorithm.
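For this objective family, ${g}_{i}$ and its inverse ${h}_{i}$ have closed forms, which is what Step 5 of Algorithm 1 needs: $g\left(x\right)=d-e/{x}^{2}$ (the fixed cost $h$ drops out), which is increasing for $x>0$ when $e>0$. The following sketch uses illustrative parameter values, not the paper's data.

```python
import math

def g(x, d, e):
    """Derivative of f(x) = h + d*x + e/x; the constant h vanishes.
    Increasing on x > 0 for e > 0 (f''(x) = 2e/x^3 > 0)."""
    return d - e / (x * x)

def h_inv(y, d, e):
    """g^{-1}(y) for y < d: solve d - e/x^2 = y  =>  x = sqrt(e/(d-y))."""
    return math.sqrt(e / (d - y))
```

With $d=30$ and $e=120$, the unconstrained minimizer is $\sqrt{e/d}=2$.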

4.2. The Statistical Results

In this subsection, we present two numerical experiments to show the effectiveness of our method for problems of different scales and with different objective functions. In the first experiment, the parameters of the QMK problems are all randomly generated. We use the notation $z~U\left(\alpha ,\beta \right)$ to denote that z is drawn uniformly from $\left[\alpha ,\beta \right]$. The parameters of the QMK instances are generated as follows: ${a}_{i}~U\left(1,2\right)$ , ${b}_{i}~U\left(5,10\right)$ , ${c}_{i,j}~U\left(1,10\right)$ , ${l}_{i}~U\left(5,15\right)$ , ${u}_{i}~U\left(20,30\right)$ and ${C}_{j}~N×U\left(\text{100000},\text{200000}\right)$ , for $i=1,\cdots ,N;j=1,\cdots ,M$.
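The generator above can be reproduced in a few lines. This is a hypothetical reproduction with an arbitrary seed; the paper does not specify an RNG.

```python
import numpy as np

rng = np.random.default_rng(0)              # arbitrary seed
N, M = 100, 2
a = rng.uniform(1, 2, N)                    # a_i ~ U(1, 2)
b = rng.uniform(5, 10, N)                   # b_i ~ U(5, 10)
c = rng.uniform(1, 10, (N, M))              # c_{i,j} ~ U(1, 10)
l = rng.uniform(5, 15, N)                   # l_i ~ U(5, 15)
u = rng.uniform(20, 30, N)                  # u_i ~ U(20, 30)
C = N * rng.uniform(100_000, 200_000, M)    # C_j ~ N x U(100000, 200000)
```

Note that the bound distributions guarantee ${l}_{i}<{u}_{i}$ for every instance, since $U\left(5,15\right)$ and $U\left(20,30\right)$ do not overlap.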

In this experiment, we set up problems of four sizes: M = 4, N = 10; M = 2, N = 100; M = 3, N = 100; and M = 2, N = 1000. For each problem size, 50 test instances are randomly generated. The statistical results on the number of iterations and the computation time (in seconds) are reported in Table 4.

In the second experiment, we solve the production planning problem with randomly generated parameters. The parameters of the instances are generated as follows: ${d}_{i}~U\left(30,50\right)$ , ${e}_{i}~U\left(100,200\right)$ , ${c}_{i,j}~U\left(10,50\right)$ , ${l}_{i}~U\left(1,5\right)$ , ${u}_{i}~U\left(20,30\right)$ and ${C}_{j}~N×U\left(\text{100000},\text{200000}\right)$ , for $i=1,\cdots ,N;j=1,\cdots ,M$.

In this experiment, we use the same four problem sizes: M = 4, N = 10; M = 2, N = 100; M = 3, N = 100; and M = 2, N = 1000. For each problem size, we randomly generate 50 test instances. The statistical results on the number of iterations and the computation time (in seconds) are presented in Table 5.

From Table 4 and Table 5, we observe that the standard deviations of the number of iterations and of the computation times are quite low, which implies that our method is effective across different objective functions. We also observe that the

Table 3. Parameters and solutions for the second example.

Table 4. Statistical results for randomly generated QMK problems.

Table 5. Statistical results for randomly generated production planning problems.

computation time is more sensitive to the number of resource constraints than to the number of variables. Since application problems often have many more variables than knapsack constraints, our algorithm is useful in practice.

5. Conclusions

In this paper, we study a class of continuous separable nonlinear multidimensional knapsack problems. By analyzing the structural properties of the optimal solution, we develop a multi-tier binary search method. The proposed method has the following advantages: 1) it is applicable to nonlinear multidimensional knapsack problems with general structure; 2) its computational complexity is polynomial in the number of variables.

This research can be extended in several ways. One is to study non-separable multidimensional knapsack problems using a similar idea. Another is to develop exact solution methods or heuristics for integer multidimensional knapsack problems based on our method. Finally, the idea used in this study can be extended to investigate other complex optimization problems with multiple constraints.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 71672199).

Appendix

A.1 Proof of Proposition 2

Recall that $0\le {l}_{i}<{u}_{i}$ for all $i=1,\cdots ,N$. The optimal solution to problem PR must satisfy Equation (1) and Equation (3). If ${l}_{i}\le {\stackrel{¯}{x}}_{i}\le {u}_{i}$ , the bound constraints are inactive, and therefore ${\stackrel{^}{x}}_{i}={\stackrel{¯}{x}}_{i}$. Since ${g}_{i}\left({x}_{i}\right)$ is increasing in ${x}_{i}$ and ${g}_{i}\left({\stackrel{¯}{x}}_{i}\right)=0$ , we have ${g}_{i}\left({x}_{i}\right)\ge 0$ whenever ${x}_{i}>{\stackrel{¯}{x}}_{i}$. Hence, if ${\stackrel{¯}{x}}_{i}<{l}_{i}<{u}_{i}$ , then ${g}_{i}\left({x}_{i}\right)\ge 0$ for all ${l}_{i}\le {x}_{i}\le {u}_{i}$ , so ${f}_{i}$ is nondecreasing on $\left[{l}_{i},{u}_{i}\right]$. Thus for any ${x}_{i}\in \left[{l}_{i},{u}_{i}\right]$ , we have ${f}_{i}\left({l}_{i}\right)\le {f}_{i}\left({x}_{i}\right)$ , and ${\stackrel{^}{x}}_{i}={l}_{i}$. If ${l}_{i}<{u}_{i}<{\stackrel{¯}{x}}_{i}$ , then ${\stackrel{^}{x}}_{i}={u}_{i}$ , by an argument symmetric to the case ${\stackrel{¯}{x}}_{i}<{l}_{i}$.
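In computational terms, the case analysis in this proof amounts to projecting the unconstrained minimizer onto the box constraints. A minimal sketch (the helper name is ours, not from the paper):

```python
def clamp(x_bar, l, u):
    """Proposition 2 as computation: the minimizer of a convex f_i over
    [l_i, u_i] is the unconstrained minimizer x_bar_i clamped to the box."""
    return [min(max(xb, li), ui) for xb, li, ui in zip(x_bar, l, u)]
```

For example, with lower bounds 1 and upper bounds 5, unconstrained minimizers below 1 are pegged to the lower bound and those above 5 to the upper bound, while interior minimizers are kept unchanged.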

A.2 Proof of Proposition 3

1) If ${g}_{i}\left({l}_{i}\right)\le -{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}\le {g}_{i}\left({u}_{i}\right)$ , then we have ${l}_{i}\le {h}_{i}\left(-{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}\right)\le {u}_{i}$ , and ${w}_{i}={v}_{i}=0$ , which implies ${x}_{i}\left(\lambda \right)={h}_{i}\left(-{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}\right)$. If $-{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}<{g}_{i}\left({l}_{i}\right)$ , then we have ${g}_{i}\left({x}_{i}\right)+{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}\ge {g}_{i}\left({l}_{i}\right)+{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}>0$ , and hence ${w}_{i}>0$ , ${x}_{i}\left(\lambda \right)={l}_{i}$.

If $-{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}>{g}_{i}\left({u}_{i}\right)$ , we have ${g}_{i}\left({x}_{i}\right)+{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}\le {g}_{i}\left({u}_{i}\right)+{\sum }_{j=1}^{M}{\lambda }_{j}{c}_{i,j}<0$ , which means ${v}_{i}>0$ , and ${x}_{i}\left(\lambda \right)={u}_{i}$. Therefore, we have

${x}_{i}\left(\lambda \right)=\left\{\begin{array}{l}{l}_{i},\text{if}\text{\hspace{0.17em}}-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}<{g}_{i}\left({l}_{i}\right),\\ {h}_{i}\left(-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}\right),\text{if}\text{\hspace{0.17em}}{g}_{i}\left({l}_{i}\right)\le -\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}\le {g}_{i}\left({u}_{i}\right),\\ {u}_{i},\text{if}\text{\hspace{0.17em}}-\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}>{g}_{i}\left({u}_{i}\right).\end{array}$ (A1)
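Equation (A1) can be evaluated directly once ${g}_{i}$ and its inverse ${h}_{i}$ are available. A hedged sketch (the callables `g_i` and `h_i` must be supplied by the user for the specific problem; the function name is ours):

```python
def x_i_of_lambda(lam, c_i, g_i, h_i, l_i, u_i):
    """Equation (A1): the minimizer x_i(lambda) for one variable.

    lam: multipliers lambda_1..lambda_M; c_i: coefficients c_{i,1}..c_{i,M};
    g_i: derivative of f_i (increasing); h_i: inverse function of g_i.
    """
    t = -sum(lj * cij for lj, cij in zip(lam, c_i))  # t = -sum_j lambda_j c_{i,j}
    if t < g_i(l_i):
        return l_i          # peg to lower bound
    if t > g_i(u_i):
        return u_i          # peg to upper bound
    return h_i(t)           # interior solution
```

For instance, with ${f}_{i}\left(x\right)={x}^{2}-6x$ we have ${g}_{i}\left(x\right)=2x-6$ and ${h}_{i}\left(t\right)=\left(t+6\right)/2$; with $\lambda =0$ and bounds $\left[0,5\right]$ the interior solution $x=3$ is returned.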

2) ${\lambda }_{j}=0$ or ${\sum }_{i=1}^{N}{c}_{i,j}{x}_{i}\left(\lambda \right)={C}_{j}$ implies

${\lambda }_{j}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}\left(\lambda \right)-{C}_{j}\right)=0,j=1,\cdots ,M$.

Because $x\left(\lambda \right)$ satisfies Equation (5) and Equation (6), and the condition above yields the complementary slackness condition, $x\left(\lambda \right)$ satisfies all KKT conditions. Therefore, ${x}^{*}=x\left(\lambda \right)$ if $\left(x\left(\lambda \right),\lambda \right)$ satisfies ${\lambda }_{j}=0$ or ${\sum }_{i=1}^{N}{c}_{i,j}{x}_{i}={C}_{j}$ , $j=1,\cdots ,M$.

A.3 Proof of Proposition 4

1) ${\lambda }_{M}=0$ or ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$ implies ${\lambda }_{M}\left({\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}-{C}_{M}\right)=0$. Since $\left(x\left({\lambda }_{M}\right),{\lambda }_{M}\right)$ satisfies Equation (5) and Equation (6), it satisfies all KKT conditions. Therefore, ${x}^{*}=x\left({\lambda }_{M}\right)$ if $\left(x\left({\lambda }_{M}\right),{\lambda }_{M}\right)$ satisfies ${\lambda }_{M}=0$ or ${\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}\left({\lambda }_{M}\right)={C}_{M}$.

2) KKT conditions for problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$ are

$\frac{\text{d}{\stackrel{^}{f}}_{i}\left({x}_{i}\right)}{\text{d}{x}_{i}}+\underset{j=1}{\overset{M-1}{\sum }}{\lambda }_{j}{c}_{i,j}-{w}_{i}+{v}_{i}=0,i=1,\cdots ,N,$ (A2)

$\underset{i=1}{\overset{N}{\sum }}{w}_{i}\left({x}_{i}-{l}_{i}\right)+\underset{i=1}{\overset{N}{\sum }}{v}_{i}\left({x}_{i}-{u}_{i}\right)=0,$ (A3)

${\lambda }_{j}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,j}{x}_{i}-{C}_{j}\right)=0,j=1,\cdots ,M-1$. (A4)

Notice that ${\stackrel{^}{f}}_{i}\left({x}_{i}\right)$ is a parameter-adjusted version of ${f}_{i}\left({x}_{i}\right)$ , with ${\stackrel{^}{f}}_{i}\left({x}_{i}\right)={f}_{i}\left({x}_{i}\right)+{\lambda }_{M}{c}_{i,M}{x}_{i}$. The conditions in Equations (A2)-(A4) are the same as the KKT conditions given in Equations (5)-(7) without ${\lambda }_{M}\left({\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}-{C}_{M}\right)=0$. Since $x\left({\lambda }_{M}\right)$ satisfies the KKT conditions in Equations (5)-(7) without ${\lambda }_{M}\left({\sum }_{i=1}^{N}{c}_{i,M}{x}_{i}-{C}_{M}\right)=0$ , it must be the optimal solution to problem $P\left(\stackrel{^}{f}\left({\lambda }_{M}\right),M-1\right)$.
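The reduction in this step is concrete: fixing ${\lambda }_{M}$ shifts each objective derivative by a constant, leaving a problem with one fewer knapsack constraint. A sketch under our own naming (not the paper's implementation):

```python
def adjusted_derivatives(g, lam_M, c_M):
    """Proposition 4: with lambda_M fixed, problem P(f_hat(lambda_M), M-1)
    has objective derivatives g_hat_i(x) = g_i(x) + lambda_M * c_{i,M};
    the Mth constraint drops out of the reduced problem."""
    # default arguments bind gi and ci per closure, avoiding the
    # late-binding pitfall of Python lambdas in a loop
    return [lambda x, gi=gi, ci=ci: gi(x) + lam_M * ci
            for gi, ci in zip(g, c_M)]
```

The reduced problem can then be solved recursively with the same single-constraint machinery applied to the adjusted derivatives.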

A.4 Proof of Proposition 5

Let ${\stackrel{¯}{\lambda }}_{M}=\mathrm{max}\left(0,{\mathrm{max}}_{i=1,\cdots ,N}\left\{-{g}_{i}\left({l}_{i}\right)/{c}_{i,M}\right\}\right)$. If ${\lambda }_{M}^{*}>{\stackrel{¯}{\lambda }}_{M}$ , then we have ${\lambda }_{M}^{*}{c}_{i,M}>-{g}_{i}\left({l}_{i}\right),i=1,\cdots ,N$. From Equation (5), we have

${w}_{i}^{*}={g}_{i}\left({x}_{i}\right)+\underset{j=1}{\overset{M}{\sum }}{\lambda }_{j}{c}_{i,j}+{v}_{i}\ge {g}_{i}\left({x}_{i}\right)+{\lambda }_{M}^{*}{c}_{i,M}+{v}_{i}>0,\text{}i=1,\cdots ,N$. (A5)

Since ${w}_{i}^{*}>0$ , from Equation (6), we know ${x}_{i}^{*}={l}_{i}$ and ${v}_{i}^{*}=0$. Thus, we have

${\lambda }_{M}^{*}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}^{*}-{C}_{M}\right)={\lambda }_{M}^{*}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{l}_{i}-{C}_{M}\right)\ne 0$. (A6)

Equation (A6) violates the complementary slackness condition ${\lambda }_{M}\left(\underset{i=1}{\overset{N}{\sum }}{c}_{i,M}{x}_{i}-{C}_{M}\right)=0$ in Equation (7). Therefore, we must have ${\lambda }_{M}^{*}\le {\stackrel{¯}{\lambda }}_{M}$.
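The bound ${\stackrel{¯}{\lambda }}_{M}$ provides a finite interval $\left[0,{\stackrel{¯}{\lambda }}_{M}\right]$ that brackets the search for the Mth multiplier. A minimal sketch, assuming ${c}_{i,M}>0$ as in the knapsack setting (function name ours):

```python
def lambda_M_upper_bound(g, l, c_M):
    """Proposition 5: lambda_bar_M = max(0, max_i {-g_i(l_i) / c_{i,M}})
    bounds the optimal multiplier lambda*_M from above, so bisection
    on lambda_M can start from the interval [0, lambda_bar_M]."""
    return max(0.0, max(-gi(li) / ci for gi, li, ci in zip(g, l, c_M)))
```

For example, with derivatives ${g}_{1}\left(x\right)=2x-6$ and ${g}_{2}\left(x\right)=x-1$ , lower bounds ${l}_{i}=0$ , and coefficients ${c}_{1,M}=2$ , ${c}_{2,M}=1$ , the bound is $\mathrm{max}\left(0,\mathrm{max}\left(3,1\right)\right)=3$.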

Conflicts of Interest

The authors declare no conflicts of interest.

References

Bonnans, J.F. (1994) Local Analysis of Newton-Type Methods for Variational Inequalities and Nonlinear Programming. Applied Mathematics and Optimization, 29, 161-186. https://doi.org/10.1007/BF01204181

Coleman, T.F. and Li, Y. (1994) On the Convergence of Interior-Reflective Newton Methods for Nonlinear Minimization Subject to Bounds. Mathematical Programming, 67, 189-224. https://doi.org/10.1007/BF01582221

Van Hentenryck, P., Michel, L. and Benhamou, F. (1998) Constraint Programming over Nonlinear Constraints. Science of Computer Programming, 30, 83-118. https://doi.org/10.1016/S0167-6423(97)00008-7

Borchers, B. and Mitchell, J.E. (1994) An Improved Branch and Bound Algorithm for Mixed Integer Nonlinear Programs. Computers & Operations Research, 21, 359-367. https://doi.org/10.1016/0305-0548(94)90024-8

Leyffer, S. (2001) Integrating SQP and Branch-and-Bound for Mixed Integer Nonlinear Programming. Computational Optimization and Applications, 18, 295-309. https://doi.org/10.1023/A:1011241421041

Byrd, R.H., Hribar, M.E. and Nocedal, J. (1999) An Interior Point Algorithm for Large-Scale Nonlinear Programming. SIAM Journal on Optimization, 9, 877-900. https://doi.org/10.1137/S1052623497325107

Benson, H.Y., Vanderbei, R.J. and Shanno, D.F. (2002) Interior-Point Methods for Nonconvex Nonlinear Programming: Filter Methods and Merit Functions. Computational Optimization and Applications, 23, 257-272. https://doi.org/10.1023/A:1020533003783

Spellucci, P. (1998) An SQP Method for General Nonlinear Programs Using only Equality Constrained Subproblems. Mathematical Programming, 82, 413-448. https://doi.org/10.1007/BF01580078

Zhu, Z. and Zhang, K. (2004) A New SQP Method of Feasible Directions for Nonlinear Programming. Applied Mathematics and Computation, 148, 121-134. https://doi.org/10.1016/S0096-3003(02)00832-9

Nie, P.Y. and Ma, C.F. (2006) A Trust Region Filter Method for General Non-Linear Programming. Applied Mathematics and Computation, 172, 1000-1017. https://doi.org/10.1016/j.amc.2005.03.004

Nie, P.Y. (2007) Sequential Penalty Quadratic Programming Filter Methods for Nonlinear Programming. Nonlinear Analysis: Real World Applications, 8, 118-129. https://doi.org/10.1016/j.nonrwa.2005.06.003

Bretthauer, K.M. and Shetty, B. (2002b) The Nonlinear Knapsack Problem - Algorithms and Applications. European Journal of Operational Research, 138, 459-472. https://doi.org/10.1016/S0377-2217(01)00179-5

Bretthauer, K.M. and Shetty, B. (1995) The Nonlinear Resource Allocation Problem. Operations Research, 43, 670-683. https://doi.org/10.1287/opre.43.4.670

Kodialam, M.S. and Luss, H. (1998) Algorithms for Separable Nonlinear Resource Allocation Problems. Operations Research, 46, 272-284. https://doi.org/10.1287/opre.46.2.272

Bretthauer, K.M. and Shetty, B. (2002a) A Pegging Algorithm for the Nonlinear Resource Allocation Problem. Computers & Operations Research, 29, 505-527. https://doi.org/10.1016/S0305-0548(00)00089-7

Zhang, B. and Hua, Z. (2008) A Unified Method for a Class of Convex Separable Nonlinear Knapsack Problems. European Journal of Operational Research, 191, 1-6. https://doi.org/10.1016/j.ejor.2007.07.005

Kiwiel, K.C. (2008) Breakpoint Searching Algorithms for the Continuous Quadratic Knapsack Problem. Mathematical Programming, 112, 473-491. https://doi.org/10.1007/s10107-006-0050-z

Sharkey, T.C., Romeijn, H.E. and Geunes, J. (2011) A Class of Nonlinear Nonseparable Continuous Knapsack and Multiple-Choice Knapsack Problems. Mathematical Programming, 126, 69-96. https://doi.org/10.1007/s10107-009-0274-9

Morin, T.L. and Marsten, R.E. (1976) An Algorithm for Nonlinear Knapsack Problems. Management Science, 22, 1147-1158. https://doi.org/10.1287/mnsc.22.10.1147

Ohtagaki, H., Iwasaki, A., Nakagawa, Y. and Narihisa, H. (2000) Smart Greedy Procedure for Solving a Multidimensional Nonlinear Knapsack Class of Reliability Optimization Problems. Mathematical and Computer Modelling, 31, 283-288. https://doi.org/10.1016/S0895-7177(00)00097-2

Li, D., Sun, X.L. and Wang, F.L. (2006) Convergent Lagrangian and Contour Cut Method for Nonlinear Integer Programming with a Quadratic Objective Function. SIAM Journal on Optimization, 17, 372-400. https://doi.org/10.1137/040606193

Li, D., Sun, X.L., Wang, J. and McKinnon, K.I. (2009) Convergent Lagrangian and Domain Cut Method for Nonlinear Knapsack Problems. Computational Optimization and Applications, 42, 67-104. https://doi.org/10.1007/s10589-007-9113-1

Quadri, D., Soutif, E. and Tolla, P. (2009) Exact Solution Method to Solve Large Scale Integer Quadratic Multidimensional Knapsack Problems. Journal of Combinatorial Optimization, 17, 157-167. https://doi.org/10.1007/s10878-007-9105-1

Wang, H., Kochenberger, G. and Glover, F. (2012) A Computational Study on the Quadratic Knapsack Problem with Multiple Constraints. Computers & Operations Research, 39, 3-11. https://doi.org/10.1016/j.cor.2010.12.017

Abdel-Malek, L.L. and Areeratchakul, N. (2007) A Quadratic Programming Approach to the Multi-Product Newsvendor Problem with Side Constraints. European Journal of Operational Research, 176, 1607-1619. https://doi.org/10.1016/j.ejor.2005.11.002

Abdel-Malek, L.L. and Otegbeye, M. (2013) Separable Programming/Duality Approach to Solving the Multi Constrained Newsboy/Gardener Problem. Applied Mathematical Modelling, 37, 4497-4508. https://doi.org/10.1016/j.apm.2012.09.059

Zhang, B. (2012) Multi-Tier Binary Solution Method for Multi-Product Newsvendor Problem with Multiple Constraints. European Journal of Operational Research, 218, 426-434. https://doi.org/10.1016/j.ejor.2011.10.053