## Introduction

Linear Programming (LP) constitutes one of the most fundamental classes of mathematical programming models and is widely used across many scientific areas, since many real-world problems can be formulated as Linear Programs (LPs) (Triantafyllidis & Papageorgiou, 2018; Gkioulekas & Papageorgiou, 2019; Yang et al., 2016; Amin & Emrouznejad, 2011; Janssens & Ramaekers, 2011; Fernández & Borrajo, 2012; Burdett et al., 2017). LP is nowadays an important tool in many applications, spanning a broad spectrum of fields (Bertsimas & Tsitsiklis, 1997). Many algorithms have been invented for the solution of LPs. The majority of these algorithms can be divided into two main categories: (i) simplex-type or pivoting algorithms and (ii) Interior Point Methods (IPMs).

The Primal Simplex Algorithm (PSA) (Dantzig, 1949) was the most efficient method for solving LPs until the 1980s, and it was ranked as one of the top 10 algorithms of the 20th century (Dongarra & Sullivan, 2000). It performs well in practice, particularly on LPs of small or medium size. Nevertheless, PSA is not polynomial; its worst-case complexity is exponential (Klee & Minty, 1972). The simplex algorithm sequentially visits adjacent vertices of the feasible region using pivot operations, so that each new vertex has a better objective value than the previous one (it is a monotonic algorithm). It is well known that the behavior of this algorithmic family can be improved by modifying: (i) the initial solution and (ii) the pivoting rule. The selection of an appropriate pivoting rule affects the number of iterations required for solving LPs, and different pivoting strategies yield different basis sequences in simplex-type algorithms. The flexibility in selecting the entering and leaving variables allows the development of various pivoting schemes. A complete presentation can be found in Terlaky & Zhang (1993).

The first polynomial-time algorithm for linear programming was the ellipsoid algorithm, developed by Khachiyan (1979). However, the ellipsoid algorithm is impractical for LP. Karmarkar (1984) then invented the first Interior Point Method (IPM); it generates a sequence of interior points and converges to the optimal solution in a small number of iterations. The most important advantage of IPMs over PSA is that the number of iterations is not proportional or related in any manner to the number of vertices. Most IPMs are infeasible in nature (Mehrotra, 1992; Mehrotra, 1993), and it is broadly accepted that infeasible primal-dual IPMs are the most efficient algorithms of this family. The development of IPMs has revolutionized the field of mathematical programming, and efficient IPMs outperform the PSA on large-scale LPs.

Despite this fact, LPs have continued to receive considerable scientific attention. More and more effective pivoting schemes have appeared in the literature (Terlaky, 1985; Murty & Fathi, 1984; Arsham, 2007; Pan, 2008; Jurik, 2008; Yeh & Corley, 2009; Elhallaoui et al., 2011; Li, 2013). Additionally, the papers (Basu, Loera & Junod, 2014; Gleixner, Steffy & Wolter, 2016; Omer et al., 2015) propose frameworks for an improved Primal Simplex algorithm that guarantee an improvement in the objective value after each iteration. Also, during the last decades researchers have proposed more efficient implementations of simplex-type algorithms.

The Exterior Point Simplex Algorithm (EPSA) was originally developed by Paparrizos (1991) for the assignment problem. EPSA can avoid the boundary of the polyhedron of the feasible region; it constructs two paths that converge to the optimal solution, one exterior to the feasible region and one feasible. Later on, Paparrizos (1993) generalized EPSA to general LPs. The key idea behind EPSA is that pivots based on feasible directions, used to select the pair of entering and leaving variables, can make the algorithm converge faster to the optimal solution. The geometry of EPSA, studied by Paparrizos, Samaras & Tsiplidis (2001), indicates that this algorithm is faster than PSA; this result was partially verified by preliminary computational results (Paparrizos, Samaras & Stephanides, 2003a; Paparrizos, Samaras & Triantafyllidis, 2008). A well-established way to improve EPSA is to transform its exterior path into a dual feasible simplex path. Such an algorithm is called the Primal-Dual Exterior Point Simplex Algorithm (PDEPSA) (Paparrizos, 1996). This algorithm requires an initial dual feasible basic solution; since such a solution is not always readily available, a modified big-M method is applied. Variations using a two-phase approach for the EPSA were presented in Triantafyllidis & Samaras (2014). The main advantage of PDEPSA is its promising computational performance.

An important improvement of the PDEPSA is to traverse the interior of the feasible region, in an attempt to avoid the degenerate vertices that hinder vertex-following algorithms. This algorithm is called the Primal-Dual Interior Point Simplex Algorithm (PDIPSA) (Samaras, 2001). PDIPSA can be seen as a separate procedure to move from any interior point to an optimal basic solution. It can be combined with IPMs in order to develop a hybrid algorithm consisting of two stages (Glavelis, Ploskas & Samaras, 2018): in the first stage an IPM is applied, and in the second stage PDIPSA is applied to compute an optimal basic solution. The main advantage of this hybrid algorithm is that it exploits the strengths of both IPMs and PDIPSA. The computational results are very encouraging. A complete review of exterior point algorithms can be found in Paparrizos, Samaras & Sifaleras (2015). A review paper summarizing the advantages and disadvantages of pivoting, ellipsoid and interior point methods was presented by Illes & Terlaky (2002). Several methods have been developed which combine IPMs with pivoting algorithms (Bixby et al., 1992; Bixby & Saltzman, 1994; Andersen & Ye, 1996; Pan, 2013).

All the above-mentioned algorithms are monotonic in nature. A monotonic linear optimization algorithm starts with a (feasible or infeasible) vertex and moves between (adjacent or not) vertices, improving the value of the objective function until an optimal solution is found. In this paper a non-monotonic infeasible simplex-type algorithm for general LPs is presented. The proposed method does not maintain monotonicity on the basic solutions, but only on the interior point which is used to construct the feasible direction at each iteration. This new algorithm comprises three different parts: (i) the interior Exterior Primal Simplex Algorithm (iEPSA), (ii) the Exterior Point Simplex Algorithm (EPSA) and (iii) the Primal-Dual Interior Point Simplex Algorithm (PDIPSA). The first one (iEPSA) interconnects a primal interior point with a primal (infeasible) exterior one. Using these two points, a feasible direction is constructed, and while iterating in a non-monotonic way the algorithm stops at either a primal or a dual feasible solution. At the same time, iEPSA strictly improves the objective value at the interior point from iteration to iteration; the exterior point reaches optimality independently of the monotonicity of the interior point. In conclusion, we have non-monotonic movement outside the feasible region and monotonic movement in the interior of the feasible region.

In order to gain insight into the practical behavior of the proposed algorithm, we have performed computational experiments on a set of benchmark problems (Netlib, Kennington, Mészáros). The computational results demonstrate that the proposed non-monotonic algorithm requires fewer iterations than the Primal Simplex algorithm implemented in the CPLEX and Gurobi commercial solvers.

This paper is organized as follows: In ‘Materials & Methods’ a brief reference to some basic notation for the linear problem and the algorithms described in this paper is given. Subsection iEPSA presents the proposed algorithm, an illustrative example and its pseudo-code. In the ‘Proof of Correctness’ subsection, mathematical proofs for the correctness of the algorithm are given. In order to gain an insight into the practical behavior of the proposed algorithm, we performed a computational study. These results are presented in the ‘Results’ section, followed by the ‘Conclusions’ section.

## Materials & Methods

In this section we give some necessary notation and definitions on LPs. Consider the following linear program in standard form: $\begin{array}{rl}\min & c^{T}x\\ \text{subject to} & Ax=b,\\ & x\ge 0\end{array} \qquad (1)$

where $A \in \mathbb{R}^{m\times n}$, $c, x \in \mathbb{R}^{n}$, $b \in \mathbb{R}^{m}$ and $T$ denotes transposition. We assume that $A$ has full rank, $\operatorname{rank}(A) = m$, $1 \le m \le n$. If $x$ satisfies $Ax = b$, $x \ge 0$, then $x$ is a feasible solution. The dual problem associated with Eq. (1) is presented in Eq. (2): $\begin{array}{rl}\max & b^{T}y\\ \text{subject to} & A^{T}y+s=c,\\ & s\ge 0\end{array} \qquad (2)$

where $y \in \mathbb{R}^{m}$ and $s \in \mathbb{R}^{n}$. Using a basic partition $(B, N)$ of $A$ as $A = [A_B\ A_N]$ and the corresponding partitionings $x^{T} = [x_B\ x_N]$, $c^{T} = [c_B\ c_N]$, Eq. (1) is written as: $\begin{array}{rl}\min & Z = c_{B}^{T}x_{B}+c_{N}^{T}x_{N}\\ \text{subject to} & A_{B}x_{B}+A_{N}x_{N}=b\\ & x_{B},x_{N}\ge 0\end{array} \qquad (3)$

In Eq. (3), $A_B$ is an $m \times m$ non-singular sub-matrix of $A$, called the basic matrix or basis, whereas $A_N$ is an $m \times (n-m)$ sub-matrix of $A$ called the non-basic matrix. The columns of $A$ which belong to subset $B$ are called basic and those which belong to $N$ are called non-basic. The solution $x_B = (A_B)^{-1}b$, $x_N = 0$ is called a basic solution. The solution of Eq. (2) is computed by the relation $s = c - A^{T}y$, where $y^{T} = (c_B)^{T}(A_B)^{-1}$ are the dual variables and $s$ are the dual slack variables. The basis $A_B$ is dual feasible iff $s \ge 0$. The $i$th row of the coefficient matrix $A$ is denoted by $A_{i.}$ and the $j$th column by $A_{.j}$. Notation $x_{B[i]}$ refers to the $i$th basic variable ($i$th element of vector $x_B$). In solving LPs by pivoting methods, a huge amount of computational effort is consumed on the inversion of the basis $A_B$, so the basis is maintained in some factorized form. We use the LU factorization available in MATLAB to compute the inverse of the basis in all three algorithms, iEPSA, EPSA and PDIPSA.
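To make this notation concrete, the quantities above can be computed for a tiny LP in a few lines of Python. This is a minimal sketch with plain lists and a hand-coded 2×2 inverse (rather than the LU factorization mentioned above); the problem data and variable names are ours, not from the paper:

```python
# Sketch: computing the quantities of Eq. (3) for a toy LP
# min -x1 - x2  s.t.  x1 + x2 <= 4,  x1 + 2*x2 <= 6,  x >= 0,
# in standard form after adding slacks x3, x4.
A = [[1, 1, 1, 0],
     [1, 2, 0, 1]]
b = [4, 6]
c = [-1, -1, 0, 0]

B, N = [0, 1], [2, 3]            # basic / non-basic index sets (0-based)

# A_B is 2x2 here, so its inverse is available in closed form.
(a11, a12), (a21, a22) = [[A[i][j] for j in B] for i in range(2)]
det = a11 * a22 - a12 * a21
AB_inv = [[ a22 / det, -a12 / det],
          [-a21 / det,  a11 / det]]

# basic solution x_B = (A_B)^{-1} b  (x_N = 0)
xB = [sum(AB_inv[i][k] * b[k] for k in range(2)) for i in range(2)]
# dual variables y^T = (c_B)^T (A_B)^{-1} and reduced costs s_N = c_N - (A_N)^T y
y = [sum(c[B[k]] * AB_inv[k][i] for k in range(2)) for i in range(2)]
sN = [c[j] - sum(y[i] * A[i][j] for i in range(2)) for j in N]

print(xB)   # [2.0, 2.0] -> x_B >= 0: primal feasible
print(sN)   # [1.0, 0.0] -> s_N >= 0: dual feasible, hence this basis is optimal
```

Here the chosen basis happens to be both primal and dual feasible, so it is optimal; an infeasible basic partition, as used by iEPSA, would simply show negative entries in $x_B$.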

### The iEPSA method

A common characteristic of the majority of simplex-type algorithms is that they can be described as a process following simplex paths that lead to an optimal solution. One advantage of exterior point algorithms is that they use two paths to reach the optimal basis: one is feasible and the other infeasible (exterior). The relaxation of the feasibility constraints seems to be efficient in practice. Another potential advantage of EPSA is that, should the initial direction cross the feasible region, the method can be applied directly to the original problem without first computing an initial feasible basic solution, thus completely avoiding Phase I; this is because EPSA never loses contact with the feasible region if the initial direction crosses it. On the other hand, one of the main disadvantages of EPSA is the difficulty of constructing a good moving direction.

This drawback can be avoided if the exterior path is replaced with a dual feasible simplex path. It has been shown that replacing the exterior path of an EPSA with a dual feasible simplex path results in an algorithm free from the computational disadvantages of EPSA (Paparrizos, Samaras & Stephanides, 2003b). A more effective version is the PDIPSA (Samaras, 2001). This algorithm can circumvent the problems of stalling and cycling more effectively and, as a result, improves the performance of primal-dual exterior point algorithms. The advantage of PDIPSA emanates from the fact that it uses an interior point.

The iEPSA method is initialized with a pair of initial points: an infeasible basic solution and an interior point. The initial interior point ($x_{interior}$) can be computed by applying an IPM to Eq. (1) with $c^T = 0$. Next, the method constructs a feasible direction ($d = x_{interior} - x_{current}$) and computes the pair of entering/leaving variables and a new (better) interior point. The above computations continue, moving between infeasible basic solutions on the exterior of the feasible region in a non-monotonic way, while in the interior successively better points (monotonic movement) are used to construct the search directions. The proposed method prioritizes monotonic pivots; however, should there be no eligible monotonic step, the method moves to the least deteriorating non-monotonic infeasible basic solution.
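A useful observation behind this construction is that both endpoints of the direction satisfy $Ax = b$, so every point of the line search can violate at most the nonnegativity bounds. A minimal numerical check, on toy data of our own choosing:

```python
# Any point on the line through x_current and x_interior satisfies Ax = b,
# since both endpoints do: only the bounds x >= 0 can be violated.
A = [[1, 1, 1, 0],
     [1, 2, 0, 1]]
b = [4, 6]

x_current  = [6, 0, -2, 0]   # infeasible basic solution (x3 < 0), Ax = b holds
x_interior = [1, 1, 2, 3]    # strictly positive point with Ax = b

d = [xi - xc for xi, xc in zip(x_interior, x_current)]   # feasible direction

# A d = 0, so x_current + t*d stays on the affine set Ax = b for every t
Ad = [sum(A[i][j] * d[j] for j in range(4)) for i in range(2)]
print(Ad)   # [0, 0]
```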

If iEPSA finds a primal feasible basic solution, then the EPSA is applied to monotonically converge to the optimal solution. If at any given iteration iEPSA moves to a dual feasible partition then PDIPSA is applied. With the last interior point and the dual feasible partition from iEPSA, PDIPSA can also monotonically find the optimal solution.

#### Step-by-step description of iEPSA and pseudocode

The algorithm consists of two phases. In the first phase, the algorithm generates a sequence of points $\left({x}_{interior}^{i},{x}_{exterior}^{i}\right)$, i = 0, 1, 2, ..., T, where ${x}_{interior}^{i}$ is a point in the relative interior of the feasible region for i = 0, ..., T, and ${x}_{exterior}^{i}$ is a basic solution of the LP that is infeasible for both the primal and the dual problem for i = 0, ..., T − 1. The first phase ends with a pair $\left({x}_{interior}^{T},{x}_{exterior}^{T}\right)$, where the exterior point is feasible for either the primal or the dual problem. If the first phase ends with a basic feasible solution to the primal, then the second phase runs the Exterior Point Simplex Algorithm (EPSA) from the previous literature (Paparrizos, 1993) to obtain the optimal basic feasible solution. If the first phase ends with a dual feasible solution, the second phase runs the Primal-Dual Interior Point Simplex Algorithm (PDIPSA), also from the previous literature (Samaras, 2001). We show that the first phase always ends with a basic solution that is feasible for either the primal or the dual problem. Thus, using the prior results on EPSA and PDIPSA, the overall algorithm is shown to correctly solve the LP.

The main idea behind the first phase is the following: the algorithm is initialized with any basic (infeasible) solution ${x}_{exterior}^{0}$ and an interior point ${x}_{interior}^{0}$, found by a standard interior point solver (in this case the MOSEK IPM). At every iteration $i = 0, 1, 2, \ldots$, one computes the intersection of the line passing



**Data:** Eq. (1), an infeasible basic partition $[B\ N]$, an interior point $x_{interior}$
**Result:** a primal or dual feasible basic partition $[\overline{B}\ \overline{N}]$

**(Initialization)** Compute:
$(A_B)^{-1}$, $x_B$, $w^T$, $(s_N)^T$
$x_{current} = [x_B\ x_N]$
$d = x_{interior} - x_{current}$
$P = \{j \in N : s_j < 0\}$, $Q = \{j \in N : s_j \ge 0\}$
$(s_P)^T = (c_P)^T - w^T A_P$, $(s_Q)^T = (c_Q)^T - w^T A_Q$

**(General loop)**
**while** $x_B \not\ge 0$ **do**
  $\alpha\alpha = x_{current} + \alpha d$, where $\alpha = \frac{x_{B[r_a]}}{-d_{B[r_a]}} = \min\left\{\frac{x_{B[i]}}{-d_{B[i]}} : d_{B[i]} < 0,\ i = 1,\ldots,m\right\}$
  $\beta\beta = x_{current} + \beta d$, where $\beta = \frac{x_{B[r_b]}}{-d_{B[r_b]}} = \max\left\{\frac{x_{B[i]}}{-d_{B[i]}} : x_{B[i]} < 0,\ i = 1,\ldots,m\right\}$
  **if** $\alpha = +\infty$ **then**
    STOP: Eq. (1) is unbounded.
  **else**
    Find $x_{middle} = \frac{\alpha\alpha + \beta\beta}{2}$
    **if** $c^T x_{middle} < c^T x_{interior}$ **then**
      $\overline{x}_{interior} = x_{middle}$
    **else if** $c^T x_{middle} = c^T x_{interior}$ **then**
      $\alpha = \min\left\{\frac{(x_{interior})_{[i]}}{c_{[i]}} : -c_{[i]} < 0\right\}$, $\overline{x}_{interior} = x_{interior} + \frac{\alpha}{2}(-c)$
    **else**
      $d = x_{interior} - x_{middle}$, $\alpha = \min\left\{\frac{(x_{interior})_{[i]}}{-d_{[i]}} : d_{[i]} < 0\right\}$, $\overline{x}_{interior} = x_{interior} + \frac{\alpha}{2}d$
    **end if**
    Compute:
    $x_{B[r_b]} = x_k$, $H_{rP} = ((A_B)^{-1})_{r_b.}A_P$, $H_{rQ} = ((A_B)^{-1})_{r_b.}A_Q$
    $\theta_1 = \frac{-s_p}{H_{rp}} = \min\left\{\frac{-s_j}{H_{rj}} : H_{rj} < 0,\ j \in P\right\}$, $\theta_2 = \frac{-s_q}{H_{rq}} = \min\left\{\frac{-s_j}{H_{rj}} : H_{rj} < 0,\ j \in Q\right\}$
    **(Pivot-Update)** Find $t_1, t_2$ such that $P(t_1) = p$, $Q(t_2) = q$.
    **if** $\theta_1 \le \theta_2$ **then** $l = p$ **else** $l = q$ **end if**
    Find $t$ such that $N(t) = l$. Set $N(t) = k$, $B(r_b) = l$. Update:
    $(A_B)^{-1}$, $x_B$, $w^T$, $(s_N)^T$, $(s_P)^T$, $(s_Q)^T$, $P$, $Q$, $\overline{x}_{current} = [x_B\ x_N]$, $\overline{d} = \overline{x}_{interior} - \overline{x}_{current}$
  **end if**
**end while**

**Algorithm 1:** iEPSA

through ${x}_{exterior}^{i}$ and ${x}_{interior}^{i}$ with the feasible region. This gives a line segment l (assuming the problem is bounded) with midpoint ${x}_{middle}^{i}$. If ${x}_{middle}^{i}$ has a better objective value than ${x}_{interior}^{i}$, it becomes the new interior point ${x}_{interior}^{i+1}$; otherwise, one takes a half-step from the current ${x}_{interior}^{i}$ along a suitable improving direction and sets the resulting point as the new ${x}_{interior}^{i+1}$. One also computes the endpoint of the line segment l closest to ${x}_{exterior}^{i}$. This endpoint lies on some facet of the feasible region; this facet dictates which basic variable will leave the basis, and an appropriate entering variable is then selected. This then gives the new basic solution ${x}_{exterior}^{i+1}$.
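The two ratio tests and the midpoint computation just described can be sketched as follows. This is an illustrative Python rendering on plain lists; the function and variable names are ours, not from the paper's implementation:

```python
# Ratio tests of the iEPSA general loop, on the basic components.
# xB : values of the basic variables (may contain negatives)
# dB : feasible direction d = x_interior - x_current, restricted to B
def ratio_tests(xB, dB):
    """Return (alpha, beta): the step lengths to the far (exit) and
    near (entry) intersection of x_current + t*d with the region."""
    alpha_c = [xB[i] / -dB[i] for i in range(len(xB)) if dB[i] < 0]
    beta_c  = [xB[i] / -dB[i] for i in range(len(xB)) if xB[i] < 0]
    alpha = min(alpha_c) if alpha_c else float("inf")   # minimum ratio test
    beta  = max(beta_c)  if beta_c  else 0.0            # maximum ratio test
    return alpha, beta

def midpoint(x_current, d, alpha, beta):
    """Midpoint of the segment between the boundary points aa and bb."""
    aa = [x + alpha * di for x, di in zip(x_current, d)]
    bb = [x + beta  * di for x, di in zip(x_current, d)]
    return [(a + b) / 2 for a, b in zip(aa, bb)]

print(ratio_tests([-1, 2], [2, -1]))          # (2.0, 0.5)
print(midpoint([-1, 2], [2, -1], 2.0, 0.5))   # [1.5, 0.75]
```

If no component of $d_B$ is negative, `alpha` is $+\infty$, which corresponds to the unboundedness STOP test of Algorithm 1.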

A flow diagram of iEPSA combined with EPSA and PDIPSA to provide an integrated solver for LP is shown in Fig. 1. A formal description of the iEPSA method is given in Algorithm 1.

#### An example

We will briefly demonstrate the iEPSA method on a simple example. Points {αα, ββ} represent the exiting and entering boundary points, respectively, for each feasible direction spanning the polyhedron. Assume we are given the following linear programming problem: $\begin{array}{rl}\min & -x_1 - x_2\\ \text{subject to} & x_1 - x_2 \le 2\\ & -x_1 + x_2 \le 4\\ & 3x_1 + 5x_2 \le 30\\ & -4x_1 - 13x_2 \le -23\\ & x_1 - 8x_2 \le -12\\ & 8x_1 - 5x_2 \le 3\\ & x_1, x_2 \ge 0\end{array}$

The corresponding feasible region is depicted in Fig. 2. There exist eight variables in total after the addition of the slack ones. The axes represent variables $x_1$ (the line $y=0$) and $x_2$ (the line $x=0$). The number in brackets next to P at each basic solution stands for the number of elements in vector P. The optimal point is [3, 4.2] and the optimal objective value is Z = −7.2. After the addition of the slack variables we have: $A=\begin{bmatrix}1&-1&1&0&0&0&0&0\\-1&1&0&1&0&0&0&0\\3&5&0&0&1&0&0&0\\-4&-13&0&0&0&1&0&0\\1&-8&0&0&0&0&1&0\\8&-5&0&0&0&0&0&1\end{bmatrix},\quad c^{T}=\begin{bmatrix}-1&-1&0&0&0&0&0&0\end{bmatrix},\quad b=\begin{bmatrix}2\\4\\30\\-23\\-12\\3\end{bmatrix}$

Initialization

Assume we start with the infeasible partition B = [3, 4, 5, 7, 1, 8], N = [2, 6] (B is ordered to match the rows of $x_B$ and $(A_B)^{-1}$ below) and the following interior point $x_{interior}$, calculated by MOSEK's IPM. All the appropriate computations are: ${(A_B)}^{-1}=\begin{bmatrix}1&0&0&0.25&0&0\\0&1&0&-0.25&0&0\\0&0&1&0.75&0&0\\0&0&0&0.25&1&0\\0&0&0&-0.25&0&0\\0&0&0&2&0&1\end{bmatrix},\quad (s_N)^{T}=[2.25,\,-0.25],\quad w^{T}=[0,0,0,0.25,0,0]$ $x_B=\begin{bmatrix}-3.75\\9.75\\12.75\\-17.75\\5.75\\-43\end{bmatrix},\quad x_{current}=\begin{bmatrix}5.75\\0\\-3.75\\9.75\\12.75\\0\\-17.75\\-43\end{bmatrix},\quad x_{interior}=\begin{bmatrix}0.3189\\3.0877\\4.7688\\1.2312\\13.6047\\18.4160\\12.3829\\15.8874\end{bmatrix}$

Here w are the dual variables. The set N of indexes determines the current basic solution: setting the non-basic variables to zero, N = [2, 6] corresponds in the $(x_1, x_2)$ plane to the point (5.75, 0). A feasible direction d is then constructed by connecting the interior point with the infeasible basic solution: $d = x_{interior} - x_{current} = \begin{bmatrix}0.3189\\3.0877\\4.7688\\1.2312\\13.6047\\18.4160\\12.3829\\15.8874\end{bmatrix} - \begin{bmatrix}5.75\\0\\-3.75\\9.75\\12.75\\0\\-17.75\\-43\end{bmatrix} = \begin{bmatrix}-5.4311\\3.0877\\8.5188\\-8.5188\\0.8547\\18.4160\\30.1329\\58.8874\end{bmatrix}$

Mapping the direction d onto the basic variables we get $d_B$: $d_B = \begin{bmatrix}8.5188\\-8.5188\\0.8547\\30.1329\\-5.4311\\58.8874\end{bmatrix}$

Also, we have P = [6] and Q = [2]. The direction from the initial infeasible basic solution (5.75, 0) to the interior point is shown in Fig. 3. Since $x_{current}$ is our initial infeasible basic solution and d is our current feasible direction, this direction intersects the boundary of the feasible region at an exiting point AA1 (as shown in Fig. 3), which can be calculated using the relations below:

General loop - Iteration 1

$\alpha = \frac{x_{B[r_a]}}{-d_{B[r_a]}} = \min\left\{\frac{x_{B[i]}}{-d_{B[i]}} : d_{B[i]} < 0\right\} = \min\left\{\frac{x_B[2,5]}{-d_B[2,5]}\right\} = \min\left\{\frac{9.75}{8.5188},\frac{5.75}{5.4311}\right\} = \min\{1.1445, 1.0587\} = 1.0587$ $\alpha\alpha = x_{current} + \alpha d = \begin{bmatrix}5.75\\0\\-3.75\\9.75\\12.75\\0\\-17.75\\-43\end{bmatrix} + 1.0587\begin{bmatrix}-5.4311\\3.0877\\8.5188\\-8.5188\\0.8547\\18.4160\\30.1329\\58.8874\end{bmatrix} = \begin{bmatrix}0\\3.2690\\5.2690\\0.7310\\13.6549\\19.4973\\14.1522\\19.3451\end{bmatrix}$

In a similar manner, the entering point BB1 (as shown in Fig. 3) can be calculated using a maximum ratio test: $\beta = \frac{-x_{B[r_b]}}{d_{B[r_b]}} = \max\left\{\frac{-x_{B[i]}}{d_{B[i]}} : x_{B[i]} < 0\right\} = \max\left\{\frac{-x_B[1,4,6]}{d_B[1,4,6]}\right\} = \max\left\{\frac{3.75}{8.5188},\frac{17.75}{30.1329},\frac{43}{58.8874}\right\} = \max\{0.4402, 0.5891, 0.7302\} = 0.7302$ Hence $r_b = 3$: the maximum is attained at the 3rd element of the candidate list [1, 4, 6], so r = 6.

The entering point then is: $\beta\beta = x_{current} + \beta d = \begin{bmatrix}5.75\\0\\-3.75\\9.75\\12.75\\0\\-17.75\\-43\end{bmatrix} + 0.7302\begin{bmatrix}-5.4311\\3.0877\\8.5188\\-8.5188\\0.8547\\18.4160\\30.1329\\58.8874\end{bmatrix} = \begin{bmatrix}1.7842\\2.2547\\2.4705\\3.5295\\13.3741\\13.4475\\4.2533\\0\end{bmatrix}$

It is easy now to compute the middle point MIDDLE1 (as shown in Fig. 3) between αα and ββ: $x_{middle} = \frac{\alpha\alpha + \beta\beta}{2} = \begin{bmatrix}0.8921\\2.7618\\3.8697\\2.1303\\13.5145\\16.4724\\9.2027\\9.6726\end{bmatrix}$
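The two step lengths of this iteration can be reproduced directly from the vectors $x_B$ and $d_B$ above with a short Python check (rounding to four decimals as in the text):

```python
# x_B and d_B from Iteration 1 of the example
xB = [-3.75, 9.75, 12.75, -17.75, 5.75, -43.0]
dB = [8.5188, -8.5188, 0.8547, 30.1329, -5.4311, 58.8874]

# minimum ratio test (components with dB[i] < 0) -> exit step alpha
alpha = min(xB[i] / -dB[i] for i in range(6) if dB[i] < 0)
# maximum ratio test (components with xB[i] < 0) -> entry step beta
beta = max(-xB[i] / dB[i] for i in range(6) if xB[i] < 0)

print(round(alpha, 4), round(beta, 4))   # 1.0587 0.7302
```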

We observe the following:

• Both exiting and entering points have only one zero element, which was expected since both points lie on the boundary of a 2-D problem.

• The maximum step size β is less than the minimum step size α, so the entering point is closer to the infeasible basic solution and the exiting point is further away, as expected, since the direction intersects the feasible region.

• These two boundary points define a unique feasible ray segment from ββ to αα (ββ → αα).

• The objective function value at ββ is better than at αα (the direction is non-improving), and the objective function value at the middle point is better than at the initial interior point.

Since the middle point has a better objective value than the initial interior point ($Z_{middle} = c^T x_{middle} = -0.8921 - 2.7618 = -3.6539$ and $Z_{interior} = c^T x_{interior} = -0.3189 - 3.0877 = -3.4066$), we keep this middle point as an improved interior point for the next iteration. We now choose the leaving variable according to the ββ boundary point: k = 8, r = 6. The variable $x_{B(r)} = x_k = x_8$ is leaving the basis. We can now move on to select the entering variable $x_l$:

$H_{rP} = ((A_B)^{-1})_{r.}A_{.P} = [2] > 0$ $H_{rQ} = ((A_B)^{-1})_{r.}A_{.Q} = [-31] < 0$ $\theta_1 = [\ ],\quad \theta_2 = [0.0726]$ $l = 2,\quad t_2 = 1,\quad P = [6],\quad Q = [8]$

Since only θ2 could be computed, the selection is done using the set Q. Variable $x_l = x_2$ is entering the basis. The pivot operation now updates the basic and non-basic index lists as shown below: $\overline{B} = [3,4,5,7,1,2],\quad \overline{N} = [8,6],\quad (A_B)^{-1}=\begin{bmatrix}1&0&0&-0.0242&0&-0.1371\\0&1&0&0.0242&0&0.1371\\0&0&1&0.4435&0&-0.1532\\0&0&0&-0.4758&1&-0.3629\\0&0&0&-0.0403&0&0.1048\\0&0&0&-0.0645&0&-0.0323\end{bmatrix},\quad (s_N)^{T}=[0.0726,\,-0.1048],\quad w^{T}=[0,0,0,0.1048,0,-0.0726],\quad x_B=\begin{bmatrix}2.1452\\3.8548\\19.3387\\-2.1452\\1.2419\\1.3871\end{bmatrix},\quad \overline{x}_{current}=\begin{bmatrix}1.2419\\1.3871\\2.1452\\3.8548\\19.3387\\0\\-2.1452\\0\end{bmatrix}$
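The entering-variable selection of this iteration can be reproduced with a small Python sketch of the θ ratio test. The helper function and names are ours; the data are the values of $s_P$, $s_Q$, $H_{rP}$ and $H_{rQ}$ computed above:

```python
def theta(s, H):
    """Minimum ratio -s[t] / H[t] over eligible positions (H[t] < 0).
    Returns (ratio, position); (inf, None) when no position is eligible."""
    cands = [(-s[t] / H[t], t) for t in range(len(s)) if H[t] < 0]
    return min(cands) if cands else (float("inf"), None)

# Iteration 1 data: P = [6], Q = [2], with s_6 = -0.25 and s_2 = 2.25
sP, HrP = [-0.25], [2.0]     # H_rP > 0: no eligible column in P
sQ, HrQ = [2.25], [-31.0]

theta1, t1 = theta(sP, HrP)
theta2, t2 = theta(sQ, HrQ)
# theta1 is +inf, so the entering index is taken from Q: x_2 enters
print(round(theta2, 4))   # 0.0726
```

Because `theta1` is infinite, the test $\theta_1 \le \theta_2$ of Algorithm 1 fails and the entering variable is taken from Q, matching the pivot above.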

Note that $x_B \not\ge 0$ still holds after this pivot, so the loop continues. The new direction $\overline{d} = \overline{x}_{middle} - \overline{x}_{current}$ now is: $\overline{d} = \begin{bmatrix}0.8921\\2.7618\\3.8697\\2.1303\\13.5145\\16.4724\\9.2027\\9.6726\end{bmatrix} - \begin{bmatrix}1.2419\\1.3871\\2.1452\\3.8548\\19.3387\\0\\-2.1452\\0\end{bmatrix} = \begin{bmatrix}-0.3498\\1.3747\\1.7246\\-1.7246\\-5.8242\\16.4724\\11.3479\\9.6726\end{bmatrix} \Longrightarrow \overline{d}_B = \begin{bmatrix}1.7246\\-1.7246\\-5.8242\\11.3479\\-0.3498\\1.3747\end{bmatrix}$

General loop: Iteration 2

The new exiting boundary point AA2 (as shown in Fig. 4) is:

$\alpha = \frac{x_{B[r_a]}}{-d_{B[r_a]}} = \min\left\{\frac{x_{B[i]}}{-d_{B[i]}} : d_{B[i]} < 0\right\} = \min\left\{\frac{x_B[2,3,5]}{-d_B[2,3,5]}\right\} = \min\left\{\frac{3.8548}{1.7246},\frac{19.3387}{5.8242},\frac{1.2419}{0.3498}\right\} = \min\{2.2352, 3.3204, 3.5499\} = 2.2352$ $\alpha\alpha = x_{current} + \alpha d = \begin{bmatrix}1.2419\\1.3871\\2.1452\\3.8548\\19.3387\\0\\-2.1452\\0\end{bmatrix} + 2.2352\begin{bmatrix}-0.3498\\1.3747\\1.7246\\-1.7246\\-5.8242\\16.4724\\11.3479\\9.6726\end{bmatrix} = \begin{bmatrix}0.4599\\4.4599\\6\\0\\6.3203\\36.8196\\23.2200\\21.6204\end{bmatrix}$

Correspondingly, the entering point BB2 (as shown in Fig. 4) can be calculated using the maximum ratio test: $\beta = \frac{-x_{B[r_b]}}{d_{B[r_b]}} = \max\left\{\frac{-x_{B[i]}}{d_{B[i]}} : x_{B[i]} < 0\right\} = \max\left\{\frac{-x_B[4]}{d_B[4]}\right\} = \max\left\{\frac{2.1452}{11.3479}\right\} = 0.1890$ Hence the maximum is attained at basic position r = 4. The entering point then is:

$\beta\beta = x_{current} + \beta d = \begin{bmatrix}1.2419\\1.3871\\2.1452\\3.8548\\19.3387\\0\\-2.1452\\0\end{bmatrix} + 0.1890\begin{bmatrix}-0.3498\\1.3747\\1.7246\\-1.7246\\-5.8242\\16.4724\\11.3479\\9.6726\end{bmatrix} = \begin{bmatrix}1.1758\\1.6470\\2.4712\\3.5288\\18.2377\\3.1139\\0\\1.8285\end{bmatrix}$

The new middle point MIDDLE2 (as shown in Fig. 4) between $\alpha\alpha$ and $\beta\beta$ is now: ${\overline{x}}_{middle}=\frac{\alpha \alpha +\beta \beta }{2}=\left[\begin{array}{c}\hfill 0.8179\hfill \\ \hfill 3.0535\hfill \\ \hfill 4.2356\hfill \\ \hfill 1.7644\hfill \\ \hfill 12.2790\hfill \\ \hfill 19.9667\hfill \\ \hfill 11.6100\hfill \\ \hfill 11.7244\hfill \end{array}\right]$
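The two ratio tests and the midpoint above can be reproduced in a few lines (a minimal numerical sketch using the vectors of this example; the variable names are ours, not from the paper):

```python
# Point and direction taken from the worked example above.
x = [1.2419, 1.3871, 2.1452, 3.8548, 19.3387, 0.0, -2.1452, 0.0]
d = [-0.3498, 1.3747, 1.7246, -1.7246, -5.8242, 16.4724, 11.3479, 9.6726]

# Minimum ratio test over components with d_i < 0 gives the leaving point aa.
alpha = min(x[i] / -d[i] for i in range(len(x)) if d[i] < 0)
aa = [xi + alpha * di for xi, di in zip(x, d)]

# Maximum ratio test over components with x_i < 0 gives the entering point bb.
beta = max(-x[i] / d[i] for i in range(len(x)) if x[i] < 0)
bb = [xi + beta * di for xi, di in zip(x, d)]

# Middle point of the feasible segment between aa and bb.
middle = [(a + b) / 2 for a, b in zip(aa, bb)]
```

Both boundary points come out non-negative, and `alpha`, `beta` and `middle` match the values computed above up to rounding.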

The variable $x_{B(r)} = x_k = x_7$ ($r = 4$) is leaving the basis. We can now move on to select the entering variable $x_l$:

$P=\left[6\right],\ Q=\left[8\right]$, ${H}_{rP}={\left({A}_{B}\right)}^{-1}{A}_{.P}=\left[-0.4758\right]<0$, ${H}_{rQ}={\left({A}_{B}\right)}^{-1}{A}_{.Q}=\left[-0.3629\right]<0$, $l=2$, ${t}_{2}=1$, $P=\left[7\right]$, $Q=\left[8\right]$, ${\theta }_{1}=\left[-0.2203\right]$, ${\theta }_{2}=\left[0.2\right]$

Since $\theta_1 \le \theta_2$, the selection is made from set P. Variable $x_l = x_6$ enters the basis. The pivot operation now updates the basic and nonbasic index lists as shown below:

$\overline{B}=\left[\begin{array}{c}\hfill 3,4,5,6,1,2\hfill \end{array}\right],\overline{N}=\left[\begin{array}{c}\hfill 8,7\hfill \end{array}\right]$ ${\left({A}_{B}\right)}^{-1}=\left[\begin{array}{cccccc}\hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -0.0508\hfill & \hfill -0.1186\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0.0508\hfill & \hfill 0.1186\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill & \hfill 0.9322\hfill & \hfill -0.4915\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill -2.1017\hfill & \hfill 0.7627\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -0.0847\hfill & \hfill 0.1356\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill -0.1356\hfill & \hfill 0.0169\hfill \end{array}\right],{\left({s}_{N}\right)}^{T}=\left[\begin{array}{c}\hfill 0.1525,-0.2203\hfill \end{array}\right],$ ${w}^{T}=\left[\begin{array}{c}\hfill 0,0,0,0,0.2203,-0.1525\hfill \end{array}\right],{x}_{B}=\left[\begin{array}{c}\hfill 2.2542\hfill \\ \hfill 3.7458\hfill \\ \hfill 17.3390\hfill \\ \hfill 4.5085\hfill \\ \hfill 1.4237\hfill \\ \hfill 1.6780\hfill \end{array}\right]$

Note that $x_B \ge 0$ after this pivot, so method iEPSA stops here. The basic solutions constructed, as shown in Fig. 4, are C1 → G → A1. In practice, EPSA would take over and finish the optimization, moving in one extra iteration from the feasible vertex A1 to the optimal vertex. Note that in the second pivot the middle point constructed (MIDDLE2) could have a worse objective value than MIDDLE1 (although it appears better in this case). We deliberately did not compute it here, since the second basic solution constructed is feasible. The sequence of objective function values at each pair of basic solutions, together with the corresponding interior points, is shown below:

### Proof of Correctness

The proposed method consists of three different algorithms. Besides iEPSA (the non-monotonic part), the other two (EPSA and PDIPSA) have already been proven to be correct and finite; for more details see Paparrizos, Samaras & Stephanides (2003a) and Paparrizos, Samaras & Stephanides (2003b). Since iEPSA is non-monotonic in nature, its finiteness cannot be established by showing that the algorithm improves the objective function value at each iteration. Therefore, iEPSA's finiteness relies on the non-cycling property.

We will use the notation of Zhang (1999) with respect to sign properties and the simplex tableau. We will then relate this to what is proven in Fukuda & Terlaky (1997) and explain how it affects iEPSA's finiteness. With respect to the basic partition [B, N] we call the following augmented matrix a simplex tableau:

where:

${\left({\overline{a}}_{ij}\right)}_{\left|B\right|\times \left|N\right|}={\left({A}_{B}\right)}^{-1}{A}_{N}$ $\left\{{s}_{j}\mid j\in N\right\}={\left({c}_{N}\right)}^{T}-{\left({c}_{B}\right)}^{T}{\left({A}_{B}\right)}^{-1}{A}_{N}$ $\left\{{x}_{i}\mid i\in B\right\}={\left({A}_{B}\right)}^{-1}b$ $z={\left({c}_{B}\right)}^{T}{\left({A}_{B}\right)}^{-1}b$

It is well known that if $x_B \ge 0$ the basis is primal feasible, and if $s_N \ge 0$ it is dual feasible; if both hold, the current basis is optimal. iEPSA requires neither primal nor dual feasibility, and the objective function value does not necessarily improve monotonically. Let us focus on correctness first. We have to prove that at each iteration an admissible pivot is always available; that is, the selection of both the entering and the leaving variable is well defined. First, we provide a series of proofs about the monotonicity of the interior point that iEPSA uses to construct a feasible direction at each iteration.
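The tableau quantities above translate directly into code. The sketch below is our own helper (not from the paper); a small Gauss–Jordan `solve` stands in for the $(A_B)^{-1}$ products, and the two feasibility flags encode the conditions just stated:

```python
def solve(M, rhs):
    """Solve M y = rhs by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        for i in range(n):
            if i != col and A[i][col] != 0:
                f = A[i][col] / A[col][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def tableau_quantities(A, b, c, B, N):
    """Simplex-tableau quantities for the basic partition [B, N]."""
    AB = [[A[i][j] for j in B] for i in range(len(A))]
    xB = solve(AB, b)                      # x_B = (A_B)^{-1} b
    # w solves (A_B)^T w = c_B, so the reduced costs are s_j = c_j - w^T A_.j
    ABt = [list(col) for col in zip(*AB)]
    w = solve(ABt, [c[j] for j in B])
    sN = [c[j] - sum(w[i] * A[i][j] for i in range(len(A))) for j in N]
    z = sum(c[B[i]] * xB[i] for i in range(len(B)))
    return xB, sN, z, all(v >= 0 for v in xB), all(s >= 0 for s in sN)
```

Optimality corresponds to both flags being true; iEPSA requires neither.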

Lemma 1

A direction from an infeasible (exterior) point towards an interior one intersects the boundary of the feasible region at a unique pair of entering/leaving points.

Proof

Assume the polyhedron X = {x | Ax ≥ b}. Let the points $\alpha\alpha$ and $\beta\beta$ be computed as presented in the iEPSA algorithm:

$\alpha \alpha ={x}_{current}+\alpha d$ $\beta \beta ={x}_{current}+\beta d$

Since $x_{interior}$ and the points $\alpha\alpha$, $\beta\beta$ lie on the same direction, the interior point can be written as the convex combination: ${x}_{interior}=\lambda \alpha \alpha +\left(1-\lambda \right)\beta \beta >0,\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\lambda \in \left(0,1\right).$

So for each element $i$ of $x_{interior}$ the following holds: $\left(\lambda\,\alpha\alpha_{i}+\left(1-\lambda \right)\beta\beta_{i}\right)>0,\ \lambda\in(0,1),\ \forall i=1,...,m$ and, since $\alpha\alpha \ge 0$ and $\beta\beta \ge 0$, this implies $\left(\alpha\alpha_{i}+\beta\beta_{i}\right)>0$,

so there is no index $i$ for which both elements of $\alpha\alpha$ and $\beta\beta$ are zero. Hence $\alpha\alpha$ and $\beta\beta$ are distinct points lying on different hyperplanes of the feasible region.

Lemma 2

Given a pair of a primal infeasible basic solution $x_{current}$ and an interior point $x_{interior}$, the direction $d = x_{interior} - x_{current}$ intersects the feasible region at a pair of leaving/entering boundary points $\alpha\alpha$, $\beta\beta$, and the middle point ${x}_{middle}=\frac{\alpha \alpha +\beta \beta }{2}$ is also interior.

Proof

We use contradiction. Assume that $x_{middle}$ is not interior. From Lemma 1 we know that the entering point $\beta\beta \ge 0$ and the leaving point $\alpha\alpha \ge 0$ are distinct. For the middle point we have: ${x}_{middle}=\frac{\alpha \alpha +\beta \beta }{2}\ge 0$

Since xmiddle is not an interior point, there exists at least one element i equal to zero: $\frac{\alpha {\alpha }_{i}+\beta {\beta }_{i}}{2}=0$

Since $\alpha\alpha_i, \beta\beta_i \ge 0$, this forces $\alpha\alpha_i = \beta\beta_i = 0$, which contradicts Lemma 1. So $x_{middle}$ is interior ($x_{middle} > 0$).

Lemma 3

From a given interior point, half the step of the minimum ratio test computed along any descending direction results in a strictly better interior point.

Proof

Let $x_{interior}$ be the first interior point. For a given descending direction $d$, it is widely known that the minimum ratio test computes the largest step we can take along $d$ while preserving feasibility. We get: $\alpha =\frac{{x}_{B\left[{r}_{a}\right]}}{-{d}_{B\left[{r}_{a}\right]}}=min\left\{\frac{{x}_{B\left[i\right]}}{-{d}_{B\left[i\right]}}:{d}_{B\left[i\right]}<0\right\},\forall i=1,2,...,m.$

The new interior point is: ${\overline{x}}_{interior}={x}_{interior}+\frac{\alpha }{2}d$. Suppose, for contradiction, that ${\overline{x}}_{interior}$ is not strictly better. We have:

${c}^{T}{x}_{interior}\le {c}^{T}{\overline{x}}_{interior}⇔$ ${c}^{T}{x}_{interior}-{c}^{T}{\overline{x}}_{interior}\le 0⇔$ ${c}^{T}\left({x}_{interior}-{\overline{x}}_{interior}\right)\le 0$

If we substitute ${\overline{x}}_{interior}$ we get: ${c}^{T}\left({x}_{interior}-{x}_{interior}-\frac{\alpha }{2}d\right)\le 0⇔-\frac{\alpha }{2}{c}^{T}d\le 0⇔{c}^{T}d\ge 0$

which is a contradiction, since $d$ is a descending direction (${c}^{T}d<0$). Hence, the new interior point ${\overline{x}}_{interior}$ has a strictly better objective value.
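Lemma 3 is easy to check numerically on a toy instance (a hedged sketch: the non-negativity region and the direction below are our own illustration, not from the paper):

```python
# Interior point of {x >= 0} moved half of the maximum feasible step
# along a descending direction, as in Lemma 3.
c = [1.0, 2.0]
x_interior = [5.0, 5.0]          # strictly interior: x > 0
d = [-1.0, -1.0]                 # descending: c^T d = -3 < 0

# Minimum ratio test: largest step t keeping x + t*d >= 0.
alpha = min(x_interior[i] / -d[i] for i in range(2) if d[i] < 0)

# Half step, as in the lemma.
x_new = [x_interior[i] + (alpha / 2) * d[i] for i in range(2)]

obj = lambda v: sum(ci * vi for ci, vi in zip(c, v))
```

The half step keeps the point strictly interior while strictly decreasing the objective.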

Lemma 4

At each iteration of iEPSA, the direction d intersects the feasible region.

Proof

Lemmas 1, 2 and 3 immediately imply that at each iteration we construct the direction towards an interior point. Thus, each direction intersects the feasible region.

Theorem 1

At each iteration of iEPSA, the objective value at the interior point xinterior strictly decreases.

Proof

iEPSA starts by constructing the direction towards an interior point. Lemma 1 implies that the entering and leaving points of this direction are distinct. At each iteration the algorithm then constructs the middle point of the entering feasible segment; Lemma 2 shows that this point is also interior. It then compares the two interior points and acts accordingly to secure the construction of a better interior point. Even when the infeasible basic solutions yield middle points worse than those of previous iterations (the non-monotonic case), the descending direction between the two interior points still allows improvement of the interior point. Lemma 3 guarantees the monotonicity in the interior of the feasible region.

We examine each case separately. The possible orderings of the objective function value at the points $x_{middle}$ and $x_{interior}$ are shown below:

• cTxmiddle < cTxinterior: This case is trivial to examine. The middle point itself is directly a better interior point, due to the relative geometric position of the two points.

• cTxmiddle = cTxinterior: Since the objective function value is the same at both points, they either coincide or lie on a hyperplane orthogonal to the objective function vector c. In that case no descending direction can be constructed between the two points, so we use d =  − c, which is a descending direction. By Lemma 3, the new interior point will have a better objective function value than the previous one.

• cTxmiddle > cTxinterior: We have ${c}^{T}{x}_{middle}>{c}^{T}{x}_{interior}⇔$ ${c}^{T}{x}_{interior}-{c}^{T}{x}_{middle}<0⇔$ ${c}^{T}\left({x}_{interior}-{x}_{middle}\right)<0⇔$ ${c}^{T}d<0$ Hence the direction d = xinterior − xmiddle is a descending direction. By Lemma 3, the interior point constructed in the next iteration using half of the minimum ratio step from the point xinterior will be better. So a better interior point is constructed in this case as well.

In all possible cases an improved interior point can be constructed. Thus iEPSA maintains monotonicity on the interior points.
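The three cases of Theorem 1 amount to a simple rule for choosing the next descending direction. The helper below is a hedged sketch (the function name and tolerance are ours; case 1 is rendered here as moving towards the middle point, which in the paper simply becomes the new interior point):

```python
def next_direction(c, x_interior, x_middle, tol=1e-9):
    """Choose a descending direction according to the three cases of Theorem 1."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    z_mid, z_int = dot(c, x_middle), dot(c, x_interior)
    if z_mid < z_int - tol:
        # Case 1: the middle point is already better; move towards it.
        return [m - i for m, i in zip(x_middle, x_interior)]
    if abs(z_mid - z_int) <= tol:
        # Case 2: equal objective values; no usable direction exists
        # between the two points, so fall back to d = -c.
        return [-ci for ci in c]
    # Case 3: z_mid > z_int, so d = x_interior - x_middle is descending.
    return [i - m for i, m in zip(x_interior, x_middle)]
```

In every branch the returned direction satisfies $c^T d < 0$, so the half-step of Lemma 3 applies.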

Lemma 5

At each iteration of iEPSA, a leaving variable is always eligible.

Proof

Assume that at some iteration $x_{interior} > 0$ is the current interior point and $x_{current}$ is the current infeasible basic solution, with $x_{B[i]} < 0$ for at least one basic index $i$. From the constraints of Eq. (1) we have:

$\left(A{x}_{current}=b\phantom{\rule{1em}{0ex}}and\phantom{\rule{1em}{0ex}}A{x}_{interior}=b\right)⇔$ $A{x}_{interior}=A{x}_{current}⇔$ $A{x}_{interior}-A{x}_{current}=0⇔$ $A\left({x}_{interior}-{x}_{current}\right)=0⇔$ $Ad=0$

hence d is a direction. The maximum ratio test is then given by: $\beta =\frac{-{x}_{B\left[r\right]}}{{d}_{B\left[r\right]}}=max\left\{\frac{-{x}_{B\left[i\right]}}{{d}_{B\left[i\right]}}:{x}_{B\left[i\right]}<0\right\},\forall i=1,2,...m$

and we now need to prove that β > 0. We have d = xinterior − xcurrent; for every component with $x_{current[i]} < 0$, since $x_{interior} > 0$, it follows that $d_i > 0$. Hence every ratio entering the maximum ratio test (those with $x_{B[i]} < 0$) has a positive numerator $-x_{B[i]}$ and a positive denominator $d_{B[i]}$, so β > 0 is well defined.

Lemma 6

If a pivot from an infeasible basic solution $B$ to another basic solution $B'$ is admissible for iEPSA, then the reverse pivot $B' \to B$ is not.

Proof

First we analyze the sign properties of the algorithm. Assume that $k$ is the leaving variable ($k \in B$) and $l$ the entering one ($l \in N$). The difference in objective function value between two consecutive basic solutions $\left[B,N\right]$ and $\left[{B}^{\prime },{N}^{\prime }\right]$ is given by: ${z}^{\prime }-z=\Delta z={\left({s}_{N}\right)}_{l}{\left({x}_{{B}^{\prime }}\right)}_{l}.$

We know also that the revised updating equation for the basic solution xB is:

${x}_{{B}^{\prime }}={x}_{B}-\frac{f}{g}{h}_{.l},\text{where}$ $f={x}_{B\left(r\right)},\ g={H}_{rl},\ {h}_{.l}={\left({A}_{B}\right)}^{-1}{A}_{.l}$

The algorithm selects the entering variable so that ${H}_{rN}=\left[{H}_{rP}\ {H}_{rQ}\right]<0$, thus ${H}_{rl}=g<0$. This means that the pivot element, at the intersection of the pivot row and the pivot column, is always negative for iEPSA. For the pivot row it also holds that ${H}_{rl}=-1$ and ${x}_{B\left(r\right)}={x}_{k}=0$. So:

${x}_{B{\left(l\right)}^{\prime }}={x}_{l}=\frac{f}{g}=\frac{<0}{<0}>0.$

This means that the leaving variable will be replaced by a positive-valued variable after the pivot. Since the pivot $B' \to B$ is the reverse of $B \to B'$, the leaving variable of the first pivot cannot immediately be selected as leaving for the next pivot, because it would need to be negative (recall that the leaving variable is always selected by the maximum ratio test, and is thus always negative).

Lemma 7

If iEPSA selects the entering variable from set P then the step is monotonic. If it is from set Q then the step is non-monotonic.

Proof

When the selection is made from set P or Q respectively, since:

$P=\left\{j\in N:{s}_{j}<0\right\}and\phantom{\rule{1em}{0ex}}Q=\left\{j\in N:{s}_{j}\ge 0\right\}$

we have (using the positivity of the entering variable $x_l$ in the adjacent basic solution, Lemma 6):

$P⟹\Delta z={z}^{\prime }-z={s}_{l}{x}_{l}=\left(-\right)\left(+\right)<0$ $Q⟹\Delta z={z}^{\prime }-z={s}_{l}{x}_{l}=\left(+\right)\left(+\right)>0$

The sign properties proved above result in the following unique pair of pivot types, depicted in Figs. 5 and 6. Selecting an entering variable from set Q (thus a pivot of type II) automatically means that either ${H}_{rP}\ge 0$ or $P=\varnothing$.
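The sign bookkeeping of Lemma 7 is easy to sketch in code (our own helper names; the numeric check in the usage below reuses $s_N$ and $\overline{N}$ from the worked example earlier):

```python
def classify_entering(sN, N):
    """Split the nonbasic indices into P = {j : s_j < 0} and Q = {j : s_j >= 0}.
    By Lemma 7, a pivot from P is monotonic (dz = s_l * x_l' < 0) and a pivot
    from Q is non-monotonic (dz > 0), since x_l' > 0 after the pivot (Lemma 6)."""
    P = [j for j, s in zip(N, sN) if s < 0]
    Q = [j for j, s in zip(N, sN) if s >= 0]
    return P, Q
```

For instance, with $s_N^T = [0.1525, -0.2203]$ and $\overline{N} = [8, 7]$ from the worked example, variable 7 falls in P (monotonic pivot) and variable 8 in Q.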

Lemma 8

The selection of the entering variable for iEPSA is well defined.

Fukuda & Terlaky (1997) define three types of terminal tableau for a linear problem. These are considered terminal because they define three different terminal states of an LP: (i) optimality, (ii) primal inconsistency and (iii) dual inconsistency. We focus on the second one, shown here:

Proof

Notice that, since the leaving variable $x_k = x_{B(r)}$ is always negative for iEPSA, and in Fukuda & Terlaky (1997) $D = -(A_B)^{-1}A_N$, that tableau version has the opposite signs of what iEPSA uses for the pivot row (${H}_{rN}={\left({A}_{B}\right)}_{r}^{-1}{A}_{N}$). iEPSA selects an entering variable with ${H}_{rN}=\left[{H}_{rP}\ {H}_{rQ}\right]<0$, so the pivot row of this terminal tableau matches the case ${H}_{rN}\ge 0$ for iEPSA. This would be a deadlock for iEPSA: a case where no eligible entering variable can be detected (all pivot elements positive or zero). As a result, this terminal tableau cannot occur at iEPSA run time: it reveals infeasibility of the primal problem, while iEPSA is only applicable to feasible LPs, since it requires an initial interior point to initialize.

Following what is stated in Fukuda & Terlaky (1997), we will now prove that iEPSA can never reach a deadlock, a situation in which the method would be forced to cycle. We first need a definition:

Definition: We call a constraint of an LP redundant if the half-space it represents geometrically is already implied by other constraint(s). A redundant constraint can therefore be eliminated from the original LP without altering its feasible region.

Theorem 2

Method iEPSA cannot reach a cycling deadlock.

Proof

The near-terminal tableau of type B, as shown in Fig. 7 and in Fukuda & Terlaky (1997), means that the entering variable represents a redundant constraint. In terms of iEPSA's sign properties this translates into: (i) a negative leaving variable $x_k$, and (ii) a pivot row ${H}_{rN}={\left({A}_{B}\right)}_{r}^{-1}{A}_{N}\ge 0$ except for the entering variable $x_l$, for which $(H_{rN})_{.l} < 0$. This means that only one entering variable is eligible. We now move on to prove that the vector $H_{rN}$ cannot contain only one negative element in the general case of cycling. The cycling example that we analyze is minimal: each variable in the cycle enters and leaves the basis only once.

This near-terminal tableau means that the constraint represented by variable $l$ is redundant for the primal problem. Let us assume the general case, where cycling occurs as shown in Fig. 8. For simplicity, we depict basic solutions as nodes of a graph; each oriented arrow represents a pivot admissible for iEPSA, except for the one(s) in red. The cycling assumption implies that the algorithm started at a basic solution such as node 1, moved after a finite number of pivots to node n, and then back to 1, thus producing a cycle. All variables involved in the cycle changed basic/nonbasic status at least once.

Assume the pivot (1 → n) would be given by the pivot operation exchanging $x_k$ and $x_l$, where $k$ is the index of the leaving variable and $l$ the index of the entering one. The step (n → 1) was admissible for the algorithm, so the sign properties apply: at basic solution 1, $x_k > 0$, and of course at n, $x_l < 0$ (via Lemma 6). It is now also clear, via Lemma 6, that (1 → n) cannot be admissible for the algorithm (although 1 and n are obviously neighbors). However, somewhere before visiting node n, variable $k$ changed status and left the basis (in order to be eligible as entering at the last node n).

Now, since the algorithm always selects leaving variables through a maximum ratio test, all leaving variables correspond to hyperplanes of the feasible region. Thus all leaving variables selected by the algorithm are non-redundant, as removing any of them would result in a new LP that is not equivalent to the previous one.

If we assume that only one pivot is admissible by the sign properties of the algorithm at node n (namely, moving to 1), then the tableau of that pivot has a negative leaving variable and only one negative element in the pivot row $H_{rN}$. According to Fukuda & Terlaky (1997), however, this is a near-terminal tableau of type B: it means that the constraint represented by the entering variable is redundant for the primal problem. But the entering variable $k$ at node n is already known to have left the basis at some previous iteration, and all leaving variables are non-redundant in this algorithm; this is a contradiction. The near-terminal tableau of type B can never appear for an entering variable that has previously served as a leaving one.

As shown in Fig. 8, from n the algorithm can move either to 1 or to 2: since n is a neighbor of 1 and 1 is a neighbor of 2, n is potentially a neighbor of 2 (if it were not, the pivot would not be admissible and no cycle could be assumed). Additionally, node n − 1 is excluded because, by Lemma 6, the backward pivot is non-admissible when the forward one was; the direct backward pivot n → 1 is likewise impossible by Lemma 6. So the algorithm could, in theory, cycle through 2 instead. We thus obtain the scenario depicted in Fig. 9: in the case of cycling, the pivots 1 → 2, n → 2 and n → 1 are all admissible. By Lemma 6, the leaving variable is always negative and is always replaced by a positive one in the pivot. Since 1 → 2 is admissible, k1 ≤ 0; since n → 2 is also admissible, k2 ≤ 0 as well. But in the pivot n → 1, variable k must be replaced at node 1 by a positive variable, and the substitute here is k1. Since 1 → 2 is admissible, k1 ≤ 0 must again hold; a contradiction.

This case could only take place if 1 → 2 and n → 2 were admissible pivots while the pivot element for n → 1 were positive, so that the returning variable k1 could again be negative; but a positive pivot element is not applicable for iEPSA. Hence the pivot n → 1 is non-admissible for iEPSA in both directions.

## Results

Computational studies provide preliminary results on the computational behavior of an algorithm, enabling us to gain insight into its performance on different types of LPs. The most popular test instances available for computational experiments are instances from real-world applications. The test bed used in this computational study was a set of benchmark problems (netlib, Kennington, Mészáros) whose .mps files do not contain BOUNDS or RANGES sections. All reported times were measured in seconds with the built-in function cputime.

The computing environment used in the computational study is described in Table 1, using the most up-to-date software versions available. Table 2 presents detailed information about the computational study. The first column gives the name of the problem, the second the number of constraints, the third the number of variables, and the fourth the number of nonzero elements of the matrix A; triplets of results for all three algorithms in terms of cpu time and total number of iterations follow. Finally, the last three columns contain the optimal objective value reported by each algorithm. The test bed includes 93 LPs from the netlib, Kennington and Mészáros collections. Ordóñez & Freund (2003) have shown that nearly 71% of the netlib LPs are ill-conditioned, hence numerical difficulties may occur. We implemented an MPS parser to read the .mps files and convert the data into MATLAB .mat files.

The proposed algorithm (iEPSA) was implemented in the MathWorks MATLAB environment, mainly for its support of high-level sparse matrix operations. Four programming adjustments were used to improve the performance of memory-bound MATLAB code: (i) store and access data in columns, (ii) avoid creating unnecessary variables, (iii) vectorize instead of using for-loops and (iv) pre-allocate arrays before accessing them within loops. To handle numerical errors, tolerance scalars were introduced; their default value was set to 1e − 8 for all vectors and matrices in iEPSA. For the basis updates of iEPSA, EPSA and PDIPSA we used the MATLAB LU factorization scheme. Furthermore, to obtain the first (and only) interior point for iEPSA we used a MATLAB interface, provided as a MEX-function, to the MOSEK IPM (Andersen & Andersen, 2000; Andersen & Ye, 1996). This interface allowed us to stop the solver directly after the first interior point is found. We emphasize that we passed a zero objective function vector to MOSEK, so its IPM is agnostic of directions pointing towards any existing optimal solution; consequently, any claim that the first interior point we construct already brings iEPSA very close to the optimal solution is unsubstantiated. The pipeline by which the total running times were calculated for the competing algorithms is shown in Fig. 10. We did not need any scaling technique to solve all tested benchmarks successfully, as the EPSA pivoting scheme is scaling invariant (Triantafyllidis & Samaras, 2014).

In Table 2 we present the arithmetic mean (A_MEAN) computed for both the total cpu time (in seconds) and the number of iterations (niter). We present the execution time and the number of iterations of each algorithm over the netlib, Kennington and Mészáros LPs included. We compare the performance of the proposed algorithm (iEPSA) against CPLEX (Primal Simplex) using its default settings but forcing the suite to use the Primal Simplex (as iEPSA also solves the primal problem), and against Gurobi (Primal Simplex) using the same method. The proposed algorithm performs fewer iterations on 44/93 benchmarks versus CPLEX and on 43/93 versus Gurobi. In terms of the average number of iterations, iEPSA performs 47.3% fewer iterations than CPLEX and 28.7% fewer than Gurobi.

In terms of cpu time, CPLEX, iEPSA and Gurobi required on average 0.452, 5.59 and 0.057 seconds respectively to solve the tested instances. Although the difference between iEPSA and the commercial solvers is substantial, note that m-code (iEPSA) is significantly slower than pure C implementations (the MEX-functions of CPLEX and Gurobi), so such a gap is expected in this computational study. The average number of iterations, however, is independent of the programming language used and relies only on the algorithmic mechanics of each solver; it is therefore a transparent criterion for gaining insight into the practical performance of the competing algorithms. Figure 11 shows violin plots of the total number of iterations for all three algorithms across all tested benchmarks, stratified by three levels of sparsity.

Finally, differences in the optimal value between the solvers were observed in the problems of the largeXXX family. This is due to the very nature of these problems, in which numerical instability is highly present. iEPSA, however, tends to agree completely with the CPLEX objective values.

## Conclusions

In this paper we proposed a new non-monotonic simplex-type algorithm for solving LPs. iEPSA does not maintain monotonicity on the basic solutions but only on the interior point solutions. The new algorithm is a combination of three different methods. The computational results are very encouraging: iEPSA performs 47.3% fewer iterations than CPLEX and 28.7% fewer iterations than Gurobi on a collection of 93 well-known benchmarks. Future work includes extending the computational studies to a larger number of problems from available benchmark collections, improving cpu-time performance by implementing the algorithms in a low-level programming language and, finally, implementing a tailored method to compute the first interior point rather than using a commercial IPM.