spbenders4.gms : Stochastic Benders - Parallel MPI

Description

This example demonstrates a stochastic Benders implementation for the
simple transport example.

This is the fourth example of a sequence of stochastic Benders
implementations using various methods to solve the master and
subproblem.

This fourth example implements the stochastic Benders algorithm in
parallel using MPI. In this example the model is started from the
command line as follows:

   mpiexec -n 4 gams spbenders4 fileStemApFromEnv=PMI_RANK lo=2

On different platforms this command might look a little different. On HPC
systems, which often run Linux and use a specialized MPI, it might be more
efficient (or even required) to use a Python version different from the one
GAMS ships with. As documented in https://www.gams.com/latest/docs/API_PY_GETTING_STARTED.html
the GAMS Python API packages must also be installed for this Python.
The PMI_RANK variable might also be named differently (for example,
OMPI_COMM_WORLD_RANK for Open MPI). In any case GAMS needs to be
instructed to use an external Python installation by pointing the
environment variable GMSPYTHONLIB to the external Python library:

   mpirun -n 4 GMSPYTHONLIB=/path/to/python/lib/libpython3.12.so gams spbenders4 fileStemApFromEnv=OMPI_COMM_WORLD_RANK lo=2

This command spawns card(s)+1 (=4 for the small example) copies of the
spbenders4 model, and the environment variable PMI_RANK decides which part
a particular instance plays. The GAMS job with PMI_RANK=0 implements the
master and the GAMS jobs with PMI_RANK=1 to PMI_RANK=card(s) implement the
subproblems. The values of the master variable received and the cut
information are communicated via the Python package mpi4py and the GAMS
embedded code facility.
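
The exchange between the master rank and the subproblem ranks follows a
simple broadcast/gather pattern. The standalone mpi4py sketch below
illustrates just that pattern; the file name and the plain-list payloads
are made up for illustration, while the actual model exchanges GAMS symbol
records via gams.get/gams.set and iterates until the Benders gap is small:

   # sketch.py - run with: mpiexec -n 4 python sketch.py
   from mpi4py import MPI

   comm = MPI.COMM_WORLD
   rank = comm.Get_rank()

   if rank == 0:
       # master: broadcast a candidate solution, gather one cut per subproblem rank
       candidate = [0.0, 0.0, 0.0, 0.0, 0.0]  # e.g. levels of 'received'
       comm.bcast([0, candidate], root=0)     # 0 = keep going
       cuts = comm.gather(None, root=0)[1:]   # drop the master's own None entry
       print(cuts)
       comm.bcast([1, []], root=0)            # 1 = stop
   else:
       while True:
           done, candidate = comm.bcast(None, root=0)
           if done:
               break
           # ... solve the scenario subproblem for 'candidate' here ...
           comm.gather([f'cut data of rank {rank}'], root=0)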

Keywords: linear programming, stochastic Benders algorithm, transportation
          problem, GAMS embedded code facility, Python, parallel computing,
          message passing interface


Small Model of Type : LP


Category : GAMS Model library


Main file : spbenders4.gms

$title Stochastic Benders - Parallel MPI (SPBENDERS4,SEQ=421)

$onText
This example demonstrates a stochastic Benders implementation for the
simple transport example.

This is the fourth example of a sequence of stochastic Benders
implementations using various methods to solve the master and
subproblem.

This fourth example implements the stochastic Benders algorithm in
parallel using MPI. In this example the model is started from the
command line as follows:

   mpiexec -n 4 gams spbenders4 fileStemApFromEnv=PMI_RANK lo=2

On different platforms this command might look a little different. On HPC
systems, which often run Linux and use a specialized MPI, it might be more
efficient (or even required) to use a Python version different from the one
GAMS ships with. As documented in https://www.gams.com/latest/docs/API_PY_GETTING_STARTED.html
the GAMS Python API packages must also be installed for this Python.
The PMI_RANK variable might also be named differently (for example,
OMPI_COMM_WORLD_RANK for Open MPI). In any case GAMS needs to be
instructed to use an external Python installation by pointing the
environment variable GMSPYTHONLIB to the external Python library:

   mpirun -n 4 GMSPYTHONLIB=/path/to/python/lib/libpython3.12.so gams spbenders4 fileStemApFromEnv=OMPI_COMM_WORLD_RANK lo=2

This command spawns card(s)+1 (=4 for the small example) copies of the
spbenders4 model, and the environment variable PMI_RANK decides which part
a particular instance plays. The GAMS job with PMI_RANK=0 implements the
master and the GAMS jobs with PMI_RANK=1 to PMI_RANK=card(s) implement the
subproblems. The values of the master variable received and the cut
information are communicated via the Python package mpi4py and the GAMS
embedded code facility.

Keywords: linear programming, stochastic Benders algorithm, transportation
          problem, GAMS embedded code facility, Python, parallel computing,
          message passing interface
$offText

$set MPI_RANK notSet
$if setenv PMI_RANK             $set MPI_RANK %sysEnv.PMI_RANK%
$if setenv OMPI_COMM_WORLD_RANK $set MPI_RANK %sysEnv.OMPI_COMM_WORLD_RANK%
$if %MPI_RANK%==notSet $abort.noError This model needs to be started with mpiexec/mpirun. See preamble of the model source for details.

$set MPI_SIZE notSet
$if setenv PMI_SIZE             $set MPI_SIZE %sysEnv.PMI_SIZE%
$if setenv OMPI_COMM_WORLD_SIZE $set MPI_SIZE %sysEnv.OMPI_COMM_WORLD_SIZE%
$if %MPI_SIZE%==notSet $abort.noError This model needs to be started with mpiexec/mpirun. See preamble of the model source for details.

$log --- Using Python library %sysEnv.GMSPYTHONLIB%

Set
   i 'factories'            / f1*f3 /
   j 'distribution centers' / d1*d5 /;

Parameter
   capacity(i) 'unit capacity at factories'
               / f1 500, f2 450, f3 650 /
   demand(j)   'unit demand at distribution centers'
               / d1 160, d2 120, d3 270, d4 325, d5 700 /
   prodcost    'unit production cost'                    / 14 /
   price       'sales price'                             / 24 /
   wastecost   'cost of removal of overstocked products' /  4 /;

Table transcost(i,j) 'unit transportation cost'
          d1    d2    d3    d4    d5
   f1   2.49  5.21  3.76  4.85  2.07
   f2   1.46  2.54  1.83  1.86  4.76
   f3   3.26  3.08  2.60  3.76  4.45;

$ifThen not set useBig
   Set s 'scenarios' / lo, mid, hi /;

   Table ScenarioData(s,*) 'possible outcomes for demand plus probabilities'
             d1   d2   d3   d4   d5  prob
      lo    150  100  250  300  600  0.25
      mid   160  120  270  325  700  0.50
      hi    170  135  300  350  800  0.25;
$else
$  if not set nrScen $set nrScen 10
   Set s 'scenarios' / s1*s%nrScen% /;
   Parameter ScenarioData(s,*) 'possible outcomes for demand plus probabilities';
   option seed = 1234;
   ScenarioData(s,'prob') = 1/card(s);
   ScenarioData(s,j)      = demand(j)*uniform(0.6,1.4);
$endIf

$eval cardS card(s)+1
$if not %cardS%==%MPI_SIZE% $abort MPI size needs to be %cardS%

embeddedCode Python:
try:
  from mpi4py import MPI
except ImportError:
  raise Exception("Module mpi4py not found")
comm = MPI.COMM_WORLD

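# helper used by the subproblem ranks: return the records of the
# given GAMS symbols as a list of lists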
def symbols2List(*symbols):
    return list(map(lambda x: list(gams.get(x)), symbols))
pauseEmbeddedCode
abort$execerror 'Problems setting up mpi4py or some other Python error. Check the log';

Parameter osub(s), cconst(s), ccoeff(s,j);

$ifThen.MPI 0==%MPI_RANK%
* Benders master problem
$if not set maxiter $set maxiter 25
Set
   iter             'max Benders iterations' / 1*%maxiter% /
   itActive(iter)   'active Benders cuts';

Parameter
   cutconst(iter)   'constants in optimality cuts'    / #iter    0 /
   cutcoeff(iter,j) 'coefficients in optimality cuts' / #iter.#j 0 /;

Variable
   ship(i,j)        'shipments'
   product(i)       'production'
   received(j)      'quantity sent to market'
   zmaster          'objective variable of master problem'
   theta            'future profit';

Positive Variable ship;

Equation
   masterobj        'master objective function'
   production(i)    'calculate production in each factory'
   receive(j)       'calculate quantity to be sent to markets'
   optcut(iter)     'Benders optimality cuts';

masterobj.. zmaster =e= theta - sum((i,j), transcost(i,j)*ship(i,j))
                              - sum(i, prodcost*product(i));

receive(j)..    received(j) =e= sum(i, ship(i,j));

production(i).. product(i)  =e= sum(j, ship(i,j));

optcut(itActive)..
   theta =l= cutconst(itActive) + sum(j, cutcoeff(itActive,j)*received(j));

product.up(i) = capacity(i);

Model masterproblem / all /;

Scalar
   rgap       'relative gap'       /    0 /
   lowerBound 'global lower bound' / -inf /
   upperBound 'global upper bound' / +inf /
   objMaster                       /    0 /
   objSub                          /    0 /;

option limRow = 0, limCol = 0, solPrint = silent, solver = cplex, solveLink=%solveLink.loadLibrary%;
received.l(j) = 0;
objMaster     = 0;

$if not set rtol $set rtol 0.001
loop(iter,
   continueEmbeddedCode:
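   # send [termination flag, records of 'received'] to all ranks, then
   # collect the (osub, cconst, ccoeff) records from each subproblem rank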
   comm.bcast([[0], list(gams.get('received'))], root=0)
   cut = comm.gather(None, root=0)[1:]
   gams.set('osub',   [c[0][0] for c in cut])
   gams.set('cconst', [c[1][0] for c in cut])
   gams.set('ccoeff', [rec for s in cut for rec in s[2]])
   pauseEmbeddedCode osub, cconst, ccoeff
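*  aggregate the probability-weighted subproblem results into a single cut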
   objSub           = sum(s, osub(s));
   cutconst(iter)   = sum(s, cconst(s));
   cutcoeff(iter,j) = sum(s, ccoeff(s,j));

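*  bounds for the maximization: lowerBound from evaluating the current
*  candidate, upperBound from the relaxed master problem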
   if(lowerBound < objMaster + objSub, lowerBound = objMaster + objSub);
   rgap = (upperBound - lowerBound)/(1 + abs(upperBound));
   break$(rgap < %rtol%);
   itActive(iter) = yes;
   solve masterproblem max zmaster using lp;
   upperBound = zmaster.l;
   objMaster  = zmaster.l - theta.l;
);

* Terminate sub jobs
continueEmbeddedCode:
comm.bcast([[1],[]], root=0)
endEmbeddedCode
abort$(rgap >= %rtol%) 'need more iterations', lowerbound, upperbound;
display 'optimal solution', lowerbound, upperbound;

$else.MPI
* Benders' subproblem
Variable
   received(j) 'quantity sent to market (from master)'
   sales(j)    'sales (actually sold)'
   waste(j)    'overstocked products'
   zsub        'objective variable of sub problem';

Positive Variable sales, waste;

Equation
   subobj      'subproblem objective function'
   selling(j)  'part of received is sold'
   market(j)   'upper bound on sales';

subobj..     zsub =e= sum(j, price*sales(j)) - sum(j, wastecost*waste(j));

selling(j).. sales(j) + waste(j) =e= received.l(j);

market(j)..  sales(j) =l= demand(j);

Model subproblem / subobj, selling, market /;

* Select the scenario handled by this MPI rank
Singleton Set ss(s);
ss(s)     = ord(s)=%MPI_RANK%;
demand(j) = scenarioData(ss,j);

Scalar done;

option limRow = 0, limCol = 0, solPrint = silent, solver = cplex, solveLink = %solveLink.loadLibrary%;
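* Infinite loop: solve this scenario's subproblem for every candidate
* broadcast by the master until it signals termination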
while(1,
   continueEmbeddedCode:
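   # rx = [termination flag, records of 'received'] broadcast by the master (rank 0)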
   rx = comm.bcast(None, root=0)
   gams.set('done', rx[0])
   gams.set('received', rx[1])
   pauseEmbeddedCode done, received
   abort.noerror$done 'terminating subprocess';
   solve subproblem max zsub using lp;
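*  probability-weighted contribution of this scenario to the expected
*  objective and to the Benders cut; adding eps keeps zero-valued records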
   osub(ss)     = eps + ScenarioData(ss,'prob')*zsub.l;
   cconst(ss)   = eps + ScenarioData(ss,'prob')*sum(j, market.m(j)*demand(j));
   ccoeff(ss,j) = eps + ScenarioData(ss,'prob')*selling.m(j);
   continueEmbeddedCode:
   comm.gather(symbols2List('osub', 'cconst', 'ccoeff'), root=0 )
   pauseEmbeddedCode
);
$endIf.MPI